Restore from backup on a new host with a different Docker setup

Please fill in the questionnaire below.

Technical Help Questionnaire

Answer: Yes

How have you self-hosted Baserow?

Docker - Portainer on a Synology NAS, following the exact instructions in How to Install Baserow on Your Synology NAS – Marius Hosting

I also had an instance running on a Raspberry Pi that I installed using the Quick Start installation guide here. This was my very first project using Docker, so it was a combined ‘experience’.

What are the specs of the service or server you are using to host Baserow.

Which version of Baserow are you using?

On the Synology NAS: baserow:1.19.1
On the Raspberry Pi: baserow:1.21.2

How have you configured your self-hosted installation?

See above

I am very much a newbie when it comes to Docker and self-hosting. I discovered Baserow about a month ago and set it up on a Raspberry Pi in order to move away from Airtable and secure my own data.

I recently purchased a NAS and went about installing Docker and Portainer on it following guides on the Marius Hosting website (see above). I want to migrate the Baserow installation (container? database? whatever it's called) from the Raspberry Pi over to the Synology NAS, and I have been so frustrated with the process because nothing I do seems to work. I have tried doing a full backup and restore, and also tried backing up/restoring just the DB as described here: Install with Docker // Baserow

Can someone please point me in the right direction or give me step-by-step advice?

The Baserow I want to restore to was configured with the following:

version: "3.9"
services:
  redis:
    image: redis
    command:
      - /bin/sh
      - -c
      - redis-server --requirepass redispass
    container_name: Baserow-REDIS
    hostname: baserow-redis
    mem_limit: 256m
    mem_reservation: 50m
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    volumes:
      - /volume1/docker/baserow/redis:/data:rw
    environment:
      TZ: Europe/Bucharest
    restart: on-failure:5

  db:
    image: postgres
    container_name: Baserow-DB
    hostname: baserow-db
    mem_limit: 512m
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    user: 1026:100
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "baserow", "-U", "baserowuser"]
      timeout: 45s
      interval: 10s
      retries: 10
    volumes:
      - /volume1/docker/baserow/db:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_DB: baserow
      POSTGRES_USER: baserowuser
      POSTGRES_PASSWORD: baserowpass
    restart: on-failure:5

  baserow:
    image: baserow/baserow:1.19.1
    container_name: Baserow
    hostname: baserow
    mem_limit: 3g
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    read_only: true
    ports:
      - 3888:80
    volumes:
      - /volume1/docker/baserow/data:/baserow/data:rw
    environment:
      BASEROW_PUBLIC_URL: https://baserow.somewhere.selfhosted
      DATABASE_USER: baserowuser
      DATABASE_PASSWORD: baserowpass
      DATABASE_NAME: baserow
      DATABASE_HOST: baserow-db
      DATABASE_PORT: 5432
      REDIS_HOST: baserow-redis
      REDIS_PORT: 6379
      REDIS_PROTOCOL: redis
      REDIS_USER: default
      REDIS_PASSWORD: redispass
      EMAIL_SMTP: my email
      EMAIL_SMTP_HOST: my email
      EMAIL_SMTP_PORT: 587
      EMAIL_SMTP_USER: my email
      EMAIL_SMTP_PASSWORD: my email
      EMAIL_SMTP_USE_TLS: true
      FROM_EMAIL: my email
    restart: on-failure:5
    depends_on:
      redis:
        condition: service_healthy
      db:
        condition: service_healthy

and the original Baserow was built using

version: "3.4"
services:
  baserow:
    container_name: baserow
    image: baserow/baserow:1.22.1
    environment:
      BASEROW_PUBLIC_URL: 'http://localhost'
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - baserow_data:/baserow/data
volumes:
  baserow_data:

and I made both backups (a full Baserow backup AND a DB-only backup) using:

docker run --rm -v baserow_data:/baserow/data -v $PWD:/backup ubuntu tar cvf /backup/backup.tar /baserow/data
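For reference, the matching restore for this tar-based volume backup would follow the generic Docker volume backup/restore pattern (a sketch, not Baserow-specific; the `--strip-components=2` is needed because the archive stores paths as `baserow/data/...`, and the volume and file names are assumed to match the backup command above):

docker run --rm -v baserow_data:/baserow/data -v $PWD:/backup ubuntu \
    bash -c "cd /baserow/data && tar xvf /backup/backup.tar --strip-components=2"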


docker run -it --rm -v baserow_data:/baserow/data baserow/baserow:1.22.1 \
   backend-cmd-with-db backup -f /baserow/data/backups/backup.tar.gz

The paths to the Baserow containers on my Synology NAS are:


and the backup files are both at:


I really am a bit lost now. Can someone please help me with building the correct command to restore from the backup to the Synology NAS?

Hi @Paul, it seems like you’re using the Baserow all-in-one image. We have documentation on how to make a full backup and restore it here: Install with Docker // Baserow. I’m not sure how to run certain commands on a Synology NAS because I don’t have any experience with it. I hope this answer can still be useful.

Hello @bram - thanks for your reply. I believe you’re correct and I was/am using the all-in-one image. I’ve followed the instructions to do a full backup, however my issue is with restoring it.
The Baserow installation I have on my NAS has a slightly different folder structure, so I don’t know how to edit the restore command so that it puts the files in the correct folders.

Hey @Paul, I would need to know more about the folder structure to see if I can help you to figure out what to change in the restore commands.

The folder structure looks like:
/volume1/docker/baserow/db (where the postgres db is)

I created a further folder at
/volume1/docker/baserow/backups and have copied the full backup file there

It’s from this point that I’m trying to restore the backup into the various locations so that I can access the bases I’ve built.

I don’t think there is anything further in terms of ‘folder structure’ that is different.

Anyone else able to help? I’m tearing my hair out in frustration here! :confounded:

Hi @Paul, I’m going through all your comments again, but I have a couple of additional questions/comments.

You mentioned you’re going to migrate from your Raspberry Pi to your Synology NAS, but you say that your Raspberry Pi is running 1.21.2 and the NAS 1.19.1. Does that mean you want to downgrade? If so, that’s not possible. You can only use these backup/restore commands to migrate to an instance with the same or a higher version.

It seems like you’re running the all-in-one image with the embedded PostgreSQL and Redis on your Raspberry Pi, but you’re running an external PostgreSQL server in another container on your NAS. This makes it difficult to restore using this method (Install with Docker // Baserow), because the restore function wants to unpack the raw PostgreSQL data into Baserow’s volume.

I also noticed that on your NAS you’re using the postgres image, which will probably not install PG 11, which is what your embedded image uses. It will also not work for that reason.

Based on that information, I was curious which data specifically you want to migrate between instances. There is another method that exports all the databases and files of a specific Baserow workspace to a zip file, which can then be imported into another instance. Is it just one workspace that you want to migrate?

The downside is that you wouldn’t keep the same database IDs, field IDs, table IDs, etc. It basically recreates your database in the new instance. As with the other method, the Baserow version of the environment you’re migrating into must be at least the same as the one you’re migrating from.

@bram - you legend! Thanks so much for getting back to me - I am starting to see (based on your questions) where I’m getting stuck!
I’ll work backwards from your questions:

which data you specifically want to migrate between instances?

There is only one workspace and one user on the Pi. The workspace contains 7 databases, of which 2 are ones that I put a lot of time and effort into building after importing them from Airtable. I have created an identical user account on the Synology NAS instance of Baserow.

I don’t think it’s important that the database IDs, field/table IDs, etc. are retained (unless this breaks a bunch of formula fields or linked/lookup fields?)

you’re using the postgres image,

Yes - I believe so. The installation on the Synology NAS is completely new, so I will try to upgrade it to the same version as the Raspberry Pi.

It seems like you’re running the all-in-one image with the embedded PostgreSQL and Redis on your Raspberry Pi, but you’re running an external PostgreSQL server in another container on your NAS.

Again Yes.

So it seems that step one will be to upgrade (or rebuild? is that the terminology?) the Docker containers on my Synology NAS so that it is running the same version as the Raspberry Pi - I would like to keep the NAS setup with Postgres running in a separate container. From what you’re telling me, I’d need to use a different method to back up the Baserow workspace. Is that the one under the heading “Backup only Baserow’s Postgres database” on this page?

Hello @bram and everyone else - I’ve just spent an entire day (again) trying to understand what I’m doing here and trying countless variations of what I think should be a simple process.
This is where I’m currently stuck:

docker run -it --rm \
  -v /volume1/docker/baserow/data/backups:/baserow/old_data \
  -v /volume1/docker/baserow/db:/baserow/data \
  baserow/baserow:1.22.2 backend-cmd-with-db restore -f /baserow/old_data/backup.tar.gz
[STARTUP][2024-01-20 19:50:05] Creating BASEROW_JWT_SIGNING_KEY secret in /baserow/data/.jwt_signing_key
[STARTUP][2024-01-20 19:50:05] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
[STARTUP][2024-01-20 19:50:05] Creating REDIS_PASSWORD secret in /baserow/data/.redispass
[STARTUP][2024-01-20 19:50:05] Importing SECRET_KEY secret from /baserow/data/.secret
[STARTUP][2024-01-20 19:50:05] Creating SECRET_KEY secret in /baserow/data/.secret
[STARTUP][2024-01-20 19:50:05] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
[STARTUP][2024-01-20 19:50:05] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
[STARTUP][2024-01-20 19:50:05] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
[STARTUP][2024-01-20 19:50:05] Creating DATABASE_PASSWORD secret in /baserow/data/.pgpass
[STARTUP][2024-01-20 19:50:05] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
[STARTUP][2024-01-20 19:50:05] Didn’t find an existing postgres + redis running, starting them up now.

██████╗ █████╗ ███████╗███████╗██████╗ ██████╗ ██╗ ██╗
██╔══██╗██╔══██╗██╔════╝██╔════╝██╔══██╗██╔═══██╗██║ ██║
██████╔╝███████║███████╗█████╗ ██████╔╝██║ ██║██║ █╗ ██║
██╔══██╗██╔══██║╚════██║██╔══╝ ██╔══██╗██║ ██║██║███╗██║
██████╔╝██║ ██║███████║███████╗██║ ██║╚██████╔╝╚███╔███╔╝
╚═════╝ ╚═╝ ╚═╝╚══════╝╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚══╝╚══╝

Version 1.22.2

Welcome to Baserow. See for detailed instructions on
how to use this Docker image.
[STARTUP][2024-01-20 19:50:06] Running setup of embedded baserow database.
Error: Failed to connect to the postgresql database at localhost
Please see the error below for more details:
connection to server at “localhost” (, port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at “localhost” (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?

I originally thought that it might be a firewall issue or something on my NAS, but that isn’t the case. I’ve tried to follow the setup instructions here:

With a Postgresql server running on the same host as the Baserow docker container
This is assuming you are using the postgresql server bundled by ubuntu. If not then you will have to find the correct locations for the config files for your OS.

  1. Find out what version of postgresql is installed by running sudo ls /etc/postgresql/
  2. Open /etc/postgresql/YOUR_PSQL_VERSION/main/postgresql.conf for editing as root
  3. Find the commented out # listen_addresses line.
  4. Change it to be: listen_addresses = '*' # what IP address(es) to listen on;
  5. Open /etc/postgresql/YOUR_PSQL_VERSION/main/pg_hba.conf for editing as root
  6. Add the following line to the end which will allow docker containers to connect. host all all md5
  7. Restart postgres to load in the config changes. sudo systemctl restart postgresql
  8. Check the logs do not have errors by running sudo less /var/log/postgresql/postgresql-YOUR_PSQL_VERSION-main.log
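The numbered steps above can be sketched as shell commands (an untested sketch for a stock Ubuntu Postgres install; the 172.17.0.0/16 subnet is an assumption matching Docker's default bridge network, and the pg_hba.conf line should use whatever subnet your containers actually sit on):

# 1. Find the installed Postgres major version (assumes the Ubuntu layout)
PGVER=$(ls /etc/postgresql/)
# 2-4. Uncomment/replace listen_addresses so Postgres listens on all interfaces
sudo sed -i "s/^#\?listen_addresses.*/listen_addresses = '*'/" \
  /etc/postgresql/"$PGVER"/main/postgresql.conf
# 5-6. Allow connections from the Docker bridge subnet (assumed 172.17.0.0/16)
echo "host all all 172.17.0.0/16 md5" | \
  sudo tee -a /etc/postgresql/"$PGVER"/main/pg_hba.conf
# 7. Restart Postgres to load the config changes
sudo systemctl restart postgresql
# 8. Check the log for errors
sudo tail /var/log/postgresql/postgresql-"$PGVER"-main.log

Note that these steps target a Postgres installed directly on the host; a Postgres running inside its own container (as in the Synology setup above) is reached via its container hostname instead, so editing these files inside a read-only container isn't the intended path.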

But I’m running into problem after problem - the Docker containers are read-only and I can’t edit the file. I copied the file to my host and tried editing it there, but then I can’t copy it back into the container.

I’m completely stuck and about to give up completely.

Hi @Paul, because you’re running an external PostgreSQL server, it’s not going to be possible to use the backend-cmd-with-db restore commands. You mentioned that it’s not important to keep the IDs, and that you prefer to keep the external PostgreSQL server. In that case, there is an alternative way of moving your data. Both Baserow versions should ideally be equal when migrating.

Please figure out the following variables:

  • WORKSPACE_ID_TO_EXPORT: This is the workspace ID in your Raspberry Pi environment that you’d like to migrate over to your NAS. You can find the workspace ID by clicking on the three dots next to it, and then find the number between brackets.
  • CONTAINER_ID: This is the ID of the container running Baserow on your Raspberry Pi.

Then execute the following steps on your Raspberry Pi, after replacing the variables with the correct values:

  • docker exec baserow ./ backend-cmd manage export_workspace_applications WORKSPACE_ID_TO_EXPORT
  • docker cp CONTAINER_ID:/baserow/backend/workspace_WORKSPACE_ID_TO_EXPORT.json workspace_WORKSPACE_ID_TO_EXPORT.json
  • docker cp CONTAINER_ID:/baserow/backend/

Confirm that you see a workspace_X.json and file in your working directory. Copy these files to your Synology NAS, and figure out the following variables on your NAS.

  • WORKSPACE_ID_TO_IMPORT: This is the workspace ID on your NAS that you’d like to migrate your workspace into. You can find the workspace ID by clicking on the three dots next to it, and then find the number between brackets.
  • CONTAINER_ID: This is the ID of the container running Baserow on your NAS.

Then execute the following steps on your NAS, after replacing the variables with the correct values:

  • docker cp workspace_WORKSPACE_ID_TO_EXPORT.json CONTAINER_ID:/baserow/backend/workspace_WORKSPACE_ID_TO_EXPORT.json
  • docker cp CONTAINER_ID:/baserow/backend/
  • docker exec baserow ./ backend-cmd manage import_workspace_applications WORKSPACE_ID_TO_IMPORT workspace_WORKSPACE_ID_TO_EXPORT

I hope this helps!

Fantastic - thanks @bram, I will try this out. Thanks for your time and help - really appreciate it!