Restore from backup on a new host with a different Docker setup

Please fill in the questionnaire below.

Technical Help Questionnaire

Answer: Yes

How have you self-hosted Baserow?

Docker - Portainer on a Synology NAS following exact instructions on How to Install Baserow on Your Synology NAS – Marius Hosting

I also had an instance running on a Raspberry Pi that I installed using the Quick Start installation guide here. This was my very first project using Docker, so it was a combined ‘experience’.

What are the specs of the service or server you are using to host Baserow?

Which version of Baserow are you using?

On the Synology NAS: baserow:1.19.1
On the Raspberry Pi: baserow:1.21.2

How have you configured your self-hosted installation?

See above

I am very much a newbie when it comes to Docker and self-hosting. I discovered Baserow about a month ago and set it up on a Raspberry Pi in order to move away from Airtable and secure my own data.

I recently purchased a NAS and installed Docker and Portainer on it following guides on the Marius Hosting website (see above). I want to migrate the Baserow installation (container? database? whatever it's called) from the Raspberry Pi over to the Synology NAS and have been so frustrated with the process, as nothing I do seems to work. I have tried doing a full backup and restore, and also tried doing the backup/restore of just the db as described here: Install with Docker // Baserow

Can someone please point me in the right direction or give me step-by-step advice?

The Baserow I want to restore to was configured with the following:

version: "3.9"
services:
  redis:
    image: redis
    command:
      - /bin/sh
      - -c
      - redis-server --requirepass redispass
    container_name: Baserow-REDIS
    hostname: baserow-redis
    mem_limit: 256m
    mem_reservation: 50m
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    volumes:
      - /volume1/docker/baserow/redis:/data:rw
    environment:
      TZ: Europe/Bucharest
    restart: on-failure:5

  db:
    image: postgres
    container_name: Baserow-DB
    hostname: baserow-db
    mem_limit: 512m
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    user: 1026:100
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "baserow", "-U", "baserowuser"]
      timeout: 45s
      interval: 10s
      retries: 10
    volumes:
      - /volume1/docker/baserow/db:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_DB: baserow
      POSTGRES_USER: baserowuser
      POSTGRES_PASSWORD: baserowpass
    restart: on-failure:5

  baserow:
    image: baserow/baserow:1.19.1
    container_name: Baserow
    hostname: baserow
    mem_limit: 3g
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    read_only: true
    ports:
      - 3888:80
    volumes:
      - /volume1/docker/baserow/data:/baserow/data:rw
    environment:
      BASEROW_PUBLIC_URL: https://baserow.somewhere.selfhosted
      BASEROW_MAX_IMPORT_FILE_SIZE_MB: 1024 
      DATABASE_USER: baserowuser
      DATABASE_PASSWORD: baserowpass
      DATABASE_NAME: baserow
      DATABASE_HOST: baserow-db
      DATABASE_PORT: 5432
      REDIS_HOST: baserow-redis
      REDIS_PORT: 6379
      REDIS_PROTOCOL: redis
      REDIS_USER: default
      REDIS_PASSWORD: redispass
      EMAIL_SMTP: my email
      EMAIL_SMTP_HOST: my email
      EMAIL_SMTP_PORT: 587
      EMAIL_SMTP_USER: my email
      EMAIL_SMTP_PASSWORD: my email
      EMAIL_SMTP_USE_TLS: true
      FROM_EMAIL: my email
    restart: on-failure:5
    depends_on:
      redis:
        condition: service_healthy
      db:
        condition: service_healthy

and the original baserow was built using

version: "3.4"
services:
  baserow:
    container_name: baserow
    image: baserow/baserow:1.22.1
    environment:
      BASEROW_PUBLIC_URL: 'http://localhost'
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - baserow_data:/baserow/data
volumes:
  baserow_data:

and both backups (the full Baserow backup AND just the DB backup) were made with:

docker run --rm -v baserow_data:/baserow/data -v $PWD:/backup ubuntu tar cvf /backup/backup.tar /baserow/data

and

docker run -it --rm -v baserow_data:/baserow/data baserow/baserow:1.22.1 \
   backend-cmd-with-db backup -f /baserow/data/backups/backup.tar.gz

The paths to the Baserow containers on my Synology NAS are:

/volume1/docker/baserow
/volume1/docker/db
/volume1/docker/redis

and the backup files are both at:

/volume1/Documents

I really am a bit lost now. Can someone please help me with building the correct command to restore from the backup to the Synology NAS?

Hi @Paul, it seems like you’re using the Baserow all-in-one image. We have documentation on how to make a full backup and restore it here: Install with Docker // Baserow. I’m not sure how to run certain commands on a Synology NAS because I don’t have any experience with it. I hope this answer can still be useful.
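Roughly, the documented pair of commands looks like this (with Baserow stopped; the volume names, paths and image version below are only examples, and the version you restore with must be the same as or newer than the one that made the backup):

docker run -it --rm -v baserow_data:/baserow/data baserow/baserow:1.21.2 \
   backend-cmd-with-db backup -f /baserow/data/backups/backup.tar.gz

docker run -it --rm \
   -v new_baserow_data:/baserow/data \
   -v /path/to/your/backups:/baserow/host \
   baserow/baserow:1.21.2 backend-cmd-with-db restore -f /baserow/host/backup.tar.gz

Here new_baserow_data would be the data volume of the instance you are restoring into.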

Hello @bram - thanks for your reply. I believe you’re correct and I was/am using the all-in-one image. I’ve followed the instructions to do a full backup, however, my issue is with restoring it.
The Baserow installation I have on my NAS has a slightly different folder structure, so I don’t know how to edit the restore command so that it puts the files in the correct folders.

Hey @Paul, I would need to know more about the folder structure to see if I can help you to figure out what to change in the restore commands.

The folder structure looks like:
/volume1/docker/baserow/data
/volume1/docker/baserow/db (where the postgres db is)
/volume1/docker/baserow/redis

I created a further folder at
/volume1/docker/baserow/backups and have copied the full backup file there

It’s from this point that I’m trying to restore the backup into the various locations so that I can access the bases I’ve built.

I don’t think there is anything further in terms of ‘folder structure’ that is different

Anyone else able to help? I’m tearing my hair out in frustration here! :confounded:

Hi @Paul, I’m going through all your comments again, but I have a couple of additional questions/comments.

You mentioned you’re going to migrate from your Raspberry Pi to your Synology NAS, but you say that your Raspberry Pi is running 1.21.2, and the NAS 1.19.1. Does that mean you want to downgrade? If so, then that’s not possible. You can only use these backup/restore commands to migrate to an instance with the same or higher version.

It seems like you’re running the all-in-one image with the embedded PostgreSQL and Redis on your Raspberry Pi, but you’re running an external PostgreSQL server in another container on your NAS. This makes it difficult to restore using the method at Install with Docker // Baserow, because the restore function wants to unpack the raw PostgreSQL data into Baserow’s volume.

I also noticed that on your NAS you’re using the postgres image, which will probably not install PG 11, the version used by your embedded image. The restore will not work for that reason either.

Based on that information, I was curious which data you specifically want to migrate between instances. There is another method that exports all the databases and files of a specific Baserow workspace to a zip file, which can then be imported into another instance. Is it just one workspace that you want to migrate?

The downside is that you wouldn’t keep the same database IDs, field IDs, table IDs, etc. It basically recreates your database in the new instance. As with the other method, the Baserow version of the environment you’re migrating into must be at least the same as the one you’re migrating from.

@bram - you legend! Thanks so much for getting back to me - I am starting to see (based on your questions) where I’m getting stuck!
I’ll work backwards from your questions:

which data you specifically want to migrate between instances?

There is only one workspace and one user on the Pi. The workspace contains 7 databases, of which 2 are ones that I put a lot of time and effort into building after importing them from Airtable. I have created an identical user account on the Synology NAS instance of Baserow.

I don’t think it’s important that the database IDs, field/table IDs, etc. are retained (unless this breaks a bunch of formula fields or linked/lookup fields?).

you’re using the postgres image,

Yes - I believe so. The installation on the Synology NAS is completely new, so I will try to upgrade it to the same version as the Raspberry Pi.

It seems like you’re running the all-in-one image with the embedded PostgreSQL and Redis on your Raspberry Pi, but you’re running an external PostgreSQL server in another container on your NAS.

Again Yes.

So it seems that step one will be to upgrade (or rebuild? is that the terminology?) the Docker containers on my Synology NAS so that it is running the same version as the Raspberry Pi - I would like to keep the NAS setup with Postgres running in a separate container. From what you’re telling me, I’d need to use a different method to back up the Baserow workspace. Is that the one under the heading ‘Backup only Baserow’s Postgres database’ on this page? https://baserow.io/docs/installation%2Finstall-with-docker#backup-all-of-baserow

Hello @bram and everyone else - I’ve just spent an entire day (again) trying to understand what I’m doing here and trying countless variations on what I think should be a simple process.
This is where I’m currently stuck:

docker run -it --rm \
  -v /volume1/docker/baserow/data/backups:/baserow/old_data \
  -v /volume1/docker/baserow/db:/baserow/data \
  baserow/baserow:1.22.2 backend-cmd-with-db restore -f /baserow/old_data/backup.tar.gz
Password:
[STARTUP][2024-01-20 19:50:05] Creating BASEROW_JWT_SIGNING_KEY secret in /baserow/data/.jwt_signing_key
[STARTUP][2024-01-20 19:50:05] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
[STARTUP][2024-01-20 19:50:05] Creating REDIS_PASSWORD secret in /baserow/data/.redispass
[STARTUP][2024-01-20 19:50:05] Importing SECRET_KEY secret from /baserow/data/.secret
[STARTUP][2024-01-20 19:50:05] Creating SECRET_KEY secret in /baserow/data/.secret
[STARTUP][2024-01-20 19:50:05] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
[STARTUP][2024-01-20 19:50:05] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
[STARTUP][2024-01-20 19:50:05] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
[STARTUP][2024-01-20 19:50:05] Creating DATABASE_PASSWORD secret in /baserow/data/.pgpass
[STARTUP][2024-01-20 19:50:05] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
[STARTUP][2024-01-20 19:50:05] Didn’t find an existing postgres + redis running, starting them up now.
=========================================================================================

[Baserow ASCII-art banner]

Version 1.22.2

=========================================================================================
Welcome to Baserow. See https://baserow.io/installation/install-with-docker/ for detailed instructions on
how to use this Docker image.
[STARTUP][2024-01-20 19:50:06] Running setup of embedded baserow database.
OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.22.2,deployment.environment=unknown
Error: Failed to connect to the postgresql database at localhost
Please see the error below for more details:
connection to server at “localhost” (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at “localhost” (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?

I originally thought that it might be a firewall issue or something on my NAS, but that isn’t the case. I’ve tried to follow the setup instructions here:

With a Postgresql server running on the same host as the Baserow docker container
This is assuming you are using the postgresql server bundled by ubuntu. If not then you will have to find the correct locations for the config files for your OS.

  1. Find out what version of postgresql is installed by running sudo ls /etc/postgresql/
  2. Open /etc/postgresql/YOUR_PSQL_VERSION/main/postgresql.conf for editing as root
  3. Find the commented out # listen_addresses line.
  4. Change it to be: listen_addresses = '*' # what IP address(es) to listen on;
  5. Open /etc/postgresql/YOUR_PSQL_VERSION/main/pg_hba.conf for editing as root
  6. Add the following line to the end which will allow docker containers to connect. host all all 172.17.0.0/16 md5
  7. Restart postgres to load in the config changes. sudo systemctl restart postgresql
  8. Check the logs do not have errors by running sudo less /var/log/postgresql/postgresql-YOUR_PSQL_VERSION-main.log

But I’m running into problem after problem - the Docker containers are read-only and I can’t edit the file. I copied the file to my host and tried editing it there, but then I can’t copy it back to the Docker container.

I’m completely stuck and about to give up completely.

Hi @Paul, because you’re running an external PostgreSQL server, it’s not going to be possible to use the backend-cmd-with-db restore commands. You mentioned that it’s not important to keep the IDs, and that you prefer to keep the external PostgreSQL server. In that case, there is an alternative way of moving your data. Ideally, both Baserow versions are equal when migrating.

Please figure out the following variables:

  • WORKSPACE_ID_TO_EXPORT: This is the workspace ID in your Raspberry Pi environment that you’d like to migrate over to your NAS. You can find the workspace ID by clicking on the three dots next to it and looking for the number between brackets.
  • CONTAINER_ID: This is the ID of the container running Baserow on your Raspberry Pi.

Then execute the following steps on your Raspberry Pi, after replacing the variables with the correct values:

  • docker exec baserow ./baserow.sh backend-cmd manage export_workspace_applications WORKSPACE_ID_TO_EXPORT
  • docker cp CONTAINER_ID:/baserow/backend/workspace_WORKSPACE_ID_TO_EXPORT.json workspace_WORKSPACE_ID_TO_EXPORT.json
  • docker cp CONTAINER_ID:/baserow/backend/workspace_WORKSPACE_ID_TO_EXPORT.zip workspace_WORKSPACE_ID_TO_EXPORT.zip

Confirm that you see a workspace_X.json and workspace_X.zip file in your working directory. Copy these files to your Synology NAS, and figure out the following variables on your NAS.

  • WORKSPACE_ID_TO_IMPORT: This is the workspace ID on your NAS that you’d like to migrate your workspace into. You can find the workspace ID by clicking on the three dots next to it and looking for the number between brackets.
  • CONTAINER_ID: This is the ID of the container running Baserow on your NAS.

Then execute the following steps on your NAS, after replacing the variables with the correct values (a filled-in example follows the list):

  • docker cp workspace_WORKSPACE_ID_TO_EXPORT.json CONTAINER_ID:/baserow/backend/workspace_WORKSPACE_ID_TO_EXPORT.json
  • docker cp workspace_WORKSPACE_ID_TO_EXPORT.zip CONTAINER_ID:/baserow/backend/workspace_WORKSPACE_ID_TO_EXPORT.zip
  • docker exec baserow ./baserow.sh backend-cmd manage import_workspace_applications WORKSPACE_ID_TO_IMPORT workspace_WORKSPACE_ID_TO_EXPORT
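For illustration only, say the workspace on the Raspberry Pi has ID 42, the target workspace on the NAS has ID 7, and the container is simply called baserow on both machines (these IDs and names are made up; yours will differ). On the Pi:

docker exec baserow ./baserow.sh backend-cmd manage export_workspace_applications 42
docker cp baserow:/baserow/backend/workspace_42.json workspace_42.json
docker cp baserow:/baserow/backend/workspace_42.zip workspace_42.zip

And on the NAS, after copying both files over:

docker cp workspace_42.json baserow:/baserow/backend/workspace_42.json
docker cp workspace_42.zip baserow:/baserow/backend/workspace_42.zip
docker exec baserow ./baserow.sh backend-cmd manage import_workspace_applications 7 workspace_42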

I hope this helps!

Fantastic - thanks @bram, I will try this out. Thanks for your time and help - really appreciate it!

I have a related question: I’ve been running my Baserow plugin using what I believe is the “all-in-one” image (which definitely runs Postgres inside).
I’m planning on deploying my own Postgres 16 cluster for another project, and I had the ambition to move my Baserow plugin container to use that standalone Postgres 16 server. One of the motivations is to have better data resiliency (replication to a secondary instance, and point-in-time recovery capability).

Is it expected to be fully compatible with Baserow (according to the message above, the all-in-one instance is using Postgres 11)? I know I’ll probably have to go into the container and do something like pg_dump, then import into the Postgres cluster.

Is there other data that I should back up? Sounds like I’ll have to separately back up the user-uploaded files, but is there anything else I’m missing?

OK, so I tried it and ran into an issue already. Here’s what I did (rough command sketch after the list):

  • get into the baserow container, then su - postgres, then pg_dump baserow > data.sql
  • then on my postgres server, I created the database, and did psql baserow < data.sql
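Spelled out as commands, that was roughly the following (redirecting pg_dump straight out of the container is just one way to get the dump out; host and database names are from my setup):

# on the host running the all-in-one container, dump the embedded database
docker exec baserow su - postgres -c "pg_dump baserow" > data.sql

# on the external Postgres server, after copying data.sql over
createdb baserow
psql baserow < data.sql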

Then I reconfigured my Baserow instance to use the external Postgres database, and ran into the following issue. I guess I’ll have to do some reading on how permissions for django_migrations are handled.

[BACKEND][2024-03-17 14:29:14] 2024-03-17 14:29:14.948 | INFO     | baserow.core.management.commands.locked_migrate:acquire_lock:54 - Attempting to lock the postgres advisory lock with id: 123456 You can disable using locked_migrate by default and switch back to the non-locking version by setting BASEROW_DISABLE_LOCKED_MIGRATIONS=true
 [BACKEND][2024-03-17 14:29:15] 2024-03-17 14:29:14.948 | INFO     | baserow.core.management.commands.locked_migrate:acquire_lock:65 - Acquired the lock, proceeding with migration.
 [BACKEND][2024-03-17 14:29:15] INFO 2024-03-17 14:29:15,045 baserow_vocabai_plugin.fields.vocabai_fieldtypes.get_serializer_field:588- get_serializer_field
 [BACKEND][2024-03-17 14:29:15] INFO 2024-03-17 14:29:15,046 baserow_vocabai_plugin.fields.vocabai_fieldtypes.get_serializer_field:588- get_serializer_field
 [BACKEND][2024-03-17 14:29:15] INFO 2024-03-17 14:29:15,046 baserow_vocabai_plugin.fields.vocabai_fieldtypes.get_serializer_field:588- get_serializer_field
 [BACKEND][2024-03-17 14:29:15] INFO 2024-03-17 14:29:15,052 baserow_vocabai_plugin.fields.vocabai_fieldtypes.get_serializer_field:588- get_serializer_field
 [BACKEND][2024-03-17 14:29:15] Traceback (most recent call last):
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
 [BACKEND][2024-03-17 14:29:15]     return self.cursor.execute(sql, params)
 [BACKEND][2024-03-17 14:29:15] psycopg2.errors.InsufficientPrivilege: permission denied for table django_migrations
 [BACKEND][2024-03-17 14:29:15]
 [BACKEND][2024-03-17 14:29:15]
 [BACKEND][2024-03-17 14:29:15] The above exception was the direct cause of the following exception:
 [BACKEND][2024-03-17 14:29:15]
 [BACKEND][2024-03-17 14:29:15] Traceback (most recent call last):
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/backend/src/baserow/manage.py", line 41, in <module>
 [BACKEND][2024-03-17 14:29:15]     main()
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/backend/src/baserow/manage.py", line 37, in main
 [BACKEND][2024-03-17 14:29:15]     execute_from_command_line(sys.argv)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
 [BACKEND][2024-03-17 14:29:15]     utility.execute()
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
 [BACKEND][2024-03-17 14:29:15]     self.fetch_command(subcommand).run_from_argv(self.argv)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
 [BACKEND][2024-03-17 14:29:15]     self.execute(*args, **cmd_options)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
 [BACKEND][2024-03-17 14:29:15]     output = self.handle(*args, **options)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/backend/src/baserow/core/management/commands/locked_migrate.py", line 43, in handle
 [BACKEND][2024-03-17 14:29:15]     super().handle(*args, **options)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
 [BACKEND][2024-03-17 14:29:15]     res = handle_func(*args, **kwargs)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 92, in handle
 [BACKEND][2024-03-17 14:29:15]     executor = MigrationExecutor(connection, self.migration_progress_callback)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/migrations/executor.py", line 18, in __init__
 [BACKEND][2024-03-17 14:29:15]     self.loader = MigrationLoader(self.connection)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/migrations/loader.py", line 53, in __init__
 [BACKEND][2024-03-17 14:29:15]     self.build_graph()
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/migrations/loader.py", line 220, in build_graph
 [BACKEND][2024-03-17 14:29:15]     self.applied_migrations = recorder.applied_migrations()
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/migrations/recorder.py", line 78, in applied_migrations
 [BACKEND][2024-03-17 14:29:15]     return {(migration.app, migration.name): migration for migration in self.migration_qs}
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/models/query.py", line 280, in __iter__
 [BACKEND][2024-03-17 14:29:15]     self._fetch_all()
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/models/query.py", line 1324, in _fetch_all
 [BACKEND][2024-03-17 14:29:15]     self._result_cache = list(self._iterable_class(self))
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/models/query.py", line 51, in __iter__
 [BACKEND][2024-03-17 14:29:15]     results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
 [BACKEND][2024-03-17 14:29:15]     cursor.execute(sql, params)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/sentry_sdk/integrations/django/__init__.py", line 560, in execute
 [BACKEND][2024-03-17 14:29:15]     return real_execute(self, sql, params)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 66, in execute
 [BACKEND][2024-03-17 14:29:15]     return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
 [BACKEND][2024-03-17 14:29:15]     return executor(sql, params, many, context)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
 [BACKEND][2024-03-17 14:29:15]     return self.cursor.execute(sql, params)
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
 [BACKEND][2024-03-17 14:29:15]     raise dj_exc_value.with_traceback(traceback) from exc_value
 [BACKEND][2024-03-17 14:29:15]   File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
 [BACKEND][2024-03-17 14:29:15]     return self.cursor.execute(sql, params)
 [BACKEND][2024-03-17 14:29:15] django.db.utils.ProgrammingError: permission denied for table django_migrations
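For my own notes, my first guess at a workaround would be to grant the role my instance connects as full access to what the import created (untested sketch; postgresdb.ipv6n.net and vocabai_words_qa are from my setup, and the superuser name postgres is an assumption):

psql -h postgresdb.ipv6n.net -U postgres -d vocabai_words_qa \
  -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO vocabai_words_qa;" \
  -c "GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO vocabai_words_qa;"

That would probably get past the django_migrations error, but might not cover ownership of tables Baserow creates later, so I’ll keep reading.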

Hi @lucw, it’s possible to dump the database from the embedded PostgreSQL and restore it in an external PostgreSQL server. Because Baserow can create many PostgreSQL tables, this sometimes comes with challenges. We’ve therefore written a script that can help you do this. I’d like to point you to the documentation that’s available here: Install with Docker // Baserow

The steps I would take are:

  • Stop your container.
  • Make the backup the way the documented command describes.
  • Spin up your new PostgreSQL instance.
  • Run the restore command, but this time also add the DATABASE_* environment variables (pointing at your new PostgreSQL server) to the docker run restore command. This will make sure that the dump is restored into the new PostgreSQL cluster.
  • Start your container again, also with the new DATABASE_* environment variables (a rough sketch of this follows the list).
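For that last step, with the all-in-one image the run command could look roughly like this (every name and value here is a placeholder, not your actual setup):

docker run -d --name baserow \
  -e BASEROW_PUBLIC_URL=http://localhost \
  -e DATABASE_HOST=your-postgres-host \
  -e DATABASE_PORT=5432 \
  -e DATABASE_NAME=baserow \
  -e DATABASE_USER=baserow \
  -e DATABASE_PASSWORD=yourpassword \
  -v baserow_data:/baserow/data \
  -p 80:80 -p 443:443 \
  --restart unless-stopped \
  baserow/baserow:1.22.2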

I’d recommend first trying this out on a testing copy.

The user files would still be stored in the Docker volume, so unless you move them to an S3 bucket, there is nothing you would have to do.

I’m sure that PG 15 is compatible with Baserow. I’ve never tried out PG 16, but I don’t expect any problems there either. The all-in-one image is indeed running PG 11. We will be launching a script in the next release that will update it to PG 15.

I hope that helps!

Awesome, let me try that!

I just tried this:

docker run -it --rm \
  -v baserow_data:/baserow/data \
  -v /root/restore_baserow:/baserow/host \
  -e DATABASE_HOST=postgresdb.ipv6n.net \
  -e DATABASE_USER=vocabai_words_qa \
  -e DATABASE_NAME=vocabai_words_qa \
  -e DATABASE_PASSWORD=password123 \
  lucwastiaux/baserow-vocabai-plugin:20240316.1 backend-cmd-with-db restore -f /baserow/host/backup-20240319-a.tar.gz

And I’m getting the errors below. Is this “baserow” role something I have to create ahead of time? Note that I’m changing the user and database name (in the all-in-one image, I used the defaults baserow/baserow). You can tell me if that’s simply not supported; I can adapt.


pg_restore: [archiver (db)] Error from TOC entry 199; 1259 16388 TABLE django_migrations baserow
pg_restore: [archiver (db)] could not execute query: ERROR:  role "baserow" does not exist
    Command was: ALTER TABLE public.django_migrations OWNER TO baserow;


pg_restore: [archiver (db)] Error from TOC entry 198; 1259 16386 SEQUENCE django_migrations_id_seq baserow
pg_restore: [archiver (db)] could not execute query: ERROR:  role "baserow" does not exist
    Command was: ALTER TABLE public.django_migrations_id_seq OWNER TO baserow;


pg_restore: [archiver (db)] Error from TOC entry 363; 1259 18582 TABLE django_session baserow
pg_restore: [archiver (db)] could not execute query: ERROR:  role "baserow" does not exist
    Command was: ALTER TABLE public.django_session OWNER TO baserow;


pg_restore: [archiver (db)] Error from TOC entry 362; 1259 18576 TABLE health_check_db_testmodel baserow
pg_restore: [archiver (db)] could not execute query: ERROR:  role "baserow" does not exist
    Command was: ALTER TABLE public.health_check_db_testmodel OWNER TO baserow;

Looks like the following works:

docker run -it --rm \
  -v baserow_data:/baserow/data \
  -v /root/restore_baserow:/baserow/host \
  -e DATABASE_HOST=postgresdb.ipv6n.net \
  -e DATABASE_USER=vocabai_words_qa \
  -e DATABASE_NAME=vocabai_words_qa \
  -e DATABASE_PASSWORD=password123 \
  lucwastiaux/baserow-vocabai-plugin:20240316.1 backend-cmd-with-db restore -f /baserow/host/backup-20240319-a.tar.gz \
  -- --no-owner --no-privileges

@bram you are a genius!! Thank you so much.

Glad to hear that it’s now working @lucw!