Restoring a SQL dump from MySQL and adding it to Baserow

Are you using our SaaS platform (Baserow.io) or self-hosting Baserow?

Self-hosted

If you are self-hosting, what version of Baserow are you running?

version 1.33.4

If you are self-hosting, which installation method do you use to run Baserow?

Docker version 28.1.1, build 4eba377

What are the exact steps to reproduce this issue?

I created a Baserow Docker container from their latest Docker image. I use an external Postgres DB.

Previously I was using NocoDB with an external MySQL, and from there I created a dump of the data. Also, to create the tables for my use, I created them by entering the MySQL server and running a .sql file (sample attached below). The structure over there was very simple, and direct SQL-level manipulation was possible.
Now I am migrating from NocoDB to Baserow and want to achieve similar functionality. I would prefer to take a periodic dump (on a daily basis) of the data and then restore the data from the dump when upgrading customer instances, pushing updates, or recovering from data loss after a crash. I tried exploring the PG database, but Baserow's table nomenclature is complex and just numeric, and I have no idea how to achieve the required functionality. I fear it would be very difficult to achieve all this via the API. So let me know if there is any way to inject data from a .sql file and take a data dump.

Attach screenshots, videos, or logs that demonstrate the issue.

Hi @gaurang1745,

To back up and restore data in Baserow, we have a guide here: Install with Docker

Also, to create the tables for my use, I created them by entering the MySQL server and running a .sql file (sample attached below)

We strongly discourage using custom SQL to create or modify tables, as the software-managed dependencies and relationships won’t be handled correctly, leading to unexpected bugs that are difficult to resolve. We recommend using the REST APIs for these operations.
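
As a rough sketch, table structure and rows can be created through the API along these lines (the host, IDs, and tokens below are placeholders; schema endpoints require a JWT obtained from the login endpoint, while row endpoints accept a database token):

# Create a table in database 123 (placeholder ID; requires a JWT):
curl -X POST "http://localhost/api/database/tables/database/123/" \
  -H "Authorization: JWT YOUR_JWT" \
  -H "Content-Type: application/json" \
  -d '{"name": "customs_entries"}'

# Insert a row into table 456 (placeholder ID; a database token works here):
curl -X POST "http://localhost/api/database/rows/table/456/?user_field_names=true" \
  -H "Authorization: Token YOUR_DATABASE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"Name": "Example row"}'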

Hi @davide

I understand; I will not do direct manipulation at the Postgres level.

To back up and restore data in Baserow, we have a guide here: Install with Docker

I went through the installation guide, but the backup and restore section given there is strictly applicable only when NOT using an external Postgres, which is not my case: I am using an external Postgres.

For me, the best approach for backup and restore right now seems to be taking a pg_dump each time I want a backup.
docker exec postgres_gaurang pg_dump -U root pe_customs > pe_customs_full_backup_$(date +%Y%m%d).sql
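
To make this a daily backup as described above, a minimal cron sketch (assuming the same container name and credentials, and a /backups directory on the host; note that % must be escaped as \% inside crontab entries):

# m h dom mon dow  command
0 2 * * * docker exec postgres_gaurang pg_dump -U root pe_customs > /backups/pe_customs_full_backup_$(date +\%Y\%m\%d).sql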

Now to restore it, I will clear the existing database in Postgres (if any), create it with the name given in DATABASE_URL, and restore from the backup dump.
cat pe_customs_full_backup_20250701.sql | docker exec -i postgres_gaurang psql -U root -d pe_customs
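
Spelled out, the full restore sequence I have in mind looks like this (assuming the Baserow container is named baserow; it is stopped first so no open connections block the drop):

# Stop Baserow so nothing holds connections to the database
docker stop baserow
# Drop and recreate the database named in DATABASE_URL (connect to the default "postgres" db to do so)
docker exec postgres_gaurang psql -U root -d postgres -c "DROP DATABASE IF EXISTS pe_customs;"
docker exec postgres_gaurang psql -U root -d postgres -c "CREATE DATABASE pe_customs;"
# Restore the dump, then bring Baserow back up
cat pe_customs_full_backup_20250701.sql | docker exec -i postgres_gaurang psql -U root -d pe_customs
docker start baserow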

Would this approach be recommended by Baserow?
I think taking dumps is standard practice and probably won’t affect native Baserow functionality.

Hi @gaurang1745, I apologize for the late response; I somehow missed the notification.

pg_dump is recommended, though you may encounter memory issues when dealing with a large number of user tables.

For that reason, we created two CLI commands a while ago to work around that problem:

  • backup_baserow: it uses pg_dump under the hood, but it exports tables in order and batches the user tables to avoid consuming all the available shared memory
  • restore_baserow: it starts from the output of the previous command and restores all the tables correctly

They are somewhat outdated and unmaintained, so I cannot guarantee they will work 100%, but you can take inspiration from the code to see how the operations are performed.
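
For reference, the invocation is roughly as follows (a sketch only; the exact entrypoint and paths depend on your installation method, so check the --help output of each command first):

# From the backend environment where the Django management commands are available:
./baserow backup_baserow -f /baserow/backups/baserow_backup.tar.gz
./baserow restore_baserow -f /baserow/backups/baserow_backup.tar.gz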