What commands if any did you use to start your Baserow server?
See above.
Describe the problem
We are using the Docker all-in-one Baserow image at the moment. For backup/restore, we use the approach provided in the docs: copying all the data from the volume mounted in the Docker container.
Now we want to move our Baserow instance to a Kubernetes cluster with a managed PostgreSQL and MinIO. This means we do not have access to the volumes where the PostgreSQL and MinIO data is stored. What would be the recommended approach to move our data to the new environment?
Hey @jjmurre, I recommend checking out the instructions here Install with Docker. The first step about taking a backup results in a tar containing a PostgreSQL and uploaded media files dump.
When you’ve deployed your Baserow Kubernetes cluster, you can connect directly to the PostgreSQL database and restore the dump there. The same goes for the media files in the MinIO S3 bucket. This should restore your complete instance.
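To make the restore side concrete, here is a rough sketch, not an official Baserow procedure. It assumes you already have a plain-SQL PostgreSQL dump and the uploaded media files extracted locally, that you restore the database over a normal network connection with psql, and that you push the media files with MinIO's mc client. The host, user, database, alias, and bucket names are all placeholders, not Baserow defaults, and nothing runs unless you supply connection details:

```shell
#!/usr/bin/env bash
# Hypothetical restore sketch (not an official Baserow procedure).
# Assumes you already have:
#   - a plain-SQL PostgreSQL dump (e.g. baserow.sql)
#   - the uploaded media files extracted to a local directory
# PGHOST, PGUSER, PGDATABASE, MINIO_ALIAS and the bucket name
# "baserow-media" are placeholders for your own environment.
set -euo pipefail

restore_baserow() {
  local dump="$1" media_dir="$2"

  # Restore the dump over the network; no filesystem access to the
  # managed PostgreSQL instance is needed.
  psql -h "$PGHOST" -U "${PGUSER:-baserow}" -d "${PGDATABASE:-baserow}" -f "$dump"

  # Mirror the media files into the MinIO bucket with the mc client.
  # The alias must be configured first:
  #   mc alias set <alias> <url> <access-key> <secret-key>
  mc mirror "$media_dir" "$MINIO_ALIAS/baserow-media"
}

# Only run when connection details are provided, so this stays a dry sketch.
if [ -n "${PGHOST:-}" ] && [ -n "${MINIO_ALIAS:-}" ]; then
  restore_baserow "$@"
else
  echo "PGHOST/MINIO_ALIAS not set; nothing executed."
fi
```

Keeping the work in a function guarded by an environment check means you can paste the sketch safely and only execute it once you have filled in your own connection details.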
If you’re using AWS, Azure, or another cloud provider, then I recommend using their managed PostgreSQL, S3, and Redis offerings. Those typically come with additional backup, scaling, and monitoring capabilities.
Hi @bram, thanks. Ah, I see. I assume I need to use the “Backup only Baserow’s Postgres database” approach in our case (because we do not have access to the filesystem of the managed PostgreSQL).
As for MinIO, I assume we can dump the Baserow MinIO buckets of our Docker all-in-one and push those into a bucket on the managed MinIO?
Hey @jjmurre, in this case I think you would still need to use the “Backup all of Baserow” method, because if you’re migrating to another instance you would need the PostgreSQL dump and also the uploaded files. The uploaded files are the files that the user uploads in the file field, for example. In the all-in-one image, these are just stored in a folder, not on MinIO.
Basically, the MinIO dump you would like to take from the all-in-one image is already included when you use the “Backup all of Baserow” method to make the backup.
Once you have the two, you can restore the PostgreSQL dump into the PostgreSQL database and the uploaded user files into the MinIO deployment in the Kubernetes cluster.
docker run --rm -v new_baserow_data_volume:/results -v $PWD:/backup ubuntu bash -c "mkdir -p /results/ && cd /results && tar xvf /backup/backup.tar --strip 2"
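Before running that restore command, it can help to list the archive first, since the internal layout of backup.tar may differ between Baserow versions; the `--strip 2` in the command above assumes exactly two leading path components inside the tar. A small sketch with hypothetical helper names:

```shell
#!/usr/bin/env bash
# Sketch: inspect a backup archive before restoring it, so you know where
# the PostgreSQL dump and the media files live inside it. The helper names
# are illustrative; the archive layout itself is not assumed here.
set -euo pipefail

list_backup() {
  # Print every path stored in the archive.
  tar tf "$1"
}

extract_backup() {
  local archive="$1" dest="$2"
  mkdir -p "$dest"
  # --strip-components=2 drops the two leading path components, matching
  # the --strip 2 in the restore command from the docs quoted above.
  tar xf "$archive" -C "$dest" --strip-components=2
}
```

Listing first lets you confirm how many leading directories the archive has before committing to a strip depth.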
The filesystem where the PostgreSQL data lives needs to be mounted. However, because we are using a PostgreSQL instance managed by another department, we are not able to mount the data volume of that instance.
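For that situation, pg_dump works over a plain network connection, so no access to the data volume is needed on either side. A minimal sketch, where the host, user, and database names passed in are placeholders for your own environment:

```shell
#!/usr/bin/env bash
# Sketch: dump a database over the network with pg_dump; no filesystem
# access to the PostgreSQL server is required.
set -euo pipefail

dump_over_network() {
  # -Fc produces a compressed, custom-format dump; note that this format
  # is restored with pg_restore rather than psql.
  pg_dump -h "$1" -U "$2" -d "$3" -Fc -f baserow.dump
}

# Example call with placeholder values:
#   dump_over_network db.example.internal baserow baserow
```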