"Restore all of Baserow" instructions seem incomplete

https://baserow.io/docs/installation%2Finstall-with-docker#backup-all-of-baserow

the " Restore all of Baserow" section seems wrong, it restores to a new volume
here’s what worked for me:
echo "restoring"

docker run --rm \
  -v baserow_data:/baserow/data \
  -v $BACKUP_DIR:/backup \
  ubuntu bash -c "rm -rf /baserow/data/* && mkdir -p /baserow/data && cd /baserow/data && tar xzf /backup/${BACKUP_FILENAME} --strip-components=2"

Hm, so the idea was that you would then switch to using that new volume, rather than overwriting the contents of an existing volume. This way, if something has gone wrong with the restore, you still have the old data volume as a safety backup. Whereas if the instructions had you overwrite it and something was incorrect or broken in the restore, you would have lost your old working volume.
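Roughly, switching over could look like the sketch below. This is just an illustration, not the exact docs procedure: the restored volume name baserow_data_restored, the container name baserow, and the image tag are placeholders for whatever your setup actually uses.

# Stop and remove the container that is still using the old volume.
# The named volumes themselves are not deleted by this.
docker stop baserow
docker rm baserow

# Start Baserow again, but pointed at the freshly restored volume.
docker run -d --name baserow \
  -e BASEROW_PUBLIC_URL=http://localhost \
  -v baserow_data_restored:/baserow/data \
  -p 80:80 -p 443:443 \
  baserow/baserow:1.x.y

The old baserow_data volume stays untouched, so you can point Baserow back at it if the restore turns out to be broken.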

If you agree (and feel free not to, other opinions are more than welcome!) I'll update the docs to make this clearer and provide instructions on how to start using the new volume.

Ah, understood. Anyway, it was sufficient to get me on the right track to implement my own backup scripts. The only annoying thing is that it requires a shutdown, but it's a very short amount of downtime, so it's fine.
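For reference, the flow my script ended up with is roughly the sketch below (the container name baserow and volume name baserow_data are from my setup, adjust to yours):

# Stop Baserow so the data directory is consistent on disk.
docker stop baserow

# Archive the whole data volume into $BACKUP_DIR.
BACKUP_FILENAME=baserow_backup_$(date +%Y%m%d_%H%M%S).tar.gz
docker run --rm \
  -v baserow_data:/baserow/data \
  -v $BACKUP_DIR:/backup \
  ubuntu tar czf /backup/$BACKUP_FILENAME /baserow/data

# Start Baserow again; the downtime is only as long as the tar takes.
docker start baserow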

In the embedded version of Baserow, when the user is using our embedded Postgres, we could add live, no-shutdown backups as an option. This would be done using something like pg_basebackup, with no need to shut anything down.

Any user who can figure out how to use the above command could use it right now to do live backups of the Baserow database, rather than waiting on us to figure it out and write docs, etc.

From my notes, here is something I left for future us when looking into pg_basebackup that might also be of some help:

We also need to be slightly more careful with how exactly we run pg_basebackup, as it is possible to create backups without the transaction logs. If the transaction logs (also known as WALs) are not included, it is almost certain that the backup will be corrupted and unusable. See https://www.cybertec-postgresql.com/en/pg_basebackup-creating-self-sufficient-backups/ for more details.
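If anyone wants to experiment before we have official docs, a minimal sketch of a live backup against the embedded Postgres could look like the commands below. This is an illustration only: the connection details (user baserow, port 5432) and the output paths are assumptions about your deployment, the role used needs replication privileges, and you should check that pg_basebackup is available inside your container.

# Run pg_basebackup inside the running Baserow container.
# -X stream includes the WALs so the backup is self-sufficient,
# -F tar -z writes compressed tar files, -P shows progress.
docker exec -it baserow \
  pg_basebackup -h localhost -p 5432 -U baserow \
    -D /tmp/live_backup -X stream -F tar -z -P

# Copy the result out of the container to the host.
docker cp baserow:/tmp/live_backup ./live_backup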

Edit: And if you are using your own external Postgres, you could also use pg_basebackup or any of the many other live backup solutions available for Postgres.
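For example, even a plain pg_dump taken while Baserow is running gives you a consistent logical backup of the database with no downtime; the host, user, and database name below are placeholders for your own setup:

# Consistent logical backup of the Baserow database while it stays live,
# written in the custom format so it can later be restored with pg_restore.
pg_dump -h db.example.com -p 5432 -U baserow -d baserow \
  -Fc -f baserow_$(date +%Y%m%d).dump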

This doc goes into a bit more detail on why our backup commands work the way they do and why we didn't just use pg_basebackup to begin with: docs/decisions/002-baserow-data-backups.md · develop · Bram Wiepjes / baserow · GitLab

Awesome, I'll do a deep dive into your docs if I ever need something more sophisticated. For now, the backup which requires stopping the container is good enough. I've got some scripts which upload the backup archive to S3 storage for off-server backup.
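In case it helps anyone else, the upload step in my scripts is essentially just the line below (the bucket name is a placeholder, and it assumes the AWS CLI is configured with credentials that can write to that bucket):

# Ship the backup archive off-server to S3, keyed by hostname.
aws s3 cp "$BACKUP_DIR/$BACKUP_FILENAME" s3://my-baserow-backups/$(hostname)/$BACKUP_FILENAME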