Baserow data lost in docker volume mount

Hi,
I have deployed Baserow behind Nginx on Amazon ECS, and everything works fine. The problem comes when the task is stopped and restarted: all data is lost and the admin account creation screen appears again, so nothing is being stored permanently.

The mount point I created in Amazon ECS has these specifications:
Path → /baserow/data
Volume source → baserow_data

This is the Dockerfile I use to run the Baserow container. I think the problem may come from the VOLUME definition, but while the task is running on Amazon ECS the plugin is found in /baserow/data/… and all the code runs fine, so I would appreciate some help with this.

Dockerfile

FROM baserow/baserow:1.10.2

ENV DATABASE_PASSWORD=****
ENV SECRET_KEY=****
ENV REDIS_PASSWORD=****
ENV BASEROW_PUBLIC_URL=http://baserow.mydomain.com
ENV PRIVATE_BACKEND_URL=http://backend:8000
ENV PRIVATE_WEB_FRONTEND_URL=http://web-frontend:3000
ENV PUBLIC_BACKEND_URL=
ENV PUBLIC_WEB_FRONTEND_URL=
ENV BASEROW_CADDY_ADDRESSES=:8080
ENV WEB_FRONTEND_PORT=80
ENV WEB_FRONTEND_SSL_PORT=443
ENV HOST_PUBLISH_IP=0.0.0.0
ENV MEDIA_URL=
ENV BASEROW_EXTRA_ALLOWED_HOSTS=
ENV BASEROW_CADDY_GLOBAL_CONF=
ENV MIGRATE_ON_STARTUP=true
ENV SYNC_TEMPLATES_ON_STARTUP=true
ENV DATABASE_USER=baserow
ENV DATABASE_NAME=baserow
ENV DATA_DIR=/baserow/data
ENV BASEROW_PLUGIN_DIR=$DATA_DIR/plugins
ENV FEATURE_FLAGS=
ENV BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER=
ENV EMAIL_SMTP=
ENV EMAIL_SMTP_HOST=
ENV EMAIL_SMTP_PORT=
ENV EMAIL_SMTP_USE_TLS=
ENV EMAIL_SMTP_USER=
ENV EMAIL_SMTP_PASSWORD=
ENV FROM_EMAIL=
ENV DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS=
COPY ./plugins/myplugin/ /baserow/plugins/myplugin/
RUN /baserow/plugins/install_plugin.sh --folder /baserow/plugins/myplugin

VOLUME /baserow/data

When running Baserow and plugins locally, the inspected volume has these specifications:

docker volume inspect baserow_data

[
    {
        "CreatedAt": "2022-07-20T13:04:32Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/baserow_data/_data",
        "Name": "baserow_data",
        "Options": null,
        "Scope": "local"
    }
]

I would appreciate a little guidance here so that I can continue developing without losing all my data.

Hi @DavidPoza

Could you also provide your ECS task definition showing how you’ve set the mountpoint?

From the ECS docs it looks like this is perhaps expected behaviour after stopping and starting a container that uses a bind mount:

With bind mounts, a file or directory on a host, such as AWS Fargate, is mounted into a container. Bind mounts are tied to the lifecycle of the container that uses them. After all of the containers that use a bind mount are stopped, such as when a task is stopped, the data is removed.

Perhaps instead you need to use one of these? Amazon EFS volumes - Amazon ECS
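As a sketch of what that could look like, the `volumes` section of your task definition would reference an EFS file system instead of a plain named volume. The `fileSystemId` below is a placeholder — substitute your own:

```json
{
    "volumes": [
        {
            "name": "baserow_data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-xxxxxxxxxxxxxxxxx",
                "rootDirectory": "/",
                "transitEncryption": "ENABLED"
            }
        }
    ]
}
```

Because EFS is network storage independent of any one container or host, the data survives the task being stopped and restarted.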

Next I would recommend stripping down your Dockerfile a tad. You seem to have defined a large number of environment variables which, in your situation, I would leave at their defaults by not setting them at all:

FROM baserow/baserow:1.10.2

ENV DATABASE_PASSWORD=****
ENV SECRET_KEY=****
ENV REDIS_PASSWORD=****
ENV BASEROW_PUBLIC_URL=http://baserow.mydomain.com
ENV BASEROW_CADDY_ADDRESSES=:8080

ENV EMAIL_SMTP=
ENV EMAIL_SMTP_HOST=
ENV EMAIL_SMTP_PORT=
ENV EMAIL_SMTP_USE_TLS=
ENV EMAIL_SMTP_USER=
ENV EMAIL_SMTP_PASSWORD=
ENV FROM_EMAIL=

COPY ./plugins/myplugin/ /baserow/plugins/myplugin/
RUN /baserow/plugins/install_plugin.sh --folder /baserow/plugins/myplugin

Finally, I would also experiment with removing the VOLUME directive from your Dockerfile and just use the task definition alone to mount /baserow/data.
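For illustration (container and volume names assumed to match your setup), the mount would then live entirely in the container definition's `mountPoints`, with no VOLUME in the image:

```json
{
    "containerDefinitions": [
        {
            "name": "baserow",
            "mountPoints": [
                {
                    "sourceVolume": "baserow_data",
                    "containerPath": "/baserow/data",
                    "readOnly": false
                }
            ]
        }
    ]
}
```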

Yes, I think the only solution is to use EFS volumes.
I will configure it and reply to this post if it works.

Thanks for your help!

It works perfectly.
Thanks!