Unable to export tables to CSV, unable to upgrade from 1.23.2 to 1.24.2, and unable to export the database

Please fill in the questionnaire below.

Technical Help Questionnaire

Have you read and followed the instructions at: *READ ME FIRST* Technical Help FAQs - #2 by nigel ?

Answer: Yes, I have.

Self-Hosted Installation and Setup Questions

How have you self-hosted Baserow?

Docker Compose with docker-compose.yml (the multi-service one); I am trying to migrate off of it.

What are the specs of the service or server you are using to host Baserow?

3 GB of RAM, 4 CPUs

Which version of Baserow are you using?

1.23.2

How have you configured your self-hosted installation?

.env
docker-compose.yml

What commands if any did you use to start your Baserow server?

These are the variables I set in .env (values redacted):

SECRET_KEY=
DATABASE_PASSWORD=
REDIS_PASSWORD=
BASEROW_PUBLIC_URL=

Describe the problem

I am trying to migrate from the multi-service Docker Compose setup to the all-in-one Docker Compose setup.

  1. I am unable to upgrade to 1.24.2 because of a PostgreSQL 11 issue.
  2. I am unable to export tables to CSV; the export hangs at the download step.
  3. I tried to export the database using `backup -f backup.tar.gz`, but I have been unable to restore it into the all-in-one Docker Compose system.
  4. I feel I have lost all my data in Baserow: even though the instance is still running, getting the data out and restoring it is too complex.

My problem is that all my data is stuck in a broken Baserow instance, and I have no easy way of migrating it to a new instance. There should be an easy way in the GUI to export all data and restore it into a new instance.

Describe, step by step, how to reproduce the error or problem you are encountering.

Provide screenshots or include share links showing:

I am trying to export to CSV so I can migrate off this instance, but the export fails.

I can export data using the command below, but I have not been able to restore this data into the all-in-one Docker setup:

docker run -it \
  --rm \
  -v baserow_data:/baserow/data \
  -v $PWD:/baserow/host \
  -e SECRET_KEY="" \
  -e DATABASE_HOST="" \
  -e DATABASE_NAME="" \
  -e DATABASE_USER="" \
  -e DATABASE_PASSWORD="" \
  -e DATABASE_PORT="5432" \
  baserow/backend:1.23.2 \
  backup -f backup.tar.gz
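Before attempting a restore, it may help to sanity-check that the archive the command above produced is a readable gzip tarball. These are generic archive checks, not Baserow-specific; a stand-in archive is created here only to demonstrate them, and in a real run you would point them at the `backup.tar.gz` from above.

```shell
# Build a stand-in archive just for demonstration purposes.
mkdir -p /tmp/backup_demo && echo "dummy" > /tmp/backup_demo/toc.dat
tar -czf /tmp/backup.tar.gz -C /tmp/backup_demo .

# The actual sanity checks: gzip integrity and a listing of the contents.
gzip -t /tmp/backup.tar.gz && echo "archive OK"
tar -tzf /tmp/backup.tar.gz
```

If `gzip -t` fails or the listing is empty, the backup was likely truncated and restoring it will never work, so this is worth ruling out first.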

How many rows in total do you have in your Baserow tables?

Not sure

Please attach full logs from all of Baserow’s services

NAME                             IMAGE                         COMMAND                  SERVICE                CREATED             STATUS                         PORTS
baserow-backend-1                baserow/backend:1.23.2        "/usr/bin/tini -- /b…"   backend                4 weeks ago         Up About an hour (healthy)
baserow-caddy-1                  caddy:2                       "caddy run --config …"   caddy                  4 weeks ago         Up About an hour               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/udp, 2019/tcp
baserow-celery-1                 baserow/backend:1.23.2        "/usr/bin/tini -- /b…"   celery                 4 weeks ago         Up About an hour (healthy)
baserow-celery-beat-worker-1     baserow/backend:1.23.2        "/usr/bin/tini -- /b…"   celery-beat-worker     4 weeks ago         Up About an hour (unhealthy)
baserow-celery-export-worker-1   baserow/backend:1.23.2        "/usr/bin/tini -- /b…"   celery-export-worker   4 weeks ago         Up About an hour (healthy)
baserow-db-1                     postgres:11                   "docker-entrypoint.s…"   db                     4 weeks ago         Up About an hour (healthy)     5432/tcp
baserow-redis-1                  redis:6                       "docker-entrypoint.s…"   redis                  4 weeks ago         Up About an hour (healthy)     6379/tcp
baserow-web-frontend-1           baserow/web-frontend:1.23.2   "/usr/bin/tini -- /b…"   web-frontend           4 weeks ago         Up About an hour (healthy)     3000/tcp

docker logs -f baserow-web-frontend-1

ERROR getaddrinfo ENOTFOUND backend

at module.exports.AxiosError.from (node_modules/axios/lib/core/AxiosError.js:80:0)
at RedirectableRequest.handleRequestError (node_modules/axios/lib/adapters/http.js:610:0)
at RedirectableRequest.emit (node:events:517:28)
at RedirectableRequest.emit (node:domain:489:12)
at eventHandlers. (node_modules/follow-redirects/index.js:38:24)
at ClientRequest.emit (node:events:529:35)
at ClientRequest.emit (node:domain:489:12)
at Socket.socketErrorListener (node:_http_client:501:9)
at Socket.emit (node:events:517:28)
at Socket.emit (node:domain:489:12)
at Axios_Axios.request (node_modules/axios/lib/core/Axios.js:37:0)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Store.load (modules/core/store/settings.js:1:0) (repeated 3 times)

I was finally able to upgrade to 1.24.2, leaving Postgres at 11, by modifying the image versions in docker-compose.yml instead of downloading a new file from the website.
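The edit boiled down to bumping the Baserow image tags in place. Sketched here on a throwaway snippet (the file path and the snippet itself are illustrative, not my real compose file):

```shell
# Write a minimal stand-in for the relevant part of docker-compose.yml.
cat > /tmp/compose-snippet.yml <<'EOF'
services:
  backend:
    image: baserow/backend:1.23.2
  web-frontend:
    image: baserow/web-frontend:1.23.2
EOF

# Bump every baserow/* image tag from 1.23.2 to 1.24.2, leaving postgres alone.
sed -i 's|\(baserow/[a-z-]*\):1\.23\.2|\1:1.24.2|g' /tmp/compose-snippet.yml

grep 'image:' /tmp/compose-snippet.yml
```

After editing the real file, `docker compose up -d` pulls the new images and, with `MIGRATE_ON_STARTUP` left at its default of `true`, runs the database migrations on startup.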

I exported the data and restored it on a new, untouched virtual machine. The table names are there, but clicking on any table gives me an error.

Backup

docker compose run -v ~/baserow_backups:/baserow/backups backend backup -f /baserow/backups/baserow_backup.tar.gz

Restore

docker compose run -v ~/baserow_backups:/baserow/backups baserow backend-cmd-with-db restore -f /baserow/backups/baserow_backup.tar.gz
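Putting the two commands together, my understanding of the intended end-to-end migration is roughly the following. This is only a sketch based on the docs: the backup directory and file names are the ones I used above, and I cannot confirm the exact sequence is what the docs intend.

```shell
# On the OLD multi-service host: stop everything except the database,
# then take a backup with the backend service.
docker compose stop backend celery celery-export-worker celery-beat-worker web-frontend
docker compose run -v ~/baserow_backups:/baserow/backups backend \
  backup -f /baserow/backups/baserow_backup.tar.gz

# Copy the archive to the NEW host, e.g.:
# scp ~/baserow_backups/baserow_backup.tar.gz newhost:~/baserow_backups/

# On the NEW all-in-one host: restore into a fresh, never-used database,
# then bring the stack up.
docker compose run -v ~/baserow_backups:/baserow/backups baserow \
  backend-cmd-with-db restore -f /baserow/backups/baserow_backup.tar.gz
docker compose up -d
```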

Docs

I followed this guidance, but it needs more examples of exactly how to restore. The relevant section, all the way at the bottom, did not contain enough information.

https://baserow.io/docs/installation%2Finstall-with-docker-compose

Logs

[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:24:45.908 UTC [437] baserow@baserow STATEMENT: SELECT "database_table_468"."id", "database_table_468"."created_on", "database_table_468"."updated_on", "database_table_468"."trashed", "database_table_468"."field_4235", "database_table_468"."field_4236", "database_table_468"."field_4238", "database_table_468"."order", "database_table_468"."needs_background_update", "database_table_468"."created_by_id", "database_table_468"."last_modified_by_id" FROM "database_table_468" WHERE NOT "database_table_468"."trashed" ORDER BY "database_table_468"."field_4235" COLLATE "en-x-icu" ASC NULLS FIRST, "database_table_468"."order" ASC NULLS FIRST, "database_table_468"."id" ASC NULLS FIRST LIMIT 21
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.801 UTC [561] baserow@baserow ERROR: relation "database_table_557" does not exist at character 35
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.801 UTC [561] baserow@baserow STATEMENT: SELECT COUNT(*) AS "__count" FROM "database_table_557" WHERE NOT "database_table_557"."trashed"
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.837 UTC [563] baserow@baserow ERROR: relation "database_table_557" does not exist at character 406
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.837 UTC [563] baserow@baserow STATEMENT: SELECT "database_table_557"."id", "database_table_557"."created_on", "database_table_557"."updated_on", "database_table_557"."trashed", "database_table_557"."field_5091", "database_table_557"."field_5092", "database_table_557"."field_5097", "database_table_557"."order", "database_table_557"."needs_background_update", "database_table_557"."created_by_id", "database_table_557"."last_modified_by_id" FROM "database_table_557" WHERE NOT "database_table_557"."trashed" ORDER BY "database_table_557"."order" ASC NULLS FIRST, "database_table_557"."id" ASC NULLS FIRST LIMIT 21
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.838 UTC [563] baserow@baserow ERROR: relation "database_table_557" does not exist at character 406
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.838 UTC [563] baserow@baserow STATEMENT: SELECT "database_table_557"."id", "database_table_557"."created_on", "database_table_557"."updated_on", "database_table_557"."trashed", "database_table_557"."field_5091", "database_table_557"."field_5092", "database_table_557"."field_5097", "database_table_557"."order", "database_table_557"."needs_background_update", "database_table_557"."created_by_id", "database_table_557"."last_modified_by_id" FROM "database_table_557" WHERE NOT "database_table_557"."trashed" ORDER BY "database_table_557"."order" ASC NULLS FIRST, "database_table_557"."id" ASC NULLS FIRST LIMIT 21
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.838 UTC [563] baserow@baserow ERROR: relation "database_table_557" does not exist at character 406
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.838 UTC [563] baserow@baserow STATEMENT: SELECT "database_table_557"."id", "database_table_557"."created_on", "database_table_557"."updated_on", "database_table_557"."trashed", "database_table_557"."field_5091", "database_table_557"."field_5092", "database_table_557"."field_5097", "database_table_557"."order", "database_table_557"."needs_background_update", "database_table_557"."created_by_id", "database_table_557"."last_modified_by_id" FROM "database_table_557" WHERE NOT "database_table_557"."trashed" ORDER BY "database_table_557"."order" ASC NULLS FIRST, "database_table_557"."id" ASC NULLS FIRST LIMIT 21
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.839 UTC [563] baserow@baserow ERROR: relation "database_table_557" does not exist at character 406
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.839 UTC [563] baserow@baserow STATEMENT: SELECT "database_table_557"."id", "database_table_557"."created_on", "database_table_557"."updated_on", "database_table_557"."trashed", "database_table_557"."field_5091", "database_table_557"."field_5092", "database_table_557"."field_5097", "database_table_557"."order", "database_table_557"."needs_background_update", "database_table_557"."created_by_id", "database_table_557"."last_modified_by_id" FROM "database_table_557" WHERE NOT "database_table_557"."trashed" ORDER BY "database_table_557"."order" ASC NULLS FIRST, "database_table_557"."id" ASC NULLS FIRST LIMIT 21
[POSTGRES][2024-04-20 23:26:31] 2024-04-20 23:26:31.840 UTC [563] baserow@baserow ERROR: relation "database_table_557" does not exist at character 406
[BACKEND][2024-04-20 23:26:31] 10.21.0.195:0 - "GET /api/database/views/table/557/?include=filters,sortings,group_bys,decorations HTTP/1.1" 200
[BACKEND][2024-04-20 23:26:31] ERROR 2024-04-20 23:26:31,808 django.request.log_response:241- Internal Server Error: /api/database/views/grid/2273/
[BACKEND][2024-04-20 23:26:31] Traceback (most recent call last):
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
[BACKEND][2024-04-20 23:26:31] return self.cursor.execute(sql, params)
[BACKEND][2024-04-20 23:26:31] psycopg2.errors.UndefinedTable: relation "database_table_557" does not exist
[BACKEND][2024-04-20 23:26:31] LINE 1: SELECT COUNT(*) AS "__count" FROM "database_table_557" WHERE...
[BACKEND][2024-04-20 23:26:31] ^
[BACKEND][2024-04-20 23:26:31]
[BACKEND][2024-04-20 23:26:31]
[BACKEND][2024-04-20 23:26:31] The above exception was the direct cause of the following exception:
[BACKEND][2024-04-20 23:26:31]
[BACKEND][2024-04-20 23:26:31] Traceback (most recent call last):
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/asgiref/sync.py", line 486, in thread_handler
[BACKEND][2024-04-20 23:26:31] raise exc_info[1]
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/core/handlers/exception.py", line 43, in inner
[BACKEND][2024-04-20 23:26:31] response = await get_response(request)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
[BACKEND][2024-04-20 23:26:31] response = await wrapped_callback(
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/asgiref/sync.py", line 448, in __call__
[BACKEND][2024-04-20 23:26:31] ret = await asyncio.wait_for(future, timeout=None)
[BACKEND][2024-04-20 23:26:31] File "/usr/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
[BACKEND][2024-04-20 23:26:31] return await fut
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/asgiref/current_thread_executor.py", line 22, in run
[BACKEND][2024-04-20 23:26:31] result = self.fn(*self.args, **self.kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/asgiref/sync.py", line 490, in thread_handler
[BACKEND][2024-04-20 23:26:31] return func(*args, **kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 55, in wrapped_view
[BACKEND][2024-04-20 23:26:31] return view_func(*args, **kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/views/generic/base.py", line 103, in view
[BACKEND][2024-04-20 23:26:31] return self.dispatch(request, *args, **kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
[BACKEND][2024-04-20 23:26:31] response = self.handle_exception(exc)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/rest_framework/views.py", line 469, in handle_exception
[BACKEND][2024-04-20 23:26:31] self.raise_uncaught_exception(exc)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
[BACKEND][2024-04-20 23:26:31] raise exc
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
[BACKEND][2024-04-20 23:26:31] response = handler(request, *args, **kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/backend/src/baserow/api/decorators.py", line 105, in func_wrapper
[BACKEND][2024-04-20 23:26:31] return func(*args, **kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/backend/src/baserow/api/decorators.py", line 340, in func_wrapper
[BACKEND][2024-04-20 23:26:31] return func(*args, **kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/backend/src/baserow/api/decorators.py", line 170, in func_wrapper
[BACKEND][2024-04-20 23:26:31] return func(*args, **kwargs)
[BACKEND][2024-04-20 23:26:31] File "/baserow/backend/src/baserow/contrib/database/api/views/grid/views.py", line 379, in get
[BACKEND][2024-04-20 23:26:31] page = paginator.paginate_queryset(queryset, request, self)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/rest_framework/pagination.py", line 387, in paginate_queryset
[BACKEND][2024-04-20 23:26:31] self.count = self.get_count(queryset)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/rest_framework/pagination.py", line 525, in get_count
[BACKEND][2024-04-20 23:26:31] return queryset.count()
[BACKEND][2024-04-20 23:26:31] File "/baserow/backend/src/baserow/contrib/database/table/models.py", line 161, in count
[BACKEND][2024-04-20 23:26:31] return super().count()
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/models/query.py", line 621, in count
[BACKEND][2024-04-20 23:26:31] return self.query.get_count(using=self.db)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/models/sql/query.py", line 559, in get_count
[BACKEND][2024-04-20 23:26:31] return obj.get_aggregation(using, ["__count"])["__count"]
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/models/sql/query.py", line 544, in get_aggregation
[BACKEND][2024-04-20 23:26:31] result = compiler.execute_sql(SINGLE)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1398, in execute_sql
[BACKEND][2024-04-20 23:26:31] cursor.execute(sql, params)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
[BACKEND][2024-04-20 23:26:31] return self._execute_with_wrappers(
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
[BACKEND][2024-04-20 23:26:31] return executor(sql, params, many, context)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
[BACKEND][2024-04-20 23:26:31] return self.cursor.execute(sql, params)
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
[BACKEND][2024-04-20 23:26:31] raise dj_exc_value.with_traceback(traceback) from exc_value
[BACKEND][2024-04-20 23:26:31] File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
[BACKEND][2024-04-20 23:26:31] return self.cursor.execute(sql, params)
[BACKEND][2024-04-20 23:26:31] django.db.utils.ProgrammingError: relation "database_table_557" does not exist
[BACKEND][2024-04-20 23:26:31] LINE 1: SELECT COUNT(*) AS "__count" FROM "database_table_557" WHERE...
[BACKEND][2024-04-20 23:26:31] ^
[BACKEND][2024-04-20 23:26:31]
[BACKEND][2024-04-20 23:26:31] 10.21.0.195:0 - "GET /api/database/views/grid/2273/?limit=80&offset=0&include=field_options%2Crow_metadata HTTP/1.1" 500
[BACKEND][2024-04-20 23:26:39] [2024-04-20 23:26:31 +0000] [399] [INFO] connection closed

Hello @Catalyze4968,

could you please share your docker-compose.yml before and after your attempt to upgrade?

Current docker-compose.yml

version: "3.4"
########################################################################################
#
# READ ME FIRST!
#
# Use the much simpler `docker-compose.all-in-one.yml` instead of this file
# unless you are an advanced user!
#
# This compose file runs every service separately using a Caddy reverse proxy to route
# requests to the backend or web-frontend, handle websockets and to serve uploaded files
# . Even if you have your own http proxy we recommend to simply forward requests to it
# as it is already properly configured for Baserow.
#
# If you wish to continue with this more advanced compose file, it is recommended that
# you set environment variables using the .env.example file by:
# 1. `cp .env.example .env`
# 2. Edit .env and fill in the first 3 variables.
# 3. Set further environment variables as you wish.
#
# More documentation can be found in:
# https://baserow.io/docs/installation%2Finstall-with-docker-compose
#
########################################################################################

# See https://baserow.io/docs/installation%2Fconfiguration for more details on these
# backend environment variables, their defaults if left blank etc.
x-backend-variables: &backend-variables
  # Most users should only need to set these first four variables.
  SECRET_KEY: ${SECRET_KEY:?}
  BASEROW_JWT_SIGNING_KEY: ${BASEROW_JWT_SIGNING_KEY:-}
  DATABASE_PASSWORD: ${DATABASE_PASSWORD:?}
  REDIS_PASSWORD: ${REDIS_PASSWORD:?}
  # If you manually change this line make sure you also change the duplicate line in
  # the web-frontend service.
  BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}

  # Set these if you want to use an external postgres instead of the db service below.
  DATABASE_USER: ${DATABASE_USER:-baserow}
  DATABASE_NAME: ${DATABASE_NAME:-baserow}
  DATABASE_HOST:
  DATABASE_PORT:
  DATABASE_OPTIONS:
  DATABASE_URL:

  # Set these if you want to use an external redis instead of the redis service below.
  REDIS_HOST:
  REDIS_PORT:
  REDIS_PROTOCOL:
  REDIS_URL:
  REDIS_USER:

  # Set these to enable Baserow to send emails.
  EMAIL_SMTP:
  EMAIL_SMTP_HOST:
  EMAIL_SMTP_PORT:
  EMAIL_SMTP_USE_TLS:
  EMAIL_SMTP_USE_SSL:
  EMAIL_SMTP_USER:
  EMAIL_SMTP_PASSWORD:
  EMAIL_SMTP_SSL_CERTFILE_PATH:
  EMAIL_SMTP_SSL_KEYFILE_PATH:
  FROM_EMAIL:

  # Set these to use AWS S3 bucket to store user files.
  AWS_ACCESS_KEY_ID:
  AWS_SECRET_ACCESS_KEY:
  AWS_STORAGE_BUCKET_NAME:
  AWS_S3_REGION_NAME:
  AWS_S3_ENDPOINT_URL:
  AWS_S3_CUSTOM_DOMAIN:

  # Misc settings see https://baserow.io/docs/installation%2Fconfiguration for info
  BASEROW_AMOUNT_OF_WORKERS:
  BASEROW_ROW_PAGE_SIZE_LIMIT:
  BATCH_ROWS_SIZE_LIMIT:
  INITIAL_TABLE_DATA_LIMIT:
  BASEROW_FILE_UPLOAD_SIZE_LIMIT_MB:
  BASEROW_UNIQUE_ROW_VALUES_SIZE_LIMIT:

  BASEROW_EXTRA_ALLOWED_HOSTS:
  ADDITIONAL_APPS:
  BASEROW_PLUGIN_GIT_REPOS:
  BASEROW_PLUGIN_URLS:

  BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER:
  MIGRATE_ON_STARTUP: ${MIGRATE_ON_STARTUP:-true}
  SYNC_TEMPLATES_ON_STARTUP: ${SYNC_TEMPLATES_ON_STARTUP:-true}
  DONT_UPDATE_FORMULAS_AFTER_MIGRATION:
  BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION:
  BASEROW_SYNC_TEMPLATES_TIME_LIMIT:

  BASEROW_BACKEND_DEBUG:
  BASEROW_BACKEND_LOG_LEVEL:
  FEATURE_FLAGS:
  BASEROW_ENABLE_OTEL:
  BASEROW_DEPLOYMENT_ENV:
  OTEL_EXPORTER_OTLP_ENDPOINT:
  OTEL_RESOURCE_ATTRIBUTES:
  POSTHOG_PROJECT_API_KEY:
  POSTHOG_HOST:

  PRIVATE_BACKEND_URL: http://backend:8000
  PUBLIC_BACKEND_URL:
  PUBLIC_WEB_FRONTEND_URL:
  BASEROW_EMBEDDED_SHARE_URL:
  MEDIA_URL:
  MEDIA_ROOT:

  BASEROW_AIRTABLE_IMPORT_SOFT_TIME_LIMIT:
  HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
  OLD_ACTION_CLEANUP_INTERVAL_MINUTES:
  MINUTES_UNTIL_ACTION_CLEANED_UP:
  BASEROW_GROUP_STORAGE_USAGE_QUEUE:
  DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
  BASEROW_WAIT_INSTEAD_OF_409_CONFLICT_ERROR:
  BASEROW_DISABLE_MODEL_CACHE:
  BASEROW_PLUGIN_DIR:
  BASEROW_JOB_EXPIRATION_TIME_LIMIT:
  BASEROW_JOB_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ROW_HISTORY_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ROW_HISTORY_RETENTION_DAYS:
  BASEROW_USER_LOG_ENTRY_CLEANUP_INTERVAL_MINUTES:
  BASEROW_USER_LOG_ENTRY_RETENTION_DAYS:
  BASEROW_MAX_ROW_REPORT_ERROR_COUNT:
  BASEROW_JOB_SOFT_TIME_LIMIT:
  BASEROW_FRONTEND_JOBS_POLLING_TIMEOUT_MS:
  BASEROW_INITIAL_CREATE_SYNC_TABLE_DATA_LIMIT:
  BASEROW_MAX_SNAPSHOTS_PER_GROUP:
  BASEROW_SNAPSHOT_EXPIRATION_TIME_DAYS:
  BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS:
  BASEROW_WEBHOOKS_IP_BLACKLIST:
  BASEROW_WEBHOOKS_IP_WHITELIST:
  BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST:
  BASEROW_WEBHOOKS_URL_CHECK_TIMEOUT_SECS:
  BASEROW_WEBHOOKS_MAX_CONSECUTIVE_TRIGGER_FAILURES:
  BASEROW_WEBHOOKS_MAX_RETRIES_PER_CALL:
  BASEROW_WEBHOOKS_MAX_PER_TABLE:
  BASEROW_WEBHOOKS_MAX_CALL_LOG_ENTRIES:
  BASEROW_WEBHOOKS_REQUEST_TIMEOUT_SECONDS:
  BASEROW_ENTERPRISE_AUDIT_LOG_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ENTERPRISE_AUDIT_LOG_RETENTION_DAYS:
  BASEROW_ALLOW_MULTIPLE_SSO_PROVIDERS_FOR_SAME_ACCOUNT:
  BASEROW_STORAGE_USAGE_JOB_CRONTAB:
  BASEROW_SEAT_USAGE_JOB_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_TIMEOUT_MINUTES:
  BASEROW_PERIODIC_FIELD_UPDATE_QUEUE_NAME:
  BASEROW_MAX_CONCURRENT_USER_REQUESTS:
  BASEROW_CONCURRENT_USER_REQUESTS_THROTTLE_TIMEOUT:
  BASEROW_OSS_ONLY:
  OTEL_TRACES_SAMPLER:
  OTEL_TRACES_SAMPLER_ARG:
  OTEL_PER_MODULE_SAMPLER_OVERRIDES:
  BASEROW_CACHALOT_ENABLED:
  BASEROW_CACHALOT_MODE:
  BASEROW_CACHALOT_ONLY_CACHABLE_TABLES:
  BASEROW_CACHALOT_UNCACHABLE_TABLES:
  BASEROW_CACHALOT_TIMEOUT:
  BASEROW_AUTO_INDEX_VIEW_ENABLED:
  BASEROW_PERSONAL_VIEW_LOWEST_ROLE_ALLOWED:
  BASEROW_DISABLE_LOCKED_MIGRATIONS:
  BASEROW_USE_PG_FULLTEXT_SEARCH:
  BASEROW_AUTO_VACUUM:
  BASEROW_BUILDER_DOMAINS:
  SENTRY_DSN:
  SENTRY_BACKEND_DSN:


services:
  # A caddy reverse proxy sitting in-front of all the services. Responsible for routing
  # requests to either the backend or web-frontend and also serving user uploaded files
  # from the media volume.
  caddy:
    image: caddy:2
    restart: unless-stopped
    environment:
      # Controls what port the Caddy server binds to inside its container.
      BASEROW_CADDY_ADDRESSES: ${BASEROW_CADDY_ADDRESSES:-:80}
      PRIVATE_WEB_FRONTEND_URL: ${PRIVATE_WEB_FRONTEND_URL:-http://web-frontend:3000}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
    ports:
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_PORT:-80}:80"
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_SSL_PORT:-443}:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - media:/baserow/media
      - caddy_config:/config
      - caddy_data:/data
    networks:
      local:

  backend:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    environment:
      <<: *backend-variables
    depends_on:
      - db
      - redis
    volumes:
      - media:/baserow/media
    networks:
      local:

  web-frontend:
    image: baserow/web-frontend:1.24.2
    restart: unless-stopped
    environment:
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
      PUBLIC_BACKEND_URL:
      PUBLIC_WEB_FRONTEND_URL:
      BASEROW_EMBEDDED_SHARE_URL:
      BASEROW_DISABLE_PUBLIC_URL_CHECK:
      INITIAL_TABLE_DATA_LIMIT:
      DOWNLOAD_FILE_VIA_XHR:
      BASEROW_DISABLE_GOOGLE_DOCS_FILE_PREVIEW:
      HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
      DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
      FEATURE_FLAGS:
      ADDITIONAL_MODULES:
      BASEROW_MAX_IMPORT_FILE_SIZE_MB:
      BASEROW_MAX_SNAPSHOTS_PER_GROUP:
      BASEROW_ENABLE_OTEL:
      BASEROW_DEPLOYMENT_ENV:
      BASEROW_OSS_ONLY:
      BASEROW_USE_PG_FULLTEXT_SEARCH:
      POSTHOG_PROJECT_API_KEY:
      POSTHOG_HOST:
      BASEROW_UNIQUE_ROW_VALUES_SIZE_LIMIT:
      BASEROW_ROW_PAGE_SIZE_LIMIT:
      BASEROW_BUILDER_DOMAINS:
      BASEROW_FRONTEND_SAME_SITE_COOKIE:
      SENTRY_DSN:
    depends_on:
      - backend
    networks:
      local:

  celery:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    environment:
      <<: *backend-variables
    command: celery-worker
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-export-worker:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    command: celery-exportworker
    environment:
      <<: *backend-variables
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-beat-worker:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    command: celery-beat
    environment:
      <<: *backend-variables
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  db:
    image: postgres:11
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      local:
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:6
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
    networks:
      local:

  # By default, the media volume will be owned by root on startup. Ensure it is owned by
  # the same user that django is running as, so it can write user files.
  volume-permissions-fixer:
    image: bash:4.4
    command: chown 9999:9999 -R /baserow/media
    volumes:
      - media:/baserow/media
    networks:
      local:

volumes:
  pgdata:
  media:
  caddy_data:
  caddy_config:

networks:
  local:
    driver: bridge

New docker-compose.yml

version: "3.4"
########################################################################################
#
# READ ME FIRST!
#
# Use the much simpler `docker-compose.all-in-one.yml` instead of this file
# unless you are an advanced user!
#
# This compose file runs every service separately using a Caddy reverse proxy to route
# requests to the backend or web-frontend, handle websockets and to serve uploaded files
# . Even if you have your own http proxy we recommend to simply forward requests to it
# as it is already properly configured for Baserow.
#
# If you wish to continue with this more advanced compose file, it is recommended that
# you set environment variables using the .env.example file by:
# 1. `cp .env.example .env`
# 2. Edit .env and fill in the first 3 variables.
# 3. Set further environment variables as you wish.
#
# More documentation can be found in:
# https://baserow.io/docs/installation%2Finstall-with-docker-compose
#
########################################################################################

# See https://baserow.io/docs/installation%2Fconfiguration for more details on these
# backend environment variables, their defaults if left blank etc.
x-backend-variables: &backend-variables
  # Most users should only need to set these first four variables.
  SECRET_KEY: ${SECRET_KEY:?}
  BASEROW_JWT_SIGNING_KEY: ${BASEROW_JWT_SIGNING_KEY:-}
  DATABASE_PASSWORD: ${DATABASE_PASSWORD:?}
  REDIS_PASSWORD: ${REDIS_PASSWORD:?}
  # If you manually change this line make sure you also change the duplicate line in
  # the web-frontend service.
  BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}

  # Set these if you want to use an external postgres instead of the db service below.
  DATABASE_USER: ${DATABASE_USER:-baserow}
  DATABASE_NAME: ${DATABASE_NAME:-baserow}
  DATABASE_HOST:
  DATABASE_PORT:
  DATABASE_OPTIONS:
  DATABASE_URL:

  # Set these if you want to use an external redis instead of the redis service below.
  REDIS_HOST:
  REDIS_PORT:
  REDIS_PROTOCOL:
  REDIS_URL:
  REDIS_USER:

  # Set these to enable Baserow to send emails.
  EMAIL_SMTP:
  EMAIL_SMTP_HOST:
  EMAIL_SMTP_PORT:
  EMAIL_SMTP_USE_TLS:
  EMAIL_SMTP_USE_SSL:
  EMAIL_SMTP_USER:
  EMAIL_SMTP_PASSWORD:
  EMAIL_SMTP_SSL_CERTFILE_PATH:
  EMAIL_SMTP_SSL_KEYFILE_PATH:
  FROM_EMAIL:

  # Set these to use AWS S3 bucket to store user files.
  AWS_ACCESS_KEY_ID:
  AWS_SECRET_ACCESS_KEY:
  AWS_STORAGE_BUCKET_NAME:
  AWS_S3_REGION_NAME:
  AWS_S3_ENDPOINT_URL:
  AWS_S3_CUSTOM_DOMAIN:

  # Misc settings see https://baserow.io/docs/installation%2Fconfiguration for info
  BASEROW_AMOUNT_OF_WORKERS:
  BASEROW_ROW_PAGE_SIZE_LIMIT:
  BATCH_ROWS_SIZE_LIMIT:
  INITIAL_TABLE_DATA_LIMIT:
  BASEROW_FILE_UPLOAD_SIZE_LIMIT_MB:
  BASEROW_UNIQUE_ROW_VALUES_SIZE_LIMIT:

  BASEROW_EXTRA_ALLOWED_HOSTS:
  ADDITIONAL_APPS:
  BASEROW_PLUGIN_GIT_REPOS:
  BASEROW_PLUGIN_URLS:

  BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER:
  MIGRATE_ON_STARTUP: ${MIGRATE_ON_STARTUP:-true}
  SYNC_TEMPLATES_ON_STARTUP: ${SYNC_TEMPLATES_ON_STARTUP:-true}
  DONT_UPDATE_FORMULAS_AFTER_MIGRATION:
  BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION:
  BASEROW_SYNC_TEMPLATES_TIME_LIMIT:

  BASEROW_BACKEND_DEBUG:
  BASEROW_BACKEND_LOG_LEVEL:
  FEATURE_FLAGS:
  BASEROW_ENABLE_OTEL:
  BASEROW_DEPLOYMENT_ENV:
  OTEL_EXPORTER_OTLP_ENDPOINT:
  OTEL_RESOURCE_ATTRIBUTES:
  POSTHOG_PROJECT_API_KEY:
  POSTHOG_HOST:

  PRIVATE_BACKEND_URL: http://backend:8000
  PUBLIC_BACKEND_URL:
  PUBLIC_WEB_FRONTEND_URL:
  BASEROW_EMBEDDED_SHARE_URL:
  MEDIA_URL:
  MEDIA_ROOT:

  BASEROW_AIRTABLE_IMPORT_SOFT_TIME_LIMIT:
  HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
  OLD_ACTION_CLEANUP_INTERVAL_MINUTES:
  MINUTES_UNTIL_ACTION_CLEANED_UP:
  BASEROW_GROUP_STORAGE_USAGE_QUEUE:
  DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
  BASEROW_WAIT_INSTEAD_OF_409_CONFLICT_ERROR:
  BASEROW_DISABLE_MODEL_CACHE:
  BASEROW_PLUGIN_DIR:
  BASEROW_JOB_EXPIRATION_TIME_LIMIT:
  BASEROW_JOB_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ROW_HISTORY_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ROW_HISTORY_RETENTION_DAYS:
  BASEROW_USER_LOG_ENTRY_CLEANUP_INTERVAL_MINUTES:
  BASEROW_USER_LOG_ENTRY_RETENTION_DAYS:
  BASEROW_MAX_ROW_REPORT_ERROR_COUNT:
  BASEROW_JOB_SOFT_TIME_LIMIT:
  BASEROW_FRONTEND_JOBS_POLLING_TIMEOUT_MS:
  BASEROW_INITIAL_CREATE_SYNC_TABLE_DATA_LIMIT:
  BASEROW_MAX_SNAPSHOTS_PER_GROUP:
  BASEROW_SNAPSHOT_EXPIRATION_TIME_DAYS:
  BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS:
  BASEROW_WEBHOOKS_IP_BLACKLIST:
  BASEROW_WEBHOOKS_IP_WHITELIST:
  BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST:
  BASEROW_WEBHOOKS_URL_CHECK_TIMEOUT_SECS:
  BASEROW_WEBHOOKS_MAX_CONSECUTIVE_TRIGGER_FAILURES:
  BASEROW_WEBHOOKS_MAX_RETRIES_PER_CALL:
  BASEROW_WEBHOOKS_MAX_PER_TABLE:
  BASEROW_WEBHOOKS_MAX_CALL_LOG_ENTRIES:
  BASEROW_WEBHOOKS_REQUEST_TIMEOUT_SECONDS:
  BASEROW_ENTERPRISE_AUDIT_LOG_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ENTERPRISE_AUDIT_LOG_RETENTION_DAYS:
  BASEROW_ALLOW_MULTIPLE_SSO_PROVIDERS_FOR_SAME_ACCOUNT:
  BASEROW_STORAGE_USAGE_JOB_CRONTAB:
  BASEROW_SEAT_USAGE_JOB_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_TIMEOUT_MINUTES:
  BASEROW_PERIODIC_FIELD_UPDATE_QUEUE_NAME:
  BASEROW_MAX_CONCURRENT_USER_REQUESTS:
  BASEROW_CONCURRENT_USER_REQUESTS_THROTTLE_TIMEOUT:
  BASEROW_OSS_ONLY:
  OTEL_TRACES_SAMPLER:
  OTEL_TRACES_SAMPLER_ARG:
  OTEL_PER_MODULE_SAMPLER_OVERRIDES:
  BASEROW_CACHALOT_ENABLED:
  BASEROW_CACHALOT_MODE:
  BASEROW_CACHALOT_ONLY_CACHABLE_TABLES:
  BASEROW_CACHALOT_UNCACHABLE_TABLES:
  BASEROW_CACHALOT_TIMEOUT:
  BASEROW_AUTO_INDEX_VIEW_ENABLED:
  BASEROW_PERSONAL_VIEW_LOWEST_ROLE_ALLOWED:
  BASEROW_DISABLE_LOCKED_MIGRATIONS:
  BASEROW_USE_PG_FULLTEXT_SEARCH:
  BASEROW_AUTO_VACUUM:
  BASEROW_BUILDER_DOMAINS:
  SENTRY_DSN:
  SENTRY_BACKEND_DSN:
  BASEROW_OPENAI_API_KEY:
  BASEROW_OPENAI_ORGANIZATION:
  BASEROW_OPENAI_MODELS:
  BASEROW_OLLAMA_HOST:
  BASEROW_OLLAMA_MODELS:


services:
  # A Caddy reverse proxy sitting in front of all the services. Responsible for
  # routing requests to either the backend or web-frontend, and also for serving
  # user-uploaded files from the media volume.
  caddy:
    image: caddy:2
    restart: unless-stopped
    environment:
      # Controls what port the Caddy server binds to inside its container.
      BASEROW_CADDY_ADDRESSES: ${BASEROW_CADDY_ADDRESSES:-:80}
      PRIVATE_WEB_FRONTEND_URL: ${PRIVATE_WEB_FRONTEND_URL:-http://web-frontend:3000}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL:-}
    ports:
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_PORT:-80}:80"
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_SSL_PORT:-443}:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - media:/baserow/media
      - caddy_config:/config
      - caddy_data:/data
    networks:
      local:
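  # To serve over HTTPS with automatic certificates, Caddy can be given a
  # domain instead of a bare port. A sketch, assuming DNS for the domain
  # points at this host and ports 80/443 are reachable (baserow.example.com
  # is a placeholder):
  #   BASEROW_CADDY_ADDRESSES=https://baserow.example.com
  #   BASEROW_PUBLIC_URL=https://baserow.example.com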

  backend:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    environment:
      <<: *backend-variables
    depends_on:
      - db
      - redis
    volumes:
      - media:/baserow/media
    networks:
      local:

  web-frontend:
    image: baserow/web-frontend:1.24.2
    restart: unless-stopped
    environment:
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL:-http://localhost}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
      PUBLIC_BACKEND_URL:
      PUBLIC_WEB_FRONTEND_URL:
      BASEROW_EMBEDDED_SHARE_URL:
      BASEROW_DISABLE_PUBLIC_URL_CHECK:
      INITIAL_TABLE_DATA_LIMIT:
      DOWNLOAD_FILE_VIA_XHR:
      BASEROW_DISABLE_GOOGLE_DOCS_FILE_PREVIEW:
      HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
      DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
      FEATURE_FLAGS:
      ADDITIONAL_MODULES:
      BASEROW_MAX_IMPORT_FILE_SIZE_MB:
      BASEROW_MAX_SNAPSHOTS_PER_GROUP:
      BASEROW_ENABLE_OTEL:
      BASEROW_DEPLOYMENT_ENV:
      BASEROW_OSS_ONLY:
      BASEROW_USE_PG_FULLTEXT_SEARCH:
      POSTHOG_PROJECT_API_KEY:
      POSTHOG_HOST:
      BASEROW_UNIQUE_ROW_VALUES_SIZE_LIMIT:
      BASEROW_ROW_PAGE_SIZE_LIMIT:
      BASEROW_BUILDER_DOMAINS:
      BASEROW_FRONTEND_SAME_SITE_COOKIE:
      SENTRY_DSN:
    depends_on:
      - backend
    networks:
      local:

  celery:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    environment:
      <<: *backend-variables
    command: celery-worker
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the Celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-export-worker:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    command: celery-exportworker
    environment:
      <<: *backend-variables
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the Celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-beat-worker:
    image: baserow/backend:1.24.2
    restart: unless-stopped
    command: celery-beat
    environment:
      <<: *backend-variables
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  db:
    # The pgautoupgrade image below automatically upgrades an existing data
    # directory from an older PostgreSQL version (for example 11) to 15 in place.
    # Once the upgrade has completed, you can switch back to the plain postgres:15
    # image by swapping the comments on the two image lines below. See:
    # https://baserow.io/docs/installation%2Finstall-with-docker#upgrading-postgresql-database-from-a-previous-version
    # for more information.
    #image: postgres:15
    image: pgautoupgrade/pgautoupgrade:15-alpine3.8
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      local:
    volumes:
      - pgdata:/var/lib/postgresql/data
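  # To take a manual SQL-level backup of this database (useful when migrating
  # to another instance), pg_dump can be run inside the service. A sketch,
  # using the default user/database names from above; run it from the
  # directory containing this file:
  #   docker compose exec -T db pg_dump -U baserow baserow > baserow_dump.sql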

  redis:
    image: redis:6
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
    networks:
      local:

  # By default, the media volume will be owned by root on startup. Ensure it is
  # owned by the same user that Django runs as, so it can write user files.
  volume-permissions-fixer:
    image: bash:4.4
    command: chown 9999:9999 -R /baserow/media
    volumes:
      - media:/baserow/media
    networks:
      local:

volumes:
  pgdata:
  media:
  caddy_data:
  caddy_config:

networks:
  local:
    driver: bridge
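For reference, the counterpart of the `backup` command when moving to the all-in-one image is `restore`. A sketch, assuming a fresh `new_baserow_data` volume for the new instance and the `backup.tar.gz` file in the current directory; the `backend-cmd-with-db restore` invocation is from memory of the Baserow backup docs, so please verify it against the documentation for your exact version before relying on it:

```
docker run -it \
  --rm \
  -v new_baserow_data:/baserow/data \
  -v $PWD:/baserow/host \
  baserow/baserow:1.24.2 \
  backend-cmd-with-db restore -f /baserow/host/backup.tar.gz
```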