Logs for Baserow Docker image on DigitalOcean

Hello,
Generally, I can figure out how to find things in most applications, but I'm not a Python expert. I installed the Baserow image from Docker Hub on a DigitalOcean droplet. The page does not load after running this Docker command:
docker run -d --name baserow -e BASEROW_PUBLIC_URL=http://<dropletIpAddress> -v baserow_data:/baserow/data -p 80:80 -p 443:443 --restart unless-stopped baserow/baserow:1.9.1

Where can I go to see any errors?

Dale

Hey @dmccrory, just to check: did you deploy using DigitalOcean's new Apps feature?


Nigel,
Good question…
I did not know the Apps feature had Baserow yet, so what I did was start a new droplet with Docker and use the new Docker image per the Docker Hub instructions. Having the domain automatically assigned with the Apps feature is much nicer, though.

So I'm going down that path now, but still have issues. When I first tried to start the app, it failed until I set the environment variable DISABLE_VOLUME_CHECK=yes.

On the second build attempt, Baserow started but returned this error:

[2022-03-04 14:04:37] [STARTUP][2022-03-04 14:04:37] Running first time setup of embedded baserow database.
[2022-03-04 14:04:41] [POSTGRES_INIT][2022-03-04 14:04:41] [POSTGRES_INIT][2022-03-04 14:04:41] Error: could not start session:
[2022-03-04 14:04:41] [POSTGRES_INIT][2022-03-04 14:04:41] [POSTGRES_INIT][2022-03-04 14:04:41] Error: /usr/lib/postgresql/11/bin/pg_ctl /usr/lib/postgresql/11/bin/pg_ctl start -D /baserow/data/postgres -l /var/log/postgresql/postgresql-11-main.log -w -o -c listen_addresses='' -s -o -c config_file="/etc/postgresql/11/main/postgresql.conf" exited with status 1:

What are the next steps to get it running via the Apps feature? I’m using default settings for everything.

I also tried using the DigitalOcean Apps feature and encountered the same error you did. I believe it is due to the ephemeral file system that DigitalOcean Apps use. It might work if you figure out how to attach a database using the "attach database" feature provided by DigitalOcean.

However, I was going to recommend using a droplet instead because of these issues with the Apps system :stuck_out_tongue:

If you switch back to trying a droplet, could you let me know the droplet type and provide the logs from the Baserow container you run? The command in your initial post looks correct. The logs can be obtained by running docker logs baserow. Also, Baserow takes a few minutes to perform its initial setup, so you will only be able to access it after it logs something along the lines of "Baserow is ready at xxx".
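For reference, fetching the logs looks like this (a sketch; the container name baserow comes from the --name flag in the docker run command in the first post):

```shell
# Show the most recent log lines with timestamps; drop --tail to see
# everything since the container started. The trailing "|| true" just
# keeps the snippet harmless on a machine without that container.
docker logs --tail 200 -t baserow || true

# Capture the complete logs (stdout and stderr) into a file that can
# be attached to a forum post.
docker logs baserow > baserow-logs.txt 2>&1 || true
```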

I used the Docker droplet approach again (now looking at the logs).

Here is the error:

[WEBFRONTEND][2022-03-04 17:00:20] [WEBFRONTEND][2022-03-04 17:00:20]
[WEBFRONTEND][2022-03-04 17:00:20] [WEBFRONTEND][2022-03-04 17:00:20] ERROR connect ECONNREFUSED 127.0.0.1:8000
[WEBFRONTEND][2022-03-04 17:00:20] [WEBFRONTEND][2022-03-04 17:00:20]
[WEBFRONTEND][2022-03-04 17:00:20] [WEBFRONTEND][2022-03-04 17:00:20] at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
[WEBFRONTEND][2022-03-04 17:00:20] [WEBFRONTEND][2022-03-04 17:00:20]

Any idea what this means?

Can you provide the entire logs? It looks like the internal web-frontend service is failing to connect to the internal backend service, so we need to look at the log lines starting with BACKEND to see if it is working. Can you also let me know your Docker version?
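Since every log line carries a [SERVICE] prefix, the backend lines can be isolated with grep. A sketch, demonstrated here on two sample lines; against the live container the pipeline would be docker logs baserow 2>&1 | grep '\[BACKEND\]':

```shell
# The sample printf stands in for `docker logs baserow 2>&1`;
# grep then keeps only the backend service's lines.
printf '%s\n' \
  '[BACKEND][2022-03-04 17:00:19] PostgreSQL is available' \
  '[WEBFRONTEND][2022-03-04 17:00:20] ERROR connect ECONNREFUSED 127.0.0.1:8000' \
  | grep '\[BACKEND\]'
```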

The Docker version is:

Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:56:38 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:50 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Regarding logs: Is this what you are asking for?

=========================================================================================

██████╗  █████╗ ███████╗███████╗██████╗  ██████╗ ██╗    ██╗
██╔══██╗██╔══██╗██╔════╝██╔════╝██╔══██╗██╔═══██╗██║    ██║
██████╔╝███████║███████╗█████╗  ██████╔╝██║   ██║██║ █╗ ██║
██╔══██╗██╔══██║╚════██║██╔══╝  ██╔══██╗██║   ██║██║███╗██║
██████╔╝██║  ██║███████║███████╗██║  ██║╚██████╔╝╚███╔███╔╝
╚═════╝ ╚═╝  ╚═╝╚══════╝╚══════╝╚═╝  ╚═╝ ╚═════╝  ╚══╝╚══╝

Version 1.9.1

=========================================================================================
Welcome to Baserow. See https://baserow.io/installation/install-with-docker/ for detailed instructions on 
how to use this Docker image.
[STARTUP][2022-03-04 20:29:26] Running first time setup of embedded baserow database.
[POSTGRES_INIT][2022-03-04 20:29:26] 
[POSTGRES_INIT][2022-03-04 20:29:26] PostgreSQL Database directory appears to contain a database; Skipping initialization
[POSTGRES_INIT][2022-03-04 20:29:26] 
[STARTUP][2022-03-04 20:29:26] Starting Baserow using addresses http://164.92.70.57, if any are https automatically Caddy will attempt to setup HTTPS automatically.
[STARTUP][2022-03-04 20:29:27] Starting all Baserow processes:
2022-03-04 20:29:27,262 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-03-04 20:29:27,262 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-postgres.conf" during parsing
2022-03-04 20:29:27,262 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-redis.conf" during parsing
2022-03-04 20:29:27,265 INFO supervisord started with pid 1
2022-03-04 20:29:28,268 INFO spawned: 'processes' with pid 276
2022-03-04 20:29:28,271 INFO spawned: 'postgresql' with pid 277
2022-03-04 20:29:28,277 INFO spawned: 'baserow-watcher' with pid 278
2022-03-04 20:29:28,291 INFO spawned: 'redis' with pid 279
2022-03-04 20:29:28,296 INFO spawned: 'caddy' with pid 280
2022-03-04 20:29:28,313 INFO spawned: 'celeryworker' with pid 283
2022-03-04 20:29:28,321 INFO spawned: 'exportworker' with pid 286
2022-03-04 20:29:28,344 INFO spawned: 'backend' with pid 294
2022-03-04 20:29:28,378 INFO spawned: 'webfrontend' with pid 307
2022-03-04 20:29:28,388 INFO spawned: 'beatworker' with pid 320
2022-03-04 20:29:28,411 INFO reaped unknown pid 264
2022-03-04 20:29:28,411 INFO reaped unknown pid 265
[REDIS][2022-03-04 20:29:28] 279:C 04 Mar 2022 20:29:28.584 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
[REDIS][2022-03-04 20:29:28] 279:C 04 Mar 2022 20:29:28.584 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=279, just started
[REDIS][2022-03-04 20:29:28] 279:C 04 Mar 2022 20:29:28.584 # Configuration loaded
[REDIS][2022-03-04 20:29:28] 279:M 04 Mar 2022 20:29:28.624 * Running mode=standalone, port=6379.
[REDIS][2022-03-04 20:29:28] 279:M 04 Mar 2022 20:29:28.625 # Server initialized
[REDIS][2022-03-04 20:29:28] 279:M 04 Mar 2022 20:29:28.626 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[REDIS][2022-03-04 20:29:28] 279:M 04 Mar 2022 20:29:28.628 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
[REDIS][2022-03-04 20:29:28] 279:M 04 Mar 2022 20:29:28.638 * Ready to accept connections
[BASEROW-WATCHER][2022-03-04 20:29:28] Waiting for Baserow to become available, this might take 30+ seconds...
[POSTGRES][2022-03-04 20:29:28] [POSTGRES][2022-03-04 20:29:28] 2022-03-04 20:29:28.830 UTC [277] LOG:  listening on IPv4 address "127.0.0.1", port 5432
[POSTGRES][2022-03-04 20:29:28] [POSTGRES][2022-03-04 20:29:28] 2022-03-04 20:29:28.838 UTC [277] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
[POSTGRES][2022-03-04 20:29:28] [POSTGRES][2022-03-04 20:29:28] 2022-03-04 20:29:28.838 UTC [277] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
[POSTGRES][2022-03-04 20:29:28] [POSTGRES][2022-03-04 20:29:28] 2022-03-04 20:29:28.859 UTC [277] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[BEAT_WORKER][2022-03-04 20:29:28] Sleeping for 15 before starting beat to prevent  startup errors.
[POSTGRES][2022-03-04 20:29:29] [POSTGRES][2022-03-04 20:29:29] 2022-03-04 20:29:29.027 UTC [544] LOG:  database system was shut down at 2022-03-04 20:29:20 UTC
[POSTGRES][2022-03-04 20:29:29] [POSTGRES][2022-03-04 20:29:29] 2022-03-04 20:29:29.044 UTC [277] LOG:  database system is ready to accept connections
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.1095378,"msg":"using provided configuration","config_file":"/baserow/caddy/Caddyfile","config_adapter":""}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"warn","ts":1646425769.1151128,"msg":"input is not formatted with 'caddy fmt'","adapter":"caddyfile","file":"/baserow/caddy/Caddyfile","line":2}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.124893,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.1308053,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.1349325,"msg":"autosaved config (load with --resume flag)","file":"/baserow/data/caddy/config/caddy/autosave.json"}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.1360736,"msg":"serving initial configuration"}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.1370065,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00027ed90"}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.1378958,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/baserow/data/caddy/data/caddy"}
[CADDY][2022-03-04 20:29:29] [CADDY][2022-03-04 20:29:29] {"level":"info","ts":1646425769.138933,"logger":"tls","msg":"finished cleaning storage units"}
[BACKEND][2022-03-04 20:29:29] PostgreSQL is available
[BACKEND][2022-03-04 20:29:29] python /baserow/backend/src/baserow/manage.py migrate
2022-03-04 20:29:29,405 INFO success: processes entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-03-04 20:29:29,405 INFO success: baserow-watcher entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[EXPORT_WORKER][2022-03-04 20:29:38]  
[EXPORT_WORKER][2022-03-04 20:29:38]  -------------- export-worker@a27a59737b2d v5.2.3 (dawn-chorus)
[EXPORT_WORKER][2022-03-04 20:29:38] --- ***** ----- 
[EXPORT_WORKER][2022-03-04 20:29:38] -- ******* ---- Linux-5.4.0-77-generic-x86_64-with-debian-10.11 2022-03-04 20:29:38
[EXPORT_WORKER][2022-03-04 20:29:38] - *** --- * --- 
[EXPORT_WORKER][2022-03-04 20:29:38] - ** ---------- [config]
[EXPORT_WORKER][2022-03-04 20:29:38] - ** ---------- .> app:         baserow:0x7fe123961cc0
[EXPORT_WORKER][2022-03-04 20:29:38] - ** ---------- .> transport:   redis://:**@localhost:6379/0
[EXPORT_WORKER][2022-03-04 20:29:38] - ** ---------- .> results:     disabled://
[EXPORT_WORKER][2022-03-04 20:29:38] - *** --- * --- .> concurrency: 1 (prefork)
[EXPORT_WORKER][2022-03-04 20:29:38] -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
[EXPORT_WORKER][2022-03-04 20:29:38] --- ***** ----- 
[EXPORT_WORKER][2022-03-04 20:29:38]  -------------- [queues]
[EXPORT_WORKER][2022-03-04 20:29:38]                 .> export           exchange=export(direct) key=export
[EXPORT_WORKER][2022-03-04 20:29:38]                 
[EXPORT_WORKER][2022-03-04 20:29:38] 
[EXPORT_WORKER][2022-03-04 20:29:38] [tasks]
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.airtable.tasks.run_import_from_airtable
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.export.tasks.clean_up_old_jobs
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.export.tasks.run_export_job
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.webhooks.tasks.call_webhook
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.core.trash.tasks.permanently_delete_marked_trash
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.ws.tasks.broadcast_to_channel_group
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.ws.tasks.broadcast_to_group
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow.ws.tasks.broadcast_to_users
[EXPORT_WORKER][2022-03-04 20:29:38]   . baserow_premium.license.tasks.license_check
[EXPORT_WORKER][2022-03-04 20:29:38]   . djcelery_email_send_multiple
[EXPORT_WORKER][2022-03-04 20:29:38] 
[CELERY_WORKER][2022-03-04 20:29:38]  
[CELERY_WORKER][2022-03-04 20:29:38]  -------------- default-worker@a27a59737b2d v5.2.3 (dawn-chorus)
[CELERY_WORKER][2022-03-04 20:29:38] --- ***** ----- 
[CELERY_WORKER][2022-03-04 20:29:38] -- ******* ---- Linux-5.4.0-77-generic-x86_64-with-debian-10.11 2022-03-04 20:29:38
[CELERY_WORKER][2022-03-04 20:29:38] - *** --- * --- 
[CELERY_WORKER][2022-03-04 20:29:38] - ** ---------- [config]
[CELERY_WORKER][2022-03-04 20:29:38] - ** ---------- .> app:         baserow:0x7fd406014be0
[CELERY_WORKER][2022-03-04 20:29:38] - ** ---------- .> transport:   redis://:**@localhost:6379/0
[CELERY_WORKER][2022-03-04 20:29:38] - ** ---------- .> results:     disabled://
[CELERY_WORKER][2022-03-04 20:29:38] - *** --- * --- .> concurrency: 1 (prefork)
[CELERY_WORKER][2022-03-04 20:29:38] -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
[CELERY_WORKER][2022-03-04 20:29:38] --- ***** ----- 
[CELERY_WORKER][2022-03-04 20:29:38]  -------------- [queues]
[CELERY_WORKER][2022-03-04 20:29:38]                 .> celery           exchange=celery(direct) key=celery
[CELERY_WORKER][2022-03-04 20:29:38]                 
[CELERY_WORKER][2022-03-04 20:29:38] 
[CELERY_WORKER][2022-03-04 20:29:38] [tasks]
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.airtable.tasks.run_import_from_airtable
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.export.tasks.clean_up_old_jobs
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.export.tasks.run_export_job
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.contrib.database.webhooks.tasks.call_webhook
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.core.trash.tasks.permanently_delete_marked_trash
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.ws.tasks.broadcast_to_channel_group
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.ws.tasks.broadcast_to_group
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow.ws.tasks.broadcast_to_users
[CELERY_WORKER][2022-03-04 20:29:38]   . baserow_premium.license.tasks.license_check
[CELERY_WORKER][2022-03-04 20:29:38]   . djcelery_email_send_multiple
[CELERY_WORKER][2022-03-04 20:29:38] 
[CELERY_WORKER][2022-03-04 20:29:38] [CELERY_WORKER][2022-03-04 20:29:38] [2022-03-04 20:29:38,555: INFO/MainProcess] Connected to redis://:**@localhost:6379/0
[EXPORT_WORKER][2022-03-04 20:29:38] [EXPORT_WORKER][2022-03-04 20:29:38] [2022-03-04 20:29:38,562: INFO/MainProcess] Connected to redis://:**@localhost:6379/0
[EXPORT_WORKER][2022-03-04 20:29:38] [EXPORT_WORKER][2022-03-04 20:29:38] [2022-03-04 20:29:38,605: INFO/MainProcess] mingle: searching for neighbors
[CELERY_WORKER][2022-03-04 20:29:38] [CELERY_WORKER][2022-03-04 20:29:38] [2022-03-04 20:29:38,622: INFO/MainProcess] mingle: searching for neighbors
[BACKEND][2022-03-04 20:29:38] Operations to perform:
[BACKEND][2022-03-04 20:29:38]   Apply all migrations: auth, baserow_premium, contenttypes, core, database, db, sessions
[BASEROW-WATCHER][2022-03-04 20:29:38] Waiting for Baserow to become available, this might take 30+ seconds...
[BACKEND][2022-03-04 20:29:38] Running migrations:
[BACKEND][2022-03-04 20:29:39]   No migrations to apply.
[BACKEND][2022-03-04 20:29:39] Clearing Baserow's internal generated model cache...
[BACKEND][2022-03-04 20:29:39] Done clearing cache.
[EXPORT_WORKER][2022-03-04 20:29:39] [EXPORT_WORKER][2022-03-04 20:29:39] [2022-03-04 20:29:39,676: INFO/MainProcess] mingle: all alone
[CELERY_WORKER][2022-03-04 20:29:39] [CELERY_WORKER][2022-03-04 20:29:39] [2022-03-04 20:29:39,685: INFO/MainProcess] mingle: all alone
[CELERY_WORKER][2022-03-04 20:29:39] [CELERY_WORKER][2022-03-04 20:29:39] [2022-03-04 20:29:39,743: INFO/MainProcess] default-worker@a27a59737b2d ready.
[EXPORT_WORKER][2022-03-04 20:29:39] [EXPORT_WORKER][2022-03-04 20:29:39] [2022-03-04 20:29:39,745: INFO/MainProcess] export-worker@a27a59737b2d ready.
[BACKEND][2022-03-04 20:29:40] python /baserow/backend/src/baserow/manage.py sync_templates
[WEBFRONTEND][2022-03-04 20:29:42] ℹ Listening on: http://localhost:3000/
[BEAT_WORKER][2022-03-04 20:29:45] celery beat v5.2.3 (dawn-chorus) is starting.
[BEAT_WORKER][2022-03-04 20:29:49] __    -    ... __   -        _
[BEAT_WORKER][2022-03-04 20:29:49] LocalTime -> 2022-03-04 20:29:49
[BEAT_WORKER][2022-03-04 20:29:49] Configuration ->
[BEAT_WORKER][2022-03-04 20:29:49]     . broker -> redis://:**@localhost:6379/0
[BEAT_WORKER][2022-03-04 20:29:49]     . loader -> celery.loaders.app.AppLoader
[BEAT_WORKER][2022-03-04 20:29:49]     . scheduler -> redbeat.schedulers.RedBeatScheduler
[BEAT_WORKER][2022-03-04 20:29:49]        . redis -> redis://:**@localhost:6379/0
[BEAT_WORKER][2022-03-04 20:29:49]        . lock -> `redbeat::lock` 1.33 minutes (80s)
[BEAT_WORKER][2022-03-04 20:29:49]     . logfile -> [stderr]@%INFO
[BEAT_WORKER][2022-03-04 20:29:49]     . maxinterval -> 20.00 seconds (20s)
[BEAT_WORKER][2022-03-04 20:29:49] [BEAT_WORKER][2022-03-04 20:29:49] [2022-03-04 20:29:49,116: INFO/MainProcess] beat: Starting...
[BASEROW-WATCHER][2022-03-04 20:29:50] Waiting for Baserow to become available, this might take 30+ seconds...
2022-03-04 20:29:59,216 INFO success: postgresql entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-03-04 20:29:59,225 INFO success: redis entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-03-04 20:29:59,226 INFO success: caddy entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-03-04 20:29:59,226 INFO success: celeryworker entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-03-04 20:29:59,226 INFO success: exportworker entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-03-04 20:29:59,226 INFO success: backend entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-03-04 20:29:59,226 INFO success: webfrontend entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
[BASEROW-WATCHER][2022-03-04 20:30:00] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:30:11] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:30:22] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:30:37] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:30:50] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:31:07] Waiting for Baserow to become available, this might take 30+ seconds...
2022-03-04 20:31:08,781 INFO success: beatworker entered RUNNING state, process has stayed up for > than 100 seconds (startsecs)
[BASEROW-WATCHER][2022-03-04 20:31:26] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:31:38] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:31:52] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:32:05] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:32:20] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:32:35] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:32:56] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:33:16] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:33:31] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:33:51] Waiting for Baserow to become available, this might take 30+ seconds...
[CELERY_WORKER][2022-03-04 20:34:11] [CELERY_WORKER][2022-03-04 20:34:11] [2022-03-04 20:34:09,915: INFO/MainProcess] missed heartbeat from export-worker@a27a59737b2d
[EXPORT_WORKER][2022-03-04 20:34:20] [EXPORT_WORKER][2022-03-04 20:34:20] [2022-03-04 20:34:18,592: INFO/MainProcess] missed heartbeat from default-worker@a27a59737b2d
[BASEROW-WATCHER][2022-03-04 20:34:14] Waiting for Baserow to become available, this might take 30+ seconds...
[CELERY_WORKER][2022-03-04 20:34:36] [CELERY_WORKER][2022-03-04 20:34:36] [2022-03-04 20:34:36,114: WARNING/MainProcess] Substantial drift from export-worker@a27a59737b2d may mean clocks are out of sync.  Current drift is 24 seconds.  [orig: 2022-03-04 20:34:35.805740 recv: 2022-03-04 20:34:11.069145]
[EXPORT_WORKER][2022-03-04 20:34:36] [EXPORT_WORKER][2022-03-04 20:34:36] [2022-03-04 20:34:36,114: WARNING/MainProcess] Substantial drift from default-worker@a27a59737b2d may mean clocks are out of sync.  Current drift is 23 seconds.  [orig: 2022-03-04 20:34:35.807450 recv: 2022-03-04 20:34:12.602686]
[CELERY_WORKER][2022-03-04 20:34:41] [CELERY_WORKER][2022-03-04 20:34:41] [2022-03-04 20:34:41,125: INFO/MainProcess] missed heartbeat from export-worker@a27a59737b2d
[BEAT_WORKER][2022-03-04 20:34:49] [BEAT_WORKER][2022-03-04 20:34:49] [2022-03-04 20:34:49,654: INFO/MainProcess] Scheduler: Sending due task baserow.contrib.database.export.tasks.clean_up_old_jobs() (baserow.contrib.database.export.tasks.clean_up_old_jobs)
[BEAT_WORKER][2022-03-04 20:34:50] [BEAT_WORKER][2022-03-04 20:34:50] [2022-03-04 20:34:50,460: INFO/MainProcess] Scheduler: Sending due task baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion() (baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion)
[EXPORT_WORKER][2022-03-04 20:34:50] [EXPORT_WORKER][2022-03-04 20:34:50] [2022-03-04 20:34:50,537: INFO/MainProcess] Task baserow.contrib.database.export.tasks.clean_up_old_jobs[f7afb57c-0130-4bc8-80fd-184bc31fbe5a] received
[BEAT_WORKER][2022-03-04 20:34:50] [BEAT_WORKER][2022-03-04 20:34:50] [2022-03-04 20:34:50,786: INFO/MainProcess] Scheduler: Sending due task baserow.core.trash.tasks.permanently_delete_marked_trash() (baserow.core.trash.tasks.permanently_delete_marked_trash)
[EXPORT_WORKER][2022-03-04 20:34:51] [EXPORT_WORKER][2022-03-04 20:34:51] [2022-03-04 20:34:51,092: INFO/MainProcess] Task baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion[20d507f8-28d0-40e0-9381-9eb6b95cfd51] received
[EXPORT_WORKER][2022-03-04 20:34:51] [EXPORT_WORKER][2022-03-04 20:34:51] [2022-03-04 20:34:51,533: INFO/MainProcess] Task baserow.core.trash.tasks.permanently_delete_marked_trash[63ba1a7d-2ff3-482d-a8c8-793daddc0ebf] received
[EXPORT_WORKER][2022-03-04 20:34:52] [EXPORT_WORKER][2022-03-04 20:34:52] [2022-03-04 20:34:52,535: INFO/ForkPoolWorker-1] Cleaning up 0 old jobs
[EXPORT_WORKER][2022-03-04 20:34:53] [EXPORT_WORKER][2022-03-04 20:34:53] [2022-03-04 20:34:53,295: INFO/ForkPoolWorker-1] Task baserow.contrib.database.export.tasks.clean_up_old_jobs[f7afb57c-0130-4bc8-80fd-184bc31fbe5a] succeeded in 2.6223931289987377s: None
[EXPORT_WORKER][2022-03-04 20:34:55] [EXPORT_WORKER][2022-03-04 20:34:55] [2022-03-04 20:34:55,382: INFO/ForkPoolWorker-1] Successfully marked 0 old trash items for deletion as they were older than 72 hours.
[EXPORT_WORKER][2022-03-04 20:34:55] [EXPORT_WORKER][2022-03-04 20:34:55] [2022-03-04 20:34:55,518: INFO/ForkPoolWorker-1] Task baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion[20d507f8-28d0-40e0-9381-9eb6b95cfd51] succeeded in 1.6086453740008437s: None
[BASEROW-WATCHER][2022-03-04 20:34:56] Waiting for Baserow to become available, this might take 30+ seconds...
[EXPORT_WORKER][2022-03-04 20:34:57] [EXPORT_WORKER][2022-03-04 20:34:57] [2022-03-04 20:34:57,846: INFO/ForkPoolWorker-1] Successfully deleted 0 trash entries and their associated trashed items.
[EXPORT_WORKER][2022-03-04 20:34:58] [EXPORT_WORKER][2022-03-04 20:34:58] [2022-03-04 20:34:57,939: INFO/ForkPoolWorker-1] Task baserow.core.trash.tasks.permanently_delete_marked_trash[63ba1a7d-2ff3-482d-a8c8-793daddc0ebf] succeeded in 1.9594235759996081s: None
[BASEROW-WATCHER][2022-03-04 20:35:18] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:35:39] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:36:01] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:36:21] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:36:43] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:37:04] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:37:28] Waiting for Baserow to become available, this might take 30+ seconds...
[BASEROW-WATCHER][2022-03-04 20:37:48] Waiting for Baserow to become available, this might take 30+ seconds...
[POSTGRES][2022-03-04 20:38:01] [POSTGRES][2022-03-04 20:38:01] 2022-03-04 20:38:01.695 UTC [578] baserow@baserow LOG:  could not receive data from client: Connection reset by peer
[POSTGRES][2022-03-04 20:38:01] [POSTGRES][2022-03-04 20:38:01] 2022-03-04 20:38:01.698 UTC [578] baserow@baserow LOG:  unexpected EOF on client connection with an open transaction
[BACKEND][2022-03-04 20:38:01] [BACKEND][2022-03-04 20:38:01] /baserow/backend/docker/docker-entrypoint.sh: line 151:   571 Killed                  python /baserow/backend/src/baserow/manage.py sync_templates
2022-03-04 20:38:01,741 INFO exited: backend (exit status 137; not expected)
2022-03-04 20:38:01,742 INFO reaped unknown pid 489
2022-03-04 20:38:01,742 INFO reaped unknown pid 490
2022-03-04 20:38:01,790 INFO spawned: 'backend' with pid 802
Baserow was stopped or one of it's services crashed, see the logs above for more details. 
2022-03-04 20:38:01,888 WARN received SIGTERM indicating exit request
2022-03-04 20:38:01,889 INFO waiting for processes, postgresql, baserow-watcher, beatworker, redis, celeryworker, webfrontend, caddy, exportworker, backend to die
[BASEROW-WATCHER][2022-03-04 20:38:03] Waiting for Baserow to become available, this might take 30+ seconds...
2022-03-04 20:38:03,870 INFO stopped: beatworker (terminated by SIGQUIT (core dumped))
2022-03-04 20:38:03,870 INFO reaped unknown pid 526
2022-03-04 20:38:03,870 INFO reaped unknown pid 527
2022-03-04 20:38:03,938 INFO stopped: webfrontend (terminated by SIGTERM)
2022-03-04 20:38:03,939 INFO reaped unknown pid 513
2022-03-04 20:38:03,939 INFO reaped unknown pid 514

Hey @dmccrory, sorry for the delayed response.

Could you let me know the exact droplet type you are using to run Baserow? In the logs provided, did you exit the docker run command or run docker stop baserow around 20:38:01.695?

Could you also try running with the following flag set:

docker run -d --name baserow -e BASEROW_PUBLIC_URL=http://<dropletIpAddress> -e SYNC_TEMPLATES_ON_STARTUP=false -v baserow_data:/baserow/data -p 80:80 -p 443:443 --restart unless-stopped baserow/baserow:1.9.1
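For context, the "exit status 137" next to "Killed" in the logs above normally means the kernel's OOM killer stopped the sync_templates process, which is common on low-memory droplets (hence the flag above). A quick, hedged way to check memory on the droplet (assuming a standard Linux droplet; dmesg may need root):

```shell
# Show how much memory the droplet has; MemTotal and MemAvailable are
# the interesting lines (free -m gives a friendlier view if installed).
grep -E 'MemTotal|MemAvailable' /proc/meminfo

# Search the kernel log for OOM-killer activity; "|| true" keeps the
# check from failing when nothing matches or dmesg needs privileges.
dmesg 2>/dev/null | grep -iE 'out of memory|oom-kill|killed process' || true
```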

Hi @nigel, has there been any talk about updating the docs to include self-hosted installation instructions specific to DigitalOcean Docker droplets?