Can't self-host on DigitalOcean

Hello,

I tried to run Baserow on a droplet hosted by DigitalOcean but didn’t succeed…

I ran the following command:

  docker run \
  -d \
  --name baserow \
  -e BASEROW_PUBLIC_URL=https://<MY_DROPLET_IP> \
  -e BASEROW_CADDY_ADDRESSES=https://<MY_DROPLET_IP> \
  -e SYNC_TEMPLATES_ON_STARTUP=false \
  -v baserow_data:/baserow/data \
  -p 80:80 \
  -p 443:443 \
  baserow/baserow:1.13.3

Here is the last part of the log:

 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.172 UTC [167] LOG:  listening on IPv4 address "127.0.0.1", port 5432  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.172 UTC [167] LOG:  could not bind IPv6 address "::1": Cannot assign requested address  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.172 UTC [167] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.180 UTC [167] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"  
 [CADDY][2023-01-01 15:53:38] {"level":"info","ts":1672588418.2828312,"msg":"using provided configuration","config_file":"/baserow/caddy/Caddyfile","config_adapter":""}  
 [CADDY][2023-01-01 15:53:38] {"level":"warn","ts":1672588418.3098662,"msg":"input is not formatted with 'caddy fmt'","adapter":"caddyfile","file":"/baserow/caddy/Caddyfile","line":2}  
 [CADDY][2023-01-01 15:53:38] {"level":"info","ts":1672588418.320155,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["127.0.0.1:2019","localhost:2019","[::1]:2019"]}  
 [CADDY][2023-01-01 15:53:38] {"level":"info","ts":1672588418.3278556,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}  
 [CADDY][2023-01-01 15:53:38] {"level":"info","ts":1672588418.329508,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}  
 [CADDY][2023-01-01 15:53:38] {"level":"info","ts":1672588418.329367,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00027ed20"}  
 [CADDY][2023-01-01 15:53:38] {"level":"info","ts":1672588418.385136,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/baserow/data/caddy/data/caddy"}  
 [CADDY][2023-01-01 15:53:38] {"level":"info","ts":1672588418.4086804,"logger":"tls","msg":"finished cleaning storage units"}  
 [CADDY][2023-01-01 15:53:38] {"level":"warn","ts":1672588418.6409862,"logger":"pki.ca.local","msg":"installing root certificate (you might be prompted for password)","path":"storage:pki/authorities/local/root.crt"}  
 [CADDY][2023-01-01 15:53:38] 2023/01/01 15:53:38 Warning: "certutil" is not available, install "certutil" with "apt install libnss3-tools" or "yum install nss-tools" and try again  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.303 UTC [362] LOG:  database system was interrupted; last known up at 2023-01-01 15:51:50 UTC  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.685 UTC [362] LOG:  database system was not properly shut down; automatic recovery in progress  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.689 UTC [362] LOG:  redo starts at 0/27921C8  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.736 UTC [362] LOG:  invalid record length at 0/27D6AF0: wanted 24, got 0  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.736 UTC [362] LOG:  redo done at 0/27D6AB8  
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.737 UTC [362] LOG:  last completed transaction was at log time 2023-01-01 15:53:03.122827+00  
2023-01-01 15:53:38,794 INFO success: processes entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-01-01 15:53:38,795 INFO success: baserow-watcher entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
 [POSTGRES][2023-01-01 15:53:38] 2023-01-01 15:53:38.794 UTC [167] LOG:  database system is ready to accept connections  
 [BACKEND][2023-01-01 15:53:38] Error: Failed to connect to the postgresql database at localhost  
 [BACKEND][2023-01-01 15:53:38] Please see the error below for more details:  
 [BACKEND][2023-01-01 15:53:38] connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL:  the database system is starting up  
 [BACKEND][2023-01-01 15:53:38]   
 [BACKEND][2023-01-01 15:53:41] Waiting for PostgreSQL to become available attempt  0/5 ...  
 [BACKEND][2023-01-01 15:53:41] PostgreSQL is available  
 [CADDY][2023-01-01 15:53:46] 2023/01/01 15:53:38 define JAVA_HOME environment variable to use the Java trust  
 [CADDY][2023-01-01 15:53:46] 2023/01/01 15:53:46 certificate installed properly in linux trusts  
 [CADDY][2023-01-01 15:53:46] {"level":"info","ts":1672588426.8318326,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["104.248.136.140"]}  
 [CADDY][2023-01-01 15:53:46] {"level":"warn","ts":1672588426.8337545,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [104.248.136.140]: no OCSP server specified in certificate"}  
 [CADDY][2023-01-01 15:53:46] {"level":"info","ts":1672588426.834552,"msg":"autosaved config (load with --resume flag)","file":"/baserow/data/caddy/config/caddy/autosave.json"}  
 [BASEROW-WATCHER][2023-01-01 15:53:48] Waiting for Baserow to become available, this might take 30+ seconds...  
 [BEAT_WORKER][2023-01-01 15:53:57] Sleeping for 15 before starting beat to prevent  startup errors.  
 [BASEROW-WATCHER][2023-01-01 15:53:58] Waiting for Baserow to become available, this might take 30+ seconds...  
 [CELERY_WORKER][2023-01-01 15:54:01]    
 [CELERY_WORKER][2023-01-01 15:54:01]  -------------- default-worker@c749c1e29be7 v5.2.3 (dawn-chorus)  
 [CELERY_WORKER][2023-01-01 15:54:01] --- ***** -----   
 [CELERY_WORKER][2023-01-01 15:54:01] -- ******* ---- Linux-5.15.0-56-generic-x86_64-with-glibc2.31 2023-01-01 15:54:01  
 [CELERY_WORKER][2023-01-01 15:54:01] - *** --- * ---   
 [CELERY_WORKER][2023-01-01 15:54:01] - ** ---------- [config]  
 [CELERY_WORKER][2023-01-01 15:54:01] - ** ---------- .> app:         baserow:0x7f1743da8e50  
 [CELERY_WORKER][2023-01-01 15:54:01] - ** ---------- .> transport:   redis://:**@localhost:6379/0  
 [CELERY_WORKER][2023-01-01 15:54:01] - ** ---------- .> results:     disabled://  
 [CELERY_WORKER][2023-01-01 15:54:01] - *** --- * --- .> concurrency: 1 (prefork)  
 [CELERY_WORKER][2023-01-01 15:54:01] -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)  
 [CELERY_WORKER][2023-01-01 15:54:01] --- ***** -----   
 [CELERY_WORKER][2023-01-01 15:54:01]  -------------- [queues]  
 [CELERY_WORKER][2023-01-01 15:54:01]                 .> celery           exchange=celery(direct) key=celery  
 [CELERY_WORKER][2023-01-01 15:54:01]                   
 [CELERY_WORKER][2023-01-01 15:54:01]   
 [CELERY_WORKER][2023-01-01 15:54:01] [tasks]  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.export.tasks.clean_up_old_jobs  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.export.tasks.run_export_job  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.table.tasks.run_row_count_job  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.table.tasks.unsubscribe_user_from_table_currently_subscribed_to  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.webhooks.tasks.call_webhook  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.action.tasks.cleanup_old_actions  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.jobs.tasks.clean_up_jobs  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.jobs.tasks.run_async_job  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.snapshots.tasks.delete_application_snapshot  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.snapshots.tasks.delete_expired_snapshots  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.tasks.sync_templates_task  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.trash.tasks.permanently_delete_marked_trash  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.usage.tasks.run_calculate_storage  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.core.user.tasks.check_pending_account_deletion  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_channel_group  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_group  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_groups  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_users  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow_enterprise.tasks.unsubscribe_subject_from_tables_currently_subscribed_to_task  
 [CELERY_WORKER][2023-01-01 15:54:01]   . baserow_premium.license.tasks.license_check  
 [CELERY_WORKER][2023-01-01 15:54:01]   . djcelery_email_send_multiple  
 [EXPORT_WORKER][2023-01-01 15:54:01]    
 [EXPORT_WORKER][2023-01-01 15:54:01]  -------------- export-worker@c749c1e29be7 v5.2.3 (dawn-chorus)  
 [EXPORT_WORKER][2023-01-01 15:54:01] --- ***** -----   
 [EXPORT_WORKER][2023-01-01 15:54:01] -- ******* ---- Linux-5.15.0-56-generic-x86_64-with-glibc2.31 2023-01-01 15:54:01  
 [EXPORT_WORKER][2023-01-01 15:54:01] - *** --- * ---   
 [EXPORT_WORKER][2023-01-01 15:54:01] - ** ---------- [config]  
 [EXPORT_WORKER][2023-01-01 15:54:01] - ** ---------- .> app:         baserow:0x7f635f9aacd0  
 [EXPORT_WORKER][2023-01-01 15:54:01] - ** ---------- .> transport:   redis://:**@localhost:6379/0  
 [EXPORT_WORKER][2023-01-01 15:54:01] - ** ---------- .> results:     disabled://  
 [EXPORT_WORKER][2023-01-01 15:54:01] - *** --- * --- .> concurrency: 1 (prefork)  
 [EXPORT_WORKER][2023-01-01 15:54:01] -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)  
 [EXPORT_WORKER][2023-01-01 15:54:01] --- ***** -----   
 [EXPORT_WORKER][2023-01-01 15:54:01]  -------------- [queues]  
 [EXPORT_WORKER][2023-01-01 15:54:01]                 .> export           exchange=export(direct) key=export  
 [EXPORT_WORKER][2023-01-01 15:54:01]                   
 [EXPORT_WORKER][2023-01-01 15:54:01]   
 [EXPORT_WORKER][2023-01-01 15:54:01] [tasks]  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.export.tasks.clean_up_old_jobs  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.export.tasks.run_export_job  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.table.tasks.run_row_count_job  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.table.tasks.unsubscribe_user_from_table_currently_subscribed_to  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.contrib.database.webhooks.tasks.call_webhook  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.action.tasks.cleanup_old_actions  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.jobs.tasks.clean_up_jobs  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.jobs.tasks.run_async_job  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.snapshots.tasks.delete_application_snapshot  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.snapshots.tasks.delete_expired_snapshots  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.tasks.sync_templates_task  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.trash.tasks.permanently_delete_marked_trash  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.usage.tasks.run_calculate_storage  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.core.user.tasks.check_pending_account_deletion  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_channel_group  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_group  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_groups  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow.ws.tasks.broadcast_to_users  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow_enterprise.tasks.unsubscribe_subject_from_tables_currently_subscribed_to_task  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . baserow_premium.license.tasks.license_check  
 [EXPORT_WORKER][2023-01-01 15:54:01]   . djcelery_email_send_multiple  
 [BACKEND][2023-01-01 15:54:01] python /baserow/backend/src/baserow/manage.py migrate  
 [BACKEND][2023-01-01 15:54:01] Operations to perform:  
 [CELERY_WORKER][2023-01-01 15:54:02]   
 [EXPORT_WORKER][2023-01-01 15:54:02]   
 [CELERY_WORKER][2023-01-01 15:54:02] [2023-01-01 15:54:02,110: INFO/MainProcess] Connected to redis://:**@localhost:6379/0  
 [EXPORT_WORKER][2023-01-01 15:54:02] [2023-01-01 15:54:02,126: INFO/MainProcess] Connected to redis://:**@localhost:6379/0  
 [BACKEND][2023-01-01 15:54:02]   Apply all migrations: auth, baserow_enterprise, baserow_premium, contenttypes, core, database, db, sessions  
 [BACKEND][2023-01-01 15:54:02] Clearing Baserow's internal generated model cache...  
 [BACKEND][2023-01-01 15:54:02] Done clearing cache.  
 [BACKEND][2023-01-01 15:54:02] Running migrations:  
 [CELERY_WORKER][2023-01-01 15:54:03] [2023-01-01 15:54:02,148: INFO/MainProcess] mingle: searching for neighbors  
 [EXPORT_WORKER][2023-01-01 15:54:03] [2023-01-01 15:54:02,162: INFO/MainProcess] mingle: searching for neighbors  
 [CELERY_WORKER][2023-01-01 15:54:03] [2023-01-01 15:54:03,223: INFO/MainProcess] mingle: all alone  
 [EXPORT_WORKER][2023-01-01 15:54:03] [2023-01-01 15:54:03,231: INFO/MainProcess] mingle: all alone  
 [BACKEND][2023-01-01 15:54:03]   No migrations to apply.  
 [BACKEND][2023-01-01 15:54:03] Creating all operations...  
 [BACKEND][2023-01-01 15:54:03] Checking to see if formulas need updating...  
 [BACKEND][2023-01-01 15:54:03] INFO 2023-01-01 15:54:03,948 baserow.contrib.database.formula.migrations.handler.migrate_formulas:168- Found 0 batches of formulas to migrate from version None to 5.   
 [BACKEND][2023-01-01 15:54:04] Finished migrating formulas: : 0it [00:00, ?it/s]  
 [BACKEND][2023-01-01 15:54:07] Syncing default roles: 100%|██████████| 7/7 [00:00<00:00,  8.01it/s]  
 [BACKEND][2023-01-01 15:54:07] [2023-01-01 15:54:07 +0000] [171] [INFO] Starting gunicorn 20.1.0  
 [BACKEND][2023-01-01 15:54:07] [2023-01-01 15:54:07 +0000] [171] [INFO] Listening at: http://127.0.0.1:8000 (171)  
 [BACKEND][2023-01-01 15:54:07] [2023-01-01 15:54:07 +0000] [171] [INFO] Using worker: uvicorn.workers.UvicornWorker  
 [BACKEND][2023-01-01 15:54:07] [2023-01-01 15:54:07 +0000] [1339] [INFO] Booting worker with pid: 1339  
 [BACKEND][2023-01-01 15:54:07] [2023-01-01 15:54:07 +0000] [1340] [INFO] Booting worker with pid: 1340  
 [BEAT_WORKER][2023-01-01 15:54:07] celery beat v5.2.3 (dawn-chorus) is starting.  
 [BEAT_WORKER][2023-01-01 15:54:07] __    -    ... __   -        _  
 [BEAT_WORKER][2023-01-01 15:54:07] LocalTime -> 2023-01-01 15:54:07  
 [BEAT_WORKER][2023-01-01 15:54:07] Configuration ->  
 [BEAT_WORKER][2023-01-01 15:54:07]     . broker -> redis://:**@localhost:6379/0  
 [BEAT_WORKER][2023-01-01 15:54:07]     . loader -> celery.loaders.app.AppLoader  
 [BEAT_WORKER][2023-01-01 15:54:07]     . scheduler -> redbeat.schedulers.RedBeatScheduler  
 [BEAT_WORKER][2023-01-01 15:54:07]        . redis -> redis://:**@localhost:6379/0  
 [BEAT_WORKER][2023-01-01 15:54:07]        . lock -> `redbeat::lock` 1.33 minutes (80s)  
 [BEAT_WORKER][2023-01-01 15:54:07]     . logfile -> [stderr]@%INFO  
 [BEAT_WORKER][2023-01-01 15:54:07]     . maxinterval -> 20.00 seconds (20s)  
2023-01-01 15:54:08,391 INFO success: caddy entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-01-01 15:54:08,392 INFO success: postgresql entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-01-01 15:54:08,393 INFO success: redis entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-01-01 15:54:08,393 INFO success: backend entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-01-01 15:54:08,393 INFO success: celeryworker entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-01-01 15:54:08,393 INFO success: exportworker entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-01-01 15:54:08,393 INFO success: webfrontend entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
 [CELERY_WORKER][2023-01-01 15:54:22] [2023-01-01 15:54:03,322: INFO/MainProcess] default-worker@c749c1e29be7 ready.  
 [EXPORT_WORKER][2023-01-01 15:54:23] [2023-01-01 15:54:03,331: INFO/MainProcess] export-worker@c749c1e29be7 ready.  
 [CELERY_WORKER][2023-01-01 15:54:32] [2023-01-01 15:54:21,950: INFO/MainProcess] missed heartbeat from export-worker@c749c1e29be7  
 [EXPORT_WORKER][2023-01-01 15:54:33] [2023-01-01 15:54:22,723: INFO/MainProcess] missed heartbeat from default-worker@c749c1e29be7  
 [BASEROW-WATCHER][2023-01-01 15:54:34] Waiting for Baserow to become available, this might take 30+ seconds...  
 [BACKEND][2023-01-01 15:54:39] [2023-01-01 15:54:07 +0000] [1341] [INFO] Booting worker with pid: 1341  
 [BACKEND][2023-01-01 15:54:59] [2023-01-01 15:54:38 +0000] [171] [CRITICAL] WORKER TIMEOUT (pid:1341)  
 [BACKEND][2023-01-01 15:55:03] [2023-01-01 15:54:46 +0000] [171] [CRITICAL] WORKER TIMEOUT (pid:1339)  
 [CELERY_WORKER][2023-01-01 15:55:03] [2023-01-01 15:54:32,361: INFO/MainProcess] missed heartbeat from export-worker@c749c1e29be7  
 [CELERY_WORKER][2023-01-01 15:55:03] [2023-01-01 15:55:03,710: INFO/MainProcess] missed heartbeat from export-worker@c749c1e29be7  
 [BACKEND][2023-01-01 15:55:03] [2023-01-01 15:55:03 +0000] [171] [WARNING] Worker with pid 1341 was terminated due to signal 6  
 [BACKEND][2023-01-01 15:55:03] [2023-01-01 15:55:03 +0000] [171] [CRITICAL] WORKER TIMEOUT (pid:1340)  
 [EXPORT_WORKER][2023-01-01 15:55:03] [2023-01-01 15:54:33,800: INFO/MainProcess] missed heartbeat from default-worker@c749c1e29be7  
 [EXPORT_WORKER][2023-01-01 15:55:03] [2023-01-01 15:55:03,784: INFO/MainProcess] missed heartbeat from default-worker@c749c1e29be7  
 [BACKEND][2023-01-01 15:55:03] [2023-01-01 15:55:03 +0000] [171] [WARNING] Worker with pid 1339 was terminated due to signal 6  
 [BACKEND][2023-01-01 15:55:03] [2023-01-01 15:55:03 +0000] [1347] [INFO] Booting worker with pid: 1347  
 [BACKEND][2023-01-01 15:55:04] [2023-01-01 15:55:03 +0000] [171] [WARNING] Worker with pid 1340 was terminated due to signal 9  
 [BACKEND][2023-01-01 15:55:04] [2023-01-01 15:55:04 +0000] [1349] [INFO] Booting worker with pid: 1349  
 [EXPORT_WORKER][2023-01-01 15:55:05] [2023-01-01 15:55:03,838: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:1337 exited with 'signal 9 (SIGKILL)'  
2023-01-01 15:55:15,373 INFO reaped unknown pid 1375 (exit status 28)
2023-01-01 15:55:18,562 INFO success: beatworker entered RUNNING state, process has stayed up for > than 100 seconds (startsecs)
 [BASEROW-WATCHER][2023-01-01 15:55:22] Waiting for Baserow to become available, this might take 30+ seconds...  
 [CELERY_WORKER][2023-01-01 15:55:32] [2023-01-01 15:55:03,716: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:1336 exited with 'signal 9 (SIGKILL)'  
 [EXPORT_WORKER][2023-01-01 15:55:32] [2023-01-01 15:55:05,215: WARNING/MainProcess] Substantial drift from default-worker@c749c1e29be7 may mean clocks are out of sync.  Current drift is 30 seconds.  [orig: 2023-01-01 15:55:05.215264 recv: 2023-01-01 15:54:35.373833]  
 [CELERY_WORKER][2023-01-01 15:55:32] [2023-01-01 15:55:32,454: INFO/MainProcess] missed heartbeat from export-worker@c749c1e29be7  
 [EXPORT_WORKER][2023-01-01 15:55:32] [2023-01-01 15:55:32,454: INFO/MainProcess] missed heartbeat from default-worker@c749c1e29be7  
 [BACKEND][2023-01-01 15:55:34] [2023-01-01 15:55:04 +0000] [1355] [INFO] Booting worker with pid: 1355  
2023-01-01 15:55:43,708 INFO exited: exportworker (terminated by SIGKILL; not expected)
 [BACKEND][2023-01-01 15:55:35] [2023-01-01 15:55:34 +0000] [171] [CRITICAL] WORKER TIMEOUT (pid:1347)  
 [BACKEND][2023-01-01 15:55:43] [2023-01-01 15:55:34 +0000] [171] [CRITICAL] WORKER TIMEOUT (pid:1349)  
2023-01-01 15:55:44,063 INFO spawned: 'exportworker' with pid 1388
2023-01-01 15:55:44,209 INFO reaped unknown pid 267 (exit status 141)
2023-01-01 15:55:44,213 INFO reaped unknown pid 1384 (terminated by SIGKILL)
Baserow was stopped or one of it's services crashed, see the logs above for more details. 
 [BASEROW-WATCHER][2023-01-01 15:55:44] Waiting for Baserow to become available, this might take 30+ seconds...  
 [BACKEND][2023-01-01 15:55:44] [2023-01-01 15:55:41 +0000] [171] [CRITICAL] WORKER TIMEOUT (pid:1355)  
 [BACKEND][2023-01-01 15:55:44] [2023-01-01 15:55:44 +0000] [171] [WARNING] Worker with pid 1347 was terminated due to signal 6  
 [BACKEND][2023-01-01 15:55:44] [2023-01-01 15:55:44 +0000] [171] [WARNING] Worker with pid 1349 was terminated due to signal 6  
 [CELERY_WORKER][2023-01-01 15:55:44] [2023-01-01 15:55:32,486: ERROR/MainProcess] Process 'ForkPoolWorker-2' pid:1377 exited with 'signal 9 (SIGKILL)'  
 [CELERY_WORKER][2023-01-01 15:55:44] [2023-01-01 15:55:43,993: ERROR/MainProcess] Timed out waiting for UP message from <ForkProcess(ForkPoolWorker-3, started daemon)>  
 [BACKEND][2023-01-01 15:55:44] [2023-01-01 15:55:44 +0000] [1387] [INFO] Booting worker with pid: 1387  
 [BACKEND][2023-01-01 15:55:44] [2023-01-01 15:55:44 +0000] [171] [WARNING] Worker with pid 1355 was terminated due to signal 9  
2023-01-01 15:55:44,286 WARN received SIGTERM indicating exit request
2023-01-01 15:55:44,287 INFO waiting for processes, baserow-watcher, caddy, postgresql, redis, backend, celeryworker, exportworker, webfrontend, beatworker to die
 [BACKEND][2023-01-01 15:55:44] [2023-01-01 15:55:44 +0000] [1390] [INFO] Booting worker with pid: 1390  
 [BEAT_WORKER][2023-01-01 15:55:44] [2023-01-01 15:54:07,389: INFO/MainProcess] beat: Starting...  
2023-01-01 15:55:44,690 INFO stopped: beatworker (terminated by SIGQUIT (core dumped))
2023-01-01 15:55:44,690 INFO reaped unknown pid 299 (exit status 0)
 [WEBFRONTEND][2023-01-01 15:55:45] ℹ Listening on: http://localhost:3000/  
2023-01-01 15:55:46,376 INFO stopped: webfrontend (terminated by SIGTERM)
2023-01-01 15:55:46,377 INFO reaped unknown pid 290 (exit status 0)
2023-01-01 15:55:47,392 INFO stopped: exportworker (terminated by SIGTERM)
2023-01-01 15:55:47,393 INFO reaped unknown pid 1400 (exit status 0)
2023-01-01 15:55:47,393 INFO waiting for processes, baserow-watcher, caddy, postgresql, redis, backend, celeryworker to die
 [CELERY_WORKER][2023-01-01 15:55:47] [2023-01-01 15:55:44,225: ERROR/MainProcess] Process 'ForkPoolWorker-3' pid:1385 exited with 'signal 9 (SIGKILL)'  
 [CELERY_WORKER][2023-01-01 15:55:47]   
2023-01-01 15:55:50,400 INFO waiting for processes, baserow-watcher, caddy, postgresql, redis, backend, celeryworker to die
 [CELERY_WORKER][2023-01-01 15:55:52] worker: Warm shutdown (MainProcess)  
2023-01-01 15:55:53,254 INFO stopped: celeryworker (exit status 0)
2023-01-01 15:55:53,255 INFO reaped unknown pid 259 (exit status 0)
 [BACKEND][2023-01-01 15:55:53] [2023-01-01 15:55:44 +0000] [1391] [INFO] Booting worker with pid: 1391  
 [BACKEND][2023-01-01 15:55:53] [2023-01-01 15:55:53 +0000] [171] [INFO] Handling signal: term  
 [BACKEND][2023-01-01 15:55:53] [2023-01-01 15:55:53 +0000] [171] [WARNING] Worker with pid 1390 was terminated due to signal 15  
 [BACKEND][2023-01-01 15:55:53] [2023-01-01 15:55:53 +0000] [171] [WARNING] Worker with pid 1391 was terminated due to signal 15  
 [BACKEND][2023-01-01 15:55:53] [2023-01-01 15:55:53 +0000] [171] [WARNING] Worker with pid 1387 was terminated due to signal 15  
 [BACKEND][2023-01-01 15:55:53] [2023-01-01 15:55:53 +0000] [171] [INFO] Shutting down: Master  
2023-01-01 15:55:53,448 INFO waiting for processes, baserow-watcher, caddy, postgresql, redis, backend to die
 [BASEROW-WATCHER][2023-01-01 15:55:54] Waiting for Baserow to become available, this might take 30+ seconds...  
2023-01-01 15:55:54,301 INFO stopped: backend (exit status 0)
2023-01-01 15:55:54,301 INFO reaped unknown pid 239 (exit status 0)
 [REDIS][2023-01-01 15:55:54] 168:M 01 Jan 2023 15:53:38.094 * Ready to accept connections  
 [REDIS][2023-01-01 15:55:54] 168:signal-handler (1672588554) Received SIGTERM scheduling shutdown...  
 [REDIS][2023-01-01 15:55:54] 168:M 01 Jan 2023 15:55:54.365 # User requested shutdown...  
 [REDIS][2023-01-01 15:55:54] 168:M 01 Jan 2023 15:55:54.365 # Redis is now ready to exit, bye bye...  
2023-01-01 15:55:55,376 INFO stopped: redis (exit status 0)
2023-01-01 15:55:55,376 INFO reaped unknown pid 225 (exit status 0)
 [POSTGRES][2023-01-01 15:55:55] 2023-01-01 15:53:38.821 UTC [388] baserow@baserow FATAL:  the database system is starting up  
 [POSTGRES][2023-01-01 15:55:55] 2023-01-01 15:55:55.380 UTC [167] LOG:  received smart shutdown request  
 [POSTGRES][2023-01-01 15:55:55] 2023-01-01 15:55:55.401 UTC [167] LOG:  background worker "logical replication launcher" (PID 395) exited with exit code 1  
 [POSTGRES][2023-01-01 15:55:55] 2023-01-01 15:55:55.403 UTC [390] LOG:  shutting down  
 [POSTGRES][2023-01-01 15:55:55] 2023-01-01 15:55:55.444 UTC [167] LOG:  database system is shut down  
2023-01-01 15:55:56,451 INFO stopped: postgresql (exit status 0)
2023-01-01 15:55:56,451 INFO reaped unknown pid 235 (exit status 0)
2023-01-01 15:55:56,452 INFO waiting for processes, baserow-watcher, caddy to die
 [CADDY][2023-01-01 15:55:56] {"level":"info","ts":1672588426.8351622,"msg":"serving initial configuration"}  
 [CADDY][2023-01-01 15:55:56] {"level":"info","ts":1672588556.4690428,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}  
 [CADDY][2023-01-01 15:55:56] {"level":"warn","ts":1672588556.4756896,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}  
 [CADDY][2023-01-01 15:55:56] {"level":"info","ts":1672588556.505443,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc00027ed20"}  
 [CADDY][2023-01-01 15:55:56] {"level":"info","ts":1672588556.5087063,"logger":"admin","msg":"stopped previous server","address":"tcp/localhost:2019"}  
 [CADDY][2023-01-01 15:55:56] {"level":"info","ts":1672588556.5093925,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}  
2023-01-01 15:55:57,513 INFO stopped: caddy (exit status 0)
2023-01-01 15:55:57,514 INFO reaped unknown pid 212 (exit status 0)
2023-01-01 15:55:58,516 INFO stopped: baserow-watcher (terminated by SIGTERM)
2023-01-01 15:55:58,518 INFO stopped: processes (terminated by SIGTERM)
2023-01-01 15:55:58,518 ERRO pool processes event buffer overflowed, discarding event 1

Do you know what the problem is here?

Thanks

Hi @Clapp

Can you let me know the specs of your droplet? I recommend using one with at least 4GB of RAM; anything lower than that might cause crashes like the one you are seeing.
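
If you want to double-check that memory is the culprit, you can look at the droplet's available RAM and at the kernel OOM killer from an SSH session. This is just a rough sketch (the exact log wording varies a bit per distro), but it would also explain the "signal 9 (SIGKILL)" lines in your log:

  # Show total and available memory on the droplet
  free -h

  # Look for processes killed by the kernel OOM killer; Baserow workers being
  # killed with signal 9 is the typical symptom of running out of memory
  dmesg -T | grep -i -E 'out of memory|killed process'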

Hello @nigel,

Thank you for your reply. Nope, it was a smaller one; I’ll try one with at least 4GB of memory as advised.

I’ll keep you posted.

It worked like a charm with a 4GB RAM droplet!


Is it difficult to do for a non-dev? I wish there was a 1-click droplet for Baserow on DigitalOcean, but there isn’t.

Would you have a tutorial on how to host and update Baserow on a DO server?

Hello @bastien, apologies for not getting back to you earlier. We plan to add more self-hosting options in the future, and a 1-click install on DigitalOcean sounds like a great option (I’ll share it with our DevOps engineer, who is starting next week or so :slightly_smiling_face:).

Would you have a tutorial on how to host and update Baserow on a DO server?

We do plan to produce more content about self-hosting Baserow. For now, I can advise you to check out these materials about running Baserow on DigitalOcean: Host Your Own NoCode Airtable Alternative with Baserow, https://youtu.be/B98pTpQUqmM.
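
In the meantime, updating a Docker-based install on a droplet usually follows the stop, remove, pull, and re-run pattern; the baserow_data volume survives the container being removed, so your data stays in place. A rough sketch only (the <NEW_VERSION> tag is a placeholder; back up the volume and check the official upgrade notes first):

  # Stop and remove the running container; the baserow_data volume is kept
  docker stop baserow
  docker rm baserow

  # Pull the newer image and start it again with the same volume and settings
  # (add back any other -e flags you used in your original docker run command)
  docker pull baserow/baserow:<NEW_VERSION>
  docker run -d --name baserow \
    -e BASEROW_PUBLIC_URL=https://<MY_DROPLET_IP> \
    -v baserow_data:/baserow/data \
    -p 80:80 -p 443:443 \
    baserow/baserow:<NEW_VERSION>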

Note: for non-technical people, we advise using Cloudron because of its ease of use, or Heroku with its one-click deployment, but definitely choose whatever you feel is best for you.

By using Cloudron or Heroku, do you mean using them through DO? Indeed, I am not a dev.

You can run Cloudron on DigitalOcean but not Heroku, as Heroku and DigitalOcean are both cloud hosting providers, which means they are alternatives to each other. Hope that helps :raised_hands:

Mmm, that gives me an idea for a quick weekend project. Did you get it up and running in the end? If you need any help, drop me a DM or an email.

Hey, I am not the author of this topic, but I am very interested in self-hosting a Baserow database at the lowest cost possible, and as a non-dev, I find it hard for now.


@bvairet Drop me an email (hello@86-88.solutions) or a DM and we can have a chat.