Install problems on Synology Docker

Hello,

I continue to experience problems installing Baserow on a Synology NAS.

Here is my Portainer stack:

version: '3.3'
services:
    baserow:
        container_name: baserow
        ports:
            - '9988:80'
        environment:
            - 'BASEROW_PUBLIC_URL=http://192.168.86.200:9988'
        volumes:
            - '/volume2/docker/baserow:/baserow/data'
        restart: always
        image: 'baserow/baserow:1.13.0'

And here are the logs:

 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.jobs.tasks.clean_up_jobs  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.jobs.tasks.run_async_job  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.snapshots.tasks.delete_application_snapshot  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.snapshots.tasks.delete_expired_snapshots  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.tasks.sync_templates_task  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.trash.tasks.permanently_delete_marked_trash  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.usage.tasks.run_calculate_storage  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.core.user.tasks.check_pending_account_deletion  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_channel_group  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_group  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_groups  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_users  
 [CELERY_WORKER][2022-12-13 15:14:55]   . baserow_premium.license.tasks.license_check  
 [CELERY_WORKER][2022-12-13 15:14:55]   . djcelery_email_send_multiple  
 [EXPORT_WORKER][2022-12-13 15:14:55] INFO 2022-12-13 15:14:54,037 xmlschema.include_schema:1250- Resource 'XMLSchema.xsd' is already loaded   
 [EXPORT_WORKER][2022-12-13 15:14:55]    
 [EXPORT_WORKER][2022-12-13 15:14:55]  -------------- export-worker@ea4ec9820bfa v5.2.3 (dawn-chorus)  
 [EXPORT_WORKER][2022-12-13 15:14:55] --- ***** -----   
 [EXPORT_WORKER][2022-12-13 15:14:55] -- ******* ---- Linux-4.4.180+-x86_64-with-glibc2.31 2022-12-13 15:14:55  
 [EXPORT_WORKER][2022-12-13 15:14:55] - *** --- * ---   
 [EXPORT_WORKER][2022-12-13 15:14:55] - ** ---------- [config]  
 [EXPORT_WORKER][2022-12-13 15:14:55] - ** ---------- .> app:         baserow:0x7fe07158c7f0  
 [EXPORT_WORKER][2022-12-13 15:14:55] - ** ---------- .> transport:   redis://:**@localhost:6379/0  
 [EXPORT_WORKER][2022-12-13 15:14:55] - ** ---------- .> results:     disabled://  
 [EXPORT_WORKER][2022-12-13 15:14:55] - *** --- * --- .> concurrency: 1 (prefork)  
 [EXPORT_WORKER][2022-12-13 15:14:55] -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)  
 [EXPORT_WORKER][2022-12-13 15:14:55] --- ***** -----   
 [EXPORT_WORKER][2022-12-13 15:14:55]  -------------- [queues]  
 [EXPORT_WORKER][2022-12-13 15:14:55]                 .> export           exchange=export(direct) key=export  
 [EXPORT_WORKER][2022-12-13 15:14:55]                   
 [EXPORT_WORKER][2022-12-13 15:14:55]   
 [EXPORT_WORKER][2022-12-13 15:14:55] [tasks]  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.contrib.database.export.tasks.clean_up_old_jobs  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.contrib.database.export.tasks.run_export_job  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.contrib.database.table.tasks.run_row_count_job  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.contrib.database.webhooks.tasks.call_webhook  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.action.tasks.cleanup_old_actions  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.jobs.tasks.clean_up_jobs  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.jobs.tasks.run_async_job  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.snapshots.tasks.delete_application_snapshot  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.snapshots.tasks.delete_expired_snapshots  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.tasks.sync_templates_task  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.trash.tasks.permanently_delete_marked_trash  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.usage.tasks.run_calculate_storage  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.core.user.tasks.check_pending_account_deletion  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_channel_group  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_group  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_groups  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow.ws.tasks.broadcast_to_users  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . baserow_premium.license.tasks.license_check  
 [EXPORT_WORKER][2022-12-13 15:14:55]   . djcelery_email_send_multiple  
 [CELERY_WORKER][2022-12-13 15:14:55]   
 [CELERY_WORKER][2022-12-13 15:14:55] [2022-12-13 15:14:55,365: INFO/MainProcess] Connected to redis://:**@localhost:6379/0  
 [EXPORT_WORKER][2022-12-13 15:14:55]   
 [EXPORT_WORKER][2022-12-13 15:14:55] [2022-12-13 15:14:55,405: INFO/MainProcess] Connected to redis://:**@localhost:6379/0  
 [BACKEND][2022-12-13 15:14:56] Waiting for PostgreSQL to become available attempt  1/5 ...  
 [BACKEND][2022-12-13 15:14:56] Error: Failed to connect to the postgresql database at localhost  
 [BACKEND][2022-12-13 15:14:56] Please see the error below for more details:  
 [BACKEND][2022-12-13 15:14:56] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused  
 [BACKEND][2022-12-13 15:14:56] 	Is the server running on that host and accepting TCP/IP connections?  
 [BACKEND][2022-12-13 15:14:56] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address  
 [BACKEND][2022-12-13 15:14:56] 	Is the server running on that host and accepting TCP/IP connections?  
 [BACKEND][2022-12-13 15:14:56]   
 [CELERY_WORKER][2022-12-13 15:14:56] [2022-12-13 15:14:55,368: INFO/MainProcess] mingle: searching for neighbors  
 [CELERY_WORKER][2022-12-13 15:14:56] [2022-12-13 15:14:56,378: INFO/MainProcess] mingle: all alone  
 [EXPORT_WORKER][2022-12-13 15:14:56] [2022-12-13 15:14:55,408: INFO/MainProcess] mingle: searching for neighbors  
 [EXPORT_WORKER][2022-12-13 15:14:56] [2022-12-13 15:14:56,417: INFO/MainProcess] mingle: all alone  
 [BACKEND][2022-12-13 15:14:58] Waiting for PostgreSQL to become available attempt  2/5 ...  
 [BACKEND][2022-12-13 15:14:58] Error: Failed to connect to the postgresql database at localhost  
 [BACKEND][2022-12-13 15:14:58] Please see the error below for more details:  
 [BACKEND][2022-12-13 15:14:58] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused  
 [BACKEND][2022-12-13 15:14:58] 	Is the server running on that host and accepting TCP/IP connections?  
 [BACKEND][2022-12-13 15:14:58] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address  
 [BACKEND][2022-12-13 15:14:58] 	Is the server running on that host and accepting TCP/IP connections?  
2022-12-13 15:14:58,265 INFO spawned: 'postgresql' with pid 412
 [BACKEND][2022-12-13 15:14:58]   
 [POSTGRES][2022-12-13 15:14:58] 2022-12-13 15:14:58.322 UTC [412] FATAL:  data directory "/baserow/data/postgres" has invalid permissions  
 [POSTGRES][2022-12-13 15:14:58] 2022-12-13 15:14:58.322 UTC [412] DETAIL:  Permissions should be u=rwx (0700) or u=rwx,g=rx (0750).  
2022-12-13 15:14:58,323 INFO exited: postgresql (exit status 1; not expected)
2022-12-13 15:14:58,323 INFO gave up: postgresql entered FATAL state, too many start retries too quickly
2022-12-13 15:14:58,323 INFO reaped unknown pid 422 (exit status 0)
Baserow was stopped or one of it's services crashed, see the logs above for more details. 
2022-12-13 15:15:00,326 WARN received SIGTERM indicating exit request
2022-12-13 15:15:00,326 INFO waiting for processes, baserow-watcher, caddy, redis, backend, celeryworker, exportworker, webfrontend, beatworker to die
 [BACKEND][2022-12-13 15:15:00] Waiting for PostgreSQL to become available attempt  3/5 ...  
 [BACKEND][2022-12-13 15:15:00] Error: Failed to connect to the postgresql database at localhost  
 [BACKEND][2022-12-13 15:15:00] Please see the error below for more details:  
 [BACKEND][2022-12-13 15:15:00] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused  
 [BACKEND][2022-12-13 15:15:00] 	Is the server running on that host and accepting TCP/IP connections?  
 [BACKEND][2022-12-13 15:15:00] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address  
 [BACKEND][2022-12-13 15:15:00] 	Is the server running on that host and accepting TCP/IP connections?  
 [BACKEND][2022-12-13 15:15:00]

It seems clear I have a PostgreSQL problem (though I don’t know why) and potentially a permissions problem, even though I already ran chmod 777 on the entire baserow directory, including subdirectories.

Any help would be much appreciated! Thanks.

@pjones I wouldn’t recommend making any manual permission changes to the /baserow directory.

The error in the logs appears because you’ve manually changed those permissions:

 [POSTGRES][2022-12-13 15:14:58] 2022-12-13 15:14:58.322 UTC [412] FATAL:  data directory "/baserow/data/postgres" has invalid permissions  
 [POSTGRES][2022-12-13 15:14:58] 2022-12-13 15:14:58.322 UTC [412] DETAIL:  Permissions should be u=rwx (0700) or u=rwx,g=rx (0750).  

You need to change the permissions on that folder to match what the error is telling you it should be.

@nigel Many thanks for your reply!

I have deleted all the previous folders and created a new baserow folder on my NAS with the default permissions; I did not alter anything.

I then created a new baserow container using the above docker compose file.

Baserow now starts, but unfortunately it will not sync the templates (it gives a permissions error), nor will it accept file attachments in a File field.

Here is the log.

 [BACKEND][2022-12-15 17:43:15]   Applying database.0079_table_version... OK  
 [BACKEND][2022-12-15 17:43:15]   Applying database.0080_auto_20220702_1612... OK  
 [BACKEND][2022-12-15 17:43:15]   Applying database.0081_batch_webhooks... OK  
 [BACKEND][2022-12-15 17:43:15]   Applying database.0082_add_import_job_data_mixin... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0083_form_field_options_conditions... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0084_duplicatetablejob... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0085_alter_fileimportjob_name... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0086_formview_mode... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0087_add_duplicate_field_job_type... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0088_multiple_collaborators_field... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0089_update_webhook_url_validators... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0090_add_link_formula_type... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying database.0091_view_show_logo... OK  
 [BACKEND][2022-12-15 17:43:16]   Applying db.0001_initial... OK  
 [BACKEND][2022-12-15 17:43:17]   Applying sessions.0001_initial... OK  
 [BACKEND][2022-12-15 17:43:17] Submitting the sync templates task to run asynchronously in celery after the migration...  
 [EXPORT_WORKER][2022-12-15 17:43:17] [2022-12-15 17:43:00,526: INFO/MainProcess] export-worker@6a88f02177a8 ready.  
 [EXPORT_WORKER][2022-12-15 17:43:17] [2022-12-15 17:43:17,233: INFO/MainProcess] Task baserow.core.tasks.sync_templates_task[9fe34bc9-c836-4eba-9751-e04fb1f90e7a] received  
 [EXPORT_WORKER][2022-12-15 17:43:17] [2022-12-15 17:43:17,267: WARNING/ForkPoolWorker-1]   
 [BACKEND][2022-12-15 17:43:17] Creating all operations...  
 [BACKEND][2022-12-15 17:43:17] Checking to see if formulas need updating...  
 [BACKEND][2022-12-15 17:43:17] INFO 2022-12-15 17:43:17,366 baserow.contrib.database.formula.migrations.handler.migrate_formulas:168- Found 0 batches of formulas to migrate from version None to 5.   
 [BACKEND][2022-12-15 17:43:17]   
 [BACKEND][2022-12-15 17:43:17] 0it [00:00, ?it/s]  
 [BACKEND][2022-12-15 17:43:17] Finished migrating formulas: : 0it [00:00, ?it/s]  
 [BACKEND][2022-12-15 17:43:17] Finished migrating formulas: : 0it [00:00, ?it/s]  
 [BACKEND][2022-12-15 17:43:17]   
 [BACKEND][2022-12-15 17:43:18] Syncing default roles:   0%|          | 0/6 [00:00<?, ?it/s]  
 [BACKEND][2022-12-15 17:43:19] Syncing default roles:  17%|█▋        | 1/6 [00:00<00:03,  1.27it/s]  
 [EXPORT_WORKER][2022-12-15 17:43:19] Syncing Baserow templates. Disable by setting BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION=false.:   0%|          | 0/73 [00:00<?, ?it/s]  
 [EXPORT_WORKER][2022-12-15 17:43:19] [2022-12-15 17:43:19,166: WARNING/ForkPoolWorker-1]   
 [EXPORT_WORKER][2022-12-15 17:43:19] Syncing Baserow templates. Disable by setting BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION=false.:   0%|          | 0/73 [00:01<?, ?it/s]  
 [EXPORT_WORKER][2022-12-15 17:43:19] [2022-12-15 17:43:19,190: ERROR/ForkPoolWorker-1] Task baserow.core.tasks.sync_templates_task[9fe34bc9-c836-4eba-9751-e04fb1f90e7a] raised unexpected: PermissionError(13, 'Permission denied')  
 [EXPORT_WORKER][2022-12-15 17:43:19] Traceback (most recent call last):  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/venv/lib/python3.9/site-packages/celery/app/trace.py", line 451, in trace_task  
 [EXPORT_WORKER][2022-12-15 17:43:19]     R = retval = fun(*args, **kwargs)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/venv/lib/python3.9/site-packages/celery/app/trace.py", line 734, in __protected_call__  
 [EXPORT_WORKER][2022-12-15 17:43:19]     return self.run(*args, **kwargs)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/core/tasks.py", line 24, in sync_templates_task  
 [EXPORT_WORKER][2022-12-15 17:43:19]     CoreHandler().sync_templates()  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/usr/lib/python3.9/contextlib.py", line 79, in inner  
 [EXPORT_WORKER][2022-12-15 17:43:19]     return func(*args, **kwds)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/core/handler.py", line 1428, in sync_templates  
 [EXPORT_WORKER][2022-12-15 17:43:19]     self.import_applications_to_group(  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/core/handler.py", line 1309, in import_applications_to_group  
 [EXPORT_WORKER][2022-12-15 17:43:19]     imported_application = application_type.import_serialized(  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/contrib/database/application_types.py", line 427, in import_serialized  
 [EXPORT_WORKER][2022-12-15 17:43:19]     self.import_tables_serialized(  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/contrib/database/application_types.py", line 347, in import_tables_serialized  
 [EXPORT_WORKER][2022-12-15 17:43:19]     field_type.set_import_serialized_value(  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/contrib/database/fields/field_types.py", line 2096, in set_import_serialized_value  
 [EXPORT_WORKER][2022-12-15 17:43:19]     user_file = user_file_handler.upload_user_file(  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/core/user_files/handler.py", line 249, in upload_user_file  
 [EXPORT_WORKER][2022-12-15 17:43:19]     self.generate_and_save_image_thumbnails(image, user_file, storage=storage)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/core/user_files/handler.py", line 165, in generate_and_save_image_thumbnails  
 [EXPORT_WORKER][2022-12-15 17:43:19]     storage.save(thumbnail_path, thumbnail_stream)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/venv/lib/python3.9/site-packages/django/core/files/storage.py", line 54, in save  
 [EXPORT_WORKER][2022-12-15 17:43:19]     name = self._save(name, content)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/backend/src/baserow/core/storage.py", line 8, in _save  
 [EXPORT_WORKER][2022-12-15 17:43:19]     return super()._save(name, content)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/baserow/venv/lib/python3.9/site-packages/django/core/files/storage.py", line 260, in _save  
 [EXPORT_WORKER][2022-12-15 17:43:19]     os.makedirs(directory, exist_ok=True)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/usr/lib/python3.9/os.py", line 215, in makedirs  
 [EXPORT_WORKER][2022-12-15 17:43:19]     makedirs(head, exist_ok=exist_ok)  
 [EXPORT_WORKER][2022-12-15 17:43:19]   File "/usr/lib/python3.9/os.py", line 225, in makedirs  
 [EXPORT_WORKER][2022-12-15 17:43:19]     mkdir(name, mode)  
 [BACKEND][2022-12-15 17:43:19] Syncing default roles:  33%|███▎      | 2/6 [00:01<00:03,  1.14it/s]  
 [BACKEND][2022-12-15 17:43:19] Syncing default roles:  50%|█████     | 3/6 [00:02<00:02,  1.45it/s]  
 [BACKEND][2022-12-15 17:43:20] Syncing default roles:  67%|██████▋   | 4/6 [00:02<00:01,  1.82it/s]  
 [BACKEND][2022-12-15 17:43:20] Syncing default roles:  83%|████████▎ | 5/6 [00:02<00:00,  2.22it/s]  
 [BACKEND][2022-12-15 17:43:21] Syncing default roles: 100%|██████████| 6/6 [00:02<00:00,  2.13it/s]  
 [BACKEND][2022-12-15 17:43:21] [2022-12-15 17:43:21 +0000] [274] [INFO] Starting gunicorn 20.1.0  
 [BACKEND][2022-12-15 17:43:21] [2022-12-15 17:43:21 +0000] [274] [INFO] Listening at: http://127.0.0.1:8000 (274)  
 [BACKEND][2022-12-15 17:43:21] [2022-12-15 17:43:21 +0000] [274] [INFO] Using worker: uvicorn.workers.UvicornWorker  
 [BACKEND][2022-12-15 17:43:21] [2022-12-15 17:43:21 +0000] [515] [INFO] Booting worker with pid: 515  
 [BACKEND][2022-12-15 17:43:21] [2022-12-15 17:43:21 +0000] [516] [INFO] Booting worker with pid: 516  
 [BACKEND][2022-12-15 17:43:22] [2022-12-15 17:43:21 +0000] [517] [INFO] Booting worker with pid: 517  
2022-12-15 17:43:22,918 INFO success: caddy entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-12-15 17:43:22,918 INFO success: postgresql entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-12-15 17:43:22,918 INFO success: redis entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-12-15 17:43:22,918 INFO success: backend entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-12-15 17:43:22,918 INFO success: celeryworker entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-12-15 17:43:22,918 INFO success: exportworker entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2022-12-15 17:43:22,918 INFO success: webfrontend entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
 [BACKEND][2022-12-15 17:43:23] INFO 2022-12-15 17:43:22,917 xmlschema.include_schema:1250- Resource 'XMLSchema.xsd' is already loaded   
 [BACKEND][2022-12-15 17:43:23] INFO 2022-12-15 17:43:23,078 xmlschema.include_schema:1250- Resource 'XMLSchema.xsd' is already loaded   
 [BACKEND][2022-12-15 17:43:23] INFO 2022-12-15 17:43:23,144 xmlschema.include_schema:1250- Resource 'XMLSchema.xsd' is already loaded   
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [515] [INFO] Started server process [515]  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [515] [INFO] Waiting for application startup.  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [515] [INFO] ASGI 'lifespan' protocol appears unsupported.  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [515] [INFO] Application startup complete.  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [517] [INFO] Started server process [517]  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [517] [INFO] Waiting for application startup.  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [517] [INFO] ASGI 'lifespan' protocol appears unsupported.  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [517] [INFO] Application startup complete.  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [516] [INFO] Started server process [516]  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [516] [INFO] Waiting for application startup.  
 [BACKEND][2022-12-15 17:43:23] [2022-12-15 17:43:23 +0000] [516] [INFO] ASGI 'lifespan' protocol appears unsupported.  
 [BACKEND][2022-12-15 17:43:24] [2022-12-15 17:43:23 +0000] [516] [INFO] Application startup complete.  
 [BASEROW-WATCHER][2022-12-15 17:43:24] Waiting for Baserow to become available, this might take 30+ seconds...  
 [BASEROW-WATCHER][2022-12-15 17:43:24] =======================================================================  
 [BASEROW-WATCHER][2022-12-15 17:43:24] Baserow is now available at http://192.168.86.200:9988  
 [BACKEND][2022-12-15 17:43:44] 127.0.0.1:33026 - "GET /_health/ HTTP/1.1" 200  
 [BACKEND][2022-12-15 17:43:45] 127.0.0.1:33048 - "GET /_health/ HTTP/1.1" 200  
 [BACKEND][2022-12-15 17:44:04] 127.0.0.1:33060 - "GET /_health/ HTTP/1.1" 200  
![Screenshot 2022-12-15 at 12.46.42|689x247](upload://UVnRNKcMbBnKtruIHIy6rdx7fM.png)

Note: this error was the original reason I changed the permissions in the first place.

So, do you think I need to change permissions on some of the subfolders within the baserow folder (like media)?

Any help getting this instance up and running greatly appreciated!

If it helps at all, here are the permissions for the contents of the baserow folder. I have not altered anything myself.

The weird thing is the media folder already appears to have full permissions, so I am stymied!

Thank you.

Hm, it looks like your /baserow/data folder is not owned by the 9999:9999 user, which might be causing this problem.

You could try `sudo chown 9999:9999 /volume2/docker/baserow` and then restart Baserow.

Alternatively, we’ve previously recommended not using a bind mount in the volumes section, but instead a named volume, which docker-compose will create for you automatically. You could make your Portainer stack look like the following. Warning: this will create a new volume and restart your Baserow from scratch, data-wise.

version: '3.3'
services:
    baserow:
        container_name: baserow
        ports:
            - '9988:80'
        environment:
            - 'BASEROW_PUBLIC_URL=http://192.168.86.200:9988'
        volumes:
            - 'baserow_data:/baserow/data'
        restart: always
        image: 'baserow/baserow:1.13.0'

The change I made is in the volumes: section.

Thank you so much, @nigel !

With your suggestion to use a Docker named volume rather than a bind mount, all is working perfectly. My files work as they should, and the templates have synced.

I will note that, as shown above, the Docker Compose YAML file just needs two more lines at the very end (a top-level volumes: section) to avoid a “volume not found” error.

I’m posting the final file here for others’ benefit, as it seems Synology has some quirks in its Docker implementation that may not exist on other platforms.

Thanks again for the help!

services:
    baserow:
        container_name: baserow
        ports:
            - '9988:80'
        environment:
            - 'BASEROW_PUBLIC_URL=http://192.168.86.200:9988'
        volumes:
            - 'baserow_data:/baserow/data'
        restart: always
        image: 'baserow/baserow:1.13.0'
volumes:
  baserow_data:

I think this problem might just be a general Docker caveat, not even a Portainer oddity.

When you use a bind mount, it will have some pre-existing permissions, and this results in the user (9999:9999) running Baserow inside the Docker containers being unable to read or write to that bind mount.

Our docs should have a clearer section that:

  • Gives troubleshooting steps for file system access
  • Points out this bind-mount problem explicitly, along with the solution (manually change the folder’s permissions and owner to 9999:9999)

I’ve made an issue to track this change here: Add file system issues troubleshooting section to documentation (#1456) · Issues · Bram Wiepjes / baserow · GitLab