Plugin migration is not working properly on Windows

Hi there,

I am trying to develop a plugin based on the field type tutorial (Field type // Baserow), but I got stuck: the changes I made are not reflected in the UI.

According to the official documentation, these commands should work:

export PLUGIN_BUILD_UID=$(id -u)
export PLUGIN_BUILD_GID=$(id -g)
docker-compose run my-baserow-plugin /baserow.sh backend-cmd manage makemigrations
docker-compose run my-baserow-plugin /baserow.sh backend-cmd manage migrate
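One detail that may matter on Windows (this is my assumption, not from the docs): `export` and `id` only exist in POSIX shells such as Git Bash or WSL, so under PowerShell or cmd the first two lines fail silently and the build UID/GID stay unset. A defensive sketch for Git Bash/WSL, falling back to 1000 (the usual first-user UID/GID on Linux; a hypothetical default, adjust if yours differs):

```shell
# Use id -u / id -g where available; otherwise fall back to 1000.
export PLUGIN_BUILD_UID="${PLUGIN_BUILD_UID:-$(id -u 2>/dev/null || echo 1000)}"
export PLUGIN_BUILD_GID="${PLUGIN_BUILD_GID:-$(id -g 2>/dev/null || echo 1000)}"
echo "Building with UID=$PLUGIN_BUILD_UID GID=$PLUGIN_BUILD_GID"
```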

But for me, these are not working correctly. I checked another thread here and Nigel suggested using the following commands instead:

docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin backend-cmd manage makemigrations
docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin backend-cmd-with-db manage migrate
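Put together, those two suggested commands look like this (just a small script sketch wrapping the same commands, so the migrate step only runs if makemigrations succeeds):

```shell
#!/usr/bin/env bash
set -euo pipefail  # stop at the first failing command

# Generate migration files for the plugin (no database needed).
docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin \
  backend-cmd manage makemigrations

# Apply them, with the embedded database started first.
docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin \
  backend-cmd-with-db manage migrate
```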

I saw several warnings like this:

docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin backend-cmd manage makemigrations
 [STARTUP][2022-11-30 12:57:13] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
 [STARTUP][2022-11-30 12:57:13] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
 [STARTUP][2022-11-30 12:57:13] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
 [STARTUP][2022-11-30 12:57:13] Importing SECRET_KEY secret from /baserow/data/.secret
 [STARTUP][2022-11-30 12:57:13] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
 [STARTUP][2022-11-30 12:57:13] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
Loaded backend plugins: my_baserow_plugin
WARNING: Baserow is configured to use a BASEROW_PUBLIC_URL of http://localhost. If you attempt to access Baserow on any other hostname requests to the backend will fail as they will be from an unknown host. Please set BASEROW_PUBLIC_URL if you will be accessing Baserow from any other URL then http://localhost.
/baserow/venv/lib/python3.9/site-packages/django/core/management/commands/makemigrations.py:105: RuntimeWarning: Got an error checking a consistent migration history performed for database connection 'default': connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
        Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
        Is the server running on that host and accepting TCP/IP connections?

  warnings.warn(

But at least something happened. After the second command, I was expecting the new field type to be available, but it isn't.

Maybe I overlooked something; could you please shed some light on how I could get it to work?

Thanks in advance!

cspocsai


Hey-hey, @nigel, can you please take a look at this request?


@cspocsai Can you provide the full output of the docker-compose command? Did it actually end up creating a new migration file which adds your new field's metadata table? Did you then run the migrations and actually create this table? Finally, did you finish the entire guide, including the frontend part? If you only got up to running that docker command, then the web-frontend will still have no idea about your new field type and won't show anything.
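To make the first two checks concrete, something like the following should tell you (a sketch; the migrations path assumes the plugin boilerplate's default layout, and `my_baserow_plugin` is the app label from the tutorial):

```shell
# 1. Did makemigrations actually generate a migration file for the plugin app?
#    (path assumes the default plugin boilerplate layout)
ls plugins/my_baserow_plugin/backend/src/my_baserow_plugin/migrations/

# 2. Were those migrations applied? Django's showmigrations marks applied
#    migrations with [X] and unapplied ones with [ ].
docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin \
  backend-cmd-with-db manage showmigrations my_baserow_plugin
```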

Hi @nigel, thanks for reaching out to me.

First of all, yes, I ran the migration commands, and I can also see new files under the backend's migrations folder:

Second, yes, I finished the frontend part as well; I added the components as well as the other parts, see here:

Last but not least, after I ran the migrate command (the one you provided), the whole Docker image became corrupted due to a PostgreSQL database issue. Here are the logs:

=========================================================================================

██████╗  █████╗ ███████╗███████╗██████╗  ██████╗ ██╗    ██╗
██╔══██╗██╔══██╗██╔════╝██╔════╝██╔══██╗██╔═══██╗██║    ██║
██████╔╝███████║███████╗█████╗  ██████╔╝██║   ██║██║ █╗ ██║
██╔══██╗██╔══██║╚════██║██╔══╝  ██╔══██╗██║   ██║██║███╗██║
██████╔╝██║  ██║███████║███████╗██║  ██║╚██████╔╝╚███╔███╔╝
╚═════╝ ╚═╝  ╚═╝╚══════╝╚══════╝╚═╝  ╚═╝ ╚═════╝  ╚══╝╚══╝

Version 1.13.1

=========================================================================================

Welcome to Baserow. See https://baserow.io/installation/install-with-docker/ for detailed instructions on how to use this Docker image.

[STARTUP][2022-12-01 18:56:26] Running setup of embedded baserow database.
[POSTGRES_INIT][2022-12-01 18:56:26] Becoming postgres superuser to run setup SQL commands:
[POSTGRES_INIT][2022-12-01 18:56:26] PostgreSQL Database directory appears to contain a database; Skipping initialization
[STARTUP][2022-12-01 18:56:26] No BASEROW_PUBLIC_URL environment variable provided. Starting baserow locally at http://localhost without automatic https.
[PLUGIN][SETUP] Found a plugin in /baserow/data/plugins/my_baserow_plugin/, ensuring it is installed...
[PLUGIN][my_baserow_plugin] Found a backend app for my_baserow_plugin.
[PLUGIN][my_baserow_plugin] Skipping install of my_baserow_plugin's backend app as it is already installed.
[PLUGIN][my_baserow_plugin] Skipping runtime setup of my_baserow_plugin's backend app.
[PLUGIN][my_baserow_plugin] Found a web-frontend module for my_baserow_plugin.
[PLUGIN][my_baserow_plugin] Skipping build of my_baserow_plugin web-frontend module as it has already been built.
[PLUGIN][my_baserow_plugin] Skipping runtime setup of my_baserow_plugin's web-frontend module.
[PLUGIN][my_baserow_plugin] Fixing ownership of plugins from 0 to baserow_docker_user in /baserow/data/plugins
[PLUGIN][my_baserow_plugin] Finished setting up my_baserow_plugin successfully.
[STARTUP][2022-12-01 18:56:27] Starting all Baserow processes:
2022-12-01 18:56:27,391 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-12-01 18:56:27,391 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-postgres.conf" during parsing
2022-12-01 18:56:27,391 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-redis.conf" during parsing
2022-12-01 18:56:27,393 INFO supervisord started with pid 1
2022-12-01 18:56:28,395 INFO spawned: 'processes' with pid 187
2022-12-01 18:56:28,396 INFO spawned: 'baserow-watcher' with pid 188
2022-12-01 18:56:28,397 INFO spawned: 'caddy' with pid 189
2022-12-01 18:56:28,398 INFO spawned: 'postgresql' with pid 190
2022-12-01 18:56:28,400 INFO spawned: 'redis' with pid 191
2022-12-01 18:56:28,401 INFO spawned: 'backend' with pid 193
2022-12-01 18:56:28,403 INFO spawned: 'celeryworker' with pid 198
2022-12-01 18:56:28,409 INFO spawned: 'exportworker' with pid 225
2022-12-01 18:56:28,411 INFO spawned: 'webfrontend' with pid 234
2022-12-01 18:56:28,412 INFO spawned: 'beatworker' with pid 237
2022-12-01 18:56:28,413 INFO reaped unknown pid 185 (exit status 0)
[REDIS][2022-12-01 18:56:28] 191:C 01 Dec 2022 18:56:28.453 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
[REDIS][2022-12-01 18:56:28] 191:C 01 Dec 2022 18:56:28.453 # Redis version=6.0.16, bits=64, commit=00000000, modified=0, pid=191, just started
[REDIS][2022-12-01 18:56:28] 191:C 01 Dec 2022 18:56:28.453 # Configuration loaded
[REDIS][2022-12-01 18:56:28] 191:M 01 Dec 2022 18:56:28.458 * Running mode=standalone, port=6379.
[REDIS][2022-12-01 18:56:28] 191:M 01 Dec 2022 18:56:28.458 # Server initialized
[REDIS][2022-12-01 18:56:28] 191:M 01 Dec 2022 18:56:28.458 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[REDIS][2022-12-01 18:56:28] 191:M 01 Dec 2022 18:56:28.458 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.502 UTC [190] LOG:  listening on IPv4 address "127.0.0.1", port 5432
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.502 UTC [190] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.502 UTC [190] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.511 UTC [190] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.532 UTC [402] LOG:  database system was shut down at 2022-11-30 14:50:50 UTC
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.533 UTC [402] LOG:  invalid resource manager ID 105 at 0/3180AC0
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.533 UTC [402] LOG:  invalid primary checkpoint record
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.533 UTC [402] PANIC:  could not locate a valid checkpoint record
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.541 UTC [190] LOG:  startup process (PID 402) was terminated by signal 6: Aborted
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.541 UTC [190] LOG:  aborting startup due to startup process failure
[POSTGRES][2022-12-01 18:56:28] 2022-12-01 18:56:28.542 UTC [190] LOG:  database system is shut down
2022-12-01 18:56:28,549 INFO exited: postgresql (exit status 1; not expected)
[BACKEND][2022-12-01 18:56:28] Error: Failed to connect to the postgresql database at localhost
[BACKEND][2022-12-01 18:56:28] Please see the error below for more details:
[BACKEND][2022-12-01 18:56:28] connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL:  the database system is starting up
[CADDY][2022-12-01 18:56:28] {"level":"info","ts":1669920988.5674763,"msg":"using provided configuration","config_file":"/baserow/caddy/Caddyfile","config_adapter":""}
2022-12-01 18:56:28,570 INFO reaped unknown pid 278 (exit status 141)
2022-12-01 18:56:28,570 INFO reaped unknown pid 403 (exit status 1)
[CADDY][2022-12-01 18:56:28] {"level":"warn","ts":1669920988.570174,"msg":"input is not formatted with 'caddy fmt'","adapter":"caddyfile","file":"/baserow/caddy/Caddyfile","line":2}
[CADDY][2022-12-01 18:56:28] {"level":"info","ts":1669920988.571802,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["[::1]:2019","127.0.0.1:2019","localhost:2019"]}
[CADDY][2022-12-01 18:56:28] {"level":"info","ts":1669920988.5721884,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
[CADDY][2022-12-01 18:56:28] {"level":"info","ts":1669920988.5728242,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000d89a0"}
[CADDY][2022-12-01 18:56:28] {"level":"info","ts":1669920988.5737472,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/baserow/data/caddy/data/caddy"}
[CADDY][2022-12-01 18:56:28] {"level":"info","ts":1669920988.5738244,"logger":"tls","msg":"finished cleaning storage units"}
[CADDY][2022-12-01 18:56:28] {"level":"info","ts":1669920988.5742636,"msg":"autosaved config (load with --resume flag)","file":"/baserow/data/caddy/config/caddy/autosave.json"}
[WEBFRONTEND][2022-12-01 18:56:28] yarn run v1.22.19
[WEBFRONTEND][2022-12-01 18:56:29] $ nuxt --hostname 0.0.0.0

2022-12-01 18:56:29,560 INFO success: processes entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-12-01 18:56:29,561 INFO success: baserow-watcher entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-12-01 18:56:29,561 INFO spawned: 'postgresql' with pid 427
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.593 UTC [427] LOG:  listening on IPv4 address "127.0.0.1", port 5432
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.593 UTC [427] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.593 UTC [427] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.602 UTC [427] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.620 UTC [438] LOG:  database system was shut down at 2022-11-30 14:50:50 UTC
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.620 UTC [438] LOG:  invalid resource manager ID 105 at 0/3180AC0
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.620 UTC [438] LOG:  invalid primary checkpoint record
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.620 UTC [438] PANIC:  could not locate a valid checkpoint record
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.627 UTC [427] LOG:  startup process (PID 438) was terminated by signal 6: Aborted
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.627 UTC [427] LOG:  aborting startup due to startup process failure
[POSTGRES][2022-12-01 18:56:29] 2022-12-01 18:56:29.628 UTC [427] LOG:  database system is shut down
2022-12-01 18:56:29,629 INFO exited: postgresql (exit status 1; not expected)
2022-12-01 18:56:29,630 INFO reaped unknown pid 436 (exit status 0)
[WEBFRONTEND][2022-12-01 18:56:30] Loading extra plugin modules: /baserow/data/plugins/my_baserow_plugin/web-frontend/modules/my-baserow-plugin/module.js
[BACKEND][2022-12-01 18:56:30] Waiting for PostgreSQL to become available attempt  0/5 ...
[BACKEND][2022-12-01 18:56:30] Error: Failed to connect to the postgresql database at localhost
[BACKEND][2022-12-01 18:56:30] Please see the error below for more details:
[BACKEND][2022-12-01 18:56:30] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
[BACKEND][2022-12-01 18:56:30]  Is the server running on that host and accepting TCP/IP connections?
[BACKEND][2022-12-01 18:56:30] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
[BACKEND][2022-12-01 18:56:30]  Is the server running on that host and accepting TCP/IP connections?
[WEBFRONTEND][2022-12-01 18:56:31] ℹ Listening on: http://172.22.0.2:3000/
[WEBFRONTEND][2022-12-01 18:56:31] ℹ Preparing project for development
[WEBFRONTEND][2022-12-01 18:56:31] ℹ Initial build may take a while
[WEBFRONTEND][2022-12-01 18:56:31]  WARN  No pages directory found in /baserow/web-frontend. Using the default built-in page.
[WEBFRONTEND][2022-12-01 18:56:31] ✔ Builder initialized
2022-12-01 18:56:32,338 INFO spawned: 'postgresql' with pid 445
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.379 UTC [445] LOG:  listening on IPv4 address "127.0.0.1", port 5432
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.379 UTC [445] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.379 UTC [445] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.388 UTC [445] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.408 UTC [456] LOG:  database system was shut down at 2022-11-30 14:50:50 UTC
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.408 UTC [456] LOG:  invalid resource manager ID 105 at 0/3180AC0
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.408 UTC [456] LOG:  invalid primary checkpoint record
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.408 UTC [456] PANIC:  could not locate a valid checkpoint record
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.415 UTC [445] LOG:  startup process (PID 456) was terminated by signal 6: Aborted
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.415 UTC [445] LOG:  aborting startup due to startup process failure
[POSTGRES][2022-12-01 18:56:32] 2022-12-01 18:56:32.416 UTC [445] LOG:  database system is shut down
2022-12-01 18:56:32,417 INFO exited: postgresql (exit status 1; not expected)
2022-12-01 18:56:32,417 INFO reaped unknown pid 454 (exit status 0)
[WEBFRONTEND][2022-12-01 18:56:32] ✔ Nuxt files generated
[WEBFRONTEND][2022-12-01 18:56:32] ℹ Compiling Client
[BACKEND][2022-12-01 18:56:32] Waiting for PostgreSQL to become available attempt  1/5 ...
[BACKEND][2022-12-01 18:56:32] Error: Failed to connect to the postgresql database at localhost
[BACKEND][2022-12-01 18:56:32] Please see the error below for more details:
[BACKEND][2022-12-01 18:56:32] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
[BACKEND][2022-12-01 18:56:32]  Is the server running on that host and accepting TCP/IP connections?
[BACKEND][2022-12-01 18:56:32] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
[BACKEND][2022-12-01 18:56:32]  Is the server running on that host and accepting TCP/IP connections?
[WEBFRONTEND][2022-12-01 18:56:32] ℹ Compiling Server
[WEBFRONTEND][2022-12-01 18:56:32]  WARN  Browserslist: caniuse-lite is outdated. Please run:
[WEBFRONTEND][2022-12-01 18:56:32] npx browserslist@latest --update-db
[WEBFRONTEND][2022-12-01 18:56:32] Why you should do it regularly:
[WEBFRONTEND][2022-12-01 18:56:32] https://github.com/browserslist/browserslist#browsers-data-updating
[BACKEND][2022-12-01 18:56:34] Waiting for PostgreSQL to become available attempt  2/5 ...
[BACKEND][2022-12-01 18:56:34] Error: Failed to connect to the postgresql database at localhost
[BACKEND][2022-12-01 18:56:34] Please see the error below for more details:
[BACKEND][2022-12-01 18:56:34] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
[BACKEND][2022-12-01 18:56:34]  Is the server running on that host and accepting TCP/IP connections?
[BACKEND][2022-12-01 18:56:34] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
[BACKEND][2022-12-01 18:56:34]  Is the server running on that host and accepting TCP/IP connections?
2022-12-01 18:56:35,721 INFO spawned: 'postgresql' with pid 461
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.762 UTC [461] LOG:  listening on IPv4 address "127.0.0.1", port 5432
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.762 UTC [461] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.762 UTC [461] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.770 UTC [461] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.791 UTC [472] LOG:  database system was shut down at 2022-11-30 14:50:50 UTC
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.791 UTC [472] LOG:  invalid resource manager ID 105 at 0/3180AC0
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.791 UTC [472] LOG:  invalid primary checkpoint record
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.791 UTC [472] PANIC:  could not locate a valid checkpoint record
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.797 UTC [461] LOG:  startup process (PID 472) was terminated by signal 6: Aborted
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.797 UTC [461] LOG:  aborting startup due to startup process failure
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.798 UTC [461] LOG:  database system is shut down
2022-12-01 18:56:35,800 INFO exited: postgresql (exit status 1; not expected)
2022-12-01 18:56:35,800 INFO reaped unknown pid 470 (exit status 0)
2022-12-01 18:56:35,800 INFO gave up: postgresql entered FATAL state, too many start retries too quickly

[BACKEND][2022-12-01 18:56:36] Waiting for PostgreSQL to become available attempt  3/5 ...
[BACKEND][2022-12-01 18:56:36] Error: Failed to connect to the postgresql database at localhost
[BACKEND][2022-12-01 18:56:36] Please see the error below for more details:
[BACKEND][2022-12-01 18:56:36] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
[BACKEND][2022-12-01 18:56:36]  Is the server running on that host and accepting TCP/IP connections?
[BACKEND][2022-12-01 18:56:36] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
[BACKEND][2022-12-01 18:56:36]  Is the server running on that host and accepting TCP/IP connections?
Baserow was stopped or one of it's services crashed, see the logs above for more details.
2022-12-01 18:56:36,768 WARN received SIGTERM indicating exit request
2022-12-01 18:56:36,768 INFO waiting for processes, baserow-watcher, caddy, redis, backend, celeryworker, exportworker, webfrontend, beatworker to die
[BACKEND][2022-12-01 18:56:38] Waiting for PostgreSQL to become available attempt  4/5 ...
[BACKEND][2022-12-01 18:56:38] Error: Failed to connect to the postgresql database at localhost
[BACKEND][2022-12-01 18:56:38] Please see the error below for more details:
[BACKEND][2022-12-01 18:56:38] connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
[BACKEND][2022-12-01 18:56:38]  Is the server running on that host and accepting TCP/IP connections?
[BACKEND][2022-12-01 18:56:38] connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
[BACKEND][2022-12-01 18:56:38]  Is the server running on that host and accepting TCP/IP connections?
2022-12-01 18:56:39,818 INFO waiting for processes, baserow-watcher, caddy, redis, backend, celeryworker, exportworker, webfrontend, beatworker to die
[BACKEND][2022-12-01 18:56:40] Waiting for PostgreSQL to become available attempt  5/5 ...
[BACKEND][2022-12-01 18:56:40] PostgreSQL did not become available in time...
2022-12-01 18:56:40,818 INFO exited: backend (exit status 1; not expected)
2022-12-01 18:56:40,818 INFO reaped unknown pid 250 (exit status 0)
2022-12-01 18:56:42,821 INFO waiting for processes, baserow-watcher, caddy, redis, backend, celeryworker, exportworker, webfrontend, beatworker to die
[BEAT_WORKER][2022-12-01 18:56:44] Sleeping for 15 before starting beat to prevent  startup errors.
[BEAT_WORKER][2022-12-01 18:56:44] Loaded backend plugins: my_baserow_plugin
[BEAT_WORKER][2022-12-01 18:56:44] WARNING: Baserow is configured to use a BASEROW_PUBLIC_URL of http://localhost. If you attempt to access Baserow on any other hostname requests to the backend will fail as they will be from an unknown host. Please set BASEROW_PUBLIC_URL if you will be accessing Baserow from any other URL then http://localhost.
2022-12-01 18:56:46,394 INFO waiting for processes, baserow-watcher, caddy, redis, backend, celeryworker, exportworker, webfrontend, beatworker to die
[BEAT_WORKER][2022-12-01 18:56:46] celery beat v5.2.3 (dawn-chorus) is starting.
[BEAT_WORKER][2022-12-01 18:56:46] __    -    ... __   -        _
[BEAT_WORKER][2022-12-01 18:56:46] LocalTime -> 2022-12-01 18:56:46
[BEAT_WORKER][2022-12-01 18:56:46] Configuration ->
[BEAT_WORKER][2022-12-01 18:56:46]     . broker -> redis://:**@localhost:6379/0
[BEAT_WORKER][2022-12-01 18:56:46]     . loader -> celery.loaders.app.AppLoader
[BEAT_WORKER][2022-12-01 18:56:46]     . scheduler -> redbeat.schedulers.RedBeatScheduler
[BEAT_WORKER][2022-12-01 18:56:46]        . redis -> redis://:**@localhost:6379/0
[BEAT_WORKER][2022-12-01 18:56:46]        . lock -> `redbeat::lock` 1.33 minutes (80s)
[BEAT_WORKER][2022-12-01 18:56:46]     . logfile -> [stderr]@%INFO
[BEAT_WORKER][2022-12-01 18:56:46]     . maxinterval -> 20.00 seconds (20s)
2022-12-01 18:56:47,753 WARN killing 'beatworker' (237) with SIGKILL
[BEAT_WORKER][2022-12-01 18:56:47] [2022-12-01 18:56:46,751: INFO/MainProcess] beat: Starting...
2022-12-01 18:56:47,758 INFO stopped: beatworker (terminated by SIGKILL)
2022-12-01 18:56:47,758 INFO reaped unknown pid 304 (exit status 0)
[BASEROW-WATCHER][2022-12-01 18:56:48] Waiting for Baserow to become available, this might take 30+ seconds...
2022-12-01 18:56:48,470 INFO stopped: webfrontend (exit status 1)
2022-12-01 18:56:48,470 INFO reaped unknown pid 419 (terminated by SIGTERM)
[EXPORT_WORKER][2022-12-01 18:56:48] watchmedo auto-restart  -d=/baserow/backend/src -d=/baserow/premium/backend/src -d=/baserow/enterprise/backend/src --pattern=*.py --recursive -- bash /baserow/backend/docker/docker-entrypoint.sh celery-exportworker
2022-12-01 18:56:48,470 INFO stopped: exportworker (terminated by SIGTERM)
2022-12-01 18:56:48,471 INFO reaped unknown pid 387 (exit status 0)
[CELERY_WORKER][2022-12-01 18:56:48] watchmedo auto-restart  -d=/baserow/backend/src -d=/baserow/premium/backend/src -d=/baserow/enterprise/backend/src --pattern=*.py --recursive -- bash /baserow/backend/docker/docker-entrypoint.sh celery-worker
2022-12-01 18:56:48,471 INFO stopped: celeryworker (terminated by SIGTERM)
2022-12-01 18:56:48,471 INFO reaped unknown pid 277 (exit status 0)
2022-12-01 18:56:48,471 INFO reaped unknown pid 265 (exit status 0)
2022-12-01 18:56:48,472 INFO reaped unknown pid 369 (exit status 0)
[REDIS][2022-12-01 18:56:48] 191:M 01 Dec 2022 18:56:28.458 * Ready to accept connections
[REDIS][2022-12-01 18:56:48] 191:signal-handler (1669921008) Received SIGTERM scheduling shutdown...
[REDIS][2022-12-01 18:56:48] 191:M 01 Dec 2022 18:56:48.499 # User requested shutdown...
[REDIS][2022-12-01 18:56:48] 191:M 01 Dec 2022 18:56:48.499 # Redis is now ready to exit, bye bye...
2022-12-01 18:56:48,500 INFO stopped: redis (exit status 0)
2022-12-01 18:56:48,500 INFO reaped unknown pid 245 (exit status 0)
[CADDY][2022-12-01 18:56:48] {"level":"info","ts":1669920988.57429,"msg":"serving initial configuration"}
[CADDY][2022-12-01 18:56:48] {"level":"info","ts":1669921008.5004604,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
[CADDY][2022-12-01 18:56:48] {"level":"warn","ts":1669921008.500493,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
[CADDY][2022-12-01 18:56:48] {"level":"info","ts":1669921008.5017242,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc0000d89a0"}
[CADDY][2022-12-01 18:56:48] {"level":"info","ts":1669921008.5039456,"logger":"admin","msg":"stopped previous server","address":"tcp/localhost:2019"}
[CADDY][2022-12-01 18:56:48] {"level":"info","ts":1669921008.503974,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
2022-12-01 18:56:48,505 INFO stopped: caddy (exit status 0)
2022-12-01 18:56:48,505 INFO reaped unknown pid 226 (exit status 0)
2022-12-01 18:56:49,506 INFO stopped: baserow-watcher (terminated by SIGTERM)
2022-12-01 18:56:49,506 INFO waiting for processes to die
2022-12-01 18:56:49,507 INFO stopped: processes (terminated by SIGTERM)

I am not sure why it is not working properly; can you please shed some light on that?

cspocsai


[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.791 UTC [472] LOG:  database system was shut down at 2022-11-30 14:50:50 UTC
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.791 UTC [472] LOG:  invalid resource manager ID 105 at 0/3180AC0
[POSTGRES][2022-12-01 18:56:35] 2022-12-01 18:56:35.791 UTC [472] LOG:  invalid primary checkpoint record

Yup, this is something corrupt in your postgres database. If you don’t mind throwing away all of your data, try deleting the docker baserow_data volume and starting again. Otherwise you are going to have to use lower-level postgres commands inside the container to try to fix your corrupted database (postgresql error PANIC: could not locate a valid checkpoint record - Stack Overflow).

FYI: We don’t test or officially support these images on Windows; all testing and usage of these images is done on Linux/macOS. So here be dragons, potentially.

1 Like

I tried it multiple times (deleted everything, created a new plugin, all sorts of things), but the migration command you provided always corrupts the postgres db.
Question 1: are you sure that these are the right commands for plugin migration?

docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin backend-cmd manage makemigrations
docker-compose -f docker-compose.dev.yml run --rm my-baserow-plugin backend-cmd-with-db manage migrate

Question 2: is there any logical reason why you are not testing it on Windows?

Those commands look fine to me and also work for me.

One thing that might help: are you also getting the following warnings when using these commands? You might want to try setting these env variables if you aren’t already:

WARNING: The PLUGIN_BUILD_UID variable is not set. Defaulting to a blank string.
WARNING: The PLUGIN_BUILD_GID variable is not set. Defaulting to a blank string.

One way of setting these is prefixing the commands with:

PLUGIN_BUILD_UID=$(id -u) PLUGIN_BUILD_GID=$(id -g) docker-compose -f docker-compose.dev.yml run ...
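Alternatively, you can export them once per shell session and sanity-check them before running anything. This is only a sketch; the variable names come from the plugin compose file, and the numeric check is my own addition:

```shell
# Export the host UID/GID once so every subsequent docker compose command in
# this shell picks them up (avoids the "defaulting to a blank string" warnings).
export PLUGIN_BUILD_UID=$(id -u)
export PLUGIN_BUILD_GID=$(id -g)

# Sanity-check: both should be plain numbers, never blank.
case "$PLUGIN_BUILD_UID" in ''|*[!0-9]*) echo "PLUGIN_BUILD_UID not numeric" >&2; exit 1;; esac
case "$PLUGIN_BUILD_GID" in ''|*[!0-9]*) echo "PLUGIN_BUILD_GID not numeric" >&2; exit 1;; esac
echo "building as $PLUGIN_BUILD_UID:$PLUGIN_BUILD_GID"
```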

But I’m mainly suspicious that backend-cmd-with-db, potentially combined with running on Windows, might be corrupting the database by not shutting it down correctly. You can avoid this command entirely by instead:

  1. Starting the dev container normally like you would run it as a server
  2. Execing into the dev container
  3. Running all the commands you want like ./baserow migrate inside of the execed shell in the container.

So in actual steps you could:

  1. export PLUGIN_BUILD_UID=$(id -u)
  2. export PLUGIN_BUILD_GID=$(id -g)
  3. check these env vars have been set to numbers
    1. echo $PLUGIN_BUILD_UID
    2. echo $PLUGIN_BUILD_GID
  4. WARNING: this will delete ALL of the volumes associated with this Baserow plugin/server and all of your data with them.
    1. docker-compose -f docker-compose.dev.yml down -v
  5. docker-compose -f docker-compose.dev.yml up -d
  6. Check that Baserow has started fully:
    1. docker-compose -f docker-compose.dev.yml logs
  7. This will open a bash shell inside of your Baserow server's container
    1. docker-compose -f docker-compose.dev.yml exec my-baserow-plugin /baserow.sh backend-cmd bash -c bash
  8. ./baserow makemigrations
  9. ./baserow migrate

Let me know if you have any problems with the above. Sorry for the long delay, I have been on holiday :slight_smile:

Finally, we aren’t testing on Windows because we have no developers who use Windows. We also do not yet have our automated build processes set up to support building and testing on Windows.

Baserow works perfectly fine on WSL2 on Windows; I would highly recommend that over using a Windows python interpreter.

I’m running into the same issue, unable to run migrations with postgres seemingly not up inside the docker container. I’m running on a linux VM, not on windows.

luc@vocabai$ docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage makemigrations
 [STARTUP][2023-04-14 12:46:54] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
 [STARTUP][2023-04-14 12:46:54] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
 [STARTUP][2023-04-14 12:46:54] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
 [STARTUP][2023-04-14 12:46:54] Importing SECRET_KEY secret from /baserow/data/.secret
 [STARTUP][2023-04-14 12:46:54] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
 [STARTUP][2023-04-14 12:46:54] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.16.0,deployment.environment=unknown
Loaded backend plugins: baserow_translate_plugin
WARNING: Baserow is configured to use a BASEROW_PUBLIC_URL of http://localhost. If you attempt to access Baserow on any other hostname requests to the backend will fail as they will be from an unknown host. Please set BASEROW_PUBLIC_URL if you will be accessing Baserow from any other URL then http://localhost.
/baserow/venv/lib/python3.9/site-packages/django/core/management/commands/makemigrations.py:105: RuntimeWarning: Got an error checking a consistent migration history performed for database connection 'default': connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
        Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
        Is the server running on that host and accepting TCP/IP connections?

  warnings.warn(
No changes detected

When attaching to the container, I don’t see anything that looks like a postgres instance:

luc@vocabai$ docker compose -f docker-compose.dev.yml exec baserow-translate-plugin /baserow.sh backend-cmd bash -c bash
 [STARTUP][2023-04-14 12:47:52] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
 [STARTUP][2023-04-14 12:47:52] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
 [STARTUP][2023-04-14 12:47:53] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
 [STARTUP][2023-04-14 12:47:53] Importing SECRET_KEY secret from /baserow/data/.secret
 [STARTUP][2023-04-14 12:47:53] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
 [STARTUP][2023-04-14 12:47:53] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.16.0,deployment.environment=unknown
baserow_docker_user@e5587ec169c5:/baserow/backend$ ps x
    PID TTY      STAT   TIME COMMAND
    191 pts/0    S      0:00 bash /baserow/supervisor/baserow-watcher.sh
    196 pts/0    S      0:00 /bin/bash /baserow/backend/docker/docker-entrypoint.sh django-dev-no-attach
    199 pts/0    S      0:00 bash --init-file /dev/fd/63
    203 pts/0    S      0:00 bash --init-file /dev/fd/63
    214 pts/0    Sl     0:01 node /usr/bin/yarn run dev
    224 pts/0    S      0:03 /baserow/venv/bin/python /baserow/venv/bin/celery -A baserow beat -l INFO -S redbeat.RedBeatScheduler
    240 pts/0    S      0:00 bash /baserow/supervisor/wrapper.sh BOLD BASEROW-WATCHER /baserow/supervisor/baserow-watcher.sh
    247 pts/0    S      0:00 gawk -vRS=[\r\n] -vORS= {  print "?[1m",strftime("[BASEROW-WATCHER][%Y-%m-%d %H:%M:%S]"), $0, "?(B?[m" , RT; fflush(stdout)}
    259 pts/0    S      0:00 bash /baserow/supervisor/wrapper.sh BLUE BACKEND ./docker/docker-entrypoint.sh django-dev-no-attach
    262 pts/0    S      0:00 gawk -vRS=[\r\n] -vORS= {  print "?[34m",strftime("[BACKEND][%Y-%m-%d %H:%M:%S]"), $0, "?(B?[m" , RT; fflush(stdout)}
    274 pts/0    S      0:00 bash /baserow/supervisor/wrapper.sh CYAN EXPORT_WORKER ./docker/docker-entrypoint.sh watch-py celery-exportworker
    277 pts/0    S      0:00 gawk -vRS=[\r\n] -vORS= {  print "?[36m",strftime("[EXPORT_WORKER][%Y-%m-%d %H:%M:%S]"), $0, "?(B?[m" , RT; fflush(stdout)}
    282 pts/0    S      0:00 bash /baserow/supervisor/wrapper.sh YELLOW WEBFRONTEND ./docker/docker-entrypoint.sh nuxt-dev-no-attach
    283 pts/0    S      0:00 gawk -vRS=[\r\n] -vORS= {  print "?[33m",strftime("[WEBFRONTEND][%Y-%m-%d %H:%M:%S]"), $0, "?(B?[m" , RT; fflush(stdout)}
    286 pts/0    S      0:00 bash /baserow/supervisor/wrapper.sh CYAN BEAT_WORKER ./docker/docker-entrypoint.sh celery-beat
    291 pts/0    S      0:00 gawk -vRS=[\r\n] -vORS= {  print "?[36m",strftime("[BEAT_WORKER][%Y-%m-%d %H:%M:%S]"), $0, "?(B?[m" , RT; fflush(stdout)}
    310 pts/0    S      0:00 bash /baserow/supervisor/wrapper.sh CYAN CELERY_WORKER ./docker/docker-entrypoint.sh watch-py celery-worker
    317 pts/0    S      0:00 gawk -vRS=[\r\n] -vORS= {  print "?[36m",strftime("[CELERY_WORKER][%Y-%m-%d %H:%M:%S]"), $0, "?(B?[m" , RT; fflush(stdout)}
    404 pts/0    Z      0:00 [docker-entrypoi] <defunct>
    410 pts/0    Z      0:00 [docker-entrypoi] <defunct>
    423 pts/0    S      0:00 /bin/sh -c nuxt --hostname 0.0.0.0
    424 pts/0    Sl     5:10 node /baserow/web-frontend/node_modules/.bin/nuxt --hostname 0.0.0.0
    447 pts/0    S      0:02 python /baserow/backend/src/baserow/manage.py runserver 127.0.0.1:8000
    450 pts/0    Sl     3:28 /baserow/venv/bin/python /baserow/backend/src/baserow/manage.py runserver 127.0.0.1:8000
   5107 pts/0    S      0:00 sleep 20
   5196 pts/1    S      0:00 bash
   5221 pts/1    R+     0:00 ps x

I don’t see any suspicious warnings in the log:

luc@vocabai$ docker logs --since 4h -f baserow-translate-plugin 2>&2 | grep -i warning
 [REDIS][2023-04-14 11:44:00] 194:M 14 Apr 2023 11:44:00.846 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
 [BACKEND][2023-04-14 11:45:47] WARNING 2023-04-14 11:45:47,409 django.request.log_response:224- Unauthorized: /api/user/token-refresh/
 [BACKEND][2023-04-14 11:45:47] WARNING 2023-04-14 11:45:47,409 django.request.log_response:224- Unauthorized: /api/user/token-refresh/
 [BACKEND][2023-04-14 11:45:47] WARNING 2023-04-14 11:45:47,409 django.request.log_response:224- Unauthorized: /api/user/token-refresh/
 [BACKEND][2023-04-14 11:45:47] WARNING 2023-04-14 11:45:47,410 django.channels.server.log_action:178- HTTP POST /api/user/token-refresh/ 401 [0.05, 127.0.0.1:44738]
 [BACKEND][2023-04-14 11:45:47] WARNING 2023-04-14 11:45:47,410 django.channels.server.log_action:178- HTTP POST /api/user/token-refresh/ 401 [0.05, 127.0.0.1:44738]

Here’s what I see when looking for `postgres` in the container logs:

luc@vocabai$ docker logs --since 4h -f baserow-translate-plugin 2>&2 | grep -i postgres
 [STARTUP][2023-04-14 12:50:43] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
 [POSTGRES_INIT][2023-04-14 12:50:43] Becoming postgres superuser to run setup SQL commands:
 [POSTGRES_INIT][2023-04-14 12:50:43]
 [POSTGRES_INIT][2023-04-14 12:50:43] PostgreSQL Database directory appears to contain a database; Skipping initialization
 [POSTGRES_INIT][2023-04-14 12:50:43]
2023-04-14 12:50:44,027 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-postgres.conf" during parsing
2023-04-14 12:50:44,027 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-postgres.conf" during parsing
2023-04-14 12:50:45,036 INFO spawned: 'postgresql' with pid 193
2023-04-14 12:50:45,036 INFO spawned: 'postgresql' with pid 193
 [POSTGRES][2023-04-14 12:50:45] 2023-04-14 12:50:45.073 UTC [193] LOG:  listening on IPv4 address "127.0.0.1", port 5432
 [POSTGRES][2023-04-14 12:50:45] 2023-04-14 12:50:45.073 UTC [193] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
 [POSTGRES][2023-04-14 12:50:45] 2023-04-14 12:50:45.073 UTC [193] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
 [POSTGRES][2023-04-14 12:50:45] 2023-04-14 12:50:45.076 UTC [193] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
 [POSTGRES][2023-04-14 12:50:45] 2023-04-14 12:50:45.091 UTC [370] LOG:  database system was interrupted; last known up at 2023-04-14 12:49:11 UTC
 [BACKEND][2023-04-14 12:50:45] Error: Failed to connect to the postgresql database at localhost
 [POSTGRES][2023-04-14 12:50:46] 2023-04-14 12:50:45.173 UTC [415] baserow@baserow FATAL:  the database system is starting up
 [POSTGRES][2023-04-14 12:50:46] 2023-04-14 12:50:46.781 UTC [370] LOG:  database system was not properly shut down; automatic recovery in progress
 [POSTGRES][2023-04-14 12:50:46] 2023-04-14 12:50:46.783 UTC [370] LOG:  redo starts at 0/35A4020
 [POSTGRES][2023-04-14 12:50:46] 2023-04-14 12:50:46.785 UTC [370] LOG:  invalid record length at 0/35EF4D8: wanted 24, got 0
 [POSTGRES][2023-04-14 12:50:46] 2023-04-14 12:50:46.785 UTC [370] LOG:  redo done at 0/35EF4A0
 [POSTGRES][2023-04-14 12:50:46] 2023-04-14 12:50:46.785 UTC [370] LOG:  last completed transaction was at log time 2023-04-14 12:50:29.630998+00
 [BACKEND][2023-04-14 12:50:47] Waiting for PostgreSQL to become available attempt  0/5 ...
 [BACKEND][2023-04-14 12:50:47] PostgreSQL is available
2023-04-14 12:51:15,152 INFO success: postgresql entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-04-14 12:51:15,152 INFO success: postgresql entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)

Any idea what i’m doing wrong ?

Looks like your postgres db is corrupted, possibly by accidentally running multiple baserow/baserow containers at the same time pointing at the same volume?

The fix command shown on this issue should fix it: Investigate, document and prevent or fix postgres database corruption when two all-in-one images run at the same time with same volume (#1524) Β· Issues Β· Baserow / baserow Β· GitLab

Thank you, in my case I did the following as I didn’t mind wiping out the volume:

docker container prune
docker volume rm baserow-translate-plugin_baserow_data

It’s possible I did something wrong; I do have two plugins on that machine, but I didn’t think they would interfere with each other given the names are different. I may have abruptly stopped the container as well.

I’m unfortunately hitting another issue at start up now:

baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:33] OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.16.0,deployment.environment=unknown
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:33] PostgreSQL is available
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:33] python /baserow/backend/src/baserow/manage.py migrate
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36] Loaded backend plugins: baserow_translate_plugin
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36] Operations to perform:
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   Apply all migrations: auth, baserow_enterprise, baserow_premium, contenttypes, core, database, db, sessions, silk
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36] Traceback (most recent call last):
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/backend/src/baserow/manage.py", line 41, in <module>
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     main()
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/backend/src/baserow/manage.py", line 37, in main
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     execute_from_command_line(sys.argv)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     utility.execute()
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     self.fetch_command(subcommand).run_from_argv(self.argv)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     self.execute(*args, **cmd_options)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     output = self.handle(*args, **options)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     res = handle_func(*args, **kwargs)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 202, in handle
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     pre_migrate_apps = pre_migrate_state.apps
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     res = instance.__dict__[self.name] = self.func(instance)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/db/migrations/state.py", line 208, in apps
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     return StateApps(self.real_apps, self.models)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/db/migrations/state.py", line 270, in __init__
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     self.render_multiple([*models.values(), *self.real_models])
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]   File "/baserow/venv/lib/python3.9/site-packages/django/db/migrations/state.py", line 309, in render_multiple
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]     raise InvalidBasesError(
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36] django.db.migrations.exceptions.InvalidBasesError: Cannot resolve bases for [<ModelState: 'baserow_translate_plugin.TranslationField'>]
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36] This can happen if you are inheriting models from an app with migrations (e.g. contrib.auth)
baserow-translate-plugin  |  [BACKEND][2023-04-14 13:16:36]  in an app with no migrations; see https://docs.djangoproject.com/en/3.2/topics/migrations/#dependencies for more

After some googling this seems like a Django issue, but I thought I’d ask in case it’s a common one.
Here’s my code (a diff between a tag I made after running the cookiecutter command and now)

Looks like you are missing migrations for those new Django models, perhaps?

I couldn’t run the "makemigrations" command without the container being up (and the above migration exceptions would terminate the container). But I just tried something I saw mentioned in another thread:

~/python/baserow-translate-plugin/plugins/baserow_translate_plugin/backend/src/baserow_translate_plugin (master)
luc@vocabai$ mkdir migrations

~/python/baserow-translate-plugin/plugins/baserow_translate_plugin/backend/src/baserow_translate_plugin (master)
luc@vocabai$ touch migrations/__init__.py

After that, the container starts up and the default Baserow migrations are applied, and I suspect I’ll be able to run my plugin’s migrations.
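For anyone else hitting the InvalidBasesError, here is the workaround collected in one place. The path comes from my plugin's cookiecutter layout, so adjust it to your own plugin:

```shell
# Hypothetical plugin app path from the cookiecutter template layout
# (adjust "baserow_translate_plugin" to your plugin's name).
app=plugins/baserow_translate_plugin/backend/src/baserow_translate_plugin

# An empty migrations package makes Django treat the plugin as an app *with*
# migrations, which resolves the InvalidBasesError on startup.
mkdir -p "$app/migrations"
touch "$app/migrations/__init__.py"
```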

I’m once again getting the "postgres not up" issue:

~/python/baserow-translate-plugin (master)
luc@vocabai$ docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage migrate
 [STARTUP][2023-04-14 13:55:02] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
 [STARTUP][2023-04-14 13:55:02] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
 [STARTUP][2023-04-14 13:55:02] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
 [STARTUP][2023-04-14 13:55:02] Importing SECRET_KEY secret from /baserow/data/.secret
 [STARTUP][2023-04-14 13:55:02] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
 [STARTUP][2023-04-14 13:55:02] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.16.0,deployment.environment=unknown
Loaded backend plugins: baserow_translate_plugin
WARNING: Baserow is configured to use a BASEROW_PUBLIC_URL of http://localhost. If you attempt to access Baserow on any other hostname requests to the backend will fail as they will be from an unknown host. Please set BASEROW_PUBLIC_URL if you will be accessing Baserow from any other URL then http://localhost.
Traceback (most recent call last):
  File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
    self.connect()
  File "/baserow/venv/lib/python3.9/site-packages/django/utils/asyncio.py", line 33, in inner
    return func(*args, **kwargs)
  File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/base/base.py", line 200, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/baserow/venv/lib/python3.9/site-packages/django/utils/asyncio.py", line 33, in inner
    return func(*args, **kwargs)
  File "/baserow/venv/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/baserow/venv/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
        Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
        Is the server running on that host and accepting TCP/IP connections?

I don’t see anything obvious on the postgres logs:

luc@vocabai$ docker logs --since 1h baserow-translate-plugin  2>&1 | grep -i postgres
 [STARTUP][2023-04-14 13:44:48] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
 [POSTGRES_INIT][2023-04-14 13:44:48] Becoming postgres superuser to run setup SQL commands:
 [POSTGRES_INIT][2023-04-14 13:44:48]
 [POSTGRES_INIT][2023-04-14 13:44:48] PostgreSQL Database directory appears to contain a database; Skipping initialization
 [POSTGRES_INIT][2023-04-14 13:44:48]
2023-04-14 13:44:49,508 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-postgres.conf" during parsing
2023-04-14 13:44:49,508 INFO Included extra file "/baserow/supervisor/includes/enabled/embedded-postgres.conf" during parsing
2023-04-14 13:44:50,518 INFO spawned: 'postgresql' with pid 193
2023-04-14 13:44:50,518 INFO spawned: 'postgresql' with pid 193
 [POSTGRES][2023-04-14 13:44:50] 2023-04-14 13:44:50.667 UTC [193] LOG:  listening on IPv4 address "127.0.0.1", port 5432
 [POSTGRES][2023-04-14 13:44:50] 2023-04-14 13:44:50.667 UTC [193] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
 [POSTGRES][2023-04-14 13:44:50] 2023-04-14 13:44:50.667 UTC [193] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
 [POSTGRES][2023-04-14 13:44:50] 2023-04-14 13:44:50.680 UTC [193] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
 [POSTGRES][2023-04-14 13:44:50] 2023-04-14 13:44:50.704 UTC [416] LOG:  database system was shut down at 2023-04-14 13:31:51 UTC
 [BACKEND][2023-04-14 13:44:50] Error: Failed to connect to the postgresql database at localhost
 [POSTGRES][2023-04-14 13:44:50] 2023-04-14 13:44:50.726 UTC [417] baserow@baserow FATAL:  the database system is starting up
 [BACKEND][2023-04-14 13:44:52] Waiting for PostgreSQL to become available attempt  0/5 ...
 [BACKEND][2023-04-14 13:44:52] PostgreSQL is available
2023-04-14 13:45:21,136 INFO success: postgresql entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
2023-04-14 13:45:21,136 INFO success: postgresql entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)

BTW, I’m not expecting an immediate answer; I’m going to do a deep dive into this and see what I can figure out, and I’ll report back.

The postgres processes were running under a different user, but they are indeed up:

baserow_docker_user@56f833389d88:/baserow/backend$ ps aux | grep postgres
postgres     192  0.0  0.1 211596 27316 pts/0    S    14:00   0:00 /usr/lib/postgresql/11/bin/postgres -c config_file=/etc/postgresql/11/main/postgresql.conf
postgres     261  0.0  0.0   5792   160 pts/0    S    14:00   0:00 bash /baserow/supervisor/wrapper.sh PURPLE POSTGRES /usr/lib/postgresql/11/bin/postgres -c config_file=/etc/postgresql/11/main/postgresql.conf
postgres     275  0.0  0.0  10232  2912 pts/0    S    14:00   0:00 gawk -vRS=[\r\n] -vORS= {  print "?[35m",strftime("[POSTGRES][%Y-%m-%d %H:%M:%S]"), $0, "?(B?[m" , RT; fflush(stdout)}
postgres     430  0.0  0.0 211596  3960 ?        Ss   14:00   0:00 postgres: 11/main: checkpointer
postgres     431  0.0  0.0 211596  5824 ?        Ss   14:00   0:00 postgres: 11/main: background writer
postgres     432  0.0  0.0 211596  9424 ?        Ss   14:00   0:00 postgres: 11/main: walwriter
postgres     433  0.0  0.0 212000  6840 ?        Ss   14:00   0:00 postgres: 11/main: autovacuum launcher
postgres     434  0.0  0.0  66636  4152 ?        Ss   14:00   0:00 postgres: 11/main: stats collector
postgres     435  0.0  0.0 212004  6888 ?        Ss   14:00   0:00 postgres: 11/main: logical replication launcher
postgres     451  0.0  0.0 213416 16148 ?        Ss   14:00   0:00 postgres: 11/main: baserow baserow 127.0.0.1(56950) idle

Could you provide the full history of all commands you’ve run from your shell history after the wipe and restart, and the output of docker ps?

tab 1 (where I run docker compose up)

  210  docker container prune
  211  docker volume prune
  212  docker volume ls
  213  docker volume rm baserow-translate-plugin_baserow_data
  214  env |grep -i
  215  echo $PLUGIN_BUILD_UID
  216  docker compose -f docker-compose.dev.yml up --build
  217  docker compose -f docker-compose.dev.yml up
  218  docker compose -f docker-compose.dev.yml up --build

tab 2 (where I ran makemigrations and other commands):

  177  docker compose run baserow-translate-plugin /baserow.sh backend-cmd manage makemigrations
  178  docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage makemigrations
  179  docker ps
  180  docker compose -f docker-compose.dev.yml exec baserow-translate-plugin /baserow.sh backend-cmd bash -c bash
  181  docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage makemigrations
  182*
  183  docker logs --since 4h -f baserow-translate-plugin
  184  docker logs --since 4h -f baserow-translate-plugin 2>&2 | grep -i warning
  185  docker logs --since 4h -f baserow-translate-plugin 2>&2 | grep -i postgres
  186  docker logs --since 4h -f baserow-translate-plugin 2>&2 | less
  187  docker logs --since 4h -f baserow-translate-plugin 2>&2 | less -r
  188  docker logs --since 4h -f baserow-translate-plugin
  189  docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage makemigrations
  190  docker logs --since 4h -f baserow-translate-plugin 2>&2 | grep -i postgres
  191  docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage makemigrations
  192  git status
  193  docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage migrate
  194  docker compose -f docker-compose.dev.yml exec baserow-translate-plugin /baserow.sh backend-cmd bash -c bash
  195  docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage migrate
  196  docker compose -f docker-compose.dev.yml exec baserow-translate-plugin /baserow.sh backend-cmd bash -c bash
  197  docker compose -f docker-compose.dev.yml run --rm baserow-translate-plugin backend-cmd manage makemigrations
  198  docker compose -f docker-compose.dev.yml exec baserow-translate-plugin /baserow.sh backend-cmd bash -c bash

Suddenly I have a doubt: should the "makemigrations" command be run while the plugin is not running?

So docker compose ... run is going to spawn a brand-new container, start a second postgres inside that container, and then that second postgres is going to try to use the same data folder that your docker compose ... up containers are using at the same time. I think this is what causes the corruption: two containers, with two postgres instances, using the same volume at the same time.

Instead I’d always make sure to use docker compose -f docker-compose.dev.yml exec so you are running commands in the existing container, or, if you ever issue a new run command, stop all existing containers first.
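A hypothetical guard along those lines could look like this. The compose file and service name are taken from this thread, and `manage_cmd` is my own helper name, not a Baserow command:

```shell
# Prefer exec into the already-running container; only fall back to `run`
# (which spawns a second container, and with the embedded postgres a second
# database on the same volume) when nothing from this project is up.
compose="docker compose -f docker-compose.dev.yml"

manage_cmd() {
  # $1 = Django management command, e.g. makemigrations or migrate
  if [ -n "$($compose ps -q 2>/dev/null)" ]; then
    $compose exec my-baserow-plugin /baserow.sh backend-cmd manage "$1"
  else
    $compose run --rm my-baserow-plugin backend-cmd manage "$1"
  fi
}
```

With nothing up it falls back to run --rm; once docker compose up is running it execs into the live container instead, so only one embedded postgres ever touches the volume.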

Though having said that, off the top of my head running docker compose run with backend-cmd shouldn’t be trying to launch a second postgres, hm…

1 Like