Can’t create table
After updating Baserow to version 1.10.2, the option to create new tables stopped working.
The browser console shows a 500 server error on the request that creates a table.
Cannot download CSV export file because file does not exist
This option did not work either before or after the update.
The export file appears to generate, but nothing happens after clicking to download it.
Chrome reports that the file doesn’t exist.
When I check on the server, the file has indeed not been generated.
I don’t have the standard docker-compose file because I use nginx instead of Caddy.
My docker-compose.yml:
version: "3.4"
# MAKE SURE YOU HAVE SET THE REQUIRED VARIABLES IN THE .env FILE.
services:
  backend:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:8000:8000"
    env_file:
      - .env
    depends_on:
      - db
      - redis
    volumes:
      - /home/root/baserow_media:/baserow/media
  web-frontend:
    image: baserow/web-frontend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:3000:3000"
    env_file:
      - .env
    depends_on:
      - backend
  celery:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    env_file:
      - .env
    command: celery-worker
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - /home/root/baserow_media:/baserow/media
  celery-export-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-exportworker
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    env_file:
      - .env
  celery-beat-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-beat
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    env_file:
      - .env
    depends_on:
      - backend
  db:
    image: postgres:11.3
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:6.0
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    env_file:
      - .env
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
volumes:
  pgdata:
Hi @rafuru, sorry you’ve hit this bug. In 1.10.2 we did make changes to table creation, but hopefully the problem you are hitting is somehow a misconfiguration.
In 1.10.2 we moved table creation into an async job that runs in the celery-export-worker (this service is badly named! We use it to run various user-initiated async jobs). My first thought is that your backend server is unable to send jobs to your celery export worker due to a misconfiguration.
If you set the following two debug environment variables in your .env file:
Then restart your environment: docker-compose restart.
Now, if you could try to create a table again once Baserow is available, and when you see the error, could you send me the output of
docker-compose logs backend
and docker-compose logs celery-export-worker?
I would additionally advise checking docker-compose ps to see if the redis and celery-export-worker services are up and healthy.
Onto your next problem:
Does the user running inside the backend service container have permission to write to /home/root/baserow_media? The user inside the container has a uid and gid of 9999:9999, so running something like chown 9999:9999 -R /home/root/baserow_media could possibly fix your issue. Secondly, is your nginx configured correctly to serve files from that folder? You can see an example working nginx config at Installing Baserow behind Nginx // Baserow
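For reference, a minimal nginx server block for serving that media folder might look like the sketch below. The domain name and location path are placeholders, not taken from this thread; the only assumptions carried over are the host media path and that the nginx worker user can read files owned by 9999:9999:

```nginx
# Hypothetical /etc/nginx/conf.d/<media-domain>.conf sketch; adjust names/paths.
server {
    listen 80;
    server_name media.example.com;  # placeholder domain

    location / {
        # Serve the exported files straight from the host-mounted media folder.
        # The nginx worker user needs read access to files owned by 9999:9999.
        root /home/root/baserow_media;
    }
}
```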
I saw that my /home/root/baserow_media was empty.
After adding the volume to celery-export-worker, running chown, and fixing nginx, the user_1 folder showed up.
But exports still don’t work.
The guide you linked uses sites-available in nginx.
I don’t use that feature; I have my config inside /etc/nginx/conf.d/[domain].conf.
Health
baserow_celery-beat-worker_1 is unhealthy.
All others are healthy.
Edited docker-compose.yml
version: "3.4"
# MAKE SURE YOU HAVE SET THE REQUIRED VARIABLES IN THE .env FILE.
services:
  nginx:
    image: nginx
    ports:
      - '80:80'
    volumes:
      - $PWD/conf.d/<domain>.conf:/etc/nginx/conf.d/<domain>.conf:ro
      - /home/root/baserow_media:/baserow/media
    depends_on: [ backend ]
  backend:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:8000:8000"
    env_file:
      - .env
    depends_on:
      - db
      - redis
    volumes:
      - /home/root/baserow_media:/baserow/media
  web-frontend:
    image: baserow/web-frontend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:3000:3000"
    env_file:
      - .env
    depends_on:
      - backend
  celery:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    env_file:
      - .env
    command: celery-worker
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - /home/root/baserow_media:/baserow/media
  celery-export-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-exportworker
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    env_file:
      - .env
    volumes:
      - /home/root/baserow_media:/baserow/media
  celery-beat-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-beat
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    env_file:
      - .env
    depends_on:
      - backend
  db:
    image: postgres:11.3
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:6.0
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    env_file:
      - .env
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
volumes:
  pgdata:
Creating table
This started working after the changes and restarting the services.
I can see that before you made changes to the nginx config (I assume), websocket connections were not being properly upgraded when going through your nginx. But now, at the end of your logs, you can see websocket connections opening correctly.
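For anyone else hitting the websocket problem, the standard nginx directives for upgrading websocket connections look roughly like this sketch. The port comes from the compose file above; the location path and upstream address are assumptions and may differ in your setup:

```nginx
# Sketch: proxy Baserow's websocket traffic with a proper connection upgrade.
# 127.0.0.1:8000 matches the backend port published in the compose file above;
# the /ws/ path is an assumption.
location /ws/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```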
Secondly, I can now see in your original config that, yup, there was no media volume on the export worker service, which would have prevented exports/imports etc. from working.
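In other words, the export worker needs the same media mount as the backend so both services read and write the same files. A minimal sketch of the relevant fragment (other keys from the full compose file omitted):

```yaml
# The fix: give celery-export-worker the same media mount as the backend.
celery-export-worker:
  image: baserow/backend:1.10.2
  command: celery-exportworker
  volumes:
    - /home/root/baserow_media:/baserow/media
```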
With regards to using sites-available vs conf.d/[domain].conf that should be fine. It won’t matter exactly how you get nginx to pickup the Baserow configuration, only that the configuration itself is correct.
The baserow_celery-beat-worker_1 doesn’t have a working healthcheck from what I remember so that should be ok.
Finally, it now looks like the exports are successfully running from Baserow’s point of view, and it is now able to write out files. I’m guessing, however, that the nginx configuration for your Baserow domain is not properly set up to serve them.
Can you provide the exact errors/response bodies/error codes you are getting when trying to download an export?
Could you also provide your nginx configuration from /etc/nginx/conf.d/[domain].conf for Baserow
And also the output of ls -haltr when run in /home/root/baserow_media so I can check your file permissions are correct.
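Alongside ls, stat shows the numeric owner and group directly, which makes it easy to spot whether files are owned by root or by the container user 9999:9999. A small sketch; MEDIA_DIR is a placeholder defaulting to the path used above:

```shell
#!/bin/sh
# Inspect ownership of the media directory and its contents.
# Baserow's containers write files as uid:gid 9999:9999.
MEDIA_DIR="${MEDIA_DIR:-/home/root/baserow_media}"
# %u:%g = numeric owner:group, %A = permission bits, %n = file name.
stat -c '%u:%g %A %n' "$MEDIA_DIR"
ls -haltrn "$MEDIA_DIR"
```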
In particular, I have a strong feeling that the nginx user serving files from /home/root/baserow_media probably does not actually have access to read files in that directory, as it will be owned by root, or possibly by 9999:9999 if you already chowned it to the docker user.
One solution is to create a new group that both the nginx user and the 9999 user belong to, and then make sure that group has read/write access to /home/root/baserow_media.
Yup, so if you check your nginx logs I bet it is getting access-denied errors when trying to read that folder. All you need to do is create a group with gid 9999 and add the nginx user to it:
groupadd -g 9999 baserow_group
usermod -a -G baserow_group nginx
And then probably restart nginx, though that may not even be needed!