Can't create table & export data (file doesn't exist)

Can’t create table
After updating Baserow to version 1.10.2, the option to create new tables stopped working.

The browser console shows a 500 server error when the request to create a table is sent.

Cannot download CSV export file because file does not exist
This option did not work either before or after the update.
The file appears to generate, but nothing happens after clicking to download it.
Chrome reports that the file doesn't exist.
When I check on the server, the file has not been generated.


I don't have a standard docker-compose file because I use nginx instead of Caddy.

My docker-compose.yml:

version: "3.4"

# MAKE SURE YOU HAVE SET THE REQUIRED VARIABLES IN THE .env FILE.


services:
  backend:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:8000:8000"
    env_file:
      - .env
    depends_on:
      - db
      - redis
    volumes:
      - /home/root/baserow_media:/baserow/media

  web-frontend:
    image: baserow/web-frontend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:3000:3000"
    env_file:
      - .env
    depends_on:
      - backend

  celery:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    env_file:
      .env
    command: celery-worker
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - /home/root/baserow_media:/baserow/media

  celery-export-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-exportworker
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    env_file:
      .env

  celery-beat-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-beat
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    env_file:
      - .env
    depends_on:
      - backend

  db:
    image: postgres:11.3
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:6.0
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    env_file:
      - .env
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]

volumes:
  pgdata:

My redacted .env file:

SECRET_KEY=<key>
DATABASE_PASSWORD=<pass>
REDIS_PASSWORD=<pass>
BASEROW_PUBLIC_URL=<url>
WEB_FRONTEND_PORT=80
WEB_FRONTEND_SSL_PORT=4443
HOST_PUBLISH_IP=127.0.0.1:4443

Do you have any idea why this is happening?

Hi @rafuru, sorry you've hit this bug. In 1.10.2 we did make changes to table creation, but hopefully the problem you are hitting is just a misconfiguration somehow.

In 1.10.2 we moved table creation into an async job that runs in the celery-export-worker (this service is badly named! We use it to run various user-initiated async jobs). My first thought is that your backend server is not able to send jobs to your celery export worker due to a misconfiguration.

If you set the following two debug environment variables in your .env file:

BASEROW_BACKEND_LOG_LEVEL=DEBUG
BASEROW_BACKEND_DEBUG=on

Then restart your environment: docker-compose restart.

Then, once Baserow is available, try to create a table again, and when you see the error send me the output of

docker-compose logs backend
and
docker-compose logs celery-export-worker

I would additionally advise running docker-compose ps and checking whether the redis and celery-export-worker services are up and healthy.
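For instance, a quick filter over that output (sketch: the status text below is a made-up stand-in for real docker-compose ps output, whose column layout varies between compose versions):

```shell
# Sketch: spotting unhealthy services in `docker-compose ps` output.
# `status` is a stand-in here; on the real host you would pipe the
# actual command: docker-compose ps | grep -i unhealthy
status='baserow_backend_1              Up (healthy)
baserow_celery-beat-worker_1   Up (unhealthy)
baserow_redis_1                Up (healthy)'
printf '%s\n' "$status" | grep -i unhealthy
```

Only lines whose status contains "unhealthy" are printed, so an empty result means everything reports healthy.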

Onto your next problem:

Does the user running inside the backend service container have permission to write to /home/root/baserow_media? The user inside the container has a uid and gid of 9999:9999, so running something like chown 9999:9999 -R /home/root/baserow_media could possibly fix your issue. Secondly, is your nginx configured correctly to serve files from that folder? You can see an example working nginx config at Installing Baserow behind Nginx // Baserow
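To sanity-check the permission side, here is a sketch run against a scratch directory so it is safe to try anywhere; the real path and the chown fix from above are shown only in comments:

```shell
# Sketch: how directory modes gate read access to exported files.
# The real media path is /home/root/baserow_media; this uses a temp dir.
dir=$(mktemp -d)
touch "$dir/export.csv"
chmod 700 "$dir"   # only the owner may enter: other users get denied
chmod 755 "$dir"   # group and others may enter and read again
ls -ld "$dir"      # shows e.g. drwxr-xr-x and the owning user/group
# On the real host, the equivalent fix suggested above is:
#   chown 9999:9999 -R /home/root/baserow_media
rm -rf "$dir"
```

The key point is that nginx needs both read access to the files and execute (enter) access on every directory in the path.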

Exporting

I saw that my /home/root/baserow_media was empty.
After adding the volume to celery-export-worker and running chown, the user_1 folder showed up.
But exporting still doesn't work.
In the guide you linked, sites-available is used for nginx.
I don't use that feature; I have my config in /etc/nginx/conf.d/[domain].conf
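The sites-available layout is just a Debian convention; nginx loads whatever its include directives point at. A quick way to confirm conf.d is included (sketch using a throwaway fixture that mimics the relevant part of a stock nginx.conf):

```shell
# Sketch: conf.d/*.conf files are picked up through an `include` directive
# in nginx.conf, so sites-available/sites-enabled is not required.
conf=$(mktemp)
cat > "$conf" <<'EOF'
http {
    include /etc/nginx/conf.d/*.conf;
}
EOF
grep -c 'conf\.d' "$conf"   # non-zero count: conf.d configs are included
rm -f "$conf"
# On the real host: grep -n 'include' /etc/nginx/nginx.conf && nginx -t
```

Running nginx -t after any change also catches syntax errors before a reload.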

Health

baserow_celery-beat-worker_1 is unhealthy.
All others are healthy.

Edited docker-compose.yml

version: "3.4"

# MAKE SURE YOU HAVE SET THE REQUIRED VARIABLES IN THE .env FILE.


services:
  nginx:
    image: nginx
    ports:
      - '80:80'
    volumes:
      - $PWD/conf.d/<domain>.conf:/etc/nginx/conf.d/<domain>.conf:ro
      - /home/root/baserow_media:/baserow/media
    depends_on: [backend]

  backend:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:8000:8000"
    env_file:
      - .env
    depends_on:
      - db
      - redis
    volumes:
      - /home/root/baserow_media:/baserow/media

  web-frontend:
    image: baserow/web-frontend:1.10.2
    restart: unless-stopped
    ports:
      - "${HOST_PUBLISH_IP:-127.0.0.1}:3000:3000"
    env_file:
      - .env
    depends_on:
      - backend

  celery:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    env_file:
      .env
    command: celery-worker
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - /home/root/baserow_media:/baserow/media

  celery-export-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-exportworker
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    env_file:
      .env
    volumes:
      - /home/root/baserow_media:/baserow/media

  celery-beat-worker:
    image: baserow/backend:1.10.2
    restart: unless-stopped
    command: celery-beat
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    env_file:
      - .env
    depends_on:
      - backend

  db:
    image: postgres:11.3
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:6.0
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    env_file:
      - .env
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]

volumes:
  pgdata:

Creating table

This started working after making the changes and restarting the services.

Thanks for the log files!

  1. Before (I assume) you made changes to the nginx config, I can see that websocket connections were not being properly upgraded when going through your nginx. At the end of your logs, however, websocket connections now open correctly.
  2. Secondly, yup, I can now see in your original config that there was no media volume in the export worker service, which would have prevented exports/imports etc. from working.
  3. With regards to using sites-available vs conf.d/[domain].conf: that should be fine. It doesn't matter exactly how you get nginx to pick up the Baserow configuration, only that the configuration itself is correct.
  4. The baserow_celery-beat-worker_1 doesn't have a working healthcheck from what I remember, so that should be ok.

Finally, it now looks like the exports are successfully running from Baserow's point of view, and it is now able to write out files. I'm guessing, however, that the nginx configuration for your Baserow domain is not properly set up to serve them.

  • Can you provide the exact errors/response bodies/error codes you are getting when trying to download an export?
  • Could you also provide your nginx configuration from /etc/nginx/conf.d/[domain].conf for Baserow
  • And also the output of ls -haltr when run in /home/root/baserow_media so I can check your file permissions are correct.

Almost there!

In particular, I have a strong feeling that the nginx user serving any files found in /home/root/baserow_media probably does not actually have access to read files in that directory, as it will be owned by root, or possibly by 9999:9999 if you have already chowned it to the docker user.

One solution is creating a new group that both the nginx user and the 9999 user are in, and then making sure that this new group has read/write access to /home/root/baserow_media.


My nginx conf:

server {
    server_name [domain];

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_pass http://localhost:3000;
    }
    location ~ ^/(api|ws) {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://localhost:8000;
    }

    location /media/ {
        if ($arg_dl) {
            add_header Content-disposition "attachment; filename=$arg_dl";
        }
        # TODO CHANGE TO THE MEDIA FOLDER USED IN THE docker-compose.yml
        root /home/root/baserow_media;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    listen 443 ssl; # managed by Certbot
    [redacted certbot]

}
server {
    if ($host = [domain]) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    server_name  [domain];
    listen 80;
    return 404; # managed by Certbot


}

My nginx is not in docker.
I can't add the 9999 user to a new group because that user doesn't exist.

When I try to export, I get a 404 in the console:
"/media/export_files/d20f158d-6e2a-40d0-9be3-c9c71ca6d6ba.csv?dl=export - question_keywords.csv"
[screenshot]

Permissions:
[screenshot]


Yup, so if you check your nginx logs I bet it is getting access denied errors when trying to read that folder. All you need to do is create a group with the gid 9999 and add the nginx user to it:

groupadd -g 9999 baserow_group
usermod -a -G baserow_group nginx

And then probably restart nginx (though it may not even be necessary).
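A sketch of the directory-mode side of this approach, run against a scratch directory rather than the real media folder (the group name baserow_group above is just an example):

```shell
# Sketch: with a shared group, 750 is enough: owner (the 9999 docker user)
# gets rwx, the group (which nginx belongs to) gets r-x, others nothing.
dir=$(mktemp -d)
touch "$dir/export.csv"
chmod 750 "$dir"
ls -ld "$dir"   # shows drwxr-x--- plus the owning user and group
# On the real host, after groupadd/usermod, confirm membership with:
#   id nginx
rm -rf "$dir"
```

Note that group membership changes only take effect for new nginx worker processes, which is why a restart is a sensible precaution.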

@nigel
I managed to solve the problem.

In the nginx configuration, at location /media/, the root line requires a trailing slash at the end:
root /home/root/baserow_media/;

In the docker-compose configuration, I removed the nginx service and added the media volume at the end, in the volumes section.
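For reference, a sketch of how nginx's root directive maps a /media/ request to a file path: root appends the full request URI to the root path, whereas alias would replace the matched /media/ prefix instead. That mapping is worth keeping in mind when debugging 404s like the one above (paths below are taken from earlier in the thread):

```shell
# Sketch: nginx `root` forms the file path as root + full URI.
root=/home/root/baserow_media
uri=/media/export_files/export.csv
printf '%s%s\n' "$root" "$uri"
# prints /home/root/baserow_media/media/export_files/export.csv
# With `alias /home/root/baserow_media/;` the /media/ prefix would be
# replaced, giving /home/root/baserow_media/export_files/export.csv.
```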
