Baserow crashing - Cannot release a lock that's no longer owned error

Hello,
I upgraded Baserow from 1.16 to 1.25.1 and it now keeps crashing every few hours. I don’t know what’s causing it.

The upgrade was done using the intermediate image to upgrade Postgres to the latest version, and then upgrading to the latest Baserow.

It’s running inside Docker, using the default all-in-one Baserow image.
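
For reference, the upgrade sequence was roughly the following (container/volume names match my setup, exact flags approximate):

# stop and remove the old 1.16 container, keeping the baserow_data volume
docker stop baserow && docker rm baserow
# run the intermediate image once so it upgrades the embedded PostgreSQL data directory
docker run -d --name baserow -v baserow_data:/baserow/data -p 80:80 baserow/baserow-pgautoupgrade:1.25.1
# after the upgrade finished, switch to the regular image
docker stop baserow && docker rm baserow
docker run -d --name baserow -v baserow_data:/baserow/data -p 80:80 baserow/baserow:1.25.1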


 [BEAT_WORKER][2024-06-25 06:47:44] [2024-06-25 06:47:44,928: WARNING/MainProcess]
   File "/baserow/venv/lib/python3.11/site-packages/redbeat/schedulers.py", line 466, in tick
     self.lock.extend(int(self.lock_timeout))
   File "/baserow/venv/lib/python3.11/site-packages/redis/lock.py", line 276, in extend
     return self.do_extend(additional_time, replace_ttl)
   File "/baserow/venv/lib/python3.11/site-packages/redis/lock.py", line 287, in do_extend
     raise LockNotOwnedError("Cannot extend a lock that's no longer owned")
 redis.exceptions.LockNotOwnedError: Cannot extend a lock that's no longer owned

 During handling of the above exception, another exception occurred:

 Traceback (most recent call last):
   File "/baserow/venv/bin/celery", line 8, in <module>
     sys.exit(main())

   File "/baserow/venv/lib/python3.11/site-packages/celery/apps/beat.py", line 77, in run
     self.start_scheduler()
   File "/baserow/venv/lib/python3.11/site-packages/celery/apps/beat.py", line 105, in start_scheduler
     service.start()
   File "/baserow/venv/lib/python3.11/site-packages/celery/beat.py", line 655, in start
     self.sync()
   File "/baserow/venv/lib/python3.11/site-packages/celery/beat.py", line 658, in sync
     self.scheduler.close()
   File "/baserow/venv/lib/python3.11/site-packages/redbeat/schedulers.py", line 482, in close
     self.lock.release()
   File "/baserow/venv/lib/python3.11/site-packages/redis/lock.py", line 253, in release
     self.do_release(expected_token)
   File "/baserow/venv/lib/python3.11/site-packages/redis/lock.py", line 259, in do_release
     raise LockNotOwnedError("Cannot release a lock that's no longer owned")
 redis.exceptions.LockNotOwnedError: Cannot release a lock that's no longer owned
2024-06-25 06:47:45,482 WARN exited: beatworker (exit status 1; not expected)
2024-06-25 06:47:45,484 INFO spawned: 'beatworker' with pid 4230
2024-06-25 06:47:45,484 INFO reaped unknown pid 242 (exit status 0)
Baserow was stopped or one of it's services crashed, see the logs above for more details.
 [BEAT_WORKER][2024-06-25 06:47:45] OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.25.1,deployment.environment=unknown

Please help.

Thanks

Hi @computersrmyfriends, I’m not really sure what’s causing this, but would you be able to clear your entire Redis database manually? Baserow doesn’t need the previously existing data in it to work, and I suspect that something is wrong on that front.
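
If it helps, something along these lines should do it from inside the all-in-one container (assuming the embedded Redis password is available as REDIS_PASSWORD in the container environment; otherwise substitute the credential from your install):

# assumption: $REDIS_PASSWORD is set inside the container; replace it with your actual Redis password if not
# flush every key in the embedded Redis instance (Baserow will rebuild what it needs)
docker exec -it baserow bash -c 'redis-cli -a "$REDIS_PASSWORD" FLUSHALL'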

Hi @bram, I managed to find the credentials for Redis and cleared it using the following commands:

redis-cli FLUSHALL
redis-cli FLUSHDB

I still face the same issue…

See the full logs here: (JustPaste.it link)

It works for a while and then crashes on its own.

Do these full logs give a better idea? It’s the default embedded DB Docker image.

Hi @computersrmyfriends, would you be able to give us more insight into how you’re currently self-hosting? It would be helpful if you could share the server specifications you have, what the CPU and memory allocations look like, and how you start Baserow. The more information you can share, the easier it will be to try and reproduce this problem.
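
For example, the output of something like this would already give a good picture (adjust the container name if yours differs):

nproc && free -h && df -h /          # host CPU count, memory and disk
docker stats --no-stream baserow     # current CPU/memory usage of the container
docker inspect baserow --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'   # any configured container limits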

@cezary and/or @davide would you be able to help @computersrmyfriends with this problem?

Hi @bram, it’s hosted on a VPS on a dedicated server. On the production version, I don’t think this problem exists; I need to confirm whether it does. That was set up a year ago and it’s still running Baserow 1.16.

However, when I take a backup and set it up inside WSL2 on my PC, I see this issue.

I tried setting it up as-is with Baserow 1.16, tried upgrading, and tried using the legacy Postgres with the latest version. All of these setups have the same issue.

I tried clearing the Redis cache.

I don’t know if you had a look at my logs. If you’d like, I can even do a screenshare and show you. After a few minutes of using it, it crashes. Once I restart the container, it works again for some time, maybe even a few hours, until the next crash.

Thanks

Hi @computersrmyfriends,

If I understand correctly, both versions 1.16 and 1.25 have the same issue on your PC inside WSL2. Is that correct?

Could you also please verify if this is happening on production or only on your PC?

If the issue is only occurring on your PC, could you please provide the versions of your Windows, WSL2, and Docker Desktop? If they’re not the latest versions, I’d recommend upgrading them to the latest versions.

I cannot guarantee we’ll be able to test in the same environment, but we’ll do our best.

@davide yes, that’s correct. On my local PC, I restore the backup inside WSL2 and I see that happening.

I am using Ubuntu 24.04 inside WSL2.

As for the specific version of WSL2, here it is:

 wsl --version
WSL version: 2.2.4.0
Kernel version: 5.15.153.1-2
WSLg version: 1.0.61
MSRDC version: 1.2.5326
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.26091.1-240325-1447.ge-release
Windows version: 10.0.22631.3737

Docker version:

docker -v
Docker version 26.1.2, build 211e74b

Docker is running inside WSL2; it’s not Docker Desktop on Windows.

Hope that helps. Meanwhile, I will check my production version and update you.

On the production version, I tried docker logs --tail 10000 baserow, which shows me up to roughly a week ago, and I don’t see any crashes within those last 10000 lines.
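
If it’s useful, I can also scan the full production log for the same error instead of only the last lines, with something like:

# search all available container logs for the lock error or beat worker exits
docker logs baserow 2>&1 | grep -nEi "LockNotOwnedError|exited: beatworker"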

Production server info:

docker -v
Docker version 23.0.3, build 3e7cbfd

OS in production: Ubuntu 22.04 (Jammy) x86_64

The issue is only happening on my local PC, where I restore the data from /baserow/data. Daily snapshots of /baserow/data are backed up from within the container on the production server, and one such recent snapshot is what I am restoring locally.

Here’s part of my local setup script:

#!/bin/bash
container_name="baserow"

# install rclone on the host (used for fetching/syncing the backup)
wget https://github.com/rclone/rclone/releases/download/v1.62.2/rclone-v1.62.2-linux-amd64.deb -O app.deb && dpkg -i app.deb

# create the data volume and restore the snapshot into it
docker volume create baserow_data
docker run --rm -v baserow_data:/baserow/data -v "$(pwd)":/backup busybox sh -c "unzip -o /backup/data.zip -d /"

# start Baserow; the commented-out lines are the other variants I tried (pgautoupgrade and 1.25.1)
#docker run -d --name "$container_name" --expose=3000 -e BASEROW_PUBLIC_URL=http://localhost:3001 -v baserow_data:/baserow/data -p 3000:3000 -p 3001:80 baserow/baserow-pgautoupgrade:1.25.1
docker run -d --name "$container_name" --expose=3000 -e BASEROW_PUBLIC_URL=http://localhost:3001 -v baserow_data:/baserow/data -p 3000:3000 -p 3001:80 baserow/baserow:1.16.0
#docker run -d --name "$container_name" --expose=3000 -e BASEROW_PUBLIC_URL=http://localhost:39001 -v baserow_data:/baserow/data -p 39000:3000 -p 39001:80 baserow/baserow:1.25.1

# copy the snapshot into the container and install a few debugging tools
docker cp data.zip "$container_name":/baserow/data
docker exec -it "$container_name" bash -c 'apt update && apt install -y htop wget curl tmux sudo ripgrep unzip zip'

echo "downloading rclone..."
docker exec -it "$container_name" bash -c "wget https://github.com/rclone/rclone/releases/download/v1.62.2/rclone-v1.62.2-linux-amd64.deb -O /baserow/app.deb && dpkg -i /baserow/app.deb"
docker cp rclone.conf "$container_name":/root/.config/rclone/rclone.conf

echo "unzipping baserow data..."
#docker exec -it "$container_name" bash -c 'unzip -o /baserow/data/data.zip -d /'
#docker exec -it "$container_name" bash -c 'chown -R postgres:postgres /baserow/data'
docker exec -it "$container_name" bash -c 'rm /baserow/data/data.zip'

Thanks