Baserow Server 500 Errors

Are you using our SaaS platform (Baserow.io) or self-hosting Baserow?

Self-hosted

If you are self-hosting, what version of Baserow are you running?

Baserow v1.32.5

If you are self-hosting, which installation method do you use to run Baserow?

Docker Compose setup, run from a systemd service file

What are the exact steps to reproduce this issue?

I have been using Baserow as the database for my automations in n8n, but I keep getting server 500 errors since the last few patches, and I am unsure what the issue is. I started building on the lightweight CRM and added new tables etc. for my needs, so I have a few linked fields across different tables in the database part.

The automations run by adding rows to a certain table, which then kicks off other automations via the webhooks attached to different fields. Each one performs the next part of the process, the idea being to avoid one huge workflow that could easily break.

However, after a number of these have run, I get server 500 errors in my Baserow nodes.

My setup is a local 40-core server with 80 GB of RAM, but I do have n8n and Baserow running on the same VM, so I don't know if that is one of the causes.

@joffcom, we would appreciate your help here. :raised_hands:

I’ve been running into a similar issue lately, specifically with the n8n Baserow “update row” node.
It used to work flawlessly, but now, whenever the update involves only a few columns (especially one or two), I get these 500 errors after a few dozen updated rows.

This still happens on Baserow 1.33.4.

Relevant log entries:

[BACKEND][2025-06-08 17:21:59] XXXXXXX - "PATCH /api/database/rows/table/xxxx/xxx/ HTTP/1.1" 200

[POSTGRES][2025-06-08 17:21:59] 2025-06-08 17:20:51.229 UTC [56357] baserow@baserow FATAL: remaining connection slots are reserved for non-replication superuser connections

[BACKEND][2025-06-08 17:21:59] ERROR 2025-06-08 17:21:59,619 django.request.log_response:241- Internal Server Error: /api/database/views/grid/17339/aggregations/
[BACKEND][2025-06-08 17:21:59] Traceback (most recent call last):
[BACKEND][2025-06-08 17:21:59] File "/baserow/venv/lib/python3.11/site-packages/django/db/backends/base/base.py", line 275, in ensure_connection
[BACKEND][2025-06-08 17:21:59] self.connect()

[BACKEND][2025-06-08 17:21:59] psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
[…]

[BACKEND][2025-06-08 17:21:59] django.db.utils.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
[BACKEND][2025-06-08 17:21:59] xxxxxxxxx - "GET /api/database/views/grid/17339/aggregations/ HTTP/1.1" 500

[EXPORT_WORKER][2025-06-08 17:21:59] [2025-06-08 17:21:59,736: ERROR/ForkPoolWorker-16] Task baserow.contrib.database.search.tasks.async_update_tsvector_columns[xxx] raised unexpected: OperationalError('connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections\n')
[EXPORT_WORKER][2025-06-08 17:21:59] Traceback (most recent call last):
[…]
[EXPORT_WORKER][2025-06-08 17:21:59] psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections

By the way, I can work around this by splitting the set of rows to update into batches of about 25 rows per batch (sometimes more; I have not experimented enough to pinpoint all the variables here) and setting a wait time of at least 2 s between batches, but that slows the process down significantly for larger datasets.
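The batching workaround above can be sketched in Python. This is only an illustration of the throttling pattern, not Baserow's or n8n's own API: `update_batch` is a placeholder for whatever actually sends the PATCH requests, and the batch size of 25 and 2 s pause are just the values reported above.

```python
import time
from typing import Callable, List, Sequence


def chunked(rows: Sequence, size: int) -> List[Sequence]:
    """Split rows into consecutive batches of at most `size` items."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]


def update_in_batches(
    rows: Sequence,
    update_batch: Callable[[Sequence], None],
    batch_size: int = 25,
    pause_s: float = 2.0,
) -> int:
    """Send updates in small batches with a pause between them, giving the
    backend and its Celery workers time to release Postgres connections.
    Returns the number of batches sent."""
    batches = chunked(rows, batch_size)
    for i, batch in enumerate(batches):
        update_batch(batch)  # e.g. one PATCH request per batch
        if i < len(batches) - 1:
            time.sleep(pause_s)
    return len(batches)
```

The pause between batches is what keeps the concurrent connection count down; the batch size mainly trades request overhead against how long each burst holds connections open.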

@cwinhall the workaround is far from ideal, as I was hoping for full update speed without the wait time. For 100,000+ rows the pauses alone add up to hours of overhead (100,000 rows ÷ 25 per batch × 2 s ≈ 2.2 hours) for row updates via the n8n node.

It seems that I could also increase max_connections for Postgres. However, I'm still on the all-in-one container and I'm not entirely sure this would be the right direction in this case. I would appreciate any pointers/hints on how to approach this issue.
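For anyone inspecting this, a way to confirm the exhaustion and raise the limit from a psql shell inside the Postgres instance. This is a sketch: 200 is an arbitrary illustrative value, not a recommendation, and whether more connections (vs. fewer Baserow workers, or a pooler such as PgBouncer) is the right fix depends on the deployment.

```sql
-- How many connections are currently in use, and what is the ceiling?
SELECT count(*) AS in_use FROM pg_stat_activity;
SHOW max_connections;

-- Raise the ceiling. Note: max_connections only takes effect after a
-- full Postgres restart (a reload is not enough for this setting).
ALTER SYSTEM SET max_connections = 200;
```

Keep in mind each allowed connection reserves some server memory, so very large values are usually better handled with connection pooling than by raising the limit alone.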

RAM will not be an issue, by the way.

Thanks!