Baserow Server 500 Errors

Are you using our SaaS platform (Baserow.io) or self-hosting Baserow?

Self-hosted

If you are self-hosting, what version of Baserow are you running?

Baserow v1.32.5

If you are self-hosting, which installation method do you use to run Baserow?

Docker Compose file setup, and it's included in a system service file.

What are the exact steps to reproduce this issue?

I have been using Baserow as the database for my automations in n8n, but I keep getting Server 500 errors since the last few patches, and I am unsure what the issue is. I started developing things within the lightweight CRM and added new tables etc. for my needs, so I have a few linked fields across different tables in the database part.

The automations run by adding rows to a certain table; that kicks off other automations via the webhooks associated with different fields, which then handle the next part of the process. The goal is to reduce the need for one huge workflow that could easily break.
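For context, the step that kicks everything off is essentially a row-create call like the sketch below (the host, token, table ID and field names are placeholders, not my real values); Baserow then fires the webhooks configured on that table, which trigger the downstream n8n workflows.

```python
import requests

BASEROW_URL = "http://localhost"   # placeholder: your Baserow host
API_TOKEN = "YOUR_DATABASE_TOKEN"  # placeholder: a database token
TABLE_ID = 1234                    # placeholder: the "kick-off" table

# Create one row; any webhooks configured on this table fire afterwards
# and start the next automation in the chain.
resp = requests.post(
    f"{BASEROW_URL}/api/database/rows/table/{TABLE_ID}/?user_field_names=true",
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"Name": "New job", "Status": "queued"},  # placeholder field names
    timeout=30,
)
resp.raise_for_status()
print("created row", resp.json()["id"])
```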

However, after a number of these have run, I get Server 500 errors in my Baserow nodes.

My setup is a local 40-core server with 80 GB of RAM, but I do have n8n and Baserow running on the same VM, so I don't know if that is one of the causes.

@joffcom, we would appreciate your help here. :raised_hands:

I've been running into a similar issue lately, specifically with the n8n Baserow "update row" node.
It used to work flawlessly, but now whenever the update involves only a few columns (especially one or two) I get these 500 errors after a few dozen updated rows.

This happens even on Baserow 1.33.4.

Relevant log entries:

[BACKEND][2025-06-08 17:21:59] XXXXXXX - "PATCH /api/database/rows/table/xxxx/xxx/ HTTP/1.1" 200

[POSTGRES][2025-06-08 17:21:59] 2025-06-08 17:20:51.229 UTC [56357] baserow@baserow FATAL: remaining connection slots are reserved for non-replication superuser connections

[BACKEND][2025-06-08 17:21:59] ERROR 2025-06-08 17:21:59,619 django.request.log_response:241- Internal Server Error: /api/database/views/grid/17339/aggregations/
[BACKEND][2025-06-08 17:21:59] Traceback (most recent call last):
[BACKEND][2025-06-08 17:21:59] File "/baserow/venv/lib/python3.11/site-packages/django/db/backends/base/base.py", line 275, in ensure_connection
[BACKEND][2025-06-08 17:21:59] self.connect()

[BACKEND][2025-06-08 17:21:59] psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
[…]

[BACKEND][2025-06-08 17:21:59] django.db.utils.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
[BACKEND][2025-06-08 17:21:59] xxxxxxxxx - "GET /api/database/views/grid/17339/aggregations/ HTTP/1.1" 500

[EXPORT_WORKER][2025-06-08 17:21:59] [2025-06-08 17:21:59,736: ERROR/ForkPoolWorker-16] Task baserow.contrib.database.search.tasks.async_update_tsvector_columns[xxx] raised unexpected: OperationalError('connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections\n')
[EXPORT_WORKER][2025-06-08 17:21:59] Traceback (most recent call last):
[…]
[EXPORT_WORKER][2025-06-08 17:21:59] psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections

By the way, I can work around this by splitting the set of rows to update into batches of about 25 rows per batch (sometimes more; I have not experimented enough to pinpoint all the variables here) and setting a wait time of at least 2 s between batches, but that slows the process down significantly for larger datasets.
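Outside n8n, the workaround amounts to something like this rough Python sketch (host, token, table ID and row data are placeholders; the per-row PATCH mirrors what the Baserow "update row" node does):

```python
import time
import requests

BASEROW_URL = "http://localhost"   # placeholder: your Baserow host
API_TOKEN = "YOUR_DATABASE_TOKEN"  # placeholder: a database token
TABLE_ID = 1234                    # placeholder: the table being updated

def update_in_batches(rows, batch_size=25, pause_s=2.0):
    """Update rows one by one, but only `batch_size` at a time, then pause
    so the Postgres connections opened per request have time to drain."""
    for start in range(0, len(rows), batch_size):
        for row in rows[start:start + batch_size]:
            resp = requests.patch(
                f"{BASEROW_URL}/api/database/rows/table/{TABLE_ID}/"
                f"{row['id']}/?user_field_names=true",
                headers={"Authorization": f"Token {API_TOKEN}"},
                json=row["fields"],  # e.g. {"Status": "done"}
                timeout=30,
            )
            resp.raise_for_status()
        time.sleep(pause_s)
```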

UPDATE (as there is a limit on consecutive posts):
Are there any updates on this issue?
@olgatrykush @cwinhall ?

This is getting in the way of some more advanced integrations we hoped would go smoothly…
Is this a bug related to anything on the GitLab issue roadmap?


The workaround is far from ideal, as I had hoped for full update speed without the "wait time". For 100,000+ rows that adds up to hours of overhead for row updates via the n8n node.

It seems I could also increase max_connections for Postgres. However, I'm still on the all-in-one container and I'm not entirely sure that would be the right direction in this case. I would appreciate any pointers/hints on how to approach this issue.
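If it helps, this is roughly how I plan to confirm that connection exhaustion is the problem (a sketch using psycopg2; the credentials are assumptions based on the embedded Postgres of the all-in-one image and may differ on other setups):

```python
import psycopg2

# Assumed defaults for the embedded Postgres in the all-in-one image;
# adjust host/port/dbname/user/password to match your deployment.
conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="baserow", user="baserow", password="baserow",
)
with conn, conn.cursor() as cur:
    cur.execute("SHOW max_connections;")
    max_conn = int(cur.fetchone()[0])
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    in_use = cur.fetchone()[0]
    print(f"{in_use} of {max_conn} connections in use")
conn.close()
```

Watching that number while the n8n workflow runs should show whether the pool really hits the ceiling right before the 500s appear.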

RAM will not be an issue, btw.

Thanks!

Hi,

Are there a lot of lookup/rollup fields, including formula fields using the lookup() function, in the table you want to update? That might be why you need to work in batches of 25 records.

Also, do you use the Baserow node in n8n or a plain HTTP request? In my experience, the HTTP Request node is much faster than the Baserow node.

@frederikdc No rollup fields, no linked fields, no formula fields.
(As far as I can tell, this happens regardless of these, but I have not had enough time to research the impact of column types or of a custom HTTP call any further.)

This usually happens with the Baserow node, but I experienced a crash with a custom HTTP call recently too…

Like I said, with the n8n Baserow node this usually happens when there are one or two columns to update and a lot of rows.

In that case, my advice would be to use the HTTP Request node and make the calls directly to the API. I assume the Baserow nodes in n8n perform much more slowly to avoid timeouts or other errors.
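If you go that route, the batch-update endpoint is worth a look: one request updates many rows at once, so far fewer database connections are opened. Below is a rough sketch of the call the HTTP Request node would make (host, token, table and row IDs, and field names are placeholders, and the server enforces a batch size limit per request):

```python
import requests

BASEROW_URL = "http://localhost"   # placeholder: your Baserow host
API_TOKEN = "YOUR_DATABASE_TOKEN"  # placeholder: a database token
TABLE_ID = 1234                    # placeholder: the table being updated

# One PATCH to the batch endpoint updates all listed rows in a single
# request, instead of one request (and one DB connection) per row.
items = [
    {"id": 1, "Status": "done"},   # placeholder row IDs and field names
    {"id": 2, "Status": "done"},
]
resp = requests.patch(
    f"{BASEROW_URL}/api/database/rows/table/{TABLE_ID}/batch/"
    "?user_field_names=true",
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"items": items},
    timeout=60,
)
resp.raise_for_status()
```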