Are you using our SaaS platform (Baserow.io) or self-hosting Baserow?
Self-hosted
If you are self-hosting, what version of Baserow are you running?
Baserow v1.32.5
If you are self-hosting, which installation method do you use to run Baserow?
Docker Compose file setup, and it's included in a system service file
What are the exact steps to reproduce this issue?
I have been using Baserow as the database for my automations in n8n, but I keep getting Server 500 errors since the last few patches. I am unsure what the issue is. I have started developing things within the lightweight CRM and added new tables etc. for my needs, so I have a few linked fields across different tables in the database part.
The automations run by adding rows to a certain table, which then kicks off other automations via the webhooks associated with different fields to do the next part of the process, so I can reduce the need for one huge workflow that could easily break.
However, after a number of these have run I get Server 500 errors in my Baserow nodes.
My setup is a local 40-core server with 80 GB RAM, but I do have n8n and Baserow running on the same VM, so I don't know if that is one of the causes.
I've been running into a similar issue lately - specifically with the n8n Baserow "update row" node.
It used to work flawlessly, but now whenever the update procedure involves only a few columns (especially one or two), I get these 500 errors after a few dozen updated rows.
[POSTGRES][2025-06-08 17:21:59] 2025-06-08 17:20:51.229 UTC [56357] baserow@baserow FATAL: remaining connection slots are reserved for non-replication superuser connections
[BACKEND][2025-06-08 17:21:59] ERROR 2025-06-08 17:21:59,619 django.request.log_response:241- Internal Server Error: /api/database/views/grid/17339/aggregations/
[BACKEND][2025-06-08 17:21:59] Traceback (most recent call last):
[BACKEND][2025-06-08 17:21:59] File "/baserow/venv/lib/python3.11/site-packages/django/db/backends/base/base.py", line 275, in ensure_connection
[BACKEND][2025-06-08 17:21:59] self.connect()
[BACKEND][2025-06-08 17:21:59] psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
[…]
[BACKEND][2025-06-08 17:21:59] django.db.utils.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
[BACKEND][2025-06-08 17:21:59] xxxxxxxxx - "GET /api/database/views/grid/17339/aggregations/ HTTP/1.1" 500
[EXPORT_WORKER][2025-06-08 17:21:59] [2025-06-08 17:21:59,736: ERROR/ForkPoolWorker-16] Task baserow.contrib.database.search.tasks.async_update_tsvector_columns[xxx] raised unexpected: OperationalError('connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections\n')
[EXPORT_WORKER][2025-06-08 17:21:59] Traceback (most recent call last):
[…]
[EXPORT_WORKER][2025-06-08 17:21:59] psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections
btw - I can get around this by splitting the set of rows to update into batches of about 25 rows per batch (sometimes more; I have not experimented enough to pinpoint all the variables here) and setting a wait time of at least 2s between batches, but that slows the process down significantly for larger datasets.
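For reference, here is a minimal sketch of that batching workaround as a standalone script, using Baserow's standard single-row update endpoint. The URL, token, table ID, field names, and the batch size / wait time are placeholders for the values that happened to work for me:

```python
# Sketch of the batching workaround described above, as a standalone script.
# It uses Baserow's single-row update endpoint; the URL, token, table ID and
# field names are placeholders for your own setup.
import time
import requests

BASE_URL = "https://baserow.example.com"   # your Baserow instance
TOKEN = "YOUR_DATABASE_TOKEN"              # database token with update rights
TABLE_ID = 17339                           # example table ID
BATCH_SIZE = 25                            # ~25 rows per batch worked for me
PAUSE_SECONDS = 2                          # wait between batches


def update_rows(rows):
    """rows: list of dicts like {"id": 1, "Status": "done"}."""
    headers = {"Authorization": f"Token {TOKEN}"}
    for start in range(0, len(rows), BATCH_SIZE):
        for row in rows[start:start + BATCH_SIZE]:
            payload = {k: v for k, v in row.items() if k != "id"}
            resp = requests.patch(
                f"{BASE_URL}/api/database/rows/table/{TABLE_ID}/{row['id']}/"
                "?user_field_names=true",
                headers=headers,
                json=payload,
                timeout=30,
            )
            resp.raise_for_status()
        time.sleep(PAUSE_SECONDS)  # let Postgres release connections
```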
UPDATE: (as there is a limit on consecutive posts)
Are there any updates on this issue? @olgatrykush @cwinhall?
This is getting in the way of some more advanced integrations we hoped would go smoothly…
Is this a bug that is related to anything on the GitLab issue roadmap?
The workaround is far from ideal, as I was hoping for full update speed without the "wait time". For 100,000+ rows it all adds up to hours of overhead for row updates via the n8n node.
It seems I could also increase max_connections for Postgres; however, I'm still on the all-in-one container and I'm not entirely sure that would be the right direction in this case. I would appreciate any pointers/hints on how to approach this issue.
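Before bumping max_connections blindly, one way to confirm the diagnosis is to watch how many connections Postgres actually has open while the workflow runs. A rough sketch is below; the connection details are placeholders, and inside the all-in-one container you would point this (or the equivalent psql queries) at the embedded Postgres:

```python
# Diagnostic sketch: poll pg_stat_activity while the n8n workflow runs to see
# how close the instance gets to max_connections. Connection details below are
# placeholders for your own setup.
import time
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="baserow", user="baserow", password="YOUR_DB_PASSWORD",
)
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("SHOW max_connections;")
    print("max_connections =", cur.fetchone()[0])
    for _ in range(30):                      # sample for roughly a minute
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        print("open connections:", cur.fetchone()[0])
        time.sleep(2)
```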
Are there a lot of lookup / rollup fields - including formula fields using the lookup() function - in the table you want to update? This might be the reason why you need to work in batches of 25 records.
Also, do you use the Baserow block in n8n or a pure HTTP request? From my experience, the HTTP request node is much faster than the Baserow node.
@frederikdc No rollup fields, no linked fields, no formula fields.
(As far as I can tell this happens regardless of these, but I have not had enough time to research the impact of column types or custom HTTP calls any further.)
This usually happens with the Baserow node, but I experienced a crash with a custom HTTP call recently too…
Like I said, with the n8n Baserow node this usually happens when there are one or two columns to update and a lot of rows.
In that case, my advice would be to use the HTTP node and make the calls directly to the API. I assume the Baserow nodes in n8n are deliberately slower in order to avoid timeouts or other errors.
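If you go the direct-HTTP route, it may also be worth trying Baserow's batch row update endpoint, which changes many rows in a single request instead of one call per row. A hedged sketch follows; the URL, token, table ID and field names are placeholders, and you should check your instance's API docs for the exact payload and batch size limit:

```python
# Sketch of calling the Baserow REST API directly (instead of the n8n Baserow
# node) using the batch update endpoint, which updates many rows per request.
# URL, token, table ID and field names are placeholders.
import requests

BASE_URL = "https://baserow.example.com"
TOKEN = "YOUR_DATABASE_TOKEN"
TABLE_ID = 17339


def batch_update(items):
    """items: list of dicts, each with an "id" plus the fields to change."""
    resp = requests.patch(
        f"{BASE_URL}/api/database/rows/table/{TABLE_ID}/batch/"
        "?user_field_names=true",
        headers={"Authorization": f"Token {TOKEN}"},
        json={"items": items},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()


batch_update([
    {"id": 1, "Status": "done"},
    {"id": 2, "Status": "done"},
])
```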
@frederikdc
I had some time to dig into this - it turns out the issue is column summaries triggering connections to Postgres (a spike/cascade of new connections is created, which exceeds the default limit of 100 quite fast).
These things need to happen at the same time to trigger this:
fast, consecutive updates of rows (N8N does this)
a column summary needs to be turned on in the updated table
a user needs to have a browser window open with a view that includes the updated table and column summaries turned on (like distribution, % filled, etc.)
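For anyone trying to reproduce this, a rough sketch of the "fast, consecutive updates" part is below; the column summary and the open browser tab have to be set up manually, and the URL, token, table ID, row IDs and field name are placeholders:

```python
# Minimal reproduction sketch: fire fast, consecutive single-row updates (what
# the n8n node does) while a grid view of the same table, with a column summary
# enabled, is open in a browser tab. Watch the backend log for the "remaining
# connection slots are reserved" errors. All values below are placeholders.
import requests

BASE_URL = "https://baserow.example.com"
TOKEN = "YOUR_DATABASE_TOKEN"
TABLE_ID = 17339

session = requests.Session()
session.headers["Authorization"] = f"Token {TOKEN}"

for row_id in range(1, 201):  # hammer 200 rows with no pause between calls
    session.patch(
        f"{BASE_URL}/api/database/rows/table/{TABLE_ID}/{row_id}/"
        "?user_field_names=true",
        json={"Status": "processing"},
        timeout=30,
    )
```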
Is there any chance this could be solved/investigated any time soon?
@olgatrykush
Is there a chance that someone from the developer team could take a look at this issue?
Specifically, column summaries triggering errors (error 500) during fast row updates - for example, from n8n, as described in the previous post?
The most likely cause of the saturation of available database connections is the async (ASGI) workers that we use in the all-in-one image for processing incoming requests. Under higher load it might not be that difficult to reach the default Postgres limit (100).
Unfortunately, we don't yet have better documentation on scaling such a deployment.
What we can suggest at this time is to:
Switch away from the all-in-one image. If you use our separate images (front-end, back-end, etc.), the configuration will use sync workers for handling requests. This will limit the number of connections taken to the number of workers involved.
And/or use an external PostgreSQL database with connection pooling; however, I don't have any specific recommendation that would work with Baserow at this time.
We will continue to think about how to tweak things to make deployment with the all-in-one image easier under higher load, but there is nothing tangible at this time.