Baserow schema updates are slowing down all API calls and bringing the server down

I’ve installed Baserow and n8n using Cloudron on DigitalOcean, on the same server with 2 GB of RAM:
2 GB Memory / 1 Intel vCPU / 70 GB Disk / FRA1 - Cloudron 7.6.2 on Ubuntu 22.04

I have a single n8n workflow that reads/updates a single row in Baserow every 10 seconds.

Under normal load, everything works just fine. But I noticed that when I manipulate the Baserow database schema (adding a field, renaming, editing formulas, etc.), this simple workflow brings the n8n and Baserow instances completely down: the server essentially crashes and then recovers at some point.

Essentially, while the Baserow schema is being edited, n8n executions get stuck (what usually takes under 3 seconds can run for 20 minutes). I believe it depends on what changes are made in Baserow and how they affect performance.

I believe the best practice here is to use a dedicated server for each app: one for n8n, one for Baserow. I guess the issue is that n8n keeps piling up executions, which effectively DDoSes the Baserow API. And because they run on the same server, this also affects n8n itself, which becomes unresponsive and doesn’t even let me cancel the executions.
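
As a stopgap, I’m considering something like the pattern below to keep executions from piling up: a request timeout shorter than the poll interval, so a slow Baserow (e.g., mid schema change) causes a skipped tick instead of a growing backlog. This is a minimal Python sketch, not my actual workflow; the URL, token, IDs and field name are placeholders.

```python
import time
import requests

# Placeholder values -- replace with your own instance URL, token and IDs.
BASEROW_URL = "https://baserow.example.com"
TABLE_ID = 123
ROW_ID = 1
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}

POLL_INTERVAL = 10   # seconds between runs, matching the workflow
TIMEOUT = 8          # fail fast: shorter than the interval, so calls never stack up

def poll_once():
    """Read a row and write it back; give up quickly if Baserow is busy."""
    url = f"{BASEROW_URL}/api/database/rows/table/{TABLE_ID}/{ROW_ID}/?user_field_names=true"
    try:
        row = requests.get(url, headers=HEADERS, timeout=TIMEOUT).json()
        requests.patch(url, headers=HEADERS, timeout=TIMEOUT,
                       json={"Notes": row.get("Notes", "")})
    except requests.Timeout:
        # Skip this tick instead of queueing another request behind a slow one.
        print("Baserow is slow (schema change in progress?); skipping this run")

while True:
    poll_once()
    time.sleep(POLL_INTERVAL)
```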

I’m a bit surprised by that behavior, though. I understand that manipulating the Baserow schema takes significant resources to apply the changes, but I fail to understand why the side effects are so severe.

Hi @Ambroise-DNA-PC, this sounds like a resource problem. I think you can easily combine n8n and Baserow on one server, but I don’t think you have enough memory and CPUs to do so. Because there aren’t enough resources, it sounds like Baserow can only handle one request at a time. If you make a schema update, which can take a bit longer, it will block all other requests, causing the whole system to slow down. It could also be that the server runs out of memory and crashes for that reason. I would do the following:

  • Upgrade to a server with 4 GB of memory and 2 vCPUs.
  • Make sure that Baserow has 3 GB of memory allocated.
  • Set the environment variable BASEROW_AMOUNT_OF_GUNICORN_WORKERS to 5 to run 5 gunicorn workers. This will allow Baserow to serve 5 concurrent requests. If you allocate less memory, this number needs to be lower as well (see the sketch after this list).
  • Set the environment variable BASEROW_AMOUNT_OF_WORKERS to 2 so that two queued asynchronous tasks can run in the background.
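
As a rough rule of thumb, you can budget the gunicorn worker count from the memory you allocate. The per-worker and base figures below are my own estimates for illustration, not official Baserow numbers:

```python
# Rough gunicorn worker budget. The figures are assumptions for
# illustration, not official Baserow numbers.
BASE_MB = 1024        # assumed fixed overhead for Baserow's other services
PER_WORKER_MB = 300   # assumed memory footprint per gunicorn worker

def suggested_gunicorn_workers(allocated_mb: int) -> int:
    """Workers that fit in the allocation, keeping at least one."""
    return max(1, (allocated_mb - BASE_MB) // PER_WORKER_MB)

print(suggested_gunicorn_workers(3072))  # 3 GB allocation -> 6, so 5 leaves headroom
print(suggested_gunicorn_workers(2048))  # 2 GB allocation -> 3; lower the env var to match
```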

This also happened on another of my clients’ servers, where the setup is much more robust, with 8 GB of RAM. n8n and Baserow are hosted on that server, and an n8n cron task basically brought the Baserow server down.

They changed it to a much more powerful server (32 GB) and the issue disappeared.
It’s a bit concerning that the server would go down “so easily”, though. The lack of understanding/observability as to why things went wrong is also a concern.

Moving forward, we’ll host Baserow on a server dedicated solely to Baserow, to avoid issues spreading between apps.

Is there a cheatsheet along the lines of “how much memory to use for how many workers”, or some general guidance on how to size our server?

Thanks for pointing out those ENV vars, they’ll surely be useful to us!