New Timeout Errors When Using Baserow's API in the Hosted Version

Are you using our SaaS platform (Baserow.io) or self-hosting Baserow?

SaaS

What are the exact steps to reproduce this issue?

I am running a batch task where I have to update thousands of rows in my Baserow table.

Currently, I am running a procedure in Make.com where I put the process to sleep for 2 seconds, then do a search via the Baserow API to find the value, then make another call to update the row. Now task 423 fails with this error. It seems to be a timeout, probably because I triggered some limit on the API calls.
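For reference, the per-row flow described above (sleep, search by a name field, then update) can be sketched in Python using only the standard library. The filter query syntax and endpoint paths below are my assumptions based on Baserow's REST API, not a verified implementation; adjust them to your table and field names:

```python
import json
import time
from urllib import request, parse

BASE = "https://api.baserow.io/api/database/rows/table"

def call_api(method, url, token, payload=None):
    # Minimal HTTP helper using only the standard library.
    data = json.dumps(payload).encode() if payload is not None else None
    req = request.Request(url, data=data, method=method, headers={
        "Authorization": f"Token {token}",
        "Content-Type": "application/json",
    })
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

def update_by_name(table_id, token, name_field, name, new_values,
                   api=call_api, pause=2.0, sleep=time.sleep):
    # Sleep, search by the name field, then update the matched row --
    # the same three steps as the Make.com scenario described above.
    sleep(pause)
    query = parse.urlencode({
        "user_field_names": "true",
        # Assumed filter syntax; check the API docs for your field.
        f"filter__{name_field}__equal": name,
    })
    found = api("GET", f"{BASE}/{table_id}/?{query}", token)
    if not found["results"]:
        return None  # no row matched the name
    row_id = found["results"][0]["id"]
    return api("PATCH", f"{BASE}/{table_id}/{row_id}/?user_field_names=true",
               token, new_values)
```

Note that each row still costs two API requests (search plus update), which is what makes this pattern sensitive to rate limits and gateway timeouts in the first place.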



For this process, unfortunately, I cannot run a batch update, as the format of my input data is messy and I need to run a matching process against a name field (rather than the ID field).

Some months ago, this process worked perfectly even without a 2-second sleep module in Make.com. I updated thousands of rows in my Baserow tables overnight without any problem. Now I receive timeout errors all the time, and I don't understand why. The IP I am calling from is whitelisted, and I am properly authorized.

Hey @artoflogic, I’m sorry to hear that you ran into problems with the API. We made some resource optimizations 2 weeks ago, but we’ve been hearing from other users that they ran into 502 and 504 errors as well. We rolled this change back two days ago, and other users confirmed that it works as expected again. Would you mind trying one more time?

Hi @bram,

Thank you for looking into this. The issue remains. I could run a few hundred updates to my table, but then I got a bad gateway error again.

I am trying to run a batch of 500 updates to my table via a loop in my automation, but I still get this timeout error. I completed somewhere between 200 and 300 updates before it happened again:

The threshold still seems too low for my purposes, I think.
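One mitigation while the threshold is an issue: wrap each update in a retry with exponential backoff, so an occasional 502 pauses the run instead of aborting it. A minimal stdlib-only sketch; `TransientError` is a hypothetical stand-in for however your HTTP client surfaces a 502/504 response:

```python
import time

class TransientError(Exception):
    """Hypothetical stand-in for a 502/504 response from the API."""

def with_retries(call, attempts=5, base_delay=1.0, sleep=time.sleep):
    # Exponential backoff: wait 1s, 2s, 4s, ... between failed attempts,
    # giving the server room to recover before the next request.
    for attempt in range(attempts):
        try:
            return call()
        except TransientError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            sleep(base_delay * (2 ** attempt))
```

In Make.com the equivalent would be an error-handler route with a sleep module, but the same backoff idea applies.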

A second observation: when you run a “list rows” command via the API, I always had to set the limit to around 100 rows, which I also find too low. In WordPress you can fetch lists of 1,000 rows. Given that Baserow is built on a better-performing database structure, I think the limit should also be more like 1,000 rows, not 100, before you get a timeout error. It should be possible to query a table and list at least 1,000 entries, in my humble opinion. However, this issue is not new; it has always been like this, and it’s separate from my first observation.
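For what it's worth, paging through the list-rows endpoint is a way to assemble 1,000+ rows despite the per-request cap. A sketch, assuming Baserow's `page`/`size` query parameters and its `next` field in the response; `fetch` is a caller-supplied function (hypothetical name) that GETs a URL and returns the decoded JSON:

```python
from urllib.parse import urlencode

def list_all_rows(table_id, fetch, page_size=100):
    # Collect a full table by paging the list-rows endpoint, since a
    # single request cannot return arbitrarily many rows.
    base = f"https://api.baserow.io/api/database/rows/table/{table_id}/"
    rows, page = [], 1
    while True:
        data = fetch(base + "?" + urlencode(
            {"user_field_names": "true", "page": page, "size": page_size}))
        rows.extend(data["results"])
        if data.get("next") is None:  # no further page available
            return rows
        page += 1
```

Each request stays small enough to finish within the gateway timeout, at the cost of more round trips.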

Hey @artoflogic, I’d love to get a better understanding of how your database is structured, so that we can identify the problem. Would you mind sharing what the schema of your database looks like (or which table it is)? Then we can analyze where these performance problems are coming from.

Dear @bram , I sent you a private message trying to give more details. Cheers

Hey @artoflogic, thank you for sharing all the details privately. We have just made some additional improvements to the network communication between our load balancer and application servers. Before, we could see a number of 502 responses every minute; that number has now dropped to basically zero.

However, I also noticed that your table contains a lot of relationships. If there are many lookups and formulas in the related tables, then create, update, and delete requests can be slowed down by them. If you’re doing many updates, it can happen that rows in your database are still locked when the next request arrives, which can result in a 502.

The good news is that we’re currently working on multiple improvements on that front (Draft: Formula update cte performance improvement (!3459) · Merge requests · Baserow / baserow · GitLab, Resolve "Limit the number of relations returned in link row fields" (!3442) · Merge requests · Baserow / baserow · GitLab, and another one related to the search index updates).

Hi @bram, ok that sounds great. Thank you!