Heroku Issues (Dynos) & Baserow / NocoDB

Hi, we are experiencing quite a few issues with our self-hosted Baserow instance on Heroku / PostgreSQL.

Our dynos (Performance tier) are maxed out and consistently exceed their memory quota (typically 105%+).

So far this is with only 200k records, and we are looking to add millions more.

Is this a common theme with Baserow? Our Redis plan is also constantly maxed out.

We’re now testing out NocoDB. I would love to stay with Baserow, but it doesn’t seem very efficient, even with our small 200k-record test.

If you have any pointers on how to optimise this, or can point out common mistakes we might be making, I’m happy to hear them.

Thanks,

Hi @Emily, I’m sorry to hear that you’re running into performance problems. Would you mind sharing more information about your current Heroku setup? Ideally, I’d like to know the following:

  • What type of Dyno do you have (specs), and how many do you have running?
  • What does your database look like: how many tables, fields, relationships, and rows per table? (The sketch below shows one way to pull these numbers.)
  • What type of PostgreSQL database are you using?
  • What type of Redis database do you have?
  • How many API requests are you making to Baserow? A high volume of API requests can slow things down.

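If it helps with gathering those numbers, here is a quick sketch. It assumes you can open a database console with the Heroku CLI (`heroku pg:psql`); the query itself is standard PostgreSQL and not Baserow-specific:

```sql
-- Approximate row counts and on-disk size per table, from PostgreSQL's
-- statistics views. The counts are estimates maintained by autovacuum/ANALYZE,
-- which is accurate enough to get a picture of the database.
SELECT
  relname                                        AS table_name,
  n_live_tup                                     AS approx_rows,
  pg_size_pretty(pg_total_relation_size(relid))  AS total_size
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 25;
```

`heroku pg:info` and `heroku redis:info` should show the plan and limits of the Postgres and Redis add-ons, and the Metrics tab in the Heroku dashboard shows dyno memory usage over time.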
Whether there is room to optimize depends a bit on what your database looks like. There might be, but keep in mind that millions of rows is a lot for no-code databases like Baserow, Airtable, etc. We’ve built Baserow to scale, and in the upcoming weeks the dev team is focused on making more performance improvements, which might already help.