Wow, really?
Actually, I already switched to NocoDB, but that sounds really great!
I shared a lot of error logs, but I don’t think your team understood what I was facing,
because you said my issue had been fixed in the previous release.
I spent two weeks trying to fix my problem, but had no luck.
But thank you for your kindness.
I will try again, and if my issue is gone, I will come back to Baserow.
Thank you so much!
Hi @bram
I tested with the Country table as a simple case, and yup, it looks like it’s working.
But I will test with tens of records to make sure.
Thank you so much!
I’m happy to help with reporting, too.
Hi @Long, I’m glad to hear that you’re willing to give Baserow another shot! It’s always difficult to debug random errors in someone else’s self-hosted environment. It actually seemed like there were two different problems, and you ran into both of them. I suspect this is because you were making a lot of API requests to the instance, which triggered the problem. Anyway, everything should work as expected now. Please let me know if you have any other questions.
Hi @bram.
Thanks for letting me know.
Do you have any updates or did you fix your platform issue?
I’d really love to use Baserow, but it still has the same critical problem as before.
Please let me know, thanks.
Hi @bram or @davide
I asked about this problem a while ago.
Could you tell me whether this issue has been fixed or not?
I want to use Baserow, but I haven’t received any updates in a long time.
Please let me know, thanks.
Hi @Long, as Bram mentioned in his last message, everything should work as expected now.
@Long is it working properly for you now?
@bram I seem to be hitting this issue now on my instance, running 1.32.5.
I’m trying to run an automation that’s kicked off by webhooks, pulling data from Baserow and updating it in the flow, but after a number of items it starts spitting out Server 500 errors:
The service was not able to process your request
Server Error (500)
Can you please provide the logs of your self-hosted after you’ve received the 500 errors? That should give us a better idea of what’s going wrong.
Sure where do I grab that log?
This depends on how you’re self-hosting, but typically you run docker logs baserow.
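If you’re running the official all-in-one Docker image, something like the following should work; the container name baserow is an assumption here, so check docker ps if yours is named differently:

```shell
# Show the last 200 log lines and keep following new output.
# "baserow" is the assumed container name; adjust if needed.
docker logs --tail 200 -f baserow
```

You can drop -f if you just want a one-off snapshot of the logs to share.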
Perfect, found it and sent it over.
Hey @DataSwami, thank you for sharing the logs. I’ve identified the error, and it’s related to a known problem that’s currently being fixed (#3346 - fixed asgi race condition when accessing auth.User.profile (!3169) · Merge requests · Baserow / baserow · GitLab). We’re expecting to include the proper fix in the upcoming 1.33 release.
It’s related to a problem that can occur if your table has relationships, created-by/last-modified-by fields, and a certain combination of processors or CPU throttling. It’s a difficult problem to reproduce on our side, but we know what is going wrong.
If you can reduce the number of concurrent API requests, that should solve it for you, for now. A proper fix is coming in 1.33.
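For example, if you’re driving the API from a shell script with curl, xargs -P can cap how many requests are in flight at once. The URL, token, table ID, and row IDs below are placeholders for your own setup, not real values:

```shell
# Sketch: fetch a list of rows with at most 2 concurrent requests.
# $BASEROW_URL, $TOKEN, table ID 1, and the row IDs are placeholders.
printf '%s\n' 101 102 103 104 |
  xargs -P 2 -I {} curl -s \
    -H "Authorization: Token $TOKEN" \
    "$BASEROW_URL/api/database/rows/table/1/{}/?user_field_names=true"
```

The same idea applies in n8n or any other client: limit the number of simultaneous calls rather than the total number of calls.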
Ahhh, thanks @bram. For now I have reverted to using “execute by workflow” in n8n and processing all items in a loop, one item at a time. However, this is very slow and not scalable, so hopefully the 1.33 update will resolve it.
Do you know what the ETA is on that release?
Hey @DataSwami, I was looking at some code in Baserow yesterday, and realized that it can also be solved differently for now. If you set the following environment variables:
BASEROW_ASGI_HTTP_MAX_CONCURRENCY=1
BASEROW_AMOUNT_OF_GUNICORN_WORKERS=5
Then this problem should not happen anymore. It does require your Baserow instance to use more memory, but you should be able to import with 5 concurrent API requests without running into the problem.
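If you’re on the all-in-one Docker image, the variables can be passed when starting the container. The command below is a minimal sketch; the container name, volume, port mapping, and image tag are assumptions, so keep your own existing flags when recreating the container:

```shell
# Workaround sketch: one ASGI request per thread, 5 gunicorn workers.
# Container/volume names, port, and image tag are assumptions; adapt
# them to your existing deployment before recreating the container.
docker run -d --name baserow \
  -e BASEROW_ASGI_HTTP_MAX_CONCURRENCY=1 \
  -e BASEROW_AMOUNT_OF_GUNICORN_WORKERS=5 \
  -v baserow_data:/baserow/data \
  -p 80:80 \
  baserow/baserow:1.32.5
```

With docker-compose, the equivalent is adding both variables under the service’s environment section.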
We’re aiming to release version 1.33 of Baserow around the end of this month.
Thanks @bram. In terms of doing that, would it essentially queue any calls, or would it time them out if the workers are busy?
By default, Baserow starts in a multithreaded way. If you make more requests, it spins up more threads. The bug you encountered is related to a thread-safety problem in a unique combination of hardware, database schema, and number of requests.
Setting the related environment variables prevents starting new threads. Instead, it starts 5 workers to handle concurrent requests. Each needs its own memory to run, but they run completely isolated from each other, preventing the thread problem from occurring.
So it’s still possible to make concurrent requests.
Ok, so if more than 5 concurrent requests are made, would it hold them in a queue?
Yes, they will automatically be queued.