I’ve found that when generating large numbers of AI fields concurrently in the self-hosted environment, I seem to be limited by the number of Celery workers.
Most of the workers only use around 20-25% of a thread’s (logical core’s) processing power, as reported by top.
I was wondering whether there is any room for improvement here, either in CPU utilisation or by running more concurrent workers. I’m not entirely sure what the actual bottleneck is in this case, so I’d like to brainstorm some options.
What would be the risks of running more Celery workers than available cores in this specific context? A rough sketch of the kind of change I have in mind is below.
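For context, by “more concurrent workers” I mean something along these lines, assuming a standard Celery setup; the app name, broker URL, and the numbers are placeholders rather than the product’s actual configuration:

```python
# Hypothetical sketch: raising Celery concurrency for I/O-bound AI field
# generation. The app name, broker URL, and concurrency numbers are
# placeholders, not the real self-hosted configuration.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

# Option 1: more prefork processes than physical cores. This can help when
# tasks mostly wait on external AI API calls, but each process carries its
# own memory footprint and adds context-switching overhead.
app.conf.worker_concurrency = 16

# Option 2: a green-thread pool (gevent/eventlet), which can run many
# concurrent I/O-bound tasks inside a single process without extra cores.
# app.conf.worker_pool = "gevent"
# app.conf.worker_concurrency = 100
```

If the tasks really are spending most of their time waiting on the AI provider rather than doing CPU work, my understanding is that option 2 (or simply oversubscribing processes) is the usual approach, but I’d like to confirm whether that’s safe here.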