My current build:
Baserow 1.29, self-hosted on a Linode, with Docker containers managed by Easypanel
I got AI working great with gpt-4o-mini via the environment variables and my API key; however, I’ve run into an issue when trying to analyze uploaded PDF files.
Specifically, I’d like to analyze equipment owner’s manuals, so I’m uploading the PDF manuals to a “File” field and then using the “AI prompt” field to pull whatever information I want.
When I attempt to “Generate”, the field loads/spins for a few seconds before showing an API_key error. However, when running the AI field with text-only prompts, it works fine, so the files seem to be the issue for some reason.
Yes, I have selected the field in the “Files” dropdown within the AI prompt field. The files are typically 1MB - 5MB max.
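For reference, here’s roughly what the AI-related part of my environment looks like in Easypanel (the key is redacted and the values are placeholders; the variable names are the ones I took from the Baserow configuration docs):

# AI settings passed to the Baserow container (placeholder values)
BASEROW_OPENAI_API_KEY=sk-xxxx
BASEROW_OPENAI_MODELS=gpt-4o-mini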
I am not referencing the file field directly in the prompt text, though, since I assumed I wouldn’t need to, given it’s already selected in the dropdown… could this be a cause of the issue?
You are correct: selecting the file field in that form should be enough; you don’t need to reference it in the text.
Could you please examine the logs of your Baserow instance, specifically where the Celery export worker runs? I think you should see some error message there.
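If it helps, something along these lines run against the Baserow container should surface them (the container name will differ in your Easypanel setup):

# Follow the container logs and filter for the export worker output
docker logs -f <baserow-container-name> 2>&1 | grep -i export_worker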
Thanks for your message. I’ve since upgraded to 1.30 and adjusted the number of workers to 4 in my environment variables, which didn’t change the outcome.
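(For what it’s worth, the worker setting I changed is the one below; I’m assuming that’s the right variable from the configuration docs.)

BASEROW_AMOUNT_OF_WORKERS=4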
Below is the only error I’m seeing from the Celery workers when attempting to run the AI prompt:
[EXPORT_WORKER][2024-12-20 15:02:01] TypeError: Messages.create() got an unexpected keyword argument 'temperature'
[EXPORT_WORKER][2024-12-20 15:02:01] TypeError: Messages.create() got an unexpected keyword argument 'temperature'
Yes, this is the problem. You can try setting the temperature to 1, but I suspect that won’t work either.
It seems that Baserow can’t be used with this model at this time; we could open a ticket to support it. Try to use the file field with the gpt-3.5-turbo or gpt-4-turbo-preview models. These should work.
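If those models don’t show up in the field’s model dropdown, you may need to list them in your models environment variable, roughly like this (assuming you’re using the standard variable name):

BASEROW_OPENAI_MODELS=gpt-4o-mini,gpt-3.5-turbo,gpt-4-turbo-preview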
It’s odd, because as soon as I set the “File” to none, the prompt works fine based on the info fed to it in the prompt text. It seems to be just the file that’s messing with it. Setting the temperature, as you suggested, didn’t change it.
I’ll attempt to include a few other models and see what works.
Thanks for your help.
Edit: I have tried gpt-3.5-turbo to no avail; I may try to link an additional non-OpenAI API when I next have time.
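If I do, I expect it will look something like the lines below, though I still need to check the configuration reference for my version for the exact variable names, so treat these as assumptions on my part:

# Assumed variable names for an alternative provider; verify against the Baserow config reference
BASEROW_ANTHROPIC_API_KEY=<your-key>
BASEROW_ANTHROPIC_MODELS=<model-name>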