Error in Airtable import

Hi,
After an unsuccessful Airtable import attempt, I can't run that function again because of this error:

“Another import job is already running. You need to wait for that one to finish before starting another.”

How can I solve this issue?

Thanks in advance.

Hello @fasta, an import can run for a maximum of 30 minutes. If you wait a bit, your imported database will either become visible or the import will be cancelled (if you import a huge Airtable base, it might fail).

Regarding the error message: somehow, another Airtable import is still running in the background. It could be that you started an import, closed the modal/popup, then opened it again and tried to run another one. You can only run one import per account, so starting the second one resulted in this error.

Hope that answers all your questions, and sorry for the delay in replying; I've been checking this case with our founder Bram :blush:

Hi, thank you for the response. Unfortunately, the problem is a bit deeper: the error is still present, even though much more than 30 minutes has passed :frowning:
We tried to find a solution ourselves. Our current idea is to reset the Redis cache (based on an analysis of the code snippet below), but we are not sure whether that is the correct approach :slight_smile:


    # A user can only have one Airtable import job running simultaneously. If one
    # is already running, we don't want to start a new one.
    running_jobs = AirtableImportJob.objects.filter(user_id=user.id).is_running()
    if len(running_jobs) > 0:
        raise AirtableImportJobAlreadyRunning(
            f"Another job is already running with id {running_jobs[0].id}."
        )

Hey @fasta, the import is still marked as running. You need to change something in the PostgreSQL database to recover from this state, not in Redis.

Oh, we will try, thanks.
840 tables so far, it’ll be a hard task. :slight_smile:


Hi @fasta. I am receiving the same error message (though I am importing data from a CSV file, not from Airtable). How did you finally resolve the issue? Thanks.

Figured it out. Deleted failed jobs from the database_fileimportjob and core_job tables.

For anyone stuck on this…

Follow this guide to get your PostgreSQL credentials, download DBeaver, and access the database from your PC/laptop. Go to database_fileimportjob, find the record that still has a filename in it, and delete that row; after that it will work. You may also have to do the same in core_job, as CharlesC says…
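
In case it helps, here is the same cleanup as plain SQL instead of deleting rows through the DBeaver UI. It is only a sketch: the two table names come from this thread, but the id value is a placeholder and the primary-key column may be named differently (for example job_ptr_id instead of id) depending on your Baserow version, so run the SELECTs and check the schema before deleting anything.

    -- Rough SQL version of the cleanup above (back up your database first).
    -- Table names come from this thread; 42 is a placeholder id, and the
    -- primary-key column may be called job_ptr_id instead of id in some
    -- versions, so inspect the SELECT output before running the DELETEs.
    SELECT * FROM database_fileimportjob;  -- find the stuck import row and note its id
    SELECT * FROM core_job;                -- find the matching job row

    DELETE FROM database_fileimportjob WHERE id = 42;  -- replace 42 with the stuck row's id
    DELETE FROM core_job WHERE id = 42;                -- same id for the parent job row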

We have found the root cause of the bug causing these jobs to get stuck and will work on a fix now :slight_smile: