Table appears to be corrupted

Are you using our SaaS platform (Baserow.io) or self-hosting Baserow?

Self-hosted

If you are self-hosting, what version of Baserow are you running?

Version 1.33.2

If you are self-hosting, which installation method do you use to run Baserow?

Docker Version 28.5.2

What are the exact steps to reproduce this issue?

A single table in a workspace began displaying this error today:

Something went wrong
Something went wrong while loading the page. Our developers have been notified of the issue. Please try to refresh or return to the dashboard.

This table is 1 of ~140 tables in a Baserow database my client is using - none of the other tables are experiencing any trouble.

When I try to duplicate or delete the table, the following error message appears:

Action not completed.
The action couldn’t be completed because an unknown error has occurred.

If I try to export the table:

Export Failed
The export failed due to a server error.

I can rename the table.

There are no errors if I export the workspace, but when importing that exported zip file into a new workspace (in the same Baserow installation), I get:

Action not completed.
Something went wrong during the import_applications job execution.

(which means this client can also no longer reliably perform backups via the export routine).

I can provide the log files, but don’t want to upload in the public forum.

This log seems to point to the issue:

django.db.utils.ProgrammingError: column database_table_2860.field_23599 does not exist
HINT: Perhaps you meant to reference the column "database_table_2860.field_23499".

So in Baserow’s metadata, there is still a field with id 23599 for table 2860, but in the underlying Postgres table (database_table_2860), the corresponding column field_23599 is missing?
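To see whether this was the only mismatch of that kind, a query along these lines should enumerate every metadata field whose backing column is missing. This is a sketch: it assumes Baserow's `database_field` metadata table and the `field_<id>` column naming convention, and it may flag field types (such as link-row fields) that intentionally have no physical column of their own.

```sql
-- Fields registered in Baserow's metadata whose physical column is
-- missing from the corresponding database_table_<id> table.
-- Note: some field types store no column of their own, so any rows
-- returned here need to be checked individually before acting on them.
SELECT f.id, f.table_id, f.name
FROM database_field f
WHERE NOT EXISTS (
    SELECT 1
    FROM information_schema.columns c
    WHERE c.table_schema = 'public'
      AND c.table_name  = 'database_table_' || f.table_id::text
      AND c.column_name = 'field_' || f.id::text
);
```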

I think this has been resolved, but I’d like to know I handled it correctly, and haven’t potentially caused any other problems unknowingly.

I used:

docker exec -it baserow /baserow.sh backend-cmd-with-db manage dbshell

baserow=> SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'database_table_2860'
AND column_name = 'field_23599';
 column_name | data_type
-------------+-----------
(0 rows)

I assumed a text data type for the missing field:

baserow=> ALTER TABLE database_table_2860
ADD COLUMN field_23599 text;
ALTER TABLE

baserow=> SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'database_table_2860'
AND column_name = 'field_23599';
 column_name | data_type
-------------+-----------
 field_23599 | text
(1 row)
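Assuming `text` was a guess on my part; Baserow records each field's concrete type in its metadata, so in principle the original type can be confirmed before (or after) re-adding the column. A sketch, assuming the field metadata carries a Django content-type reference — the exact column names may differ between Baserow versions:

```sql
-- Look up what Baserow thinks field 23599 is, to confirm whether
-- "text" was the right column type to restore.
SELECT f.id, f.name, ct.model AS field_type
FROM database_field f
JOIN django_content_type ct ON ct.id = f.content_type_id
WHERE f.id = 23599;
```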

The table is back up and running, but import of an exported workspace is still returning the same error, so I’m concerned there’s still a mismatch in the metadata.

Update: in the process of troubleshooting, I had put a table in the trash that other tables linked to. After restoring it from the trash, workspace export/import works again.

I ran some tests to make sure everything else is in good shape:

For:

docker logs --since=24h baserow | grep -i "does not exist" || echo "No 'does not exist' errors in last 24h"

all of the "does not exist" errors in the last 24h are 'column database_table_2860.field_23599 does not exist'

baserow=> SELECT t.id, t.name
FROM database_table t
WHERE NOT EXISTS (
SELECT 1
FROM information_schema.tables it
WHERE it.table_schema = 'public'
AND it.table_name = 'database_table_' || t.id::text
);
 id | name
----+------
(0 rows)

baserow=> SELECT f.id,
f.name,
f.table_id
FROM database_field f
LEFT JOIN database_table t ON t.id = f.table_id
WHERE t.id IS NULL
ORDER BY f.id;
 id | name | table_id
----+------+----------
(0 rows)
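One more check in the same spirit, this time in the opposite direction: physical columns that exist in a database_table_<id> table but have no matching row in database_field. A sketch, assuming all user-data columns follow the `field_<id>` naming pattern (Baserow's bookkeeping columns like id and order are excluded automatically by the pattern):

```sql
-- Orphaned physical columns: field_<id> columns with no metadata row.
SELECT c.table_name, c.column_name
FROM information_schema.columns c
WHERE c.table_schema = 'public'
  AND c.table_name LIKE 'database\_table\_%'
  AND c.column_name LIKE 'field\_%'
  AND NOT EXISTS (
      SELECT 1
      FROM database_field f
      WHERE c.column_name = 'field_' || f.id::text
  );
```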

Is there any other test you recommend to check integrity?

From what I saw in the Baserow trash and log history, the cause appears to be this: one of my client's employees created some new schema with link-row relationships to a table that they later deleted, leaving orphaned relationships. When they discovered a problem with the orphaned schema, they deleted the tables containing the orphaned links in several steps, and then created replacement tables for the ones they had deleted, some with different names and some with the same names as the deleted tables. It took a while to work out which tables had been deleted incorrectly because the naming was a mess, and the original deleted table that caused the orphaned relationship appears to have been purged from the trash history. So none of this appears to have been an issue with Baserow's default handling of Postgres tables or metadata.

Hey @NickAntonaccio, glad to hear the team managed to resolve the problem, do you still need any assistance with this?

I’m certain enough that there’s currently no issue with integrity, but if a technician at Baserow has time at some point to share their validation strategy for a case like this, it would be appreciated to have that documentation. It’s not urgent at this point.
