Error when trying to turn on indexing for text fields

How to reproduce:
Create a table with text fields containing longer text strings (long text or single line text), then try to turn on indexing and click save:

Error:
Action not completed.
The action couldn’t be completed because an unknown error has occurred.

The corresponding log item seems to be:
[POSTGRES][2025-09-01 21:13:09] 2025-09-01 21:13:09.084 UTC [xxx] baserow@baserow ERROR: index row size 3592 exceeds btree version 4 maximum 2704 for index "database_table_xxx_field_xxx_xxx"

This seems like a Postgres limitation (?), but then why is indexing even offered for such “offending” fields?
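For reference, the numbers in the log line match Postgres’s own btree limit: an index entry may not exceed roughly a third of a page, which for btree version 4 is 2704 bytes. A minimal sketch of the kind of pre-check involved, assuming the limit applies to the UTF-8 byte length of the value plus a small per-entry overhead (the 8-byte overhead constant here is an assumption for illustration, chosen so the numbers line up with the log):

```python
# Sketch: would a text value exceed Postgres's btree v4 index row
# size limit? The 2704-byte maximum comes straight from the error
# message; the per-entry overhead is an illustrative assumption.

BTREE_V4_MAX_ROW_BYTES = 2704  # from the log: "maximum 2704"

def fits_in_btree_index(value: str, overhead: int = 8) -> bool:
    """True if the UTF-8 encoding of `value`, plus the assumed
    per-tuple overhead, stays within the btree row size limit."""
    return len(value.encode("utf-8")) + overhead <= BTREE_V4_MAX_ROW_BYTES

print(fits_in_btree_index("a" * 255))   # True  - short single line text
print(fits_in_btree_index("a" * 3584))  # False - 3584 + 8 = 3592, as in the log
```

Note that the limit is in bytes, not characters, so multi-byte UTF-8 text hits it at fewer characters than plain ASCII does.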

I had kind of hoped this was some custom solution that would enable faster “contains” search in really long texts, which is what could use a speedup in very large tables.
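For what it’s worth, Postgres can speed up “contains” searches with trigram indexing (the pg_trgm extension plus a GIN index), which sidesteps the row size limit because it indexes three-character substrings rather than whole values. A rough Python sketch of the trigram extraction idea (the padding convention mimics pg_trgm, but this is only an illustration of the technique, not how Baserow implements search):

```python
# Illustration of trigram extraction, loosely mimicking pg_trgm:
# each word is lowercased, padded with two spaces in front and one
# behind, then split into overlapping 3-character substrings.
import re

def trigrams(text: str) -> set[str]:
    grams: set[str] = set()
    for word in re.findall(r"\w+", text.lower()):
        padded = "  " + word + " "
        grams.update(padded[i:i + 3] for i in range(len(padded) - 2))
    return grams

# A "contains" search then only has to scan rows whose trigram sets
# overlap the query's trigrams -- that is what a GIN index provides.
print(sorted(trigrams("cat")))  # ['  c', ' ca', 'at ', 'cat']
```

Because every row contributes many small trigram entries instead of one entry per value, the index stays valid for arbitrarily long text, at the cost of extra index size.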


Thanks for the report @dev-rd, we are looking into it! Will provide an update when I can.


The underlying issue here is that there is a size limit for indexing (the maximum size is 8191 bytes). Therefore, we will likely disable indexing for long text fields.

Thanks for the info!
In my case this also happens for single line text fields containing mostly just one sentence per item…

Could you share an example of one of those strings? Because that should not be the case for sure.

I can say that it works fine for single line text which contains no more than 255 chars.
The procedure throws errors once any cell is somewhere between 2000 and 4000 characters long, or longer.

Also, it seems I cannot copy long text into cells after turning indexing on.
The maximum length I can copy into a cell with indexing turned on is about 3952 characters.

The truncation is expected behaviour given the limitation. Also, I’d say that a single line text field probably shouldn’t be this long, but I do agree that it should be handled better.

Here is the gitlab issue to track: Disable custom indexes for Long text field (#3813) · Issues · Baserow / baserow · GitLab

@cwinhall One more thing: we have a workspace where we test large tables, and after turning on indexing for a few of the fields in the test tables, I noticed that our instance’s size increased by about 50 GB (there are a few test tables with about 500K rows each).

The size increase seems to coincide with the Postgres table named search_workspace_XXX_data, which is about 50 GB in size (XXX is the number of the test workspace where we keep the large tables).
Is that expected after turning indexing on (if related, of course), and is the size expected? The original tables where I turned the index on were about 7 GB and 3 GB in size. There is another table, about 17 GB in size, in that workspace, but I have not turned on indexing for any of the fields in that table.

Also, the search table seems to persist even after deleting the indexed fields, or even entire tables.