Server Error after upgrading to PostgreSQL 15

Please fill in the questionnaire below.

Technical Help Questionnaire

Have you read and followed the instructions at: *READ ME FIRST* Technical Help FAQs - #2 by nigel?

Answer: Yes

How have you self-hosted Baserow?

docker-compose.yml:

version: "3.4"
x-backend-variables: &backend-variables
  # Most users should only need to set these first four variables.
  SECRET_KEY: ${SECRET_KEY:?}
  BASEROW_JWT_SIGNING_KEY: ${BASEROW_JWT_SIGNING_KEY:-}
  DATABASE_PASSWORD: ${DATABASE_PASSWORD:?}
  REDIS_PASSWORD: ${REDIS_PASSWORD:?}
  # If you manually change this line make sure you also change the duplicate line in
  # the web-frontend service.
  BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}

...
  PRIVATE_BACKEND_URL: http://backend:8000
  
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    environment:
      # Controls what port the Caddy server binds to inside its container.
      BASEROW_CADDY_ADDRESSES: ${BASEROW_CADDY_ADDRESSES:-:80}
      PRIVATE_WEB_FRONTEND_URL: ${PRIVATE_WEB_FRONTEND_URL:-http://web-frontend:3000}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL:-}
    ports:
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_PORT:-80}:80"
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_SSL_PORT:-443}:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - media:/baserow/media
      - caddy_config:/config
      - caddy_data:/data
    networks:
      local:

  backend:
    image: baserow/backend:1.25.1
    restart: unless-stopped

    environment:
      <<: *backend-variables
    depends_on:
      - db
      - redis
    volumes:
      - media:/baserow/media
    networks:
      local:

  web-frontend:
    image: baserow/web-frontend:1.25.1
    restart: unless-stopped
    environment:
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
    ...
    depends_on:
      - backend
    networks:
      local:

  celery:
    image: baserow/backend:1.25.1
    restart: unless-stopped
    environment:
      <<: *backend-variables
    command: celery-worker
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-export-worker:
    image: baserow/backend:1.25.1
    restart: unless-stopped
    command: celery-exportworker
    environment:
      <<: *backend-variables
    # The backend image's baked-in healthcheck defaults to the Django healthcheck;
    # override it with the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-beat-worker:
    image: baserow/backend:1.25.1
    restart: unless-stopped
    command: celery-beat
    environment:
      <<: *backend-variables
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  db:
    image: postgres:15
    # If you were using a previous version, perform the update by uncommenting the
    # following line. See: https://baserow.io/docs/installation%2Finstall-with-docker#upgrading-postgresql-database-from-a-previous-version
    # for more information.
    # image: pgautoupgrade/pgautoupgrade:15-alpine3.8
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      local:
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:6
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
    networks:
      local:

  volume-permissions-fixer:
    image: bash:4.4
    command: chown 9999:9999 -R /baserow/media
    volumes:
      - media:/baserow/media
    networks:
      local:

volumes:
  pgdata:
  media:
  caddy_data:
  caddy_config:

networks:
  local:
    driver: bridge


.env :

SECRET_KEY=***************************
DATABASE_PASSWORD=******************
REDIS_PASSWORD=***************************
BASEROW_JWT_SIGNING_KEY=*****************************
BASEROW_PUBLIC_URL=http://localhost

What are the specs of the service or server you are using to host Baserow?

Tested on a Linux Mint host (32 GB RAM, 6 CPUs) running an Ubuntu 20.2 VM (16 GB RAM, 6 threads) in VirtualBox.

Which version of Baserow are you using?

1.25.1 with PostgreSQL 15, migrated from 1.25.1 with PostgreSQL 11.

How have you configured your self-hosted installation?

What commands, if any, did you use to start your Baserow server?

PWD=$PWD HOST_PUBLISH_IP=127.0.0.1 docker-compose up -d

Describe the problem

After upgrading the database to PostgreSQL 15, localhost can no longer be reached: it returns a “server error”.
The backend keeps restarting:
name_backend_1 /usr/bin/tini -- Restarting
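(The status line above is docker-compose ps output; the backend's crash loop can be followed with standard docker-compose commands, e.g.:)

```
# Show each service's state; the backend cycles through "Restarting".
docker-compose ps

# Follow the backend's output to capture the traceback it dies with.
docker-compose logs --tail=200 -f backend
```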

The migration was successful:

db_1 **********************************************************
db_1 Automatic upgrade process finished with no errors reported
db_1 **********************************************************
db_1 2024-06-01 15:10:20.938 UTC [1] LOG: starting PostgreSQL 15.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db_1 2024-06-01 15:10:20.938 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 2024-06-01 15:10:20.938 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 2024-06-01 15:10:20.953 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 2024-06-01 15:10:20.978 UTC [267] LOG: database system was shut down at 2024-06-01 15:10:20 UTC
db_1 2024-06-01 15:10:21.002 UTC [1] LOG: database system is ready to accept connections


How many rows in total do you have in your Baserow tables?

About 9000.

Please attach full logs from all of Baserow's services.

Logs after a successful migration:

```
OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.25.1,deployment.environment=unknown
backend_1                    PostgreSQL is available
backend_1                    python /baserow/backend/src/baserow/manage.py locked_migrate
backend_1                    WARNING: Baserow is configured to use a BASEROW_PUBLIC_URL of http://localhost. If you attempt to access Baserow on any other hostname requests to the backend will fail as they will be from an unknown host. Please set BASEROW_PUBLIC_URL if you will be accessing Baserow from any other URL then http://localhost.
backend_1                    2024-06-01 15:32:40.137 | INFO     | baserow.core.management.commands.locked_migrate:acquire_lock:54 - Attempting to lock the postgres advisory lock with id: 123456 You can disable using locked_migrate by default and switch back to the non-locking version by setting BASEROW_DISABLE_LOCKED_MIGRATIONS=true
backend_1                    2024-06-01 15:32:40.140 | INFO     | baserow.core.management.commands.locked_migrate:acquire_lock:65 - Acquired the lock, proceeding with migration.
backend_1                    Operations to perform:
backend_1                      Apply all migrations: auth, baserow_enterprise, baserow_premium, builder, contenttypes, core, database, db, integrations, sessions
backend_1                    Clearing Baserow's internal generated model cache...
backend_1                    Done clearing cache.
backend_1                    Running migrations:
backend_1                      No migrations to apply.
backend_1                    Submitting the sync templates task to run asynchronously in celery after the migration...
backend_1                    Created 188 operations...
backend_1                    Deleted 0 un-registered operations...
backend_1                    Checking to see if formulas need updating...
backend_1                    2024-06-01 15:32:41.529 | INFO     | baserow.contrib.database.formula.migrations.handler:migrate_formulas:167 - Found 0 batches of formulas to migrate from version 5 to 5.
backend_1                    
0it [00:00, ?it/s]
Finished migrating formulas: : 0it [00:00, ?it/s]
Finished migrating formulas: : 0it [00:00, ?it/s]
backend_1                    
Syncing default roles:   0%|          | 0/7 [00:00<?, ?it/s]
Syncing default roles:   0%|          | 0/7 [00:00<?, ?it/s]
backend_1                    Traceback (most recent call last):
backend_1                      File "/baserow/backend/src/baserow/manage.py", line 41, in <module>
backend_1                        main()
backend_1                      File "/baserow/backend/src/baserow/manage.py", line 37, in main
backend_1                        execute_from_command_line(sys.argv)
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
backend_1                        utility.execute()
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 440, in execute
backend_1                        self.fetch_command(subcommand).run_from_argv(self.argv)
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/core/management/base.py", line 402, in run_from_argv
backend_1                        self.execute(*args, **cmd_options)
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/core/management/base.py", line 448, in execute
backend_1                        output = self.handle(*args, **options)
backend_1                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend_1                      File "/baserow/backend/src/baserow/core/management/commands/locked_migrate.py", line 43, in handle
backend_1                        super().handle(*args, **options)
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/core/management/base.py", line 96, in wrapped
backend_1                        res = handle_func(*args, **kwargs)
backend_1                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/core/management/commands/migrate.py", line 376, in handle
backend_1                        emit_post_migrate_signal(
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/core/management/sql.py", line 52, in emit_post_migrate_signal
backend_1                        models.signals.post_migrate.send(
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/dispatch/dispatcher.py", line 176, in send
backend_1                        return [
backend_1                               ^
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/dispatch/dispatcher.py", line 177, in <listcomp>
backend_1                        (receiver, receiver(signal=self, sender=sender, **named))
backend_1                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend_1                      File "/baserow/enterprise/backend/src/baserow_enterprise/apps.py", line 233, in sync_default_roles_after_migrate
backend_1                        operation, _ = Operation.objects.get_or_create(
backend_1                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/db/models/manager.py", line 85, in manager_method
backend_1                        return getattr(self.get_queryset(), name)(*args, **kwargs)
backend_1                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/db/models/query.py", line 929, in get_or_create
backend_1                        return self.get(**kwargs), False
backend_1                               ^^^^^^^^^^^^^^^^^^
backend_1                      File "/baserow/venv/lib/python3.11/site-packages/django/db/models/query.py", line 653, in get
backend_1                        raise self.model.MultipleObjectsReturned(
backend_1                    __fake__.Operation.MultipleObjectsReturned: get() returned more than one Operation -- it returned 2!
backend_1                    

celery_1                      -------------- default-worker@6b5f3c1f7b2e v5.2.7 (dawn-chorus)
celery_1                     --- ***** ----- 
celery_1                     -- ******* ---- Linux-6.5.0-35-generic-x86_64-with-glibc2.36 2024-06-01 15:32:40
celery_1                     - *** --- * --- 
celery_1                     - ** ---------- [config]
celery_1                     - ** ---------- .> app:         baserow:0x743f29457010
celery_1                     - ** ---------- .> transport:   redis://:**@redis:6379/0
celery_1                     - ** ---------- .> results:     disabled://
celery_1                     - *** --- * --- .> concurrency: 6 (prefork)
celery_1                     -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
celery_1                     --- ***** ----- 
celery_1                      -------------- [queues]
celery_1                                     .> celery           exchange=celery(direct) key=celery
celery_1                                     
celery_1                     
celery_1                     [tasks]
celery_1                       . baserow.contrib.database.export.tasks.clean_up_old_jobs
celery_1                       . baserow.contrib.database.export.tasks.run_export_job
celery_1                       . baserow.contrib.database.fields.tasks.delete_mentions_marked_for_deletion
celery_1                       . baserow.contrib.database.fields.tasks.run_periodic_fields_updates
celery_1                       . baserow.contrib.database.rows.tasks.clean_up_row_history_entries
celery_1                       . baserow.contrib.database.search.tasks.async_update_tsvector_columns
celery_1                       . baserow.contrib.database.table.tasks.create_tables_usage_for_new_database
celery_1                       . baserow.contrib.database.table.tasks.setup_created_by_and_last_modified_by_column
celery_1                       . baserow.contrib.database.table.tasks.setup_new_background_update_and_search_columns
celery_1                       . baserow.contrib.database.table.tasks.unsubscribe_user_from_tables_when_removed_from_workspace
celery_1                       . baserow.contrib.database.table.tasks.update_table_usage
celery_1                       . baserow.contrib.database.views.tasks._check_for_pending_view_index_updates
celery_1                       . baserow.contrib.database.views.tasks.update_view_index
celery_1                       . baserow.contrib.database.webhooks.tasks.call_webhook
celery_1                       . baserow.core.action.tasks.cleanup_old_actions
celery_1                       . baserow.core.jobs.tasks.clean_up_jobs
celery_1                       . baserow.core.jobs.tasks.run_async_job
celery_1                       . baserow.core.notifications.tasks.beat_send_instant_notifications_summary_by_email
celery_1                       . baserow.core.notifications.tasks.send_daily_and_weekly_notifications_summary_by_email
celery_1                       . baserow.core.notifications.tasks.send_queued_notifications_to_users
celery_1                       . baserow.core.notifications.tasks.singleton_send_instant_notifications_summary_by_email
celery_1                       . baserow.core.snapshots.tasks.delete_application_snapshot
celery_1                       . baserow.core.snapshots.tasks.delete_expired_snapshots
celery_1                       . baserow.core.tasks.sync_templates_task
celery_1                       . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion
celery_1                       . baserow.core.trash.tasks.permanently_delete_marked_trash
celery_1                       . baserow.core.usage.tasks.run_calculate_storage
celery_1                       . baserow.core.user.tasks.check_pending_account_deletion
celery_1                       . baserow.core.user.tasks.clean_up_user_log_entry
celery_1                       . baserow.core.user.tasks.flush_expired_tokens
celery_1                       . baserow.core.user.tasks.share_onboarding_details_with_baserow
celery_1                       . baserow.ws.tasks.broadcast_application_created
celery_1                       . baserow.ws.tasks.broadcast_to_channel_group
celery_1                       . baserow.ws.tasks.broadcast_to_group
celery_1                       . baserow.ws.tasks.broadcast_to_groups
celery_1                       . baserow.ws.tasks.broadcast_to_permitted_users
celery_1                       . baserow.ws.tasks.broadcast_to_users
celery_1                       . baserow.ws.tasks.broadcast_to_users_individual_payloads
celery_1                       . baserow.ws.tasks.force_disconnect_users
celery_1                       . baserow_enterprise.audit_log.tasks.clean_up_audit_log_entries
celery_1                       . baserow_enterprise.tasks.unsubscribe_subject_from_tables_currently_subscribed_to_task
celery_1                       . baserow_premium.fields.tasks.generate_ai_values_for_rows
celery_1                       . baserow_premium.license.tasks.license_check
celery_1                       . baserow_premium.usage.tasks.run_calculate_seats
celery_1                       . djcelery_email_send_multiple
celery_1                     

redis_1                      1:C 01 Jun 2024 15:32:35.475 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1                      1:C 01 Jun 2024 15:32:35.475 # Redis version=6.2.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1                      1:C 01 Jun 2024 15:32:35.475 # Configuration loaded
redis_1                      1:M 01 Jun 2024 15:32:35.476 * monotonic clock: POSIX clock_gettime
redis_1                      1:M 01 Jun 2024 15:32:35.486 * Running mode=standalone, port=6379.
redis_1                      1:M 01 Jun 2024 15:32:35.487 # Server initialized
redis_1                      1:M 01 Jun 2024 15:32:35.488 * Ready to accept connections
redis_1                      1:M 01 Jun 2024 15:37:36.038 * 100 changes in 300 seconds. Saving...
redis_1                      1:M 01 Jun 2024 15:37:36.039 * Background saving started by pid 73
redis_1                      73:C 01 Jun 2024 15:37:36.068 * DB saved on disk
redis_1                      73:C 01 Jun 2024 15:37:36.069 * RDB: 0 MB of memory used by copy-on-write
redis_1                      1:M 01 Jun 2024 15:37:36.146 * Background saving terminated with success
db_1                         
db_1                         PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1                         
db_1                         2024-06-01 15:32:36.023 UTC [1] LOG:  starting PostgreSQL 15.7 (Debian 15.7-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
db_1                         2024-06-01 15:32:36.027 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1                         2024-06-01 15:32:36.027 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1                         2024-06-01 15:32:36.049 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1                         2024-06-01 15:32:36.167 UTC [35] LOG:  database system was shut down at 2024-06-01 15:23:53 UTC
db_1                         2024-06-01 15:32:36.311 UTC [1] LOG:  database system is ready to accept connections
db_1                         2024-06-01 15:37:36.248 UTC [33] LOG:  checkpoint starting: time
db_1                         2024-06-01 15:37:42.285 UTC [33] LOG:  checkpoint complete: wrote 59 buffers (0.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=5.829 s, sync=0.162 s, total=6.038 s; sync files=22, longest=0.095 s, average=0.008 s; distance=721 kB, estimate=721 kB
web-frontend_1               
web-frontend_1                ERROR  (node:7) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead.
web-frontend_1               (Use node --trace-deprecation ... to show where the warning was created)
web-frontend_1               
web-frontend_1               ✔ Sentry reporting is enabled (client side: enabled, server side: enabled)
web-frontend_1               ℹ Listening on: http://172.18.0.9:3000/
web-frontend_1               
web-frontend_1                ERROR  connect ECONNREFUSED 172.18.0.6:8000
web-frontend_1               
web-frontend_1                 at module.exports.AxiosError.from (node_modules/axios/lib/core/AxiosError.js:80:0)
web-frontend_1                 at RedirectableRequest.handleRequestError (node_modules/axios/lib/adapters/http.js:610:0)
web-frontend_1                 at RedirectableRequest.emit (node:events:519:28)
web-frontend_1                 at RedirectableRequest.emit (node:domain:488:12)
web-frontend_1                 at eventHandlers.<computed> (node_modules/follow-redirects/index.js:38:24)
web-frontend_1                 at ClientRequest.emit (node:events:531:35)
web-frontend_1                 at ClientRequest.emit (node:domain:488:12)
web-frontend_1                 at Socket.socketErrorListener (node:_http_client:500:9)
web-frontend_1                 at Socket.emit (node:events:519:28)
web-frontend_1                 at Socket.emit (node:domain:488:12)
web-frontend_1                 at Axios_Axios.request (node_modules/axios/lib/core/Axios.js:37:0)
web-frontend_1                 at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
web-frontend_1                 at async Store.load (modules/core/store/settings.js:1:0)
web-frontend_1               
celery-export-worker_1       OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.25.1,deployment.environment=unknown
celery-export-worker_1       WARNING: Baserow is configured to use a BASEROW_PUBLIC_URL of http://localhost. If you attempt to access Baserow on any other hostname requests to the backend will fail as they will be from an unknown host. Please set BASEROW_PUBLIC_URL if you will be accessing Baserow from any other URL then http://localhost.
celery-export-worker_1        
celery-export-worker_1        -------------- export-worker@a224d47a16ff v5.2.7 (dawn-chorus)
celery-export-worker_1       --- ***** ----- 
celery-export-worker_1       -- ******* ---- Linux-6.5.0-35-generic-x86_64-with-glibc2.36 2024-06-01 15:32:40
celery-export-worker_1       - *** --- * --- 
celery-export-worker_1       - ** ---------- [config]
celery-export-worker_1       - ** ---------- .> app:         baserow:0x7cc214b91890
celery-export-worker_1       - ** ---------- .> transport:   redis://:**@redis:6379/0
celery-export-worker_1       - ** ---------- .> results:     disabled://
celery-export-worker_1       - *** --- * --- .> concurrency: 6 (prefork)
celery-export-worker_1       -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
celery-export-worker_1       --- ***** ----- 
celery-export-worker_1        -------------- [queues]
celery-export-worker_1                       .> export           exchange=export(direct) key=export
celery-export-worker_1                       
celery-export-worker_1       
celery-export-worker_1       [tasks]
celery-export-worker_1         . baserow.contrib.database.export.tasks.clean_up_old_jobs
celery-export-worker_1         . baserow.contrib.database.export.tasks.run_export_job
celery-export-worker_1         . baserow.contrib.database.fields.tasks.delete_mentions_marked_for_deletion
celery-export-worker_1         . baserow.contrib.database.fields.tasks.run_periodic_fields_updates
celery-export-worker_1         . baserow.contrib.database.rows.tasks.clean_up_row_history_entries
celery-export-worker_1         . baserow.contrib.database.search.tasks.async_update_tsvector_columns
celery-export-worker_1         . baserow.contrib.database.table.tasks.create_tables_usage_for_new_database
celery-export-worker_1         . baserow.contrib.database.table.tasks.setup_created_by_and_last_modified_by_column
celery-export-worker_1         . baserow.contrib.database.table.tasks.setup_new_background_update_and_search_columns
celery-export-worker_1         . baserow.contrib.database.table.tasks.unsubscribe_user_from_tables_when_removed_from_workspace
celery-export-worker_1         . baserow.contrib.database.table.tasks.update_table_usage
celery-export-worker_1         . baserow.contrib.database.views.tasks._check_for_pending_view_index_updates
celery-export-worker_1         . baserow.contrib.database.views.tasks.update_view_index
celery-export-worker_1         . baserow.contrib.database.webhooks.tasks.call_webhook
celery-export-worker_1         . baserow.core.action.tasks.cleanup_old_actions
celery-export-worker_1         . baserow.core.jobs.tasks.clean_up_jobs
celery-export-worker_1         . baserow.core.jobs.tasks.run_async_job
celery-export-worker_1         . baserow.core.notifications.tasks.beat_send_instant_notifications_summary_by_email
celery-export-worker_1         . baserow.core.notifications.tasks.send_daily_and_weekly_notifications_summary_by_email
celery-export-worker_1         . baserow.core.notifications.tasks.send_queued_notifications_to_users
celery-export-worker_1         . baserow.core.notifications.tasks.singleton_send_instant_notifications_summary_by_email
celery-export-worker_1         . baserow.core.snapshots.tasks.delete_application_snapshot
celery-export-worker_1         . baserow.core.snapshots.tasks.delete_expired_snapshots
celery-export-worker_1         . baserow.core.tasks.sync_templates_task
celery-export-worker_1         . baserow.core.trash.tasks.mark_old_trash_for_permanent_deletion
celery-export-worker_1         . baserow.core.trash.tasks.permanently_delete_marked_trash
celery-export-worker_1         . baserow.core.usage.tasks.run_calculate_storage
celery-export-worker_1         . baserow.core.user.tasks.check_pending_account_deletion
celery-export-worker_1         . baserow.core.user.tasks.clean_up_user_log_entry
celery-export-worker_1         . baserow.core.user.tasks.flush_expired_tokens
celery-export-worker_1         . baserow.core.user.tasks.share_onboarding_details_with_baserow
celery-export-worker_1         . baserow.ws.tasks.broadcast_application_created
celery-export-worker_1         . baserow.ws.tasks.broadcast_to_channel_group
celery-export-worker_1         . baserow.ws.tasks.broadcast_to_group
celery-export-worker_1         . baserow.ws.tasks.broadcast_to_groups
celery-export-worker_1         . baserow.ws.tasks.broadcast_to_permitted_users
celery-export-worker_1         . baserow.ws.tasks.broadcast_to_users
celery-export-worker_1         . baserow.ws.tasks.broadcast_to_users_individual_payloads
celery-export-worker_1         . baserow.ws.tasks.force_disconnect_users
celery-export-worker_1         . baserow_enterprise.audit_log.tasks.clean_up_audit_log_entries
celery-export-worker_1         . baserow_enterprise.tasks.unsubscribe_subject_from_tables_currently_subscribed_to_task
celery-export-worker_1         . baserow_premium.fields.tasks.generate_ai_values_for_rows
celery-export-worker_1         . baserow_premium.license.tasks.license_check
celery-export-worker_1         . baserow_premium.usage.tasks.run_calculate_seats
celery-export-worker_1         . djcelery_email_send_multiple
```

Hi @Sterfield, how did you run the database upgrade? Did you run a container with the pgautoupgrade/pgautoupgrade:15-alpine3.8 image?

Separately, when I look at your log file, it looks like the upgrade was successful. In your backend output log, I’m seeing the error __fake__.Operation.MultipleObjectsReturned: get() returned more than one Operation -- it returned 2!. Could that somehow be related to you having tried to restore your backup into an existing database (Baserow restoration issues), now resulting in duplicate entries? The error certainly looks like it.
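If you want to check that directly, something along these lines could surface duplicates. Fair warning: this is an untested sketch, and core_operation is only my guess at the table backing the Operation model, so list the candidate tables first and adjust the name:

```
# List tables whose name contains "operation" -- the exact table name is an
# assumption here, so verify it before running the query below.
docker-compose exec db psql -U baserow -d baserow -c '\dt *operation*'

# Count operation names that appear more than once; any rows returned would
# explain get_or_create() raising MultipleObjectsReturned.
docker-compose exec db psql -U baserow -d baserow \
  -c "SELECT name, COUNT(*) FROM core_operation GROUP BY name HAVING COUNT(*) > 1;"
```

(baserow is the default DATABASE_USER/DATABASE_NAME from your compose file; adjust both if you overrode them.)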

Hi @bram, thank you for your answer!
I retested with a fresh installation of Baserow 1.23.2 in a virtual machine, into which I copied the media and pgdata folders from my original database (which has never been restored from a backup). The recreated database is perfectly functional:

(screenshot: health check)

Then I upgraded from PostgreSQL 11 to PostgreSQL 15 with:

pgautoupgrade/pgautoupgrade:15-alpine3.8

The db logs confirm that the upgrade completed successfully, but the problem remains, with the same error message:

__fake__.Operation.MultipleObjectsReturned: get() returned more than one Operation -- it returned 2!

Could the original database be corrupted?
Might it be worth migrating it to the Baserow server?

Hey @Sterfield, I’m sorry to hear that you’re running into problems. Honestly, I’ve not seen anything similar before, so I’m not really sure how to help you. Something else you can try is Baserow’s export and import commands, which work at the application level instead of the PostgreSQL database level. These are the commands we would normally use to migrate workspaces from one Baserow instance to another.

The steps below produce an application-level JSON export of your workspace, which can then be imported into a new, clean, uncorrupted Baserow environment. A consolidated sketch of the commands follows the list.

  1. You would need to restore the instance to the older PG 11 version.
  2. Find WORKSPACE_ID_TO_EXPORT. This is the workspace ID in your self-hosted environment that you’d like to export. You can find the workspace ID by clicking on the three dots next to it and taking the number between brackets.
  3. Find CONTAINER_ID; this is the ID of the container running the Baserow backend.
  4. Run docker-compose exec backend /baserow/backend/docker/docker-entrypoint.sh manage export_workspace_applications {WORKSPACE_ID_TO_EXPORT}.
  5. Run docker-compose cp backend:/baserow/backend/workspace_{WORKSPACE_ID_TO_EXPORT}.json workspace_{WORKSPACE_ID_TO_EXPORT}.json. This gives you a file on the host that you can restore later.
  6. Start an entirely new, clean Baserow environment with PG 15, and find WORKSPACE_ID_TO_IMPORT. This is the workspace ID in the new environment that you’d like to import the old data into; you can find it the same way as in step 2.
  7. Run docker-compose cp workspace_{WORKSPACE_ID_TO_EXPORT}.json backend:/baserow/backend/workspace_{WORKSPACE_ID_TO_EXPORT}.json. This copies the export file into the new backend container.
  8. Run docker-compose exec backend /baserow/backend/docker/docker-entrypoint.sh manage import_workspace_applications {WORKSPACE_ID_TO_IMPORT} workspace_{WORKSPACE_ID_TO_EXPORT}.
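Put together, the whole flow would look roughly like the sketch below. It is untested: the two IDs are placeholders you must replace, the first half runs against the old PG 11 environment and the second half against the new PG 15 one, and (as in step 8) the import argument omits the .json extension:

```
#!/usr/bin/env bash
set -euo pipefail

EXPORT_ID=1   # placeholder: workspace ID to export from the old (PG 11) instance
IMPORT_ID=1   # placeholder: workspace ID to import into on the new (PG 15) instance

# On the old instance: export the workspace to JSON inside the backend
# container, then copy the file out to the host.
docker-compose exec backend /baserow/backend/docker/docker-entrypoint.sh \
  manage export_workspace_applications "${EXPORT_ID}"
docker-compose cp "backend:/baserow/backend/workspace_${EXPORT_ID}.json" \
  "workspace_${EXPORT_ID}.json"

# On the new instance: copy the export into its backend container and import it.
docker-compose cp "workspace_${EXPORT_ID}.json" \
  "backend:/baserow/backend/workspace_${EXPORT_ID}.json"
docker-compose exec backend /baserow/backend/docker/docker-entrypoint.sh \
  manage import_workspace_applications "${IMPORT_ID}" "workspace_${EXPORT_ID}"
```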

I’ve not tried out these commands to be honest, so you might need to tweak them a little bit. But it might help you set up a new environment and restore your data into it.