Has anyone successfully deployed Baserow on Cloud Run (Google Cloud Platform) yet?


I would like to host a Baserow container on Cloud Run. I have tried several options and configurations, but none have worked so far.

Thanks a lot.

Hi @Pierre-Yves

I’ve not tried deploying Baserow on Cloud Run yet, but I might still be able to help.

Could you provide the full details of the options/configuration you set up? Ideally showing your env vars, which image you used, etc.


Hi @nigel,

I’m using the Docker installation process with HTTPS and Baserow 1.14:

docker run \
  -d \
  --name baserow \
  -v baserow_data:/baserow/data \
  -p 80:80 \
  -p 443:443 \
  --restart unless-stopped \
  baserow/baserow:1.14.0

I’m curious if anyone else has made progress on this. I’m also trying to figure it out.

Here’s as far as I’ve got.


  • I’m using a Cloud SQL instance to persist my data instead of a Docker volume (but presumably the approach would be mostly the same using a Docker volume).

  • I increased the service request timeout to 10 minutes to give it plenty of time to run the first migrations and do other initialization work.

  • I increased the RAM from the 512 MB default to 2 GB.

gcloud services enable sql-component.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud sql instances create instance-1 --database-version=POSTGRES_14 --cpu=2 --memory=4GiB --zone=northamerica-northeast1-a --root-password=[PASSWORD]
gcloud sql databases create database-1 --instance=instance-1 --charset=UTF8 --collation=en_US.UTF8
gcloud services enable run.googleapis.com
gcloud auth configure-docker
gcloud services enable containerregistry.googleapis.com
docker pull baserow/baserow:1.15.0
docker tag baserow/baserow:1.15.0 gcr.io/[PROJECT]/baserow/baserow:1.15.0 
docker push gcr.io/[PROJECT]/baserow/baserow:1.15.0
gcloud run deploy baserow --image=gcr.io/[PROJECT]/baserow/baserow:1.15.0 \
--region=northamerica-northeast1 \
--allow-unauthenticated \
--platform=managed \
--memory=2G \
--timeout=10m \
--add-cloudsql-instances=[PROJECT]:northamerica-northeast1:instance-1 \
--port 80 \
--update-env-vars \
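
The `--update-env-vars` value was cut off above; for reference, here is a hedged sketch of what the database-related variables might look like (the variable names come from Baserow’s configuration docs; the `/cloudsql/...` unix-socket path is an assumption about how the connector mounted by `--add-cloudsql-instances` is exposed, and all bracketed values are placeholders):

```shell
# Sketch only: Baserow database env vars for the deploy above.
# DATABASE_HOST as a /cloudsql/... socket path assumes the Cloud SQL
# connector mount; [PROJECT] and [PASSWORD] are placeholders.
gcloud run services update baserow \
  --region=northamerica-northeast1 \
  --update-env-vars \
DATABASE_HOST=/cloudsql/[PROJECT]:northamerica-northeast1:instance-1,\
DATABASE_NAME=database-1,\
DATABASE_USER=postgres,\
DATABASE_PASSWORD=[PASSWORD]
```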

I can see in the logs that the application starts, connects to the database, and runs the initial migrations, and I can confirm that by inspecting the database with a database client.

After a couple of minutes the application is emitting encouraging-looking stdout log messages such as:

[BACKEND][2023-04-03 22:06:03] [2023-04-03 22:06:03 +0000] [383] [INFO] Application startup complete.
[BASEROW-WATCHER][2023-04-03 21:54:48] Baserow is now available at http://[DOMAIN].com
[BACKEND][2023-04-03 22:07:04] - “GET /_health/ HTTP/1.1” 200
2023-04-03 22:07:06,759 INFO success: beatworker entered RUNNING state, process has stayed up for > than 100 seconds (startsecs)

but the Cloud Run service revision never finishes deploying, and after a few more minutes it dies with this error:

Ready condition status changed to False for Service baserow with message: The user-provided container failed to start and listen on the port defined provided by the PORT=80 environment variable.

I suspect what’s happening is Cloud Run is waiting for the service to be responsive on the specified port, and when that doesn’t happen it assumes it has failed and kills it.

I can see in the documentation that the standalone images use a few different ports (8000, 3000) and I have tried setting the gcloud --port= argument to 80, 8080, 8000 and 3000 just in case, but that hasn’t resolved it.

Anyone else have thoughts?

Hi @hugh, I recently got Baserow running on Cloud Run and here were my notes from the process:


Cloud Run only deploys stateless containers. This means you need to configure
and run Baserow with an external PostgreSQL database and external S3-compatible user
file storage. So you will need:

  1. An external Postgres setup (Cloud SQL on GCP)
  2. An S3-compatible bucket (Cloud Storage on GCP)
  3. An external Redis server

Configuring Baserow for Google Cloud Run

  1. Choose Deploy one revision from an existing container image and enter our latest official
    all-in-one Docker image baserow/baserow:REPLACE_ME.
  2. Select the CPU is always allocated option, as Baserow runs various background
    jobs even when requests are not being processed.
  3. Select your desired autoscaling and authentication options.
  4. Expand the Container, Networking, Security section and change the container port to 80
    (or, later on, set the environment variable BASEROW_CADDY_ADDRESSES=:8080 instead).
  5. Configure Baserow to use the external Postgres, Redis, and S3 storage, and also
    configure Cloud Run/IAM etc. so this container has permission to access them. You can
    use environment variables or file secrets as discussed at Install with Docker // Baserow
    and the env vars at Configuring Baserow // Baserow.
  6. Because your Baserow is running in stateless mode without a volume you must:
    1. Set the env var DISABLE_VOLUME_CHECK=true.
    2. Not use the automatic HTTPS functionality provided by the embedded Caddy. Caddy tries
      to store the SSL certs etc. in a volume, which doesn’t work in a stateless container,
      so you should not set BASEROW_CADDY_ADDRESSES="some https address", as this is how
      automatic HTTPS is enabled. Instead, either do not set BASEROW_CADDY_ADDRESSES at all,
      or set just a port (e.g. BASEROW_CADDY_ADDRESSES=:8080) to change which port Caddy
      listens on inside the container. This port must match the port Cloud Run expects the
      container to listen on; by default Baserow’s Caddy listens on port 80, hence step 4
      above changes Cloud Run to use port 80.
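
For anyone doing the same setup from the CLI instead of the console, the steps above might look roughly like the following. This is a sketch only: the bracketed values are placeholders, the env var names come from Baserow’s configuration docs, and --no-cpu-throttling is gcloud’s flag for the “CPU is always allocated” option.

```shell
# Sketch of the console steps above as one gcloud command.
# All [BRACKETED] values and resource names are placeholders.
gcloud run deploy baserow \
  --image=baserow/baserow:REPLACE_ME \
  --port=80 \
  --no-cpu-throttling \
  --memory=2Gi \
  --timeout=10m \
  --add-cloudsql-instances=[PROJECT]:[REGION]:[SQL_INSTANCE] \
  --update-env-vars \
DISABLE_VOLUME_CHECK=true,\
DATABASE_HOST=[DB_HOST],\
DATABASE_NAME=[DB_NAME],\
DATABASE_USER=[DB_USER],\
DATABASE_PASSWORD=[DB_PASSWORD],\
REDIS_URL=redis://[REDIS_HOST]:6379,\
AWS_ACCESS_KEY_ID=[KEY],\
AWS_SECRET_ACCESS_KEY=[SECRET],\
AWS_STORAGE_BUCKET_NAME=[BUCKET]
```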

gcloud beta run services proxy doesn’t work with Baserow

After configuring all of the above, you cannot use gcloud beta run services proxy to access the service. This proxy server strips the Authorization header and replaces it with its own, which breaks Baserow.

Specific suggestions for @Pierre-Yves

So I would not set BASEROW_CADDY_ADDRESSES at all, and would instead change Cloud Run’s port to 80. I would also not expose port 443; let Cloud Run handle SSL for you rather than Baserow’s Caddy. After this, can you also confirm you have followed the advice set out above?

Specific suggestions for @hugh

Can you perhaps send me the full logs after restarting your Cloudrun Baserow instance to nigel@baserow.io? I’ll be looking to see if Baserow’s Caddy ends up listening on port 80 correctly or crashes or if there are any other errors from Baserow in the logs. It doesn’t look like it, but have you also set BASEROW_CADDY_ADDRESSES as an env variable? If so could you unset it and try again?

Edit: You can also set BASEROW_CADDY_GLOBAL_CONF=debug to make Baserow’s Caddy print full debug logs which might help.

I’ve also edited the response just now to indicate you should also be using an external Redis server. If not then each Baserow container will be launching with its own separate embedded redis, and so real-time collaboration in Baserow will not work as expected.

Thanks very much for all of this @nigel. I will work through this in a few days and report how it goes.

The external Redis server is an interesting twist. My plan is to only have a single running Baserow service, as I don’t anticipate ever having more than 2 or 3 simultaneous users. Is it still recommended to set up a separate Redis server, or is a single embedded Redis server sufficient if there’s only a single core Baserow service?

(If you’re wondering why I would bother with a separate external database service for a single core Baserow service, it’s just because I’m more comfortable doing backup and inspection tasks that way.)

Ah, in that case, if you only ever have one Baserow service running in Cloud Run, then the embedded Redis is completely fine and you don’t need an external one.

Hi again, I was able to get this to work (or at least most of the way).

First I tried setting the gcloud run deploy --port=80 argument and leaving the BASEROW_CADDY_ADDRESSES environment variable unset. That had the same result as before: the service appears to start and run, but it never leaves the “deploying” state, and after a couple of minutes it dies with the “The user-provided container failed to start and listen on the port defined provided by the PORT=80 environment variable” error.

Then I tried setting --port=8080 and BASEROW_CADDY_ADDRESSES as :8080 but that had the same result.

Finally I tried creating the service through the web console UI, with what appeared to me to be all of the same arguments and environment variables (using 80 for the container port and leaving BASEROW_CADDY_ADDRESSES unset). That worked: after a few seconds the service entered the deployed state.

Loading the site took me to the admin creation form. Submitting the form yielded a “could not reach the API” error. Looking in the network console I could see that it was trying to reach the API at http://localhost/api/user.

So I redeployed the service, but with BASEROW_PUBLIC_URL set to https://[SERVICE URL].a.run.app. Now the site is accessible and working.
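
For anyone else hitting the same “could not reach the API” error, the redeploy with the public URL set can also be done from the CLI. A sketch, with the service name, region, and URL as placeholders:

```shell
# Point Baserow at the URL users actually reach it on.
# [SERVICE URL] is a placeholder for your Cloud Run service hostname.
gcloud run services update baserow \
  --region=northamerica-northeast1 \
  --update-env-vars BASEROW_PUBLIC_URL=https://[SERVICE URL].a.run.app
```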

I’m guessing I’m going to run into more challenges with the backend URL once I start trying to use a custom domain, and I’m slightly confused why the command line and web UI deploy seem to have different results, but seems like I’m most of the way there.

Thanks for your help.

I was using the UI and not the CLI when I set mine up, but I agree it’s odd that the CLI didn’t work. Perhaps --update-env-vars was hiding the fact that some previous env vars had been set which were breaking things?

When you switch to a custom domain, hopefully all you need to do is change BASEROW_PUBLIC_URL to your new domain. However, if gcloud is proxying and changing the Host of the requests in any way, then you might also need to set BASEROW_EXTRA_ALLOWED_HOSTS=https://[SERVICE URL].a.run.app, or whatever host the requests are coming from.
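
As a sketch of that custom-domain switch (the domain is a placeholder of mine; whether BASEROW_EXTRA_ALLOWED_HOSTS is actually needed depends on how requests reach the container, as noted above):

```shell
# Switch Baserow to a custom domain; optionally keep the run.app
# host allowed in case requests still arrive with that Host header.
gcloud run services update baserow \
  --region=northamerica-northeast1 \
  --update-env-vars \
BASEROW_PUBLIC_URL=https://baserow.example.com,\
BASEROW_EXTRA_ALLOWED_HOSTS=[SERVICE URL].a.run.app
```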

Hi all, I found this forum really helpful to get my own GCP instance configured. Especially @nigel’s work. Wanted to provide an update from my own experience:

  1. Follow all of nigel’s instructions from April 2023
  2. Set the SECRET_KEY and BASEROW_JWT_SIGNING_KEY environment variables (enter a random value of your choosing for both), otherwise you’ll be stuck at one instance. Shoutout to @bram for helping me with that.
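
One way to generate those two random values (a sketch; the key names are from step 2 above, and using openssl for the randomness is my own choice):

```shell
# Generate two independent random secrets for Baserow.
SECRET_KEY="$(openssl rand -base64 32)"
BASEROW_JWT_SIGNING_KEY="$(openssl rand -base64 32)"

# Then set them on the service, e.g.:
#   gcloud run services update baserow --region=[REGION] \
#     --update-env-vars SECRET_KEY="$SECRET_KEY",BASEROW_JWT_SIGNING_KEY="$BASEROW_JWT_SIGNING_KEY"
```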

Has anyone had success with adding Redis? I want to use Cloud Memorystore and have tried a few things to get the VPC to work, but no dice. ty!

Yes @MarshallARoss, I have been able to get it working with Cloud Memorystore and Cloud SQL Postgres.

Make sure you have a VPC network enabled. In my case I just used the “default” network, but you could also create your own Shared VPC network.

I then went to my Redis Instance and added an Authorized Network where I selected the “default” VPC network.

Under the Cloud Run Baserow instance I went to Edit & Deploy New Revision > Networking > enabled Connect to a VPC for outbound traffic > Send traffic directly to a VPC > Network: “default”, Subnet: “default” > Traffic routing: “Route all traffic to the VPC”. I could then inspect the logs and saw that Baserow was able to connect to my Redis Memorystore instance.

However, this caused the postgres server to be unavailable, because it was not added to the VPC. Therefore I did: SQL > Connections > Networking > Instance IP Assignment > Private IP > Associated networking > network: “Default” and Allocated IP range: Use automatically assigned IP range > Save.

Then, after the edit to your instance is done, go back to the main SQL Instances screen and copy the newly created private IP address. Set that private IP address as an env var on your Baserow instance: DATABASE_HOST=PRIVATE_IP
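
The console steps above have rough gcloud equivalents. A sketch with placeholder names (the direct-VPC flags --network, --subnet, and --vpc-egress are from current gcloud run versions; REDIS_URL is Baserow’s env var for an external Redis):

```shell
# Create a Memorystore Redis instance on the default VPC network.
gcloud redis instances create baserow-redis \
  --region=[REGION] --network=default

# Redeploy Baserow with direct VPC egress so it can reach Redis and
# the Cloud SQL private IP; route all outbound traffic through the VPC.
gcloud run services update baserow \
  --region=[REGION] \
  --network=default --subnet=default --vpc-egress=all-traffic \
  --update-env-vars DATABASE_HOST=[PRIVATE_IP],REDIS_URL=redis://[REDIS_IP]:6379
```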

Additionally, at an earlier point in the installation process I noticed some issues with the Postgres database: it was showing error messages about insufficient permissions on the public schema. I was able to fix the issue with the following commands:

GRANT ALL ON DATABASE baserow_db TO baserow_user;
ALTER DATABASE baserow_db OWNER TO baserow_user;