Very high memory usage

I am facing a very annoying memory issue: Baserow persistently consumes no less than 27% of my RAM. Any suggested solutions?

Technical Help Questionnaire

Have you read and followed the instructions at: *READ ME FIRST* Technical Help FAQs - #2 by nigel?

Answer: Yes, I did

Self-Hosted Installation and Setup Questions


How have you self-hosted Baserow.

What are the specs of the service or server you are using to host Baserow.

8 GB of RAM

Which version of Baserow are you using.

1.25.1

How have you configured your self-hosted installation?

Docker stack

version: "3.4"
services:
  baserow:
    container_name: baserow
    image: baserow/baserow:1.25.1
    environment:
      BASEROW_PUBLIC_URL: 'https://mydomain.tld'
      BASEROW_AMOUNT_OF_WORKERS: 2
    ports:
      - "7300:80"
    restart: always
    networks:
      - read-tunnel
    volumes:
      - /Users/docker/baserow/data:/baserow/data
volumes:
  baserow_data:
networks:
  read-tunnel: #this is the name of the network that I created in the Cloudflared Container
    external: true

What commands if any did you use to start your Baserow server?

Describe the problem

Describe, step by step, how to reproduce the error or problem you are encountering.

Baserow is working fine, but it is consuming 27% of my RAM.

Provide screenshots or include share links showing:

How many rows in total do you have in your Baserow tables?

A relatively new instance, not more than 200 rows yet.

Please attach full logs from all of Baserow’s services



Hey @wael00, I’m sorry to hear that you’re running into high memory usage. Based on the configuration you provided, I see that Baserow is consuming 27% of 8GB of memory, which comes to just over 2GB.

With 2 workers, that’s actually not that uncommon. This is because we need to run multiple services: an ASGI worker to handle websockets and HTTP requests, a fast backend worker for real-time collaboration, a slow backend worker for exports, a worker for recurring tasks, and one for server-side rendering. These all need memory, and depending on the BASEROW_AMOUNT_OF_WORKERS and BASEROW_AMOUNT_OF_GUNICORN_WORKERS environment variables, the total can even be a multiple of that.

You can try setting BASEROW_AMOUNT_OF_GUNICORN_WORKERS=2. Depending on how many CPU cores you have, this can reduce the memory usage a bit.
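As a minimal sketch of how that could look in your compose file (assuming the all-in-one baserow/baserow image; check the variable names against the docs for your Baserow version):

```yaml
services:
  baserow:
    image: baserow/baserow:1.25.1
    environment:
      BASEROW_PUBLIC_URL: 'https://mydomain.tld'
      # Fewer Celery workers means fewer backend processes held in memory.
      BASEROW_AMOUNT_OF_WORKERS: 1
      # Limit the number of gunicorn processes serving HTTP requests as well.
      BASEROW_AMOUNT_OF_GUNICORN_WORKERS: 2
```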

Thanks a lot for your reply @bram
I reduced both worker counts from 2 to 1 (hopefully this change will not have a noticeable impact on performance), and the memory usage dropped to roughly 18%. Honestly, it is still very high compared to the most demanding self-hosted services on my server. Are there any future plans to reduce the memory usage?

In this case, we’re tied to our backend framework, Django, and the dependencies we add. It takes over 300MB just to run one Django instance, and we have to run at least one for each of the services I described. You can try setting BASEROW_RUN_MINIMAL=yes to reduce this, because it combines the two Celery workers into one. But again, this comes at a performance cost.
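For example, a sketch of the same service with that flag added (again, verify the exact variable name against the docs for your version):

```yaml
services:
  baserow:
    image: baserow/baserow:1.25.1
    environment:
      BASEROW_PUBLIC_URL: 'https://mydomain.tld'
      # Merge the fast and slow Celery queues into a single worker process.
      BASEROW_RUN_MINIMAL: 'yes'
```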

Hello @bram
if you allow another question: as you can see in my docker stack .yml, I have a basic Baserow installation without Redis and without even a dedicated database setup. If I added a Redis container (and eventually a separate PostgreSQL container) to my Baserow stack, would that help reduce the memory usage?

Hi @wael00, connecting to an external Redis and PostgreSQL database should reduce memory usage, because then you don’t have to run those services inside the container.
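A rough sketch of what that stack could look like, assuming the all-in-one image switches to external services when DATABASE_* and REDIS_HOST are set (verify these variable names in the Baserow self-hosting docs for your version; the image tags, passwords, and the omitted read-tunnel network are placeholders):

```yaml
version: "3.4"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: baserow
      POSTGRES_USER: baserow
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always

  redis:
    image: redis:7
    restart: always

  baserow:
    image: baserow/baserow:1.25.1
    environment:
      BASEROW_PUBLIC_URL: 'https://mydomain.tld'
      # Point Baserow at the external services so it does not start its own
      # embedded PostgreSQL and Redis inside the container.
      DATABASE_HOST: db
      DATABASE_NAME: baserow
      DATABASE_USER: baserow
      DATABASE_PASSWORD: change-me
      REDIS_HOST: redis
    ports:
      - "7300:80"
    volumes:
      - /Users/docker/baserow/data:/baserow/data
    depends_on:
      - db
      - redis
    restart: always

volumes:
  pgdata:
```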