Docker network DNS resolution for webhook container

I have installed Baserow via docker-compose on my Linux workstation (Ubuntu).

I added a webhook receiver container to the docker-compose file, in the same Docker network, on port 5000. Based on other posts, I have set BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS to true. For reference, I’ve included the relevant portion of docker-compose.yml below.

services:
  receiver:
    image: baserow-receiver
    expose:
      - "5000"
    networks:
      local:

When defining a webhook in the Baserow UI, test requests are successful if I use the IP address of the Docker container, something like http://172.27.0.4:5000/baserow-webhook.

However, if I use the Docker Compose service name, i.e. http://receiver:5000/baserow-webhook, I get an “unknown error”. The Django log from the backend shows “Bad Request”.

A curl command from inside the Baserow backend container using http://receiver:5000/baserow-webhook works, so Docker DNS resolution is working in general, and the webhook container is reachable from the other containers.

So it seems that Docker network DNS resolution is either not allowed or not functional in a webhook URL in the Baserow app itself. Thanks for any tips!

Hi @jwitte, are you able to provide your full docker-compose file with any env vars included but anonymized?

Can you also provide full logs showing the bad request/unknown error, etc.?

When BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS=true is set on the backend container, Baserow will just use the Python requests library to call the webhook. To the best of my knowledge, the requests library should work with Docker’s custom DNS resolution.
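For illustration, here is a minimal sketch of that kind of call as requests would make it from inside the backend container; the payload and endpoint are placeholders, not Baserow’s actual test-call body:

# Minimal sketch of a webhook call via the `requests` library; the payload
# shape is illustrative, not Baserow's actual test-call body.
import requests

response = requests.post(
    "http://receiver:5000/baserow-webhook",  # hostname resolved by Docker's embedded DNS
    json={"row_id": 123, "event": "created"},
    timeout=5,
)
print(response.status_code, response.text)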

Python DNS resolution is working from within the backend container, as shown below:

baserow git:(master) ✗ docker exec -it baserow_backend_1 /bin/bash
baserow_docker_user@288dba4ee4ed:/baserow/backend$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> print(socket.gethostbyname('receiver'))
192.168.1.99

Curl also resolves Docker DNS correctly from within the container:

baserow_docker_user@288dba4ee4ed:/baserow/backend$ curl -X POST \
>      -H "Content-Type: application/json" \
>      -d '{"row_id": 123, "event": "created", "data": {"field_1": "value1", "field_2": "value2"}}' \
>      http://receiver:5000/baserow-webhook
Webhook received!baserow_docker_user@288dba4ee4ed:/baserow/backend$

Here’s what the errors look like in “docker-compose logs -f backend” while clicking the “Trigger test webhook” button with the URL “http://receiver:5000/baserow-webhook”. I don’t see these errors if I use the IP address of the receiver container instead.

backend_1                   | WARNING 2023-09-05 15:38:58,726 django.request.log_response:224- Bad Request: /api/database/webhooks/table/220/test-call/ 
backend_1                   | WARNING 2023-09-05 15:38:58,726 django.request.log_response:224- Bad Request: /api/database/webhooks/table/220/test-call/ 
backend_1                   | WARNING 2023-09-05 15:38:58,726 django.request.log_response:224- Bad Request: /api/database/webhooks/table/220/test-call/ 
backend_1                   | 192.168.1.5:58842 - "POST /api/database/webhooks/table/220/test-call/ HTTP/1.1" 400

Full docker-compose file below. I’ve set a static IP for the webhook receiver container while troubleshooting this; I see the same results without the static IP.

BASEROW_PUBLIC_URL is left at the default http://localhost. My .env file sets values for SECRET_KEY, DATABASE_PASSWORD, and REDIS_PASSWORD.

version: "3.4"
########################################################################################
#
# READ ME FIRST!
#
# Use the much simpler `docker-compose.all-in-one.yml` instead of this file
# unless you are an advanced user!
#
# This compose file runs every service separately, using a Caddy reverse proxy to route
# requests to the backend or web-frontend, handle websockets, and serve uploaded files.
# Even if you have your own HTTP proxy, we recommend simply forwarding requests to this
# Caddy service, as it is already properly configured for Baserow.
#
# If you wish to continue with this more advanced compose file, it is recommended that
# you set environment variables using the .env.example file by:
# 1. `cp .env.example .env`
# 2. Edit .env and fill in the first 3 variables.
# 3. Set further environment variables as you wish.
#
# More documentation can be found in:
# https://baserow.io/docs/installation%2Finstall-with-docker-compose
#
########################################################################################

# See https://baserow.io/docs/installation%2Fconfiguration for more details on these
# backend environment variables, their defaults if left blank etc.
x-backend-variables: &backend-variables
  # Most users should only need to set these first four variables.
  SECRET_KEY: ${SECRET_KEY:?}
  BASEROW_JWT_SIGNING_KEY: ${BASEROW_JWT_SIGNING_KEY:-}
  DATABASE_PASSWORD: ${DATABASE_PASSWORD:?}
  REDIS_PASSWORD: ${REDIS_PASSWORD:?}
  # If you manually change this line make sure you also change the duplicate line in
  # the web-frontend service.
  BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}

  # Set these if you want to use an external postgres instead of the db service below.
  DATABASE_USER: ${DATABASE_USER:-baserow}
  DATABASE_NAME: ${DATABASE_NAME:-baserow}
  DATABASE_HOST:
  DATABASE_PORT:
  DATABASE_URL:

  # Set these if you want to use an external redis instead of the redis service below.
  REDIS_HOST:
  REDIS_PORT:
  REDIS_PROTOCOL:
  REDIS_URL:
  REDIS_USER:

  # Set these to enable Baserow to send emails.
  EMAIL_SMTP:
  EMAIL_SMTP_HOST:
  EMAIL_SMTP_PORT:
  EMAIL_SMTP_USE_TLS:
  EMAIL_SMTP_USE_SSL:
  EMAIL_SMTP_USER:
  EMAIL_SMTP_PASSWORD:
  EMAIL_SMTP_SSL_CERTFILE_PATH:
  EMAIL_SMTP_SSL_KEYFILE_PATH:
  FROM_EMAIL:

  # Set these to use AWS S3 bucket to store user files.
  AWS_ACCESS_KEY_ID:
  AWS_SECRET_ACCESS_KEY:
  AWS_STORAGE_BUCKET_NAME:
  AWS_S3_REGION_NAME:
  AWS_S3_ENDPOINT_URL:
  AWS_S3_CUSTOM_DOMAIN:

  # Misc settings see https://baserow.io/docs/installation%2Fconfiguration for info
  BASEROW_AMOUNT_OF_WORKERS:
  BASEROW_ROW_PAGE_SIZE_LIMIT:
  BATCH_ROWS_SIZE_LIMIT:
  INITIAL_TABLE_DATA_LIMIT:
  BASEROW_FILE_UPLOAD_SIZE_LIMIT_MB:

  BASEROW_EXTRA_ALLOWED_HOSTS:
  ADDITIONAL_APPS:
  BASEROW_PLUGIN_GIT_REPOS:
  BASEROW_PLUGIN_URLS:

  BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER:
  MIGRATE_ON_STARTUP: ${MIGRATE_ON_STARTUP:-true}
  SYNC_TEMPLATES_ON_STARTUP: ${SYNC_TEMPLATES_ON_STARTUP:-true}
  DONT_UPDATE_FORMULAS_AFTER_MIGRATION:
  BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION:
  BASEROW_SYNC_TEMPLATES_TIME_LIMIT:

  BASEROW_BACKEND_DEBUG: 'on'
  BASEROW_BACKEND_LOG_LEVEL: 'DEBUG'
  FEATURE_FLAGS:
  BASEROW_ENABLE_OTEL:
  BASEROW_DEPLOYMENT_ENV:
  OTEL_EXPORTER_OTLP_ENDPOINT:
  OTEL_RESOURCE_ATTRIBUTES:

  PRIVATE_BACKEND_URL: http://backend:8000
  PUBLIC_BACKEND_URL:
  PUBLIC_WEB_FRONTEND_URL:
  MEDIA_URL:
  MEDIA_ROOT:

  BASEROW_AIRTABLE_IMPORT_SOFT_TIME_LIMIT:
  HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
  OLD_ACTION_CLEANUP_INTERVAL_MINUTES:
  MINUTES_UNTIL_ACTION_CLEANED_UP:
  BASEROW_GROUP_STORAGE_USAGE_QUEUE:
  DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
  BASEROW_WAIT_INSTEAD_OF_409_CONFLICT_ERROR:
  BASEROW_DISABLE_MODEL_CACHE:
  BASEROW_PLUGIN_DIR:
  BASEROW_JOB_EXPIRATION_TIME_LIMIT:
  BASEROW_JOB_CLEANUP_INTERVAL_MINUTES:
  BASEROW_MAX_ROW_REPORT_ERROR_COUNT:
  BASEROW_JOB_SOFT_TIME_LIMIT:
  BASEROW_FRONTEND_JOBS_POLLING_TIMEOUT_MS:
  BASEROW_INITIAL_CREATE_SYNC_TABLE_DATA_LIMIT:
  BASEROW_MAX_SNAPSHOTS_PER_GROUP:
  BASEROW_SNAPSHOT_EXPIRATION_TIME_DAYS:
  BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS: ${BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS:-true}
  BASEROW_WEBHOOKS_IP_BLACKLIST:
  BASEROW_WEBHOOKS_IP_WHITELIST:
  BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST:
  BASEROW_WEBHOOKS_URL_CHECK_TIMEOUT_SECS:
  BASEROW_WEBHOOKS_MAX_CONSECUTIVE_TRIGGER_FAILURES:
  BASEROW_WEBHOOKS_MAX_RETRIES_PER_CALL:
  BASEROW_WEBHOOKS_MAX_PER_TABLE:
  BASEROW_WEBHOOKS_MAX_CALL_LOG_ENTRIES:
  BASEROW_WEBHOOKS_REQUEST_TIMEOUT_SECONDS:
  BASEROW_ENTERPRISE_AUDIT_LOG_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ENTERPRISE_AUDIT_LOG_RETENTION_DAYS:
  BASEROW_ALLOW_MULTIPLE_SSO_PROVIDERS_FOR_SAME_ACCOUNT:
  BASEROW_ROW_COUNT_JOB_CRONTAB:
  BASEROW_STORAGE_USAGE_JOB_CRONTAB:
  BASEROW_SEAT_USAGE_JOB_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_TIMEOUT_MINUTES:
  BASEROW_PERIODIC_FIELD_UPDATE_QUEUE_NAME:
  BASEROW_MAX_CONCURRENT_USER_REQUESTS:
  BASEROW_CONCURRENT_USER_REQUESTS_THROTTLE_TIMEOUT:
  BASEROW_OSS_ONLY:
  OTEL_TRACES_SAMPLER:
  OTEL_TRACES_SAMPLER_ARG:
  OTEL_PER_MODULE_SAMPLER_OVERRIDES:
  BASEROW_CACHALOT_ENABLED:
  BASEROW_CACHALOT_MODE:
  BASEROW_CACHALOT_ONLY_CACHABLE_TABLES:
  BASEROW_CACHALOT_UNCACHABLE_TABLES:
  BASEROW_CACHALOT_TIMEOUT:
  BASEROW_AUTO_INDEX_VIEW_ENABLED:
  BASEROW_PERSONAL_VIEW_LOWEST_ROLE_ALLOWED:
  BASEROW_DISABLE_LOCKED_MIGRATIONS:
  BASEROW_USE_PG_FULLTEXT_SEARCH:
  BASEROW_AUTO_VACUUM:

services:
  receiver:
    image: baserow-receiver
    expose:
      - "5000"
    networks:
      local:
        ipv4_address: 192.168.1.99

  # A caddy reverse proxy sitting in-front of all the services. Responsible for routing
  # requests to either the backend or web-frontend and also serving user uploaded files
  # from the media volume.
  caddy:
    image: caddy:2
    restart: unless-stopped
    environment:
      # Controls what port the Caddy server binds to inside its container.
      BASEROW_CADDY_ADDRESSES: ${BASEROW_CADDY_ADDRESSES:-:80}
      PRIVATE_WEB_FRONTEND_URL: ${PRIVATE_WEB_FRONTEND_URL:-http://web-frontend:3000}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
    ports:
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_PORT:-80}:80"
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_SSL_PORT:-443}:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - media:/baserow/media
      - caddy_config:/config
      - caddy_data:/data
    networks:
      local:

  backend:
    image: baserow/backend:1.19.1
    restart: unless-stopped
    environment:
      <<: *backend-variables
    depends_on:
      - db
      - redis
    volumes:
      - media:/baserow/media
    networks:
      local:

  web-frontend:
    image: baserow/web-frontend:1.19.1
    restart: unless-stopped
    environment:
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
      PUBLIC_BACKEND_URL:
      PUBLIC_WEB_FRONTEND_URL:
      BASEROW_DISABLE_PUBLIC_URL_CHECK:
      INITIAL_TABLE_DATA_LIMIT:
      DOWNLOAD_FILE_VIA_XHR:
      BASEROW_DISABLE_GOOGLE_DOCS_FILE_PREVIEW:
      HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
      DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
      FEATURE_FLAGS:
      ADDITIONAL_MODULES:
      BASEROW_MAX_IMPORT_FILE_SIZE_MB:
      BASEROW_MAX_SNAPSHOTS_PER_GROUP:
      BASEROW_ENABLE_OTEL:
      BASEROW_DEPLOYMENT_ENV:
      BASEROW_OSS_ONLY:
      BASEROW_USE_PG_FULLTEXT_SEARCH:
    depends_on:
      - backend
    networks:
      local:

  celery:
    image: baserow/backend:1.19.1
    restart: unless-stopped
    environment:
      <<: *backend-variables
    command: celery-worker
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-export-worker:
    image: baserow/backend:1.19.1
    restart: unless-stopped
    command: celery-exportworker
    environment:
      <<: *backend-variables
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-beat-worker:
    image: baserow/backend:1.19.1
    restart: unless-stopped
    command: celery-beat
    environment:
      <<: *backend-variables
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  db:
    image: postgres:11
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      local:
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:6
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
    networks:
      local:

  # By default, the media volume will be owned by root on startup. Ensure it is owned by
  # the same user that django is running as, so it can write user files.
  volume-permissions-fixer:
    image: bash:4.4
    command: chown 9999:9999 -R /baserow/media
    volumes:
      - media:/baserow/media
    networks:
      local:

volumes:
  pgdata:
  media:
  caddy_data:
  caddy_config:

networks:
  local:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.1.0/24

I also notice that when I use an unreachable IP or an unreachable hostname like “receiver.local:5000” in the webhook URL, I get a “Server unreachable” error in the UI, but when I use a Docker-style hostname like “receiver:5000”, I get “Action not completed. The action couldn’t be completed because an unknown error has occurred.” in the UI.

I wonder if URL validation might be the culprit: perhaps it doesn’t think that “http://receiver:5000” is a valid URL.
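A quick way to test that theory, assuming (and this is only an assumption) that the check is Django’s standard URLValidator, which rejects hostnames without a dot:

# Hypothetical check: IF Baserow's webhook URL validation is Django's
# URLValidator (an assumption, not confirmed here), dotless hostnames fail it.
from django.conf import settings
settings.configure()  # minimal config so this runs outside a Django project

from django.core.exceptions import ValidationError
from django.core.validators import URLValidator

validate = URLValidator()
for url in ("http://receiver:5000/baserow-webhook",        # dotless hostname
            "http://receiver.local:5000/baserow-webhook",  # dotted hostname
            "http://172.27.0.4:5000/baserow-webhook"):     # plain IP
    try:
        validate(url)
        print("valid:  ", url)
    except ValidationError:
        print("invalid:", url)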

I have the same problem, and the same logs. When I try to test the webhook, I get an error, even though inside the Baserow container the hostname of the neighboring container resolves fine via curl and Python.

So I’ve found the bug. The fix will be reviewed and merged sometime this week, and then included in our develop-latest images and the next self-hosted release: Resolve "Fix webhook URL validation not being disabled when BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS=true" (!1656) · Merge requests · Baserow / baserow · GitLab

Thanks for the detailed bug report and apologies for any inconvenience.

FYI, the reason this generates a different error is that it’s failing on our URL validator. receiver.local:5000 passes the validation and so sends a real test request, but I assume isn’t actually a real DNS entry, whereas http://receiver:5000 does not pass the validation, so no request is ever sent.

For now, one workaround would be to set up your container’s /etc/hosts so that receiver.local maps to the receiver container; then you can set receiver.local in Baserow, it will pass validation, and /etc/hosts will map it to the real host.
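In compose terms, one way to express that (a sketch only: extra_hosts needs an IP rather than a name, so it leans on the static 192.168.1.99 address already assigned to the receiver above) would be:

  # Sketch of the /etc/hosts workaround, reusing the static receiver IP
  # (192.168.1.99) from the compose file above; extra_hosts requires an IP.
  backend:
    image: baserow/backend:1.19.1
    extra_hosts:
      - "receiver.local:192.168.1.99"  # lets http://receiver.local:5000 both validate and resolve

Note that real (non-test) webhook calls may run from the celery workers rather than the backend container, in which case the same extra_hosts entry would be needed on those services too.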

Thank you for looking into this!