
Production Guide: Deploy Redash with Docker Compose + Nginx + PostgreSQL on Ubuntu

A production-first Redash deployment blueprint with TLS, Redis queueing, backups, secure secrets handling, and day-2 operational checks.

The effort of running analytics in production is easy to underestimate. Many teams can spin up Redash for a proof of concept in an afternoon, but production success depends on predictable upgrades, secret management, TLS posture, backup strategy, and repeatable operations for on-call teams. This guide walks through a production-grade Redash deployment on Ubuntu using Docker Compose with Nginx, PostgreSQL, and Redis. The goal is not just to launch a dashboard URL; it is to deliver an operational service that survives incident pressure and routine change windows.

This playbook is intentionally practical. You will configure a clear edge-to-app flow, separate data services from application runtime, enforce HTTPS, and add verification checks that catch misconfiguration before users do. By the end, you should have a deployment your team can support confidently, with enough structure to scale from one host to a more advanced platform later without re-learning fundamentals.

Architecture and flow overview

The stack has four components with explicit roles. Nginx terminates TLS and proxies external traffic to the internal Redash web service. Redash runs as application containers for web and worker processes. PostgreSQL stores metadata, users, dashboards, and query history. Redis backs queues and cache operations required by asynchronous jobs. Keeping these boundaries explicit reduces troubleshooting time because each failure mode maps cleanly to one layer.

Request flow is straightforward: browser request arrives at Nginx over HTTPS, Nginx forwards to the Redash server container, Redash queries PostgreSQL for state and uses Redis for queue/cache operations, then returns the rendered response. This structure avoids unnecessary complexity while giving you clean observability points at proxy logs, app logs, and data-service health checks.

Prerequisites

  • Ubuntu 22.04 or 24.04 host with sudo access and fixed public IP.
  • DNS A record (for example analytics.example.com) pointing to the host.
  • Open ports 80 and 443 to the internet.
  • Docker Engine + Docker Compose plugin installed.
  • Nginx + Certbot on host.
  • A secure location for generated secrets (vault/password manager).
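Before starting, it can save a change window to confirm DNS and public reachability from the host itself. A minimal pre-flight sketch, assuming the example domain from the prerequisites and `ifconfig.me` as one of several public IP echo services:

```shell
# Pre-flight: confirm the DNS A record points at this host.
# "analytics.example.com" is the example domain; substitute your own.
DOMAIN="analytics.example.com"

# Resolve the A record locally and fetch this host's public IP.
resolved=$(getent ahostsv4 "$DOMAIN" | awk 'NR==1 {print $1}')
public=$(curl -fsS https://ifconfig.me 2>/dev/null || true)

if [ -n "$resolved" ] && [ "$resolved" = "$public" ]; then
  echo "DNS OK: $DOMAIN -> $resolved"
else
  echo "DNS mismatch or unresolved: resolved='$resolved' public='$public'"
fi
```

If the values disagree, fix DNS before continuing; certificate issuance in step 7 depends on it.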

Step-by-step deployment

1) Prepare host and baseline security

Begin with package updates and a minimal firewall policy. Keep the host role focused on this workload to simplify patching and incident response. If multiple workloads share the host, document resource ceilings and ownership boundaries before deployment.

sudo apt update && sudo apt -y upgrade
sudo apt -y install ca-certificates curl gnupg lsb-release nginx certbot python3-certbot-nginx ufw
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
sudo ufw status verbose


2) Install Docker runtime

Install Docker from the official repository and verify daemon status. Early runtime validation avoids ambiguous failures later when Compose files look correct but the engine is unhealthy.

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null

sudo apt update
sudo apt -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
docker --version && docker compose version


3) Create project files and secure environment variables

Store all mutable values in an environment file with strict permissions. Generate long random values for secrets, and avoid putting credentials directly into shell history or version control.

sudo mkdir -p /opt/redash/{data/postgres,backups}
sudo chown -R "$USER":"$USER" /opt/redash
cd /opt/redash

cat > .env <<'ENV'
POSTGRES_USER=redash
POSTGRES_PASSWORD=REPLACE_WITH_STRONG_DB_PASSWORD
POSTGRES_DB=redash
REDASH_COOKIE_SECRET=REPLACE_WITH_LONG_RANDOM_COOKIE_SECRET
REDASH_SECRET_KEY=REPLACE_WITH_LONG_RANDOM_APP_SECRET
REDASH_DATABASE_URL=postgresql://redash:REPLACE_WITH_STRONG_DB_PASSWORD@redash-db:5432/redash
REDASH_REDIS_URL=redis://redash-redis:6379/0
PYTHONUNBUFFERED=0
ENV
chmod 600 .env

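The template above leaves placeholders; one way to fill them with strong random values, assuming `openssl` (installed by default on Ubuntu), so the secrets never pass through shell history as typed literals:

```shell
# Generate long random secrets for the .env placeholders.
# openssl rand -hex 32 yields 64 hex characters (256 bits of entropy).
db_pass=$(openssl rand -hex 32)
cookie_secret=$(openssl rand -hex 32)
app_secret=$(openssl rand -hex 32)

# Substitute the REPLACE_WITH_* markers in place. The DB password
# appears twice (POSTGRES_PASSWORD and REDASH_DATABASE_URL), hence /g.
sed -i \
  -e "s/REPLACE_WITH_STRONG_DB_PASSWORD/${db_pass}/g" \
  -e "s/REPLACE_WITH_LONG_RANDOM_COOKIE_SECRET/${cookie_secret}/" \
  -e "s/REPLACE_WITH_LONG_RANDOM_APP_SECRET/${app_secret}/" \
  /opt/redash/.env
```

Record the generated values in your vault or password manager immediately; the host copy in `.env` should not be the only copy.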

4) Define Docker Compose stack (Redash + PostgreSQL + Redis)

Use a single compose file with explicit service names and durable volumes. Bind Redash to localhost only, then expose through Nginx to keep policy and logging centralized.

cat > /opt/redash/docker-compose.yml <<'YAML'
services:
  redash-db:
    image: postgres:16
    container_name: redash-db
    env_file: .env
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - /opt/redash/data/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
    restart: unless-stopped

  redash-redis:
    image: redis:7
    container_name: redash-redis
    restart: unless-stopped

  redash-server:
    image: redash/redash:10.1.0.b50633
    container_name: redash-server
    env_file: .env
    command: server
    depends_on:
      redash-db:
        condition: service_healthy
      redash-redis:
        condition: service_started
    ports:
      - "127.0.0.1:5000:5000"
    restart: unless-stopped

  redash-scheduler:
    image: redash/redash:10.1.0.b50633
    container_name: redash-scheduler
    env_file: .env
    command: scheduler
    depends_on:
      - redash-server
    restart: unless-stopped

  redash-worker:
    image: redash/redash:10.1.0.b50633
    container_name: redash-worker
    env_file: .env
    command: worker
    depends_on:
      - redash-server
    restart: unless-stopped
YAML

docker compose -f /opt/redash/docker-compose.yml config


5) Run database migrations and start services

Run the migration command once before enabling regular traffic. Explicit initialization prevents subtle startup loops and partially initialized states.

cd /opt/redash
docker compose pull
# one-time initialization: create tables and stamp migrations
docker compose run --rm redash-server create_db
# start all services
docker compose up -d

docker ps --format 'table {{.Names}}	{{.Status}}	{{.Ports}}'

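Once the services are up, it is worth confirming that Redash answers locally and is bound only to loopback, not a public interface. A quick check, assuming `ss` from iproute2 (present on Ubuntu by default) and Redash's `/ping` health endpoint:

```shell
# Confirm the Redash server listens on loopback only and answers locally.
ss -ltn | grep ':5000'    # expect 127.0.0.1:5000, not 0.0.0.0:5000
curl -fsS http://127.0.0.1:5000/ping    # expect: PONG.
```

If the bind address shows 0.0.0.0, the compose port mapping lost its `127.0.0.1:` prefix and the app is reachable without Nginx policy in front of it.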

6) Configure Nginx reverse proxy for Redash

Proxy to localhost:5000, preserve forwarding headers, and validate config before reloading. Keep this file under change control because proxy settings are common incident root causes.

sudo tee /etc/nginx/sites-available/redash.conf >/dev/null <<'NGINX'
server {
  listen 80;
  server_name analytics.example.com;

  location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 120s;
  }
}
NGINX
sudo ln -sf /etc/nginx/sites-available/redash.conf /etc/nginx/sites-enabled/redash.conf
sudo nginx -t && sudo systemctl reload nginx

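Before moving on to TLS, you can verify plain-HTTP routing end to end through Nginx from the host itself; sending the Host header explicitly means the check works even before public DNS has propagated:

```shell
# Verify Nginx proxies to Redash over plain HTTP before requesting
# certificates. The Host header must match server_name in the config.
curl -sS -o /dev/null -w '%{http_code}\n' \
  -H 'Host: analytics.example.com' http://127.0.0.1/
# expect 200 (login page) or 302 (redirect to /login), not 502
```

A 502 here means the proxy target is wrong or Redash is not yet listening; fix that before running certbot.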

7) Issue TLS certificate and enforce HTTPS redirect

Once HTTP routing works, issue certificates and enable redirect. Validate certbot timer so renewal is automatic and visible in operations checks.

sudo certbot --nginx -d analytics.example.com --non-interactive --agree-tos -m admin@example.com --redirect
systemctl status certbot.timer --no-pager
curl -I https://analytics.example.com


8) Add automated database backup with retention

Backups should be boring and reliable. Keep local compressed dumps for short retention and sync to off-host storage for disaster recovery. Test restores monthly.

cat > /opt/redash/backup-db.sh <<'BASH'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/redash
source .env
stamp=$(date +%F-%H%M%S)
out="/opt/redash/backups/redash-${stamp}.sql.gz"
docker exec -e PGPASSWORD="${POSTGRES_PASSWORD}" redash-db \
  pg_dump -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" | gzip > "$out"
find /opt/redash/backups -type f -name '*.sql.gz' -mtime +14 -delete
BASH
chmod +x /opt/redash/backup-db.sh
(crontab -l 2>/dev/null; echo "19 2 * * * /opt/redash/backup-db.sh") | crontab -

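A backup you have never restored is only a hypothesis. One way to run the monthly restore drill without touching the live database, using a throwaway scratch database (`redash_restore_test` is a hypothetical name used only for this check):

```shell
# Restore drill: load the newest dump into a scratch database inside
# the redash-db container, sanity-check a table, then drop it.
latest=$(ls -1t /opt/redash/backups/*.sql.gz | head -n1)
docker exec redash-db createdb -U redash redash_restore_test
gunzip -c "$latest" | docker exec -i redash-db psql -U redash -d redash_restore_test
docker exec redash-db psql -U redash -d redash_restore_test -c 'SELECT count(*) FROM users;'
docker exec redash-db dropdb -U redash redash_restore_test
```

Record the drill result (dump file name, row counts, duration) in your operations log so recovery time is a measured number, not a guess.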

Configuration and secrets handling

For production, treat secrets as lifecycle-managed assets instead of static values. Keep app and database secrets in a vault-backed workflow, inject at deploy time, and maintain an explicit rotation policy. Every rotation event should include verification steps and rollback notes. Avoid embedding secrets in compose files, shell aliases, screenshots, or ticket comments. If troubleshooting requires temporary exposure, rotate immediately after resolution and document the event for audit continuity.

Use role separation wherever possible: deployment credentials for automation, service credentials for runtime access, and least-privilege read access for observability systems. This prevents broad lateral access during incidents and makes post-incident forensics significantly cleaner. Store ownership metadata with each secret so operational handoffs do not become dependency bottlenecks.
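Database credential rotation must update PostgreSQL and the `.env` file in the same maintenance action, because the password appears in both places. A sketch of one rotation approach, assuming the layout from step 3:

```shell
# Rotate the Redash database password: change it in PostgreSQL and in
# .env together, then recreate the app containers.
cd /opt/redash
old_pass=$(grep '^POSTGRES_PASSWORD=' .env | cut -d= -f2)
new_pass=$(openssl rand -hex 32)

docker exec redash-db psql -U redash -d redash \
  -c "ALTER USER redash WITH PASSWORD '${new_pass}';"

# Update both occurrences in .env (POSTGRES_PASSWORD and the URL).
sed -i "s/${old_pass}/${new_pass}/g" .env

# Recreate app containers so they pick up the new environment.
docker compose up -d --force-recreate redash-server redash-worker
```

Store the new value in the vault before recreating containers, and keep the old value until the verification checklist passes, so rollback stays possible.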

Verification checklist

Run these checks after first deployment and after every change window.

# service status
docker ps --format 'table {{.Names}}	{{.Status}}'

# redash endpoint and TLS
curl -I https://analytics.example.com

# db readiness and connectivity
docker exec redash-db pg_isready -U redash -d redash

# logs should show stable startup without crash loops
docker logs --tail=120 redash-server
docker logs --tail=120 redash-worker
docker logs --tail=80 redash-db


Expected outcome: HTTPS reachable, no recurring restart loops, healthy PostgreSQL readiness, and worker logs processing jobs without repeated connection failures. Save these outputs in your change record so rollbacks and audits have concrete evidence.

Common issues and fixes

Nginx shows 502 after deploy

This usually means the Redash server is not bound to the expected localhost port, or container startup is still in progress after migration. Validate container status and check the proxy target (127.0.0.1:5000) first.
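A quick triage sequence for the 502 case, checked in order of likelihood:

```shell
# 502 triage: container state, app health, then proxy-side symptoms.
docker compose -f /opt/redash/docker-compose.yml ps
curl -fsS http://127.0.0.1:5000/ping    # bypasses Nginx entirely
sudo tail -n 20 /var/log/nginx/error.log
```

If the direct curl succeeds but the public URL still returns 502, the problem is in the Nginx layer, not Redash.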

Worker queue not processing queries

Check Redis connectivity and worker container health. A common issue is incorrect REDASH_REDIS_URL or network naming mismatch between compose services.
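Two checks separate "Redis is down" from "the worker cannot reach Redis". The second command assumes the Redash image ships the Python redis client (it is a Redash dependency):

```shell
# Is Redis itself healthy?
docker exec redash-redis redis-cli ping    # expect: PONG

# Can the worker resolve and reach it via REDASH_REDIS_URL?
docker exec redash-worker python -c \
  "import os, redis; print(redis.from_url(os.environ['REDASH_REDIS_URL']).ping())"
```

If the first succeeds and the second fails, the fault is the URL or compose service naming, not Redis.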

Database authentication failures

Ensure REDASH_DATABASE_URL and PostgreSQL credentials are aligned. Secret updates in only one location are a frequent failure source after maintenance changes.

Initial admin setup page loops or fails

Verify migration command completed successfully before starting normal services. Partial migration state can cause repeated setup errors even when containers appear healthy.

TLS issuance fails during certbot run

Validate DNS A record and ensure port 80 is publicly accessible during HTTP challenge. Disable conflicting reverse proxy rules temporarily if another service intercepts the challenge path.

FAQ

Can I use managed PostgreSQL and managed Redis?

Yes. Replace internal service endpoints with managed URLs, remove local data services from compose, and update network policies accordingly.

Should I pin Redash image versions?

Yes for production. Pin tested versions and upgrade in scheduled windows with backups and rollback procedures ready.

How frequently should I back up the database?

Daily is a common minimum; increase frequency based on data freshness requirements and acceptable recovery point objective.

Is it safe to expose Redash directly without Nginx?

Not recommended for production. Nginx adds TLS control, routing policy, and operational visibility that direct exposure usually lacks.

Can I run this stack with only one Redash container?

You can for small labs, but production reliability is better with dedicated server and worker roles for predictable background job behavior.

What should I monitor first?

Endpoint availability, certificate expiry, container restart counts, DB readiness, Redis health, and query queue latency are the highest-value early signals.

How do I perform safe upgrades?

Take backups, pin new image tag in staging, run migration checks, promote in a change window, then validate with the same production checklist used on day one.
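The upgrade answer above can be condensed into a repeatable sequence. A sketch, assuming the pinned tag from the compose file in step 4; `NEW_TESTED_TAG` is a placeholder for the version you validated in staging:

```shell
# Upgrade flow: back up, move the pinned tag, migrate, restart,
# then re-run the verification checklist from this guide.
cd /opt/redash
./backup-db.sh
sed -i 's|redash/redash:10.1.0.b50633|redash/redash:NEW_TESTED_TAG|g' docker-compose.yml
docker compose pull
docker compose run --rm redash-server manage db upgrade
docker compose up -d
```

Rollback is the reverse: restore the previous tag in the compose file, restore the pre-upgrade dump if migrations changed the schema, and bring the stack back up.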
