
Production Guide: Deploy Grist with Docker Compose + Caddy + PostgreSQL + Redis on Ubuntu

A production-ready deployment of Grist, the open-source relational spreadsheet, using Docker Compose, Caddy, PostgreSQL, and Redis on Ubuntu.

Spreadsheets are where business logic goes to die: formulas hidden in cells, no audit trail, conflicting versions emailed back and forth, and no way to enforce data integrity. Grist is an open-source relational spreadsheet that combines the familiarity of a grid interface with the rigor of a real database. Every column has a type, every change is versioned, and formulas are written in Python instead of fragile cell references. Teams use Grist for project budgets, inventory tracking, compliance checklists, and lightweight CRMs without building a full application.

In this guide, we will deploy Grist on Ubuntu with Docker Compose, publish it through Caddy with automatic HTTPS, and wire in PostgreSQL for the home database and Redis for session storage. The target audience is a small business, operations team, or internal IT group that wants a maintainable, self-hosted data workspace. The pattern keeps the application stack isolated, exposes only Caddy to the public internet, stores secrets in an environment file with restricted permissions, and verifies each layer before inviting users. You can integrate OIDC or SAML later, but this baseline gives you a dependable, upgrade-friendly foundation.

Architecture and flow overview

The browser talks to Caddy on ports 80 and 443. Caddy terminates TLS and reverse-proxies to the Grist container, whose port 8484 is published only on 127.0.0.1, so it is reachable from the host but not from the internet. Grist itself is a Node.js application that serves a web UI and an API; it depends on PostgreSQL for user accounts, workspaces, and document metadata, and on Redis for session state and caching. Persistent data lives in host directories bind-mounted under /opt/grist: one for PostgreSQL, one for Redis, and one for Grist documents and attachments. Logs are written to container stdout by default and can be collected with your existing log-shipping stack. The flow is intentionally simple: one public entry point, one application server, and clearly separated backing services.

Prerequisites

  • Ubuntu 22.04 or 24.04 LTS server with at least 1 CPU core, 2 GB RAM, and 20 GB disk.
  • A DNS A record pointing your domain to the server public IP.
  • Docker Engine 24.x and Docker Compose plugin installed.
  • Caddy installed as a system package or binary.
  • UFW or another firewall allowing SSH (22), HTTP (80), and HTTPS (443).
  • An SMTP relay or mail provider account for outbound email (required for invitations and notifications).
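The DNS prerequisite is the one most often wrong on a first attempt, and Caddy cannot obtain a certificate until it is right. A small preflight sketch, where the domain is a placeholder and the public-IP echo service is an assumption about your environment:

```shell
#!/bin/sh
# Preflight sketch: does the domain already resolve to this server?
# DOMAIN is a placeholder; ifconfig.me is one of several public-IP echo services.
DOMAIN="grist.example.com"

dns_ip()    { getent hosts "$1" | awk '{print $1; exit}'; }
server_ip() { curl -fsS https://ifconfig.me 2>/dev/null; }

# Pure comparison helper: empty values (failed lookups) never match.
ips_match() { [ -n "$1" ] && [ "$1" = "$2" ]; }

if ips_match "$(dns_ip "$DOMAIN")" "$(server_ip)"; then
  echo "DNS OK: $DOMAIN points at this server"
else
  echo "DNS not ready: fix the A record before starting Caddy" >&2
fi
```

If the A record was changed recently, allow for TTL-driven propagation before rerunning the check.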

Step-by-step deployment

1) Install Docker, Compose, Caddy, and firewall basics

sudo apt update && sudo apt install -y ca-certificates curl gnupg ufw
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
sudo usermod -aG docker "$USER"
# Log out and back in (or run `newgrp docker`) for the group change to take effect.

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install -y caddy

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable

2) Create directories and environment file

sudo mkdir -p /opt/grist/{data,postgres,redis}
sudo chown -R "$USER":"$USER" /opt/grist
chmod 750 /opt/grist

Create /opt/grist/.env with the following content. Docker Compose reads this file literally and does not perform command substitution, so generate each secret first (for example with openssl rand -hex 32) and paste the value over the placeholder. Set your domain and mail credentials to match your environment.

APP_HOME_URL=https://grist.example.com
[email protected]
POSTGRES_DB=grist
POSTGRES_USER=grist
POSTGRES_PASSWORD=paste-64-char-hex-secret-here
REDIS_PASSWORD=paste-64-char-hex-secret-here
GRIST_SESSION_SECRET=paste-64-char-hex-secret-here
GRIST_SMTP_HOST=smtp.mailprovider.com
GRIST_SMTP_PORT=587
[email protected]
GRIST_SMTP_PASSWORD=your-email-api-key
GRIST_SMTP_FROM="Grist <[email protected]>"

Lock the file:

chmod 600 /opt/grist/.env
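Because Compose reads the env file literally, an unexpanded $(...) left in place is a silent failure mode. A quick sanity-check sketch for the layout above (the path and key names match this guide):

```shell
#!/bin/sh
# Sketch: verify /opt/grist/.env permissions and that each generated secret
# is a 64-character hex string, as produced by `openssl rand -hex 32`.
f=/opt/grist/.env

looks_like_hex64() {  # pure helper: exactly 64 chars, lowercase hex only
  [ ${#1} -eq 64 ] || return 1
  case "$1" in *[!0-9a-f]*) return 1;; esac
}

[ "$(stat -c %a "$f" 2>/dev/null)" = "600" ] || echo "WARN: run chmod 600 $f"
for k in POSTGRES_PASSWORD REDIS_PASSWORD GRIST_SESSION_SECRET; do
  v=$(grep "^$k=" "$f" 2>/dev/null | cut -d= -f2-)
  looks_like_hex64 "$v" || echo "WARN: $k does not look like openssl rand -hex 32 output"
done
```

A placeholder, an empty value, or a literal `$(openssl ...)` string all fail the hex check, which is exactly the point.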

3) Define Compose services

Create /opt/grist/docker-compose.yml:

services:
  postgres:
    image: postgres:15-alpine
    container_name: grist_postgres
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    networks:
      - grist
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: grist_redis
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - ./redis:/data
    networks:
      - grist
    healthcheck:
      test: ["CMD", "redis-cli", "--raw", "-a", "${REDIS_PASSWORD}", "incr", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  grist:
    image: gristlabs/grist:latest
    container_name: grist_server
    restart: unless-stopped
    env_file: .env
    environment:
      APP_HOME_URL: ${APP_HOME_URL}
      TYPEORM_TYPE: postgres
      TYPEORM_HOST: postgres
      TYPEORM_PORT: "5432"
      TYPEORM_DATABASE: ${POSTGRES_DB}
      TYPEORM_USERNAME: ${POSTGRES_USER}
      TYPEORM_PASSWORD: ${POSTGRES_PASSWORD}
      GRIST_DATA_DIR: /persist
      GRIST_SESSION_SECRET: ${GRIST_SESSION_SECRET}
      REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379/0
      GRIST_DEFAULT_EMAIL: ${GRIST_DEFAULT_EMAIL}
      GRIST_SUPPORT_ANON: "false"
      GRIST_SMTP_HOST: ${GRIST_SMTP_HOST}
      GRIST_SMTP_PORT: ${GRIST_SMTP_PORT}
      GRIST_SMTP_USER: ${GRIST_SMTP_USER}
      GRIST_SMTP_PASSWORD: ${GRIST_SMTP_PASSWORD}
      GRIST_SMTP_FROM: ${GRIST_SMTP_FROM}
    volumes:
      - ./data:/persist
    networks:
      - grist
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    ports:
      - "127.0.0.1:8484:8484"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8484/status"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  grist:
    driver: bridge

4) Configure Caddy reverse proxy

Create /etc/caddy/Caddyfile (or add a site block):

grist.example.com {
  reverse_proxy 127.0.0.1:8484
  header {
    Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "SAMEORIGIN"
    Referrer-Policy "strict-origin-when-cross-origin"
  }
}

Format, validate, and reload Caddy:

sudo caddy fmt --overwrite /etc/caddy/Caddyfile
sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy

5) Start services and verify health

cd /opt/grist
docker compose up -d
sleep 15
docker compose ps
docker compose logs --tail 50 grist

Wait until the Grist container reports that the server is listening. The first startup initializes the home database, so it may take one to two minutes.
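Instead of a fixed sleep, a small polling loop gives a clearer signal. This is a sketch; /status is the same endpoint the container healthcheck above uses:

```shell
#!/bin/sh
# Sketch: poll a URL until it answers or the tries run out.
wait_for() {  # wait_for URL TRIES -> 0 once the URL responds, 1 on timeout
  tries=$2
  while [ "$tries" -gt 0 ]; do
    curl -fsS "$1" >/dev/null 2>&1 && return 0
    tries=$((tries - 1))
    [ "$tries" -gt 0 ] && sleep 5
  done
  return 1
}
# Usage from /opt/grist, allowing up to ~2 minutes for first-run init:
# wait_for http://127.0.0.1:8484/status 24 && echo "Grist is up"
```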

6) Run first-time setup

Open your domain in a browser. Grist will prompt you to create the first admin account using the email address defined in GRIST_DEFAULT_EMAIL. After registration, log in and create your first workspace and document. Invite teammates from the sharing settings panel.

7) Backup script

Create /opt/grist/backup.sh:

#!/bin/bash
set -euo pipefail
BACKUP_DIR=/opt/grist/backups/$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
docker exec grist_postgres pg_dump -U grist grist | gzip > "$BACKUP_DIR/grist.sql.gz"
tar czf "$BACKUP_DIR/data.tar.gz" -C /opt/grist data
find /opt/grist/backups -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +

Make the script executable and run it once to confirm it works:

chmod +x /opt/grist/backup.sh
/opt/grist/backup.sh

Schedule it in cron:

(crontab -l 2>/dev/null; echo "0 3 * * * /opt/grist/backup.sh") | crontab -
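Backups are only as good as the restore you have rehearsed. A hedged restore sketch for this stack, wrapped as a function so nothing runs until you point it at a real backup directory (the timestamp in the example is illustrative):

```shell
#!/bin/sh
# Restore sketch: stop the app, reload the SQL dump into PostgreSQL,
# unpack the document archive, and start the app again.
restore_grist() {
  BK=${1:?usage: restore_grist /opt/grist/backups/TIMESTAMP}
  cd /opt/grist || return 1
  docker compose stop grist
  gunzip -c "$BK/grist.sql.gz" | docker exec -i grist_postgres psql -U grist -d grist
  tar xzf "$BK/data.tar.gz" -C /opt/grist
  docker compose start grist
}
# Example: restore_grist /opt/grist/backups/20240101_030000
```

Restoring into a database that already contains data can conflict with existing rows; for a clean drill, restore into a fresh PostgreSQL volume.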

8) Acceptance checklist execution

  • Caddy serves HTTPS with a valid certificate.
  • Grist login page loads without mixed-content warnings.
  • Admin login succeeds and the workspace settings page opens.
  • Invitations can be sent and received (test email flow).
  • Document creation, row editing, and formula evaluation work correctly.
  • File attachments upload and download successfully.
  • docker compose ps shows all services healthy.
  • Backup archive exists and can be decompressed.

Configuration and secrets handling

All sensitive values live in /opt/grist/.env with mode 600. The file is never copied into images; it is mounted at runtime by Docker Compose. Rotate the GRIST_SESSION_SECRET only during a planned maintenance window because it invalidates active sessions. For SMTP credentials, use an app-specific password or a dedicated relay user rather than a personal mailbox password. If you run Grist behind a corporate proxy, export HTTP_PROXY and HTTPS_PROXY in the host environment before starting Compose, or add them to the Grist service environment block.

Verification

Run these checks from the server. Change into the project directory first so docker compose finds the stack, and read the Redis password out of the env file:

cd /opt/grist
REDIS_PASSWORD=$(grep '^REDIS_PASSWORD=' .env | cut -d= -f2-)

curl -sL -o /dev/null -w "%{http_code}" https://grist.example.com
# Expected: 200

docker compose exec postgres pg_isready -U grist
# Expected: accepting connections

docker compose exec redis redis-cli --raw -a "$REDIS_PASSWORD" ping
# Expected: PONG

docker compose exec grist curl -f http://localhost:8484/status
# Expected: HTTP 200

Note that docker compose exec takes the service name (postgres, redis, grist), not the container_name.

Common issues and fixes

  • Container exits on startup: Check docker compose logs grist. The most common cause is a missing or unhealthy backing service. Ensure PostgreSQL and Redis are healthy before Grist starts.
  • Database connection errors: Verify that the TYPEORM_* database settings match the PostgreSQL credentials and that the postgres service is on the same Docker network.
  • Email not delivering: Confirm SMTP host, port, credentials, and the GRIST_SMTP_FROM address. Test with swaks or msmtp from the host. Check Grist logs for SMTP authentication failures.
  • Redirect loop or 400 Bad Request: Ensure APP_HOME_URL exactly matches the public HTTPS URL, including the protocol and without a trailing slash.
  • 502 Bad Gateway: This usually means Grist is still starting or crashed. Wait two minutes after docker compose up and check health status.
  • Permission denied on uploads: Verify that the Grist container user has write access to /persist. The official image runs as a non-root user; ensure the host directory is writable by UID 1000 or adjust ownership.

FAQ

Can I use SQLite instead of PostgreSQL?

Yes, but only for personal or single-user deployments. Grist can store the home database in SQLite, but PostgreSQL is strongly recommended for production because it handles concurrent writes, backups, and authentication more reliably. The Docker Compose setup above uses PostgreSQL by default.

How do I enable OIDC or SAML authentication?

Grist supports OIDC and SAML via environment variables. Set GRIST_OIDC_IDP_ISSUER, GRIST_OIDC_IDP_CLIENT_ID, and GRIST_OIDC_IDP_CLIENT_SECRET for OIDC, or configure the SAML certificate and entry point for enterprise identity providers. Consult the Grist documentation for the full variable list.
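As an illustrative fragment (issuer URL and client values are placeholders for your identity provider), those variables would join the grist service's environment block:

```yaml
GRIST_OIDC_IDP_ISSUER: https://login.example.com/realms/main
GRIST_OIDC_IDP_CLIENT_ID: grist
GRIST_OIDC_IDP_CLIENT_SECRET: replace-with-idp-client-secret
```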

Can I run multiple Grist instances behind a load balancer?

Yes, but you must move PostgreSQL and Redis to external hosts or a shared cluster so all instances see the same data. You also need to use S3 or MinIO for document storage instead of local volumes, and ensure sticky sessions or shared session state via Redis.

How do I migrate from Airtable to Grist?

Grist can import CSV files exported from Airtable. Export each Airtable table as CSV, then create a new Grist document and import each CSV as a table. Rebuild linked-record relationships using reference columns, and migrate formulas from Airtable syntax to Python. For large migrations, consider the Grist API to script table creation and row insertion.
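For scripted migrations, a hedged sketch of inserting rows through the Grist REST API; the doc ID, table name, and field names are placeholders, and the API key comes from your Grist profile settings:

```shell
#!/bin/sh
# Sketch: append one record to an existing Grist table over the REST API.
# DOC_ID, TABLE, and the field names are placeholders for your own document.
GRIST=https://grist.example.com
DOC_ID=yourDocId
TABLE=Contacts
payload='{"records":[{"fields":{"Name":"Ada Lovelace","Email":"[email protected]"}}]}'

if [ -n "${GRIST_API_KEY:-}" ]; then
  curl -s -X POST "$GRIST/api/docs/$DOC_ID/tables/$TABLE/records" \
    -H "Authorization: Bearer $GRIST_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload"
else
  echo "export GRIST_API_KEY=... to send the request"
fi
```

Looping this over rows parsed from an exported CSV turns a manual migration into a repeatable script.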

What backup strategy is recommended?

The backup script above dumps the PostgreSQL home database and archives the local document storage daily. For production, also replicate backups to an offsite S3 bucket, test restores quarterly, and snapshot the host filesystem before major upgrades.

How do I update Grist?

Run docker compose pull && docker compose up -d to fetch the latest image and restart the stack. Always back up before upgrading. After the restart, verify that existing documents open correctly and that the admin panel reports the expected version.

Can I use S3 instead of local storage for documents?

Yes. Grist supports S3-compatible document storage configured through the GRIST_DOCS_MINIO_* family of environment variables (endpoint, bucket, access key, secret key), which works with MinIO, AWS S3, and other S3-compatible services. This is the recommended configuration when running multiple Grist replicas or when you want to offload document storage from the host disk.
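An illustrative environment fragment for the grist service, following Grist's MinIO-compatible storage configuration (endpoint, bucket, and keys are placeholders):

```yaml
GRIST_DOCS_MINIO_ENDPOINT: s3.example.com
GRIST_DOCS_MINIO_BUCKET: grist-docs
GRIST_DOCS_MINIO_ACCESS_KEY: replace-me
GRIST_DOCS_MINIO_SECRET_KEY: replace-me
GRIST_DOCS_MINIO_USE_SSL: "1"
```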


Talk to us

If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.

Contact Us

