Production Guide: Deploy ntfy with Docker Compose + Caddy + Auth + Attachments on Ubuntu

Self-host private operational notifications with ntfy, Docker Compose, Caddy TLS, topic permissions, attachments, backups, and production checks.

Self-hosted notifications are one of those small platform services that quietly make the rest of your operations better. A monitoring stack can detect outages, backup jobs can notice failures, and deployment scripts can report status, but those signals only help if they reach the right team quickly. This guide shows how to deploy ntfy as a lightweight internal push-notification service on Ubuntu using Docker Compose, Caddy, file-backed storage, attachment support, and a small set of production guardrails.

The real-world use case is simple: your team wants private alert topics for infrastructure events without routing every message through a heavyweight chat platform or a third-party incident vendor. ntfy gives you HTTP-based publishing, browser and mobile subscribers, optional authentication, and enough operational simplicity to fit on the same VPS that hosts other internal tools. We will keep the design intentionally boring: one application container, persistent cache, a reverse proxy with automatic TLS, explicit environment files, and a backup routine that can be tested before an incident.

Architecture and flow overview

The deployment has four moving parts. Operators and automation scripts publish messages to HTTPS endpoints such as https://ntfy.example.com/ops-alerts. Caddy terminates TLS, forwards traffic to the ntfy container on the private Docker network, and enforces a clean public hostname. ntfy stores message cache and attachment metadata on a persistent volume so subscribers can reconnect and receive recent events. Backups copy the server database and attachment directory to a timestamped folder that can be moved off-host.

In production, treat ntfy as a shared service rather than a toy webhook receiver. Use dedicated topics per workflow, require authentication for publishing, rotate tokens when employees or vendors leave, and monitor disk usage because attachments can grow faster than plain text notifications. If you later need SSO, rate limiting at the edge, or multi-region delivery, the same pattern can be extended behind a dedicated gateway, but the single-host version is often enough for internal alerting and small customer operations.

Prerequisites

  • Ubuntu 22.04 or 24.04 with a non-root sudo user.
  • A DNS record such as ntfy.example.com pointing at the server.
  • Ports 80 and 443 open from the internet.
  • Docker Engine and the Compose plugin installed.
  • A plan for who can publish, who can subscribe, and which topics are sensitive.
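Before changing anything, a quick preflight can confirm the basic tooling is in place. This is a minimal sketch; DNS correctness and firewall reachability still need to be checked from outside the host.

```shell
# Check that the commands this guide relies on are installed.
# Pass the tool names to verify; prints one line per missing tool.
preflight() {
  local missing=0 cmd
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "missing: $cmd"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all required tools present"
  fi
  return "$missing"
}

preflight docker curl openssl || echo "install the missing tools before continuing"
```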

Step-by-step deployment

Start by creating a dedicated directory and a service user. The directory layout keeps runtime files, configuration, and backups separate so you can reason about permissions and recovery later.

sudo mkdir -p /opt/ntfy/{config,cache,attachments,backups}
sudo chown -R $USER:$USER /opt/ntfy
cd /opt/ntfy
openssl rand -hex 32 > config/server-secret.txt
chmod 600 config/server-secret.txt

Create the environment file next. Keep the base URL exact; clients use it in message links and attachment URLs. The cache settings below favor reliability for operational alerts while still limiting retained history.

cat > .env <<'EOF'
NTFY_BASE_URL=https://ntfy.example.com
NTFY_CACHE_DURATION=72h
NTFY_ATTACHMENT_TOTAL_SIZE_LIMIT=5G
NTFY_ATTACHMENT_FILE_SIZE_LIMIT=50M
NTFY_ATTACHMENT_EXPIRY_DURATION=24h
EOF
chmod 600 .env
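One way to consume these values is a small publish helper that sources the env file at runtime. This is a sketch: NTFY_USER, NTFY_PASS, and the DRY_RUN switch are illustrative names, and real credentials should come from your secret manager rather than shell history.

```shell
# notify TOPIC TITLE BODY -- publish a message using .env settings.
# With DRY_RUN=1 it prints the curl command (password masked) instead
# of sending, which is handy for testing pipelines offline.
notify() {
  local topic=$1 title=$2 body=$3
  local env_file=${ENV_FILE:-/opt/ntfy/.env}
  if [ -f "$env_file" ]; then
    . "$env_file"
  fi
  local url="${NTFY_BASE_URL:-https://ntfy.example.com}/$topic"
  if [ "${DRY_RUN:-0}" = 1 ]; then
    echo "curl -fsS -u ${NTFY_USER:-ops-bot}:*** -H 'Title: $title' -d '$body' $url"
  else
    curl -fsS -u "${NTFY_USER:-ops-bot}:${NTFY_PASS:-}" \
      -H "Title: $title" -d "$body" "$url"
  fi
}

DRY_RUN=1 notify ops-alerts "deploy finished" "web-7 rolled out in 41s"
```

The same function can back cron jobs and CI steps: set NTFY_PASS in the environment, drop DRY_RUN, and every caller shares one definition of the base URL.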

Write the ntfy server configuration. This enables a persistent cache, a local auth database, attachment storage, and conservative visitor limits. With auth-default-access set to deny-all, anonymous users can neither read nor write; you then explicitly grant access per user and per topic.

cat > config/server.yml <<'EOF'
base-url: "https://ntfy.example.com"
listen-http: ":80"
cache-file: "/var/cache/ntfy/cache.db"
cache-duration: "72h"
auth-file: "/var/cache/ntfy/user.db"
auth-default-access: "deny-all"
attachment-cache-dir: "/var/lib/ntfy/attachments"
attachment-total-size-limit: "5G"
attachment-file-size-limit: "50M"
attachment-expiry-duration: "24h"
visitor-request-limit-burst: 60
visitor-request-limit-replenish: "10s"
behind-proxy: true
EOF
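Since base-url and auth-default-access are the settings most likely to cause surprises, a tiny review helper can print them back before the stack starts. A sketch; the default path is the one used above.

```shell
# check_cfg [FILE] -- print the security-relevant keys from server.yml
# so a reviewer can confirm them at a glance.
check_cfg() {
  local cfg=${1:-/opt/ntfy/config/server.yml}
  if [ -f "$cfg" ]; then
    grep -E '^(base-url|auth-default-access|attachment-total-size-limit|cache-duration):' "$cfg"
  else
    echo "config not found at $cfg"
  fi
}

check_cfg
```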

Now define the Compose stack. Caddy and ntfy share a private network. The ntfy container does not need to publish a host port; Caddy reaches it by service name. Pin versions in change-controlled environments and upgrade deliberately after reading release notes.

cat > docker-compose.yml <<'EOF'
services:
  ntfy:
    image: binwiederhier/ntfy:v2.11.0
    command: serve
    env_file: .env
    restart: unless-stopped
    volumes:
      - ./config/server.yml:/etc/ntfy/server.yml:ro
      - ./cache:/var/cache/ntfy
      - ./attachments:/var/lib/ntfy/attachments
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost/v1/health | grep -q true"]
      interval: 30s
      timeout: 5s
      retries: 5
    networks: [internal]

  caddy:
    image: caddy:2.8
    restart: unless-stopped
    depends_on: [ntfy]
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks: [internal]

networks:
  internal:

volumes:
  caddy_data:
  caddy_config:
EOF

Add the Caddy reverse proxy configuration. The header lines preserve the original client details so ntfy can apply proxy-aware rate limits and produce useful logs.

cat > Caddyfile <<'EOF'
ntfy.example.com {
  encode zstd gzip
  reverse_proxy ntfy:80 {
    header_up X-Forwarded-Proto {scheme}
    header_up X-Forwarded-Host {host}
    header_up X-Real-IP {remote_host}
  }
}
EOF
docker compose pull
docker compose up -d

Create initial users and permissions from inside the application container. The example gives one automation user write access to an operations topic and one human user read access. Add more topics for backups, deployments, security alerts, and customer-facing incidents instead of mixing everything into a single noisy feed.

docker compose exec ntfy ntfy user add ops-bot
docker compose exec ntfy ntfy user add alice
docker compose exec ntfy ntfy access ops-bot ops-alerts write-only
docker compose exec ntfy ntfy access alice ops-alerts read-only
docker compose exec ntfy ntfy access alice backup-alerts read-only
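Once more than a few people and pipelines are involved, it can help to keep the grants in a reviewable list and generate the commands from it. A sketch using the users and topics from the example above; review the output, then pipe it to sh on the server.

```shell
# One "user topic permission" triple per line; blank lines ignored.
grants="
ops-bot ops-alerts write-only
alice ops-alerts read-only
alice backup-alerts read-only
"

# Emit one `user add` per distinct user, then the access grants.
printf '%s\n' "$grants" | awk 'NF {print $1}' | sort -u |
  while read -r u; do
    echo "docker compose exec ntfy ntfy user add $u"
  done
printf '%s\n' "$grants" | awk 'NF==3 {print "docker compose exec ntfy ntfy access " $1 " " $2 " " $3}'
```

Keeping the grants list in version control also gives you an audit trail of who was allowed to publish where, and when that changed.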

Configuration and secrets handling best practices

Do not bake passwords, topic names, or bearer tokens into public repositories. Store publishing credentials in the secret manager used by your automation platform, and scope each token to the smallest useful set of topics. For example, a backup job should publish only to backup-alerts, while a deployment pipeline should publish only to deployments. This makes token rotation less disruptive and keeps accidental leaks contained.
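For automation accounts, ntfy also supports per-user access tokens (ntfy token add), which are easier to rotate than passwords and keep credentials off the command line. A sketch that only prints the commands for review; ops-bot is from the example above, and backup-bot is a hypothetical second automation account.

```shell
# Print token-creation commands for each automation account.
# Run the output on the server, store the tokens in your secret
# manager, and publish with: -H "Authorization: Bearer <token>"
for user in ops-bot backup-bot; do
  echo "docker compose exec ntfy ntfy token add $user"
done
```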

Topic names are not a complete security boundary. Use authentication for private alerts, avoid putting sensitive customer data in notification bodies, and treat message attachments as operational evidence rather than permanent document storage. If you need long retention for audit events, publish a short notification that links to your logging or ticketing system instead of uploading the full artifact to ntfy.

Backups should include cache/cache.db, cache/user.db, attachments/, config/server.yml, docker-compose.yml, and Caddyfile. Test restore on a spare host before relying on the service for incident notifications. A notification platform that cannot be restored during a bad day becomes another point of uncertainty.

cat > backup-ntfy.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/ntfy
stamp=$(date -u +%Y%m%dT%H%M%SZ)
dest="backups/ntfy-$stamp"
mkdir -p "$dest"
docker compose exec -T ntfy sh -c 'sync'
# Note: this copies the SQLite databases while ntfy is running; for a
# strictly consistent snapshot, stop the stack first or use sqlite3's .backup.
cp -a config docker-compose.yml Caddyfile cache attachments "$dest/"
tar -czf "$dest.tar.gz" -C backups "ntfy-$stamp"
rm -rf "$dest"
find backups -name 'ntfy-*.tar.gz' -mtime +14 -delete
EOF
chmod +x backup-ntfy.sh
./backup-ntfy.sh
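Testing the restore is the half that usually gets skipped. The drill below unpacks the newest archive into a scratch directory and checks that the critical files survived the round trip (a sketch; the file list matches the backup script above).

```shell
# restore_drill [BACKUP_DIR] -- unpack the newest archive and verify
# that the databases and config are present.
restore_drill() {
  local dir=${1:-/opt/ntfy/backups} latest tmp f
  latest=$(ls -1t "$dir"/ntfy-*.tar.gz 2>/dev/null | head -n1)
  if [ -z "$latest" ]; then
    echo "no backup archives found in $dir"
    return 0
  fi
  tmp=$(mktemp -d)
  tar -xzf "$latest" -C "$tmp"
  for f in config/server.yml cache/cache.db cache/user.db; do
    if find "$tmp" -path "*/$f" | grep -q .; then
      echo "ok: $f"
    else
      echo "MISSING: $f"
    fi
  done
  rm -rf "$tmp"
}

restore_drill
```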

Verification checklist

Verify the service before connecting production systems. First check that both containers are healthy and Caddy has issued a certificate. Then publish a test alert with an authenticated user and confirm a browser or mobile subscriber receives it. Finally, confirm the backup archive exists and can be listed.

cd /opt/ntfy
docker compose ps
docker compose logs --tail=80 caddy
curl -u ops-bot 'https://ntfy.example.com/ops-alerts' \
  -H 'Title: ntfy production test' \
  -H 'Priority: 4' \
  -d 'If you can read this, authenticated publishing works.'
ls -lh backups/ntfy-*.tar.gz

For ongoing checks, publish a low-priority heartbeat from cron or your monitoring system every few hours. Alert if the HTTP request fails, if the container restarts repeatedly, or if disk usage under /opt/ntfy crosses your threshold. The point is not to page people for every heartbeat; it is to know that the notification path itself has not silently broken.
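A cron heartbeat along these lines keeps the path exercised. The URL, user, and marker path are illustrative, and DRY_RUN=1 prints instead of publishing so the script can be tested without a live server.

```shell
# heartbeat [URL] -- publish a low-priority heartbeat and record a
# freshness marker that monitoring can check. Intended for cron, e.g.:
#   0 */4 * * * /opt/ntfy/heartbeat.sh
heartbeat() {
  local url=${1:-https://ntfy.example.com/ops-alerts}
  local marker=${MARKER:-/var/tmp/ntfy-heartbeat.ok}
  if [ "${DRY_RUN:-0}" = 1 ]; then
    echo "would publish heartbeat to $url"
    return 0
  fi
  if curl -fsS -u "ops-bot:${NTFY_PASS:-}" -H 'Priority: 2' \
       -d "heartbeat $(date -u +%FT%TZ)" "$url" >/dev/null; then
    date -u +%s > "$marker"
  else
    echo "ntfy heartbeat failed" >&2
    return 1
  fi
}

DRY_RUN=1 heartbeat
```

Monitoring then has two signals: the subscriber sees the periodic message, and the marker file's age shows whether publishing itself last succeeded.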

Common issues and fixes

Caddy cannot obtain a certificate. Confirm the DNS A or AAAA record points at this server, ports 80 and 443 are reachable, and no other process is already bound to those ports. Check docker compose logs caddy before changing the ntfy configuration.

Publishing returns 403 Forbidden. The user does not have write access to that topic, or anonymous access is denied as intended. Re-run ntfy access for the specific user and topic, then test with curl -u.

Subscribers miss old messages. Increase cache-duration if the operational requirement is to replay alerts after long offline periods. Remember that higher retention increases the importance of database backups and disk monitoring.

Attachments fill the server. Lower attachment size limits, shorten expiry duration, and route large artifacts to object storage or your ticketing system. ntfy should announce that an artifact exists, not become the main archive for every log bundle.
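Attachment growth is easiest to catch with a simple usage check wired into cron or the monitoring agent. A sketch; the path and the 80% threshold are examples to adjust.

```shell
# disk_watch [PATH] [LIMIT%] -- warn when the filesystem holding PATH
# crosses the usage threshold (uses GNU df, as shipped on Ubuntu).
disk_watch() {
  local path=${1:-/opt/ntfy} limit=${2:-80} used
  used=$(df --output=pcent "$path" 2>/dev/null | tail -n1 | tr -dc '0-9')
  if [ -z "$used" ]; then
    echo "cannot read usage for $path"
    return 0
  fi
  if [ "$used" -ge "$limit" ]; then
    echo "WARN: $path at ${used}% (limit ${limit}%)"
  else
    echo "ok: $path at ${used}%"
  fi
}

disk_watch /tmp 80
```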

Clients show the wrong URL in notifications. Fix base-url and confirm Caddy forwards the host and protocol headers. After changing the base URL, recreate the container and send a fresh test message.

Automation scripts leak credentials in logs. Switch from inline passwords to environment variables or secret-mounted files. Also rotate the affected ntfy user password and inspect access logs for unexpected publishing attempts.

FAQ

Should ntfy be public or private?

The web endpoint must be reachable by subscribers, but topics do not need to be open. For internal alerts, deny anonymous access and grant read or write permissions per user and per topic.

Can I use ntfy for paging?

It can be part of a paging workflow, especially for small teams, but critical incident response should still define escalation, ownership, and backup channels if phones are offline or push delivery is delayed.

How many topics should I create?

Create topics around operational ownership: backups, deployments, uptime, security, and customer incidents. Too few topics create noise; too many make permissions and subscriptions hard to audit.

Should alerts include secrets or customer data?

No. Put only the minimum context needed to investigate, then link to logs, dashboards, tickets, or runbooks with proper access control. Notifications are delivery mechanisms, not data stores.

How often should I back up ntfy?

Daily backups are enough for many teams because message history is short-lived, but back up immediately after large permission changes. Always test restoring users and topics on a separate host.

Can multiple applications share one publishing user?

Avoid it. Separate users make rotation easier and help you identify which system created a noisy or malformed alert. Use one write-only account per automation source when possible.

What should I monitor after launch?

Monitor HTTPS availability, container health, failed publish responses, disk usage, certificate renewal, backup freshness, and whether a known subscriber receives periodic test notifications.

Talk to us

If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.
