Uptime Kuma Setup: Advanced Monitoring, Integrations, and Status Pages for Production Teams

Go beyond basic ping checks — learn how to configure Uptime Kuma with advanced monitor types, multi-channel alerting, public status pages, and Docker container monitoring for a production-ready observability setup.

Getting Uptime Kuma running takes 10 minutes. Getting it configured to actually catch every failure, alert the right people through the right channels, and give your users a professional status page — that takes a bit more thought. This guide covers a production-grade Uptime Kuma setup: Docker deployment, advanced monitor types including push monitors for cron jobs and Docker container checks, multi-channel notification routing, and a branded public status page your users will actually trust.

If you're brand new to Uptime Kuma and want to start with the basics, check out our getting started guide first. This guide picks up where that one leaves off and focuses on production configuration for real infrastructure.


Prerequisites

  • A Linux server (Ubuntu 20.04+ recommended) separate from the services you're monitoring — monitoring and monitored services on the same host defeats the purpose
  • Docker Engine and Docker Compose v2 installed
  • At least 512MB RAM — Uptime Kuma is lightweight but runs better with room to breathe
  • A domain name for your Uptime Kuma instance and a separate subdomain for your public status page
  • API keys or webhook URLs for your notification channels (Slack, Telegram, PagerDuty, etc.)
  • Ports 80 and 443 open for the status page and web UI

Verify Docker is ready on your monitoring server:

docker --version
docker compose version
free -h

# Confirm ports are free
sudo ss -tlnp | grep -E ':80|:443|:3001'

Deploying Uptime Kuma with Docker Compose

Production Compose Setup

A production Uptime Kuma deployment should mount the Docker socket (for container monitoring), set a fixed timezone, and use a named volume for data persistence:

# docker-compose.yml

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime_kuma_data:/app/data
      # Mount Docker socket for container health monitoring
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=UTC
      # Controls the X-Frame-Options header; set to 1 only if you need to
      # embed pages in an iframe elsewhere
      - UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=0
    security_opt:
      # Prevent processes in the container from gaining extra privileges
      - no-new-privileges:true

volumes:
  uptime_kuma_data:

Bring the stack up and confirm it started cleanly:

docker compose up -d

# Wait for startup
docker compose logs -f uptime-kuma
# Look for: Listening on 3001

# Verify the Docker socket is visible inside the container
# (the image doesn't ship the docker CLI, so check the socket file itself)
docker exec uptime-kuma ls -l /var/run/docker.sock

Nginx Reverse Proxy with HTTPS

Put Uptime Kuma behind Nginx with SSL for both the admin UI and the public status page. Use separate server blocks for each subdomain:

# /etc/nginx/sites-available/uptime-kuma
upstream kuma {
    server localhost:3001;
    keepalive 8;
}

# Admin UI
server {
    listen 443 ssl http2;
    server_name monitor.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/monitor.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/monitor.yourdomain.com/privkey.pem;

    # Restrict admin access to your IP only
    allow YOUR_IP_ADDRESS;
    allow YOUR_VPN_SUBNET/24;
    deny all;

    location / {
        proxy_pass http://kuma;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

# Public Status Page — no IP restriction
server {
    listen 443 ssl http2;
    server_name status.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/status.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/status.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://kuma;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name monitor.yourdomain.com status.yourdomain.com;
    return 301 https://$host$request_uri;
}

Enable the site, issue certificates, and reload:

sudo ln -s /etc/nginx/sites-available/uptime-kuma /etc/nginx/sites-enabled/
sudo nginx -t

# Issue certs for both subdomains in one command
sudo certbot --nginx -d monitor.yourdomain.com -d status.yourdomain.com

sudo systemctl reload nginx

Advanced Monitor Types

HTTP Keyword Monitors for Deep Health Checks

Standard HTTP monitors only check response codes — they'll show green even when your app returns a 200 with an error page. Keyword monitors fetch the response body and verify specific content exists:

  • Monitor Type: HTTP(s) — Keyword
  • URL: https://api.yourdomain.com/health
  • Keyword: "status":"ok" or any string that proves the app is healthy
  • Invert Keyword: toggle on to alert when an error string appears instead

For a JSON health endpoint, the keyword check might look for "database":"connected" — catching the case where the app is running but can't reach its database. HTTP response code monitoring alone would miss this entirely.
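You can reproduce what a keyword monitor does from the command line, which helps when debugging a check that flaps. A minimal sketch; the sample response body and keyword are placeholders, and in production the body would come from your real endpoint:

```shell
# Simulate a keyword check against a sample health response. With a live
# endpoint the body would come from:
#   body=$(curl -fsS --max-time 10 https://api.yourdomain.com/health)
body='{"status":"ok","database":"connected","cache":"connected"}'
KEYWORD='"database":"connected"'

if printf '%s' "$body" | grep -qF "$KEYWORD"; then
  echo "UP: keyword found"
else
  echo "DOWN: keyword missing despite HTTP 200"
fi
```

Note `grep -F`: the keyword is matched as a literal string, so JSON quotes and colons don't need escaping.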

Docker Container Monitors

With the Docker socket mounted, Uptime Kuma can monitor container health directly. Add a new monitor:

  • Monitor Type: Docker Container
  • Container Name / ID: exact container name (e.g., nginx, postgres, my-app)
  • Docker Host: Local Docker Host (uses the mounted socket)

This alerts immediately when a container crashes or is stopped — before any HTTP check would catch it. Set these up for every critical container alongside your HTTP monitors for layered detection.
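Under the hood this is equivalent to reading the container's state through the socket. A rough sketch of the same check using the Docker CLI on the monitoring host (the container name my-app is a placeholder):

```shell
# Check a container's state the way the Docker monitor effectively does.
# Falls back to "missing" if the container (or Docker itself) isn't available.
STATE=$(docker inspect --format '{{.State.Status}}' my-app 2>/dev/null || echo "missing")

case "$STATE" in
  running) echo "UP: my-app is running" ;;
  missing) echo "DOWN: container not found" ;;
  *)       echo "DOWN: my-app is $STATE" ;;  # exited, restarting, paused, dead
esac
```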

Push Monitors for Cron Jobs and Background Workers

Push monitors invert the usual model — instead of Uptime Kuma polling your service, your service pings Uptime Kuma on a schedule. If the ping stops arriving, Uptime Kuma raises an alert. This is the right way to monitor cron jobs, queue workers, and scheduled tasks:

  1. Create a monitor with Type: Push
  2. Set Heartbeat Interval to slightly longer than your job frequency (e.g., 370 seconds for a 5-minute job)
  3. Copy the generated push URL
  4. Add the ping to the end of your script or cron job:

#!/bin/bash
# /opt/scripts/nightly-backup.sh

set -euo pipefail

PUSH_URL="https://monitor.yourdomain.com/api/push/YOUR_PUSH_TOKEN"

# Your actual job logic
/opt/scripts/backup-postgres.sh
/opt/scripts/sync-to-s3.sh

# Report success to Uptime Kuma
# Only runs if the script above succeeded (set -e exits on failure)
curl -fsS --retry 3 \
  "${PUSH_URL}?status=up&msg=Backup+completed" \
  > /dev/null

echo "Backup and heartbeat completed: $(date)"

If the script fails before the curl line, no heartbeat arrives, and Uptime Kuma alerts after the interval expires. You catch failures without needing to monitor log files.
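A variant worth considering: report failures explicitly with status=down instead of waiting for the heartbeat interval to expire, so the alert fires immediately. A sketch under the assumption that your push URL accepts the same status and msg query parameters shown above; the token and job commands are placeholders:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical push URL: substitute your real token
PUSH_URL="https://monitor.yourdomain.com/api/push/YOUR_PUSH_TOKEN"

report() {
  # $1 = up|down, $2 = URL-encoded message; never let a failed ping kill the job
  curl -fsS --max-time 10 --retry 2 \
    "${PUSH_URL}?status=$1&msg=$2" > /dev/null 2>&1 || true
}

# On any command failure, tell Uptime Kuma right away
trap 'report down Backup+failed' ERR

echo "running backup steps..."   # replace with your real job commands

report up Backup+completed
```

The msg values go into a query string, so they must be URL-encoded, hence Backup+failed rather than a bare space.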

DNS Monitors

DNS monitors verify that your domain resolves to the expected IP address or value. Critical for catching:

  • Accidental DNS misconfigurations during record updates
  • Domain hijacking or unauthorized record changes
  • CDN or load balancer routing failures

Set Monitor Type to DNS, enter your domain, choose the record type (A, CNAME, MX, TXT), and set the expected value. Any deviation triggers an alert.
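When a DNS monitor alerts, it helps to compare what a resolver actually returns against your expected value from the command line. A quick sketch with dig; the domain and IP are placeholders:

```shell
DOMAIN="yourdomain.com"
EXPECTED="203.0.113.10"   # placeholder from the documentation IP range

# Query a specific resolver so results are consistent between runs
ACTUAL=$(dig +short A "$DOMAIN" @1.1.1.1 2>/dev/null | head -n1)

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "OK: $DOMAIN resolves to $EXPECTED"
else
  echo "ALERT: $DOMAIN resolves to '${ACTUAL:-nothing}', expected $EXPECTED"
fi
```

Pinning the resolver (@1.1.1.1 here) matters: your local resolver may serve a cached or split-horizon answer that differs from what the public internet sees.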


Multi-Channel Notification Routing

Setting Up Tiered Notifications

Not all outages are equal. A best practice is creating multiple notification channels with different urgency levels and assigning monitors to the appropriate channel:

  • P0 — Critical: PagerDuty or phone call for payment APIs, auth services, core infrastructure
  • P1 — High: Telegram or SMS for main product services
  • P2 — Medium: Slack for non-user-facing services, staging environments
  • P3 — Low: Email digest for background jobs, internal tools

In Uptime Kuma, go to Settings → Notifications → Add Notification and create one entry per channel. Then assign the appropriate notification channel to each monitor when creating or editing it.

Webhook Notification for Custom Routing

The Webhook notification type posts a JSON payload to any URL on status change. This is the most flexible option — use it to route alerts through n8n for custom logic, into Slack with custom formatting, or to your own incident management system:

# Webhook payload format sent by Uptime Kuma:
{
  "heartbeat": {
    "monitorID": 5,
    "status": 0,
    "time": "2026-04-06 20:00:00",
    "msg": "Response code: 502. Response: Bad Gateway",
    "ping": 143,
    "duration": 300,
    "retries": 3,
    "important": true
  },
  "monitor": {
    "id": 5,
    "name": "Production API",
    "url": "https://api.yourdomain.com/health",
    "type": "http",
    "interval": 60,
    "tags": [{"name": "production", "color": "#e11d48"}]
  },
  "msg": "[Production API] [🔴 Down] Response code: 502"
}

Telegram with Custom Message Templates

Telegram is reliable, and its bot rate limits are generous enough that a small team will never hit them. When creating the Telegram notification, set a custom message template to include runbook links or recovery steps directly in the alert:

# Custom Telegram message template (set in notification settings):
🚨 *{{monitorName}}* is {{status}}

📍 URL: {{monitorUrl}}
⏱ Duration: {{duration}}s
💬 {{msg}}

🔧 Runbook: https://wiki.yourdomain.com/runbooks/{{monitorName}}
📊 Dashboard: https://monitor.yourdomain.com

Time: {{time}}
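It's worth confirming the bot token and chat ID actually work before wiring them into Uptime Kuma. The Telegram Bot API's sendMessage method makes this a one-liner; the token and chat ID below are placeholders:

```shell
# Placeholders: substitute your real bot token and chat ID
TOKEN="123456:ABC-your-bot-token"
CHAT_ID="-1001234567890"

curl -fsS "https://api.telegram.org/bot${TOKEN}/sendMessage" \
  -d chat_id="${CHAT_ID}" \
  -d text="Test alert from Uptime Kuma setup" \
  && echo "Telegram delivery OK" || echo "Check token and chat ID"
```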

Building a Production Status Page

Status Page Design Principles

A status page is a communication tool as much as a technical one. Design it with your users in mind:

  • Group monitors logically — users care about features, not infrastructure. Group by "Checkout", "API", "Dashboard", not by server name
  • Show only what users experience — internal monitoring for database replication lag is valuable for you but irrelevant to users; keep it off the public page
  • Keep descriptions short and jargon-free — "Payment Processing" not "Stripe webhook handler at payments-api-1"

Configuring the Status Page

Go to Status Page → Add New Status Page. Set the slug to match your custom domain (status for status.yourdomain.com). Add the custom domain under Custom Domain. Then build your groups:

  1. Add groups for user-facing feature areas
  2. Drag monitors into each group — only the monitors you add here are visible to users
  3. Upload your logo and set brand colors under Customize
  4. Write a short description: "Real-time status for [Your Product] services"
  5. Toggle Show Tags if you use environment tags to differentiate production from staging

Incident Announcements

During an active incident, use the Incident feature on your status page to post updates. Go to your status page settings → Incident → create an incident with a title, description, and severity level. Post updates as the incident progresses and mark it resolved when service is restored. Users checking your status page get real-time updates without you needing to manage a separate communication channel.


Tips, Gotchas, and Troubleshooting

Too Many False Alarms

Alerts that fire for transient 30-second blips train your team to ignore notifications — the definition of alert fatigue. Fix it with proper retry configuration per monitor:

# Recommended monitor settings to reduce false positives:
# Heartbeat Interval: 60 seconds
# Retries: 3
# Retry Interval: 20 seconds

# With these settings:
# Uptime Kuma checks every 60s
# On failure: retries 3 times at 20s intervals
# Alert fires only after ~60s of consistent failure (3 retries × 20s)
# Single-packet loss or brief network blip = no alert

# For critical services where every second matters:
# Interval: 30s, Retries: 1, Retry Interval: 10s
# Alert fires after ~40s confirmed failure

Database Growing Too Large

Uptime Kuma stores every heartbeat in SQLite. After months of monitoring 50+ services at 60-second intervals, the database can grow significantly:

# Check database size
docker exec uptime-kuma du -sh /app/data/kuma.db

# Configure data retention in Settings → Monitor History:
# Keep monitor history data for 180 days
# This prunes old heartbeat records automatically

# Manual database maintenance:
docker exec uptime-kuma sqlite3 /app/data/kuma.db "PRAGMA wal_checkpoint(TRUNCATE);"
docker exec uptime-kuma sqlite3 /app/data/kuma.db "VACUUM;"

# Check size after vacuum:
docker exec uptime-kuma du -sh /app/data/kuma.db

Monitor Shows Incorrect Status After Recovery

# Force an immediate recheck from the monitor detail page:
# Click the monitor → Pause, then Resume to trigger a fresh check

# Or restart Uptime Kuma to clear any stuck state:
docker compose restart uptime-kuma
docker compose logs -f uptime-kuma

# Check if Uptime Kuma's own network is the issue:
docker exec uptime-kuma curl -I https://yourservice.com
# If this fails, the monitoring server has a network problem, not the service

Uptime Kuma Not Monitoring Uptime Kuma

Your monitoring tool going down silently is a real problem. Set up an external heartbeat monitor that pings Uptime Kuma itself. The simplest approach: use a free-tier external monitor (Uptime Robot has a free plan with 50 monitors) to ping https://monitor.yourdomain.com. Two-layer monitoring means you're never completely blind:

# Add a cron job on a different server to ping your Uptime Kuma instance:
# */5 * * * * curl -fsS https://monitor.yourdomain.com/api/badge/1/status \
#   > /dev/null || echo "Uptime Kuma unreachable" | mail -s "ALERT" [email protected]

# Or hit Uptime Kuma's Prometheus metrics endpoint (requires an API key
# from Settings → API Keys, passed as the basic-auth password):
curl -u ":YOUR_API_KEY" https://monitor.yourdomain.com/metrics | grep -c "monitor_status"
# Should return a number > 0

Updating Uptime Kuma

cd ~/uptime-kuma

# Back up before updating
docker cp uptime-kuma:/app/data/kuma.db \
  ~/backups/kuma-$(date +%Y-%m-%d).db

# Pull and restart
docker compose pull
docker compose up -d

# Confirm new version
docker logs uptime-kuma --tail 5

# Pin to a specific version for stability:
# image: louislam/uptime-kuma:1.23.13

Pro Tips

  • Use tags to organize monitors — tag by environment (production, staging), team (backend, data), or priority (p0, p1). Filter the dashboard by tag during incidents to focus on what matters.
  • Set maintenance windows before planned deployments — go to Maintenance → Add Maintenance before any deploy that will cause intentional downtime. Suppress alerts during the window so your team doesn't get paged for expected behavior.
  • Use the API to create monitors programmatically — if you're deploying new services frequently, script monitor creation via the Uptime Kuma API rather than clicking through the UI each time. New service deploys can automatically register their own health check monitor.
  • Keep the monitoring server geographically separate — if your app and Uptime Kuma are in the same datacenter/region, a regional outage takes out both simultaneously. Use a different cloud provider or region for the monitoring server.

Wrapping Up

A production-grade Uptime Kuma setup is more than a ping monitor pointed at your homepage. It's keyword checks that catch partial failures, push monitors that detect silent cron job failures, tiered notification routing that gets the right alert to the right person at the right urgency level, and a public status page that keeps users informed instead of confused.

If you're just getting started, the Uptime Kuma getting started guide covers the initial deployment and basic monitor setup. Once that foundation is in place, come back here to layer in the advanced configuration that makes it genuinely useful for production infrastructure.


Need a Full Observability Stack for Production Infrastructure?

Uptime Kuma handles uptime monitoring well, but a production observability stack also needs metrics, logs, traces, and alerting that scales with your infrastructure. The sysbrix team designs and deploys complete monitoring stacks — Uptime Kuma, Grafana, Prometheus, and log aggregation — tuned to your specific infrastructure and team size.

Talk to Us →