
Uptime Kuma Setup: Self-Host Your Own Uptime Monitor and Never Miss a Downtime Again

Learn how to deploy Uptime Kuma with Docker, configure monitors for your services, set up multi-channel alerts, and publish a public status page — all on your own infrastructure.


Finding out your site is down because a user complained is the worst way to learn about an outage. Uptime Kuma is the self-hosted uptime monitor that fixes that — it watches your services, pings you the moment something goes wrong, and gives you a clean public status page to keep users informed. It's lightweight, beautiful, and runs on a $5 VPS. This guide walks you through a complete Uptime Kuma setup: from Docker deployment to production-grade monitoring with alerts and a public status page.


Prerequisites

  • A Linux server or local machine (Ubuntu 20.04+ recommended)
  • Docker Engine installed and running — or Node.js 18+ if you prefer a non-Docker install
  • At least 512MB RAM — Uptime Kuma is impressively lightweight
  • Port 3001 available (default), or any custom port you prefer
  • A domain name (optional for local use, recommended for team access and status pages)

Verify Docker is ready:

docker --version
sudo systemctl status docker

# Check port 3001 is free
sudo ss -tlnp | grep 3001

What Is Uptime Kuma and Why Self-Host It?

Uptime Kuma is an open-source, self-hosted monitoring tool inspired by Uptime Robot. It checks whether your services are up at configurable intervals and alerts you through dozens of notification channels when they go down — or come back up.

What It Monitors

  • HTTP/HTTPS — checks response codes, response time, and optional keyword matching in the response body
  • TCP port — checks whether a port is open and accepting connections
  • Ping (ICMP) — raw ping monitoring for servers and network devices
  • DNS — verifies DNS records resolve to expected values
  • Docker containers — monitors container status directly via the Docker socket
  • Databases — MySQL, PostgreSQL, MongoDB, Redis connectivity checks
  • Push monitors — your service pings Uptime Kuma on a schedule; if the ping stops, it's considered down (great for cron job health checks)
  • Steam game servers, Minecraft, and more — niche but useful protocol monitors

Why Self-Host Instead of Using Uptime Robot or Better Uptime?

Free tiers of SaaS monitoring tools typically cap you at around 50 monitors with 5-minute check intervals. Uptime Kuma has no limit on monitors, no per-check pricing, 20-second minimum intervals, and your monitoring data stays on your own infrastructure. For a team running 20+ services, the math is obvious.


Installing Uptime Kuma

Option 1: Docker Run (Fastest)

One command gets you a running instance with persistent data:

docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma_data:/app/data \
  louislam/uptime-kuma:latest

Open http://localhost:3001 and you'll be prompted to create an admin account. That's your entire setup for local use.

Option 2: Docker Compose (Recommended)

For a reproducible deployment that's easier to manage alongside other services:

# docker-compose.yml

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime_kuma_data:/app/data
      # Optional: mount Docker socket to monitor containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=UTC

volumes:
  uptime_kuma_data:

Bring the stack up and tail the logs:

docker compose up -d
docker compose logs -f uptime-kuma

Wait for "Listening on 3001" in the logs, then open http://localhost:3001 in your browser. First-run setup takes under a minute.

Option 3: Without Docker (Node.js)

If you prefer running directly on the host without Docker:

# Requires Node.js 18+
git clone https://github.com/louislam/uptime-kuma.git
cd uptime-kuma
npm install
npm run setup

# Start the server
node server/server.js

# For production, use PM2 to keep it running
npm install -g pm2
pm2 start server/server.js --name uptime-kuma
pm2 save
pm2 startup

Configuring Monitors

Adding Your First HTTP Monitor

In the Uptime Kuma dashboard, click Add New Monitor. For a standard website or API endpoint:

  • Monitor Type: HTTP(s)
  • Friendly Name: something recognizable — e.g., Main Website or API — /health
  • URL: https://yourdomain.com/health
  • Heartbeat Interval: 60 seconds (or as low as 20 seconds)
  • Retries: 3 — avoids false alarms from single-packet failures
  • Accepted Status Codes: 200-299 by default, or specific codes for your endpoint

Enable Certificate Expiry Notification on HTTPS monitors — Uptime Kuma will alert you before your TLS cert expires, which is one of the most common causes of unexpected downtime.
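
The underlying calculation is simple: read the certificate's notAfter field and count the days remaining. A minimal sketch of that check in Python (the helper names are ours; Uptime Kuma performs this internally and you don't need to script it):

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse the notAfter string from ssl.getpeercert(), e.g. 'Jun 10 12:00:00 2030 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    return (expires - now).days

def cert_days_left(host: str, port: int = 443) -> int:
    """Fetch the live certificate over TLS and report days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"], datetime.now(timezone.utc))
```

Handy as a sanity check when debugging why an expiry alert did or didn't fire.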

Keyword Monitoring

For deeper health checks, use keyword monitoring. Uptime Kuma fetches the URL and checks whether a specific string appears in the response body. Useful for detecting when a page loads but shows an error state:

  • Set Monitor Type to HTTP(s) — Keyword
  • Set Keyword to something that should always appear: "status":"ok", healthy, or your app name
  • Toggle Invert Keyword to alert when a specific error string appears instead
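
The matching itself is a plain substring test against the response body. A quick sketch of the logic, including the invert case (the keyword_check helper is our illustration, not Uptime Kuma's code):

```python
import json

def keyword_check(body: str, keyword: str, invert: bool = False) -> bool:
    """Up if the keyword appears in the body; with invert, up only if it does NOT appear."""
    found = keyword in body
    return not found if invert else found

# A health endpoint that should always contain "status":"ok"
healthy = json.dumps({"status": "ok", "db": "connected"}, separators=(",", ":"))
degraded = json.dumps({"status": "error", "db": "timeout"}, separators=(",", ":"))
```

With invert enabled, finding the error string marks the monitor down even though the page returned 200.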

TCP Port Monitor

For services that don't speak HTTP — databases, SMTP, custom TCP servers:

  • Monitor Type: TCP Port
  • Hostname: your-db-host.yourdomain.com
  • Port: 5432 (Postgres), 6379 (Redis), 25 (SMTP)
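
Under the hood, a TCP port monitor just attempts a handshake: if the connection completes, the service is up. A minimal Python equivalent of that signal:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Report whether a TCP handshake completes, the same signal a TCP Port monitor uses."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Note this only proves the port accepts connections; it says nothing about whether the database behind it can actually serve queries, which is what the dedicated database monitor types add.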

Push Monitor for Cron Jobs

Push monitors flip the detection model: instead of Uptime Kuma polling your service, your service pings Uptime Kuma. If the ping stops arriving, it's considered down. Perfect for monitoring cron jobs and scheduled tasks:

  1. Create a monitor with Type: Push
  2. Set the Heartbeat Interval to slightly longer than your cron frequency (e.g., if cron runs every 5 minutes, set 6 minutes)
  3. Copy the generated push URL
  4. Add the ping to the end of your cron script:

#!/bin/bash
# your-cron-job.sh

# Your actual job logic
python3 /opt/scripts/sync_data.py

# Ping Uptime Kuma on success
# If this line never runs (script crashes), Uptime Kuma raises an alert
curl -fsS --retry 3 \
  "https://uptime.yourdomain.com/api/push/YOUR_PUSH_TOKEN?status=up&msg=OK&ping=" \
  > /dev/null

Setting Up Notifications and Alerts

Connecting a Notification Channel

Go to Settings → Notifications → Setup Notification. Uptime Kuma supports over 90 notification providers. The most commonly used:

  • Telegram — create a bot via BotFather, get the chat ID, paste both into Uptime Kuma
  • Slack — create an incoming webhook in your Slack workspace, paste the URL
  • Discord — create a channel webhook, paste the URL
  • Email (SMTP) — configure your SMTP server credentials and recipient address
  • PagerDuty / OpsGenie — enterprise on-call integration via integration keys
  • Webhook — POST to any custom URL (works with n8n, Zapier, or your own endpoint)

Webhook Notification Config

The webhook notification sends a JSON payload to your endpoint on every status change. This is the most flexible option — it lets you route alerts through n8n for custom logic, or into any internal system:

# Example webhook payload Uptime Kuma sends:
{
  "heartbeat": {
    "monitorID": 1,
    "status": 0,
    "time": "2026-04-05 06:00:00",
    "msg": "Connection refused",
    "ping": null,
    "duration": 42,
    "retries": 3
  },
  "monitor": {
    "id": 1,
    "name": "Main API",
    "url": "https://api.yourdomain.com/health",
    "type": "http",
    "interval": 60
  },
  "msg": "[Main API] [🔴 Down] Connection refused"
}
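
On the receiving side, a handler mostly needs the heartbeat status (0 marks down, 1 up) and the monitor name. A sketch of routing logic built on the payload above; the PAGE/INFO severities are our own convention:

```python
import json

def route_alert(payload: dict) -> str:
    """Decide how loud to be based on the heartbeat status (0 = down, 1 = up)."""
    hb = payload["heartbeat"]
    name = payload["monitor"]["name"]
    if hb["status"] == 0:
        return f"PAGE: {name} is down: {hb['msg']}"
    return f"INFO: {name} is back up"

# Trimmed version of the payload shown above
example = json.loads("""
{
  "heartbeat": {"monitorID": 1, "status": 0, "msg": "Connection refused"},
  "monitor": {"id": 1, "name": "Main API"},
  "msg": "[Main API] [Down] Connection refused"
}
""")
```

Drop this behind any small HTTP server (or an n8n webhook node) and fan out to whatever escalation logic you need.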

Notification Best Practices

  • Create separate notification channels for critical services (wake you up) and non-critical services (Slack message is fine)
  • Use Apply on all existing monitors when adding a new notification to avoid manually adding it to every monitor
  • Set Custom Message per monitor to include runbook links or recovery steps directly in the alert

Creating a Public Status Page

Setting Up the Status Page

Go to Status Page → New Status Page. Give it a name and a slug — the slug becomes the URL path: https://status.yourdomain.com/ or https://uptime.yourdomain.com/status/my-page.

In the status page editor:

  1. Add monitor groups (e.g., Core Services, APIs, Infrastructure)
  2. Drag monitors into each group — only the monitors you add here are visible on the public page
  3. Enable Show Tags to display environment labels
  4. Upload a logo and set your brand colors
  5. Toggle Domain Names if you want to serve the status page from a custom domain like status.yourdomain.com

Custom Domain for the Status Page

To serve your status page from status.yourdomain.com, point a CNAME at your server and add the domain in the Status Page settings. Then configure Nginx to route the subdomain to Uptime Kuma:

server {
    listen 80;
    server_name status.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name status.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/status.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/status.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

If the certificate doesn't exist yet, issue it before reloading: Nginx refuses to start while the ssl_certificate paths in the 443 block point at missing files.

# Issue the cert for the status subdomain first
sudo certbot certonly --nginx -d status.yourdomain.com

# Then test the config and reload
sudo nginx -t && sudo systemctl reload nginx

In the Uptime Kuma Status Page settings, add status.yourdomain.com to the Domain Names field and save. Visiting that domain now shows only the public status page — the admin UI remains at your main Uptime Kuma URL.


Tips, Gotchas, and Troubleshooting

False Alarms on Flaky Networks

If you're getting alerts for brief blips that self-resolve, increase the Retries setting per monitor. With retries set to 3 and a 60-second interval, Uptime Kuma waits 3 minutes of consistent failure before firing an alert — eliminating noise from transient network hiccups.
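
The worst-case time from first failure to alert follows directly from those two settings, assuming the retry interval equals the check interval (the default unless you override Retry Interval per monitor):

```python
def worst_case_alert_seconds(interval_s: int, retries: int) -> int:
    """Failures must persist across each retry before an alert fires."""
    return retries * interval_s

# 3 retries at a 60-second interval: 180 seconds of sustained failure before paging
```

Tune retries per monitor: a flaky home connection might warrant 5, a datacenter-hosted API probably only needs 2.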

Monitor Shows Wrong Status After Recovery

Uptime Kuma updates status in real time on the next successful heartbeat. If a monitor stays red after you've fixed the issue, force a recheck from the monitor detail page, or just wait for the next interval. The dashboard auto-refreshes.

Uptime Kuma Container Crashes or Won't Start

# Check logs for the actual error
docker logs uptime-kuma --tail 50

# Common issue: corrupted SQLite database
# The data lives in the volume at /app/data/kuma.db
docker exec uptime-kuma sqlite3 /app/data/kuma.db "PRAGMA integrity_check;"

# If the DB is corrupted, restore from your backup
# If no backup: remove the volume and start fresh (you'll lose config)
docker compose down -v   # -v removes the named volume along with the containers
docker compose up -d

Updating Uptime Kuma

Updates are frequent and usually safe. Your data persists in the Docker volume:

docker compose pull
docker compose up -d

# Verify the new version is running
docker logs uptime-kuma --tail 10 | grep -i version

# Or check in the UI: Settings → About

Backing Up Uptime Kuma Data

All configuration, monitors, and history live in a single SQLite file. Back it up with a simple cron job:

#!/bin/bash
# /opt/scripts/backup-uptime-kuma.sh: snapshot the DB, copy it out, keep 7 days
set -euo pipefail
docker exec uptime-kuma sqlite3 /app/data/kuma.db ".backup '/app/data/kuma.db.bak'"
docker cp uptime-kuma:/app/data/kuma.db.bak "/opt/backups/uptime-kuma-$(date +%Y-%m-%d).db"
find /opt/backups -name 'uptime-kuma-*.db' -mtime +7 -delete

# Install with crontab -e. Cron entries must fit on one line,
# so keep the logic in the script and schedule it daily at 2am:
0 2 * * * /opt/scripts/backup-uptime-kuma.sh
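
A backup you never test is a guess. The same integrity check used for troubleshooting works on the copied-out backup files; a small sketch using Python's bundled sqlite3 module:

```python
import sqlite3

def backup_is_healthy(path: str) -> bool:
    """Run SQLite's integrity_check against a backup file; 'ok' means it's usable."""
    conn = sqlite3.connect(path)
    try:
        (result,) = conn.execute("PRAGMA integrity_check;").fetchone()
        return result == "ok"
    finally:
        conn.close()
```

Run it against the newest file in /opt/backups after the nightly job, and alert (via a push monitor, naturally) if it fails.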

Monitoring Uptime Kuma Itself

A monitoring tool that goes down silently is useless. Set up a second, independent monitor for your Uptime Kuma instance. The easiest option: add a free Uptime Robot monitor pointing at https://uptime.yourdomain.com. Two-layer monitoring costs nothing and ensures you're never flying blind.

Pro Tips

  • Use tags to organize monitors — tag by environment (production, staging), team (backend, infra), or criticality (p0, p1). Filter by tag on the dashboard to focus on what matters.
  • Set maintenance windows — under Maintenance, schedule planned downtime windows. Uptime Kuma suppresses alerts during maintenance so your team doesn't get paged during a known deployment.
  • Monitor your SSL certs explicitly — add a separate monitor per domain with Notify on certificate expiry enabled, alert at 30 days and 7 days out. Expired certs cause more outages than most teams expect.
  • Keep Uptime Kuma on a separate server from what it monitors — if your main VPS goes down, an Uptime Kuma instance running on the same VPS goes down with it. A small separate instance (even a free-tier cloud VM) gives you genuinely independent monitoring.
  • Use the API for automation — Uptime Kuma has an unofficial but stable API you can use to programmatically create monitors in CI/CD pipelines as new services are deployed.

Wrapping Up

A complete Uptime Kuma setup takes under 30 minutes and gives you unlimited monitors, 20-second check intervals, 90+ notification channels, and a clean public status page — all running on your own server. Compare that to paying $20/month for a SaaS monitoring tool that caps you at 50 monitors with 5-minute intervals and you start to understand why self-hosters love this tool.

Deploy it, add your most critical services first, wire up at least one notification channel, and publish a status page for your users. Then layer in push monitors for your cron jobs, keyword checks for your key API endpoints, and TCP monitors for your databases. Within an hour you'll have a monitoring setup that would have caught every outage you've ever had.


Need a Full Observability Stack for Your Infrastructure?

Uptime Kuma is a great start, but production infrastructure deserves metrics, logs, and traces too. The sysbrix team builds complete observability stacks — Uptime Kuma, Grafana, Prometheus, and log aggregation — tailored to your infrastructure and team size. We make sure you know about problems before your users do.
