Many teams start with a browser home page as a personal productivity tool, but quickly realize they need a shared operational landing page for on-call and platform work. When incidents happen, engineers waste valuable minutes opening ten different tabs, checking stale bookmarks, and trying to remember which admin URL maps to production versus staging. Glance solves that with a fast, self-hosted dashboard that can aggregate links, service status, RSS, and custom widgets in one place.
This guide is written for production operators who want a reliable deployment that survives upgrades, supports TLS by default, and includes clear runbooks for backup, restore, and troubleshooting. We will deploy Glance with Docker Compose and Caddy on Ubuntu, keep configuration in version-controlled files, and apply practical secret-handling conventions that work for small and mid-sized teams.
By the end, you will have a hardened baseline with HTTPS, service isolation, health checks, deterministic startup behavior, and a repeatable maintenance workflow. The approach here deliberately favors simple, auditable operations over over-engineered abstractions so your team can operate confidently even at 2 a.m.
Architecture and flow overview
The deployment has three layers. First, Caddy terminates TLS and exposes your public endpoint. Second, the Glance container serves the dashboard application internally on a private Docker network. Third, persistent data and configuration are mounted from the host filesystem so you can back up and recover without rebuilding images.
Request flow is straightforward: user browser -> Caddy (443) -> Glance container (internal port). Caddy manages certificates automatically and renews them before expiry. Because Glance is not directly exposed on the host network, you reduce accidental exposure and keep one ingress path for logging and security controls.
For configuration, we store docker-compose.yml, Caddyfile, and glance.yml under /opt/glance. This keeps ownership clear and allows changes through pull requests if you keep this directory in a private Git repository. For resilience, we add a nightly backup of the directory and validate restore instructions as part of routine maintenance.
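If you adopt the Git-backed approach, bootstrapping looks roughly like this once the files from the later steps exist. The remote URL and branch name are placeholders, not anything Glance prescribes:

```shell
# Version-control the deployment directory; runtime state stays ignored.
cd /opt/glance
git init -b main
printf 'backups/\ncaddy/data/\ncaddy/config/\ndata/\n' > .gitignore
git add .gitignore docker-compose.yml caddy/Caddyfile config/glance.yml
git commit -m "Baseline Glance deployment"
# Placeholder remote; point this at your private repository:
# git remote add origin git@git.example.com:ops/glance.git && git push -u origin main
```

Ignoring backups/, data/, and Caddy's state directories keeps secrets and certificates out of the repository while the reviewable configuration stays in.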
Prerequisites
- Ubuntu 22.04/24.04 server with sudo access
- DNS A record for your dashboard host (for example, dashboard.example.com)
- Docker Engine and Docker Compose plugin installed
- Ports 80 and 443 reachable from the internet (for ACME/TLS issuance)
- A non-root service user for day-to-day operations
Before proceeding, verify DNS propagation and confirm there is no competing reverse proxy already bound to ports 80/443. If another ingress is running, either migrate it or dedicate a separate VM to avoid ambiguous networking during certificate issuance.
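A minimal preflight sketch for those checks (dashboard.example.com is the placeholder domain used throughout this guide, and ifconfig.me is just one of several echo-your-IP services):

```shell
# Confirm DNS resolves to this host and nothing else owns ports 80/443.
dig +short dashboard.example.com                  # should print this server's public IP
curl -s https://ifconfig.me; echo                 # compare with the address above
sudo ss -ltnp '( sport = :80 or sport = :443 )'   # should list no competing listener
```

If `ss` shows an existing listener on 80 or 443, resolve that before continuing; ACME issuance will fail or behave ambiguously otherwise.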
Step-by-step deployment
1) Create directories and baseline files
Use a dedicated path and ownership model so backups, audits, and incident response are predictable.
sudo mkdir -p /opt/glance/{config,caddy,data,backups}
sudo useradd --system --home /opt/glance --shell /usr/sbin/nologin glance || true
sudo chown -R $USER:$USER /opt/glance
cd /opt/glance
2) Create Docker Compose definition
Compose defines two services: Caddy as reverse proxy and Glance as app container. We keep restart policies and health checks explicit.
cat > /opt/glance/docker-compose.yml <<'YAML'
services:
  caddy:
    image: caddy:2.8
    container_name: glance-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config
    networks: [glance_net]

  glance:
    image: glanceapp/glance:latest
    container_name: glance-app
    restart: unless-stopped
    expose:
      - "8080"
    volumes:
      - ./config/glance.yml:/app/config/glance.yml:ro
      - ./data:/app/data
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/"]
      interval: 30s
      timeout: 5s
      retries: 5
    networks: [glance_net]

networks:
  glance_net:
    driver: bridge
YAML
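Before starting anything, lint the file; a syntax or schema error fails fast here instead of during an incident:

```shell
# Validate the Compose file; prints nothing but an error on failure.
docker compose -f /opt/glance/docker-compose.yml config --quiet && echo "compose file OK"
```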
3) Configure Caddy for HTTPS
Replace the hostname with your real domain. Keep TLS email current for certificate notices.
cat > /opt/glance/caddy/Caddyfile <<'CADDY'
dashboard.example.com {
    encode gzip zstd
    reverse_proxy glance:8080
    tls admin@example.com
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
CADDY
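You can lint the Caddyfile with the same image version Compose uses, without touching any running container:

```shell
# Validate the Caddyfile in a throwaway container.
docker run --rm \
  -v /opt/glance/caddy/Caddyfile:/etc/caddy/Caddyfile:ro \
  caddy:2.8 caddy validate --config /etc/caddy/Caddyfile
```

A clean `Valid configuration` result here means any later failure is environmental (DNS, firewall) rather than a config typo.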
4) Create Glance configuration
Start with operationally meaningful widgets: service links, on-call docs, and key status surfaces.
cat > /opt/glance/config/glance.yml <<'YAML'
server:
  host: 0.0.0.0
  port: 8080

theme:
  light: false  # dark palette (the Glance default)

pages:
  - name: Production Operations
    columns:
      - size: small
        widgets:
          - type: bookmarks
            title: Incident Essentials
            groups:
              - links:
                  - title: PagerDuty
                    url: https://pagerduty.com
                  - title: Grafana
                    url: https://grafana.example.com
                  - title: Runbooks
                    url: https://wiki.example.com/runbooks
          - type: rss
            title: Platform Updates
            feeds:
              - url: https://status.example.com/history.rss
      - size: full
        widgets:
          - type: monitor
            title: Critical Services
            sites:
              - title: API Gateway
                url: https://api.example.com/health
              - title: Identity
                url: https://id.example.com/health
YAML
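A quick syntax check catches indentation mistakes before the container ever reads the file. This assumes python3 with PyYAML is available (`sudo apt install python3-yaml` if not); Glance itself will still do the authoritative schema validation on startup:

```shell
# Parse glance.yml; any YAML error is reported with a line number.
python3 -c 'import yaml; yaml.safe_load(open("/opt/glance/config/glance.yml")); print("glance.yml parses")'
```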
5) Start and validate containers
Bring up services, then confirm health and proxy behavior before sharing the dashboard URL with your team.
cd /opt/glance
docker compose pull
docker compose up -d
docker compose ps
docker compose logs --no-color --tail=120
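Because the Compose file defines a health check, you can wait on the reported status instead of eyeballing logs. A small polling sketch (glance-app is the container name from this guide):

```shell
# Poll the health status Compose reports until Glance is ready, or give up.
for i in $(seq 1 10); do
  status=$(docker inspect --format '{{.State.Health.Status}}' glance-app)
  echo "attempt $i: $status"
  [ "$status" = "healthy" ] && break
  sleep 5
done
```

If the loop ends on `unhealthy`, check `docker compose logs glance` before debugging Caddy; the proxy cannot fix a broken upstream.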
6) Add scheduled backups
Back up configuration and state daily. The script below keeps 14 days of rolling snapshots locally; replicate /opt/glance/backups to off-host storage (object storage or another VM) so a disk failure does not take the backups with it.
sudo tee /etc/cron.daily/glance-backup >/dev/null <<'SH'
#!/usr/bin/env bash
set -euo pipefail
TS=$(date +%F-%H%M%S)
DEST=/opt/glance/backups/glance-$TS.tgz
tar -C /opt -czf "$DEST" glance/config glance/caddy glance/docker-compose.yml glance/data
find /opt/glance/backups -type f -name 'glance-*.tgz' -mtime +14 -delete
SH
sudo chmod +x /etc/cron.daily/glance-backup
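A snapshot that cannot be restored is worse than none. The helper below is a sketch of a cheap integrity check: it lists an archive and fails if any path a restore depends on is missing.

```shell
# Fail unless the backup archive contains every path a restore needs.
verify_glance_backup() {
  archive="$1"
  listing=$(tar -tzf "$archive") || return 1
  for path in glance/config glance/caddy glance/docker-compose.yml glance/data; do
    if ! printf '%s\n' "$listing" | grep -q "^${path}"; then
      echo "MISSING: $path" >&2
      return 1
    fi
  done
  echo "OK: all required paths present in $archive"
}
```

Run it against the newest snapshot after each backup window, for example: `verify_glance_backup "$(ls -t /opt/glance/backups/glance-*.tgz | head -n1)"`.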
7) Perform a restore drill
Do not treat backups as valid until restore is tested. Practice on a staging VM at least monthly.
# On a clean host with Docker installed:
sudo mkdir -p /opt/glance
sudo tar -C /opt -xzf /path/to/glance-YYYY-MM-DD-HHMMSS.tgz
cd /opt/glance
docker compose up -d
# Pin the domain to this host: on a drill VM, public DNS still points at production.
# -k is needed because the drill host will not hold the production certificate.
curl -kI --resolve dashboard.example.com:443:127.0.0.1 https://dashboard.example.com
Configuration and secrets handling best practices
Even when a service appears simple, secret hygiene matters. Avoid embedding credentials in Compose files committed to Git. Instead, keep secrets in a root-readable environment file outside repositories (for example, /etc/glance/secrets.env) and mount only required values. If you integrate private feeds or APIs later, map these as environment variables and document ownership and rotation policy.
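A minimal sketch of that convention, assuming a hypothetical token for a private feed (the variable name and value are placeholders):

```shell
# Root-owned secrets file outside any repository.
sudo install -d -m 0750 /etc/glance
sudo tee /etc/glance/secrets.env >/dev/null <<'ENV'
PRIVATE_FEED_TOKEN=change-me
ENV
sudo chmod 0640 /etc/glance/secrets.env
```

Reference it from the glance service in docker-compose.yml with `env_file: [/etc/glance/secrets.env]`; Glance can then interpolate `${PRIVATE_FEED_TOKEN}` inside glance.yml, so the token never appears in the committed config.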
Use least privilege for operators: a small group gets sudo, while most team members only access the dashboard UI. Restrict SSH with key-based auth, disable password logins, and enable unattended security updates. At the network layer, only expose 80/443 publicly and deny direct container ports from the internet.
For change control, require pull-request review for updates to Caddyfile and dashboard widgets that point to critical systems. A surprising number of incidents come from outdated links to wrong environments; review gates and explicit naming (prod/staging/dev) reduce these mistakes significantly.
Verification checklist
- HTTPS certificate is valid and auto-renewing (no browser warnings)
- Dashboard loads through domain and not through raw container IP
- docker compose ps shows healthy state for Glance service
- Backup archive is created successfully and includes config + data paths
- Restore drill documented with timing, owner, and observed gaps
Capture baseline metrics after go-live: page load time, container restart count, and backup success rate. These become your early warning signals when behavior drifts.
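A baseline capture can be as simple as a few commands run right after go-live and rerun weekly. This sketch assumes the stack from this guide is running on the host where you execute it:

```shell
# Record a dated baseline: page load time, restart count, newest backup.
date -u +%FT%TZ
curl -o /dev/null -s -w 'page_load_s=%{time_total}\n' https://dashboard.example.com/
docker inspect --format 'restarts={{.RestartCount}}' glance-app
ls -t /opt/glance/backups/glance-*.tgz | head -n1   # newest snapshot should be <24h old
```

Append the output to a log file or ticket so drift is visible as a trend, not a surprise.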
Common issues and fixes
TLS certificate not issuing
Usually caused by DNS mismatch or blocked port 80. Confirm A record points to the server and ensure cloud firewall/security groups allow inbound 80/443.
502/Bad Gateway from Caddy
Glance container may be unhealthy or wrong upstream in Caddyfile. Check docker compose logs glance and verify reverse proxy target is glance:8080.
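To separate proxy problems from upstream problems, test the upstream from inside Caddy's own container (the Caddy image is Alpine-based, so BusyBox wget is available):

```shell
# If this succeeds while the public URL returns 502, the fault is in the Caddyfile.
docker compose exec caddy wget -qO- -T 5 http://glance:8080/ | head -c 200; echo
docker compose logs --tail=50 glance
```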
Widgets fail intermittently
Many widgets rely on remote endpoints. Add timeouts, validate upstream availability, and avoid overloading dashboard with too many high-frequency checks.
Unexpected config drift
If edits are made directly on the server, your repository can diverge. Reconcile by enforcing Git-backed changes and periodic checksum audits of config files.
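One lightweight audit sketch: record a checksum manifest of the tracked files after each reviewed change, then verify it periodically (the file list matches this guide's layout):

```shell
# Write a checksum manifest for the tracked configs, and audit it later.
write_manifest() { (cd "$1" && sha256sum docker-compose.yml caddy/Caddyfile config/glance.yml > .manifest.sha256); }
# Non-zero exit means an on-disk file drifted from the last reviewed state.
audit_manifest() { (cd "$1" && sha256sum -c --quiet .manifest.sha256); }
```

Run `write_manifest /opt/glance` as part of each deploy, and `audit_manifest /opt/glance || echo "config drift detected"` from cron or your monitoring agent.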
Backup files growing too large
Compress only what is required and enforce retention pruning. Move long-term archives to object storage with lifecycle policies.
FAQ
Can I run Glance behind an existing Traefik or NGINX stack instead of Caddy?
Yes. Glance runs behind any reverse proxy, including Traefik and NGINX; this guide standardizes on Caddy because automatic TLS and concise configuration keep the runbook short.
Is SQLite acceptable for production?
For Glance, SQLite is often sufficient because workload is read-heavy and operationally lightweight. If your usage pattern changes, reassess storage strategy and backup frequency.
How often should we update images?
A practical cadence is weekly checks with a monthly controlled update window, plus emergency patching for critical CVEs. Always test in staging first.
What is the safest way to roll back?
Keep prior image tags and the latest known-good backup. If an update fails, restore compose/config from backup and redeploy previous images immediately.
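A rollback sketch for the image side (v0.6.2 is a placeholder; substitute the tag you actually ran before the upgrade):

```shell
# Pin the previous known-good tag and redeploy only the affected service.
cd /opt/glance
sed -i 's|glanceapp/glance:.*|glanceapp/glance:v0.6.2|' docker-compose.yml
docker compose up -d glance
docker compose ps glance
```

Pinning explicit tags instead of `latest` in normal operation makes this rollback deterministic, because "the previous version" is then recorded in Git history rather than remembered.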
Do we need container orchestration for this service?
Not necessarily. For a single dashboard service, Docker Compose keeps complexity low and reliability high. Introduce Kubernetes only when scaling or governance requires it.
How do we include this in on-call readiness?
Add the dashboard URL to incident runbooks, test it during game days, and assign clear ownership for widget freshness and link validity.
What logs should we keep for audits?
Retain Caddy access/error logs and container lifecycle logs with sensible retention windows. Include change logs for configuration updates in your ticketing system.
Related guides
- Production Guide: Deploy Umami with Docker Compose + Caddy + PostgreSQL on Ubuntu
- Production Guide: Deploy Harbor with Kubernetes Helm and ingress-nginx on Ubuntu
- Production Guide: Deploy Vaultwarden with Docker Compose + NGINX + PostgreSQL on Ubuntu
Talk to us
Need help deploying and hardening production platforms, improving reliability, or building practical runbooks for your operations team? We can help with architecture, migration, security, and ongoing optimization.