Many teams start with a quick self-hosted Git setup and then hit predictable pain: cert renewals get missed, backups are inconsistent, and upgrades are risky. This guide walks through a production-oriented deployment of Gitea with Docker Compose and Caddy so operations stay reliable as your repos and team grow.
This architecture gives you automatic HTTPS, persistent data separation, explicit health checks, and clear runbook-friendly workflows. It is a strong fit for startups and internal platform teams that need stability without adopting full Kubernetes complexity for a single core service.
We cover implementation and operations in depth: architecture, prerequisites, deployment steps, secrets handling, verification, troubleshooting, backups, and a practical FAQ. The objective is not just to start containers, but to run a maintainable service over time.
Architecture/flow overview
Caddy is the public edge. It listens on ports 80 and 443, terminates TLS (redirecting HTTP to HTTPS), renews certificates automatically, and forwards traffic to Gitea on the private Docker network. Gitea talks to PostgreSQL internally; the database is never internet-exposed. This lowers the attack surface and keeps security controls centralized.
State is split by role: application and repository data under /srv/gitea/data, PostgreSQL files under /srv/gitea/db, and Caddy runtime/certs in dedicated volumes. This separation helps backup design, capacity planning, and incident triage.
Compose gives a deterministic deployment boundary. Explicit dependency checks reduce startup race conditions, and one declarative file keeps operations understandable for both app and platform engineers.
In production environments, the difference between a working deployment and a maintainable deployment is operational discipline. Document every change, keep release notes tied to image updates, and run post-change verification the same way every time. Treat the stack as a product with an owner, not a one-time setup task. This mindset reduces configuration drift and makes handoffs between engineers safer.
Prerequisites
- Linux host with admin access
- DNS record for your Git domain
- Ports 80 and 443 reachable
- Minimum 2 vCPU, 4 GB RAM, SSD storage
- Defined backup location and restore owner
Validate DNS and firewall settings before rollout. If another proxy already owns 80/443, decide on a single edge authority first. Ambiguous TLS ownership is a frequent source of outages.
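As a quick pre-flight sketch (assuming `dig` and `ss` are available, and using a hypothetical domain), the DNS and port checks can look like this:

```shell
# Hypothetical domain -- substitute your real record.
DOMAIN=git.example.com

# DNS: the record should resolve to this host's public IP.
dig +short "$DOMAIN"

# Ports: anything already listening on 80/443 means another edge
# authority exists and must be reconciled before Caddy starts.
if ss -tln | grep -qE ':(80|443) '; then
  echo "ports 80/443 already in use"
else
  echo "ports 80/443 are free"
fi
```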
# docker-ce ships from Docker's own apt repository; add that repo first
# (or install your distribution's docker.io package instead)
apt update && apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin curl
Step-by-step deployment
1) Prepare directories and secrets
Create stable paths and generate strong secrets. Keep sensitive values outside version control and protect file permissions.
mkdir -p /srv/gitea/{data,db,caddy}
cd /srv/gitea
openssl rand -hex 32    # e.g. for GITEA_SECRET_KEY
openssl rand -base64 48 # e.g. for POSTGRES_PASSWORD and GITEA_INTERNAL_TOKEN (run once per value)
cat > /srv/gitea/.env <<'EOF'
GITEA_DOMAIN=YOUR_DOMAIN
POSTGRES_DB=gitea
POSTGRES_USER=gitea
POSTGRES_PASSWORD=CHANGE_ME
GITEA_SECRET_KEY=CHANGE_ME
GITEA_INTERNAL_TOKEN=CHANGE_ME
EOF
chmod 600 /srv/gitea/.env
2) Build the Compose stack
Define PostgreSQL, Gitea, and Caddy with explicit health checks and environment settings.
services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    env_file: .env
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - ./db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 8

  gitea:
    image: gitea/gitea:1.22
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    env_file: .env
    environment:
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=db:5432
      - GITEA__database__NAME=${POSTGRES_DB}
      - GITEA__database__USER=${POSTGRES_USER}
      - GITEA__database__PASSWD=${POSTGRES_PASSWORD}
      - GITEA__server__DOMAIN=${GITEA_DOMAIN}
      - GITEA__server__ROOT_URL=https://${GITEA_DOMAIN}/
      - GITEA__security__SECRET_KEY=${GITEA_SECRET_KEY}
      - GITEA__security__INTERNAL_TOKEN=${GITEA_INTERNAL_TOKEN}
    volumes:
      - ./data:/data
    expose:
      - "3000"

  caddy:
    image: caddy:2.8-alpine
    restart: unless-stopped
    depends_on:
      - gitea
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
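Before starting anything, it can help to let Compose validate the file; `docker compose config --quiet` applies `.env` interpolation, so typos in variable names surface here rather than at runtime. A minimal sketch:

```shell
cd /srv/gitea
# --quiet validates and interpolates without printing the rendered file;
# a non-zero exit indicates a syntax or variable error.
if docker compose config --quiet; then
  echo "compose file OK"
else
  echo "compose file has errors" >&2
fi
```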
3) Configure Caddy edge
Use reverse proxy and security headers to enforce HTTPS posture and consistent client behavior.
YOUR_DOMAIN {
    encode zstd gzip
    reverse_proxy gitea:3000
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
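Once the stack is live, the headers can be spot-checked from any client; this sketch (hypothetical domain) counts the configured policies in the edge response:

```shell
DOMAIN=git.example.com   # hypothetical -- use your real domain

# -I fetches headers only; the count should match the headers configured.
curl -sI "https://$DOMAIN/" \
  | grep -icE 'strict-transport-security|x-frame-options|x-content-type-options'
```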
4) Launch and initialize
Start services, wait for readiness, and complete Gitea bootstrap with your canonical URL.
cd /srv/gitea
docker compose pull
docker compose up -d
docker compose ps
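Readiness can be polled instead of eyeballed. This sketch waits on Gitea's built-in health endpoint (`/api/healthz`, available since 1.16); note the first browser visit may still show the install wizard until bootstrap completes:

```shell
DOMAIN=git.example.com   # hypothetical -- use your real domain

# Poll until Caddy has a certificate and Gitea answers, up to ~2.5 minutes.
for i in $(seq 1 30); do
  if curl -fsS "https://$DOMAIN/api/healthz" >/dev/null 2>&1; then
    echo "stack is ready"
    break
  fi
  echo "waiting... ($i/30)"
  sleep 5
done
```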
Configuration and secrets handling
Rotate high-impact secrets on a defined cadence and after personnel changes. Enforce least privilege for org/team permissions and require 2FA for privileged accounts. Avoid shared super-admin credentials.
For observability, forward Caddy and Gitea logs to centralized storage. Alert on 5xx spikes, repeated failed logins, certificate issues, and backup failures. These are strong early warning indicators.
Plan capacity around repository growth, LFS usage, and CI artifact retention. Establish thresholds that trigger storage expansion before performance degrades.
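As a starting point for those thresholds, the raw numbers are cheap to collect; a sketch of host-level watchpoints using this guide's directory layout:

```shell
# Filesystem headroom for the whole stack directory.
df -h /srv/gitea

# Per-role growth: repositories and LFS under data, database files under db.
du -sh /srv/gitea/data /srv/gitea/db
```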
Verification
Run this verification set after deployment and after each update:
docker compose logs --since=10m gitea | tail -n 80
docker compose logs --since=10m caddy | tail -n 80
- Create and clone a test repository over HTTPS.
- Push with non-admin account to validate permissions.
- Trigger webhook test payload and verify receipt.
- Confirm backups are written to expected target.
- Check certificate validity horizon.
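The certificate horizon check can be scripted with `openssl` (hypothetical domain shown); `-enddate` prints the notAfter timestamp, which is easy to alert on:

```shell
DOMAIN=git.example.com   # hypothetical -- use your real domain

# Pull the served certificate and print its expiry timestamp.
echo | openssl s_client -connect "$DOMAIN:443" -servername "$DOMAIN" 2>/dev/null \
  | openssl x509 -noout -enddate
```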
Backup and continuity strategy
Backups are valuable only if restore steps are tested. Capture DB and repository data consistently, keep off-host copies, and run full restore drills in isolated environments.
# backup -- run from the stack directory so docker compose resolves services,
# and load .env so the POSTGRES_* variables are set in this shell
cd /srv/gitea
set -a; . ./.env; set +a
STAMP=$(date +%F-%H%M)
DEST=/srv/backups/gitea/$STAMP
mkdir -p "$DEST"
docker compose exec -T db pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" > "$DEST/gitea.sql"
cp -a /srv/gitea/data "$DEST/data"
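A restore drill can be sketched roughly as below; the timestamp is illustrative, and this should only ever run against an isolated copy of the stack, never the production host:

```shell
# Restore drill sketch -- isolated environment only.
cd /srv/gitea
set -a; . ./.env; set +a            # load POSTGRES_* into this shell

STAMP=2025-01-15-0300               # hypothetical backup timestamp
SRC=/srv/backups/gitea/$STAMP

docker compose down
rm -rf /srv/gitea/data
cp -a "$SRC/data" /srv/gitea/data

docker compose up -d db
# Recreate the database so the import starts clean.
docker compose exec -T db dropdb -U "$POSTGRES_USER" --if-exists "$POSTGRES_DB"
docker compose exec -T db createdb -U "$POSTGRES_USER" "$POSTGRES_DB"
docker compose exec -T db psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" < "$SRC/gitea.sql"

docker compose up -d
```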
Document RPO/RTO with stakeholders and align backup frequency accordingly. Operational confidence comes from repeatable restore drills, not just scheduled jobs.
Common issues/fixes
Certificate issuance errors
Usually DNS propagation or blocked port 80. Validate domain resolution and inspect Caddy logs.
Database connection failures
Typically credential mismatch or service startup timing. Recheck env values and DB health status.
Slow git operations
Investigate disk latency and IOPS first; repository performance is often storage-bound.
Problematic upgrade
Roll back to the prior pinned tag, restore from a tested backup if needed, then retry in staging.
FAQ
1) Is Docker Compose production-safe?
Yes for many teams, when paired with monitoring, backups, and controlled updates.
2) Why Caddy instead of manual TLS?
It simplifies cert lifecycle and centralizes HTTPS policy.
3) PostgreSQL vs SQLite?
PostgreSQL is the better default for reliability and concurrency.
4) How often should secrets be rotated?
At least quarterly, and immediately after potential exposure.
5) Which metrics matter most?
5xx rate, auth failures, cert expiry, backup success, storage growth, and service health.
6) Can SSO be added later?
Yes. Plan identity mapping and team permissions before rollout.
7) Safe update process?
Pin versions, backup first, validate in staging, then run post-change checks.
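In script form, assuming the stack file is docker-compose.yml and using a hypothetical target tag, the pin-and-roll step might look like:

```shell
cd /srv/gitea
NEW_TAG="gitea/gitea:1.22.6"   # hypothetical -- pick the tag from release notes

# Back up first (see the backup section), then pin the new tag in place.
sed -i "s|image: gitea/gitea:.*|image: ${NEW_TAG}|" docker-compose.yml

docker compose pull gitea
docker compose up -d gitea

# Post-change checks, same as after the initial deploy.
docker compose ps
docker compose logs --since=5m gitea | tail -n 40
```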
Related guides
- https://sysbrix.com/blog/guides-3/deploy-netbox-on-kubernetes-with-helm-external-postgresql-and-production-guardrails-203
- https://sysbrix.com/blog/guides-3/production-guide-deploy-gitea-with-docker-swarm-traefik-postgresql-on-ubuntu-208
- https://sysbrix.com/blog/guides-3/production-guide-deploy-metabase-with-docker-compose-nginx-postgresql-on-ubuntu-202
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.