Object storage usually becomes a production dependency long before teams formally plan for it. Build logs, CI artifacts, backups, machine-learning datasets, and application uploads all start as "just files" and quickly turn into business-critical data paths. Many teams begin with managed storage, then hit constraints around data residency, predictable cost, backup control, or custom retention. This is where running MinIO on your own infrastructure can be practical: you keep an S3-compatible API, but gain explicit control over placement, encryption boundaries, and operational playbooks.
This guide walks through a production-oriented MinIO deployment on Ubuntu using Docker Compose and Caddy. The goal is not a toy demo. You will set up a hardened baseline, run MinIO behind automatic HTTPS, isolate credentials, verify health with repeatable checks, and prepare backup and recovery procedures. You will also get troubleshooting paths for the failures operations teams actually encounter: wrong endpoint semantics, reverse-proxy header issues, disk pressure, and credential mismatches between clients and server policy.
By the end, you will have a repeatable stack suitable for internal platforms, developer artifact pipelines, and media-heavy applications that require reliable object storage with transparent runbooks.
Architecture and flow overview
The deployment uses three layers:
- MinIO container for S3-compatible object APIs and console access.
- Caddy reverse proxy for TLS termination, automatic certificate management, and controlled upstream routing.
- Persistent host volumes for object data and MinIO configuration, separated from container lifecycle.
Client applications and operators talk to a public HTTPS endpoint. Caddy forwards API traffic to MinIO while preserving the host and protocol headers required for URL generation and signed requests. Data is persisted on mounted volumes, so routine container upgrades do not delete objects. This model stays simple enough for small teams while preserving operational hygiene expected in production.
Prerequisites
- Ubuntu 22.04/24.04 host with at least 2 vCPU, 4 GB RAM, and fast disk sized for expected object growth.
- A DNS record pointing storage.yourdomain.com to the server's public IP.
- Docker Engine and Docker Compose plugin installed.
- Ports 80 and 443 reachable from the internet for TLS issuance and HTTPS traffic.
- A secure way to store admin secrets (password manager, vault, or encrypted ops repo).
Operational recommendation: if this storage supports production applications, attach block storage with snapshot capability and define an explicit retention policy before go-live.
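Before provisioning anything, it is worth confirming the DNS record and port reachability, since Caddy cannot issue a certificate otherwise. A quick preflight, assuming `dig` and `nc` are available on your workstation:

```shell
# confirm the DNS record resolves to this server's public IP
dig +short storage.yourdomain.com

# confirm 80/443 are reachable from outside (run from a machine off the host)
nc -zv storage.yourdomain.com 80
nc -zv storage.yourdomain.com 443
```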
Step-by-step deployment
Step 1: Prepare host baseline and firewall
sudo apt update && sudo apt -y upgrade
sudo apt install -y ca-certificates curl gnupg ufw jq
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
# sanity checks
docker --version
docker compose version
df -h
Step 2: Create directories with least-privilege ownership
sudo mkdir -p /opt/minio/{data,config,caddy}
sudo chown -R $USER:$USER /opt/minio
chmod 700 /opt/minio/config
Step 3: Create environment file for secrets
cat > /opt/minio/config/.env <<'EOF'
# avoid the well-known default "minioadmin" in production
MINIO_ROOT_USER=REPLACE_WITH_ADMIN_USERNAME
MINIO_ROOT_PASSWORD=REPLACE_WITH_LONG_RANDOM_SECRET
MINIO_DOMAIN=storage.yourdomain.com
MINIO_SERVER_URL=https://storage.yourdomain.com
EOF
chmod 600 /opt/minio/config/.env
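For the placeholder secret, generate a genuinely random value rather than inventing one. One common approach, assuming `openssl` is installed (it ships with Ubuntu):

```shell
# generate a 32-byte random secret suitable for MINIO_ROOT_PASSWORD
openssl rand -base64 32
```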
Step 4: Write docker-compose.yml for MinIO + Caddy
cat > /opt/minio/docker-compose.yml <<'EOF'
services:
  minio:
    # for production, pin a specific release tag instead of latest
    image: minio/minio:latest
    container_name: minio
    command: server /data --console-address ":9001"
    env_file:
      - /opt/minio/config/.env
    volumes:
      - /opt/minio/data:/data
    restart: unless-stopped
    expose:
      - "9000"
      - "9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 5s
      retries: 5

  caddy:
    image: caddy:2
    container_name: minio-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/minio/Caddyfile:/etc/caddy/Caddyfile:ro
      - /opt/minio/caddy:/data
    depends_on:
      - minio
EOF
Step 5: Configure Caddy reverse proxy and TLS
cat > /opt/minio/Caddyfile <<'EOF'
storage.yourdomain.com {
    encode zstd gzip
    # S3 API traffic; the console (port 9001) can be published on a
    # separate hostname with its own reverse_proxy block if you need it
    reverse_proxy minio:9000 {
        header_up Host {host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-For {remote_host}
    }
}
EOF
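Before starting the stack, you can dry-run the Caddyfile with Caddy's built-in validator so syntax errors surface immediately instead of at container start:

```shell
# validate the Caddyfile without starting the proxy
docker run --rm -v /opt/minio/Caddyfile:/etc/caddy/Caddyfile:ro \
  caddy:2 caddy validate --config /etc/caddy/Caddyfile
```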
Step 6: Start stack and inspect logs
cd /opt/minio
docker compose pull
docker compose up -d
docker compose ps
docker compose logs --tail=100 minio
docker compose logs --tail=100 caddy
Step 7: Create buckets and scoped service accounts
# install MinIO client (mc)
curl -fsSL https://dl.min.io/client/mc/release/linux-amd64/mc -o mc
chmod +x mc && sudo mv mc /usr/local/bin/mc
# load admin credentials into the shell, then set the alias
set -a; source /opt/minio/config/.env; set +a
mc alias set prod https://storage.yourdomain.com "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"
# create bucket and enforce private-by-default
mc mb prod/app-artifacts
mc anonymous set none prod/app-artifacts
# create policy and service account
cat > /tmp/app-artifacts-readwrite.json <<'EOF'
{
"Version": "2012-10-17",
"Statement": [
{"Effect":"Allow","Action":["s3:ListBucket"],"Resource":["arn:aws:s3:::app-artifacts"]},
{"Effect":"Allow","Action":["s3:GetObject","s3:PutObject","s3:DeleteObject"],"Resource":["arn:aws:s3:::app-artifacts/*"]}
]
}
EOF
mc admin policy create prod app-artifacts-rw /tmp/app-artifacts-readwrite.json
mc admin user add prod svc-app REPLACE_WITH_LONG_RANDOM_SECRET
mc admin policy attach prod app-artifacts-rw --user svc-app
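It is worth verifying the new account's scope immediately, from the account's own perspective rather than the admin's. A quick check, assuming a second bucket name for the negative test:

```shell
# configure an alias using the service-account credentials
mc alias set svc https://storage.yourdomain.com svc-app REPLACE_WITH_LONG_RANDOM_SECRET

mc ls svc/app-artifacts       # should succeed
mc ls svc/some-other-bucket   # should be denied (hypothetical bucket name)
```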
Configuration and secrets handling best practices
Production incidents around object storage are often caused by secret sprawl rather than software defects. Keep root credentials restricted to platform operators and issue dedicated service accounts per workload. Rotate service-account secrets on a schedule, and immediately after team changes or incident response.
- Store .env files with mode 600; never commit them to Git.
- Prefer one bucket per application domain to isolate policy boundaries.
- Use explicit IAM-style policies instead of global admin credentials in application configs.
- Enable object lock and versioning where compliance or rollback requirements exist.
- Document endpoint conventions (https://storage.yourdomain.com) so developers avoid mixed path/virtual-host behavior.
If your organization has a vault product, move service credentials out of static files entirely and inject them at runtime via your orchestrator or CI/CD secret manager.
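As one possible shape for that pattern, here is a sketch that pulls a service secret from HashiCorp Vault at deploy time instead of keeping it in a static file. The `vault` CLI, the secret path, and the `APP_S3_SECRET` variable name are all assumptions for illustration:

```shell
# fetch the service-account secret from Vault and expose it to Compose
# (a compose file could then reference it as ${APP_S3_SECRET} in an
#  environment: entry instead of reading a static .env file)
export APP_S3_SECRET="$(vault kv get -field=secret_key secret/minio/svc-app)"
docker compose up -d
```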
Verification checklist
Run these checks before declaring production readiness:
- HTTPS certificate is valid and auto-renewed by Caddy.
- MinIO health endpoints return success and container restarts are clean.
- Upload/download/delete operations work using non-admin service accounts.
- Data persists after docker compose down and docker compose up -d.
- Backups are restorable to a clean host, not just generated.
curl -I https://storage.yourdomain.com/minio/health/live
mc ls prod
mc cp /etc/hosts prod/app-artifacts/verify/hosts.txt
mc stat prod/app-artifacts/verify/hosts.txt
mc rm prod/app-artifacts/verify/hosts.txt
Common issues and fixes
1) Signed request errors after proxying
If clients report signature mismatch, verify Caddy forwards Host and X-Forwarded-Proto. Missing headers can break canonical request generation.
2) Console/API confusion
Some teams expect separate ports for API and console externally. With reverse proxy, keep one canonical HTTPS hostname and test both operational paths early.
3) Disk fills unexpectedly
Object growth is nonlinear. Set quota alerts, lifecycle rules for short-lived artifacts, and periodic inventory reports per bucket owner.
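Lifecycle rules can be configured directly with mc. A sketch that expires short-lived artifacts under an assumed `tmp/` prefix after 30 days (the `mc ilm` syntax has changed between mc releases, so check `mc ilm rule add --help` on your version):

```shell
# expire build artifacts under tmp/ after 30 days
mc ilm rule add prod/app-artifacts --prefix "tmp/" --expire-days 30

# review the rules currently attached to the bucket
mc ilm rule ls prod/app-artifacts
```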
4) Permission denied from application
Usually caused by bucket policy mismatch or app using wrong access key. Re-test with mc using the same credentials to isolate policy from app bugs.
5) Slow uploads during peak traffic
Validate host I/O saturation and network throughput before tuning MinIO. Storage bottlenecks often live below the container layer.
6) Backup files exist but restore fails
A backup without restore testing is not an operational control. Rehearse restore in a staging host monthly and track recovery time in your runbook.
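One way to make the drill concrete is to mirror the bucket off-host and then replay it into a staging deployment. The `/backup` path and `staging` alias below are assumptions; `mc diff` gives a quick consistency check:

```shell
# off-host copy of the bucket contents
mc mirror --overwrite prod/app-artifacts /backup/minio/app-artifacts

# restore rehearsal into a staging MinIO (alias configured beforehand)
mc mirror --overwrite /backup/minio/app-artifacts staging/app-artifacts

# empty output means the trees match
mc diff prod/app-artifacts staging/app-artifacts
```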
FAQ
Can I run MinIO on one node in production?
Yes for low-to-moderate workloads with clear risk acceptance. For stronger durability and availability, plan distributed MinIO with multiple drives/nodes and fault-domain awareness.
Should applications use root credentials?
No. Use dedicated service accounts scoped to the exact bucket actions required. Root credentials should remain restricted to platform administration only.
How do I rotate credentials without downtime?
Create a second service account, update application secrets, deploy, verify traffic, then revoke the old account. This avoids hard cutovers that break active jobs.
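The rotation described above maps to a short mc sequence; the `svc-app-v2` name is an assumption for illustration:

```shell
# 1. create the replacement account and grant the same policy
mc admin user add prod svc-app-v2 NEW_LONG_RANDOM_SECRET
mc admin policy attach prod app-artifacts-rw --user svc-app-v2

# 2. roll the new credentials out to applications and verify traffic

# 3. only then revoke the old account
mc admin user remove prod svc-app
```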
Do I need versioning and object lock?
If data integrity and accidental-delete recovery matter, yes. Versioning and retention controls provide operational rollback options and improve incident response outcomes.
What is the minimum backup strategy?
At least daily snapshots of MinIO data volumes, off-host copy to separate storage, and monthly restore drills. Keep retention aligned with legal and business requirements.
How should I monitor MinIO in production?
Track container health, disk usage trend, failed auth events, API latency, and certificate renewal status. Alert on growth anomalies and repeated access errors.
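If you run Prometheus, mc can emit a ready-made scrape configuration for this deployment, including the bearer token MinIO expects:

```shell
# print a Prometheus scrape_configs snippet for the prod alias
mc admin prometheus generate prod
```

Paste the output into your Prometheus configuration, then build alerts on the disk-usage and request-error series it exposes.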
Can I place MinIO behind another proxy instead of Caddy?
Yes, but preserve forwarding headers and TLS behavior. The core requirement is consistent external endpoint semantics for S3 clients and signed request validation.
Related guides
- Production Guide: Deploy Langfuse with Docker Compose + Caddy + PostgreSQL on Ubuntu
- Production Guide: Deploy RabbitMQ with Docker Compose + Caddy on Ubuntu
- How to Deploy Grafana in Production with Docker Compose + systemd
Talk to us
Need help deploying and hardening production platforms, improving reliability, or building practical runbooks for your operations team? We can help with architecture, migration, security, and ongoing optimization.