
Production MinIO on Docker Compose + systemd: A Practical S3 Deployment Guide

Deploy and harden self-hosted MinIO with repeatable operations, verification, and recovery workflows.

Object storage underpins CI artifacts, model checkpoints, media pipelines, and data exports. Many teams outgrow unmanaged buckets quickly and need predictable, auditable, self-hosted S3. This guide walks through deploying MinIO with Docker Compose plus systemd so the stack boots reliably, recovers from host restarts, and remains maintainable for production operations.

We focus on practical operations rather than a demo setup: controlled directory layout, strict secret handling, health checks, restart behavior, verification, and restore readiness. The result is a repeatable deployment pattern that is easy for platform teams to support.

Architecture and flow overview

The host runs Docker Engine and the Compose plugin. MinIO serves the S3 API on port 9000 and the web console on port 9001, with persistent data on /srv/minio/data. A systemd unit wraps the Compose lifecycle so service startup and shutdown are predictable. Backups are written to /srv/minio/backups and replicated off-host.

  • Runtime: Docker + Compose plugin
  • Service: MinIO single-node with health checks
  • Persistence: host-mounted volume
  • Lifecycle: systemd unit for auto-start/restart behavior
  • Operations: verification + backup + restore drills

Prerequisites

  • Ubuntu/Debian Linux host with sudo access
  • Docker Engine 24+ and Docker Compose plugin
  • Dedicated storage path with enough capacity
  • TLS termination strategy for production
  • Firewall controls for trusted source ranges

For production, keep MinIO data off the root disk where possible and track free capacity with alerting thresholds.
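A small capacity check, run from cron or a systemd timer, is a reasonable starting point for those alerting thresholds. This is a sketch; the path and threshold defaults are illustrative, not recommendations:

```shell
# Warn when a mount crosses a usage threshold.
check_capacity() {
  local mount="${1:-/srv/minio}" threshold="${2:-80}"
  local used
  # df --output=pcent prints "Use%" plus the value; strip to digits only.
  used="$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')"
  if [ "${used:-0}" -ge "$threshold" ]; then
    echo "WARN: $mount at ${used}% used (threshold ${threshold}%)"
  else
    echo "OK: $mount at ${used}% used"
  fi
}
# Example (wire WARN lines into your alerting channel):
# check_capacity /srv/minio 80
```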

Step-by-step deployment

1) Install Docker and Compose

Install runtime dependencies and verify versions. Note that stock Ubuntu repositories package the Compose plugin as docker-compose-v2; the docker-compose-plugin package name applies only if you install from Docker's own apt repository.

sudo apt update
sudo apt install -y docker.io docker-compose-v2 curl jq
sudo systemctl enable --now docker
docker --version
docker compose version


2) Prepare directories and access controls

Use separate paths for data, config, and backups so permissions and operational workflows remain clear.

sudo mkdir -p /srv/minio/{data,config,backups}
sudo chown -R $USER:$USER /srv/minio
chmod 700 /srv/minio/config


3) Create environment file

Keep credentials in an env file with restrictive permissions. Never commit this file to version control.

cat >/srv/minio/config/minio.env <<'EOF'
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=replace-with-very-long-random-secret
MINIO_SERVER_URL=https://s3.example.com
MINIO_BROWSER_REDIRECT_URL=https://s3.example.com/console
EOF
chmod 600 /srv/minio/config/minio.env


4) Create docker-compose.yml

Save the following as /srv/minio/docker-compose.yml. It defines the container image, ports, volume mount, and health check; the restart policy keeps the service resilient across daemon and host restarts. In production, pin a specific MinIO release tag rather than latest so upgrades happen deliberately.

services:
  minio:
    image: quay.io/minio/minio:latest
    container_name: minio
    command: server /data --console-address ":9001"
    env_file:
      - /srv/minio/config/minio.env
    volumes:
      - /srv/minio/data:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]  # MinIO's documented healthcheck; curl is not guaranteed in the image
      interval: 30s
      timeout: 5s
      retries: 5
    restart: unless-stopped


5) Bring up the stack and smoke test

Start MinIO and validate liveness endpoint before integrating applications.

cd /srv/minio
docker compose up -d
docker compose ps
curl -i http://127.0.0.1:9000/minio/health/live


6) Add systemd wrapper for lifecycle management

systemd gives native status checks, boot ordering, and simpler operator workflows. Save the following unit as /tmp/minio-compose.service, then install and enable it.

[Unit]
Description=MinIO via Docker Compose
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
WorkingDirectory=/srv/minio
RemainAfterExit=true
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target


sudo cp /tmp/minio-compose.service /etc/systemd/system/minio-compose.service
sudo systemctl daemon-reload
sudo systemctl enable --now minio-compose.service
sudo systemctl status minio-compose.service --no-pager


Configuration and secrets handling

Use long random credentials and rotate them regularly. Restrict env file access to root, and avoid passing secrets directly in command-line flags that might leak into process lists or shell history. For multi-team environments, prefer a secret manager and short-lived application credentials scoped to least privilege.

Publish MinIO behind TLS, limit console exposure, and enforce network controls. Separate management and data traffic where possible. Add centralized logs and baseline alerts for availability, latency, and storage growth.

openssl rand -base64 48
sudo chown root:root /srv/minio/config/minio.env
sudo chmod 600 /srv/minio/config/minio.env

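The least-privilege pattern above can be sketched with the mc client. This assumes mc is installed and an alias (local here, which is an assumption) already points at this deployment; the user name is illustrative, and readwrite is one of MinIO's built-in policies:

```shell
# Create a scoped application user instead of sharing root credentials.
# Assumes an existing alias, e.g.:
#   mc alias set local http://127.0.0.1:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"
create_app_user() {
  local alias="$1" user="$2"
  local secret
  secret="$(openssl rand -base64 24)"
  mc admin user add "$alias" "$user" "$secret"
  mc admin policy attach "$alias" readwrite --user "$user"
  echo "created $user; store this secret in your secret manager: $secret"
}
# Example invocation:
# create_app_user local ci-uploader
```

For tighter scoping, replace readwrite with a custom policy restricted to the buckets each application actually needs.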

Verification checklist

  • Container state is healthy
  • S3 endpoint responds with HTTP 200 liveness
  • Console is reachable over HTTPS
  • Bucket CRUD works from an S3 client
  • Host reboot preserves service and data
  • Backup artifacts are generated and checksummed

Run these quick checks after any change:

systemctl is-active minio-compose.service
cd /srv/minio && docker compose ps
curl -sf http://127.0.0.1:9000/minio/health/live && echo OK

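The bucket CRUD item can be exercised with a small mc round-trip. This sketch assumes mc is installed and a configured alias (local is an assumption):

```shell
# Round-trip test: create bucket, upload, read back, clean up.
s3_smoke_test() {
  local alias="${1:-local}"
  local bucket="smoke-$(date +%s)"
  echo "hello" > /tmp/smoke.txt
  mc mb "$alias/$bucket" &&
  mc cp /tmp/smoke.txt "$alias/$bucket/" &&
  mc cat "$alias/$bucket/smoke.txt" &&
  mc rm "$alias/$bucket/smoke.txt" &&
  mc rb "$alias/$bucket" &&
  echo "CRUD smoke test passed"
}
# Example invocation:
# s3_smoke_test local
```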

Common issues and fixes

Port already in use

Change mapped host ports and update reverse proxy rules consistently.

Permission errors on /data

Validate ownership, mount options, and host security policies.

Healthy status never turns green

Verify healthcheck command and endpoint path.

Uploads timing out

Tune reverse proxy and client timeout settings for large objects.

Secret exposure concerns

Rotate credentials immediately and move to managed secrets.

Backups not restorable

Run restore drills in staging and document every step.
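One way to make "restorable" measurable is to checksum the backup set and the restored copy and require an exact match. Directory paths in this sketch are illustrative:

```shell
# Compare a backup tree against a restored tree by content hash.
verify_restore() {
  local backup_dir="$1" restore_dir="$2"
  [ -d "$backup_dir" ] && [ -d "$restore_dir" ] || { echo "missing directory" >&2; return 1; }
  (cd "$backup_dir"  && find . -type f -exec sha256sum {} + | sort) > /tmp/backup.sums
  (cd "$restore_dir" && find . -type f -exec sha256sum {} + | sort) > /tmp/restore.sums
  if diff -q /tmp/backup.sums /tmp/restore.sums >/dev/null; then
    echo "restore verified ($(wc -l < /tmp/backup.sums) files)"
  else
    echo "MISMATCH between $backup_dir and $restore_dir" >&2
    return 1
  fi
}
# Example: verify_restore /srv/minio/backups/latest /srv/restore-test
```

Record the verification output and timing in your drill log so restore confidence is backed by data, not job status alone.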

FAQ

Can I run a single-node MinIO in production?

Yes for moderate workloads, but document the failure domain and recovery expectations.

Should MinIO be internet-facing?

Prefer controlled ingress with TLS and access policies rather than direct broad exposure.

How often should credentials be rotated?

At least quarterly, and immediately after any possible leak.
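A hedged sketch of rotating the root secret in place. The file path matches this guide's layout; adjust for yours, and remember that dependent clients must be updated before they reconnect:

```shell
# Rotate the MinIO root password: new secret, update env file, restart.
rotate_root_secret() {
  local env_file="${1:-/srv/minio/config/minio.env}"
  local new_secret
  new_secret="$(openssl rand -base64 48)"
  sudo sed -i "s|^MINIO_ROOT_PASSWORD=.*|MINIO_ROOT_PASSWORD=${new_secret}|" "$env_file"
  sudo systemctl restart minio-compose.service
  echo "rotated root password; update your secret manager and dependent clients"
}
# Example invocation:
# rotate_root_secret
```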

What backup strategy works best?

Combine frequent incrementals, periodic full snapshots, and tested restore drills.

How do I monitor health effectively?

Track liveness, request latency, error rates, and storage growth with alerts.

Can CI/CD use this endpoint for artifacts?

Yes, S3-compatible clients integrate well; isolate buckets and keys by environment.

When should I move to distributed MinIO?

When availability, throughput, or capacity requirements exceed single-node limits.

Operational maturity notes

As usage scales, set explicit RPO and RTO targets and map each backup/restore action to those objectives. Build monthly capacity reviews into routine operations. Define ownership for incident declaration, restore execution, and application-level validation after recovery.

Before upgrades, run a staging rehearsal with representative object counts and expected load profiles. Capture timings for startup, health transitions, and rollback paths. Maintain runbooks with screenshots and command snippets so on-call engineers can recover quickly under pressure.

Security posture should evolve continuously: review key scopes, rotate credentials, audit access logs, and validate firewall policy drift. Align storage controls with compliance requirements and retention standards. The more standardized your operational playbooks, the less risky emergency changes become.

Client behavior matters as much as server configuration. Standardize retry and backoff defaults across services to prevent request storms during transient failures. For large uploads, tune multipart settings and monitor timeouts at ingress and client SDK layers.
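For teams standardizing on AWS CLI/SDK clients, a shared config fragment is one way to pin those defaults. The values below are illustrative starting points, not tuned recommendations:

```shell
# Append shared S3 client defaults to the AWS CLI config.
mkdir -p ~/.aws
cat >> ~/.aws/config <<'EOF'
[default]
s3 =
    multipart_threshold = 64MB
    multipart_chunksize = 64MB
    max_concurrent_requests = 10
retry_mode = standard
max_attempts = 5
EOF
```

Distribute the same fragment through your configuration management so every service talking to the endpoint behaves consistently under load.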

Finally, treat recovery tests as first-class engineering work. A backup that has never been restored is only an assumption. Schedule restore drills, measure recovery duration, and iterate until outcomes are predictable and repeatable.


Scale and resilience planning

Once your object count grows into the millions, review namespace and bucket strategy to keep lifecycle policies easy to reason about. Teams often benefit from separating artifacts, backups, analytics exports, and customer uploads into dedicated buckets with explicit retention and access boundaries.

Plan hardware and network growth before saturation. Track throughput peaks, request concurrency, and storage growth trends monthly. Capacity planning should include data ingress, egress, replication windows, and backup durations so you can avoid emergency scaling during incidents.

Introduce routine game days for object-store failure scenarios: host reboot, disk pressure, accidental key rotation, and network interruption. Each exercise should end with runbook updates and measurable timing data so response quality improves over time.

If multiple applications share the same endpoint, standardize SDK retry/backoff and multipart upload settings. Inconsistent clients can amplify transient failures and degrade user-facing systems. A small platform standards document can prevent recurring reliability issues.

Finally, run restore drills on a fixed schedule and validate recovered objects with checksums and application-level reads. Recovery confidence is built through practice, not assumptions based on successful backup job logs alone.
