MinIO Self-Host Setup: Distributed Mode, Erasure Coding, Multi-Site Replication, and Enterprise Storage Operations
The first MinIO guide covered single-node deployment, bucket management, access policies, and connecting apps via the S3 SDK — everything you need to replace AWS S3 for a single server. This guide covers what production storage at scale actually requires: distributed MinIO with erasure coding so drive failures don't lose data, active-active multi-site replication so storage survives datacenter failures, object locking and versioning for compliance requirements, lifecycle policies that automatically manage data retention and storage tiers, and the operational patterns that keep a MinIO cluster running reliably under real workloads.
Prerequisites
- A working single-node MinIO instance — see our MinIO getting started guide
- For distributed mode: at least 4 servers (or 4 VMs/containers on different hosts) with dedicated storage drives
- The mc CLI configured for your MinIO instance
- Network connectivity between all nodes on port 9000
- Linux with at least 2GB RAM per node — erasure coding is CPU-intensive
- Dedicated storage volumes (not the OS drive) for MinIO data on each node
Verify your current MinIO setup and the mc CLI before proceeding:
# Check MinIO version:
docker exec minio minio --version
# Verify mc CLI is configured:
mc alias list
# Check current cluster health:
mc admin info local
# Check drive health (this triggers a background heal scan):
mc admin heal local
# Check per-drive state and usage (JSON field names vary slightly by mc release):
mc admin info local --json | jq '.info.servers[].drives[] | {endpoint, state, used: .usedspace}'
Distributed Mode: Erasure Coding for Data Durability
Single-node MinIO has no redundancy — if the disk fails, the data is gone. Distributed MinIO with erasure coding stripes objects across multiple drives on multiple servers: at maximum parity the cluster keeps serving reads even with half the drives lost, and keeps accepting writes as long as a write quorum of drives remains online.
How Erasure Coding Works in MinIO
MinIO uses Reed-Solomon erasure coding to split each object into data shards and parity shards. With a default EC:4 configuration (4 data + 4 parity drives in an 8-drive pool), you can lose any 4 drives simultaneously and still read every object. No RAID required, no special hardware — pure software-defined redundancy.
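The arithmetic behind parity choices is worth internalizing before picking a topology. The sketch below is not MinIO code, just the data/parity trade-off described above, computed for a couple of configurations:

```python
# Toy calculator for the erasure-coding trade-off described above.
# Not MinIO internals: just the data/parity arithmetic for one erasure set.
def erasure_profile(total_drives: int, parity: int) -> dict:
    """Durability and efficiency for an EC:<parity> set of total_drives drives."""
    if not 0 < parity <= total_drives // 2:
        raise ValueError("parity must be between 1 and half the drive count")
    data = total_drives - parity
    return {
        "data_shards": data,
        "parity_shards": parity,
        "drive_failures_tolerated": parity,  # reads survive losing this many drives
        "storage_efficiency": data / total_drives,  # usable fraction of raw capacity
    }

# The guide's 8-drive EC:4 pool: tolerates 4 drive failures, 50% usable capacity
print(erasure_profile(8, 4))
# A 12-drive EC:4 set: same 4-drive tolerance, but ~67% usable capacity
print(erasure_profile(12, 4))
```

Note the second case: widening the stripe at the same parity keeps the same failure tolerance while improving usable capacity, which is why larger erasure sets are attractive once you have the nodes.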
Distributed MinIO with Docker Compose
# docker-compose.distributed.yml
# 4-node MinIO cluster with 2 drives per node (8 drives total, EC:4)
# Deploy this on a single host for testing, or split across 4 servers for production
version: '3.8'

x-minio-common: &minio-common
  image: quay.io/minio/minio:latest
  command: server --console-address ":9001" http://minio{1...4}/data{1...2}
  environment:
    MINIO_ROOT_USER: ${MINIO_ROOT_USER}
    MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    # Erasure coding parity: EC:4 = 4 parity shards per 8-drive stripe
    MINIO_STORAGE_CLASS_STANDARD: "EC:4"
    # Site name, used later for site replication:
    MINIO_SITE_NAME: primary-site
  expose:
    - "9000"
    - "9001"
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

services:
  minio1:
    <<: *minio-common
    hostname: minio1
    volumes:
      - minio1-data1:/data1
      - minio1-data2:/data2
  minio2:
    <<: *minio-common
    hostname: minio2
    volumes:
      - minio2-data1:/data1
      - minio2-data2:/data2
  minio3:
    <<: *minio-common
    hostname: minio3
    volumes:
      - minio3-data1:/data1
      - minio3-data2:/data2
  minio4:
    <<: *minio-common
    hostname: minio4
    volumes:
      - minio4-data1:/data1
      - minio4-data2:/data2
  # Nginx load balancer for distributed MinIO:
  nginx:
    image: nginx:alpine
    container_name: minio_lb
    volumes:
      - ./nginx-minio.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9000:9000" # S3 API
      - "9001:9001" # Console
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

volumes:
  minio1-data1:
  minio1-data2:
  minio2-data1:
  minio2-data2:
  minio3-data1:
  minio3-data2:
  minio4-data1:
  minio4-data2:
Nginx Load Balancer for Distributed MinIO
# nginx-minio.conf
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream minio_s3 {
        least_conn; # Route to least-loaded node
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    upstream minio_console {
        least_conn;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen 9000;
        listen [::]:9000;

        # Disable buffering for large file uploads/downloads:
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_pass http://minio_s3;
            proxy_http_version 1.1;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 300s;
            proxy_send_timeout 300s;
            proxy_read_timeout 300s;
            chunked_transfer_encoding off;
        }
    }

    server {
        listen 9001;
        listen [::]:9001;

        location / {
            proxy_pass http://minio_console;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host:$server_port;
        }
    }
}
# Start the cluster:
docker compose -f docker-compose.distributed.yml up -d
# Verify cluster formed correctly:
mc alias set distributed http://localhost:9000 ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD}
mc admin info distributed
# Look for the EC:4 storage class and all 8 drives reported online
Verifying Erasure Coding and Simulating Drive Failure
# Verify erasure coding is active:
mc admin info distributed --json | jq '.info.backend'
# Should show: {"backendType": "Erasure", "onlineDisks": 8, "offlineDisks": 0}
# Create a bucket and upload a test object:
mc mb distributed/test-bucket
echo "erasure coding test" | mc pipe distributed/test-bucket/test-object.txt
# Simulate drive failure by stopping one node:
docker compose -f docker-compose.distributed.yml stop minio4
# Verify object is still readable with one node down:
mc cat distributed/test-bucket/test-object.txt
# Should still output the content — erasure coding in action
# Check cluster health with degraded node:
mc admin info distributed --json | jq '.info.backend'
# OnlineDisks: 6, OfflineDisks: 2 — still operating, still reading
# Restore the failed node:
docker compose -f docker-compose.distributed.yml start minio4
# MinIO automatically heals degraded objects when the node rejoins:
mc admin heal distributed --recursive --verbose
# Watch healing progress:
watch -n5 'mc admin info distributed --json | jq .info.backend'
Multi-Site Replication for Geographic Redundancy
Erasure coding protects against drive failures within a site. Site replication protects against total site loss — datacenter fire, power failure, network partition. MinIO's site replication keeps two or more geographically separate MinIO deployments in sync, with active-active writes that let you read and write from any site.
Configuring Active-Active Site Replication
# Prerequisites:
# - Site A: https://minio-site-a.yourdomain.com
# - Site B: https://minio-site-b.yourdomain.com
# - All sites must use identical root credentials
# - All sites except the first must be empty (no existing buckets) before configuring replication
# - Each site needs a DISTINCT MINIO_SITE_NAME (e.g. primary-site, secondary-site)
# Step 1: Configure mc aliases for both sites:
mc alias set site-a https://minio-site-a.yourdomain.com ${SITE_A_ACCESS_KEY} ${SITE_A_SECRET_KEY}
mc alias set site-b https://minio-site-b.yourdomain.com ${SITE_B_ACCESS_KEY} ${SITE_B_SECRET_KEY}
# Step 2: Enable site replication (run from either site):
mc admin replicate add site-a site-b
# This configures active-active replication where:
# - Writes to site-a are replicated to site-b
# - Writes to site-b are replicated to site-a
# - Bucket creation/deletion, IAM policies, and object data all replicate
# Step 3: Verify replication is established:
mc admin replicate info site-a
# Should show: Replicated to site-b | Status: active
# Step 4: Test replication:
mc mb site-a/replicated-test
echo "site replication test" | mc pipe site-a/replicated-test/test.txt
# Check it appeared on site-b:
mc cat site-b/replicated-test/test.txt
# Should return the same content
# Step 5: Monitor replication lag:
mc admin replicate status site-a
# Shows: pending objects, failed objects, and lag in seconds
Bucket-Level Replication for Selective Sync
# For selective replication (specific buckets, not whole site),
# use bucket-level replication rules instead of site replication
# Useful when you only need to replicate critical data cross-region
# Step 1: Enable versioning on source bucket (required for replication):
mc version enable site-a/critical-assets
# Step 2: Create target bucket:
mc mb site-b/critical-assets-backup
mc version enable site-b/critical-assets-backup
# Step 3: Create replication policy on source:
mc replicate add site-a/critical-assets \
  --remote-bucket "https://SITE_B_ACCESS_KEY:SITE_B_SECRET_KEY@minio-site-b.yourdomain.com/critical-assets-backup" \
  --replicate "delete-marker,delete,existing-objects"
# Step 4: Verify replication rule:
mc replicate ls site-a/critical-assets
# Step 5: Check replication status per bucket:
mc replicate status site-a/critical-assets
# Shows: objects queued, completed, failed
# Trigger immediate sync of existing objects:
mc replicate resync start site-a/critical-assets
mc replicate resync status site-a/critical-assets
# Monitor until Complete status
Object Locking, Versioning, and Compliance
Regulated industries (healthcare, finance, legal) require data immutability guarantees — objects that can't be deleted or overwritten for a specified period, regardless of who tries. MinIO's object locking (WORM — Write Once Read Many) provides S3-compatible immutability that satisfies SEC 17a-4, CFTC, HIPAA, and similar requirements.
Enabling Object Locking on a Bucket
# IMPORTANT: Object locking must be enabled at bucket creation
# It cannot be added to an existing bucket
# Create a compliance bucket with object locking:
mc mb --with-lock distributed/compliance-records
# Set default retention policy (GOVERNANCE or COMPLIANCE mode):
# GOVERNANCE: admins with special permissions can override
# COMPLIANCE: NO ONE can delete or modify until retention period expires
# Set COMPLIANCE mode retention (stricter — preferred for regulated industries):
mc retention set --default COMPLIANCE 7y distributed/compliance-records
# Now ALL objects in this bucket are retained for 7 years by default
# Verify retention policy:
mc retention info distributed/compliance-records
# Upload a compliance document:
mc cp /path/to/contract.pdf distributed/compliance-records/2026/Q1/contract.pdf
# Verify the object is locked:
mc stat distributed/compliance-records/2026/Q1/contract.pdf | grep -i retain
# Shows: Retention: COMPLIANCE | Retain Until: 2033-04-09
# Try to delete it (should fail):
mc rm distributed/compliance-records/2026/Q1/contract.pdf
# Error: object is WORM protected
# You can set per-object retention overrides:
mc retention set COMPLIANCE "2030-12-31T00:00:00Z" \
distributed/compliance-records/2026/Q1/special-contract.pdf
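Applications can apply the same retention through any S3 SDK. Here is a hedged boto3 sketch (the endpoint, credentials, bucket, and key names are placeholders; the target bucket must have been created with --with-lock, and ObjectLockMode/ObjectLockRetainUntilDate are standard S3 put_object parameters):

```python
# Sketch: per-object COMPLIANCE retention via the S3 API with boto3.
# Endpoint, credentials, and object names below are placeholders.
from datetime import datetime, timezone
from typing import Optional

def retain_until(years: int, now: Optional[datetime] = None) -> datetime:
    """Retention deadline `years` from now, in UTC (naive about Feb 29)."""
    now = now or datetime.now(timezone.utc)
    return now.replace(year=now.year + years)

def upload_locked(s3, bucket: str, key: str, body: bytes, years: int = 7):
    # The object cannot be deleted or overwritten until the retain-until date.
    return s3.put_object(
        Bucket=bucket, Key=key, Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until(years),
    )

def make_client():
    """Construct a boto3 client pointed at MinIO (placeholder values)."""
    import boto3  # pip install boto3
    return boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",
        aws_access_key_id="MINIO_ROOT_USER",
        aws_secret_access_key="MINIO_ROOT_PASSWORD",
    )
```

Usage would look like `upload_locked(make_client(), "compliance-records", "2026/Q1/contract.pdf", pdf_bytes)`; verify the result afterwards with `mc stat` as shown above.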
Versioning for Non-Compliance Use Cases
# Versioning keeps previous versions of objects when they're overwritten or deleted
# Useful for user file storage, configuration backups, and any "undo" capability
# Enable versioning on a bucket:
mc version enable distributed/user-files
# Verify versioning is on:
mc version info distributed/user-files
# Returns: Versioning status: Enabled
# Upload a file, then overwrite it:
echo "version 1" | mc pipe distributed/user-files/document.txt
echo "version 2" | mc pipe distributed/user-files/document.txt
echo "version 3" | mc pipe distributed/user-files/document.txt
# List all versions:
mc ls --versions distributed/user-files/document.txt
# Shows 3 versions with different version IDs
# Restore a specific version (mc lists versions newest first):
VERSION_ID=$(mc ls --versions --json distributed/user-files/document.txt | \
  jq -r '.versionId' | sed -n '2p') # second-newest, i.e. "version 2"
mc cp --version-id "$VERSION_ID" \
distributed/user-files/document.txt \
distributed/user-files/document-v2-restored.txt
# Delete a specific version (permanent):
mc rm --version-id "$VERSION_ID" distributed/user-files/document.txt
# Set versioning to SUSPENDED (keep existing versions, don't create new ones):
mc version suspend distributed/user-files
Lifecycle Management: Automated Data Tiering and Expiry
Storing everything in hot storage forever is expensive. Lifecycle rules automatically move older objects to cheaper storage tiers, delete expired data, and clean up incomplete multipart uploads — all without manual intervention.
Implementing Tiered Lifecycle Policies
# MinIO lifecycle rules use the standard S3 lifecycle configuration format
# (MinIO's docs call this ILM — information lifecycle management)
# Create a JSON policy file:
cat > lifecycle-policy.json << 'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 90 }
    },
    {
      "ID": "clean-incomplete-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    },
    {
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30,
        "NewerNoncurrentVersions": 5
      }
    },
    {
      "ID": "transition-to-warm-storage",
      "Status": "Enabled",
      "Filter": { "Prefix": "archives/" },
      "Transition": {
        "Days": 30,
        "StorageClass": "WARM"
      }
    }
  ]
}
EOF
# Create the warm storage tier FIRST — the transition rule above references
# StorageClass "WARM", which must exist before the policy can be applied.
# This points at a lower-cost MinIO deployment (MinIO → MinIO tiering):
mc ilm tier add minio distributed WARM \
  --endpoint https://minio-warm.yourdomain.com \
  --access-key ${WARM_ACCESS_KEY} \
  --secret-key ${WARM_SECRET_KEY} \
  --bucket warm-data \
  --prefix archive-tier/
# Apply the lifecycle policy:
mc ilm import distributed/app-data < lifecycle-policy.json
# Verify the policy was applied:
mc ilm ls distributed/app-data
# Inspect configured tiers:
mc ilm tier ls distributed
# Objects under the archives/ prefix automatically move to the WARM tier after 30 days
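To sanity-check how the rules interact before applying them, here is a toy evaluator for current object versions. It is a deliberate simplification of MinIO's scanner that only handles the Prefix, Expiration, and Transition fields used in the policy above:

```python
# Toy lifecycle evaluation for current object versions. A simplification of
# MinIO's scanner covering only Prefix/Expiration/Transition fields.
def applicable_actions(policy: dict, key: str, age_days: int) -> list:
    actions = []
    for rule in policy["Rules"]:
        if rule.get("Status") != "Enabled":
            continue
        prefix = rule.get("Filter", {}).get("Prefix", "")
        if not key.startswith(prefix):
            continue
        exp_days = rule.get("Expiration", {}).get("Days")
        if exp_days is not None and age_days >= exp_days:
            actions.append(("expire", rule["ID"]))
        transition = rule.get("Transition")
        if transition and age_days >= transition.get("Days", 0):
            actions.append((f"transition:{transition['StorageClass']}", rule["ID"]))
    return actions

# Trimmed copy of two rules from lifecycle-policy.json:
policy = {"Rules": [
    {"ID": "expire-old-logs", "Status": "Enabled",
     "Filter": {"Prefix": "logs/"}, "Expiration": {"Days": 90}},
    {"ID": "transition-to-warm-storage", "Status": "Enabled",
     "Filter": {"Prefix": "archives/"},
     "Transition": {"Days": 30, "StorageClass": "WARM"}},
]}

print(applicable_actions(policy, "logs/app-2025.log", 120))
# [('expire', 'expire-old-logs')]
print(applicable_actions(policy, "archives/2024/dump.tar", 45))
# [('transition:WARM', 'transition-to-warm-storage')]
```

A 120-day-old log expires, a 45-day-old archive transitions, and anything younger than every threshold is left alone; that is the entire decision model for current versions.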
Production Operations: Monitoring, Capacity Planning, and Maintenance
Prometheus Metrics and Grafana Dashboard
# MinIO exposes Prometheus metrics natively
# Enable metrics collection:
mc admin prometheus generate distributed > minio-prometheus.yaml
cat minio-prometheus.yaml
# Outputs a scrape config with authentication tokens
# Add to prometheus.yml:
# - job_name: 'minio'
# bearer_token: GENERATED_TOKEN
# metrics_path: /minio/v2/metrics/cluster
# scheme: https
# static_configs:
# - targets: ['minio.yourdomain.com:9000']
# Key MinIO metrics for dashboards (v2 metrics; exact names vary by release,
# so check /minio/v2/metrics/cluster on your deployment):
# minio_cluster_capacity_usable_free_bytes — available usable storage
# minio_cluster_capacity_usable_total_bytes — total usable capacity
# minio_cluster_disk_offline_total — offline drives
# minio_s3_requests_total — request rate by API
# minio_s3_requests_errors_total — error count
# minio_node_process_uptime_seconds — per-node uptime
# Useful PromQL for Grafana panels:
# Storage utilization %:
# (1 - (minio_cluster_capacity_usable_free_bytes / minio_cluster_capacity_usable_total_bytes)) * 100
# Request rate (last 5 min):
# rate(minio_s3_requests_total[5m])
# Error rate:
# rate(minio_s3_requests_errors_total[5m]) / rate(minio_s3_requests_total[5m]) * 100
# Alert: storage > 85% full:
# minio_cluster_capacity_usable_free_bytes / minio_cluster_capacity_usable_total_bytes * 100 < 15
# Download the official MinIO Grafana dashboard:
# Dashboard ID: 13502 at grafana.com/grafana/dashboards/13502
# Import to Grafana → Dashboards → Import → Enter ID 13502
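For a quick check without standing up Grafana, the Prometheus text exposition format is easy to scrape and parse directly. A minimal sketch (the metric names in the sample are illustrative and vary by MinIO release):

```python
# Minimal parser for the Prometheus text exposition format, summing a metric
# across label sets. Metric names in the sample are illustrative; check what
# your MinIO release actually exposes at /minio/v2/metrics/cluster.
def parse_metrics(text: str) -> dict:
    totals = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and HELP/TYPE lines
            continue
        name_part, _, value = line.rpartition(" ")
        base_name = name_part.split("{")[0]   # strip the label set
        totals[base_name] = totals.get(base_name, 0.0) + float(value)
    return totals

sample = """
# HELP minio_cluster_capacity_usable_total_bytes Total usable capacity
minio_cluster_capacity_usable_total_bytes{server="minio1:9000"} 4e+12
minio_cluster_capacity_usable_free_bytes{server="minio1:9000"} 1e+12
"""

m = parse_metrics(sample)
used_pct = (1 - m["minio_cluster_capacity_usable_free_bytes"]
            / m["minio_cluster_capacity_usable_total_bytes"]) * 100
print(f"{used_pct:.1f}% used")  # 75.0% used
```

In practice you would fetch the text with an authenticated HTTP GET (the bearer token from `mc admin prometheus generate`) instead of the hardcoded sample.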
Capacity Planning and Expansion
#!/usr/bin/env python3
# minio-capacity-planner.py
# Summarizes MinIO cluster capacity and per-bucket usage, and flags thresholds
import subprocess
import json
from datetime import datetime

def run_mc(args: list) -> dict:
    """Run an mc command and parse its JSON output (empty dict on failure)."""
    result = subprocess.run(['mc'] + args, capture_output=True, text=True)
    try:
        return json.loads(result.stdout)
    except (json.JSONDecodeError, ValueError):
        return {}

def parse_size(size_str: str) -> float:
    """Convert mc's human-readable sizes (e.g. '1.2GiB') to bytes for sorting."""
    units = {'TiB': 2**40, 'GiB': 2**30, 'MiB': 2**20, 'KiB': 2**10, 'B': 1}
    for unit, factor in units.items():  # largest suffixes first
        if size_str.endswith(unit):
            return float(size_str[:-len(unit)]) * factor
    return 0.0

# Cluster-wide drive stats (JSON field names follow recent mc releases;
# verify against `mc admin info distributed --json` on your version):
cluster_info = run_mc(['admin', 'info', 'distributed', '--json'])
total_bytes = used_bytes = 0
for server in cluster_info.get('info', {}).get('servers', []):
    for drive in server.get('drives', []):
        total_bytes += drive.get('totalspace', 0)
        used_bytes += drive.get('usedspace', 0)
free_bytes = total_bytes - used_bytes
used_pct = (used_bytes / total_bytes * 100) if total_bytes > 0 else 0

# Per-bucket usage (mc du prints human-readable sizes):
bucket_sizes = {}
result = subprocess.run(['mc', 'du', '--recursive', '--depth=1', 'distributed/'],
                        capture_output=True, text=True)
for line in result.stdout.strip().split('\n'):
    parts = line.split('\t')
    if len(parts) >= 2:
        size_str, path = parts[0].strip(), parts[-1]
        bucket_sizes[path.strip('/')] = (parse_size(size_str), size_str)

print(f"MinIO Capacity Report — {datetime.now().strftime('%Y-%m-%d %H:%M')}")
print("=" * 60)
print(f"Total capacity: {total_bytes / 1e12:.1f} TB")
print(f"Used: {used_bytes / 1e12:.1f} TB ({used_pct:.1f}%)")
print(f"Free: {free_bytes / 1e12:.1f} TB")
print()
print("Top buckets by size:")
for bucket, (_, human) in sorted(bucket_sizes.items(),
                                 key=lambda kv: kv[1][0], reverse=True)[:10]:
    print(f"  {bucket:40} {human}")

# Warning thresholds:
if used_pct > 85:
    print(f"\n🚨 CRITICAL: Storage is {used_pct:.1f}% full — add capacity immediately")
elif used_pct > 70:
    print(f"\n⚠️ WARNING: Storage is {used_pct:.1f}% full — plan capacity expansion")
else:
    print(f"\n✅ Storage utilization is healthy at {used_pct:.1f}%")

# Schedule weekly:
# 0 9 * * 1 python3 /opt/scripts/minio-capacity-planner.py | mail -s "Weekly MinIO Capacity" [email protected]
Cluster Expansion: Adding New Server Pool
# MinIO expands by adding server pools — new groups of servers
# Existing data doesn't need to move; new writes distribute across pools
# This is a zero-downtime horizontal expansion
# Current state: 4 nodes (minio1-4) with the original pool
# Adding a new pool: minio5-8 with additional drives
# Step 1: Prepare the new nodes:
# Install MinIO on minio5, minio6, minio7, minio8
# Mount dedicated drives at /data1 and /data2 on each
# Step 2: Update the MinIO command on ALL nodes to include the new pool:
# Old command:
# server http://minio{1...4}/data{1...2}
# New command (add the second pool after a space):
# server http://minio{1...4}/data{1...2} http://minio{5...8}/data{1...2}
# Update docker-compose.yml on all 8 nodes:
# command: server --console-address ":9001" \
# http://minio{1...4}/data{1...2} \
# http://minio{5...8}/data{1...2}
# Step 3: Rolling restart — restart one node at a time:
# MinIO continues serving requests during a rolling restart
for i in 1 2 3 4 5 6 7 8; do
  docker restart minio${i}  # adjust to your actual container names
  sleep 30  # wait for the node to rejoin before restarting the next
  mc admin info distributed --json | jq '.info.backend.onlineDisks'
done
# Step 4: Verify new pool is active:
mc admin info distributed
# Should now show: 16 online disks (8 original + 8 new)
# New writes distribute across both pools automatically
# Step 5: Verify existing data is still accessible:
mc cat distributed/test-bucket/test-object.txt
# Existing data remains in original pool — no rebalancing needed
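A back-of-the-envelope check on usable capacity after expansion can be sketched like this (the drive sizes are hypothetical, and this assumes one erasure set spanning each pool; MinIO chooses the actual erasure-set layout automatically):

```python
# Back-of-the-envelope usable capacity per server pool. Drive sizes are
# hypothetical; assumes one erasure set spanning the whole pool, whereas
# MinIO picks the real erasure-set layout automatically.
def pool_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    data = drives - parity
    # raw capacity * data/(data+parity) simplifies to drive_tb * data_shards
    return drive_tb * data

original_pool = pool_usable_tb(8, 4.0, 4)  # 8 x 4 TB drives at EC:4
new_pool = pool_usable_tb(8, 8.0, 4)       # 8 x 8 TB drives at EC:4
print(original_pool, original_pool + new_pool)  # 16.0 48.0
```

Pools can use different drive sizes, as here, but each pool pays its own parity overhead, so dense pools with wider stripes give better usable ratios.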
Tips, Gotchas, and Troubleshooting
Cluster Not Forming — Nodes Can't Find Each Other
# MinIO distributed mode requires all nodes to start with the same configuration
# and be able to reach each other on port 9000
# Step 1: Test inter-node connectivity:
docker exec minio1 ping -c 3 minio2
docker exec minio1 curl -s http://minio2:9000/minio/health/live
# Should return: OK
# Step 2: Verify all nodes have identical configuration:
for node in minio1 minio2 minio3 minio4; do
echo "=== $node ==="
docker exec $node env | grep MINIO_ROOT_USER
done
# ROOT_USER must be IDENTICAL on all nodes
# If different, the cluster won't form
# Step 3: Check MinIO logs for specific errors:
docker logs minio1 --tail 30 | grep -iE '(error|failed|unable|warn)'
# Common issues:
# - Different MINIO_ROOT_PASSWORD on different nodes:
# Fix: set identical credentials in .env and restart all nodes simultaneously
# - Drives not writable:
# docker exec minio1 touch /data1/test
# If permission denied: fix volume mount ownership
# docker run --rm -v minio1-data1:/data alpine chown -R 1000:1000 /data
# - Insufficient drives for erasure coding:
# MinIO requires minimum 4 drives for EC:2 or 8 for EC:4
# With 8 drives across 4 nodes, you need 2 drives per node minimum
Replication Falling Behind or Stuck
# Check replication backlog:
mc admin replicate status site-a --json | jq '.replication'
# If there's a large backlog:
# 1. Check network bandwidth between sites:
iperf3 -c minio-site-b.yourdomain.com -p 5201 -t 30
# Low bandwidth = slow replication
# 2. Check CPU load on both sites (run top/htop on each node,
#    or check your node-exporter dashboards):
# High CPU = slow replication processing
# 3. Manually trigger resync for stuck objects:
mc replicate resync start site-a/affected-bucket
mc replicate resync status site-a/affected-bucket
# 4. Check for failed replication objects:
mc replicate status site-a/affected-bucket --json | \
jq '.stats | {pending: .pendingCount, failed: .failedCount}'
# 5. If replication is completely stuck (no progress in hours):
# Reset replication state on source:
mc admin replicate reset site-a site-b
# WARNING: This resets replication metadata — objects resync from scratch
# 6. Increase replication worker threads (variable names and defaults
#    vary by MinIO release — check the release notes for yours):
# MINIO_REPLICATION_WORKERS=250 # raise above the default
# MINIO_REPLICATION_FAILED_WORKERS=8 # workers retrying failed objects
Object Lock Preventing Legitimate Deletion
# Check if an object is locked:
mc stat distributed/compliance-records/locked-file.pdf | grep -i retain
# Shows: Retain Until: 2030-12-31
# GOVERNANCE mode: admin can override the lock
# (Only works for GOVERNANCE, not COMPLIANCE mode)
mc rm --bypass-governance distributed/compliance-records/locked-file.pdf
# COMPLIANCE mode: NO override is possible — by design
# You cannot delete compliance-locked objects before the retention date
# This is intentional for regulatory compliance
# If you accidentally locked objects with wrong retention date in COMPLIANCE mode:
# There is no fix — you must wait for the retention period to expire
# This is why COMPLIANCE mode testing should always be done in a test bucket first
# Best practice for compliance bucket setup:
# 1. Test with GOVERNANCE mode first:
mc retention set --default GOVERNANCE 30d distributed/test-compliance-bucket
# 2. Verify everything works as expected
# 3. Create production COMPLIANCE bucket with correct retention period
# 4. Document the retention period and who approved it
# Check all locked objects in a bucket:
mc ls --recursive --json distributed/compliance-records/ | \
jq 'select(.retainUntilDate != null) | {key: .key, retain_until: .retainUntilDate}'
Pro Tips
- Size erasure sets in multiples of 4 drives — MinIO performs best with 4, 8, 12, or 16 drives per erasure set. Mixing drive counts per node complicates capacity planning and can reduce effective storage utilization. Consistent hardware across nodes is the right default.
- Test site replication failover quarterly before you need it — create a test bucket, write objects, stop site-a, read from site-b, verify all objects are accessible. Document the actual failover procedure and time it. The first time your team executes a failover should not be during a real incident.
- Use separate MinIO deployments for different storage tiers — a high-performance NVMe cluster for hot data and a high-density HDD cluster for warm/cold data is a clean separation that avoids ILM complexity. Configure ILM to tier between them via the mc tier command rather than mixing drive types in the same erasure set.
- Monitor incomplete multipart uploads — large file uploads that fail partway through leave orphaned data. The lifecycle policy in this guide handles cleanup automatically, but check mc ls --incomplete distributed/your-bucket monthly until the policy kicks in.
- Set MINIO_SITE_NAME before any objects are written — changing a site name after deployment is disruptive, especially once site replication is configured. Set it correctly at deployment time and leave it alone.
Wrapping Up
Advanced MinIO self-host setup at production scale means: distributed erasure coding that survives multi-drive failures, active-active site replication that survives datacenter loss, compliance-grade object locking for regulated industries, automated lifecycle management that controls storage costs, and operational monitoring that warns you before problems become incidents. Together, these turn a proof-of-concept MinIO deployment into storage infrastructure your organization can stake its data on.
Start with distributed mode and erasure coding on your first multi-node cluster. Add Prometheus monitoring and the capacity planning script. Then layer on site replication when you need geographic redundancy, and object locking when compliance requirements demand it. Each layer is independently valuable — you don't need everything on day one.
For the single-node foundation that this guide builds on, see our MinIO getting started guide covering buckets, policies, SDK integration, and Nginx proxy configuration.
Need Enterprise Object Storage Infrastructure Built for Your Organization?
Designing a distributed MinIO cluster with erasure coding, multi-site replication, compliance object locking, automated lifecycle management, and production monitoring — the sysbrix team builds and operates object storage infrastructure for organizations that need S3-compatible storage at scale, on their own hardware, without AWS bills or vendor lock-in.
Talk to Us →