MinIO Self-Host Setup: Run S3-Compatible Object Storage on Your Own Server
AWS S3 is the de facto standard for object storage — but it comes with per-GB pricing, egress fees, and data leaving your infrastructure. MinIO gives you everything S3 offers: buckets, objects, presigned URLs, lifecycle policies, versioning, server-side encryption, and full S3 API compatibility — running on a server you own. Any app already using the AWS S3 SDK can switch to MinIO with a one-line config change. This MinIO self-host setup guide walks you through a complete deployment from scratch.
Prerequisites
- A Linux server (Ubuntu 20.04+ recommended) with at least 1GB RAM
- Docker Engine and Docker Compose v2 installed
- Sufficient disk space for your storage needs — MinIO stores data on the host filesystem
- A domain name for production use (MinIO Console and API both benefit from HTTPS)
- Ports 9000 (API) and 9001 (Console UI) available
Check your environment before starting:
```shell
docker --version
docker compose version
df -h /   # Check available disk space
free -h

# Confirm ports are free (no output means free)
sudo ss -tlnp | grep -E ':9000|:9001'
```
What Is MinIO and When Should You Use It?
MinIO is a high-performance, S3-compatible object storage server written in Go. It's designed to be deployed anywhere — a single VPS, a bare-metal server, or a Kubernetes cluster — and it speaks the S3 API natively. That means any library, tool, or service that supports S3 (the AWS SDK, Terraform, Supabase Storage, Dify, Nextcloud external storage, and hundreds more) works with MinIO out of the box.
What You Get
- Full S3 API compatibility — buckets, objects, multipart uploads, presigned URLs, object tagging, lifecycle rules, versioning
- High performance — MinIO is benchmarked at hundreds of GB/s on NVMe storage; it's not a toy
- Web Console — a clean browser UI for managing buckets, objects, users, and policies
- Identity and access management — service accounts, access keys, bucket policies using IAM-style JSON
- Server-side encryption — SSE-S3 and SSE-C support
- Event notifications — trigger webhooks, Kafka, or NATS events on bucket operations
- Erasure coding — data protection across drives in multi-drive deployments
Single-Node vs. Distributed
For most self-hosted use cases — app file uploads, model storage, backups, media serving — a single-node MinIO deployment is exactly right. Distributed mode (multiple nodes, erasure coding) is for when you need high availability and are running at scale. This guide covers single-node, which fits the vast majority of self-hosted needs.
Deploying MinIO with Docker Compose
Single-Node Setup
Create a project directory and Compose file:
```shell
mkdir -p ~/minio/data
cd ~/minio
```

```yaml
# docker-compose.yml
# (the top-level `version` key is obsolete in Compose v2 and omitted here)
services:
  minio:
    image: quay.io/minio/minio:latest
    container_name: minio
    restart: unless-stopped
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # Web Console
    environment:
      - MINIO_ROOT_USER=${MINIO_ROOT_USER}
      - MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
      # Set your public domain for presigned URL generation
      - MINIO_SERVER_URL=https://s3.yourdomain.com
      - MINIO_BROWSER_REDIRECT_URL=https://minio.yourdomain.com
    volumes:
      - ./data:/data
    command: server /data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
```
Create your .env file — the root credentials are your admin account:
```shell
# .env
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=a-very-strong-password-at-least-8-chars

# Generate a strong password:
# openssl rand -base64 24
```
Start MinIO:
```shell
docker compose up -d
docker compose logs -f minio
```
Watch for `API: http://0.0.0.0:9000` and `WebUI: http://0.0.0.0:9001` in the log output. Open http://localhost:9001 and log in with your root credentials. You're in the MinIO Console.
Storing Data on a Dedicated Disk
For production, point MinIO's data directory at a dedicated mounted volume rather than the system disk. Mount your storage disk and update the volume in Compose:
```shell
# Mount a dedicated disk (example: /dev/sdb → /mnt/minio-data)
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/minio-data
sudo mount /dev/sdb /mnt/minio-data
sudo chown -R 1000:1000 /mnt/minio-data

# Add to /etc/fstab for persistence across reboots
# (for robustness, prefer the disk UUID from `sudo blkid /dev/sdb`):
echo "/dev/sdb /mnt/minio-data ext4 defaults 0 2" | sudo tee -a /etc/fstab

# Update the docker-compose.yml volumes section:
#   - /mnt/minio-data:/data
```
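Before restarting MinIO, it's worth confirming the mount is active and writable:

```shell
# The mount should appear with the expected size
df -h /mnt/minio-data

# Filesystem type and UUID of the new disk
lsblk -f /dev/sdb

# Write test as the uid the data directory is owned by
sudo -u '#1000' touch /mnt/minio-data/.write-test && \
  sudo rm /mnt/minio-data/.write-test
```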
Configuring Buckets, Users, and Policies
Creating Buckets via the Console
In the MinIO Console, go to Buckets → Create Bucket. Key settings to consider:
- Versioning — keeps previous versions of objects on every overwrite. Enable for any bucket where you might need rollback.
- Object Locking — prevents object deletion for a configured period. Useful for backups and compliance. Must be enabled at bucket creation — can't be added later.
- Quota — cap the total size of a bucket to prevent runaway storage usage.
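The same settings can be applied from the mc CLI (covered in the next section); a sketch using an alias named `local` and illustrative bucket names:

```shell
# Object Locking must be enabled at bucket creation time:
mc mb --with-lock local/backups

# Versioning can be toggled on an existing bucket:
mc version enable local/my-app-uploads
mc version info local/my-app-uploads
```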
Creating Service Account Keys
Never use your root credentials in application code. Create service accounts with scoped permissions instead. In the Console: Identity → Service Accounts (labeled Access Keys in newer Console versions) → Create Service Account. You'll get an Access Key and Secret Key pair.
Alternatively, use the MinIO Client (mc) CLI:
```shell
# Install the mc CLI
curl -O https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/

# Configure mc to talk to your MinIO instance
mc alias set local http://localhost:9000 minioadmin yourpassword

# Verify the connection
mc admin info local

# Create a bucket
mc mb local/my-app-uploads

# Create a service account for your app
mc admin user svcacct add local minioadmin
# Returns an Access Key and Secret Key — save these
```
Writing Bucket Policies
MinIO uses IAM-style JSON policies for access control. Here's a policy that gives read-write access to a specific bucket — attach this to a service account used by your application:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-app-uploads",
        "arn:aws:s3:::my-app-uploads/*"
      ]
    }
  ]
}
```
Apply it via mc:
```shell
# Save the policy JSON to a file, then:
mc admin policy create local app-uploads-policy policy.json

# Create a dedicated user for this app
mc admin user add local app-user a-strong-app-password

# Attach the policy to the user
mc admin policy attach local app-uploads-policy --user app-user

# Verify
mc admin user info local app-user
```
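A read-only variant is useful for services that only consume files; it's the same structure with the write actions removed, created and attached the same way:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-app-uploads",
        "arn:aws:s3:::my-app-uploads/*"
      ]
    }
  ]
}
```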
Public Read Bucket for Static Assets
For publicly accessible assets (images, static files, downloads), set an anonymous read policy on the bucket:
```shell
# Set public read policy on a bucket
mc anonymous set public local/public-assets

# Verify — objects in this bucket are accessible without auth:
curl https://s3.yourdomain.com/public-assets/your-file.jpg

# To revert to private:
mc anonymous set none local/public-assets
```
Connecting Apps via the S3 SDK
Python (boto3)
Any existing code using boto3 for AWS S3 works with MinIO — just change the endpoint URL and credentials:
```python
import boto3
from botocore.client import Config

# MinIO client using boto3
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.yourdomain.com',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    config=Config(signature_version='s3v4'),
    region_name='us-east-1',  # Required by boto3 but ignored by MinIO
)

# Upload a file
s3.upload_file('/local/path/file.pdf', 'my-app-uploads', 'documents/file.pdf')

# Generate a presigned URL (valid for 1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-app-uploads', 'Key': 'documents/file.pdf'},
    ExpiresIn=3600,
)
print(url)

# List objects in a bucket
response = s3.list_objects_v2(Bucket='my-app-uploads', Prefix='documents/')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
```
JavaScript / TypeScript (AWS SDK v3)
```typescript
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const s3 = new S3Client({
  endpoint: 'https://s3.yourdomain.com',
  region: 'us-east-1', // Required but ignored by MinIO
  credentials: {
    accessKeyId: process.env.MINIO_ACCESS_KEY!,
    secretAccessKey: process.env.MINIO_SECRET_KEY!
  },
  forcePathStyle: true // Required for MinIO — uses path-style URLs
})

// Upload a file
await s3.send(new PutObjectCommand({
  Bucket: 'my-app-uploads',
  Key: 'avatars/user-123.jpg',
  Body: fileBuffer,
  ContentType: 'image/jpeg'
}))

// Generate a presigned download URL
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: 'my-app-uploads', Key: 'avatars/user-123.jpg' }),
  { expiresIn: 3600 }
)
console.log('Download URL:', url)
```
The critical setting for MinIO compatibility is `forcePathStyle: true` (JS) or the equivalent in other SDKs. AWS S3 uses virtual-hosted-style URLs (`bucket.s3.amazonaws.com`) by default — MinIO uses path-style URLs (`s3.yourdomain.com/bucket`). Without this flag, requests go to the wrong URL format and fail.
Serving MinIO Behind Nginx with HTTPS
For production, put both the S3 API and the Console behind Nginx with separate subdomains:
```nginx
# /etc/nginx/sites-available/minio

# S3 API endpoint
server {
    listen 80;
    server_name s3.yourdomain.com minio.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name s3.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/s3.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/s3.yourdomain.com/privkey.pem;

    # Large object uploads
    client_max_body_size 0;   # 0 = unlimited, MinIO handles its own limits
    proxy_buffering off;
    proxy_request_buffering off;

    location / {
        proxy_pass http://localhost:9000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
        chunked_transfer_encoding off;
    }
}

# MinIO Console
server {
    listen 443 ssl http2;
    server_name minio.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/minio.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/minio.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:9001;
        proxy_http_version 1.1;
        # The Console uses WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
```shell
# Get certs for both subdomains first; if the certificate files referenced
# in the config don't exist yet, `nginx -t` fails on the missing paths
sudo certbot --nginx -d s3.yourdomain.com -d minio.yourdomain.com

sudo nginx -t && sudo systemctl reload nginx

# Test the S3 API endpoint
curl -I https://s3.yourdomain.com/minio/health/live
# Should return HTTP 200
```
Tips, Gotchas, and Troubleshooting
Presigned URLs Pointing to Wrong Host
If your app generates presigned URLs and they contain localhost:9000 instead of your public domain, the `MINIO_SERVER_URL` environment variable isn't set or doesn't match your public URL. Confirm it's set to the full public URL and restart the container:
```shell
# Verify the env var is set in the running container
docker exec minio env | grep MINIO_SERVER_URL

# If missing, add to the docker-compose.yml environment section:
#   - MINIO_SERVER_URL=https://s3.yourdomain.com

# Restart to apply:
docker compose up -d --force-recreate minio
```
Large File Uploads Failing or Timing Out
MinIO uses multipart uploads for large files — the S3 SDK handles this automatically. If uploads are timing out, the issue is usually Nginx's proxy timeouts or `client_max_body_size` being set too low. The Nginx config above sets both to handle large objects. Also check that `proxy_request_buffering off` is set — without it, Nginx buffers the entire upload before forwarding, which kills performance and causes timeouts for large files.
Access Denied Errors Despite Correct Credentials
```shell
# Test credentials directly with mc
mc alias set test https://s3.yourdomain.com YOUR_ACCESS_KEY YOUR_SECRET_KEY
mc ls test/

# Check what policies are attached to the user
mc admin user info local your-username

# List all policies
mc admin policy list local

# Test a specific operation
mc cp /tmp/test.txt test/your-bucket/test.txt
# The error message will tell you exactly which action is denied
```
Disk Space Running Out
MinIO stores objects as files on disk without compression. Monitor disk usage and set bucket quotas to prevent runaway growth:
```shell
# Check disk usage per bucket
mc du --depth 1 local/

# Set a 100GiB quota on a bucket
mc quota set local/my-app-uploads --size 100GiB

# Configure lifecycle rules to auto-delete old objects
cat > lifecycle.json << 'EOF'
{
  "Rules": [{
    "ID": "expire-old-logs",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},
    "Expiration": {"Days": 30}
  }]
}
EOF
mc ilm import local/my-app-uploads < lifecycle.json
```
Updating MinIO
```shell
docker compose pull minio
docker compose up -d minio

# Verify new version and health
docker logs minio --tail 10
curl http://localhost:9000/minio/health/live

# Check current version
docker exec minio minio --version
```
Your data directory is on the host filesystem and completely separate from the MinIO container — updates never touch your stored objects.
Pro Tips
- Use path-style URLs consistently — MinIO supports virtual-hosted-style URLs, but path-style is simpler to configure and debug. Set `forcePathStyle: true` (or the equivalent) in every SDK client.
- Enable bucket versioning for anything important — with versioning on, accidental deletes and overwrites are recoverable. The storage cost is the only downside, and it's usually worth it for critical buckets.
- Use MinIO as a Supabase Storage backend — Supabase self-hosted lets you configure an S3-compatible backend for Storage. Point it at your MinIO instance and get Supabase's storage API layer on top of your own object store.
- Set up event notifications for automation — MinIO can POST to a webhook on every bucket event (PUT, DELETE, GET). Use this with n8n or your own endpoint to trigger workflows when files are uploaded.
- Back up MinIO's config, not just your data — the `./data` volume contains both objects and MinIO's internal metadata. Back up the whole directory, not just subdirectories that look like your buckets.
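The event-notification tip above can be sketched with mc; the webhook endpoint and target name here are illustrative:

```shell
# Register a webhook target named "uploads" (endpoint is illustrative)
mc admin config set local notify_webhook:uploads \
  endpoint="https://hooks.example.com/minio-events"

# Restart MinIO to apply the config change
mc admin service restart local

# Fire the webhook whenever an object is created in my-app-uploads
mc event add local/my-app-uploads arn:minio:sqs::uploads:webhook --event put

# Verify the event rule
mc event ls local/my-app-uploads arn:minio:sqs::uploads:webhook
```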
Wrapping Up
A complete MinIO self-host setup gives you production-grade S3-compatible object storage on hardware you control — no egress fees, no per-request pricing, and no data leaving your network. Any code already using the AWS S3 SDK works immediately with a one-line endpoint change. For self-hosted stacks running Supabase, Dify, Nextcloud, or anything else that needs object storage, MinIO is the obvious infrastructure layer to add.
Deploy with Docker Compose, point your data directory at a dedicated disk, create scoped service accounts per app, set up HTTPS via Nginx, and you have an S3-compatible storage layer that will scale with your disk size rather than your cloud bill.
Need Object Storage Designed Into Your Infrastructure?
If you're building a platform where multiple apps share object storage — with proper IAM policies, replication, CDN integration, and backup automation — the sysbrix team can design and implement it end to end. We build storage infrastructure that scales cleanly from day one.
Talk to Us →