
Production Guide: Deploy MinIO with Kubernetes + Helm + cert-manager + NGINX Ingress on Ubuntu

A production-focused deployment playbook with secure secrets handling, day-2 operations, verification, and incident-ready troubleshooting.

Introduction: real-world use case

Teams that run analytics, AI pipelines, media processing, and backup workloads often outgrow ad-hoc file storage. At that point, object storage becomes a core platform dependency: reliable enough for production data paths, secure enough for audits, and observable enough for incident response. MinIO is a strong open-source option when you need S3-compatible APIs without handing your architecture to a managed provider. This guide shows how to deploy MinIO on Kubernetes with Helm, terminate TLS with cert-manager, expose traffic through NGINX Ingress, and run it with practical operational guardrails. The goal is not a demo cluster; it is a deployment your team can monitor, recover, and upgrade with confidence.

We will assume you already operate Kubernetes for internal workloads and want a predictable path to production. That means we will focus on namespace isolation, principle-of-least-privilege credentials, storage-class decisions, health checks, upgrade sequencing, and a verification checklist you can hand to operations. By the end, you will have a deployment pattern that supports growth, keeps blast radius low, and minimizes surprises during maintenance windows.

Architecture and flow overview

The reference architecture is intentionally simple: MinIO runs in a dedicated namespace, persistent volumes are backed by your production storage class, and an Ingress route provides HTTPS access through your existing edge stack. cert-manager issues and renews TLS certificates automatically, while Kubernetes Secrets hold bootstrap credentials and optional policy automation inputs. This keeps platform concerns separated: ingress handles north-south traffic, MinIO handles object operations, and your observability stack tracks service health.

Request flow is straightforward. A client uploads an object to the public hostname. NGINX Ingress routes traffic to the MinIO service in-cluster. MinIO validates credentials and persists object data to volumes provisioned through PersistentVolumeClaims. Operationally, you will also expose the MinIO console endpoint with strict access controls so administrators can inspect buckets, policy assignments, and cluster health without shelling into pods.

Prerequisites

  • Kubernetes cluster (v1.27+) with a default StorageClass suitable for durable workloads.
  • kubectl and helm installed on your workstation with admin access to the target cluster.
  • NGINX Ingress Controller already installed and reachable from your DNS zone.
  • cert-manager installed with a working ClusterIssuer (for example, Let's Encrypt).
  • A DNS record pointing to your ingress endpoint, e.g., minio.example.com.
  • Secret management policy for storing bootstrap credentials outside Git.
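
If you still need a ClusterIssuer, a minimal Let's Encrypt HTTP-01 issuer looks like the sketch below. The email address and account-key Secret name are placeholders; adjust the solver to match your ingress class and environment.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Production ACME endpoint; point at the staging endpoint while testing
    # to avoid Let's Encrypt rate limits.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform-team@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key      # stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx
```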

Step-by-step deployment

1) Create namespace and baseline policy objects

Start by isolating MinIO into its own namespace. This improves policy scoping, keeps RBAC cleaner, and simplifies quota or network policies later.

kubectl create namespace minio

kubectl label namespace minio app=minio tier=storage

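Since this step is about baseline policy objects, a NetworkPolicy is a natural addition. The sketch below denies all ingress traffic to the namespace except from the ingress controller; the `ingress-nginx` namespace label and the MinIO ports (9000 API, 9001 console) are assumptions to verify against your cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: minio-allow-ingress-only
  namespace: minio
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes: ["Ingress"]
  ingress:
    - from:
        # Assumes the ingress controller runs in a namespace named ingress-nginx
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 9000         # MinIO S3 API (default)
        - protocol: TCP
          port: 9001         # MinIO console (default)
```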

2) Create secrets without committing credentials to Git

Use a Secret for root user and password. In production, source values from Vault, SOPS, or your secret manager and inject them at deploy time.

kubectl -n minio create secret generic minio-root-creds \
  --from-literal=rootUser='minio-admin' \
  --from-literal=rootPassword='REPLACE_WITH_LONG_RANDOM_PASSWORD'

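To avoid hand-typed passwords, generate the root credential locally with a CSPRNG. This sketch assumes openssl is installed; any secure random source works.

```shell
# Generate the root password locally instead of inventing one by hand.
ROOT_PASSWORD="$(openssl rand -base64 33)"

# 33 random bytes base64-encode to exactly 44 characters, comfortably long
# for a root credential.
echo "generated password length: ${#ROOT_PASSWORD}"
```

Pass the value to the kubectl command above via --from-literal=rootPassword="$ROOT_PASSWORD" so the plaintext never lands in a file or in Git.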

3) Add the MinIO chart repository and pin a chart version

Pinning chart versions prevents accidental behavior changes during routine deploy pipelines.

helm repo add minio https://charts.min.io/
helm repo update
helm search repo minio/minio --versions | head -n 5


4) Prepare production values for Helm

Create a values file (values-minio.yaml) tuned for your storage class, resource profile, and ingress hostnames. Reference the Secret from step 2 via existingSecret instead of inlining root credentials.

mode: standalone
replicas: 1

# The chart reads keys named rootUser and rootPassword from this Secret
# (created in step 2), keeping credentials out of the values file.
existingSecret: minio-root-creds

persistence:
  enabled: true
  storageClass: fast-ssd
  size: 500Gi

resources:
  requests:
    cpu: 1000m
    memory: 2Gi
  limits:
    cpu: 2000m
    memory: 4Gi

service:
  type: ClusterIP

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  hosts:
    - minio.example.com
  tls:
    - secretName: minio-tls
      hosts:
        - minio.example.com

consoleIngress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - minio-console.example.com
  tls:
    - secretName: minio-console-tls
      hosts:
        - minio-console.example.com


5) Deploy MinIO with Helm

Install or upgrade using an idempotent command so your pipeline can reuse the same step for first deploys and day-2 changes.

helm upgrade --install minio minio/minio \
  --namespace minio \
  --create-namespace \
  --version <PINNED_CHART_VERSION> \
  --values values-minio.yaml \
  --wait --timeout 15m


6) Validate ingress, TLS, and endpoint reachability

Before onboarding applications, confirm pods are ready, certificates are issued, and DNS resolves correctly.

kubectl -n minio get pods -o wide
kubectl -n minio get ingress
kubectl -n minio get certificate
curl -I https://minio.example.com


7) Create least-privilege access keys for applications

Do not share root credentials with workloads. Create scoped users and policies for each application boundary.

# Example using MinIO client from an admin workstation
mc alias set prod https://minio.example.com minio-admin 'REDACTED'
mc admin user add prod app-user 'REPLACE_STRONG_APP_PASSWORD'
mc admin policy attach prod readwrite --user app-user

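The built-in readwrite policy attached above is still broad; a bucket-scoped policy tightens it further. The sketch below restricts the user to a single bucket; the bucket name app-data and policy name app-data-rw are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::app-data"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::app-data/*"]
    }
  ]
}
```

With recent mc releases you can load and attach it with `mc admin policy create prod app-data-rw policy.json` followed by `mc admin policy attach prod app-data-rw --user app-user`.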

8) Define backup and upgrade runbooks

Production readiness requires repeatable backups and reversible upgrades. Store your runbook beside infrastructure code and test it quarterly.

# Snapshot PVCs (example pattern)
kubectl -n minio get pvc

# Chart upgrade dry-run first
helm upgrade --install minio minio/minio \
  --namespace minio --values values-minio.yaml --dry-run

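If your storage class supports CSI snapshots, PVC snapshots can be expressed declaratively. The sketch below assumes a VolumeSnapshotClass exists; the class name and PVC name are placeholders, so take the real claim name from the kubectl get pvc output above.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: minio-data-snap-2024w01        # placeholder; date-stamp your snapshots
  namespace: minio
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder; must exist in cluster
  source:
    persistentVolumeClaimName: minio       # placeholder; use `kubectl get pvc`
```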

Configuration and secrets handling best practices

Secrets should never be hard-coded in repository files, CI variables with broad read access, or plaintext chat logs. Use short-lived credentials wherever possible and rotate static keys on a scheduled cadence. For MinIO specifically, split duties between administrator credentials and application-scoped credentials, then attach narrowly scoped policies per workload. This pattern constrains blast radius if a single key leaks.

Store environment-specific configuration in values files per environment (dev/staging/prod), but keep secret material external. If your platform uses External Secrets Operator or CSI drivers, mount credentials dynamically at runtime and avoid long-lived Kubernetes Secret objects. At minimum, encrypt Kubernetes secrets at rest and limit who can get/list secrets via RBAC.
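As an illustration of the External Secrets Operator pattern, the sketch below syncs the MinIO root credentials from an external store into the namespace. The ClusterSecretStore name vault-backend and the remote key path are placeholders for whatever your platform provides.

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: minio-root-creds
  namespace: minio
spec:
  refreshInterval: 1h                # re-sync cadence from the external store
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend              # placeholder store configured separately
  target:
    name: minio-root-creds           # Kubernetes Secret the chart references
  data:
    - secretKey: rootUser
      remoteRef:
        key: platform/minio          # placeholder path in the external store
        property: rootUser
    - secretKey: rootPassword
      remoteRef:
        key: platform/minio
        property: rootPassword
```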

Verification checklist

  • Pods are Running and Ready with no restart loop.
  • TLS certificates are Ready=True and renew automatically.
  • Ingress endpoints return HTTP 200/403 as expected (not 502/503).
  • Non-root application user can create/read/delete objects only in approved buckets.
  • Monitoring alerts trigger on pod down, storage pressure, and certificate expiry windows.
  • Backup restore test succeeds in a non-production namespace.

# Quick smoke test
mc alias set prod https://minio.example.com app-user 'REDACTED'
mc mb prod/health-check-bucket
echo 'ok' > /tmp/ok.txt
mc cp /tmp/ok.txt prod/health-check-bucket/ok.txt
mc cat prod/health-check-bucket/ok.txt

# Clean up the test artifacts
mc rm prod/health-check-bucket/ok.txt
mc rb prod/health-check-bucket


Common issues and fixes

Ingress returns 502/503

Usually this means service port mismatch, failed pod readiness, or an ingress class mismatch. Verify service target ports, check endpoint objects, and ensure your ingress controller watches the namespace/class you declared.

kubectl -n minio describe ingress
kubectl -n minio get svc
kubectl -n minio get endpoints


Certificate not issuing

Most often a DNS challenge or HTTP challenge cannot complete. Confirm DNS is pointing at the ingress controller and inspect cert-manager events.

kubectl -n cert-manager logs deploy/cert-manager --tail=200
kubectl -n minio describe certificate minio-tls


Slow uploads under load

Check storage class IOPS limits, ingress body-size annotations, node network throughput, and CPU throttling. MinIO is sensitive to underlying disk performance and network consistency.

kubectl -n minio top pods
kubectl describe pvc -n minio
kubectl get events -n minio --sort-by=.lastTimestamp | tail -n 30


FAQ

Can I run MinIO in distributed mode later without data migration pain?

Yes, but treat it as a planned architecture change. Test migration paths in staging, verify client compatibility, and benchmark read/write latency before production cutover.

Should I expose both API and console publicly?

Expose only what you need. Many teams keep console access restricted to VPN or office IP ranges while allowing API traffic from application networks.
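
One way to enforce this with NGINX Ingress is a source-range allowlist on the console host. The CIDRs below are placeholders for your VPN or office egress ranges.

```yaml
consoleIngress:
  annotations:
    # Placeholder CIDRs: replace with your VPN or office egress ranges.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.8.0.0/16,203.0.113.0/24"
```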

What is the safest credential strategy for CI pipelines?

Use short-lived credentials issued by your secret manager at runtime. Avoid long-lived static keys in CI variables whenever possible.

How often should I test restore procedures?

At least quarterly, and after every major version upgrade. A backup policy is incomplete until restore has been validated on representative data.

Can I use this setup for AI artifact storage?

Yes. MinIO is commonly used for model artifacts, training datasets, and feature-store snapshots, provided you size storage throughput and lifecycle rules correctly.

How do I rotate application keys with minimal downtime?

Issue a second key, roll workloads gradually, verify access logs, and revoke the old key after successful cutover. Never rotate all production consumers in one step.

Do I need object versioning from day one?

For critical datasets, yes. Versioning and retention policies provide a safety net against accidental overwrites and destructive automation mistakes.


Talk to us

Need help deploying MinIO in production, designing resilient storage architecture, or building secure backup and upgrade runbooks your team can trust? We can help with architecture, hardening, migration, and operational readiness.

Contact Us
