
Production Guide: Deploy Harbor with Kubernetes Helm and ingress-nginx on Ubuntu

A production-focused Harbor deployment with security hardening, TLS, RBAC, verification, and incident-ready troubleshooting.

If your platform team ships containerized services across business units, a private registry quickly becomes mandatory. Harbor provides project-level RBAC, vulnerability scanning, replication, retention, and policy controls that fit production governance. This guide shows a practical Kubernetes deployment using Helm with ingress-nginx, then layers day-2 hardening for real operations.

We will deploy Harbor in a dedicated namespace, configure TLS and storage, enforce safe secret handling, validate the workflow end-to-end, and add troubleshooting patterns your on-call team can apply under pressure. The goal is an implementation you can run in staging and promote into production without redesign.

Architecture and flow overview

Traffic enters through ingress-nginx over HTTPS and routes to Harbor services in a dedicated namespace. Harbor core, registry, jobservice, and portal communicate through Kubernetes services while data persists on PVC-backed storage. Security boundaries come from namespace isolation, Kubernetes RBAC, and Harbor project roles.

  • Edge: ingress-nginx handles TLS and routing.
  • Application: Harbor core/portal/registry/jobservice/scanner components.
  • Persistence: PVCs for blobs, metadata, and job logs.
  • Security: secrets for credentials and TLS, role-based access per project.
  • Operations: logs, health checks, retention, replication, and backup workflows.
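The namespace-isolation boundary can be made explicit with a NetworkPolicy. The sketch below is an illustration, not chart-provided configuration: it admits traffic from the ingress controller and between Harbor components only, and the `ingress-nginx` namespace name is an assumption you should verify against your cluster.

```yaml
# Hypothetical sketch: restrict inbound traffic to Harbor pods.
# Verify the controller namespace with:
#   kubectl get ns --show-labels
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: harbor-allow-ingress
  namespace: harbor
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        # intra-namespace traffic between Harbor components
        - podSelector: {}
        # traffic from the ingress controller namespace (assumed name)
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```

Apply it only after confirming your CNI plugin enforces NetworkPolicy; on clusters without enforcement it is silently inert.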

Prerequisites

  • Ubuntu admin workstation with kubectl and helm.
  • Kubernetes 1.27+ with default StorageClass.
  • ingress-nginx installed and externally reachable.
  • DNS hostname such as harbor.example.com.
  • TLS certificate (cert-manager or pre-provisioned secret).
  • Backup destination and change-control window.

Step 1: Create namespace and baseline guardrails

Use a dedicated namespace to limit blast radius and simplify auditing. Add quotas early so growth does not silently overwhelm cluster resources.

kubectl create namespace harbor
kubectl -n harbor create resourcequota harbor-quota \
  --hard=requests.cpu=8,requests.memory=16Gi,limits.cpu=16,limits.memory=32Gi,persistentvolumeclaims=20

# kubectl has no imperative "create limitrange" subcommand, so apply a manifest.
# Adjust the default requests/limits to your workloads.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: harbor-limits
  namespace: harbor
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 250m
        memory: 256Mi
      default:
        cpu: "1"
        memory: 1Gi
EOF


Step 2: Add Helm repo and pin chart versions

Pinning chart versions avoids accidental changes across maintenance windows. Always inspect chart defaults before overriding values.

helm repo add harbor https://helm.goharbor.io
helm repo update
helm search repo harbor/harbor --versions | head -n 10
export HARBOR_CHART_VERSION=1.14.0
helm show values harbor/harbor --version ${HARBOR_CHART_VERSION} > /tmp/harbor-values-default.yaml


Step 3: Define production values

Size persistence for real artifact growth, set ingress behavior for large image uploads, and keep the external URL consistent for clients and robot accounts. Save the following as /tmp/harbor-values.yaml.tpl; the password placeholder is substituted at deploy time in Step 4.

expose:
  type: ingress
  tls:
    enabled: true
    certSource: secret
    secret:
      secretName: harbor-tls
  ingress:
    className: nginx
    hosts:
      core: harbor.example.com
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "600"

externalURL: https://harbor.example.com
harborAdminPassword: "REPLACE_WITH_STRONG_SECRET"

persistence:
  enabled: true
  persistentVolumeClaim:
    registry:
      size: 200Gi
    trivy:
      size: 5Gi
    jobservice:
      jobLog:
        size: 20Gi
    database:
      size: 20Gi
    redis:
      size: 10Gi

trivy:
  enabled: true
  skipUpdate: false


Step 4: Secret and TLS handling

Do not commit plaintext credentials. Inject secrets at deploy time from your secret manager. Keep TLS certificates rotated and observable.

# Generate a strong admin password and render the values file.
# Note the "|" sed delimiter: base64 output can contain "/".
export HARBOR_ADMIN_PASSWORD="$(openssl rand -base64 36)"
sed "s|REPLACE_WITH_STRONG_SECRET|${HARBOR_ADMIN_PASSWORD}|g" \
  /tmp/harbor-values.yaml.tpl > /tmp/harbor-values.yaml
kubectl -n harbor create secret tls harbor-tls \
  --cert=/path/to/fullchain.pem \
  --key=/path/to/privkey.pem


Step 5: Install Harbor atomically

Use atomic mode to avoid half-deployed states. If anything fails, Helm rolls back cleanly.

helm upgrade --install harbor harbor/harbor \
  --namespace harbor \
  --version ${HARBOR_CHART_VERSION} \
  -f /tmp/harbor-values.yaml \
  --atomic \
  --timeout 20m
kubectl -n harbor get pods -o wide
kubectl -n harbor get ingress


Step 6: Validate endpoint and registry behavior

Before broad rollout, validate TLS, auth paths, and artifact lifecycle operations from both CI and operator endpoints.

curl -I https://harbor.example.com

# Example local validation workflow
docker login harbor.example.com

docker pull alpine:3.20
docker tag alpine:3.20 harbor.example.com/team/alpine:test
# push/pull test against your own project namespace after project creation
docker push harbor.example.com/team/alpine:test


Step 7: Day-2 operations and governance

After deployment, focus on policy and reliability. Default projects to private, enforce robot-account usage for CI, and apply retention controls per team. Configure scanner schedules to match release cadence so vulnerabilities are fresh at promotion time rather than discovered days later.
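To make robot-account usage concrete, the sketch below creates a project-scoped robot via the Harbor v2 REST API. The host, project name ("team"), robot name, and 30-day lifetime are placeholder assumptions; verify the payload shape against the API documentation for your Harbor version before automating it.

```
# Hypothetical example: project-scoped CI robot with push/pull only.
curl -sf -u "admin:${HARBOR_ADMIN_PASSWORD}" \
  -H "Content-Type: application/json" \
  -X POST "https://harbor.example.com/api/v2.0/robots" \
  -d '{
        "name": "ci-pusher",
        "duration": 30,
        "level": "project",
        "permissions": [{
          "kind": "project",
          "namespace": "team",
          "access": [
            {"resource": "repository", "action": "push"},
            {"resource": "repository", "action": "pull"}
          ]
        }]
      }'
```

The response contains the generated token once; store it in your secret manager immediately, since it cannot be retrieved again.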

For platform governance, track repository ownership and map it to service ownership. This is critical during incident response when you need quick answers about who can approve, rollback, or quarantine artifacts. Implement a monthly access review for Harbor project memberships and automate removal of stale permissions.
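A monthly access review can start as a simple dump of project memberships for offline diffing. This sketch assumes the Harbor v2 API endpoints, admin credentials in the environment, and jq on the workstation; adapt pagination for registries with more than 100 projects.

```
# Hypothetical access-review sweep: entity name and role per project.
HARBOR=https://harbor.example.com
AUTH="admin:${HARBOR_ADMIN_PASSWORD}"
for id in $(curl -sf -u "$AUTH" "$HARBOR/api/v2.0/projects?page_size=100" \
              | jq -r '.[].project_id'); do
  echo "== project $id =="
  curl -sf -u "$AUTH" "$HARBOR/api/v2.0/projects/$id/members" \
    | jq -r '.[] | "\(.entity_name)\t\(.role_name)"'
done
```

Commit each month's output to a private repo so drift shows up as a diff rather than a surprise during an incident.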

Replication is your safety net for regional disruption and migration projects. Test replication under load, verify conflict behavior, and monitor lag. Backups should capture both metadata and blobs, followed by restore drills in staging. Production readiness requires confirmed restore time objectives, not just successful backup logs.
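As a starting point for the metadata half of a backup, the sketch below dumps the bundled PostgreSQL database; skip it if you use an external managed database. The pod name harbor-database-0 and the internal database name follow default chart conventions and may differ in your release, so confirm both before relying on this.

```
# Backup sketch (assumed default chart naming; verify pod and DB names).
kubectl -n harbor exec harbor-database-0 -- \
  pg_dump -U postgres -Fc registry > /backup/harbor-db-$(date +%F).dump

# Blob data lives on the registry PVC; snapshot it with your CSI driver.
kubectl -n harbor get pvc harbor-registry -o jsonpath='{.spec.volumeName}'
```

Pair every dump with a staging restore drill, since an unreadable backup is only discovered when you need it most.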

Observability should include ingress error rates, Harbor pod restarts, storage growth trendlines, scanner queue depth, and authentication failures. Build dashboards your on-call team can read in seconds and couple each alert to a concrete runbook step. This reduces time-to-mitigate during high-pressure incidents.
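If you run the Prometheus Operator, Harbor's chart can expose metrics and a ServiceMonitor directly from values. The fragment below assumes the metrics keys present in recent chart versions; confirm the exact names against `helm show values` for your pinned version.

```yaml
# Values fragment (assumed chart keys; verify against your pinned chart).
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
```

With metrics flowing, alert on pod restarts, 5xx rates at the ingress, and PVC usage trends rather than waiting for push failures to surface problems.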

Finally, document upgrade strategy. Pin versions, test in staging, snapshot state before changes, and perform blue/green or maintenance-window cutovers based on organizational tolerance. A predictable change process is often more valuable than raw feature velocity in regulated or high-availability environments.
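A minimal upgrade flow under those constraints might look like this; the target chart version is a placeholder, and the snapshot step assumes you keep rendered values alongside your state backups.

```
# Hypothetical upgrade flow: snapshot state, then cut over atomically
# inside a maintenance window.
export NEW_CHART_VERSION=1.14.1   # placeholder; pick your tested version
helm -n harbor get values harbor > /backup/harbor-values-pre-upgrade.yaml
helm upgrade harbor harbor/harbor \
  --namespace harbor \
  --version ${NEW_CHART_VERSION} \
  -f /tmp/harbor-values.yaml \
  --atomic --timeout 20m
helm -n harbor history harbor   # confirm the new revision is deployed
```

Because --atomic rolls back on failure, a failed window leaves you on the prior revision rather than in a half-upgraded state.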

Configuration and secrets best practices

Use least privilege everywhere. Kubernetes service accounts that deploy apps should read only required secrets and should not hold broad namespace rights. In Harbor, separate reader, developer, maintainer, and admin responsibilities clearly.

Use short-lived credentials where possible and rotate robot tokens on a schedule. For TLS, monitor expiry proactively. In internal PKI environments, ensure every CI runner and developer workstation trusts the full CA chain, or pushes will fail inconsistently and consume incident time.
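For proactive expiry monitoring, openssl can report and threshold-check certificate lifetimes. The snippet generates a throwaway self-signed certificate so the commands run anywhere; against the live endpoint, feed `openssl s_client -connect harbor.example.com:443` output into the same checks.

```shell
# Create a 30-day demo cert so the check is runnable without a cluster.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 30 -subj "/CN=harbor.example.com"

# Print the expiry date, then fail if fewer than 14 days remain.
openssl x509 -enddate -noout -in /tmp/demo.crt
openssl x509 -checkend $((14*24*3600)) -noout -in /tmp/demo.crt \
  && echo "OK: more than 14 days remaining" \
  || echo "WARN: expires within 14 days"
```

Wire the `-checkend` exit code into a cron job or monitoring probe so rotation happens on a schedule, not during an outage.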

When introducing policy gates, phase them in. Start with visibility-only scans, then enforce blocking at promotion boundaries once teams have remediation cadence in place. This balances security outcomes with delivery speed.

Verification

  • All Harbor pods are running and ready.
  • HTTPS endpoint presents expected certificate chain.
  • Project creation and role assignment work.
  • CI robot account can push to allowed project only.
  • Scanner results are visible and current.
  • Retention policy behaves as expected in dry-run checks.
  • Backup and restore drill completed in staging.
kubectl -n harbor get pods
kubectl -n harbor describe ingress
kubectl -n harbor logs deploy/harbor-core --tail=120
kubectl -n harbor get pvc
kubectl -n harbor get events --sort-by=.lastTimestamp | tail -n 30


Common issues and fixes

413 during push

Ingress body size limit is too low. Set proxy body size to unlimited or a suitable high value for artifact sizes.
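The durable fix is the proxy-body-size annotation in your Helm values (as in Step 3), but as a one-off mitigation you can annotate the live ingress. The ingress name harbor-ingress follows default chart naming and may differ in your release; check with `kubectl -n harbor get ingress` first.

```
kubectl -n harbor annotate ingress harbor-ingress \
  nginx.ingress.kubernetes.io/proxy-body-size=0 --overwrite
```

Fold the same annotation back into values afterward, or the next helm upgrade will revert it.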

x509 errors on Docker clients

Client trust store is missing internal CA or intermediate certs. Install full chain and restart Docker daemon where required.
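On Ubuntu clients two trust stores matter: the system store and Docker's per-registry directory. The paths below assume the registry hostname harbor.example.com and a CA chain file named internal-ca-chain.pem; substitute your own.

```
# System trust store (used by curl, most tooling):
sudo cp internal-ca-chain.pem /usr/local/share/ca-certificates/internal-ca.crt
sudo update-ca-certificates

# Docker's per-registry trust directory, then restart the daemon:
sudo mkdir -p /etc/docker/certs.d/harbor.example.com
sudo cp internal-ca-chain.pem /etc/docker/certs.d/harbor.example.com/ca.crt
sudo systemctl restart docker
```

Roll this into your workstation and CI-runner provisioning so trust is uniform, rather than patched host by host during incidents.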

Scanner delays

Scanner resources are undersized. Increase CPU/memory and tune schedules by project priority.

Unexpected access to repositories

Project visibility or role drift occurred. Revert to private defaults and run permission audits.

Registry instability under load

Storage throughput is insufficient. Upgrade storage class and monitor queue depth and IOPS saturation.

Replication failures

Remote credentials expired or network path is unstable. Rotate credentials and add retries with alerting on lag growth.

FAQ

1) Can I run Harbor with external PostgreSQL?

Yes. Many teams use managed PostgreSQL for backup, HA, and operational consistency.

2) Is ingress-nginx mandatory?

No, but this guide aligns with ingress-nginx because it is common and operationally well understood.

3) How should we rotate robot credentials?

Create new token, update CI, validate, then revoke old token to avoid production interruption.

4) What backup scope is required?

Back up metadata and blobs together and validate restore regularly in staging.

5) Should vulnerability scans block all environments?

Block at promotion boundaries first, then extend once remediation workflows are stable.

6) Can Harbor replicate cross-region?

Yes. Validate latency, conflict behavior, and credentials before relying on DR.

7) Should Harbor be internet-exposed?

Only when required; prefer private networking or strong edge controls with rate limits and identity enforcement.
