Container images are now part of every release path, but many teams still rely on public registries and ad-hoc credential handling. That creates avoidable risk: image provenance becomes unclear, pull-rate limits interrupt CI, and incident response slows down when you need to quarantine artifacts quickly. Harbor gives platform teams a private, policy-driven registry with role-based access, robot accounts, immutable tags, and built-in vulnerability scanning.
This guide shows how to deploy Harbor on Kubernetes using Helm with a hardened ingress path, TLS automation through cert-manager, and an external PostgreSQL backend for durability and upgrade safety. The walkthrough is written for day-2 operations, not just first boot: you will set secret boundaries, validate critical paths, and implement backup and recovery checks so the registry remains trustworthy during scale and change.
Target environment in this guide: Ubuntu-based Kubernetes cluster, ingress-nginx installed, DNS controlled by your team, and object storage available as a Harbor registry backend where relevant. Commands are production-oriented and can be adapted to managed clusters with the same control-plane concepts.
Architecture and flow overview
The deployment uses a dedicated harbor namespace and Helm-managed releases. ingress-nginx terminates HTTPS at the edge using certificates issued by cert-manager. Harbor core, jobservice, portal, and registry run as Kubernetes workloads, while PostgreSQL is external to reduce upgrade coupling and simplify backup strategy. Trivy is enabled for vulnerability scanning, and robot accounts are used for non-human pull/push pipelines.
- Ingress layer: ingress-nginx + cert-manager for TLS lifecycle and HTTP routing.
- Application layer: Harbor components managed by Helm values and version-pinned chart release.
- Data layer: external PostgreSQL with dedicated DB/user and network restrictions.
- Security controls: immutable tags, content trust strategy, robot accounts, and scoped credentials.
- Operations: backup jobs for database and registry storage metadata, plus smoke tests after upgrades.
Prerequisites
- Kubernetes cluster (v1.27+) with kubectl admin access.
- Helm v3.13+ installed locally.
- ingress-nginx controller installed and reachable from public DNS.
- cert-manager installed with a working ClusterIssuer (for example Let's Encrypt production).
- External PostgreSQL 14+ reachable from Harbor namespace network path.
- DNS record planned for harbor.sysbrix.internal (replace with your domain; for Let's Encrypt HTTP-01 the name must be publicly resolvable, so use DNS-01 or an internal CA for purely internal domains).
- A secure secret manager, or at minimum encrypted CI variables, for bootstrap credentials.
Step-by-step deployment
Step 1: Prepare namespace and baseline policies
Create a dedicated namespace and apply minimal baseline controls before installing Harbor. This keeps blast radius tight and makes later policy auditing easier.
kubectl create namespace harbor
kubectl label namespace harbor app=harbor env=prod
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-by-default
  namespace: harbor
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
EOF
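Note that this deny-all policy blocks Harbor's own egress too (DNS lookups, the external database), so explicit allow rules are needed alongside it. A sketch, where the PostgreSQL CIDR and port are assumptions to adapt to your network:

```shell
# Hypothetical egress allowance: adjust the PostgreSQL CIDR/port to match
# your environment before applying.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-harbor-egress
  namespace: harbor
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # DNS resolution (no "to" selector = any destination on these ports)
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # External PostgreSQL (assumed CIDR, adjust)
    - to:
        - ipBlock:
            cidr: 10.20.0.0/24
      ports:
        - protocol: TCP
          port: 5432
EOF
```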
Step 2: Add Harbor Helm repository and pin chart version
Pinning chart versions avoids surprise behavior changes across environments and lets you stage upgrades safely.
helm repo add harbor https://helm.goharbor.io
helm repo update
helm search repo harbor/harbor --versions | head -n 10
Step 3: Create Kubernetes secrets for Harbor admin and PostgreSQL
Do not hardcode credentials in values files committed to Git. Store the sensitive strings in separate Kubernetes secrets or use External Secrets if available.
kubectl -n harbor create secret generic harbor-admin-secret --from-literal=HARBOR_ADMIN_PASSWORD='REPLACE_WITH_STRONG_PASSWORD'
kubectl -n harbor create secret generic harbor-db-secret --from-literal=POSTGRESQL_HOST='postgresql-prod.internal' --from-literal=POSTGRESQL_PORT='5432' --from-literal=POSTGRESQL_DATABASE='harbor' --from-literal=POSTGRESQL_USERNAME='harbor_app' --from-literal=POSTGRESQL_PASSWORD='REPLACE_WITH_DB_PASSWORD'
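If a secret manager is not yet feeding these values, the REPLACE_WITH placeholders above can be filled from locally generated randomness at bootstrap time; a sketch using openssl (variable names are illustrative):

```shell
# Generate random bootstrap credentials locally (illustrative; prefer a
# dedicated secret manager as the source of truth where available).
ADMIN_PASS="$(openssl rand -base64 32)"   # 32 random bytes, base64-encoded
DB_PASS="$(openssl rand -base64 32)"
# Feed them straight into the secret-creation commands above, e.g.:
# kubectl -n harbor create secret generic harbor-admin-secret \
#   --from-literal=HARBOR_ADMIN_PASSWORD="$ADMIN_PASS"
```

Rotate these bootstrap values after first login, as recommended later in this guide.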
Step 4: Define production values.yaml
This values file enables ingress with TLS, external PostgreSQL, and scanner support. Keep this file in source control but never inline plaintext secrets.
cat > values-harbor-prod.yaml <<'YAML'
expose:
  type: ingress
  tls:
    enabled: true
    certSource: secret
    secret:
      secretName: harbor-tls
  ingress:
    hosts:
      core: harbor.sysbrix.internal
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
externalURL: https://harbor.sysbrix.internal
harborAdminPassword: CHANGEME_USE_SECRET_INJECTION
database:
  type: external
  external:
    host: postgresql-prod.internal
    port: "5432"
    username: harbor_app
    password: CHANGEME_DB_SECRET
    coreDatabase: harbor
trivy:
  enabled: true
persistence:
  enabled: true
  persistentVolumeClaim:
    registry:
      size: 200Gi
    jobservice:
      size: 20Gi
    redis:
      size: 8Gi
YAML
Step 5: Inject secrets into runtime values and install Harbor
Use a temporary generated values file at deploy time so secret material is not persisted in Git.
ADMIN_PASS=$(kubectl -n harbor get secret harbor-admin-secret -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 -d)
DB_HOST=$(kubectl -n harbor get secret harbor-db-secret -o jsonpath='{.data.POSTGRESQL_HOST}' | base64 -d)
DB_PORT=$(kubectl -n harbor get secret harbor-db-secret -o jsonpath='{.data.POSTGRESQL_PORT}' | base64 -d)
DB_NAME=$(kubectl -n harbor get secret harbor-db-secret -o jsonpath='{.data.POSTGRESQL_DATABASE}' | base64 -d)
DB_USER=$(kubectl -n harbor get secret harbor-db-secret -o jsonpath='{.data.POSTGRESQL_USERNAME}' | base64 -d)
DB_PASS=$(kubectl -n harbor get secret harbor-db-secret -o jsonpath='{.data.POSTGRESQL_PASSWORD}' | base64 -d)
cp values-harbor-prod.yaml /tmp/values-harbor-rendered.yaml
sed -i "s/CHANGEME_USE_SECRET_INJECTION/${ADMIN_PASS//\//\\/}/" /tmp/values-harbor-rendered.yaml
sed -i "s/postgresql-prod\.internal/${DB_HOST}/" /tmp/values-harbor-rendered.yaml
sed -i "s/\"5432\"/\"${DB_PORT}\"/" /tmp/values-harbor-rendered.yaml
sed -i "s/harbor_app/${DB_USER}/" /tmp/values-harbor-rendered.yaml
sed -i "s/CHANGEME_DB_SECRET/${DB_PASS//\//\\/}/" /tmp/values-harbor-rendered.yaml
sed -i "s/coreDatabase: harbor/coreDatabase: ${DB_NAME}/" /tmp/values-harbor-rendered.yaml
helm upgrade --install harbor harbor/harbor --namespace harbor --values /tmp/values-harbor-rendered.yaml --version 1.14.0 --wait --timeout 15m
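One failure mode with the sed-based rendering above is a placeholder that never gets replaced (for example after a values-file refactor). A small guard, run before the helm upgrade, catches that; the function name is illustrative:

```shell
# Fail fast if any CHANGEME placeholder survived substitution.
check_rendered() {
  # Returns non-zero (and prints the offending lines) if placeholders remain.
  ! grep -n 'CHANGEME' "$1"
}
# Usage before the helm upgrade:
# check_rendered /tmp/values-harbor-rendered.yaml \
#   || { echo "unsubstituted placeholders remain" >&2; exit 1; }
```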
Step 6: Create TLS certificate object (if your chart path does not auto-generate it)
cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: harbor-tls
  namespace: harbor
spec:
  secretName: harbor-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - harbor.sysbrix.internal
EOF
kubectl -n harbor wait --for=condition=Ready certificate/harbor-tls --timeout=180s
Step 7: Configure robot accounts and immutable tags
After bootstrap, restrict pushes from CI to robot identities and enforce immutable tags on release repositories to prevent accidental overwrite.
# Verify API credentials; Harbor's v2.0 API uses basic auth, so no separate
# token-retrieval step is required.
HARBOR_URL="https://harbor.sysbrix.internal"
ADMIN_USER="admin"
ADMIN_PASS="<admin-password>"
# A 200 response confirms the credentials work against the API:
curl -s -o /dev/null -w '%{http_code}\n' -u "$ADMIN_USER:$ADMIN_PASS" "$HARBOR_URL/api/v2.0/users/current"
echo "Create robot accounts and immutable tag rules from the Harbor UI/API as part of the onboarding checklist."
Configuration and secrets handling best practices
For production Harbor, separate concerns clearly: chart configuration, runtime secrets, and operational policy should not live in one file or one repository. Keep Helm values in Git with non-secret defaults, inject credentials from a secret manager at deployment time, and rotate all bootstrap passwords immediately after first login. Use robot accounts for CI and map each account to least-privilege project scopes.
Recommended controls:
- Enable immutable tags on release repositories (for example main-* and semantic versions).
- Require signed commits and signed images where your supply-chain tooling allows it.
- Limit admin UI access behind SSO and conditional access controls.
- Store scanner policy exceptions with expiry dates and owner annotations.
- Protect external PostgreSQL with network ACLs that allow traffic only from Harbor namespace egress.
- Document emergency registry read-only procedure for incident containment.
If your organization handles regulated workloads, pair Harbor with an auditable promotion flow: build in dev, scan and sign in staging, then promote immutable digest references into production projects. This dramatically reduces "latest tag drift" and makes rollback deterministic during incidents.
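The digest-based promotion described above can be scripted with a registry-to-registry copy tool; a sketch using skopeo, where the project names, digest, and robot credentials are placeholders to adapt:

```shell
# Promote a scanned, signed artifact by digest from staging to production.
# Digest, project names, and credentials are placeholders.
SRC="docker://harbor.sysbrix.internal/staging/app@sha256:<digest>"
DST="docker://harbor.sysbrix.internal/production/app:v1.4.2"
skopeo copy \
  --src-creds 'robot$promoter:<secret>' \
  --dest-creds 'robot$promoter:<secret>' \
  "$SRC" "$DST"
```

Because the copy references a digest, the promoted artifact is byte-for-byte the one that passed scanning, independent of any tag movement in the source project.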
Verification checklist
Run these checks before giving teams production write access:
kubectl -n harbor get pods
kubectl -n harbor get ingress
kubectl -n harbor get certificate harbor-tls
curl -I https://harbor.sysbrix.internal
# Test docker login and push/pull workflow
docker login harbor.sysbrix.internal
docker pull alpine:3.20
docker tag alpine:3.20 harbor.sysbrix.internal/library/alpine:3.20-smoke
docker push harbor.sysbrix.internal/library/alpine:3.20-smoke
docker pull harbor.sysbrix.internal/library/alpine:3.20-smoke
Expected outcomes: pods are Ready, ingress presents a valid certificate chain, login succeeds for approved identities, push and pull succeed, and Trivy scan results appear for new artifacts within your configured scan window.
# Optional: inspect Harbor release and config drift
helm -n harbor list
helm -n harbor get values harbor
helm -n harbor get manifest harbor | grep -E "kind: (Deployment|StatefulSet|Ingress)" -n
Common issues and fixes
TLS certificate stays Pending
Usually this is DNS mismatch or ClusterIssuer challenge failure. Verify A/AAAA records, ingress class, and cert-manager controller logs. Ensure the Harbor host is resolvable from public ACME validation endpoints if using HTTP-01.
Harbor core starts but cannot connect to PostgreSQL
Check network policy egress, firewall rules, and SSL mode expectations. Validate username/password directly against the PostgreSQL endpoint from a debug pod in the harbor namespace.
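A quick way to run that credential check from inside the cluster, assuming the postgres client image is pullable in your environment (host, user, and database mirror the earlier secret values):

```shell
# Throwaway psql client in the harbor namespace to test the database path.
# Adjust host/user/dbname to your environment; PGPASSWORD is passed via --env.
kubectl -n harbor run pg-debug --rm -it --restart=Never \
  --image=postgres:14 --env=PGPASSWORD='<db-password>' -- \
  psql "host=postgresql-prod.internal port=5432 user=harbor_app dbname=harbor sslmode=prefer" -c 'SELECT 1;'
```

If this succeeds but Harbor core still fails, the problem is almost always a NetworkPolicy or sslmode mismatch rather than the credentials themselves.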
Image push returns 413 Request Entity Too Large
Increase body size in ingress-nginx annotations or controller config map, then redeploy ingress. Large layers are common with ML or monorepo artifact images.
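The relevant knob is the ingress-nginx proxy-body-size annotation on the Harbor ingress; in the Helm values this lands under the existing annotations map, for example:

```yaml
# In values-harbor-prod.yaml: raise or disable the request body limit.
expose:
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "0"   # "0" = unlimited; or an explicit cap such as "2g"
```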
Scanner backlog grows and results arrive late
Scale Trivy resources, tune jobservice workers, and enforce scan-on-push only for target projects where latency matters. Archive old artifacts and apply retention policies to keep queue pressure controlled.
CI jobs break after password rotation
Move CI to robot accounts with dedicated scopes. Rotate robot secrets through your CI secret store and never tie pipeline auth to shared admin users.
FAQ
Can we run Harbor with internal PostgreSQL from the Helm chart instead of external DB?
You can, but external PostgreSQL is usually better for backup consistency, upgrade control, and performance tuning. Internal DB is acceptable for non-critical environments.
What is the minimum HA posture for production Harbor?
At minimum: redundant Kubernetes worker nodes, durable storage classes, external PostgreSQL with backup verification, and ingress configured with monitored TLS renewal.
Should we allow developers to push directly to release projects?
Prefer gated promotion. Let developers push to integration projects, then promote immutable digests to release projects through CI policy checks and approvals.
How often should vulnerability databases and scan policies be reviewed?
Review scanner update cadence weekly, and revisit severity gates monthly with security teams so policies stay aligned with risk tolerance and patch SLAs.
How do we back up Harbor safely?
Back up external PostgreSQL and registry storage metadata on a schedule, then test restore into a staging cluster. A backup without restore validation is only a hope, not a control.
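A minimal database-side sketch, assuming pg_dump access from your backup host; registry storage (PVC or object store) needs its own snapshot or replication path, and the dump file should be shipped off-cluster:

```shell
# Nightly logical dump of the Harbor database (illustrative; schedule via
# CronJob or your backup tooling, with PGPASSWORD or .pgpass for auth).
STAMP="$(date +%Y%m%d)"
pg_dump "host=postgresql-prod.internal user=harbor_app dbname=harbor" \
  --format=custom --file="harbor-${STAMP}.dump"
# Restore drill into a staging database to validate the backup:
# pg_restore --clean --if-exists \
#   -d "host=staging-db user=harbor_app dbname=harbor_restore" "harbor-${STAMP}.dump"
```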
Can Harbor integrate with enterprise SSO?
Yes. Harbor supports OIDC and LDAP integration. Enforce SSO for human users, keep robot accounts for automation, and document break-glass access procedures.
How do we prevent tag overwrite mistakes in CI?
Enable immutable tag rules for release patterns and use digest pinning in deployment manifests. This removes ambiguity and makes rollbacks fast and deterministic.
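Digest pinning in a deployment manifest looks like this (image name and digest are placeholders):

```yaml
# Deploy by immutable digest rather than mutable tag (placeholder digest).
spec:
  containers:
    - name: app
      image: harbor.sysbrix.internal/production/app@sha256:<digest>
```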
Related internal guides
- How to Deploy Gitea with Docker Compose and Caddy: Production Guide
- Deploy NetBox on Kubernetes with Helm, External PostgreSQL, and Production Guardrails
- Production Guide: Deploy Gitea with Docker Swarm + Traefik + PostgreSQL on Ubuntu
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.