Argo CD is one of the fastest ways to move a Kubernetes platform from ad-hoc changes to repeatable, auditable GitOps operations. In real production environments, teams often start with good intentions, then drift into manual patching during incidents, rushed Friday deploys, and inconsistent rollback practices across clusters. This guide shows a hardened path to deploy Argo CD on Ubuntu-backed Kubernetes with Helm, ingress-nginx, and TLS, then connect it to a practical Git workflow you can run under operational pressure.
The objective is not just “make Argo CD run,” but “make Argo CD dependable”: controlled access, durable state for stateful platform add-ons in external PostgreSQL, predictable ingress behavior, observable sync health, and clean operational routines for backup and upgrade. If you are running customer-facing services or internal critical apps, this pattern reduces deployment risk and gives your team one source of truth for desired state.
Architecture and flow overview
This deployment uses a dedicated argocd namespace in Kubernetes, the official Argo Helm chart, ingress-nginx as the public entry point, and cert-manager for automated TLS issuance. We keep secrets in Kubernetes Secret objects (optionally sealed or externally managed), and we store platform configuration in Git so every change is reviewable. Argo CD continuously compares cluster state with repository state and reconciles drift.
For high-trust production, use external PostgreSQL for add-ons that need relational persistence in your platform stack, and keep Argo CD itself stateless where possible so recovery is straightforward. The most important design decision is separation of concerns: ingress and certificates at the cluster edge, Argo CD for deployment orchestration, and application data services managed independently with explicit backup policy.
Developer PR -> Git main branch
Git webhook/poll -> Argo CD repo-server
Argo CD controller -> Kubernetes API reconciliation
ingress-nginx -> argocd-server service
cert-manager -> Let's Encrypt certificate lifecycle
Audit trail -> Git history + Argo CD sync events
Prerequisites
Before deployment, confirm your baseline: an Ubuntu host or VM from which you administer a healthy Kubernetes cluster, kubectl access with cluster-admin permissions, Helm v3 installed, ingress-nginx running, DNS control for your Argo CD hostname, and cert-manager installed with a working ClusterIssuer. If a firewall or cloud security group sits in front of your networking path, open ports 80/443 to the ingress endpoints.
Operationally, decide ownership up front: who approves GitOps changes, who can add cluster destinations, and who rotates credentials. Most outages in GitOps platforms are process failures, not software defects. Define role boundaries now so incident response is fast later.
kubectl version
helm version
kubectl get ns
kubectl get pods -n ingress-nginx
kubectl get pods -n cert-manager
kubectl get clusterissuer
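The tooling checks above can be wrapped in a small preflight script so you get one summary instead of eyeballing individual commands. This is an illustrative sketch, not official tooling; the tool list is an assumption you should extend for your environment:

```shell
#!/bin/sh
# Preflight sketch: report whether the required CLIs are on PATH.
# The tool list is an assumption; extend it for your environment.
# (In CI you would likely exit non-zero when anything is missing.)
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool found"
  else
    echo "missing: $tool"
  fi
done
```

The cluster-side checks (ingress-nginx, cert-manager, ClusterIssuer) still need the kubectl commands above, since they depend on live cluster state.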
Step-by-step deployment
1) Create namespace and baseline configuration
Create a dedicated namespace and apply baseline labels to make policy and monitoring easier. Keep platform namespaces clean and explicit; this makes RBAC auditing and backup scoping much safer.
kubectl create namespace argocd
kubectl label namespace argocd app.kubernetes.io/part-of=platform
kubectl get ns argocd --show-labels
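Since this guide keeps platform configuration in Git, the namespace itself can also be declared as a manifest rather than created imperatively. A minimal sketch mirroring the commands above (the file path is an assumption about your repo layout):

```yaml
# namespaces/argocd.yaml -- illustrative path in your infrastructure repo
apiVersion: v1
kind: Namespace
metadata:
  name: argocd
  labels:
    app.kubernetes.io/part-of: platform
```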
2) Add Helm repository and pin chart version
Do not deploy “latest” in production blindly. Pin a tested chart version and upgrade intentionally. This avoids surprise behavior changes and gives you reproducible rollbacks.
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm search repo argo/argo-cd --versions | head -n 10
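The pinning discipline can be made mechanical: keep the approved chart version in a file in your infrastructure repo and have deploy tooling refuse to proceed on mismatch. A minimal sketch; the file name and version value are assumptions:

```shell
#!/bin/sh
# Sketch: compare the chart version a deploy is about to use against a
# pinned version committed to the repo. File name and values are examples.
PIN_FILE="argocd-chart-version.txt"
echo "7.7.16" > "$PIN_FILE"       # normally committed to Git, not generated here

REQUESTED="7.7.16"                # normally passed in by your pipeline
PINNED="$(cat "$PIN_FILE")"

if [ "$REQUESTED" = "$PINNED" ]; then
  echo "version check passed: $REQUESTED"
else
  echo "version drift: requested $REQUESTED, pinned $PINNED" >&2
  exit 1
fi
```

Your helm command can then read the version straight from this file (`--version "$(cat argocd-chart-version.txt)"`), making a Git review the only way the deployed chart version changes.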
3) Prepare production values file
Your values file is the heart of repeatability. Set the hostname, enforce TLS, enable ingress annotations for your ingress-nginx class, and keep insecure server mode off. Keep this file in your infrastructure repository with code review gates.
global:
  domain: argocd.sysbrix.com
configs:
  params:
    server.insecure: false
server:
  ingress:
    enabled: true
    ingressClassName: nginx
    hostname: argocd.sysbrix.com
    tls: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      nginx.ingress.kubernetes.io/proxy-body-size: "10m"
controller:
  replicas: 1
repoServer:
  replicas: 2
redis:
  enabled: true
4) Install Argo CD with Helm
Install with explicit namespace and values. Keep command output in your deployment log so on-call engineers can quickly confirm what was applied during an incident review.
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd \
  --version 7.7.16 \
  --values values-argocd-prod.yaml
kubectl get pods -n argocd -o wide
kubectl get ingress -n argocd
5) Bootstrap admin access and rotate immediately
The default admin secret is acceptable for first login only. Retrieve it, log in, then rotate credentials immediately and integrate SSO as soon as possible. For teams larger than one or two engineers, SSO with groups is mandatory for safe operations.
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
argocd login argocd.sysbrix.com --grpc-web
argocd account update-password
6) Register a repository and create first application
Start with one low-risk application. Avoid onboarding your entire platform in one pass. A staged rollout gives you clean blast-radius control and helps your team learn sync behavior before high-impact workloads move under GitOps.
argocd repo add [email protected]:your-org/platform-gitops.git \
  --ssh-private-key-path ~/.ssh/id_ed25519
argocd app create demo-nginx \
  --repo [email protected]:your-org/platform-gitops.git \
  --path apps/demo-nginx \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace demo
argocd app sync demo-nginx
argocd app get demo-nginx
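The imperative `argocd app create` above is convenient for a first application, but under GitOps you will usually commit the same definition declaratively so it goes through review like everything else. A sketch of the equivalent Application manifest; the `project`, `targetRevision`, and sync-option values are illustrative assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-nginx
  namespace: argocd
spec:
  project: default            # assumption: move to a bounded AppProject later
  source:
    repoURL: [email protected]:your-org/platform-gitops.git
    path: apps/demo-nginx
    targetRevision: main      # assumption: your default branch
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
```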
Configuration and secrets handling best practices
Never store plaintext secrets in Git. Use Sealed Secrets, SOPS (age/GPG), or an external secrets operator tied to your vault platform. Argo CD should deploy encrypted references or sealed blobs, not human-readable credentials. This is the single most important control to keep GitOps compliant in regulated environments.
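As an illustration of the “sealed blobs, not human-readable credentials” rule, a Sealed Secrets object carrying repository credentials looks roughly like this. The ciphertext values are placeholders (real ones come from `kubeseal`), and the label shown is the convention Argo CD uses to recognize repository secrets:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: platform-gitops-repo   # hypothetical name
  namespace: argocd
spec:
  template:
    metadata:
      labels:
        argocd.argoproj.io/secret-type: repository
  encryptedData:
    url: AgB...                # placeholder ciphertext from kubeseal
    sshPrivateKey: AgC...      # placeholder ciphertext from kubeseal
```

Only the sealed form lives in Git; the controller in the cluster holds the private key that can decrypt it.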
For repository credentials, favor deploy keys scoped to specific repos over broad personal tokens. For Kubernetes service accounts, grant the minimum namespace and verb scope required. Cluster-admin for all workloads defeats the purpose of policy-driven operations and makes post-incident forensics difficult.
Set sync windows for sensitive services so high-risk workloads only reconcile during approved periods. Use health checks and sync waves to sequence dependencies (for example, CRDs first, controllers second, applications last). During emergency operations, document every manual override and backport it to Git immediately to prevent hidden drift.
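Sequencing with sync waves is done with a single annotation per resource; lower waves apply first within a sync. A minimal sketch of the ordering described above (the workload name is hypothetical):

```yaml
# Sketch: controllers in wave 0, applications in a later wave; CRDs would
# use a lower wave such as "-1". The annotation is the only Argo CD-specific part.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-controller        # hypothetical workload
  annotations:
    argocd.argoproj.io/sync-wave: "0"
```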
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform-prod
  namespace: argocd
spec:
  description: Production workloads with bounded destinations
  sourceRepos:
    - '[email protected]:your-org/*'
  destinations:
    - namespace: '*'
      server: https://kubernetes.default.svc
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  roles:
    - name: readonly
      policies:
        - p, proj:platform-prod:readonly, applications, get, platform-prod/*, allow
      groups:
        - sso-platform-readonly
Verification checklist
Production readiness means proving expected behavior before onboarding critical apps. Verify certificate issuance, UI reachability, repo connectivity, and app reconciliation from commit to running pod. Run this checklist after install and after every upgrade.
- Argo CD server ingress resolves publicly with valid TLS certificate.
- All core pods are healthy in the argocd namespace.
- Git repository credentials are accepted and repository status is healthy.
- At least one test application syncs cleanly and reports Healthy/Synced.
- Drift test succeeds: intentional out-of-band change is detected and reconciled.
- Auditability confirmed: change appears in Git history and Argo CD events.
kubectl get pods -n argocd
kubectl get ingress -n argocd
argocd app list
argocd app get demo-nginx
argocd app history demo-nginx
kubectl -n demo get deploy,pods,svc
Common issues and fixes
Issue: ingress works but login fails intermittently
This often comes from mismatched TLS termination assumptions, proxy headers, or cookie settings. Ensure ingress annotations and Argo server protocol settings align. If you terminate TLS at ingress, preserve forwarding headers consistently and avoid split behavior across multiple ingress classes.
Issue: repository authentication fails after key rotation
Argo CD caches repository connection state. Re-add repository credentials cleanly and test connectivity with a shallow repo operation. Also verify that host key verification policy and known_hosts entries are correct for your Git provider.
Issue: apps stay OutOfSync due to mutating webhooks
Some controllers mutate resources at runtime, creating expected differences. Configure resource customizations/ignore differences for known fields, but do this narrowly. Broad ignore rules can hide real drift and create security blind spots.
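Narrow ignore rules live in the Application spec under `ignoreDifferences`. A sketch that ignores one webhook-managed field instead of whole objects; the annotation path is a hypothetical example of a field your mutating webhook might inject:

```yaml
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/template/metadata/annotations/sidecar-injected-at   # hypothetical injected field
```

Scoping by group, kind, and a single JSON pointer keeps the blind spot as small as possible, in line with the warning above.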
Issue: sync succeeds but workloads fail readiness
By default, Argo CD only confirms that Kubernetes objects were applied; it does not guarantee workload readiness unless health checks are configured properly. Use probes, dependency ordering, and post-sync validation jobs for stateful or externally dependent services. Add synthetic checks where possible so failures surface quickly.
Issue: recovery after namespace deletion is slow
Keep a tested bootstrap runbook: recreate namespace, reinstall Argo CD from pinned chart/version, restore required secrets, then resync applications from Git. If this procedure is not rehearsed, your recovery time objective is probably aspirational rather than real.
FAQ
Can I run Argo CD without exposing it publicly?
Yes. Many teams keep Argo CD private behind VPN or zero-trust access and only expose webhook endpoints through controlled ingress. The operational tradeoff is additional access workflow complexity for engineers and automation.
Should I enable auto-sync for every production application?
Not immediately. Start with manual sync for high-risk services, then move to auto-sync once guardrails, review discipline, and rollback confidence are mature. Auto-sync is powerful, but governance must be in place first.
How do I structure Git repositories for multi-team platforms?
Use clear separation between platform components, shared services, and product applications. A mono-repo can work with strict ownership rules; multi-repo works well when teams are autonomous. Whichever model you choose, enforce branch protections and review requirements.
What is the safest way to handle secrets in GitOps?
Prefer encryption-at-rest in Git via SOPS or Sealed Secrets plus strict key management. Avoid plaintext secrets entirely. Rotate keys regularly and test decryption/recovery paths before they are needed during incidents.
How do I prevent one team from deploying to every namespace?
Use Argo CD AppProjects with explicit destination constraints and role policies mapped to SSO groups. This keeps deployment boundaries clear and reduces lateral risk from accidental or unauthorized changes.
What backup strategy should I implement for this setup?
Back up Argo CD namespace manifests, critical secrets, and any external data services tied to your workloads. More importantly, test restore quarterly. A backup plan that has never been restored is not a reliable control.
Related internal guides
- Deploy Harbor with Kubernetes + Helm + cert-manager + ingress-nginx
- Deploy NetBox on Kubernetes with Helm and external PostgreSQL
- Deploy Authentik with Docker Compose + Traefik
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.