Self-hosted photo libraries usually start as a weekend project and become critical infrastructure the first time a team, family, or field crew depends on them to find evidence, project photos, receipts, or historical media. Immich is a strong fit because it gives users modern mobile backup, albums, search, facial recognition, and video support while keeping the media store under your control. This guide shows a production-oriented Ubuntu deployment that uses Docker Compose for repeatable services, Caddy for automatic HTTPS, PostgreSQL with vector support for metadata and search, Redis for background jobs, and a dedicated machine-learning container for recognition workloads.
The goal is not just to make the login screen appear. A durable Immich deployment needs predictable storage paths, documented secrets, recoverable database backups, clear upgrade windows, and verification commands that operators can run after every change. The pattern below mirrors how we deploy small internal media systems: one application host, local volumes that can be snapshotted or backed up, TLS termination outside the app container, and enough health checks to catch obvious issues before users report missing uploads.
Architecture and flow overview
Traffic enters at https://photos.example.com and terminates at Caddy, which renews certificates automatically and proxies requests to Immich on localhost. Immich stores original uploads in the library directory, stores metadata in PostgreSQL, uses Redis for queues, and sends machine-learning tasks to the model container. PostgreSQL, Redis, application files, and the model cache are deliberately separated so they can be backed up, restored, and monitored independently.
A typical request flow is: browser or mobile app connects to Caddy, Caddy forwards to immich-server, the server writes metadata to PostgreSQL, places background tasks in Redis, and stores media under /opt/immich/library. The machine-learning container consumes jobs for thumbnails, smart search, and recognition. For small teams this single-host model is straightforward to operate; for larger libraries, move media storage to durable block storage or an object-storage-backed design and test import speed before committing.
Prerequisites
- Ubuntu 22.04 or 24.04 with a non-root sudo user.
- Docker Engine and the Docker Compose plugin installed.
- A DNS record such as photos.example.com pointing at the server.
- At least 4 CPU cores, 8 GB RAM, and storage sized for the original media plus backups.
- TCP ports 80 and 443 open to Caddy, with Immich bound only to localhost.
Before deploying, decide where the media library will live. Photos and videos grow quickly, so avoid the root filesystem unless it is intentionally sized for media. Mount a data volume at /opt/immich/library or bind it to dedicated storage. If the library is business-critical, make sure snapshots or off-host backups are already available before users begin importing large archives.
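As a quick sanity check before bulk imports, confirm which device actually backs the library path. A minimal sketch; the `fs_of` helper name is illustrative:

```shell
# fs_of PATH: print the device/source backing PATH's filesystem.
# Helps catch the case where /opt/immich/library silently sits on the root disk.
fs_of() {
  df -P "$1" | awk 'NR==2 {print $1}'
}

# Example on a deployed host:
#   fs_of /opt/immich/library   # expect the dedicated data volume, not the root device
#   df -h /opt/immich/library   # remaining capacity
```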
Step-by-step deployment
Create a dedicated application directory and keep every deployment artifact under version control or a private operations repository. The examples below use /opt/immich, but the same layout works on any mounted volume with enough capacity.
sudo mkdir -p /opt/immich/{library,postgres,redis,model-cache}
sudo chown -R $USER:$USER /opt/immich
cd /opt/immich
umask 077
openssl rand -base64 36
Save the generated password in a password manager and use it as DB_PASSWORD in .env. Do not reuse a database password from another service. Immich releases frequently, so tracking the release tag is convenient, but regulated environments may prefer pinned image tags and scheduled update windows. Create .env in /opt/immich with the following contents:
TZ=Etc/UTC
IMMICH_VERSION=release
UPLOAD_LOCATION=./library
DB_HOSTNAME=database
DB_USERNAME=immich
DB_DATABASE_NAME=immich
DB_PASSWORD=replace-with-a-long-random-password
REDIS_HOSTNAME=redis
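The .env file itself can be created with restrictive permissions from the start. A sketch that appends a freshly generated DB_PASSWORD; run it in the deployment directory:

```shell
# Create .env owner-readable only, then append a generated password.
umask 077
touch .env
chmod 600 .env
printf 'DB_PASSWORD=%s\n' "$(openssl rand -base64 36)" >> .env
```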
Create the Compose file. The PostgreSQL image below includes vector support used by Immich search features. The Immich project periodically updates its recommended database image and variables, so review upstream release notes before major upgrades, especially when crossing database or machine-learning changes.
services:
  database:
    image: tensorchord/pgvecto-rs:pg14-v0.2.0
    container_name: immich_postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U immich -d immich"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    container_name: immich_redis
    restart: unless-stopped
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - ./redis:/data
  immich-server:
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
    container_name: immich_server
    restart: unless-stopped
    depends_on:
      database:
        condition: service_healthy
      redis:
        condition: service_started
    env_file: .env
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "127.0.0.1:2283:2283"
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}
    container_name: immich_machine_learning
    restart: unless-stopped
    env_file: .env
    volumes:
      - ./model-cache:/cache
Start the stack and watch logs until the server finishes migrations. Initial machine-learning model downloads can take several minutes and should be expected during the first boot.
cd /opt/immich
docker compose pull
docker compose up -d
docker compose logs -f --tail=100 immich-server immich-machine-learning
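First boot can take a while before the API answers, so a small polling helper avoids guessing when to run the next step. The `wait_for` name and timings are illustrative:

```shell
# wait_for URL TRIES DELAY: poll URL until it responds or attempts run out.
wait_for() {
  local url="$1" tries="${2:-24}" delay="${3:-5}" i
  for i in $(seq 1 "$tries"); do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "no response from $url" >&2
  return 1
}

# Example after first boot:
#   wait_for http://127.0.0.1:2283/api/server/ping
```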
Install Caddy on the host or use an existing reverse-proxy tier. Binding Immich to 127.0.0.1:2283 keeps the app off the public interface while Caddy handles TLS, compression, and security headers.
photos.example.com {
    encode zstd gzip
    reverse_proxy 127.0.0.1:2283
    header {
        X-Content-Type-Options nosniff
        Referrer-Policy strict-origin-when-cross-origin
        X-Frame-Options SAMEORIGIN
    }
}
sudo cp /opt/immich/Caddyfile /etc/caddy/Caddyfile
sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
Configuration and secrets handling best practices
Treat .env, database dumps, and media backups as sensitive. The media library may contain personal identifiers, location history, documents, or customer evidence. Keep .env mode 600, restrict the application directory to the operations group, and never paste real secrets into tickets or shared chat. If you sync backups to object storage, use server-side encryption and a lifecycle policy that matches your retention requirements.
For user access, create named accounts, disable abandoned accounts, and enforce strong passwords or upstream single sign-on if your environment supports it. Do not expose PostgreSQL or Redis externally. If administrators need database access, use SSH tunneling from a trusted workstation rather than opening database ports. For high-volume mobile uploads, document expected Wi-Fi behavior and avoid placing aggressive request-size limits in the proxy.
Backups must include both the database and the media directory. A database-only backup preserves albums and metadata but not originals; a media-only backup preserves files but not the application state. Use a script like the following as a starting point, then copy the archive off-host with your normal backup tool.
#!/usr/bin/env bash
set -euo pipefail
cd /opt/immich
stamp=$(date -u +%Y%m%dT%H%M%SZ)
mkdir -p /opt/backups/immich
/usr/bin/docker compose exec -T database pg_dump -U immich immich | gzip > /opt/backups/immich/db-$stamp.sql.gz
/usr/bin/tar -C /opt/immich -czf /opt/backups/immich/library-$stamp.tar.gz library .env docker-compose.yml Caddyfile
find /opt/backups/immich -type f -mtime +14 -delete
sudo install -m 0750 /opt/immich/backup-immich.sh /usr/local/sbin/backup-immich
sudo crontab -e
# Example: 02:20 UTC every day
20 2 * * * /usr/local/sbin/backup-immich >/var/log/immich-backup.log 2>&1
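Backups that are never read back are a liability, so it is worth checking that the newest artifacts at least decompress cleanly. A sketch; the `check_backup` helper is illustrative and assumes the filenames produced by the script above:

```shell
# check_backup DIR: verify the newest database dump and library archive in DIR
# decompress and list cleanly. Returns nonzero if either is missing or corrupt.
check_backup() {
  local dir="$1" db lib
  db=$(ls -t "$dir"/db-*.sql.gz 2>/dev/null | head -n1)
  lib=$(ls -t "$dir"/library-*.tar.gz 2>/dev/null | head -n1)
  if [ -z "$db" ] || [ -z "$lib" ]; then
    echo "missing backup artifacts in $dir" >&2
    return 1
  fi
  gzip -t "$db" && tar -tzf "$lib" >/dev/null && echo "OK: $db and $lib"
}

# Example: check_backup /opt/backups/immich
```

This only proves the archives are readable; restore testing, covered below, is still required.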
Verification checklist
After deployment, verify the container state, the public HTTPS endpoint, the local application health endpoint, and the Caddy logs. Run these checks after every upgrade and after any DNS, firewall, or certificate change.
docker compose ps
curl -I https://photos.example.com
curl -s http://127.0.0.1:2283/api/server/ping
sudo journalctl -u caddy --since "15 minutes ago" --no-pager
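The local ping endpoint has historically returned a small JSON body containing "pong"; confirm the exact shape against your Immich version. A helper that turns the checklist curl into a pass/fail (the `assert_pong` name is illustrative):

```shell
# assert_pong RESPONSE: succeed only if the ping response looks healthy.
assert_pong() {
  case "$1" in
    *pong*) echo "server healthy" ;;
    *) echo "unexpected ping response: $1" >&2; return 1 ;;
  esac
}

# Usage:
#   assert_pong "$(curl -s http://127.0.0.1:2283/api/server/ping)"
```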
- The public endpoint should return a successful HTTP response over TLS.
- docker compose ps should show the server, database, Redis, and machine-learning containers running.
- Uploads from the mobile app should create files under /opt/immich/library.
- Background jobs should drain after an import rather than growing indefinitely.
- A test backup should produce both a compressed database dump and a media archive.
Do not mark the service production-ready until you have restored at least one database dump on a test host. Restore testing is the difference between having a backup file and having an actual recovery plan.
cd /opt/immich
docker compose down
mv postgres postgres.before-restore.$(date +%s)
mkdir postgres
docker compose up -d database redis
zcat /opt/backups/immich/db-YYYYMMDDTHHMMSSZ.sql.gz | docker compose exec -T database psql -U immich immich
docker compose up -d
Common issues and fixes
Uploads fail behind the reverse proxy
Confirm the Immich container is reachable on 127.0.0.1:2283 from the host and that Caddy is not enforcing a small body limit. Also check available disk space. Large phone videos often reveal storage and timeout assumptions that small test images do not.
Machine-learning jobs stay queued
Check docker compose logs immich-machine-learning for model download failures or memory pressure. The first start can be slow, but repeated restarts usually indicate insufficient memory, a broken cache directory, or an incompatible image tag.
PostgreSQL health checks fail
Verify the values in .env match the database service environment and that the data directory is writable by Docker. If you changed the password after initialization, update it inside PostgreSQL as well; changing only the environment file does not rewrite existing database credentials.
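One way to bring the stored credential back in line is to emit the ALTER ROLE statement and pipe it into the database container. A sketch; the `alter_pw_sql` helper is illustrative, and passwords containing single quotes need additional escaping:

```shell
# Emit SQL that resets the immich role's password. Keep single quotes out of
# the password itself before feeding this to psql.
alter_pw_sql() {
  printf "ALTER ROLE immich WITH PASSWORD '%s';" "$1"
}

# On the host, against the compose service from this guide:
#   alter_pw_sql "$NEW_PASSWORD" | docker compose exec -T database psql -U immich -d immich
#   docker compose restart immich-server
```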
Certificates do not issue
Confirm DNS points to the server, ports 80 and 443 are reachable, and no other service is already bound to those ports. Use sudo journalctl -u caddy to see the ACME error instead of repeatedly reloading the proxy.
Backups are too large
Separate retention for database dumps and media archives. Database dumps are usually small and can be retained longer. Media archives may require incremental filesystem snapshots, restic, Borg, or object-storage lifecycle rules instead of full daily tarballs.
FAQ
Can I use an external PostgreSQL server?
Yes. Point the Immich variables at the managed database, verify the required vector extension support, and keep network access private. Still export logical backups before major upgrades.
Should I store the library on NFS?
It can work, but local block storage is simpler and usually faster. If you use NFS, test thumbnail generation, concurrent uploads, locking behavior, and recovery after a network interruption.
How often should I upgrade Immich?
Use a scheduled maintenance window. Read release notes, run docker compose pull, snapshot or back up first, upgrade, then run the verification checklist before inviting users back.
Can this deployment support multiple families or departments?
Technically yes, but define account ownership, retention, storage quotas, and support expectations first. Media systems accumulate personal data quickly and need governance as much as uptime.
Do I need GPU acceleration?
Not for a small installation. CPU-based machine learning is acceptable for modest libraries, but initial indexing of very large archives can be slow. Consider GPU planning only after measuring queue times.
What should I monitor?
Monitor disk utilization, container restarts, backup success, Caddy certificate renewal, PostgreSQL health, Redis availability, and queue depth after imports. Alert on storage growth before the disk is critically full.
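Disk growth is the most common silent failure in a media system, so a minimal threshold check suitable for cron is worth having even before full monitoring exists. The `disk_alert` name and the 90 percent limit are illustrative:

```shell
# disk_alert PATH LIMIT: return nonzero and print an alert when PATH's
# filesystem usage meets or exceeds LIMIT percent.
disk_alert() {
  local path="$1" limit="$2" used
  used=$(df -P "$path" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
  if [ "$used" -ge "$limit" ]; then
    echo "ALERT: $path at ${used}% (limit ${limit}%)" >&2
    return 1
  fi
  echo "OK: $path at ${used}%"
}

# Example cron usage: disk_alert /opt/immich/library 90
```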
Is a database dump enough for disaster recovery?
No. You need the database, media library, Compose file, environment file, proxy config, and a documented restore sequence. Test the restore on a clean host at least quarterly.
Internal links
- Nextcloud with Docker Compose, NGINX, MariaDB, and Redis
- Paperless-ngx with Docker Compose, Caddy, PostgreSQL, and Redis
- MinIO with Docker Compose, NGINX, Let's Encrypt, and UFW
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.