Self-hosted teams often start with spreadsheets for operations tracking, customer onboarding, inventory, audits, or lightweight CRM work. That works until permissions, forms, automations, and API access become business requirements. Baserow is a practical open-source alternative: it gives non-developers a familiar database UI while still giving technical teams an API-friendly platform they can operate, back up, and secure.
This guide follows the house pattern used in recent SysBrix Guides: a production-oriented Ubuntu deployment, Docker Compose for repeatability, Caddy for HTTPS, PostgreSQL for durable application data, Redis for background work, explicit verification, and a recovery routine. The example domain is baserow.example.com; replace it with your real hostname before running commands.
Architecture and flow overview
The public path is intentionally simple. Users connect to Caddy over HTTPS. Caddy terminates TLS, applies basic security headers, and proxies traffic to Baserow on 127.0.0.1:8087. Baserow runs in Docker and talks only to internal PostgreSQL and Redis containers over a private Compose network. PostgreSQL stores workspaces, tables, users, permissions, and metadata. Redis supports cache and background coordination. No database or cache port is published to the internet.
This layout keeps the reverse proxy visible to the host while keeping stateful services isolated. It also mirrors an upgrade-friendly operating model: configuration lives in /opt/baserow, secrets are stored as root-readable files, backups are scripted, and verification commands prove each layer before users depend on the service.
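Once the stack is live, you can spot-check this isolation from the host shell. A minimal sketch, assuming `ss` from iproute2 is available: expect caddy on 80 and 443 and a docker-proxy entry bound only to 127.0.0.1:8087, and nothing at all on 5432 or 6379.

```shell
# List TCP listeners on the web, app, database, and cache ports; any hit on
# 5432 or 6379 would indicate a misconfigured port mapping.
ss -ltn | grep -E ':(80|443|8087|5432|6379)[[:space:]]' || echo "no matching listeners yet"
```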
Prerequisites
- Ubuntu 22.04 or 24.04 server with a DNS A record pointing baserow.example.com to the host.
- At least 2 vCPU and 4 GB RAM for small teams; allocate more memory for large tables or heavy API usage.
- Ports 80 and 443 reachable from the internet for Caddy certificate issuance.
- An SMTP mailbox for invitations, password resets, and operational notifications.
- A tested place to move backups off the server, such as S3-compatible storage or a restricted backup host.
Step-by-step deployment
1) Install Docker, Compose, Caddy, and firewall basics
Install the container runtime and expose only SSH plus web traffic. Note that the docker group membership takes effect only after you log out and back in (or run newgrp docker). If your organization manages firewalls upstream, keep the host firewall anyway; it is a useful last line of defense when network rules drift.
sudo apt update
sudo apt install -y ca-certificates curl gnupg ufw
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
sudo apt install -y caddy
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
2) Create the application layout and strong secrets
Use a dedicated directory so application data, database files, Redis persistence, backups, and secrets stay together. Keep the generated files out of Git and restrict permissions before writing secret material.
sudo mkdir -p /opt/baserow/{data,postgres,redis,backups}
sudo chown -R $USER:$USER /opt/baserow
cd /opt/baserow
umask 077
openssl rand -base64 48 > .secret_key
openssl rand -base64 32 > .postgres_password
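Before moving on, verify the permissions actually came out owner-only. A small sketch (`check_secret` is a hypothetical helper, not part of Baserow; assumes GNU `stat`):

```shell
check_secret() {
  # Succeeds only if the file exists, is non-empty, and has mode 600.
  [ -s "$1" ] && [ "$(stat -c '%a' "$1")" = "600" ] && echo "$1 ok"
}
```

Run `check_secret .secret_key` and `check_secret .postgres_password` from /opt/baserow; both should print `... ok` before you continue.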
3) Write environment values
Replace the domain, sender address, and SMTP values. For production, use an application-specific SMTP password and rotate it when staff changes. The public URL must match the external HTTPS address or users may see broken links in email and browser redirects.
cat > .env <<'EOF'
BASEROW_DOMAIN=baserow.example.com
BASEROW_PUBLIC_URL=https://baserow.example.com
BASEROW_SECRET_KEY_FILE=/run/secrets/baserow_secret_key
DATABASE_NAME=baserow
DATABASE_USER=baserow
DATABASE_PASSWORD_FILE=/run/secrets/postgres_password
DATABASE_HOST=postgres
DATABASE_PORT=5432
REDIS_HOST=redis
EMAIL_SMTP=true
[email protected]
EMAIL_SMTP_HOST=smtp.example.com
EMAIL_SMTP_PORT=587
[email protected]
EMAIL_SMTP_PASSWORD=replace-with-smtp-secret
EMAIL_SMTP_USE_TLS=true
EOF
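A mismatched public URL is the most common cause of broken email links, so it is worth checking mechanically instead of by eye. A sketch (`check_public_url` is a hypothetical helper; assumes GNU grep):

```shell
check_public_url() {
  # Compares BASEROW_PUBLIC_URL against https://BASEROW_DOMAIN in the given .env.
  local domain url
  domain=$(grep -oP '^BASEROW_DOMAIN=\K.*' "$1")
  url=$(grep -oP '^BASEROW_PUBLIC_URL=\K.*' "$1")
  if [ "$url" = "https://$domain" ]; then
    echo "public URL matches domain"
  else
    echo "mismatch: $url vs $domain" >&2
    return 1
  fi
}
```

Run `check_public_url /opt/baserow/.env` after any edit to the environment file.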
4) Define the Docker Compose stack
The Baserow container binds to localhost only, which allows host-level Caddy to reverse proxy it without exposing the application port publicly. PostgreSQL and Redis remain on the internal network. If your change process does not allow tracking latest, pin image versions and bump them during controlled maintenance windows.
cat > compose.yaml <<'EOF'
services:
  baserow:
    image: baserow/baserow:latest
    restart: unless-stopped
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    secrets:
      - baserow_secret_key
      - postgres_password
    volumes:
      - ./data:/baserow/data
    ports:
      - "127.0.0.1:8087:80"
    networks: [internal]

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: baserow
      POSTGRES_USER: baserow
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password
    secrets:
      - postgres_password
    volumes:
      - ./postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U baserow -d baserow"]
      interval: 10s
      timeout: 5s
      retries: 6
    networks: [internal]

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - ./redis:/data
    networks: [internal]

secrets:
  baserow_secret_key:
    file: ./.secret_key
  postgres_password:
    file: ./.postgres_password

networks:
  internal:
    driver: bridge
EOF
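Because the loopback binding is the only thing keeping the application port off the internet, it is worth guarding against accidental edits. This hypothetical check flags any published port in a compose file that is not bound to an explicit address:

```shell
check_no_public_ports() {
  # Matches mappings like - "8087:80" (public) but not - "127.0.0.1:8087:80".
  if grep -E '^[[:space:]]*-[[:space:]]*"?[0-9]+:[0-9]+' "$1"; then
    echo "warning: publicly bound port mapping in $1" >&2
    return 1
  fi
  echo "all published ports are loopback-bound"
}
```

Run `check_no_public_ports /opt/baserow/compose.yaml` after any stack change, or wire it into a pre-deploy hook.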
5) Configure Caddy for HTTPS
Create a small site block that proxies the public hostname to the local application port. The port mapping in Compose uses 127.0.0.1:8087:80, so Caddy can reach the service from the host while the internet cannot.
sudo mkdir -p /etc/caddy/Caddyfile.d
sudo tee /etc/caddy/Caddyfile.d/baserow.caddy >/dev/null <<'EOF'
baserow.example.com {
    encode zstd gzip
    reverse_proxy 127.0.0.1:8087
    header {
        X-Content-Type-Options nosniff
        Referrer-Policy strict-origin-when-cross-origin
        X-Frame-Options SAMEORIGIN
    }
}
EOF
sudo sh -c 'cat /etc/caddy/Caddyfile.d/*.caddy > /etc/caddy/Caddyfile'
sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
6) Start Baserow and watch first boot
The first start can take several minutes while migrations complete. Do not create users until the logs settle and the health checks show stable containers. If you plan to use single sign-on later, create a break-glass local administrator first and store its credentials in your password manager.
cd /opt/baserow
docker compose pull
docker compose up -d
docker compose ps
docker compose logs --tail=80 baserow
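Rather than eyeballing logs, you can poll the local API documentation endpoint until it answers. A sketch (`wait_for_baserow` is a hypothetical helper; `/api/redoc/` is the same endpoint used in the verification checklist later in this guide):

```shell
wait_for_baserow() {
  # $1 = local base URL, $2 = max attempts (default 60, about five minutes).
  local url=$1 tries=${2:-60}
  while [ "$tries" -gt 0 ]; do
    if curl -fsS "$url/api/redoc/" >/dev/null 2>&1; then
      echo "baserow is ready"
      return 0
    fi
    tries=$((tries - 1))
    sleep 5
  done
  echo "baserow did not become ready" >&2
  return 1
}
```

Run `wait_for_baserow http://127.0.0.1:8087` after `docker compose up -d`; a nonzero exit means migrations are still running or something failed.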
Configuration and secrets handling best practices
Treat Baserow like a business system rather than a convenience spreadsheet. Put production values in .env, keep high-value secrets in files, and prevent shell history from capturing passwords. Limit admin access, disable unused invitation paths, and document who owns schema changes for critical bases.
For SMTP, prefer a dedicated sender account with the minimum permissions your mail provider supports. Monitor bounce and delivery errors because password reset messages are operationally important. For database access, do not publish port 5432; use docker compose exec postgres from the host or a temporary SSH tunnel during maintenance.
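For ad-hoc maintenance queries, a thin wrapper around `docker compose exec` keeps the database port unpublished. A sketch (`pg_maint` is a hypothetical helper; run it from /opt/baserow):

```shell
pg_maint() {
  # Runs a single SQL statement inside the postgres container as the app user.
  docker compose exec -T postgres psql -U baserow -d baserow -c "$1"
}
```

For example, `pg_maint 'SELECT version();'` confirms connectivity without opening any network path to PostgreSQL.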
Plan upgrades as small, reversible changes. Before each upgrade, run the backup script, read the release notes for breaking changes, pull images during a maintenance window, and keep the previous image tag available for rollback. Large teams should test migrations on a restored staging copy before touching production.
Verification checklist
Verify the stack from outside in: DNS, HTTPS, application response, database readiness, Redis readiness, and email delivery. This catches the most common mistakes quickly, especially wrong hostnames, blocked ports, and SMTP credentials that were copied incorrectly.
curl -I https://baserow.example.com
curl -fsS http://127.0.0.1:8087/api/redoc/ >/dev/null && echo "local app responds"
docker compose exec postgres pg_isready -U baserow -d baserow
docker compose exec redis redis-cli ping
- Open the site in a private browser window and create the first administrator.
- Create a workspace, table, form view, and test record.
- Invite a test user and confirm the email arrives with the correct HTTPS link.
- Check docker compose logs --tail=120 baserow for migration or SMTP errors.
- Confirm Caddy certificates renew automatically with sudo systemctl status caddy.
Backups and recovery routine
Backups must include PostgreSQL data, uploaded files, Redis persistence, and configuration. Store at least one copy off the host and periodically restore to a disposable server. A backup that has never been restored is only a hopeful archive.
cat > /opt/baserow/backup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/baserow
stamp=$(date +%Y%m%d-%H%M%S)
mkdir -p backups/$stamp
docker compose exec -T postgres pg_dump -U baserow baserow | gzip > backups/$stamp/baserow.sql.gz
tar -czf backups/$stamp/baserow-data.tar.gz data redis .env compose.yaml .secret_key .postgres_password
find backups -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +
EOF
chmod 700 /opt/baserow/backup.sh
sudo tee /etc/cron.d/baserow-backup >/dev/null <<'EOF'
17 2 * * * root /opt/baserow/backup.sh >/var/log/baserow-backup.log 2>&1
EOF
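Stale backups fail silently, so alert on backup age rather than assuming cron ran. A minimal sketch, assuming GNU `find` and `date` (`latest_backup_age_hours` is a hypothetical helper):

```shell
latest_backup_age_hours() {
  # Prints the age in whole hours of the newest baserow.sql.gz under $1.
  local newest
  newest=$(find "$1" -name 'baserow.sql.gz' -printf '%T@\n' 2>/dev/null | sort -n | tail -1)
  [ -n "$newest" ] || { echo "no backups found" >&2; return 1; }
  echo $(( ($(date +%s) - ${newest%.*}) / 3600 ))
}
```

Alert when `latest_backup_age_hours /opt/baserow/backups` exceeds roughly 26, which allows one missed nightly run before paging anyone.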
Run a restore drill after the first successful production backup and after major version upgrades. The following example shows the shape of a restore; replace the timestamp with a real backup set and perform the drill on a non-production host first.
cd /opt/baserow
docker compose down
tar -xzf backups/20260101-021700/baserow-data.tar.gz -C /opt/baserow
docker compose up -d postgres
until docker compose exec -T postgres pg_isready -U baserow -d baserow; do sleep 2; done
zcat backups/20260101-021700/baserow.sql.gz | docker compose exec -T postgres psql -U baserow -d baserow
docker compose up -d
sudo systemctl reload caddy
Common issues and fixes
Caddy cannot issue a certificate
Confirm the DNS A record points to this server, ports 80 and 443 are reachable, and no other service is already bound to those ports. Check journalctl -u caddy -n 100 for ACME validation errors. If the host sits behind a load balancer, make sure HTTP challenge traffic reaches Caddy.
Baserow loads but email invitations fail
Recheck SMTP host, port, TLS mode, username, and password. Many providers require app passwords or verified sender domains. Send a test invite after each change and watch the Baserow logs instead of guessing from the UI alone.
Uploads or imports are slow
Large CSV imports stress CPU, memory, and database I/O. Schedule bulk imports outside business hours, increase server resources, and verify that backups are not running at the same time. For very large datasets, test import limits before promising spreadsheet-like behavior to every department.
Database health checks keep failing
Inspect file ownership under /opt/baserow/postgres, confirm the PostgreSQL password secret matches the configured user, and look for disk-full errors. Avoid deleting the volume directory unless you have a verified restore path.
Users see mixed-content or wrong-link warnings
Ensure BASEROW_PUBLIC_URL uses the final HTTPS hostname and that Caddy forwards traffic to the correct local port. After changing the public URL, restart the application container and send a fresh test email.
FAQ
Can Baserow replace every spreadsheet?
No. It is strongest when teams need shared structured data, forms, roles, and API access. Financial models, heavy pivot analysis, or offline spreadsheet workflows may still belong in specialist tools.
Should PostgreSQL be external or containerized?
Containerized PostgreSQL is acceptable for small and midsize self-hosted deployments when backups and restore drills are disciplined. Use a managed database when your organization already has database operations, monitoring, and retention policies in place.
How should we handle user permissions?
Create workspaces around business ownership, keep administrators limited, and review membership monthly. Avoid shared admin accounts because auditability matters when bases drive operational decisions.
Can we use NGINX or Traefik instead of Caddy?
Yes. The important pattern is TLS termination at the edge, proxying to the local Baserow port, and keeping PostgreSQL and Redis private. Caddy is used here because certificate automation is concise and reliable.
How often should we back up Baserow?
Daily is a practical baseline for small teams. Increase frequency if users update business-critical data throughout the day, and always keep at least one recent off-host backup.
What is the safest upgrade process?
Back up first, read release notes, test on a restored staging copy if possible, pull images in a maintenance window, and verify login, table views, forms, automations, and email before announcing completion.
How do we monitor this stack?
Track container health, disk usage, backup freshness, Caddy certificate status, HTTP response time, and PostgreSQL availability. Alert on failed backups and low disk space before users notice data-entry problems.
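Most of these checks script cleanly. Here is a disk-usage sketch you could wire into cron or existing alerting (`check_disk` is a hypothetical helper; assumes GNU `df`):

```shell
check_disk() {
  # Warns when the filesystem holding $1 is at or above $2 percent full (default 85).
  local pct
  pct=$(df --output=pcent "$1" | tail -1 | tr -dc '0-9')
  if [ "$pct" -ge "${2:-85}" ]; then
    echo "disk high: ${pct}% used on $1" >&2
    return 1
  fi
  echo "disk ok: ${pct}% used on $1"
}
```

Run `check_disk /opt/baserow 85` on a schedule and alert on a nonzero exit; pair it with the backup-age check so storage and backup problems surface before users do.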
Internal links
For related production patterns, compare this guide with recent SysBrix deployments:
- Production Guide: Deploy Apache Airflow with Docker Compose + Caddy + PostgreSQL + Redis on Ubuntu
- Deploy Actual Budget with Docker Compose and Traefik
- Production Guide: Deploy Zammad with Docker Compose + Caddy + PostgreSQL + Elasticsearch on Ubuntu
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.