
Self-Host Supabase Docker: Run Your Own Firebase Alternative With Full Control

Learn how to self-host Supabase with Docker, configure authentication, connect your apps to a production-ready Postgres backend, and manage your data without depending on Supabase Cloud.

Supabase gives you a Postgres database, authentication, real-time subscriptions, file storage, and auto-generated REST and GraphQL APIs — all from one platform. The cloud version is excellent, but when you need data sovereignty, custom infrastructure, or you're just not ready to hand your user data to a third party, self-hosting is the answer. This guide walks you through a complete self-host Supabase Docker setup: from first clone to a fully operational backend your apps can connect to.


Prerequisites

  • A Linux server (Ubuntu 22.04 LTS recommended) with at least 4GB RAM and 20GB disk
  • Docker Engine and Docker Compose v2 installed
  • A domain name with DNS access (required for production — Supabase Auth needs HTTPS)
  • Ports 80 and 443 available, plus 5432 if you want direct Postgres access
  • Basic familiarity with Docker Compose and environment variables

Verify your environment before starting:

docker --version
docker compose version
free -h
df -h /

# Confirm ports are free
sudo ss -tlnp | grep -E ':80|:443|:5432'

What You Get When You Self-Host Supabase

Supabase is not a single service — it's a suite of open-source components orchestrated together. Understanding what's running helps you debug when something goes wrong.

The Component Stack

  • PostgreSQL — the core database. Everything else is built on top of it.
  • PostgREST — auto-generates a REST API from your Postgres schema. Create a table, get an API endpoint for free.
  • GoTrue — the auth service. Handles user signup, login, JWT issuance, OAuth providers, and magic links.
  • Realtime — broadcasts Postgres changes over WebSockets. Powers real-time subscriptions in your frontend.
  • Storage API — S3-compatible file storage backed by Postgres metadata and your choice of storage backend.
  • Kong — the API gateway that routes requests to the right service and handles API key authentication.
  • Studio — the Supabase dashboard UI. Table editor, SQL runner, auth management, logs — all in the browser.
  • Meta — the postgres-meta service, which exposes Postgres metadata to power the Studio UI.

Self-hosting means running all of these yourself. Docker Compose handles the orchestration — your job is getting the configuration right.
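
As a mental model, Kong's routing can be sketched in a few lines. The path prefixes below are the defaults in the self-hosted Compose setup; the lookup function itself is purely illustrative, not part of Supabase:

```python
# Default Kong route prefixes in the self-hosted stack (illustrative sketch).
# Kong inspects the request path and forwards it to the matching service.
ROUTES = {
    "/rest/v1": "PostgREST (auto-generated REST API)",
    "/auth/v1": "GoTrue (authentication)",
    "/realtime/v1": "Realtime (WebSocket subscriptions)",
    "/storage/v1": "Storage API (file storage)",
}

def route(path: str) -> str:
    """Return the service a request path would be forwarded to."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    return "Studio / unmatched"

print(route("/rest/v1/profiles"))  # PostgREST (auto-generated REST API)
print(route("/auth/v1/token"))     # GoTrue (authentication)
```

Knowing which prefix maps to which container tells you immediately which service's logs to read when a request fails.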


Cloning and Configuring the Self-Hosted Setup

Clone the Supabase Repository

Supabase ships its self-hosting configuration in the main repo under docker/:

# Shallow-clone the Supabase repo (only the docker/ directory is needed)
git clone --depth 1 https://github.com/supabase/supabase.git
cd supabase/docker

# Copy the example env file
cp .env.example .env

Generate Required Secrets

Supabase needs several cryptographic secrets generated before first run. Do not skip this step — using the example defaults is a serious security risk in production:

# Generate a strong Postgres password
openssl rand -base64 32

# Generate the JWT secret (at least 32 chars)
openssl rand -base64 32

# Generate anon and service_role JWTs
# Use the Supabase JWT generator: https://supabase.com/docs/guides/self-hosting/docker#generate-api-keys
# Or use jwt.io with the HS256 algorithm and your JWT secret

# Generate a dashboard password
openssl rand -base64 16
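
If you'd rather not paste your secret into a web-based generator, the two API keys can be produced locally. The sketch below uses only the Python standard library; the payload fields (role, iss, iat, exp) mirror what the official generator emits, but verify the output against the docs before using it in production:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_key(role: str, jwt_secret: str, years: int = 5) -> str:
    """Sign a Supabase API key (an HS256 JWT) with your JWT_SECRET."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {"role": role, "iss": "supabase",
               "iat": now, "exp": now + years * 365 * 24 * 3600}
    signing_input = (b64url(json.dumps(header).encode())
                     + "." + b64url(json.dumps(payload).encode()))
    sig = hmac.new(jwt_secret.encode(), signing_input.encode(),
                   hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

secret = "your-super-secret-jwt-token-at-least-32-chars"  # your generated JWT_SECRET
print("ANON_KEY=" + make_key("anon", secret))
print("SERVICE_ROLE_KEY=" + make_key("service_role", secret))
```

Run it once with your real JWT_SECRET and copy the two lines straight into .env.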

Configure the .env File

Open .env and update these critical values. Everything else has sensible defaults for getting started:

############################################
# SITE URL — your public domain
############################################
SITE_URL=https://supabase.yourdomain.com
API_EXTERNAL_URL=https://supabase.yourdomain.com

############################################
# JWT — generate and set all three
############################################
JWT_SECRET=your-super-secret-jwt-token-at-least-32-chars
ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...  # generated anon JWT
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...  # generated service_role JWT

############################################
# DATABASE
############################################
POSTGRES_PASSWORD=your-strong-postgres-password
POSTGRES_HOST=db
POSTGRES_PORT=5432
POSTGRES_DB=postgres

############################################
# DASHBOARD
############################################
DASHBOARD_USERNAME=admin
DASHBOARD_PASSWORD=your-dashboard-password

############################################
# SMTP — for auth emails (magic links, confirmations)
############################################
SMTP_ADMIN_EMAIL=admin@yourdomain.com
SMTP_HOST=smtp.yourdomain.com
SMTP_PORT=587
SMTP_USER=smtp-user@yourdomain.com
SMTP_PASS=your-smtp-password
SMTP_SENDER_NAME=Supabase

The JWT keys (ANON_KEY and SERVICE_ROLE_KEY) must be JWTs signed with your JWT_SECRET. The Supabase docs include a generator, or you can use jwt.io with the correct payload structure. Get this right before starting — changing these later breaks existing sessions.


Starting Supabase and Verifying the Stack

Pull Images and Start

cd supabase/docker

# Pull all images first (avoids timeout issues on first start)
docker compose pull

# Start the full stack
docker compose up -d

# Watch startup — wait for all services to become healthy
docker compose ps
docker compose logs -f

First boot takes 2–4 minutes while Postgres initializes and migrations run. Watch for the kong service to become healthy — that's the API gateway and the last service to come up. Once everything is green, open http://localhost:8000 (or your domain) to access the Studio dashboard.
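
If you script your deployment, a small wait loop saves you from racing the stack. A stdlib-only sketch, assuming Kong is listening on the default port 8000:

```python
import time
import urllib.request

def http_ok(url: str) -> bool:
    """True if the endpoint answers 200, e.g. Kong's auth health route."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def wait_until(probe, timeout_s: float = 300, interval_s: float = 5) -> bool:
    """Poll `probe` until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while True:
        if probe():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)

# Block until the stack is up (assumes Kong on the default port):
# wait_until(lambda: http_ok("http://localhost:8000/auth/v1/health"))
```

Drop this into your provisioning script before any step that seeds the database or creates users.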

Verify Each Service

# Kong API gateway health
curl http://localhost:8000/rest/v1/ \
  -H "apikey: YOUR_ANON_KEY" | head -c 200

# PostgREST — list tables
curl http://localhost:8000/rest/v1/ \
  -H "apikey: YOUR_ANON_KEY" \
  -H "Authorization: Bearer YOUR_ANON_KEY"

# Auth service health
curl http://localhost:8000/auth/v1/health

# Postgres direct connection
docker exec -it supabase-db psql -U postgres -c '\dt auth.*'

Connecting Your App to Self-Hosted Supabase

JavaScript / TypeScript

The Supabase JS client works identically with self-hosted — just swap the URL and key:

import { createClient } from '@supabase/supabase-js'

// Self-hosted config — use your domain and anon key
const supabaseUrl = 'https://supabase.yourdomain.com'
const supabaseAnonKey = 'YOUR_ANON_KEY'

export const supabase = createClient(supabaseUrl, supabaseAnonKey)

// Everything works exactly as with Supabase Cloud
async function fetchUsers() {
  const { data, error } = await supabase
    .from('profiles')
    .select('id, username, created_at')
    .order('created_at', { ascending: false })
    .limit(10)

  if (error) throw error
  return data
}

// Auth works the same
async function signUp(email: string, password: string) {
  const { data, error } = await supabase.auth.signUp({ email, password })
  return { data, error }
}

Connecting Directly to Postgres

One major advantage of self-hosting is direct database access. Expose port 5432 in your Compose file and connect with any Postgres client. Restrict the port with a firewall or bind it to trusted networks only, since this opens your database to anything that can reach the server:

# Add to the db service ports in docker-compose.yml:
# ports:
#   - "5432:5432"

# Connection string for your apps or tools (pgAdmin, DBeaver, etc.)
postgresql://postgres:YOUR_POSTGRES_PASSWORD@supabase.yourdomain.com:5432/postgres

# Python example with psycopg2
import psycopg2

conn = psycopg2.connect(
    host="supabase.yourdomain.com",
    port=5432,
    database="postgres",
    user="postgres",
    password="YOUR_POSTGRES_PASSWORD"
)

cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM auth.users")
print(cur.fetchone())

Setting Up OAuth Providers

Social auth (Google, GitHub, etc.) works the same as Supabase Cloud — configure it in the Studio UI under Authentication → Providers, or set environment variables in your .env:

# .env additions for Google OAuth
GOTRUE_EXTERNAL_GOOGLE_ENABLED=true
GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID=your-google-client-id
GOTRUE_EXTERNAL_GOOGLE_SECRET=your-google-client-secret
GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI=https://supabase.yourdomain.com/auth/v1/callback

# GitHub OAuth
GOTRUE_EXTERNAL_GITHUB_ENABLED=true
GOTRUE_EXTERNAL_GITHUB_CLIENT_ID=your-github-client-id
GOTRUE_EXTERNAL_GITHUB_SECRET=your-github-client-secret
GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI=https://supabase.yourdomain.com/auth/v1/callback

Restart the auth service after updating OAuth config: docker compose restart auth.
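
Under the hood, a social login simply sends the browser to GoTrue's authorize endpoint, which is why the redirect URIs above point at /auth/v1/callback. A stdlib sketch of the URL the client builds; the redirect_to value is a placeholder for your app's post-login page:

```python
from urllib.parse import urlencode

SUPABASE_URL = "https://supabase.yourdomain.com"

def oauth_authorize_url(provider: str, redirect_to: str) -> str:
    """Build the GoTrue authorize URL a social login redirects the browser to."""
    query = urlencode({"provider": provider, "redirect_to": redirect_to})
    return SUPABASE_URL + "/auth/v1/authorize?" + query

# Hypothetical post-login page on your app's domain
print(oauth_authorize_url("github", "https://app.yourdomain.com/welcome"))
```

Opening that URL in a browser is a quick way to confirm a provider is wired up before touching any frontend code.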


Putting Supabase Behind a Reverse Proxy

For production, put Supabase behind Nginx or Traefik to handle HTTPS termination and clean domain routing. Here's a minimal Nginx config:

server {
    listen 80;
    server_name supabase.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name supabase.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/supabase.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/supabase.yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # Increase body size for Storage uploads
    client_max_body_size 100M;

    location / {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 3600s;  # Required for Realtime WebSocket connections
    }
}

The proxy_read_timeout 3600s is critical — Supabase Realtime uses long-lived WebSocket connections that Nginx will kill prematurely with the default 60-second timeout.

Reload Nginx and test:

sudo nginx -t && sudo systemctl reload nginx

# Test the API through HTTPS
curl https://supabase.yourdomain.com/rest/v1/ \
  -H "apikey: YOUR_ANON_KEY"

# Test auth endpoint
curl https://supabase.yourdomain.com/auth/v1/health

Tips, Gotchas, and Troubleshooting

Studio Dashboard Returns 401

The Studio dashboard is protected by basic auth using DASHBOARD_USERNAME and DASHBOARD_PASSWORD from your .env. If you're getting 401s, confirm those values are set and restart the Studio container:

docker compose restart studio
docker compose logs studio --tail 20

Auth Emails Not Sending

If users aren't receiving confirmation or magic link emails, SMTP misconfiguration is the usual culprit. Check the auth service logs:

docker compose logs auth --tail 50 | grep -i smtp
docker compose logs auth --tail 50 | grep -i mail

# For development, disable email confirmation entirely:
# In Studio: Authentication → Settings → Enable email confirmations → OFF
# Or in .env:
GOTRUE_MAILER_AUTOCONFIRM=true

Realtime Subscriptions Not Working

Two common causes: the WebSocket connection is being dropped by a reverse proxy timeout (fix: increase proxy_read_timeout), or Realtime isn't enabled for the table. Enable it in Studio under Database → Replication or via SQL:

-- Enable Realtime for a table
alter publication supabase_realtime add table your_table_name;

-- Verify which tables are in the publication
select * from pg_publication_tables where pubname = 'supabase_realtime';

Updating Supabase

Supabase releases updates frequently. Update by pulling new images and restarting — migrations run automatically:

cd supabase/docker

# Pull latest images
docker compose pull

# Restart all services
docker compose up -d

# Watch for migration completion
docker compose logs db --tail 20
docker compose ps

Always read the release notes before updating — breaking changes to environment variables or migration requirements are called out explicitly.

Row Level Security Is Off by Default

This is the most important gotcha. New tables you create do not have Row Level Security (RLS) enabled. Without RLS, anyone holding your anon key (which ships in your frontend) can read and write every row in the table via the API. Always enable RLS and write policies before exposing a table through PostgREST:

-- Enable RLS on a table
alter table public.profiles enable row level security;

-- Users can only read their own profile
create policy "Users can view own profile"
  on public.profiles for select
  using (auth.uid() = id);

-- Users can only update their own profile
create policy "Users can update own profile"
  on public.profiles for update
  using (auth.uid() = id);

-- Service role bypasses RLS (used for admin operations)
-- Never expose the service_role key in client-side code

Pro Tips

  • Back up your Postgres volume daily — all your data, auth users, and storage metadata live there. A simple pg_dump scheduled via cron is enough for most setups.
  • Use the service role key only server-side — it bypasses RLS entirely. Never put it in a frontend app or a public repository.
  • Use database functions for complex logic — instead of making multiple API calls from the client, write a Postgres function and call it via RPC: supabase.rpc('my_function', { param: value }).
  • Enable the pg_cron extension for scheduled database tasks — it runs directly in Postgres, no external scheduler needed for simple recurring jobs.
  • Monitor with pgBadger or Postgres logs — slow query detection matters more than you think when PostgREST is generating queries from your schema automatically.
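
The backup tip above can be sketched as a small cron-friendly script. It shells out to pg_dump inside the db container and prunes old dumps; the container name supabase-db matches the default Compose setup, while the backup path and retention count are assumptions to adjust:

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/var/backups/supabase")  # assumption: pick your own path
KEEP = 7  # retain the last 7 daily dumps

def prune(backups: list, keep: int = KEEP) -> list:
    """Return the dump files to delete, oldest first (timestamped names sort chronologically)."""
    ordered = sorted(backups, key=lambda p: p.name)
    return ordered[:-keep] if len(ordered) > keep else []

def backup() -> Path:
    """Dump the database via the supabase-db container and rotate old files."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = BACKUP_DIR / f"supabase-{stamp}.sql"
    with target.open("wb") as out:
        subprocess.run(
            ["docker", "exec", "supabase-db", "pg_dump", "-U", "postgres", "postgres"],
            stdout=out, check=True,
        )
    for old in prune(sorted(BACKUP_DIR.glob("supabase-*.sql"))):
        old.unlink()
    return target
```

Schedule it with a daily cron entry, and restore-test a dump occasionally: a backup you've never restored is a hope, not a backup.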

Wrapping Up

A complete self-host Supabase Docker setup gives you a full-stack backend — Postgres, auth, real-time, storage, and APIs — running on infrastructure you control. No usage-based pricing, no data sovereignty concerns, and no dependency on Supabase's uptime for your product's availability.

The setup takes a focused afternoon: clone the repo, configure secrets properly, get Nginx and HTTPS sorted, and enable RLS on your tables. Once the stack is running, your app uses the same Supabase client SDK it always has — the only difference is where the requests go. From that point, you're running a production backend that scales with your Postgres instance rather than your cloud bill.


Need Supabase Deployed and Hardened for Production?

If you're running Supabase for a real product — with high availability, automated backups, performance tuning, and proper RLS policies reviewed — the sysbrix team can design and deploy it. We get your self-hosted backend production-ready so you can ship features, not infrastructure.
