# Production setup
The production compose override, nginx reverse proxy, TLS, and rate limiting — the deltas between a dev stack and a real deployment.
The base docker-compose.yml boots a developer-friendly stack with volume-mounted code, hot-reloading frontend, and database ports exposed on localhost. None of that belongs in production. This page walks the production overrides and the reverse-proxy layer that sits in front.
## The production compose override
Fabrik ships a second compose file, docker-compose.prod.yml, that layers on top of the base and swaps out anything dev-specific. You activate it by passing both files:
```bash
docker compose \
  -f docker-compose.yml \
  -f docker-compose.prod.yml \
  up -d --build
```

What the override changes:

- **`Dockerfile.prod` everywhere.** Backend and frontend both switch to their production Dockerfiles — multi-stage builds, no dev dependencies, smaller images.
- **Volume mounts removed.** Dev mounts `./backend:/app` so code changes reflect live. Production sets `volumes: []` to pin the container to the built image only.
- **Database ports unbound.** Postgres, Redis, RabbitMQ, and Neo4j all get `ports: []` in production — reachable from other containers but not from the host.
- **Frontend built and served by nginx.** The dev Vite server is replaced with static assets baked into the frontend image and proxied by nginx.
- **Backend runs `daphne --proxy-headers`.** Required so Django sees the original client IP from nginx's `X-Forwarded-For` instead of the container IP.
- **Mailpit moved to a `dev-only` profile.** It won't start unless explicitly requested.
- **nginx container added.** A new service fronts everything — the only container with published ports.
Don't run the production override against volumes created by the dev stack without a backup. The schemas match (same migrations), but a rolling data migration during an upgrade can fail halfway if one of the services is unexpectedly still on a dev image. Always back up Postgres and Neo4j before switching.
## The nginx reverse proxy
The shipped nginx/nginx.conf is a production-ready template with two server blocks — one for a marketing landing page (fabrikops.com) and one for the app (demo.fabrikops.com). Edit it for your domain, or replace it entirely if you have existing nginx infrastructure.
Key things it handles:
### TLS termination
```nginx
ssl_certificate     /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
```

Mount your certificate and key into the nginx container (the override has a commented example). Any valid cert works — Let's Encrypt, Cloudflare Origin, or your internal CA.
The default config references Cloudflare Origin certificate paths. If you proxy through Cloudflare, point those at the Origin cert you generate in the Cloudflare dashboard. If you don't, switch to Let's Encrypt via certbot or mount whatever cert you already have.
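The commented mount in the override looks roughly like the fragment below. This is a hedged sketch, not the shipped file — the container-side paths match the `ssl_certificate` directives above, but verify the host-side paths against your copy:

```yaml
# Illustrative sketch of the cert mount in docker-compose.prod.yml
services:
  nginx:
    ports:
      - "443:443"
    volumes:
      - ./nginx/ssl/fullchain.pem:/etc/nginx/ssl/fullchain.pem:ro
      - ./nginx/ssl/privkey.pem:/etc/nginx/ssl/privkey.pem:ro
```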
### Rate limiting
Two zones configured at the http {} level:
| Zone | Limit | Burst | Used for |
|---|---|---|---|
| `api` | 30 requests/sec per IP | 50 | Every `/api/*` endpoint |
| `login` | 5 requests/minute per IP | 3 | `/api/auth/login/`, `/api/auth/password-reset/` |
The login zone is much tighter — it blunts credential-stuffing attacks. If you see legitimate users hitting the limit (likely never), raise the burst rather than the rate.
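In nginx terms, zones like these are declared once at the `http {}` level and applied per location. A sketch matching the table above — the `10m` shared-memory size and the `nodelay` flag are assumptions, not taken from the shipped config:

```nginx
# http {} level: one shared-memory zone per limit, keyed by client IP
limit_req_zone $binary_remote_addr zone=api:10m   rate=30r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

# server {} level: apply with the burst sizes from the table
location /api/ {
    limit_req zone=api burst=50 nodelay;
}
location /api/auth/login/ {
    limit_req zone=login burst=3 nodelay;
}
```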
### Upstream routing
```nginx
upstream backend  { server backend:8000; }
upstream frontend { server frontend:80; }
upstream docs     { server docs:3000; }
```

All internal. The reverse proxy is the only path from the outside world to any Fabrik service.
### WebSocket proxying
```nginx
location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    ...
}
```

Essential — without the `Upgrade` / `Connection` headers, Daphne drops WebSocket connections and the frontend falls back to a permanent "connecting…" state. If real-time updates aren't working after a deploy, this is the first thing to check.
### Other sensible defaults
- `client_max_body_size 50m` — generous enough for CSV uploads in the AWX automation wizard, small enough not to enable accidental multi-gigabyte uploads.
- `gzip on` with a curated MIME type list — compresses JSON API responses, which is most of Fabrik's traffic.
- Static file caching via `Cache-Control` headers on the frontend upstream.
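Condensed into directives, those defaults look roughly like this. A sketch only — the `gzip_types` list and the `/assets/` location path are illustrative assumptions, not the shipped values:

```nginx
client_max_body_size 50m;

gzip on;
gzip_types application/json application/javascript text/css;  # curated list, sketch only

# Long-lived caching for fingerprinted frontend assets (path is an assumption)
location /assets/ {
    proxy_pass http://frontend;
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```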
## Deployment steps
1. **Prepare the host.** Verify prerequisites: Docker ≥ 24, the host sized appropriately, DNS pointed at the host, TLS certificate issued. See Prerequisites.
2. **Clone and configure.** Clone the repository to the production host, copy `.env.example` to `.env`, generate all required secrets, and fill in every production value. See Environment variables.
3. **Mount the TLS certificate.** Edit `docker-compose.prod.yml` to uncomment the SSL lines and the 443 port mapping, then place your cert and key at `nginx/ssl/fullchain.pem` and `nginx/ssl/privkey.pem` (or edit the paths to match what you have).
4. **Edit nginx for your domain.** Replace occurrences of `fabrikops.com` and `demo.fabrikops.com` in `nginx/nginx.conf` with your actual hostname. Remove the landing server block if you don't need it.
5. **Build and start.** Run `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build`. The first start takes longest — Neo4j warms up, and the backend applies migrations and bootstraps the MIM.
6. **Create the first admin.** Exec into the backend and create a superuser: `docker compose exec backend python manage.py createsuperuser`. You'll log in as this user to finish configuration.
7. **Verify.** Reach `https://your-host/` (landing) and `https://app.your-host/` (if you kept the split), log in, and run onboarding through the dashboard. Check that `/api/health/` returns `{"status": "ok"}`.
## Health checks in production
Every service has a Docker healthcheck. Query them at any time:
```bash
docker compose ps
# Shows State: Up (healthy) / Up (unhealthy) per service
```

External monitoring should scrape two endpoints:
| Endpoint | Meaning |
|---|---|
| `GET /nginx-health` | nginx is up and responding. Returns `200 OK` with the body `healthy`. |
| `GET /api/health/` | Backend is up, Postgres is reachable, migrations are applied. Returns JSON `{"status": "ok", "version": "..."}`. |
If /nginx-health is 200 but /api/health/ is 5xx, the reverse proxy is fine and the problem is the backend or its databases.
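That triage rule can be sketched as a small helper for a monitoring script. The function name and return strings are hypothetical, not part of Fabrik — the point is the decision order: check the proxy first, then the backend:

```python
def triage(nginx_status: int, api_status: int) -> str:
    """Map the two health endpoints' HTTP status codes to a likely fault domain."""
    if nginx_status != 200:
        # nginx itself is down or unreachable; nothing behind it can be blamed yet.
        return "reverse proxy down or unreachable"
    if api_status == 200:
        return "healthy"
    if 500 <= api_status < 600:
        # Proxy is fine, so the fault is the backend or its databases.
        return "backend or its databases"
    return "unexpected API status; check nginx routing for /api/health/"

# Example: nginx healthy, backend returning 502
print(triage(200, 502))  # backend or its databases
```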
## Hardening beyond the defaults
The shipped configuration is a reasonable baseline. Depending on threat model, consider:
- **Security headers.** Add `Strict-Transport-Security`, `X-Content-Type-Options: nosniff`, `Referrer-Policy: same-origin`, and a `Content-Security-Policy` tuned to what Fabrik actually loads (the React bundle, the API, the WebSocket).
- **IP allowlists** for `/api/admin/` and the Neo4j Browser, if you expose it.
- **Fail2ban on the login zone**, in addition to the nginx rate limit — the logs in `/var/log/nginx/access.log` give you the signal.
- **A separate reverse-proxy host.** Splitting the proxy from the application host gives a clean cert/secret boundary.
- **External Postgres and Neo4j.** For larger deployments, managed databases with their own backup story are less operational burden than container volumes.
None of these are required to run Fabrik safely — they're additional hardening on top of a deployment that's already sensible by default.
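For the security headers, a starting point might look like the fragment below. The values are illustrative assumptions — in particular, the `Content-Security-Policy` must be tuned to what your deployment actually loads before going live:

```nginx
# Illustrative values only — tune before deploying
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "same-origin" always;
add_header Content-Security-Policy "default-src 'self'; connect-src 'self' wss:" always;
```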