Fabrik API Reference

WebSockets

Real-time channels for query execution, notifications, AWX jobs, and MIM imports. Ticket handshake, message schemas, and reconnection behavior.

Fabrik streams real-time events over WebSocket. Django Channels handles routing, Daphne terminates the ASGI connection, and Redis is the channel layer backing group membership. Five channels are exposed today, each tied to a specific resource.

Base URL

wss://your-fabrik-host/ws/<path>/

The nginx reverse proxy forwards /ws/ with the required Upgrade and Connection headers — don't try to reach Daphne directly.

Authentication — the ticket flow

WebSocket requests can't set an Authorization header from the browser, so Fabrik uses a one-shot ticket exchange:

1. The client calls POST /api/ws-ticket/ with its normal JWT. Response: { "ticket": "…", "expires_in": 60 }.

2. The client opens wss://…/ws/<channel>/?ticket=<ticket> within the 60-second window.

3. The backend middleware validates the ticket once, binds the WebSocket to the user, and invalidates the ticket. Subsequent attempts with the same ticket are rejected.

Tickets are single-use. Need to reconnect? Mint a new ticket.

The ticket mechanism exists because query strings are visible in nginx access logs — a one-shot 60-second token is much safer there than a long-lived JWT. Don't try to reuse tickets or extend their lifetime.
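In client code, the handshake reduces to a small helper: mint a ticket, open the socket within its window, repeat on every reconnect. A sketch (the `wsUrl` and `connectChannel` helper names are illustrative, not part of Fabrik's frontend):

```javascript
// Build the channel URL with the ticket query parameter from the docs.
// encodeURIComponent guards against any URL-unsafe ticket characters.
function wsUrl(host, channel, ticket) {
  return `wss://${host}/ws/${channel}/?ticket=${encodeURIComponent(ticket)}`;
}

// One-shot connect: mint a ticket, then open the socket within its
// 60-second window. Tickets are single-use, so call this again on
// every reconnect rather than caching the ticket.
async function connectChannel(channel, accessToken) {
  const res = await fetch('/api/ws-ticket/', {
    method: 'POST',
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const { ticket } = await res.json();
  return new WebSocket(wsUrl(location.host, channel, ticket));
}
```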

Message envelope

Every server-sent frame is JSON with a consistent envelope:

{
  "type": "progress",
  "timestamp": "2026-04-22T08:00:00Z",
  "data": { /* type-specific payload */ }
}

Clients should ignore unknown type values — new event types are additive.
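A tolerant dispatcher makes the "ignore unknown types" rule concrete: look the type up in a handler map and silently skip anything unrecognized. A minimal sketch (`dispatch` is a hypothetical helper):

```javascript
// Route frames by the envelope's `type`; silently skip types we
// don't know, since new event types are additive.
function dispatch(raw, handlers) {
  const frame = JSON.parse(raw);
  const handler = handlers[frame.type];
  if (!handler) return false;            // unknown type: ignored
  handler(frame.data, frame.timestamp);
  return true;
}
```

Wired to a socket: `ws.onmessage = ev => dispatch(ev.data, { progress: d => render(d) })`.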

Channels

ws/chain-execution/<job_id>/

Stream progress and results for a background query execution. Opened after calling POST /api/queries/saved-queries/<id>/execute_background/.

Events:

// Incremental progress (one per node in the chain)
{ "type": "node_start", "data": { "node_id": "n3", "class_name": "fvBD" } }
{ "type": "node_result", "data": { "node_id": "n3", "row_count": 142 } }
{ "type": "node_complete", "data": { "node_id": "n3", "duration_ms": 1840 } }

// Terminal events
{ "type": "execution_complete", "data": { "status": "success", "total_rows": 142, "results_url": "/api/queries/execution-logs/<id>/" } }
{ "type": "execution_failed", "data": { "error": "APIC 401 — authentication failed", "failed_node": "n2" } }

The server closes the connection after the terminal event.
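A client can fold these frames into one result summary and stop listening at the terminal event. A sketch using the event names above (`chainCollector` is illustrative):

```javascript
// Accumulate node completions and the terminal outcome of a
// chain-execution stream into a single summary object.
function chainCollector() {
  const summary = { nodes: [], status: null, error: null };
  return {
    summary,
    onFrame(raw) {
      const frame = JSON.parse(raw);
      if (frame.type === 'node_complete') summary.nodes.push(frame.data.node_id);
      else if (frame.type === 'execution_complete') summary.status = frame.data.status;
      else if (frame.type === 'execution_failed') {
        summary.status = 'failed';
        summary.error = frame.data.error;
      }
      return summary;
    },
  };
}
```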

ws/notifications/

Live feed of notifications for the connected user. Fires whenever a new notification is created (scheduled task done, AWX job finished, etc.).

Events:

{
  "type": "notification_created",
  "data": {
    "id": "…",
    "category": "scheduled_task",
    "severity": "warning",
    "title": "Scheduled task failed",
    "body": "…",
    "link": "/scheduled-tasks/…"
  }
}

// When marked read (from another tab or via REST)
{ "type": "notification_read", "data": { "id": "…" } }

// Unread-count changes (cheap counter updates)
{ "type": "unread_count", "data": { "count": 3 } }

The frontend connects to this channel on login and keeps it open for the session — a single connection carries every notification for the user.
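A small reducer can keep an unread badge in sync with these events, treating the cheap counter event as authoritative. A hypothetical sketch (`unreadBadge` is not a Fabrik API):

```javascript
// Update an unread count from notification-channel frames. The
// unread_count event is authoritative; the others adjust locally.
function unreadBadge(count, frame) {
  switch (frame.type) {
    case 'unread_count':         return frame.data.count;
    case 'notification_created': return count + 1;
    case 'notification_read':    return Math.max(0, count - 1);
    default:                     return count;
  }
}
```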

ws/awx/request/<request_id>/

Progress for an AWX AutomationRequest. One request can produce multiple executions (though bulk mode always produces one) — this channel aggregates updates across all of them.

Events:

{ "type": "request_status", "data": { "status": "running", "executions_total": 1, "executions_complete": 0 } }
{ "type": "execution_spawned", "data": { "execution_id": "…", "awx_job_id": 8472 } }
{ "type": "execution_status", "data": { "execution_id": "…", "status": "running" } }
{ "type": "execution_complete", "data": { "execution_id": "…", "status": "successful" } }
{ "type": "request_complete", "data": { "status": "succeeded", "duration_ms": 48291 } }
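The aggregate counters lend themselves to a reducer that folds request-channel frames into one progress summary. A sketch over the sample frames above (`reduceRequest` is illustrative):

```javascript
// Fold request-channel frames into a progress summary. Field names
// follow the sample events; unrecognized types pass through unchanged.
function reduceRequest(state, frame) {
  switch (frame.type) {
    case 'request_status':
      return { ...state, ...frame.data };
    case 'execution_complete':
      return { ...state, executions_complete: state.executions_complete + 1 };
    case 'request_complete':
      return { ...state, status: frame.data.status, done: true };
    default:
      return state;
  }
}
```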

ws/awx/execution/<execution_id>/

Focused on a single AutomationExecution. Streams stdout chunks as AWX produces them, plus lifecycle events. Higher frequency than the request channel — use this when you need tail-style output.

Events:

{ "type": "stdout_chunk", "data": { "offset": 12480, "text": "TASK [Configure BD] *****\nok: [apic-prod]\n" } }
{ "type": "status_changed", "data": { "from": "pending", "to": "running" } }
{ "type": "execution_complete", "data": { "status": "successful", "artifacts": { /* AWX artifacts dict */ } } }
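The `offset` field makes chunk handling idempotent: a client that reconnects and sees a retransmitted chunk can drop what it already holds. A sketch assuming in-order delivery with possible overlaps (`appendChunk` is a hypothetical helper, and the gap handling is an assumption, not documented server behavior):

```javascript
// Append a stdout chunk, using the offset to discard text we
// already hold. A gap means a frame was missed entirely; the
// caller should refetch the full output via REST instead.
function appendChunk(buffer, { offset, text }) {
  if (offset > buffer.length) {
    throw new Error(`missing stdout between ${buffer.length} and ${offset}`);
  }
  const end = offset + text.length;
  if (end <= buffer.length) return buffer;       // duplicate: fully held
  return buffer + text.slice(buffer.length - offset);
}
```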

ws/mim-import/<job_id>/

Progress for a MIM registry install or upload. Large imports can take several minutes — this channel keeps the UI honest.

Events:

{ "type": "phase", "data": { "phase": "download", "message": "Fetching MIM dump for 6.0.8…" } }
{ "type": "phase", "data": { "phase": "parse", "message": "Parsing 1842 classes…" } }
{ "type": "progress", "data": { "step": "classes", "current": 820, "total": 1842 } }
{ "type": "progress", "data": { "step": "properties", "current": 14500, "total": 28400 } }
{ "type": "import_complete", "data": { "version": "6.0.8", "classes": 1842, "properties": 28400, "duration_ms": 94812 } }
{ "type": "import_failed", "data": { "error": "Neo4j out of memory during property import" } }
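For driving a progress UI, the phase and progress frames fold neatly into one state object. A hypothetical tracker (`mimProgress` is illustrative, not part of Fabrik):

```javascript
// Track the latest phase message and per-step completion from the
// phase/progress frames above. Percentages are rounded for display.
function mimProgress(state, frame) {
  if (frame.type === 'phase') {
    return { ...state, phase: frame.data.phase, message: frame.data.message };
  }
  if (frame.type === 'progress') {
    const { step, current, total } = frame.data;
    const pct = total ? Math.round((current / total) * 100) : 0;
    return { ...state, steps: { ...state.steps, [step]: pct } };
  }
  if (frame.type === 'import_complete') return { ...state, done: true };
  return state;
}
```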

Lifecycle and errors

Connection close codes:

Code  Meaning
1000  Normal close. The server finished what it was streaming.
4001  Ticket invalid or expired. Get a new ticket and reconnect.
4003  Authenticated but not authorized for this resource (e.g. job owned by another user).
4004  Resource not found. Job ID or execution ID doesn't exist.
1011  Server error. Check backend logs.

Reconnection strategy: For long-running channels (ws/notifications/), the frontend reconnects automatically with exponential backoff (1s, 2s, 4s, up to 30s). For short-lived channels (ws/chain-execution/, ws/mim-import/), reconnecting after a drop means you might miss events — fetch the current state from the corresponding REST endpoint to fill the gap.
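The backoff schedule is easy to compute directly. A sketch matching the 1s, 2s, 4s, 30s-cap schedule above (`backoffMs` is an illustrative name):

```javascript
// Delay before reconnect attempt N (zero-based): doubles each try,
// capped at 30 seconds.
function backoffMs(attempt) {
  return Math.min(1000 * 2 ** attempt, 30000);
}
```

Remember that each reconnect also needs a fresh ticket from POST /api/ws-ticket/, since tickets are single-use.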

Heartbeats

The server sends a keepalive frame every 30 seconds:

{ "type": "ping", "timestamp": "…" }

Clients don't need to reply — it exists to keep intermediate proxies from timing out idle connections. Absence of a ping for 60 seconds is a reliable signal that the connection is dead; drop and reconnect.
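A dead-man timer captures the 60-second rule. A sketch with an injectable clock for testability (`PingWatchdog` is illustrative, not part of Fabrik's frontend):

```javascript
// Dead-man switch for the 30s server pings: if no ping arrives for
// 60 seconds, treat the connection as dead and reconnect.
class PingWatchdog {
  constructor(timeoutMs = 60000) {
    this.timeoutMs = timeoutMs;
    this.lastPing = Date.now();
  }
  // Call on every frame with type "ping".
  seen(now = Date.now()) { this.lastPing = now; }
  // Poll on an interval; true means drop the socket and reconnect.
  isDead(now = Date.now()) { return now - this.lastPing > this.timeoutMs; }
}
```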

Testing WebSocket endpoints

Minimal browser console example:

// 1. Get a ticket
const { ticket } = await fetch('/api/ws-ticket/', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${accessToken}` }
}).then(r => r.json());

// 2. Open the channel
const ws = new WebSocket(`wss://${location.host}/ws/notifications/?ticket=${ticket}`);

ws.onmessage = ev => console.log(JSON.parse(ev.data));
ws.onclose = ev => console.log('closed', ev.code, ev.reason);

For CLI testing, websocat works well:

TICKET=$(curl -sX POST -H "Authorization: Bearer $TOKEN" \
  https://fabrik.example.com/api/ws-ticket/ | jq -r .ticket)

websocat "wss://fabrik.example.com/ws/notifications/?ticket=$TICKET"