AI builder
Configure AI providers, fetch available models, test credentials, and drive natural-language query generation — endpoints under /api/ai/.
Two ViewSets under /api/ai/:
- /api/ai/settings/ — platform-wide toggles (admin only, plus a read-all form).
- /api/ai/provider/ — the authenticated user's personal provider config (BYOK).
The conversion "natural language → query canvas" itself happens through /api/queries/saved-queries/.../execute/ and other existing query endpoints — this surface just configures which LLM backs the generation.
Providers
Fabrik supports seven provider backends. Each has a fixed set of required fields:
| Provider | Needs API key | Needs base URL | Needs deployment name |
|---|---|---|---|
| openai | ✅ | — | — |
| anthropic | ✅ | — | — |
| azure_openai | ✅ | ✅ | ✅ |
| google | ✅ | — | — |
| groq | ✅ | — | — |
| openrouter | ✅ | — | — |
| ollama | — | ✅ | — |
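The table above maps directly to client-side validation: before saving a config, check that every field the chosen provider requires is present. A minimal sketch (the field names mirror the PUT /api/ai/provider/ payloads shown later; the helper itself is illustrative, not part of the API):

```python
# Required fields per provider, mirroring the table above.
REQUIRED_FIELDS = {
    "openai":       {"api_key"},
    "anthropic":    {"api_key"},
    "azure_openai": {"api_key", "base_url", "azure_deployment_name"},
    "google":       {"api_key"},
    "groq":         {"api_key"},
    "openrouter":   {"api_key"},
    "ollama":       {"base_url"},
}

def missing_fields(provider: str, config: dict) -> set:
    """Return the required fields that are absent (or empty) in a config dict."""
    required = REQUIRED_FIELDS.get(provider, set())
    return {field for field in required if not config.get(field)}
```

A form can call this on every keystroke and disable the save button until the set comes back empty.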
GET /api/ai/providers/
Enumerate providers available on this deployment. Returns static metadata — display name, default model, required fields. Useful for building a provider picker.
Response 200:
{
"providers": [
{
"id": "openai",
"display_name": "OpenAI",
"requires_api_key": true,
"requires_base_url": false,
"default_model": "gpt-4o-mini",
"fallback_models": ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo"]
},
{ "id": "ollama", "display_name": "Ollama", "requires_api_key": false, "requires_base_url": true, "default_model": "llama3.1", "fallback_models": [...] }
]
}

User provider config
GET /api/ai/provider/
Current user's provider. Returns null if the user has no override (the platform default applies).
Response 200:
{
"provider": "anthropic",
"model": "claude-sonnet-4-6",
"base_url": null,
"azure_deployment_name": null,
"has_api_key": true,
"is_configured": true,
"updated_at": "2026-04-22T08:00:00Z"
}

has_api_key tells you whether an encrypted key is stored; the key itself is never returned.
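Because the response can be null (no override) or a config that is saved but incomplete, a client typically reduces it to a display label. A sketch of that logic (the function name is illustrative):

```python
def describe_provider(response_json) -> str:
    """Summarize a GET /api/ai/provider/ response body for display.

    A null body means the user has no override, so the platform
    default applies.
    """
    if response_json is None:
        return "platform default"
    label = f"{response_json['provider']} / {response_json['model']}"
    if not response_json.get("is_configured"):
        label += " (incomplete)"
    return label
```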
PUT /api/ai/provider/
Replace the user's provider config.
Request:
{
"provider": "openai",
"model": "gpt-4o-mini",
"api_key": "sk-…" // omit to keep existing key
}

Azure example:
{
"provider": "azure_openai",
"model": "my-deployment",
"base_url": "https://resource.openai.azure.com",
"azure_deployment_name": "my-deployment",
"api_key": "…"
}

DELETE /api/ai/provider/
Clear the user's override and fall back to the platform default.
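The "omit api_key to keep the existing key" rule means the PUT body must be built conditionally; sending an empty string would overwrite the stored key. A sketch of a payload builder (names are assumptions, not part of the API):

```python
def build_put_payload(provider: str, model: str, api_key=None, **extra) -> dict:
    """Build a PUT /api/ai/provider/ request body.

    Omitting api_key keeps the stored key, so the field is only
    included when the user actually typed a new one. Provider-specific
    fields (base_url, azure_deployment_name) pass through via **extra.
    """
    payload = {"provider": provider, "model": model, **extra}
    if api_key:  # None or "" both mean "keep existing key"
        payload["api_key"] = api_key
    return payload
```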
GET /api/ai/provider/models/?provider=<id>&api_key=<optional>
Live fetch of available models for a provider. Uses the saved API key if api_key is omitted, or the given one if passed (for validating a fresh key without saving it).
Response 200:
{
"source": "live",
"models": ["gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo"],
"recommended": "gpt-4o-mini"
}

source is live when the list came from the provider API, fallback when the fetch failed and the hardcoded default list is returned. The frontend shows a "Live" / "Defaults" badge matching this value.
The live fetch requires the API key to be at least 8 characters long — shorter strings are clearly invalid and would just produce a 4xx. This check avoids burning retry budget on typos.
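Both behaviors above are easy to mirror on the client: pre-check the key length before calling the endpoint, and map source to the badge text. A minimal sketch (the 8-character minimum comes from the note above; the function names are illustrative):

```python
MIN_KEY_LENGTH = 8  # mirrors the server-side sanity check described above

def can_live_fetch(api_key: str) -> bool:
    """Pre-validate a key before calling GET /api/ai/provider/models/."""
    return len(api_key) >= MIN_KEY_LENGTH

def models_badge(response_json: dict) -> str:
    """Map the response's `source` field to the badge the frontend shows."""
    return "Live" if response_json.get("source") == "live" else "Defaults"
```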
POST /api/ai/provider/test/
Send a one-word prompt to the configured provider and report the result. Uses the in-form config (including a newly-typed-but-not-saved key) if provided, otherwise the saved config.
Request:
{
"provider": "anthropic",
"model": "claude-sonnet-4-6",
"api_key": "…"
}

Response 200 (success):
{ "status": "success", "message": "Hello!", "latency_ms": 420 }

Response 200 (failure):

{ "status": "failure", "message": "401 Unauthorized — invalid API key" }

Always 200 — inspect status to branch.
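Since the endpoint always returns HTTP 200, the client must branch on the status field in the body rather than the HTTP code. A sketch of that branching (the helper name is an assumption):

```python
def summarize_test(result: dict) -> str:
    """Turn a POST /api/ai/provider/test/ response body into one line.

    The endpoint always returns HTTP 200; branch on `status`,
    never on the HTTP status code.
    """
    if result["status"] == "success":
        return f"OK in {result['latency_ms']} ms"
    return f"Failed: {result['message']}"
```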
Platform AI settings (admin)
GET /api/ai/settings/
Readable by all authenticated users: it tells the UI whether the AI builder is enabled and what the platform defaults are.
Response 200:
{
"ai_builder_enabled": true,
"platform_default_provider": "anthropic",
"platform_default_model": "claude-sonnet-4-6",
"prompt_style": "concise",
"verbosity": "normal",
"default_complexity": "auto"
}

PATCH /api/ai/settings/
Admin only. Change any field above. The ai_builder_enabled flag is the master switch — turning it off hides the AI button in the query builder UI for every user.
Changing platform_default_provider or platform_default_model takes effect immediately for every user without their own override. Those users' AI builder requests start hitting the new backend on their next call; notify them first if cost or latency changes meaningfully.
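The resolution order described above (master switch, then user override, then platform default) can be sketched as a single function. This is an illustrative client-side model built from the two response bodies shown earlier, not server code:

```python
def effective_backend(user_config, settings: dict):
    """Resolve which provider/model serves a user's AI builder request.

    user_config is the GET /api/ai/provider/ body (None when the user
    has no override); settings is the GET /api/ai/settings/ body.
    Returns None when the AI builder is disabled platform-wide.
    """
    if not settings["ai_builder_enabled"]:
        return None  # master switch off: AI builder hidden for everyone
    if user_config and user_config.get("is_configured"):
        return user_config["provider"], user_config["model"]
    return settings["platform_default_provider"], settings["platform_default_model"]
```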