AI provider
Configure the AI model that powers the AI query builder — OpenAI, Anthropic, Azure OpenAI, Google, Groq, OpenRouter, or a local Ollama instance. Per-user API keys supported.
The AI query builder converts natural-language descriptions into query-builder canvases. For that to work, Fabrik needs an AI provider — an LLM endpoint to send prompts to. The AI provider page is where each user picks their own provider and model, overriding any platform default.
Supported providers
Seven backends are listed in the provider grid, each with a brand icon, display name, and the default model it ships with:
- OpenAI — GPT-family models via the public API. Requires an API key.
- Anthropic — Claude models via the Anthropic API. Requires an API key.
- Azure OpenAI — OpenAI models hosted in Azure. Requires an API key, an endpoint URL, and a deployment name.
- Google — Gemini models. Requires an API key.
- Groq — fast inference for open-weight models. Requires an API key.
- OpenRouter — gateway to many providers with a single key. Requires an API key.
- Ollama — local model server. No API key; requires the Ollama base URL (default http://localhost:11434).
The list is driven by the backend — GET /api/ai/providers/ returns provider metadata (display name, default model, whether an API key / base URL / Azure deployment name is needed). The UI renders one card per provider.
Selecting a provider
Click a provider card to pick it. The card highlights, and the configuration panel below expands with the fields that provider needs:
- API Key — password field with a show/hide toggle. Shown only if the provider requires it.
- Server URL / Azure Endpoint URL — shown only for providers that need a base URL (Ollama, Azure).
- Deployment Name — shown only for Azure OpenAI.
- Model — dropdown, populated dynamically (see below).
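The field-visibility rule above can be sketched as a small function. This is an illustrative sketch, not the actual UI code: the metadata field names (`requires_api_key`, `requires_base_url`, `requires_deployment_name`) are assumptions standing in for whatever GET /api/ai/providers/ really returns.

```python
# Illustrative provider metadata; field names are assumptions, not the real API contract.
PROVIDERS = {
    "openai": {"requires_api_key": True, "requires_base_url": False, "requires_deployment_name": False},
    "azure":  {"requires_api_key": True, "requires_base_url": True,  "requires_deployment_name": True},
    "ollama": {"requires_api_key": False, "requires_base_url": True, "requires_deployment_name": False},
}

def visible_fields(provider: str) -> list[str]:
    """Return the config fields the panel would render for a provider."""
    meta = PROVIDERS[provider]
    fields = []
    if meta["requires_api_key"]:
        fields.append("api_key")
    if meta["requires_base_url"]:
        fields.append("base_url")
    if meta["requires_deployment_name"]:
        fields.append("deployment_name")
    fields.append("model")  # the model dropdown is always shown
    return fields

print(visible_fields("azure"))   # ['api_key', 'base_url', 'deployment_name', 'model']
print(visible_fields("ollama"))  # ['base_url', 'model']
```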
Any previously saved provider keeps a small green dot on its card, so you can tell at a glance which one you're currently using even when a different card is highlighted for editing.
API key handling
API keys are stored encrypted in the database and never sent back to the UI after save. The input field shows •••••••• as a placeholder once a key is saved; leave it empty on subsequent edits to keep the existing key, or type a new one to replace it.
The Configured badge appears on the card header once a provider has a saved key. You can switch providers freely — each provider's key is stored separately, so coming back to a previously-configured provider doesn't lose its key.
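The two rules above — empty input keeps the saved key, and each provider's key lives in its own slot — can be sketched as follows. The dict-based store is a simplification for illustration; the real backend stores keys encrypted.

```python
def merge_api_key(saved_keys: dict[str, str], provider: str, typed: str) -> dict[str, str]:
    """Return an updated key store: an empty input keeps the existing key."""
    updated = dict(saved_keys)
    if typed:                  # a newly typed key replaces the old one
        updated[provider] = typed
    return updated             # empty input: no change, saved key survives

keys = {"openai": "sk-old"}
keys = merge_api_key(keys, "openai", "")          # field left empty -> key kept
keys = merge_api_key(keys, "anthropic", "sk-a")   # second provider stored separately
print(keys)  # {'openai': 'sk-old', 'anthropic': 'sk-a'}
```

Because keys are stored per provider, switching away and back never clears anything.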
Provider API calls cost real money. The AI builder counts against your daily AI limit (ai_analysis_daily quota, see groups and quotas), but that's a count limit, not a cost limit. A generous quota with a cheap model is safe; a generous quota with gpt-4 can add up fast. Be deliberate about which model you pick.
Model selection
The model dropdown tries to show the live model list from the provider — a real API call to the provider that enumerates what's available to your key.
Live fetch fires automatically when:
- You type enough of an API key (≥8 characters) for a request to plausibly succeed.
- You already have a saved key for this provider.
- The provider doesn't need a key (Ollama).
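The three trigger conditions above amount to a small predicate. A sketch, with the 8-character threshold taken from the behavior described here:

```python
def should_fetch_live_models(requires_key: bool, typed_key: str, has_saved_key: bool) -> bool:
    """Mirror the three conditions that trigger a live model-list fetch."""
    if not requires_key:            # e.g. Ollama: no key needed at all
        return True
    if has_saved_key:               # a saved key can authenticate the request
        return True
    return len(typed_key) >= 8      # enough key typed to plausibly succeed

print(should_fetch_live_models(False, "", False))      # True  (Ollama-style provider)
print(should_fetch_live_models(True, "sk-", False))    # False (key too short, none saved)
print(should_fetch_live_models(True, "", True))        # True  (saved key exists)
```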
A small badge next to the Model label shows where the list came from:
- Live (green) — the list came from the provider API.
- Defaults (gray) — the API fetch didn't happen or failed; the list is a hardcoded set of common models for that provider.
A Refresh button forces a fresh fetch — useful after rotating a key or adding a new Azure deployment.
The provider's recommended model is labeled (recommended) in the dropdown. If you pick a different model, save it. If your current choice disappears from a refreshed list, the dropdown automatically falls back to the recommended model so the UI doesn't get stuck on a missing value.
Testing the provider
Before saving, click Test Connection (visible once a provider is selected). The backend sends a one-word prompt to the provider with the current config and reports back:
- Green banner — provider responded successfully. Includes a one-line message, usually echoing the reply.
- Red banner — failure. Includes the error message from the provider: invalid key, model not found, quota exceeded, network error, and so on.
Testing uses the config in the form — if you haven't typed a key but have one saved, the saved key is used. If you've typed a new key, that new key is tested. This means you can validate a fresh key without saving it first.
Saving
Click Save to persist:
- The selected provider.
- Any newly-typed API key (existing key kept if field is empty).
- The base URL and deployment name (for providers that need them).
- The selected model.
Save takes effect immediately. Your next AI builder request uses the new config. If anything is misconfigured, you'll see an error at query time — testing before save catches most of these ahead of the real request.
Per-user vs. platform default
Deployments can configure a platform default provider that applies to every user who hasn't set their own. The flow:
- User fires an AI request.
- Backend looks up user_ai_provider — if present and valid, use it.
- Otherwise fall back to the platform default.
- If neither exists, the AI builder returns a "no AI provider configured" error.
Platform defaults are an admin concern; this page only shows your personal override. If you want to remove your override and fall back to the platform default, clear the provider selection and save — the user-level config is deleted.
Ollama specifics
Ollama is the no-API-key option. Running Ollama on your own hardware means:
- No per-call cost.
- Your prompts never leave your network.
- You pick whichever open-weight model fits (Llama, Mistral, Qwen, etc.).
1. Install and start Ollama (curl -fsSL https://ollama.com/install.sh | sh, then ollama serve).
2. Pull a model: ollama pull llama3.1 (or whatever you want).
3. In Fabrik, pick Ollama. The base URL defaults to http://localhost:11434 — change it to your actual Ollama host if it's running elsewhere.
4. The live model list should populate from your Ollama instance. Pick one and save.
If Fabrik runs in Docker and Ollama runs on the host, localhost in the Fabrik container means the container, not your host. Use your host's LAN IP or host.docker.internal (on Mac/Windows) as the base URL so the backend can reach Ollama across the container boundary.
Additional AI settings
Below the provider grid, extra switches control the AI builder's behavior:
- Enable AI builder — master switch. Off means the AI button is hidden in the query builder. Still respects the admin-level can_use_ai_builder quota toggle.
- Other options (prompt style, verbosity, default complexity) may be exposed depending on the deployment. They save the same way — a change fires a PATCH against /api/ai/settings/.
Changes apply immediately to your next AI request.
Troubleshooting
AI provider issues that come up often:
- "Test shows green but the AI builder errors." The test sends a single small prompt; the real builder sends larger, structured prompts. A test that passes with a tight token budget can still fail on real calls. Check the provider's usage dashboard for rate/token limits.
- "Model dropdown shows Defaults, not Live." Either you haven't typed enough of the API key yet, the provider's list endpoint rejected the request, or the provider returned an empty list. Click Refresh after the key is fully typed.
- "Azure OpenAI returns 'deployment not found.'" Azure requires both the endpoint URL (your resource's root) and the deployment name. Get the deployment name from the Azure portal — it's not the model name.
- "Ollama reaches the server but the model list is empty." The Ollama host is up but has no models pulled. Run ollama pull <model> on the Ollama host, then click Refresh in Fabrik.
- "Daily AI limit hit." That's the ai_analysis_daily quota from your group. It resets at midnight UTC. If the limit is too low for your work, ask an admin to raise it — the quota lives on the group, not your user.
- "I cleared my provider and the AI button is still gone." can_use_ai_builder (group quota) might be off. Clearing your provider falls back to the platform default; if the platform also has no default configured, the AI builder shows a "not configured" state.
Appearance
Theme, display timezone, date and time formats, and Class Browser detail-panel toggles — the knobs that shape how Fabrik looks to you.
Deployment
Self-host Fabrik on your own infrastructure with Docker Compose — nine services, clearly scoped dependencies, and a production topology you can reason about.