Scheduled queries
Run a saved query on a clock — hourly, daily, weekly, monthly, or at a specific one-time date — across one or more APIC connections, with retries and execution history.
The canvas is for iteration — you build, run, tweak. Once a query is settled, you usually don't want to keep clicking Run. You want it to fire every morning at 07:00, or once a week, or every hour on the minute, and drop the results somewhere you can pick them up later.
That's what scheduled queries do. A schedule takes a saved query plus a cadence plus one or more APIC connections and produces an unattended execution on Fabrik's Celery workers.
Prerequisites
- The query must be saved. Scheduling a canvas draft isn't an option — the schedule carries a saved-query ID, not an inline graph. Save first (see Saving and sharing).
- The saved query can be a template. If it declares variables, the schedule captures their values at configuration time.
Everything else — filter state, post-processors, pipeline stages, pagination settings — is read from the saved query when the schedule fires. Updating the saved query updates what future runs execute.
Creating a schedule
Scheduled tasks are managed from the Tasks page in the main navigation. New Task opens a dialog that maps one-to-one onto what the backend needs.
Basics
| Field | Notes |
|---|---|
| Task name (required) | Shown in the task list, logs, and notification emails. |
| Description | Free-form; what the task is for. |
| Priority | Low, Medium (default), High. Affects execution order when multiple tasks fire simultaneously. |
| Saved query (required) | Picker over every saved query you can access. Templates show up too — if a template is picked, variable inputs appear below. |
APIC connections
Scheduled tasks run across one or more APIC connections. Tick every connection you want the query to run against — the task forks one execution per connection and each produces its own result record.
This is the unattended analogue of picking an APIC on the Start node: the schedule can hit several fabrics in a single configured run.
At least one connection is required; the task can't be saved without one.
Frequency
Five options; no cron syntax — Fabrik stays friendlier.
| Frequency | Extra fields | Means |
|---|---|---|
| Once | Scheduled datetime | Single-shot run at a specific wall-clock moment. |
| Hourly | Minute of hour (0–59) | Every hour on that minute. |
| Daily | Time of day | Every day at that time. |
| Weekly | Day of week + time of day | Every week on that day at that time. |
| Monthly | Day of month (1–31) + time of day | Every month on that day. Months without that day (e.g. Feb 31) clamp to the last valid day. |
Timezone. Schedules resolve in the task's timezone, not the server's. Your user timezone (from Settings) prefills the picker, but you can override per-task — useful when a fabric in another region should run on local business hours.
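The monthly clamping and per-task timezone rules can be sketched as follows. This is a minimal illustration using Python's standard library; `next_monthly_run` and its parameters are hypothetical names for this doc, not Fabrik's actual scheduler code:

```python
import calendar
from datetime import datetime
from zoneinfo import ZoneInfo

def next_monthly_run(after: datetime, day: int, hour: int, minute: int,
                     tz: str) -> datetime:
    """Next run for a Monthly schedule, resolved in the task's timezone.

    Days past the end of a month (e.g. 31 in February) clamp to the
    month's last valid day.
    """
    zone = ZoneInfo(tz)           # task timezone, not the server's
    local = after.astimezone(zone)
    year, month = local.year, local.month
    while True:
        last_day = calendar.monthrange(year, month)[1]
        candidate = datetime(year, month, min(day, last_day),
                             hour, minute, tzinfo=zone)
        if candidate > local:
            return candidate
        month += 1                # this month's slot already passed
        if month == 13:
            month, year = 1, year + 1
```

A schedule for "day 31 at 02:00" evaluated on 1 February therefore lands on 28 or 29 February, depending on the year, rather than skipping the month.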
Variable values (templates only)
When the selected saved query is a template, the form grows a section listing every declared variable. You fill in a value per variable; these get baked into the schedule and re-used on every run. Editing the source template's variable defaults does not retroactively change the scheduled values — the schedule owns its copy.
To change a schedule's values, edit the schedule directly.
Retry
Transient failures (APIC hiccup, network blip, token refresh race) are common enough that retries are a first-class option.
- Enable automatic retry on failure — master toggle.
- Retry count — how many attempts on top of the initial run. Default 3.
- Retry interval (minutes) — wait between attempts. Default 5.
Retries only kick in for failures, not for successful-but-empty results. A query that returns zero rows is still a success.
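The retry semantics can be sketched like this (assumed names, not Fabrik's worker code): only a raised error triggers another attempt, a zero-row result returns normally, and the attempt counter matches the retry attempt number recorded in history.

```python
import time

def run_with_retry(execute, retry_count=3, retry_interval_min=5):
    """Run one execution, retrying only on failure.

    Returns (attempt_number, result): 0 means the initial run succeeded.
    An empty result is still a success and is never retried.
    """
    attempt = 0
    while True:
        try:
            return attempt, execute()
        except Exception:
            if attempt >= retry_count:
                raise              # retries exhausted; record stays failed
            attempt += 1
            time.sleep(retry_interval_min * 60)
```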
Log retention
Days to retain logs controls how long this task's execution records stick around before the cleanup job deletes them. Default 30 days. Set it higher for tasks you want to audit long-term, lower for noisy every-minute health checks.
Retention applies to the execution record itself — timestamp, status, row count, error message if any. The result payload can be large; it's kept alongside the record for the same duration.
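A minimal sketch of how a retention window selects records for cleanup, assuming dict-shaped records with a `completed_at` timestamp (hypothetical shapes; the payload lives on the same record, so deleting the record drops both):

```python
from datetime import datetime, timedelta, timezone

def expired(executions, retain_days):
    """Return execution records older than the task's retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retain_days)
    return [e for e in executions if e["completed_at"] < cutoff]
```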
What happens when a task fires
The Celery beat scheduler polls the database once per minute and compares each task's `next_run_at` to the current time. When one is due:
- An execution record is created in `pending` status — one per APIC connection.
- Each record is handed to a Celery worker, which transitions it to `running`, opens an APIC session, and runs the saved query's pipeline (optimizer, APIC call, post-processors).
- The result and row count land on the record; status becomes `success`.
- If the query raises, the record is marked `failed` with the error type, message, and traceback; retry kicks in if configured.
- The task's `next_run_at` is recomputed for the next tick.
One task, N connections ticked → N execution records per fire. The Tasks page shows them grouped under the parent task.
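The per-connection fan-out can be sketched as a single beat tick. The in-memory shapes here are hypothetical, standing in for the real Celery/ORM plumbing:

```python
from datetime import datetime, timezone

def fire_due_tasks(tasks, now=None):
    """One scheduler tick: for every due, active task, create one
    pending execution record per ticked APIC connection."""
    now = now or datetime.now(timezone.utc)
    records = []
    for task in tasks:
        if task["status"] != "active" or task["next_run_at"] > now:
            continue               # paused/disabled or not yet due
        for conn in task["connections"]:
            records.append({"task": task["name"],
                            "connection": conn,
                            "status": "pending"})
    return records
```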
The execution history
Every fire leaves a trail. Open a task and the Executions tab lists every run, newest first:
- Timestamp (started / completed).
- APIC connection name (denormalised, so renames don't break history).
- Status: `pending`, `running`, `success`, `failed`, `cancelled`.
- Row count returned.
- Execution time in milliseconds.
- Retry attempt number (`0` for the initial run, `1+` for retries).
- Error message and traceback for failures.
Click a row and the stored result opens in the same results panel you use for interactive runs — same tabs (Response / Table), same export options.
Status — Active, Paused, Disabled
Three states control whether a task fires.
| Status | What it means |
|---|---|
| Active | Default on create. Fires on schedule, retries on failure. |
| Paused | Skipped by the scheduler. Kept intact; flipping back to Active recomputes the next run. |
| Disabled | Hard-off. Typically set by an admin; conveys "don't run this under any circumstance." |
Pausing is the right tool for "we're deploying tonight, quiet this down for the next few hours." Disabling is the right tool for "we're retiring this workflow, don't restart it."
Permissions
Fabrik's scheduled-task permission model is straightforward:
- Create. Any authenticated user can create tasks against queries they can access.
- Edit / delete. Only the task's creator or an admin.
- Execute on demand. Same as edit — only creator or admin can trigger a manual run.
- View history. Anyone who can see the task can see its executions.
The admin-only special case: system tasks (MIM sync, time machine cleanup, AWX polling) are marked `is_system_task` and can only be edited by admins. Regular users see them in the task list but can't touch them.
Running on demand
Sometimes you want to trigger a scheduled task outside its cadence — for a test run after a schedule change, or to backfill a missed window. The task detail view has a Run Now button that queues the same Celery pipeline the scheduler would use, against all configured connections.
A manual run shows up in the history like any other execution, with a flag indicating it was triggered manually.
Common patterns
Morning health check. Daily at 06:00, query every fault object across all fabrics. Ticks every connection, emits a single fault-digest result per connection.
Drift snapshot. Query with Time Machine on, daily at 02:00. Snapshots stack up; drift detection compares them. Retention 90+ days for compliance-style audits.
Per-tenant report. Template with a tenant variable. Schedule one task per tenant, each filling the variable differently. Same query, different filter values.
High-frequency liveness. Hourly task, one APIC connection, minimal result size. Retry enabled with a short interval. Treat the failure count on the task detail as the liveness indicator.
Troubleshooting
Scheduled-task surprises that come up often:
- "The task never fires." Check status — it's probably Paused or Disabled. If Active, verify `next_run_at` is in the future but reasonably close — if it's months away, the timezone or frequency is wrong.
- "It fired at the wrong time." Task timezone ≠ your viewing timezone. The schedule resolves in the task's own timezone field, not in UI display time. Edit the task to confirm.
- "A template task ran with wrong variable values." The schedule captures variable values at configuration time. Changing the template's defaults later doesn't update scheduled tasks already created against it — edit each task.
- "Retry didn't kick in." Retry is opt-in. Edit the task, enable Retry, set count and interval, save.
- "One APIC succeeded, another failed — is the task failed?" No. Each (task × connection) is its own execution record. The task is healthy; one connection's record is failed. The failure count on the task summarises across all connections and executions.
- "I can't delete a task." Only the creator or an admin can. System tasks can't be deleted at all — they're platform-managed.
- "The scheduler seems slow." Celery beat polls once per minute. A task due at 09:00:00 can fire anywhere in the 09:00–09:01 window. Sub-minute precision isn't supported (and cron wouldn't give you it reliably either).
That's the Query Builder. From canvas mechanics through saving, sharing, and scheduling, you now have every lever. The next sections — Class Browser, AWX Automation, Time Machine — build on top of queries: they assume you know how to make one.
Saving and sharing
Persist queries to the server, organise them by category and tags, mark them as templates, and share them with the rest of the team.
Class Browser
The searchable, MIM-backed picker that turns the entire ACI Managed Information Model into a five-second lookup — for novices, operators, power users, and senior experts alike.