Fabrik Query Builder

Pipelines

Feed the output of one query into the filters of the next — multi-stage APIC queries with filter injection, DN scoping, or per-value iteration.

A single query says "give me every X matching Y." A pipeline says "give me every X matching Y, then for each of those, give me the Z underneath." The second query depends on the first — you can't write the filter until the first query has returned.

Fabrik pipelines wire that dependency into the canvas. The output of an upstream stage becomes the filter input of a downstream stage, automatically, at run time.

The pipeline edge

A pipeline edge is the one structural exception to Fabrik's single-outgoing-edge rule. It connects an Output or Post-Processor on the source side to a Class node on the target side — starting a new query stage that reads the previous stage's results.

Visually, it's drawn differently from a containment edge: dashed line with a small label showing the extract field (usually dn). Click the edge and the right panel opens the Pipeline Connection configuration.

Every stage — the nodes grouped by a containment chain — is one query against APIC. Fabrik runs them in topological order, waiting for each to finish before starting the next.
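The stage-ordering behaviour can be sketched with Python's standard-library `graphlib`. The stage names, dependency graph, and `run_stage` stub below are illustrative assumptions, not Fabrik's real internals:

```python
from graphlib import TopologicalSorter

# Hypothetical three-stage pipeline: each key lists the stages it depends on.
stages = {
    "tenants": set(),                # stage 1: no upstream dependency
    "bridge_domains": {"tenants"},   # stage 2 reads stage 1's output
    "subnets": {"bridge_domains"},   # stage 3 reads stage 2's output
}

def run_stage(name, upstream_results):
    # Placeholder for one APIC query; returns that stage's rows.
    return [f"{name}-row"]

results = {}
# static_order() yields every stage after all of its dependencies,
# so each stage runs only once its upstream results exist.
for name in TopologicalSorter(stages).static_order():
    upstream = {dep: results[dep] for dep in stages[name]}
    results[name] = run_stage(name, upstream)
```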

Drawing a pipeline stage

Finish the first stage

Build a normal query: Start → <class> → (optional filters / post-processors) → Output. Run it once if you want to see the shape of the data you'll be feeding forward.

Drag from the pipeline-out handle

The Output node has a handle on its right edge dedicated to pipeline-out. Drag from it and release on empty canvas. The Add Node menu offers Pipeline Stage — pick it and the Class Browser opens.

Pick the downstream class

Unlike a Child Class pick, the Class Browser in pipeline mode isn't scoped to a containment hierarchy — you can target any class you want. The upstream and downstream classes don't need a parent/child relationship in the MIM; they just need a shared field (usually dn) to link them.

Configure the edge

Click the dashed pipeline edge. The right panel shows Pipeline Connection with three fields: extract field, injection mode, and (for one mode) a target filter property. The defaults are usually right; tweak them when you need to.

Terminate the stage

The downstream class needs its own Output. Drag from its right handle, pick Output from the menu. The graph now has two stages, each with its own terminator.

The three injection modes

The pipeline edge carries one piece of metadata that matters: how to turn upstream values into downstream filters. There are three modes, and picking the right one is the difference between a pipeline that runs in two seconds and one that runs in two minutes.

Filter Values (default)

Best for small-to-medium sets — up to ~200 values.

Fabrik extracts the chosen field from every upstream row, then builds a single APIC filter expression of the form or(eq(prop,"v1"),eq(prop,"v2"),…) and sends one downstream query. One APIC round trip regardless of how many upstream rows there were.
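A minimal sketch of that expression building, assuming the extracted values arrive as a plain list (the function name is illustrative, not Fabrik's API):

```python
def build_filter_values(prop, values):
    """Collapse upstream values into one APIC filter expression (sketch)."""
    terms = [f'eq({prop},"{v}")' for v in values]
    if len(terms) == 1:
        return terms[0]  # a single value needs no or() wrapper
    return f'or({",".join(terms)})'

expr = build_filter_values("fvBD.dn", ["uni/tn-a/BD-web", "uni/tn-b/BD-db"])
# expr == 'or(eq(fvBD.dn,"uni/tn-a/BD-web"),eq(fvBD.dn,"uni/tn-b/BD-db"))'
```

The whole set travels in one query string, which is why this mode costs a single round trip but runs into URL-length limits as the set grows.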

When to use.

  • "Give me the bridge domains attached to these specific tenants."
  • "Show status on these 30 interfaces."
  • Anything where the upstream set is modest and you need everything back in one response.

Target filter property. The filter is built against {downstreamClass}.dn by default. Override in the Pipeline Connection panel if you want to filter on a different property (for example fvBD.name instead of fvBD.dn).

Hard limit. 200 values. Above that, Fabrik truncates with a warning in the logs — APIC URLs get unwieldy and some fabrics return 414.

DN Scope

Best when you need children of specific parents — up to ~50 DNs.

Fabrik iterates the upstream DNs and for each one issues an MO subtree query (/api/mo/<dn>.json?query-target=subtree&target-subtree-class=<cls>), merging the results. One query per upstream DN, N round trips, N result sets stitched together.
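The per-DN URL construction can be sketched like this (a simplification: the real executor also issues the requests and merges the responses):

```python
def subtree_queries(dns, target_class):
    """Build one MO subtree query per upstream DN (sketch of dn_scope mode)."""
    return [
        f"/api/mo/{dn}.json"
        f"?query-target=subtree&target-subtree-class={target_class}"
        for dn in dns
    ]

urls = subtree_queries(["uni/tn-prod", "uni/tn-dev"], "fvSubnet")
# Two URLs, hence two APIC round trips whose results get stitched together.
```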

When to use.

  • "For each of these tenants, give me every subnet underneath."
  • "For each EPG, list every endpoint attached."
  • The downstream class lives under the upstream DN in the ACI hierarchy.

Hard limit. 50 DNs. Above that, Fabrik truncates — latency becomes the bottleneck rather than result size.

Iterate

Most flexible, slowest — up to ~100 values.

For each upstream value, Fabrik re-runs the entire downstream stage from scratch with the value substituted into the stage's filters. N round trips, N separate result sets, merged at the end.

When to use. When neither filter_values nor dn_scope fits — you need the downstream query to actually be parameterised per upstream value, not just filtered.

Hard limit. 100 values. Iterate is the slowest mode; keep the upstream set small.
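The per-value re-run can be sketched as a loop that rewrites the stage's filters before each execution. The `${value}` substitution matches the placeholder described under Troubleshooting; `run_query` is a stand-in for the downstream stage:

```python
def run_iterate(values, stage_filters, run_query):
    """Re-run the downstream stage once per upstream value (sketch)."""
    merged = []
    for v in values:
        # Substitute the upstream value into every ${value} placeholder,
        # then execute the whole stage with the rewritten filters.
        filters = [f.replace("${value}", v) for f in stage_filters]
        merged.extend(run_query(filters))
    return merged
```

N values means N full executions of the stage, which is why this mode is the slowest of the three.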

Extract field

Which value to pluck from each upstream row. Default: dn.

Switch it when you want a different column to drive the next stage — for example name on a tenant result to look up by human-friendly name, or a custom attribute like ip you've extracted earlier with a Post-Processor.

The extract operation walks the same envelope-aware path resolver the Post-Processors use: attributes.dn and bare dn both work, regardless of whether the row is wrapped in an APIC class envelope.
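An envelope-aware resolver might look like the following sketch. It handles both shapes mentioned above: a row wrapped in an APIC class envelope (`{"fvTenant": {"attributes": {…}}}`) and a bare row. This is an assumed reconstruction, not Fabrik's actual resolver:

```python
def extract_field(row, field="dn"):
    """Resolve a field from a row, with or without an APIC class envelope (sketch)."""
    # Envelope form: {"fvTenant": {"attributes": {"dn": "..."}}}
    if len(row) == 1:
        inner = next(iter(row.values()))
        if isinstance(inner, dict):
            attrs = inner.get("attributes", inner)
            if field in attrs:
                return attrs[field]
    # Bare form: {"dn": "..."} or {"attributes": {"dn": "..."}}
    return row.get("attributes", row).get(field)
```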

Limits and safety rails

  • Maximum pipeline stages per query: 10
  • filter_values upstream count: 200
  • dn_scope upstream count: 50
  • iterate upstream count: 100

Exceeding a limit doesn't abort the run — Fabrik truncates to the limit and logs a warning. If you know the upstream set is larger, filter it down upstream (with a Filter node or a Post-Processor) before it hits the pipeline edge.
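The truncate-and-warn behaviour amounts to something like this sketch (limit values from the table above; the helper name is hypothetical):

```python
import logging

# Per-mode upstream limits from the constraints table.
LIMITS = {"filter_values": 200, "dn_scope": 50, "iterate": 100}

def apply_limit(mode, values):
    """Truncate upstream values to the mode's limit, warning rather than aborting (sketch)."""
    limit = LIMITS[mode]
    if len(values) > limit:
        logging.warning("pipeline: truncating %d upstream values to %d (%s)",
                        len(values), limit, mode)
        return values[:limit]
    return values
```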

Chaining multiple stages

A pipeline isn't limited to two stages. You can terminate stage 2 with its own Output and pipeline-out to a stage 3, stage 4, and so on, up to ten. Each edge carries its own injection mode and extract field.

One Output can also fan out to multiple pipeline stages in parallel — drag from the pipeline-out handle twice, to different target classes, and you get two independent downstream stages reading the same upstream result. This is the only way a node on the canvas has more than one outgoing edge.

Executing a pipeline

Clicking Run on a pipeline query triggers a backend pipeline executor rather than the single-query executor. The right panel shows per-stage progress: current stage, stage class, status (executing / success / failed), elapsed milliseconds, and a count of rows returned by that stage.

When every stage finishes, the results panel shows each stage's output as its own tab. You can inspect any stage's rows without re-running the pipeline.

Cancelling a pipeline cancels cleanly between stages; the current stage's APIC call runs to completion but no further stages are started.

Pipelines and scheduled queries

A scheduled query can be a pipeline — the scheduler calls the same pipeline executor the interactive Run button calls. Variables work the same way too: their values come from the schedule's configuration, substituted into the first stage's filters, and the pipeline carries them forward.

Each stage in a scheduled pipeline counts as a separate iteration in the job's progress tracking, so the scheduled-tasks page shows per-stage status during the run.

Troubleshooting

Pipeline mistakes that come up often:

  • "Downstream stage returned zero rows but the upstream had results." The injection target is wrong. If the upstream extract is dn but the downstream class uses a different linking property, switch modes or set an explicit Target Filter Property in the pipeline edge config.
  • "Pipeline got truncated at 200." That's the filter_values hard limit. Switch to dn_scope if the downstream is hierarchical under the upstream DNs, or filter the upstream more aggressively.
  • "dn_scope is slow." Every DN is a separate APIC round trip. 50 DNs × 200 ms latency ≈ 10 seconds minimum. Prefer filter_values when the downstream isn't actually scoped by containment.
  • "iterate returns nothing for every value." The downstream stage's filter isn't referencing the injected value. iterate substitutes the upstream value into ${value} placeholders on the stage's filters — make sure there's a ${value} somewhere to substitute into.
  • "Run button is disabled on a pipeline query." Each stage needs its own Output. Check for a class node at the end of any stage without a terminator.
  • "Pipeline stages executed in the wrong order." Stages are ordered topologically by pipeline edges — you can't have two stages running in parallel unless they fan out from the same upstream. Re-examine the edge directions.

Pipelines compose queries. The next page — Execution — explains what happens on the backend when you hit Run, from graph to APIC URL to results.