Post-processors
Reshape APIC results after they come back — filter rows, extract fields, run regex, sort, aggregate — with a visual top-to-bottom pipeline on the canvas.
Filters narrow what APIC sends; post-processors reshape what's already on the way home. They're the transformation layer: pick fields, flatten envelopes, regex-replace DNs, sort, dedup, group, count. Anything you'd otherwise do in a spreadsheet or a jq chain, you do here — visually, one box per step.
The pipeline model
A Post-Processor node takes the previous stage's output, applies one transformation, and passes the result to the next stage. Chain several of them and you have a pipeline.
Two things about the pipeline are easy to miss:
- Execution order is visual. When multiple Post-Processor nodes feed into the same Output, the backend runs them top-to-bottom by their Y position on the canvas, not by edge order. Laying them out top-down is how you control sequencing.
- First input is normalized. APIC returns `{ imdata: [...] }`. The first processor in the chain receives the unwrapped `imdata` array directly — no processor needs to strip the envelope itself.
Pause without deleting
Every Post-Processor has a Pause toggle in its hover action bar. A paused processor is greyed out on the canvas and skipped at run time — the input flows through untouched to the next stage.
This is the right tool for "does this step matter?" experiments. You can compare paused vs. unpaused runs without rebuilding the chain.
Pause is Post-Processor-only. Class and Filter nodes don't have it because skipping them would break the query's structural meaning.
Previewing without re-running
Class nodes carry a preview button (the small play-circle that appears on hover). Preview runs the query up to and including that class and caches the raw result in memory.
After a preview, you can add, remove, re-order, or reconfigure downstream Post-Processors and see their effect without hitting APIC again. The backend re-runs the pipeline against the cached data. This matters for two reasons:
- Iterating on a 500-row regex is fast — no round trip.
- You don't spam APIC while you're tuning the pipeline.
The cache is per-session and not persisted.
The processor types
The dropdown groups processors by purpose. The UI label is what you see on the node; the id in parentheses is the wire-format identifier stored in saved queries — useful when reading audit logs or sharing JSON exports.
| Category | UI label | Id | One-line |
|---|---|---|---|
| Filter rows | Pattern Filter | `pattern-filter` | Include/exclude rows by regex (grep-like). |
| Pick / shape data | DN Extract | `dn-extract` | Pull one attribute (default `dn`) out of APIC objects as a flat list. |
| | Field Extract | `field-extract` | Select specific fields from each object, flat or nested. |
| | Flatten | `flatten` | Flatten nested arrays, or join a dict's keys with a separator. |
| Transform values | Regex Transform | `regex-transform` | Regex search/replace with optional in-place rewrite of a chosen field (sed-like). |
| | Map Transform | `map-transform` | Project each item to a single field or pass it through. |
| | Text Operations | `text-operations` | Split, join, trim, upper/lower, replace, substring. |
| Summarize | Array Sort | `array-sort` | Sort with optional dedup, numeric, reverse. |
| | Aggregate | `aggregate` | Count, sum, avg, min, max, or group-by. |
| Custom code | JavaScript | `javascript` | Free-form JS expression. Preview-only. |
Preview-only processors (`javascript`) run only in the browser when previewing. The backend execution pipeline doesn't recognise these ids, and a saved query that contains one will fail at scheduled run time with "Unknown processor type". Use them to explore; once you've settled on a transformation, port the logic into the supported processors.
The rest of this page walks each processor in dropdown order.
Pattern Filter (pattern-filter)
Grep-like inclusion/exclusion by regex.
Config:
- Include patterns — one or more regexes. A row passes if it matches any of them (OR logic). Empty list means "include everything."
- Exclude patterns — one or more regexes. A row is dropped if it matches any. Exclude wins over include.
- Field — the field to test on each row. Supports nested paths (e.g. `attributes.name`); APIC envelope unwrap is automatic.
- Case-sensitive — toggle.
Use when you want "everything under uni/tn-prod/ except the monitoring objects" — two include/exclude lines and you're done.
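To make the include/exclude semantics concrete, here is a minimal Python sketch of the rules described above (the function name and signature are illustrative, not Fabrik's actual code):

```python
import re

def pattern_filter(rows, include=None, exclude=None, field=None, case_sensitive=True):
    """Sketch: OR'd include patterns, excludes win, empty include means keep all."""
    flags = 0 if case_sensitive else re.IGNORECASE

    def value_of(row):
        # Walk an optional dotted field path; rows may also be plain strings.
        v = row
        for part in (field.split(".") if field else []):
            v = v.get(part, "") if isinstance(v, dict) else ""
        return str(v)

    def keep(row):
        v = value_of(row)
        # Exclude wins over include; an empty include list keeps everything.
        if any(re.search(p, v, flags) for p in (exclude or [])):
            return False
        return not include or any(re.search(p, v, flags) for p in include)

    return [r for r in rows if keep(r)]

dns = ["uni/tn-prod/ap-web", "uni/tn-prod/ap-mon", "uni/tn-dev/ap-web"]
pattern_filter(dns, include=[r"tn-prod"], exclude=[r"ap-mon"])
# -> ["uni/tn-prod/ap-web"]
```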
DN Extract (dn-extract)
Pull attributes out of APIC's envelope. This is usually the first processor in any pipeline because the raw response is { classname: { attributes: {...} } } and every later step is easier when you've collapsed that.
Config:
- Field — which attribute to extract (default `dn`).
- Remove prefix (optional) — regex applied to each extracted value. Matches are stripped.
- Extract pattern (optional) — regex with an optional capture group. The first capture group (or whole match, if no group) becomes the value.
Running both removePrefix and extractPattern on the same step is fine — removePrefix runs first, then extractPattern.
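A small Python sketch of that ordering, with the envelope unwrap and both optional regexes (hypothetical helper; not the real implementation):

```python
import re

def dn_extract(rows, field="dn", remove_prefix=None, extract_pattern=None):
    """Sketch: unwrap the APIC envelope, pull one attribute, then
    removePrefix first, extractPattern second."""
    out = []
    for row in rows:
        # Unwrap the single-key envelope: { cls: { attributes: {...} } }
        inner = next(iter(row.values()))
        value = inner.get("attributes", {}).get(field, "")
        if remove_prefix:
            value = re.sub(remove_prefix, "", value)   # runs first
        if extract_pattern:
            m = re.search(extract_pattern, value)      # runs second
            if m:
                # First capture group, or the whole match if no group.
                value = m.group(1) if m.groups() else m.group(0)
        out.append(value)
    return out

rows = [{"fvTenant": {"attributes": {"dn": "uni/tn-prod"}}}]
dn_extract(rows, remove_prefix=r"^uni/", extract_pattern=r"tn-(.+)")
# -> ["prod"]
```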
Field Extract (field-extract)
Select a subset of fields from each object. Two output shapes:
- Flat (default) — output keys are the last segment of each path. `attributes.dn` becomes `{ dn: "..." }`. Good for table rendering.
- Keep structure — output preserves the dotted path as a nested object. `attributes.dn` stays `{ attributes: { dn: "..." } }`. Good when a downstream processor needs the original shape.
Config:
- Fields — list of dotted paths (e.g. `attributes.name`, `attributes.status`).
- Keep structure — toggle the output shape.
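The two output shapes, sketched in Python (illustrative helper, assuming paths are already envelope-unwrapped):

```python
def field_extract(rows, fields, keep_structure=False):
    """Sketch of field-extract's flat vs. keep-structure output."""
    out = []
    for row in rows:
        item = {}
        for path in fields:
            parts = path.split(".")
            v = row
            for p in parts:
                v = v.get(p) if isinstance(v, dict) else None
            if keep_structure:
                # Rebuild the nested shape along the dotted path.
                node = item
                for p in parts[:-1]:
                    node = node.setdefault(p, {})
                node[parts[-1]] = v
            else:
                item[parts[-1]] = v   # flat: last segment becomes the key
        out.append(item)
    return out

row = {"attributes": {"dn": "uni/tn-a", "name": "a"}}
field_extract([row], ["attributes.dn"])                       # -> [{"dn": "uni/tn-a"}]
field_extract([row], ["attributes.dn"], keep_structure=True)  # -> [{"attributes": {"dn": "uni/tn-a"}}]
```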
Flatten (flatten)
Two different operations under one name, depending on whether the input is an array or an object.
For arrays:
- Depth — how many levels of nested arrays to unwrap. `1` (default) unwraps one level; `0` means unlimited.
For dicts:
- Separator — joins nested keys. With separator `.`, `{a: {b: 1}}` becomes `{"a.b": 1}`.
Typical use: after field-extract with keepStructure, flatten to get table-friendly keys.
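Both modes, sketched in Python (hypothetical helpers, split into two functions for clarity):

```python
def flatten_dict(d, separator=".", prefix=""):
    """Sketch of flatten's dict mode: join nested keys with a separator."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}{separator}{k}" if prefix else k
        if isinstance(v, dict):
            out.update(flatten_dict(v, separator, key))
        else:
            out[key] = v
    return out

def flatten_array(items, depth=1):
    """Sketch of flatten's array mode: depth=0 means unlimited."""
    out = []
    for x in items:
        if isinstance(x, list) and depth > 1:
            out.extend(flatten_array(x, depth - 1))
        elif isinstance(x, list) and depth == 1:
            out.extend(x)                      # unwrap exactly one level
        elif isinstance(x, list):              # depth == 0: fully flatten
            out.extend(flatten_array(x, 0))
        else:
            out.append(x)
    return out

flatten_dict({"a": {"b": 1}})             # -> {"a.b": 1}
flatten_array([[1, [2]], [3]], depth=1)   # -> [1, [2], 3]
flatten_array([[1, [2]], [3]], depth=0)   # -> [1, 2, 3]
```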
Regex Transform (regex-transform)
Sed-like find-and-replace. Operates on strings, lists of strings, and lists of objects (when Apply To selects a string field).
Config:
- Pattern — the regex.
- Replacement — the replacement string. Backreferences use JavaScript syntax: `$1`, `$2`, … for capture groups; `$&` for the entire match; `$$` for a literal `$`. Fabrik translates these to Python's `\1` / `\g<0>` form internally; do not type `\1` yourself, it would be inserted literally.
- Apply To Field (optional) — when the input is a list of dict rows (the typical APIC shape), set this to the field that holds the string you want to transform (e.g. `attributes.dn`). The other fields stay intact and the row's structure is preserved. Leave blank when the input is already a list of strings.
- Flags — any subset of `g`, `i`, `m`. Python regex replacement is always global, so `g` is implicit; `i` enables case-insensitive, `m` enables multi-line.
Use this when a DN comes back with a prefix or embedded segment you want to rewrite across every row without dropping the rest of the columns.
Example — strip the uni/tn- prefix from every tenant DN:

pattern: `.*/tn-([^/]+).*`
replacement: `$1`
applyTo: `attributes.dn`

Without Apply To, the same config applied to a list of dicts is a no-op — there's no string to match against at the row level.
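The same example as a Python sketch of how the backref translation and the Apply To rewrite might work (illustrative only; `to_py_repl` and `regex_transform` are hypothetical names, not Fabrik's real code):

```python
import re
import copy

def to_py_repl(js_repl):
    """Single-pass translation of JS-style backrefs to Python's syntax."""
    tokens = {"$$": "$", "$&": r"\g<0>"}
    return re.sub(r"\$\$|\$&|\$\d+",
                  lambda m: tokens.get(m.group(0), "\\" + m.group(0)[1:]),
                  js_repl)

def regex_transform(rows, pattern, replacement, apply_to=None):
    """Sketch: rewrite one field in place when apply_to is set,
    otherwise treat each row as a plain string."""
    repl = to_py_repl(replacement)
    out = []
    for row in rows:
        if apply_to:
            row = copy.deepcopy(row)          # other fields stay intact
            parts = apply_to.split(".")
            node = row
            for p in parts[:-1]:
                node = node[p]
            node[parts[-1]] = re.sub(pattern, repl, node[parts[-1]])
        else:
            row = re.sub(pattern, repl, row)  # input is a list of strings
        out.append(row)
    return out

rows = [{"attributes": {"dn": "uni/tn-prod/ap-web"}}]
regex_transform(rows, r".*/tn-([^/]+).*", "$1", apply_to="attributes.dn")
# -> [{"attributes": {"dn": "prod"}}]
```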
Map Transform (map-transform)
Project each item through a deliberately limited expression. Two forms are supported:
- `item` — identity (pass through).
- `item.some.path` — pluck a nested field. APIC envelope is not unwrapped here, so write the full path: `item.fvAEPg.attributes.dn`, not `item.attributes.dn`.
Anything else (arithmetic, function calls, ternaries) is silently passed through unchanged rather than raising. The MapTransformConfig examples show item.value * 2 for illustration but only the two patterns above actually execute. For real transformations, reach for regex-transform (with Apply To) or the JavaScript processor while previewing.
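A sketch of that behavior in Python, including the silent passthrough for unsupported expressions (hypothetical helper):

```python
import re

def map_transform(items, expression):
    """Sketch: only `item` and `item.<dotted.path>` execute;
    anything else passes through unchanged."""
    if expression == "item":
        return list(items)                    # identity
    m = re.fullmatch(r"item((?:\.\w+)+)", expression)
    if m:
        path = m.group(1).lstrip(".").split(".")
        def pluck(x):
            for p in path:
                x = x.get(p) if isinstance(x, dict) else None
            return x
        return [pluck(x) for x in items]
    return list(items)                        # silent passthrough

items = [{"fvAEPg": {"attributes": {"dn": "uni/tn-a/ap-x/epg-y"}}}]
map_transform(items, "item.fvAEPg.attributes.dn")  # -> ["uni/tn-a/ap-x/epg-y"]
map_transform(items, "item.value * 2")             # passthrough, no arithmetic
```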
Text Operations (text-operations)
One string operation applied across scalars or lists of scalars.
Operations:
| Operation | Config | Notes |
|---|---|---|
| split | separator, optional limit | Scalar → array. |
| join | delimiter | List → scalar. Applied once to the whole list (not element-wise). |
| trim | — | Strip leading/trailing whitespace. |
| upper / lower | — | Case conversion. |
| replace | find (regex), replaceWith | Regex replacement. |
| substring | start, optional end | Python slice semantics. |
On lists, every operation except join maps element-wise.
This processor expects strings. To apply it to APIC results, pipe a dn-extract (or field-extract) in front of it first — otherwise it sees dict rows and silently passes them through.
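The element-wise rule and the string-only passthrough, sketched in Python (hypothetical helper; config keys mirror the table above):

```python
import re

def text_operations(data, operation, **cfg):
    """Sketch: one op, element-wise on lists (except join);
    non-string values pass through untouched."""
    ops = {
        "split":     lambda s: s.split(cfg["separator"], cfg.get("limit", -1)),
        "trim":      str.strip,
        "upper":     str.upper,
        "lower":     str.lower,
        "replace":   lambda s: re.sub(cfg["find"], cfg["replaceWith"], s),
        "substring": lambda s: s[cfg.get("start", 0):cfg.get("end")],
    }
    if operation == "join":
        return cfg["delimiter"].join(data)    # once, on the whole list
    op = ops[operation]
    if isinstance(data, list):
        return [op(x) if isinstance(x, str) else x for x in data]
    return op(data) if isinstance(data, str) else data

text_operations(["tn-a", "tn-b"], "upper")          # -> ["TN-A", "TN-B"]
text_operations(["a", "b"], "join", delimiter=",")  # -> "a,b"
```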
Array Sort (array-sort)
Sort a list, with optional dedup and direction.
Config:
- Field (optional) — when the list contains dicts, sort by this nested path.
- Unique — drop duplicates after sorting.
- Numeric — coerce to float before comparing. Rows that fail coercion are treated as `0`.
- Reverse — descending.
Order of operations is sort → dedup → reverse, so a descending unique list comes back as you'd expect.
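That sort → dedup → reverse ordering, sketched in Python (illustrative helper, including the coercion-to-0 rule):

```python
def array_sort(items, field=None, unique=False, numeric=False, reverse=False):
    """Sketch of array-sort: sort first, then dedup, then reverse."""
    def key(x):
        v = x
        if field:
            for p in field.split("."):
                v = v.get(p) if isinstance(v, dict) else None
        if numeric:
            try:
                return float(v)
            except (TypeError, ValueError):
                return 0.0                    # failed coercion counts as 0
        return str(v)

    out = sorted(items, key=key)
    if unique:
        seen, deduped = set(), []
        for x in out:
            k = key(x)
            if k not in seen:
                seen.add(k)
                deduped.append(x)
        out = deduped
    if reverse:
        out = out[::-1]
    return out

array_sort(["10", "2", "2"], numeric=True, unique=True, reverse=True)
# -> ["10", "2"]
```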
Aggregate (aggregate)
Collapse an array to a single value (or a grouped dict).
Operations:
| Operation | Config | Returns |
|---|---|---|
| count | — | Integer row count. |
| sum | field | Sum of the field across rows. |
| avg | field | Mean, or 0 when no numeric rows. |
| min / max | field | Min/max, or null when no numeric rows. |
| group | groupBy | Dict of { key: [rows] }. Downstream processors must expect a dict. |
For sum, avg, min, and max, string values that parse as numbers ("1024") are coerced to floats. Values that don't parse are silently skipped — a single bad row won't break the aggregation.
group keys are taken verbatim from the field. To group by a derived value (e.g. tenant name extracted from a DN), put a regex-transform step ahead of aggregate to rewrite the field first.
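The coercion and skip-bad-rows behavior, sketched in Python (hypothetical helper mirroring the table above):

```python
def aggregate(rows, operation, field=None, group_by=None):
    """Sketch of aggregate: numeric strings coerce, bad values are skipped."""
    def get(row, path):
        v = row
        for p in path.split("."):
            v = v.get(p) if isinstance(v, dict) else None
        return v

    if operation == "count":
        return len(rows)
    if operation == "group":
        buckets = {}
        for r in rows:
            buckets.setdefault(str(get(r, group_by)), []).append(r)
        return buckets

    nums = []
    for r in rows:
        try:
            nums.append(float(get(r, field)))   # "1024" coerces fine
        except (TypeError, ValueError):
            pass                                # a bad row is skipped, not fatal
    if operation == "sum":
        return sum(nums)
    if operation == "avg":
        return sum(nums) / len(nums) if nums else 0
    if operation in ("min", "max"):
        return (min if operation == "min" else max)(nums) if nums else None

rows = [{"attributes": {"bw": "1024"}}, {"attributes": {"bw": "n/a"}}]
aggregate(rows, "sum", field="attributes.bw")  # -> 1024.0
```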
JavaScript (javascript)
A free-form processor that lets you write custom JavaScript in the browser for preview.
Preview-only. Not registered in the backend pipeline. Saved queries that include it will fail at scheduled execution with Unknown processor type. Treat it as an exploration tool — when you've settled on a transformation, port the logic into the fixed processor types above.
The current data is bound to the variable `data` (already `imdata`-unwrapped). The expression must return the new data; an explicit return is required.
How paths are resolved
Every processor that takes a field path (field-extract, map-transform, aggregate, dn-extract, regex-transform's applyTo) resolves it the same way:
- Walk the dotted path from the root of each item.
- If that returns `None` and the root is a single-key dict wrapping another dict (APIC's envelope shape: `{ fvTenant: { attributes: {...} } }`), retry the walk one level inside.
The effect: you can write attributes.name instead of fvTenant.attributes.name and it works regardless of the class. That's why dn-extract almost never needs to know the class name — the envelope unwraps automatically.
The one exception is map-transform — its item.path syntax does not auto-unwrap the envelope, so you must include the class name (item.fvAEPg.attributes.dn).
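The two-step resolution, sketched in Python (hypothetical helper; the real resolver may differ in details):

```python
def resolve_path(item, path):
    """Sketch: direct walk first, then retry inside the APIC envelope."""
    def walk(node, parts):
        for p in parts:
            node = node.get(p) if isinstance(node, dict) else None
        return node

    parts = path.split(".")
    value = walk(item, parts)
    if value is None and isinstance(item, dict) and len(item) == 1:
        inner = next(iter(item.values()))
        if isinstance(inner, dict):
            value = walk(inner, parts)   # retry one level inside the envelope
    return value

obj = {"fvTenant": {"attributes": {"name": "prod"}}}
resolve_path(obj, "attributes.name")           # -> "prod" (auto-unwrapped)
resolve_path(obj, "fvTenant.attributes.name")  # -> "prod" (explicit path)
```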
Composing a pipeline
The common shape in practice:
- DN Extract to pull `dn` (or a specific attribute), or keep dict rows and use `regex-transform` with `Apply To`.
- Regex Transform or Pattern Filter to narrow and rewrite.
- Array Sort with `unique` to dedup and order.
- Optionally Aggregate for a count or grouping.
Stack the four nodes top-to-bottom, connect them, Run. The Output panel shows the final array (or scalar, for aggregations).
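As an end-to-end illustration, here is the same four-stage pipeline sketched as plain Python functions over an `imdata` array (the stages and data are hypothetical stand-ins for the visual nodes):

```python
import re

def run_pipeline(imdata):
    # 1. dn-extract: unwrap each envelope and pull attributes.dn
    dns = [next(iter(o.values()))["attributes"]["dn"] for o in imdata]
    # 2. regex-transform: keep only the tenant name from each DN
    tenants = [re.sub(r".*/tn-([^/]+).*", r"\1", dn) for dn in dns]
    # 3. array-sort with unique: dedup and order
    tenants = sorted(set(tenants))
    # 4. aggregate: count the buckets
    return {"tenants": tenants, "count": len(tenants)}

imdata = [
    {"fvBD": {"attributes": {"dn": "uni/tn-prod/BD-web"}}},
    {"fvBD": {"attributes": {"dn": "uni/tn-prod/BD-db"}}},
    {"fvBD": {"attributes": {"dn": "uni/tn-dev/BD-web"}}},
]
run_pipeline(imdata)  # -> {"tenants": ["dev", "prod"], "count": 2}
```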
Troubleshooting
The processor mistakes that come up most often:
- "Processor failed at run time but worked in preview." You're using JavaScript — preview-only, it doesn't exist in the backend pipeline. Replace it with the supported processors (`dn-extract`, `field-extract`, `regex-transform`, etc.) or port the logic to one of them.
- "`regex-transform` returned the original rows unchanged." The input is a list of dict rows and Apply To Field is empty. Set it to the path that holds the string you want rewritten (typically `attributes.dn`).
- "`regex-transform` inserted `$1` literally instead of the captured group." This shouldn't happen anymore — Fabrik translates JS-style `$1` / `$&` / `$$` to Python's backref syntax automatically. If you typed `\1` directly, change it to `$1`.
- "`text-operations` silently did nothing." It needs strings or lists of strings. Pipe a `dn-extract` (with `extractField` set to the attribute you want, default `dn`) in front of it.
- "`map-transform` returned the original items unchanged." Either the expression isn't `item` / `item.<path>` (other forms are silent passthrough), or you wrote `item.attributes.dn` without the class name — try `item.fvBD.attributes.dn`.
- "`aggregate` sum returned 0." The field values are strings that don't parse as numbers, or the field path was wrong. Test with a `field-extract` first to confirm the value is reaching the aggregator.
- "`aggregate` group produced one bucket per row." The group key is unique per row (e.g. you grouped by `dn`). Add a `regex-transform` upstream to rewrite the field to the value you actually want to bucket on (e.g. tenant name from the DN).
- "`flatten` on a dict produced a dotted-key monster." That's what it's supposed to do — every nested path becomes a flat key. Use `field-extract` with `keepStructure: false` if you only need specific leaves.
- "Pipeline runs but in the wrong order." Post-Processor execution is top-to-bottom by Y position on the canvas, not by edge order. Drag the nodes into the order you want.
- "I want to skip a step to see what changes." Use the Pause toggle on the processor — it's reversible and doesn't disturb the pipeline's topology.
Post-processors shape data after APIC. The next page — Variables — is about the values you leave out of the query until run time.