
MAF v1 — Declarative Workflows (Python + .NET)

Nitin Kumar Singh
MAF v1: Python and .NET - This article is part of a series.
Part 19: This Article

Series note — Part of MAF v1: Python and .NET. Third of four advanced chapters. New material — no counterpart in the original Python-only series.

Repo — Full runnable code for this chapter is at https://github.com/nitin27may/e-commerce-agents/tree/main/tutorials/19-declarative-workflows. Clone the repo, cd tutorials/19-declarative-workflows, and follow the per-language instructions below.

When you’d reach for this — A workflow’s shape is now a configurable concern, not a code concern. Examples: ops needs to insert a “validate compliance” step before the existing “issue refund” step without an engineering deploy; a customer-success lead wants to A/B-test two different orderings of a return flow; a compliance review needs the workflow graph to live in a YAML the auditor can read without opening a C# project. If any of those sound familiar, declarative belongs on the table. If your team owns the graph end-to-end and changes it via PR-then-deploy like every other code change, the imperative builders from Ch09–Ch18 are a better fit.

Why this chapter

Ch09 through Ch18 built every workflow in code. That’s right for engineers who own the graph — strong typing, IDE completion, refactors that fail loudly at build time. It stops being right the moment a non-engineer needs to tweak ordering, a support team wants to swap in a different retry policy during an incident, or operations wants to ship a pipeline change without a deploy.

Declarative — the opposite of imperative; you describe what the pipeline looks like (a YAML document), not how to build it (a sequence of WorkflowBuilder.add_edge(...) calls). A loader reads the document and emits the same Workflow object code would have.

Three concrete reasons to reach for a declarative loader:

  • GitOps — a workflow edit shows up in a pull request as a YAML diff, not a C#/Python diff. Reviewers who don’t read code can still approve it. “Does the graph now run Validate before Log?” is answerable from the diff alone.
  • Hot reload — restart the service (or a config-watcher inside it) and a new graph takes effect. No rebuild, no image push.
  • Authoring by non-engineers — ops, compliance, customer-success. Anyone who can edit a YAML file in the admin UI can reorder steps or flip a config value.

MAF ships an officially supported declarative schema via Microsoft.Agents.AI.Workflows.Declarative (.NET) and agent_framework.declarative (Python). That schema is much richer than what we’ll build here — it supports kind: Workflow, SetVariable, If/ConditionGroup/Foreach, PowerFx expressions, agent invocations, and checkpointing. We’ll point at it at the end of the chapter.

This chapter rolls its own minimal loader instead, because the shape of the pattern — parse YAML → look up an op by name → instantiate executors → wire edges → return a Workflow — is worth seeing end-to-end in ~90 lines. Once you’ve built one, the production-grade one reads the same way, just with more dials.

Prerequisites

  • Completed Chapter 09 — Workflow Executors and Edges — we reuse the same three-step text pipeline from that chapter.
  • Python 3.12+ via uv; .NET 10 SDK.
  • No LLM key required — workflows are deterministic coordinators and our example is pure text manipulation.
  • A passing familiarity with YAML (keys, lists, strings) is enough.

The concept

A workflow spec names three things: executors, edges, and a start node. Every executor entry carries an id, an op (operation-by-name — a string key the loader looks up in an op registry to decide what behaviour the executor has), and optional per-op config. Every edge entry names two executor ids.

# workflow.yaml
name: text-pipeline
start: uppercase
executors:
  - id: uppercase
    op: upper
  - id: validate
    op: non_empty         # emits a terminal output when input is blank
  - id: log
    op: prefix
    prefix: "LOGGED: "    # free-form op config
edges:
  - from: uppercase
    to: validate
  - from: validate
    to: log
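Parsed, that spec is nothing but plain data — no classes, no code, nothing executable. A quick sketch with pyyaml showing the exact structure the loader consumes:

```python
import yaml

SPEC = """
name: text-pipeline
start: uppercase
executors:
  - id: uppercase
    op: upper
  - id: validate
    op: non_empty
  - id: log
    op: prefix
    prefix: "LOGGED: "
edges:
  - from: uppercase
    to: validate
  - from: validate
    to: log
"""

spec = yaml.safe_load(SPEC)
# Everything is dicts, lists, and strings; `from` is just a string key.
print(spec["start"])                          # uppercase
print([e["id"] for e in spec["executors"]])   # ['uppercase', 'validate', 'log']
print(spec["edges"][0])                       # {'from': 'uppercase', 'to': 'validate'}
```

That inertness is the point: a reviewer can diff this file without reasoning about side effects, and the loader can validate it fully before anything runs.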

Jargon used above, defined once:

  • Declarative — the workflow is described in data, not built by code. The loader is the only imperative step.
  • Op (operation-by-name) — a string in the YAML that selects behaviour. op: upper means “the uppercase transform”; op: prefix means “wrap the message with a prefix”. Ops are a minimal domain-specific language; each one is a pure function registered at startup.
  • Op registry — the mapping (a Python dict, a .NET Dictionary) from op names to factory functions. The single extension point: adding a new op means adding one entry. No new YAML syntax.
  • WorkflowFactory — the name MAF gives to the officially supported declarative loader (agent_framework.declarative.WorkflowFactory in Python; DeclarativeWorkflowBuilder.Build<TInput>(...) in .NET). Our chapter calls its equivalent load_workflow / DeclarativeWorkflowLoader.Load to make the pedagogical intent obvious.
  • GitOps — workflow definition lives in git; deploy is “change the file and push”; review is a diff on the YAML; rollback is a revert. Shorthand coined by Weaveworks in 2017.
  • Lazy instantiation — only build the executor when the loader hits its YAML entry, not upfront for every op in the registry. Our loader does this implicitly: the registry holds factories, not instances.
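Those last two bullets fit in a dozen lines. A minimal sketch of a factory-holding registry — the names (`OP_REGISTRY`, `OpFn`) mirror this chapter's loader, not MAF's official API:

```python
from typing import Callable, Optional

# An op is a pure function: input -> (forward, terminal); exactly one is non-None.
OpFn = Callable[[str], tuple[Optional[str], Optional[str]]]

# Registry entries are *factories*: each takes the YAML entry's config and
# returns an op. Nothing is instantiated until the loader hits a spec entry.
OP_REGISTRY: dict[str, Callable[[dict], OpFn]] = {}

def register_op(name: str, factory: Callable[[dict], OpFn]) -> None:
    OP_REGISTRY[name] = factory

register_op("upper",  lambda cfg: lambda s: (s.upper(), None))
register_op("prefix", lambda cfg: lambda s: (None, cfg.get("prefix", "") + s))

# Lazy instantiation in action: only ops named in the spec ever get built.
op = OP_REGISTRY["prefix"]({"prefix": "LOGGED: "})
print(op("HELLO"))  # (None, 'LOGGED: HELLO')
```

Holding factories rather than instances also means two YAML entries can use the same op with different config without sharing state.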

Loader flow

The loader is a small pipeline in its own right — a spec on disk becomes a Workflow through one parse, one registry lookup per entry, and one builder pass.

%%{init: {'theme':'base', 'themeVariables': { 'primaryColor': '#2563eb','primaryTextColor': '#ffffff','primaryBorderColor': '#1e40af', 'lineColor': '#64748b','secondaryColor': '#f59e0b','tertiaryColor': '#10b981', 'background': 'transparent'}}}%%
flowchart LR
    classDef core fill:#2563eb,stroke:#1e40af,color:#ffffff
    classDef external fill:#f59e0b,stroke:#b45309,color:#000000
    classDef success fill:#10b981,stroke:#047857,color:#ffffff
    classDef error fill:#ef4444,stroke:#b91c1c,color:#ffffff
    classDef infra fill:#64748b,stroke:#334155,color:#ffffff

    yaml[("workflow.yaml<br/>(spec on disk)")]
    parser["YAML parser<br/>(pyyaml / YamlDotNet)"]
    registry{{"Op registry<br/>upper, non_empty, prefix, ..."}}
    factory["load_workflow<br/>DeclarativeWorkflowLoader.Load"]
    execs["DeclarativeExecutor instances<br/>(one per spec entry)"]
    builder["WorkflowBuilder<br/>start + edges + WithOutputFrom"]
    wf(["Workflow<br/>(same type as Ch09)"])
    unknown[["unknown op<br/>raise WorkflowSpecError"]]

    yaml --> parser
    parser --> factory
    factory -- "lookup by name" --> registry
    registry -- "match" --> execs
    registry -. "miss" .-> unknown
    factory -- "wire edges" --> builder
    execs --> builder
    builder --> wf

    class yaml infra
    class parser,factory core
    class registry external
    class execs,builder core
    class wf success
    class unknown error

The bytes on disk and the Workflow object are separated by two lookups — a YAML parse and a registry hit per executor. Unknown ops are caught at load time, not at run time. The output is the same Workflow type Ch09’s hand-wired code produced; nothing downstream (streaming, checkpoints, visualisation) knows the graph came from YAML.

Tradeoff — when declarative pays, when it costs

Every declarative system is a small language of its own. You trade off Python’s or C#’s expressive power for a smaller, reviewable surface.

| When declarative wins | When code wins |
|---|---|
| Non-engineers need to tweak ordering | The graph branches on runtime values the YAML can't see |
| Config-driven tenant variations (one spec per tenant) | You need IDE refactors, type-checked dataclasses, full LSP |
| Review-by-diff matters; reviewers aren't Python devs | A new node requires a genuinely new algorithm, not a new op combination |
| Hot reload — push a file, not a deploy | You want a single source of truth and already have a code review culture |
| GitOps discipline (signed commits, policy gates on PRs) | The graph is short-lived (one script, one team, one reviewer) |

Two hard limits specifically on the roll-your-own pattern in this chapter:

  • Ops must be pre-registered. register_op("my_op", factory) runs at startup. A YAML file that references an op nobody registered fails at load time. That’s a feature — unreviewed code can’t execute — but it means the YAML is not a full programming language. You cannot inline arbitrary logic.
  • No runtime-state access from the spec. The YAML has no way to say “if the user is an admin” or “if the previous step’s output was empty”. Conditional edges would need to be modelled as ops that inspect the message, or by extending the schema (Microsoft’s built-in loader does the latter via PowerFx).

Both constraints vanish if you reach for Microsoft.Agents.AI.Workflows.Declarative — its schema includes If, ConditionGroup, Foreach, SetVariable, and PowerFx expressions, at the cost of ~7× the schema surface. Pick the smallest loader that solves your problem.

Code walkthrough

Full source: Program.cs plus workflow.yaml.

// dotnet/Program.cs (excerpt)
using Microsoft.Agents.AI.Workflows;
using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;

// ── Executor ──────────────────────────────────────────────
[SendsMessage(typeof(string))]
[YieldsOutput(typeof(string))]
internal sealed partial class DeclarativeExecutor : Executor
{
    private readonly OpFunction _op;

    public DeclarativeExecutor(string id, OpFunction op) : base(id) => _op = op;

    [MessageHandler]
    public async ValueTask HandleAsync(
        string message, IWorkflowContext ctx, CancellationToken ct = default)
    {
        (string? forward, string? terminal) = _op(message);
        if (terminal is not null) { await ctx.YieldOutputAsync(terminal, ct); return; }
        if (forward  is not null) { await ctx.SendMessageAsync(forward, ct); }
    }
}

// ── Op registry ───────────────────────────────────────────
public delegate (string? Forward, string? Terminal) OpFunction(string input);

public static class OpRegistry
{
    private static readonly Dictionary<string, Func<ExecutorSpec, OpFunction>> _ops = new()
    {
        ["upper"]     = _    => s => (s.ToUpperInvariant(), null),
        ["non_empty"] = spec => s => string.IsNullOrWhiteSpace(s)
                                      ? (null, spec.EmptyOutput ?? "[skipped: empty input]")
                                      : (s, null),
        ["prefix"]    = spec => s => (null, (spec.Prefix ?? "") + s),
        // ... lower, strip, reverse, passthrough
    };

    public static void Register(string name, Func<ExecutorSpec, OpFunction> factory)
        => _ops[name] = factory;

    public static OpFunction Build(ExecutorSpec spec) => _ops[spec.Op](spec);
}

// ── Loader ────────────────────────────────────────────────
public static class DeclarativeWorkflowLoader
{
    public static Workflow Load(string specPath)
    {
        IDeserializer yaml = new DeserializerBuilder()
            .WithNamingConvention(UnderscoredNamingConvention.Instance)
            .IgnoreUnmatchedProperties().Build();
        WorkflowSpec spec = yaml.Deserialize<WorkflowSpec>(File.ReadAllText(specPath));

        var execs = spec.Executors.ToDictionary(
            e => e.Id, e => new DeclarativeExecutor(e.Id, OpRegistry.Build(e)));

        WorkflowBuilder builder = new(execs[spec.Start]);
        foreach (var exec in execs.Values) builder = builder.WithOutputFrom(exec);
        foreach (var edge in spec.Edges)   builder = builder.AddEdge(execs[edge.From], execs[edge.To]);
        return builder.Build();
    }
}

Five pieces worth highlighting:

  • One partial class DeclarativeExecutor. The source generator in Microsoft.Agents.AI.Workflows.Generators reads [MessageHandler] and [SendsMessage]/[YieldsOutput] exactly like Ch09’s concrete executors. Every YAML entry instantiates the same class with a different OpFunction.
  • OpFunction delegate is the .NET analogue of Python’s tuple-returning lambda. (string? Forward, string? Terminal) — same contract, statically typed.
  • WithOutputFrom(exec) per executor. Any declarative executor can yield a terminal output (validate short-circuit or prefix terminal), so we register every one. Miss an executor here and its YieldOutputAsync call would silently never surface as a WorkflowOutputEvent.
  • YamlDotNet, UnderscoredNamingConvention. The .NET property EmptyOutput maps to the YAML key empty_output. IgnoreUnmatchedProperties() tolerates config keys the loader doesn’t know about — per-op config lives on the spec record.
  • Exactly the same Workflow type as Ch09. Once Build() returns, nothing — streaming, visualisation (Ch20), checkpoints (Ch18) — knows or cares that the graph came from YAML.

Run it:

cd tutorials/19-declarative-workflows/dotnet
dotnet run -- "hello world"
# spec:   workflow.yaml
# input:  'hello world'
# output: 'LOGGED: HELLO WORLD'

dotnet run -- ""
# output: '[skipped: empty input]'

Registering a custom op in .NET

OpRegistry.Register("check_stock", spec =>
{
    int threshold = int.Parse(spec.Prefix ?? "1"); // reuse Prefix for config, or add a field
    return productId =>
    {
        // call DB, return (forward, terminal)
        return (productId, null);
    };
});

Now a YAML file can reference op: check_stock and the loader wires it. In the capstone, ExecutorSpec grows extra fields — one per op config key — to keep the mapping strongly typed.

Now the Python side. Full source: main.py plus workflow.yaml — ~90 lines total.

# python/main.py (excerpt)
import yaml
from agent_framework import Executor, WorkflowBuilder, WorkflowContext, handler


# ── Op registry ───────────────────────────────────────────
def _build_op(op: str, config: dict):
    if op == "upper":     return lambda s: (s.upper(), None)
    if op == "lower":     return lambda s: (s.lower(), None)
    if op == "strip":     return lambda s: (s.strip(), None)
    if op == "reverse":   return lambda s: (s[::-1], None)
    if op == "non_empty":
        def _impl(s):
            return (s, None) if s.strip() else (None, "[skipped: empty input]")
        return _impl
    if op == "prefix":
        prefix = config.get("prefix", "")
        return lambda s: (None, f"{prefix}{s}")
    raise ValueError(f"unknown op: {op!r}")


# ── Executor ──────────────────────────────────────────────
class DeclarativeExecutor(Executor):
    """One Executor subclass. Behaviour comes from an op, not a method body."""

    def __init__(self, executor_id: str, op: str, config: dict) -> None:
        super().__init__(id=executor_id)
        self._op = _build_op(op, config)

    @handler
    async def run(self, message: str, ctx: WorkflowContext[str, str]) -> None:
        forward, terminal = self._op(message)
        if terminal is not None:
            await ctx.yield_output(terminal)   # short-circuit
        elif forward is not None:
            await ctx.send_message(forward)    # forward downstream


# ── Loader ────────────────────────────────────────────────
def load_workflow(spec_path):
    spec = yaml.safe_load(spec_path.read_text())
    execs = {
        e["id"]: DeclarativeExecutor(e["id"], e["op"], e)
        for e in spec["executors"]
    }
    builder = WorkflowBuilder(start_executor=execs[spec["start"]], name=spec["name"])
    for edge in spec["edges"]:
        builder = builder.add_edge(execs[edge["from"]], execs[edge["to"]])
    return builder.build()

Three things worth staring at:

  • DeclarativeExecutor is one class, many ops. The Ch09 pattern was one Executor subclass per behaviour. Here every YAML entry instantiates the same class; the constructor picks a different op function.
  • Ops return a tagged tuple (forward, terminal). Exactly one is non-None. forward fires an edge; terminal yields a final output and stops that branch. Keeping the op a pure function makes it trivial to unit-test.
  • Errors are load-time, not runtime. An unknown op raises when the loader hits it, before any message is dispatched. That’s half the value of a registry — typos become YAML schema errors, not 2 AM incidents.

Run it against both inputs:

cd tutorials/19-declarative-workflows/python
uv run python main.py "hello world"
# spec: workflow.yaml
# input: 'hello world'
# output: 'LOGGED: HELLO WORLD'

uv run python main.py ""
# input: ''
# output: '[skipped: empty input]'

Registering a custom op

The capstone extends this exact loader with domain-specific ops — for example, one that calls an eligibility-check tool. The extension point is a single function:

from shared.workflow_loader import register_op

def _op_check_stock(config: dict):
    threshold = int(config.get("threshold", 1))

    def _impl(product_id: str):
        # call DB, return (forward, terminal)
        ...

    return _impl

register_op("check_stock", _op_check_stock)

A YAML author can now write:

- id: stock-gate
  op: check_stock
  threshold: 5

and the loader wires the new executor automatically. No YAML-schema change, no loader edit. The op is the only thing that needs a code review.

Side-by-side differences

| Aspect | Python | .NET |
|---|---|---|
| YAML parser | pyyaml (yaml.safe_load) | YamlDotNet + DeserializerBuilder |
| Executor registration | @handler decorator, runtime reflection | [MessageHandler] + partial + source generator |
| Outbound type declaration | WorkflowContext[str, str] generic | [SendsMessage(typeof(string))] + [YieldsOutput(typeof(string))] attributes |
| Op registry | Module-level dict + register_op(name, factory) | Static OpRegistry.Register(name, factory) |
| Op factory signature | Callable[[dict], OpFn] | Func<ExecutorSpec, OpFunction> |
| Multi-output wiring | Automatic — every yield_output surfaces | Explicit builder.WithOutputFrom(exec) per executor |
| Config typing | Free-form dict | Strongly typed ExecutorSpec properties |
| Loader API | load_workflow(path) -> Workflow | DeclarativeWorkflowLoader.Load(path) -> Workflow |
| Official MAF loader | agent_framework.declarative.WorkflowFactory | Microsoft.Agents.AI.Workflows.Declarative.DeclarativeWorkflowBuilder.Build<TInput>(path, options) |

Structural parity, as usual. The two real differences:

  • .NET makes outbound types explicit via attributes, Python via generics. Same information, different place.
  • .NET requires a per-executor WithOutputFrom(...) when every executor can yield. Python’s runtime surfaces every yield_output without an opt-in list.

Gotchas

  • Schema freedom is a tax. Every declarative system is a small language you’ll end up maintaining. Validate aggressively — our loader rejects unknown ops, missing id/op fields, duplicate ids, and undeclared edge endpoints at load time. Production loaders should go further (jsonschema on the YAML, a CI check that every committed spec loads clean).
  • Executors aren’t free. Each YAML entry builds a real DeclarativeExecutor instance at load time. Our loader uses lazy instantiation at the op level — registry entries are factories, not instances — but the executor itself is eager. For 50+ entry specs, consider building executors on first edge traversal.
  • Edge validation is your job. A from: typo that doesn’t match any declared id is caught by the loader here. If you loosen that check and let the WorkflowBuilder catch it, the error surfaces deeper in the stack and is harder to read.
  • Custom ops must live somewhere. Our example embeds them in the loader module. Production systems register ops by name from a plugin folder, a DI container, or an explicit bootstrap call. Either way, ops must be pre-registered before the loader runs — the YAML cannot inline behaviour.
  • YAML’s boolean/string coercion. prefix: "" is fine, but prefix: off is parsed as a boolean (Python’s False, .NET’s false) by YAML 1.1 parsers such as pyyaml. Always quote string values if there’s any doubt ("off").
  • WithOutputFrom per-executor in .NET. Miss an executor that calls YieldOutputAsync and the output never surfaces. The Python runtime doesn’t have this trap.
  • Don’t confuse our loader with the built-in one. Ours is load_workflow / DeclarativeWorkflowLoader.Load on a minimal op+edge schema. MAF’s built-in is WorkflowFactory / DeclarativeWorkflowBuilder on a much richer kind: Workflow schema with PowerFx, agents, and control-flow actions. Don’t mix the two in one project — pick the schema that fits, stay consistent.
  • Hot-reload is a race condition. If you watch the YAML file and rebuild the Workflow on change, be sure you don’t swap the reference mid-run. Either drain in-flight runs first, or version the Workflow and keep old ones alive until their runs complete.
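The first three gotchas are cheap to enforce up front. A sketch of the load-time checks in the shape our loader uses — the helper itself is ours, and `WorkflowSpecError` is the error type the loader-flow diagram names:

```python
class WorkflowSpecError(ValueError):
    """Raised at load time for any structural problem in the spec."""

def validate_spec(spec: dict, known_ops: set[str]) -> None:
    executors = spec.get("executors") or []
    ids = [e.get("id") for e in executors]
    for e in executors:
        if not e.get("id") or not e.get("op"):
            raise WorkflowSpecError(f"executor missing id/op: {e!r}")
        if e["op"] not in known_ops:                 # typo caught before any run
            raise WorkflowSpecError(f"unknown op: {e['op']!r}")
    if len(ids) != len(set(ids)):
        raise WorkflowSpecError(f"duplicate executor ids: {ids!r}")
    if spec.get("start") not in ids:
        raise WorkflowSpecError(f"start {spec.get('start')!r} is not a declared executor")
    for edge in spec.get("edges") or []:
        for endpoint in (edge.get("from"), edge.get("to")):
            if endpoint not in ids:                  # 'from:' typo caught here
                raise WorkflowSpecError(f"edge endpoint {endpoint!r} not declared")
```

Running this before instantiating any executor keeps every failure mode a readable one-line error rather than a stack trace from deep inside the builder.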

Tests

# Python — 10 unit tests: every built-in op, unknown-op error, loader wiring,
# happy path + short-circuit path through the YAML-built workflow.
python -m pytest tutorials/19-declarative-workflows/python/tests/ -v
# 10 passed

# .NET — same workflow, happy path + short-circuit path via dotnet run.
cd tutorials/19-declarative-workflows/dotnet
dotnet build
dotnet run -- "hello world"
dotnet run -- ""

The Python suite splits by concern: op unit tests (each op is a pure function — assert op("hello") == ("HELLO", None)), registry tests (unknown op raises cleanly), and loader end-to-end tests (load the YAML, run it, assert terminal output). The same split works for any extension — when you register check_stock, write unit tests against the op function first, then one integration test that runs it from a YAML file.
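The op-level half of that split needs no workflow machinery at all. A sketch of the assertions involved, with the three ops inlined for self-containment (the real suite imports them from the loader module):

```python
# Pure-function op tests: no workflow, no asyncio, no YAML.
def op_upper(s):
    return (s.upper(), None)

def op_non_empty(s):
    return (s, None) if s.strip() else (None, "[skipped: empty input]")

def op_prefix_factory(config):
    prefix = config.get("prefix", "")
    return lambda s: (None, f"{prefix}{s}")

def test_upper_forwards():
    assert op_upper("hello") == ("HELLO", None)

def test_non_empty_short_circuits():
    assert op_non_empty("   ") == (None, "[skipped: empty input]")
    assert op_non_empty("ok") == ("ok", None)

def test_prefix_is_terminal():
    assert op_prefix_factory({"prefix": "LOGGED: "})("HI") == (None, "LOGGED: HI")
```

Because each op is a pure `(forward, terminal)` function, these tests run in microseconds and pin down the contract before any integration test touches the YAML.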

How this shows up in the capstone

The capstone extends the exact pattern in this chapter:

  • agents/python/shared/workflow_loader.py (load_workflow, lines 158–227) is the full production version. Same op-registry shape. Same YAML schema. The delta vs this chapter is error handling, bulk load_workflows_directory for agents/config/workflows/*.yaml, and a description field piped through to WorkflowBuilder.
  • Built-in ops — passthrough, upper, lower, strip, reverse, non_empty, prefix — match this chapter one-for-one.
  • register_op(name, factory) is the extension point. Phase 7’s pre-purchase rewrite (plans/refactor/08-pre-purchase-concurrent.md) registers domain ops (check_stock, fetch_reviews, get_price_history) and drops them into agents/config/workflows/pre-purchase.yaml. The workflow file becomes reviewable by product/support without diving into Python.
  • DeclarativeExecutor (lines 129–148) is the one-for-one twin of this chapter’s class — same handler signature, same op call, same dual-output (forward vs terminal) contract.
  • .NET parity port lives at agents/dotnet/src/ECommerceAgents.Shared/Workflows/DeclarativeWorkflow.cs. (That port predates the MAF-native refactor and currently executes ops with a hand-rolled Run() loop rather than via WorkflowBuilder; plans/dotnet-port/05-declarative.md tracks the migration to the shape shown in this chapter.)

The pattern the capstone avoids: a separate schema per workflow type. One YAML schema, one op registry, one loader — the workflows differ only in which ops and edges they wire. That’s also why the capstone ships a scripts/visualize_workflows.py (Ch20) that renders any registered spec to Mermaid without per-workflow code.

Further reading

This chapter

Microsoft Agent Framework docs

  • Declarative Workflows — overview — the officially supported schema (kind: Workflow, PowerFx, control-flow actions).
  • Workflows — Workflow Builder — the code-first API we’re wrapping.
  • agent_framework.declarative.WorkflowFactory (Python) · Microsoft.Agents.AI.Workflows.Declarative.DeclarativeWorkflowBuilder (NuGet, prerelease).

Where it lives in the capstone

  • Production loader: agents/python/shared/workflow_loader.py:1-244 (Python), agents/dotnet/src/ECommerceAgents.Shared/Workflows/DeclarativeWorkflow.cs (.NET).
  • Workflow specs: agents/python/config/workflows/*.yaml.
  • Migration plan: plans/refactor/12-declarative-workflows.md, plans/dotnet-port/05-declarative.md.

What’s next

Chapter 20 — Visualization shows how to render any Workflow — code-built or declarative — to Mermaid and Graphviz DOT in one line. Combined with this chapter, you can commit the YAML spec and its rendered diagram to git, so reviewers see the graph diff without a Mermaid preview extension.
