Series note — Part of MAF v1: Python and .NET. First of five orchestration chapters — Sequential, then Concurrent, Handoff, Group Chat, Magentic.
Repo — Full runnable code for this chapter: `tutorials/12-sequential-orchestration`. Clone, `cd tutorials/12-sequential-orchestration`, and follow the per-language instructions below.
## Why this chapter
Chapter 11 got an LLM into the workflow graph. To chain two translators, we built two `InputAdapter`/`OutputAdapter` executors, wrapped each agent in `AgentExecutor(...)`, and wired four edges manually. That worked — but it’s a lot of plumbing for “make agent B see whatever agent A said.”
Real multi-agent pipelines almost always want that exact shape: agent A produces a draft, agent B reviews it, agent C finalises it. Each agent reads the shared conversation so far and appends its turn. MAF ships a convenience builder that compresses the whole thing to one line:
- Python: `SequentialBuilder(participants=[writer, reviewer, finalizer]).build()`
- .NET: `AgentWorkflowBuilder.BuildSequential(new[] { writer, reviewer, finalizer })`
Under the hood it builds the same graph Chapter 11 did — input normaliser, agent-executors, response-to-conversation adapters, a terminator — but you don’t write any of it. The adapters become an implementation detail.
This chapter walks the Writer -> Reviewer -> Finalizer pipeline in both languages, unpacks the session-sharing behaviour that makes it work, and names the event shape (`executor_completed` carrying `list[AgentExecutorResponse]` in Python; `AgentResponseEvent` in .NET) so you can observe each agent’s turn as it happens.
What this chapter is not. It’s not about parallel fan-out (Ch13), handoff meshes (Ch14), group-chat (Ch15), or LLM-orchestrated managers (Ch16). Those are the other four orchestration patterns. This is the deterministic, in-order, one-agent-at-a-time shape.
## Prerequisites
- Completed Chapter 11 — Agents in Workflows. The raw agent-executor chain from Ch11 is the “before” picture this chapter replaces.
- `.env` at the repo root with either `OPENAI_API_KEY` or the Azure OpenAI trio (`AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_KEY`, `AZURE_OPENAI_DEPLOYMENT`). Sequential runs three real LLM calls per invocation — unlike Ch09/Ch10, this chapter won’t work offline.
- Familiarity with the jargon-glossary entries for `AgentExecutor`, `InputAdapter` / `OutputAdapter`, and `SequentialBuilder`. They’re re-defined inline below.
## What you’ll learn
- Build a Writer -> Reviewer -> Finalizer pipeline in Python with `SequentialBuilder(participants=[...]).build()` and in .NET with `AgentWorkflowBuilder.BuildSequential(...)`.
- Name the three pieces the convenience builder hides: the input-conversation normaliser, the per-agent response-to-conversation adapter, and the end-with-conversation terminator.
- Observe each agent’s turn on the event stream — `executor_completed` events carrying `list[AgentExecutorResponse]` in Python, `AgentResponseEvent` in .NET.
- Understand session sharing — why Reviewer sees Writer’s draft without you passing it anywhere, and what “full conversation” actually means across the chain.
- Know when to reach for Sequential versus Handoff, Concurrent, or Group Chat.
## The concept
### One shared conversation, three agents
A Sequential workflow is the agent-pipeline equivalent of a UNIX pipe — the shared thing that flows between stages is a conversation (`list[Message]`), not a single string or a bag of named fields. Each participant reads the conversation built so far, produces its response, and the runtime appends that response to the conversation before handing it to the next participant.
That’s why Reviewer can critique Writer’s draft without anybody passing the draft around explicitly: the draft is already in the conversation the Reviewer agent receives. Same for Finalizer — it sees Writer’s draft, then Reviewer’s critique, then produces the final sentence.
```mermaid
flowchart LR
    topic([Topic / user input]) -- "normalised to list[Message]" --> writer[Writer agent]
    writer -- "+ draft" --> reviewer[Reviewer agent]
    reviewer -- "+ critique" --> finalizer[Finalizer agent]
    finalizer -- "+ final sentence" --> out([Final conversation: list of Message])
```
One workflow, three LLM calls, one shared conversation that grows by one assistant message per stage. The Writer sees just the topic; the Reviewer sees topic + Writer’s draft; the Finalizer sees topic + draft + critique.
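The contract is easy to model without the framework. This is a plain-Python sketch (stub functions stand in for LLM calls; `Message` is a hypothetical minimal shape, not the real MAF type) showing the one-shared-conversation rule: every stage reads the whole list and appends exactly one assistant message.

```python
from dataclasses import dataclass


@dataclass
class Message:
    role: str      # "user" or "assistant"
    author: str    # which participant produced the turn
    text: str


# Stub participants standing in for LLM calls: each reads the whole
# conversation so far and returns exactly one new assistant message.
def writer(conv):
    return Message("assistant", "writer", f"draft on: {conv[0].text}")


def reviewer(conv):
    return Message("assistant", "reviewer", f"critique of: {conv[-1].text}")


def finalizer(conv):
    return Message("assistant", "finalizer", f"final version of: {conv[-1].text}")


def run_sequential(topic, participants):
    conversation = [Message("user", "user", topic)]
    for agent in participants:
        # The runtime appends each response before invoking the next agent.
        conversation.append(agent(conversation))
    return conversation


conv = run_sequential("Why sleep matters", [writer, reviewer, finalizer])
# One user turn plus one assistant turn per stage, in order:
assert [m.author for m in conv] == ["user", "writer", "reviewer", "finalizer"]
```

The Reviewer stub never receives the draft as an argument; it finds it in the shared list, which is exactly the property the real orchestration provides.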
### What the convenience builder hides
In Ch11 you wrote this skeleton by hand for a two-agent chain:
```
InputAdapter (str -> AgentExecutorRequest)
  -> AgentExecutor(english_to_french)
  -> AgentExecutor(french_to_spanish)
  -> OutputAdapter (AgentExecutorResponse -> str)
```

`SequentialBuilder` / `BuildSequential` gives you the same shape for N agents, plus a crucial upgrade: the adapters in the middle forward the full conversation rather than just the last agent’s text. From the Python source (`agent_framework_orchestrations/_sequential.py`):
```
_InputToConversation
  -> AgentExecutor(writer)    -> _ResponseToConversation:writer
  -> AgentExecutor(reviewer)  -> _ResponseToConversation:reviewer
  -> AgentExecutor(finalizer) -> _ResponseToConversation:finalizer
  -> _EndWithConversation
```

The middle adapters unwrap each `AgentExecutorResponse` into the `full_conversation` field and hand that to the next agent, which is why every downstream agent gets the prior turns for free.
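To make the three hidden roles concrete, here is a toy version of each piece. These are illustrative stand-ins whose names mirror the internals above; the real MAF classes have different signatures, and the conversation is modeled as a bare list of `(speaker, text)` tuples.

```python
def input_to_conversation(raw):
    # Stand-in for _InputToConversation: normalise a bare string
    # into a conversation list so agents always receive the same shape.
    return [("user", raw)] if isinstance(raw, str) else list(raw)


def response_to_conversation(conversation, agent_name, reply_text):
    # Stand-in for _ResponseToConversation: append this agent's turn so
    # the next participant sees the full history, not just the last reply.
    return conversation + [(agent_name, reply_text)]


def end_with_conversation(conversation):
    # Stand-in for _EndWithConversation: yield the final grown list.
    return conversation


conv = input_to_conversation("Why sleep matters")
conv = response_to_conversation(conv, "writer", "draft...")
conv = response_to_conversation(conv, "reviewer", "critique...")
conv = response_to_conversation(conv, "finalizer", "final sentence.")
result = end_with_conversation(conv)

# Every turn survives to the end — the "full conversation" guarantee.
assert [speaker for speaker, _ in result] == ["user", "writer", "reviewer", "finalizer"]
```

The key design choice is in the middle function: it forwards the grown list rather than `reply_text` alone, which is the whole difference between this builder and the Ch11 hand-wired chain.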
### Jargon to nail
- **Participants** — the ordered list of agents (or custom `Executor`s) you hand to the builder. Can be any `SupportsAgentRun` — a `ChatClientAgent`, a Foundry agent, a MAF-native custom agent. Executors that consume `list[Message]` also work if you need a non-LLM step in the chain (summariser, filter, audit hook).
- **`SequentialBuilder` (Python) / `AgentWorkflowBuilder.BuildSequential` (.NET)** — the entry points. Both return a regular `Workflow` you run with the same `run(...)` / `InProcessExecution.RunStreamingAsync(...)` APIs from Ch09–Ch11. The Python builder has a `with_request_info(...)` fluent toggle for human-in-the-loop pauses (Ch17 will use that); .NET exposes the same shape via `RequestPort` on the workflow.
- **`executor_completed` event carrying `list[AgentExecutorResponse]` (Python)** — the per-turn observation point. After each agent runs, the stream yields an `executor_completed` event whose `data` is a list of `AgentExecutorResponse` records (one per executor that completed this superstep — for Sequential, exactly one). That’s where you pull `.agent_response.text` and `.executor_id` for per-agent logging.
- **`AgentResponseEvent` (.NET)** — the .NET equivalent. Pattern-matches cleanly in a `switch` over `WorkflowEvent` and carries `ExecutorId` plus an `AgentResponse` you can print directly.
- **Session sharing** — shorthand for “the conversation flows across participants.” Each agent sees prior turns because the adapters thread the full conversation list through the edges. You don’t create or wire `AgentSession` objects yourself; the builder handles it.
## Python walkthrough
Source: `python/main.py`. Three agent factories, one builder call, one event-stream consumer.
```python
from agent_framework import Agent
from agent_framework.orchestrations import SequentialBuilder


def writer() -> Agent:
    return Agent(
        _default_client(),
        instructions=(
            "You are a Writer. Draft a 2-sentence paragraph on the topic the user provides. "
            "Keep it short."
        ),
        name="writer",
    )


def reviewer() -> Agent:
    return Agent(
        _default_client(),
        instructions=(
            "You are a Reviewer. Read the draft above and produce a single-sentence review "
            "pointing out one strength and one weakness. Do not rewrite the draft."
        ),
        name="reviewer",
    )


def finalizer() -> Agent:
    return Agent(
        _default_client(),
        instructions=(
            "You are a Finalizer. Produce a one-sentence final version of the paragraph that "
            "addresses the reviewer's feedback. Output ONLY the final sentence — no preamble."
        ),
        name="finalizer",
    )


def build_workflow():
    return SequentialBuilder(participants=[writer(), reviewer(), finalizer()]).build()
```

Three things worth staring at:
- **Agents are factories.** Each `writer()` / `reviewer()` / `finalizer()` call constructs a fresh `Agent` with its own chat client. That matters for parallel runs — you never want two workflow invocations to share mutable agent state accidentally.
- **The `name=` argument becomes the executor id** inside the workflow. It’s what shows up on `AgentResponseEvent.ExecutorId` / `executor_completed.data[*].executor_id`. Keep them short and stable — `"writer"`, not `"WriterAgent_v2_preproduction"`.
- **Instructions do more work here than in a single-agent chapter.** Each participant sees the whole prior conversation, so the instruction is what prevents the Reviewer from rewriting the draft and the Finalizer from editorialising. The “Do not rewrite the draft” and “Output ONLY the final sentence — no preamble” clauses aren’t cosmetic; drop them and you’ll see the pipeline drift.
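The factory point is demonstrable without any framework at all. In this sketch, `FakeAgent` is a hypothetical stand-in for an agent whose middleware accumulates per-run state; two factory calls yield isolated instances, while a module-level singleton leaks state between invocations.

```python
class FakeAgent:
    """Stand-in for an agent whose middleware holds mutable per-run state."""

    def __init__(self):
        self.turns_seen = []  # mutable state — the hazard


def make_agent():
    # Factory: a fresh, independent instance per workflow invocation.
    return FakeAgent()


shared = FakeAgent()  # singleton: one instance reused by every invocation

# Two "workflow invocations" built from the factory stay isolated...
a, b = make_agent(), make_agent()
a.turns_seen.append("run-1 turn")
assert b.turns_seen == []  # b is untouched by run 1

# ...while the singleton accumulates state across invocations.
shared.turns_seen.append("run-1 turn")
shared.turns_seen.append("run-2 turn")  # run 2 now sees run 1's history
assert shared.turns_seen == ["run-1 turn", "run-2 turn"]
```

The factory costs one extra function call per run and removes the entire class of cross-run contamination bugs.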
The runner iterates events and pulls each agent’s response text:
```python
async def run(topic: str) -> list[str]:
    workflow = build_workflow()
    per_agent: dict[str, str] = {}
    async for event in workflow.run(topic, stream=True):
        if getattr(event, "type", None) != "executor_completed":
            continue
        payload = getattr(event, "data", None)
        if not isinstance(payload, list):
            continue
        for item in payload:
            agent_resp = getattr(item, "agent_response", None)
            eid = getattr(item, "executor_id", "")
            text = getattr(agent_resp, "text", None)
            if text and eid:
                per_agent[eid] = text
    return [per_agent.get(name, "") for name in ("writer", "reviewer", "finalizer")]
```

Two non-obvious bits:
- **Filter on `event.type == "executor_completed"`, not `"data"`.** The Sequential orchestration surfaces each agent’s response as an `executor_completed` event with a list payload, not as a separate `data` event. If you copy Ch10’s “handle `data` events” pattern over, you’ll see zero responses and spend an hour blaming the LLM.
- **`payload` is always a `list[AgentExecutorResponse]`.** For Sequential it contains exactly one entry (only one executor finishes per superstep). For Concurrent (next chapter), the same event type will carry several.
Running it:
```shell
uv run python tutorials/12-sequential-orchestration/python/main.py "Why sleep matters"
```

Output:
```
Topic: Why sleep matters

Writer: Sleep is essential for physical health, mental clarity, and emotional balance...
Reviewer: The draft clearly explains why sleep matters, but it could be strengthened by citing...
Finalizer: Sleep is essential for physical health, mental clarity, and emotional balance, as studies...
```

Three real LLM calls. Each one sees more context than the previous.
## .NET walkthrough
Source: `dotnet/Program.cs`. Same three agents, one-line workflow, same streaming iteration — the source-generator boilerplate from Ch09/Ch10 disappears because `BuildSequential` doesn’t need a custom `Executor` subclass.
```csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
using OpenAI;
using OpenAI.Chat;

public static Workflow BuildWorkflow()
{
    var chatClient = BuildChatClient();

    AIAgent writer = chatClient.AsAIAgent(instructions: WriterInstructions, name: "writer");
    AIAgent reviewer = chatClient.AsAIAgent(instructions: ReviewerInstructions, name: "reviewer");
    AIAgent finalizer = chatClient.AsAIAgent(instructions: FinalizerInstructions, name: "finalizer");

    return AgentWorkflowBuilder.BuildSequential(new[] { writer, reviewer, finalizer });
}
```

The runner pattern-matches `AgentResponseEvent` off the event stream:
```csharp
await using var run = await InProcessExecution.RunStreamingAsync(workflow, topic);

await foreach (var evt in run.WatchStreamAsync())
{
    if (evt is AgentResponseEvent r)
    {
        Console.WriteLine($"{r.ExecutorId,-9}: {r.Response.Text}");
        Console.WriteLine();
    }
}
```
}Three practical notes:
- **No `partial` classes, no `[MessageHandler]`, no source generator.** The convenience builder wraps each `AIAgent` in a hosted executor internally; you never write an `Executor` subclass. That’s why `Sequential.csproj` only references `Microsoft.Agents.AI.Workflows` plus the OpenAI client bits — the `Microsoft.Agents.AI.Workflows.Generators` package that Ch09/Ch10 needed isn’t required here.
- **`AgentResponseEvent` is the canonical per-agent observation point.** The base `WorkflowEvent` stream still emits `ExecutorInvokedEvent`, `ExecutorCompletedEvent`, `SuperStepStartedEvent`, etc. — filter on `AgentResponseEvent` when you want “show me each agent’s text output in order.”
- **`AsAIAgent(name:)` sets the executor id** the way `name="writer"` does in Python. The underlying id resolver reads `AIAgent.Name` and uses it as the string shown on `AgentResponseEvent.ExecutorId`.
`Sequential.csproj`:
```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Agents.AI.Workflows" Version="1.1.0" />
    <PackageReference Include="Microsoft.Agents.AI" Version="1.1.0" />
    <PackageReference Include="Microsoft.Agents.AI.OpenAI" Version="1.1.0" />
    <PackageReference Include="Azure.AI.OpenAI" Version="2.1.0" />
  </ItemGroup>

</Project>
```

Running it:
```shell
cd tutorials/12-sequential-orchestration/dotnet
dotnet run -- "Why sleep matters"
# Topic: Why sleep matters
#
# writer   : Sleep is essential for physical health...
# reviewer : The draft clearly explains why sleep matters...
# finalizer: Sleep is essential for physical health, mental clarity, and emotional balance...
```

Same three LLM calls, same shape of output.
## Side-by-side — Python vs .NET
| Aspect | Python | .NET |
|---|---|---|
| Builder entry point | `SequentialBuilder(participants=[a, b, c]).build()` | `AgentWorkflowBuilder.BuildSequential(new[] { a, b, c })` |
| Agent wrapping | `Agent(client, instructions=..., name=...)` directly in the list | `chatClient.AsAIAgent(instructions: ..., name: ...)` cast as `AIAgent` |
| Executor id source | `Agent.name` field | `AIAgent.Name`, set by `name:` in `AsAIAgent(...)` |
| Per-turn event | `event.type == "executor_completed"` with `event.data: list[AgentExecutorResponse]` | `evt is AgentResponseEvent r`, with `r.ExecutorId` + `r.Response.Text` |
| Final output | `event.type == "output"` — `list[Message]` conversation | `WorkflowOutputEvent` with the final conversation payload |
| HITL toggle | `.with_request_info(agents=[...])` fluent call on the builder | `RequestPort` on the workflow (see Ch17) |
| Checkpoint storage | `SequentialBuilder(participants=[...], checkpoint_storage=storage)` | Configured on the builder equivalent; see Ch18 |
| Source-generator needed? | No — pure Python | No — convenience builder wraps agents internally, no `[MessageHandler]` |
The contrast with Chapter 11 is sharper than the table makes it look: you write no `InputAdapter`, no `OutputAdapter`, no per-edge glue, no `AgentExecutorRequest`/`AgentExecutorResponse` marshalling. That’s the point of this chapter. Sequential is the pattern 80% of production chains want, and the one-liner exists so you don’t retype the same scaffolding in every workflow.
## When Sequential is the right shape
Reach for Sequential when:
- Order matters. The reviewer must run after the writer. The finalizer must run after both. That’s exactly the shape the builder encodes.
- Each agent’s role is fixed. A handoff mesh (Ch14) picks the next agent dynamically — Sequential doesn’t. If the same three agents run in the same order every time, Sequential is the cheaper, more auditable choice.
- One shared conversation is enough. All three agents read and append to the same list. No branching, no fan-out, no separate per-agent memories.
- You want the whole run to be deterministic given inputs. Sequential on the same input produces the same LLM round-trip pattern every time. Debuggable, checkpointable, traceable.
Prefer Handoff (Ch14) when the next agent is chosen at runtime by the current agent. Prefer Concurrent (Ch13) when agents run independently and their outputs are later aggregated. Prefer Group Chat (Ch15) when a manager picks the next speaker each turn. Prefer Magentic (Ch16) when the whole plan is LLM-generated.
## Gotchas
- **Event shape is not `type="data"`.** Sequential emits each agent’s response inside an `executor_completed` event whose `data` is a `list[AgentExecutorResponse]`. If you copy Chapter 10’s “filter on `data` events” pattern, you’ll see zero agent outputs and chase ghosts. Filter on `executor_completed` and unpack the list.
- **Instructions are load-bearing.** Each agent sees the full prior conversation. “Do not rewrite the draft” on the Reviewer, “Output ONLY the final sentence — no preamble” on the Finalizer — both are behaviour-shaping. Skip them and the Reviewer will helpfully rewrite the paragraph, the Finalizer will add a preamble, and nothing in the framework will stop it.
- **Agent factories, not agent singletons.** Build one agent per workflow invocation. Sharing a `ChatClientAgent` across concurrent `run(...)` calls in Python is safe (stateless), but sharing an agent whose middleware holds state will surprise you. The factory pattern (`def writer() -> Agent: ...`) costs nothing and avoids the class of bug entirely.
- **Duplicate participant instances fail at build time.** `SequentialBuilder` checks for the same `Executor` instance appearing twice; pass three distinct agents. If you legitimately want the same agent to appear twice in the chain, wrap it in two `AgentExecutor` instances with different ids.
- **`with_request_info()` pauses the workflow between participants.** It’s how you add HITL in Ch17. Enable it by mistake and every run blocks waiting for an external `response_handler` call — easy to miss because it looks like the workflow hung.
- **The final output is a `list[Message]`, not a string.** `workflow.run(...)` yields `output`-type events whose `data` is the full conversation. If you want just the Finalizer’s sentence, pick the last message with `role == "assistant"` — or iterate `executor_completed` events the way the sample does.
- **Sequential is not checkpointed by default.** Supply `checkpoint_storage=...` (Python) or the .NET equivalent to resume mid-pipeline. Chapter 18 walks the full checkpoint surface.
- **.NET: `BuildSequential` does not need the source-generator package.** `Microsoft.Agents.AI.Workflows.Generators` is only required when you write your own `Executor` subclasses. The convenience builder wraps the provided `AIAgent` instances internally.
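The “final output is a conversation, not a string” gotcha is easy to handle with a small helper. This sketch assumes a hypothetical minimal `Message` shape with `role` and `text` attributes; adapt the attribute names to whatever the real `list[Message]` payload exposes.

```python
from dataclasses import dataclass


@dataclass
class Message:
    # Hypothetical minimal shape; the real MAF Message type differs.
    role: str
    text: str


def last_assistant_text(conversation: list[Message]) -> str:
    """Pull the last assistant turn (the Finalizer's sentence) out of
    the full-conversation output."""
    for msg in reversed(conversation):
        if msg.role == "assistant":
            return msg.text
    raise ValueError("no assistant message in conversation")


conv = [
    Message("user", "Why sleep matters"),
    Message("assistant", "draft..."),
    Message("assistant", "review..."),
    Message("assistant", "final sentence."),
]
assert last_assistant_text(conv) == "final sentence."
```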
## Tests
Python — 1 wiring test + 3 real-LLM integration tests. Integration tests skip if no LLM credentials are in `.env`.
```shell
# Python (4 passed — 3 hit real Azure OpenAI)
source agents/.venv/bin/activate
python -m pytest tutorials/12-sequential-orchestration/python/tests/ -v
```

The tests exercise:
- **Wiring** — `build_workflow()` returns a non-null `Workflow` with three participants.
- **Happy path** — all three agents produce non-empty text given a real topic.
- **Role enforcement** — the Reviewer’s output contains critique markers (“strength”, “weakness”, “however”, “but”, “improve”), proving it read the draft and didn’t just re-draft.
- **Distinct outputs** — Writer, Reviewer, and Finalizer produce three different strings, catching accidental loopbacks in the adapters.
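The role-enforcement and distinct-outputs checks reduce to plain string assertions. This standalone sketch shows their shape against canned outputs (the real tests run the workflow first; function names here are illustrative, not the actual test names).

```python
def assert_reviewer_reviewed(review: str) -> None:
    # Role enforcement: the review must contain at least one critique marker,
    # which a straight re-draft of the paragraph would almost never include.
    markers = ("strength", "weakness", "however", "but", "improve")
    assert any(m in review.lower() for m in markers), "reviewer just re-drafted"


def assert_distinct(outputs: list[str]) -> None:
    # Distinct outputs: catches adapter loopbacks that feed one agent's
    # text to all three slots.
    assert len(set(outputs)) == len(outputs), "two agents produced identical text"


# Canned outputs standing in for a real run:
writer_out = "Sleep restores the body and mind."
review_out = "Clear thesis is a strength, but it lacks supporting evidence."
final_out = "Sleep restores the body and mind, as decades of research confirm."

assert_reviewer_reviewed(review_out)
assert_distinct([writer_out, review_out, final_out])
```

Marker-based checks are deliberately loose: they tolerate LLM phrasing variance while still failing loudly when an agent abandons its role.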
.NET — the Ch12 project ships as an end-to-end runnable console app against real credentials. The integration harness lives in the capstone Phase 7 refactor (`plans/refactor/09-return-replace-sequential-hitl.md`), so the unit test suite isn’t duplicated per chapter.
```shell
cd tutorials/12-sequential-orchestration/dotnet
dotnet build
dotnet run -- "Why sleep matters"   # requires .env with LLM credentials
```

## How this shows up in the capstone
`agents/python/workflows/return_replace.py` is a hand-rolled state machine today: a linear sequence of steps (`validate_eligibility`, `inspect_condition`, `generate_rma`, `notify_carrier`) driven by a custom `async for step in steps` loop. It’s the classic “before” picture for this chapter.
Phase 7 (`plans/refactor/09-return-replace-sequential-hitl.md`) replaces that state machine with `SequentialBuilder(participants=[...])` plus a `with_request_info(agents=["generate_rma"])` HITL gate for high-value returns — the exact shape this chapter teaches, plus the Ch17 human-in-the-loop toggle bolted on. The refactor plan documents the current state as the “before” and cites this chapter as the pattern reference.
The .NET parity port in `agents/dotnet/src/ECommerceAgents.Shared/Workflows/ReturnReplaceWorkflow.cs` will follow the same trajectory via `AgentWorkflowBuilder.BuildSequential(...)` + `RequestPort` once Ch17 lands.
## Further reading
### This chapter
- Source on GitHub: `tutorials/12-sequential-orchestration`
- Previous: Chapter 11 — Agents in Workflows · Next: Chapter 13 — Concurrent Orchestration
### Microsoft Agent Framework docs
### Where it lives in the capstone
- Python `return_replace.py` (pre-refactor state machine): `agents/python/workflows/return_replace.py`
- .NET parity (pre-refactor): `agents/dotnet/src/ECommerceAgents.Shared/Workflows/ReturnReplaceWorkflow.cs`
- Refactor plan: `plans/refactor/09-return-replace-sequential-hitl.md`
## What’s next
Chapter 13 — Concurrent Orchestration takes the same “wrap agents, hand to a builder” idea and applies it to parallel execution: `ConcurrentBuilder(participants=[...])` / `AgentWorkflowBuilder.BuildConcurrent(...)` runs N agents on the same input in one superstep and aggregates their outputs. You’ll see why a barrier-based scheduler (Ch09) makes that trivial, and why the `list[AgentExecutorResponse]` payload on `executor_completed` events is a list in the first place.

