Series note — Final chapter of MAF v1: Python and .NET. No new code. This is a guided tour of the capstone application with pointers at the file-and-line level for every concept you learned.
Repo — Source: tutorials/21-putting-it-all-together. The capstone itself lives at the repo root — `agents/`, `web/`, `docker/`, `scripts/`.
## Why this chapter
The previous 22 chapters drilled one concept each in isolation. This one reassembles them into the multi-agent e-commerce platform that ships in the repo — the same app that appears in the original Python-only series, now viewed through the MAF v1 lens and with a .NET twin alongside Python.
If you’ve read every prior chapter, you can now open any file under agents/python/ or agents/dotnet/ without a rosetta stone. This tour tells you where to start and what to match against.
## Prerequisites

- Chapters 00–20 (plus 20b — DevUI) read end-to-end, or at least the ones whose concepts you want to locate in the live code.
- The stack running locally. If you haven’t brought it up before, circle back to Chapter 00 — Setup and run `./scripts/verify-setup.sh` followed by `./scripts/dev.sh`.
## The stack in one picture
The diagram below is the full runtime. Every box maps to a directory in the repo; every arrow is something a chapter walked you through.
```mermaid
flowchart TB
    subgraph frontend["Frontend (web/)"]
        browser["Browser"]
        next["Next.js<br/>chat/page.tsx + api.ts"]
    end
    subgraph backend["Backend agents (agents/python or agents/dotnet)"]
        orch["Orchestrator<br/>:8080"]
        pd["product-discovery<br/>:8081"]
        om["order-management<br/>:8082"]
        pp["pricing-promotions<br/>:8083"]
        rs["review-sentiment<br/>:8084"]
        inv["inventory-fulfillment<br/>:8085"]
        mcp["Inventory MCP<br/>:9000"]
    end
    subgraph data["Shared infra"]
        pg[("Postgres 16<br/>+ pgvector")]
        redis[("Redis 7")]
    end
    subgraph obs["Observability + test harness"]
        aspire["Aspire Dashboard<br/>:18888"]
        devui["DevUI<br/>:5173"]
    end
    llm(["LLM provider<br/>OpenAI / Azure OpenAI"])
    browser -->|JWT Bearer| next
    next -->|"POST /api/chat[/stream]"| orch
    orch -->|"A2A HTTP<br/>/message:send"| pd
    orch --> om
    orch --> pp
    orch --> rs
    orch --> inv
    inv -.-> mcp
    orch --> pg
    orch --> redis
    pd --> pg
    om --> pg
    orch --> llm
    pd --> llm
    om --> llm
    orch -. OTel .-> aspire
    pd -. OTel .-> aspire
    om -. OTel .-> aspire
    devui -. drives .-> orch
    class next success
    class orch,pd,om,pp,rs,inv core
    class mcp,llm external
    class pg,redis,aspire,devui infra
    class browser success
```
One orchestrator, five specialists, one MCP server. Postgres and Redis sit behind every agent; the LLM provider is a single external dependency swapped via LLM_PROVIDER. A2A (agent-to-agent) is plain HTTP POST to /message:send — no magic transport. Aspire Dashboard at :18888 is the passive telemetry view; DevUI at :5173 is the interactive test harness from Ch20b.
Each box on the diagram is one process, one Dockerfile target, one port. The specialists all share a single multi-target Dockerfile (agents/python/Dockerfile, ARG AGENT_NAME) — changing the spec of one rebuilds only its layer. The .NET twin is one project per specialist (agents/dotnet/src/ECommerceAgents.*), composed via agents/dotnet/ECommerceAgents.sln. Pick whichever stack you want; the frontend and the database schema don’t care.
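Since A2A here really is just HTTP, the entire transport fits in one function. A hedged sketch: the header names and the `{ "response": "..." }` reply shape follow the description above, while the request-body key `message`, the helper name, and the use of stdlib `urllib` (instead of the repo's `httpx` client) are illustrative assumptions, not the repo's actual wire contract.

```python
# Illustrative A2A client: a plain HTTP POST to /message:send with shared-secret
# auth and forwarded user identity in headers. The body key "message" and this
# helper are assumptions for the sketch, not the repo's actual contract.
import json
import urllib.request

def call_specialist(base_url: str, text: str, *, secret: str,
                    user_email: str, user_role: str, session_id: str) -> str:
    req = urllib.request.Request(
        f"{base_url}/message:send",
        data=json.dumps({"message": text}).encode(),
        headers={
            "Content-Type": "application/json",
            "x-agent-secret": secret,       # shared-secret agent auth
            "x-user-email": user_email,     # forwarded user identity
            "x-user-role": user_role,
            "x-session-id": session_id,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing framework-specific on the wire — which is exactly why the Python and .NET specialists are interchangeable behind the orchestrator.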
## Jargon recap (inline, since this is the last chapter)

- A2A — Agent-to-Agent HTTP protocol. The orchestrator POSTs to `http://<specialist>:<port>/message:send`; the specialist replies with `{ "response": "..." }`. Authenticated with an `x-agent-secret` header, with user identity forwarded via `x-user-email` / `x-user-role` / `x-session-id`.
- ContextVar — Python’s `contextvars.ContextVar`, async-safe per-request state. The capstone uses `current_user_email`, `current_user_role`, and `current_session_id` so tools don’t need to thread identity through every call signature.
- Aspire Dashboard — .NET’s dev-time telemetry UI. Traces, metrics, and structured logs from every service in one place. Runs at `http://localhost:18888` in this repo.
- DevUI — MAF-native browser dashboard for driving a single agent or workflow interactively. Chat on the left, tool calls + span tree on the right. Runs at `http://localhost:5173`. Covered in Ch20b.
Full definitions: jargon glossary.
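The ContextVar pattern is small enough to sketch in full. A minimal, self-contained version — the middleware sets identity once per request and a tool reads it without extra parameters. The handler and tool here are hypothetical; only the variable name mirrors the capstone.

```python
# Minimal sketch of the ContextVar pattern: middleware sets identity once per
# request, tools read it without threading it through call signatures.
# get_my_orders and handle_request are illustrative, not repo code.
from contextvars import ContextVar

current_user_email: ContextVar[str] = ContextVar("current_user_email",
                                                 default="anonymous")

async def get_my_orders() -> str:
    # A tool reads identity from the ContextVar instead of a parameter.
    return f"orders for {current_user_email.get()}"

async def handle_request(email: str) -> str:
    token = current_user_email.set(email)    # middleware's job, per request
    try:
        return await get_my_orders()
    finally:
        current_user_email.reset(token)      # restore on the way out
```

Because each asyncio task carries its own context, concurrent requests never see each other's identity — that's the "async-safe" part.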
## Request lifecycle through the live stack
Every chat request takes the same path. The sequence diagram below is what you see replayed in the Aspire trace view when you send one prompt.
```mermaid
sequenceDiagram
    participant U as User (browser)
    participant W as Next.js chatStream()
    participant O as Orchestrator :8080
    participant L as LLM (OpenAI / Azure)
    participant S as Specialist (e.g. product-discovery :8081)
    participant D as Postgres + pgvector
    participant A as Aspire :18888
    U->>W: "show me running shoes under $100"
    W->>O: POST /api/chat/stream + JWT Bearer
    O->>O: AgentAuthMiddleware → ContextVar
    O->>D: load conversation history (last 50)
    O->>L: agent.run(messages, stream=True)
    L-->>O: tool call → call_specialist_agent("product-discovery", ...)
    O->>S: A2A POST /message:send + x-agent-secret
    S->>S: ECommerceContextProvider.before_run
    S->>L: agent.run(messages) with @tool schemas
    L-->>S: tool call → semantic_search(...)
    S->>D: pgvector cosine similarity query
    D-->>S: matching products
    S->>L: tool result → continue
    L-->>S: final text
    S-->>O: { "response": "..." }
    O->>L: synthesize with specialist result in context
    L-->>O: SSE chunks
    O-->>W: text/event-stream
    W-->>U: rendered markdown + product cards
    O-)A: OTel spans (invoke_agent, chat, execute_tool)
    S-)A: OTel spans
```
Every arrow is covered by a chapter. Step 1 is Ch03. Step 3 is Ch06. Step 4 loads history — Ch04. Step 5 runs with context providers — Ch05. Step 6 is the tool call — Ch02. Steps 7–8 are A2A handoff — Ch14. The dashed spans to Aspire are Ch07.
## Concept → file:line map
Each row lists a chapter you’ve completed and where that pattern lives in the capstone today. File:line citations were verified on 2026-04-21 — re-check with grep -n before editing.
| Chapter | Where it lives today | Twin / follow-up |
|---|---|---|
| Ch00 Setup | scripts/verify-setup.sh — single-shot toolchain check; scripts/dev.sh — compose up | docker-compose.dotnet.yml brings the .NET twin up |
| Ch01 First Agent | agents/python/orchestrator/agent.py:86 — Agent(client, instructions=..., tools=..., context_providers=...) | agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs:10 — SpecialistAgentFactory.Create(...) |
| Ch02 Tools | agents/python/product_discovery/tools.py:15 — @tool with Annotated[...] hints (7 tools total) | agents/dotnet/src/ECommerceAgents.ProductDiscovery/ — [Description] attributes + AIFunctionFactory.Create |
| Ch03 Streaming + Multi-turn | agents/python/orchestrator/routes.py:400 — SSE /api/chat/stream; shared/agent_host.py:61 — _run_agent_native_stream drives MAF-native streaming | agents/dotnet/src/ECommerceAgents.Orchestrator/Routes/ — ChatStreamEndpoint |
| Ch04 Sessions + Memory | agents/python/shared/session.py:131 — PostgresSessionHistoryProvider backing AgentSession; shared/session.py:201 — get_history_provider() factory | agents/dotnet/src/ECommerceAgents.Shared/Sessions/PostgresSessionHistoryProvider.cs |
| Ch05 Context Providers | agents/python/shared/context_providers.py:35 — UserProfileProvider; :89 — RecentOrdersProvider; :151 — AgentMemoriesProvider; :213 — composite ECommerceContextProvider | agents/dotnet/src/ECommerceAgents.Shared/ContextProviders/ |
| Ch06 Middleware | agents/python/shared/middleware.py:40 — AgentRunLogger; :76 — ToolAuditMiddleware; :127 — PiiRedactionMiddleware; :164 — default_middleware_stack(). HTTP-layer auth sits separately at shared/auth.py:27 | agents/dotnet/src/ECommerceAgents.Shared/Middleware/AgentRunLogger.cs + ToolAuditMiddleware.cs + PiiRedactor.cs |
| Ch07 Observability | agents/python/shared/telemetry.py:30 — setup_telemetry(); :184 — agent_run_span; :221 — a2a_call_span; :250 — tool_call_span | agents/dotnet/src/ECommerceAgents.Shared/Telemetry/ |
| Ch08 MCP Tools | agents/python/mcp/inventory_server.py:29 — standalone MCP server (FastAPI) with stock / warehouse / shipping tools | agents/dotnet/src/ECommerceAgents.Mcp/ |
| Ch09 Executors + Edges | agents/python/workflows/pre_purchase.py:49 — _FanOutExecutor, :60 _ReviewsExecutor, :79 _StockExecutor, :98 _PriceHistoryExecutor, :117 _MergeAndShipExecutor, :148 _SynthesisExecutor | agents/dotnet/src/ECommerceAgents.Shared/Workflows/PrePurchaseWorkflow.cs |
| Ch10 Events + Builder | agents/python/workflows/pre_purchase.py:218 — PrePurchaseWorkflow builds via WorkflowBuilder; events flow through WorkflowEvent | agents/dotnet/src/ECommerceAgents.Shared/Workflows/PrePurchaseWorkflow.cs (WorkflowBuilder<T>) |
| Ch11 Agents in Workflows | agents/python/shared/remote_agent.py — RemoteSpecialistChatClient wraps A2A as an Agent, used by both the handoff mesh and the workflows | agents/dotnet/src/ECommerceAgents.Shared/Agents/ |
| Ch12 Sequential | agents/python/workflows/return_replace.py:73 — _CheckEligibilityExecutor, :101 _InitiateReturnExecutor, :135 _SearchReplacementsExecutor, :206 _ApplyDiscountExecutor, :230 _FinalizeExecutor; wired via SequentialBuilder in ReturnAndReplaceWorkflow:243 | agents/dotnet/src/ECommerceAgents.Shared/Workflows/ReturnAndReplaceWorkflow.cs |
| Ch13 Concurrent | agents/python/workflows/pre_purchase.py:49 — fan-out to reviews / stock+shipping / price; _MergeAndShipExecutor:117 aggregates | agents/dotnet/src/ECommerceAgents.Shared/Workflows/PrePurchaseWorkflow.cs (ConcurrentBuilder) |
| Ch14 Handoff | agents/python/orchestrator/handoff.py:46 — build_orchestrator_handoff_workflow wires the mesh; agent.py:25 keeps the tool-router fallback (call_specialist_agent) | agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorTools.cs (tool-router parity) |
| Ch15 Group Chat | Not yet wired into the capstone — follow-up for a “product-launch review” flow | Parity pending |
| Ch16 Magentic | Not yet wired into the capstone — follow-up for a “shopping concierge” flow | Parity pending |
| Ch17 HITL | agents/python/workflows/return_replace.py:157 — _HitlGateExecutor gates approval above RETURN_HITL_THRESHOLD via request_info + @response_handler | agents/dotnet/src/ECommerceAgents.Shared/Workflows/ReturnAndReplaceWorkflow.cs |
| Ch18 Checkpoints | agents/python/shared/checkpoint_storage.py:34 — PostgresCheckpointStorage backed by the workflow_checkpoints table | agents/dotnet/src/ECommerceAgents.Shared/Checkpoints/ |
| Ch19 Declarative | agents/python/shared/workflow_loader.py:158 — load_workflow(path); :118 register_op; :230 load_workflows_directory. Specs under agents/python/config/workflows/*.yaml | agents/dotnet/src/ECommerceAgents.Shared/Workflows/DeclarativeWorkflow.cs |
| Ch20 Visualization | scripts/visualize_workflows.py — iterates every registered workflow and writes docs/workflows/*.mmd + *.dot | agents/dotnet/src/ECommerceAgents.Shared/Workflows/MermaidExporter.cs |
| Ch20b DevUI | DevUI isn’t wired into the Docker stack — it’s launched on demand via uv run devui agents/python/orchestrator/agent.py (or the .NET equivalent) while the backend is down; the point is to drive a single agent without the full cluster | Same surface — dotnet devui |
| Ch20c Production hardening | HTTP-layer hardening lives in agents/python/shared/auth/ — keyring.py holds the kid-indexed signing keys; password_reset.py and refresh_tokens.py own the token tables. Phase 8 of the refactor plan tracks the wiring | agents/dotnet/src/ECommerceAgents.Shared/Auth/ (parity via IOptionsMonitor<JwtKeyringOptions> and CryptographicOperations.FixedTimeEquals) |
| Ch21 Putting it all together | You are here. Every row above cross-references this post for “where does that concept actually live?” | — |
| Ch22 Asymmetries | Reference appendix — six surface-API differences between Python and .NET MAF stacks | — |
| Ch23 Evaluation framework | agents/python/evals/ — golden datasets, three-axis scoring, CLI runner with smoke / full tiers, CI gate. Phase 9 of the refactor plan migrates the original Part 9 framework | agents/dotnet/src/ECommerceAgents.Evals/ |
| Ch24 Prompt engineering | agents/python/shared/prompt_loader.py — YAML composition for the base prompt; shared/context_providers.py:RoleContextProvider — per-request role injection. Phase 10 of the refactor plan drops the per-role agent factory in favor of a single instance per agent | agents/dotnet/src/ECommerceAgents.Shared/Prompts/ |
| Ch25 Deployment | agents/python/Dockerfile (multi-target, ARG-driven), docker-compose.yml (YAML anchors + health gates), scripts/dev.sh (poll-don’t-sleep). Phase 11 of the refactor plan tightens the dev.sh wait loop and splits the .NET Dockerfile into build / runtime stages | agents/dotnet/Dockerfile + docker-compose.dotnet.yml overlay |
28 rows. Every row links to the chapter and cites an exact file path; every “Twin” column shows what the .NET side of the port looks like so you can skim both stacks in one pass.
## Try it yourself
Ten commands, from cold clone to interactive trace:
```shell
# 1. clone + environment check
git clone https://github.com/nitin27may/e-commerce-agents.git
cd e-commerce-agents
cp .env.example .env          # paste OPENAI_API_KEY or the Azure trio
./scripts/verify-setup.sh     # Ch00 smoke test

# 2. bring up the full stack
./scripts/dev.sh              # orchestrator + 5 specialists + Postgres + Redis + Aspire

# 3. open the UIs
open http://localhost:3000    # Next.js frontend — log in, chat
open http://localhost:18888   # Aspire Dashboard — Traces tab, pick a request
open http://localhost:9000/.well-known/mcp-manifest.json   # Inventory MCP server manifest

# 4. (optional) drive a single agent via DevUI
docker compose stop orchestrator
cd agents/python && uv run devui orchestrator/agent.py     # Ch20b flow
```

Then line up what you see with what you read:
- Chat a prompt like “show me running shoes under $100” in the frontend. Watch the Aspire trace view — you’ll see `invoke_agent` on the orchestrator, a `chat` span to OpenAI, a tool call named `call_specialist_agent`, an `a2a_call` span to `product-discovery`, and nested `execute_tool` spans for `search_products` / `semantic_search`. That entire trace is what the sequence diagram above is narrating.
- Grep for `@tool` in `agents/python/product_discovery/tools.py` — the same `Annotated[type, Field(description=...)]` shape you first saw in Ch02. All 7 tools in this agent repeat the pattern.
- Open `agents/python/workflows/return_replace.py` — the sequential executors (Ch12), the HITL gate (Ch17), and the discount branch all live in one file under 250 lines. It’s the densest piece of MAF in the repo and rewards a careful read.
Every concept from the series is sitting at a file path you already know how to read.
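That `@tool` + `Annotated` shape is worth seeing once in runnable form. In the sketch below the decorator and `Field` are dependency-free stand-ins so the snippet runs without the framework installed — in the repo the real `@tool` and `Field` come from MAF/pydantic — and the tool body is hypothetical (the real tools hit Postgres/pgvector).

```python
# Stand-ins so this sketch runs without agent-framework/pydantic installed;
# the repo's real @tool and Field come from the framework. The canned return
# value is illustrative — real tools query Postgres/pgvector.
from dataclasses import dataclass
from typing import Annotated

@dataclass
class Field:                      # stand-in for the framework's Field
    description: str

def tool(fn):                     # stand-in for MAF's @tool decorator
    fn.__is_tool__ = True
    return fn

@tool
async def search_products(
    query: Annotated[str, Field(description="Free-text product query")],
    max_price: Annotated[float, Field(description="Upper price bound, USD")] = 100.0,
) -> list[dict]:
    return [{"name": "Trail Runner", "price": 89.0}]   # canned result
```

The point of the shape: the type hints and `Field` descriptions are everything the framework needs to emit a JSON tool schema for the LLM — no separate schema file.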
## Python vs .NET — where the parity lives
The port reached 100% surface parity on 2026-04-21 (207 tests green, snake_case JSON on the wire). Both stacks ship the same:
| Capability | Python | .NET |
|---|---|---|
| Agent factory | agents/python/orchestrator/agent.py:86 | agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs:10 |
| Specialist agents ×5 | agents/python/{product_discovery,order_management,pricing_promotions,review_sentiment,inventory_fulfillment}/agent.py | agents/dotnet/src/ECommerceAgents.* (five mirror projects) |
| MCP server | agents/python/mcp/inventory_server.py | agents/dotnet/src/ECommerceAgents.Mcp/ |
| Middleware stack | agents/python/shared/middleware.py | agents/dotnet/src/ECommerceAgents.Shared/Middleware/ |
| Session providers | agents/python/shared/session.py | agents/dotnet/src/ECommerceAgents.Shared/Sessions/ |
| Checkpoint storage | agents/python/shared/checkpoint_storage.py | agents/dotnet/src/ECommerceAgents.Shared/Checkpoints/ |
| Declarative workflows | agents/python/shared/workflow_loader.py | agents/dotnet/src/ECommerceAgents.Shared/Workflows/DeclarativeWorkflow.cs |
| Workflows | agents/python/workflows/{pre_purchase,return_replace}.py | agents/dotnet/src/ECommerceAgents.Shared/Workflows/{PrePurchaseWorkflow,ReturnAndReplaceWorkflow}.cs |
Switch between stacks without changing the frontend — both compose files expose the backend on :8080.
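“snake_case JSON on the wire” is the detail that makes the twins interchangeable: the .NET side has to rename its PascalCase properties before serializing. The mapping itself is tiny; the function name below is illustrative, not repo code.

```python
# Illustrative PascalCase/camelCase → snake_case mapping of the kind the .NET
# twin applies at serialization time to stay wire-compatible with Python.
import re

def to_snake(name: str) -> str:
    # Insert "_" at every lowercase/digit → uppercase boundary, then lowercase.
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name).lower()
```

Applied to every property name on the way out, both stacks emit byte-identical JSON keys, so the frontend never knows which backend answered.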
## Reading paths by role
Not every reader needs to open every file. Pick the entry point that matches what you’re here to do:
### “I want to add a new specialist agent”

- `agents/python/product_discovery/agent.py:35` — the simplest template. A factory that returns `Agent(client, name, description, instructions, tools, context_providers)`.
- `agents/python/product_discovery/tools.py:15` — `@tool` decorator + `Annotated` hints + async function. Copy the shape, swap the body.
- `agents/python/config/prompts/product_discovery.yaml` — the YAML composition system. Add your system prompt here, not inline.
- `agents/python/product_discovery/main.py` — the `create_agent_app(...)` entry point. 30 lines of FastAPI lifespan.
- `docker-compose.yml` — copy the `product-discovery` service block, bump the port, change `AGENT_NAME`.
- Add your URL to `AGENT_REGISTRY` in `.env` so the orchestrator knows where to route.
Ten minutes from clone to a new specialist answering A2A calls.
### “I want to understand how a chat becomes a stream”

- `web/src/app/(app)/chat/page.tsx:181` — `ChatPage` component. `sendMessage()` at `:297` is where the UI kicks off the stream.
- `web/src/lib/api.ts:192` — `chatStream()`. SSE client, JWT in `Authorization`, auto-refresh on 401.
- `agents/python/orchestrator/routes.py:400` — `/api/chat/stream` endpoint. Loads history from Postgres, calls `_run_agent_native_stream()`.
- `agents/python/shared/agent_host.py:61` — MAF-native streaming. One `async for` over `agent.run(..., stream=True)`, emit each update’s `text` downstream.
- `agents/python/shared/telemetry.py:184` — `agent_run_span(...)` context manager wrapping every run so you see the span tree in Aspire.
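The core of that streaming step fits in a few lines: one async-for over a streaming run, each chunk re-framed as SSE. In this sketch `fake_run` stands in for `agent.run(..., stream=True)`, and the chunk contents and `[DONE]` sentinel are illustrative.

```python
# Sketch of the SSE re-emission loop: one async-for over a streaming run,
# each chunk wrapped in an SSE "data:" frame. fake_run stands in for
# agent.run(..., stream=True); the [DONE] sentinel is illustrative.
from typing import AsyncIterator

async def fake_run(prompt: str) -> AsyncIterator[str]:
    for word in ["running", " shoes", " under", " $100"]:
        yield word

async def sse_stream(prompt: str) -> AsyncIterator[str]:
    async for chunk in fake_run(prompt):
        yield f"data: {chunk}\n\n"   # SSE wire framing
    yield "data: [DONE]\n\n"
```

The endpoint hands this generator to a streaming response; the browser-side `chatStream()` unwraps the `data:` frames on the other end.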
### “I want to understand a workflow end-to-end”

`agents/python/workflows/return_replace.py` is the densest file in the repo — 11 KB, and it covers Ch09 through Ch18:

- Executors as nodes (Ch09) — classes at `:73`, `:101`, `:135`, `:157`, `:206`, `:230`.
- Sequential topology (Ch12) — built inside `ReturnAndReplaceWorkflow` at `:243`.
- HITL gate (Ch17) — `_HitlGateExecutor` at `:157` emits `request_info`; the response handler resumes on human reply.
- Checkpointed state (Ch18) — `WorkflowState` at `:36` is the dataclass that round-trips through `PostgresCheckpointStorage`.

Read it alongside `agents/python/workflows/pre_purchase.py` for the concurrent counterpart (Ch13).
Read it alongside agents/python/workflows/pre_purchase.py for the concurrent counterpart (Ch13).
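The checkpoint contract underneath all of this — workflow state that survives a JSON round-trip — can be sketched independently of Postgres. Field names below are hypothetical, not the repo's actual `WorkflowState`; only the round-trip property is the point.

```python
# Hypothetical sketch of checkpointable workflow state (Ch18): a dataclass
# that round-trips losslessly through JSON, which is what lets a storage
# backend persist and resume a run. Field names are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class WorkflowState:
    order_id: str
    step: str
    refund_amount: float
    awaiting_approval: bool = False

def save(state: WorkflowState) -> str:
    return json.dumps(asdict(state))        # what a storage backend persists

def load(blob: str) -> WorkflowState:
    return WorkflowState(**json.loads(blob))
```

A HITL gate (Ch17) is exactly where this matters: the run checkpoints with `awaiting_approval=True`, the process can die, and the resume path reloads the same state.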
### “I want to change the observability story”

- `agents/python/shared/telemetry.py:30` — `setup_telemetry(service_name)` configures `TracerProvider` + `MeterProvider` + `LoggerProvider` with OTLP exporters pointed at Aspire.
- `agents/python/shared/telemetry.py:184` — `agent_run_span` · `:221` `a2a_call_span` · `:250` `tool_call_span`. These are the three semantic spans that match MAF’s built-in GenAI conventions.
- `agents/python/shared/telemetry.py:356` `_instrument_openai()` · `:380` `_instrument_httpx()` · `:388` `_instrument_asyncpg()` — auto-instrumentation hooks. Zero code change in agents to get LLM, HTTP, and DB spans.
Swap the OTLP endpoint away from Aspire in .env (OTEL_EXPORTER_OTLP_ENDPOINT) to point at Azure Monitor, Grafana Tempo, or any OTel-compatible backend.
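The shape of those three semantic-span helpers can be sketched without OpenTelemetry installed. This dependency-free stand-in just records `(name, attributes)` tuples so the wrapping pattern is visible; `gen_ai.agent.name` and `gen_ai.tool.name` are real OTel GenAI attributes, while the `a2a.*` keys are illustrative, not the repo's actual attribute names.

```python
# Stand-in sketch of the three semantic-span helpers. Real code builds
# OpenTelemetry spans; here a contextmanager records (name, attributes)
# so the nesting pattern is visible without any dependency.
from contextlib import contextmanager

RECORDED: list[tuple[str, dict]] = []

@contextmanager
def _span(name: str, attributes: dict):
    RECORDED.append((name, attributes))
    yield

def agent_run_span(agent: str):
    return _span("invoke_agent", {"gen_ai.agent.name": agent})

def a2a_call_span(source: str, target: str):
    return _span("a2a_call", {"a2a.source": source, "a2a.target": target})

def tool_call_span(tool: str):
    return _span("execute_tool", {"gen_ai.tool.name": tool})

# Nesting the context managers reproduces the span tree you see in Aspire:
with agent_run_span("orchestrator"):
    with tool_call_span("call_specialist_agent"):
        with a2a_call_span("orchestrator", "product-discovery"):
            pass
```

Swapping the backend never touches these helpers — only the exporter endpoint changes, which is why the `.env` switch above is enough.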
## What Phase 7 actually changed
When this series started, the capstone was a working MAF app with hand-rolled pieces. Phase 7 (the 13-step refactor under plans/refactor/) replaced them with MAF-native primitives. Five are visible in the file:line map above; all have landed as of 2026-04-21:
| Before Phase 7 | After | Location |
|---|---|---|
| Custom OpenAI tool-calling loop in `agent_host.py` (250 lines of manual JSON parsing) | MAF-native `agent.run()` / `agent.run(..., stream=True)` | `agents/python/shared/agent_host.py:50` `_run_agent_native`, `:61` `_run_agent_native_stream` |
| Manual history forwarding truncated to last 10 / 500 chars per message | `AgentSession` + `PostgresSessionHistoryProvider`, full history per conversation | `agents/python/shared/session.py:131` |
| Single monolithic `ECommerceContextProvider` (one class, three concerns) | Three composable providers + a backward-compat composite | `agents/python/shared/context_providers.py:35`, `:89`, `:151`, `:213` |
| HTTP-only middleware (auth). Nothing inside the agent run. | Three MAF middlewares — agent-run logger, tool auditor, PII redactor | `agents/python/shared/middleware.py:40`, `:76`, `:127` |
| `call_specialist_agent` tool-router as the only topology | Tool router still the default; `HandoffBuilder` mesh behind `MAF_HANDOFF_MODE=handoff` | `agents/python/orchestrator/handoff.py:46` |
The .NET port was built against the post-refactor shape, so every agents/dotnet/... path lines up with the right column above.
## What the Aspire trace looks like for one prompt
When the frontend posts one prompt to /api/chat/stream, the Aspire trace view shows a span tree. Reading top to bottom maps exactly onto the sequence diagram earlier in this post:
```text
POST /api/chat/stream  [orchestrator]                          850 ms
└── invoke_agent orchestrator                                  830 ms  ← agent_run_span
    ├── SELECT ... FROM messages WHERE conversation_id = $1     12 ms  ← asyncpg auto-instrumentation
    ├── chat openai.gpt-4.1                                    380 ms  ← MAF GenAI semantic span
    │   ├── gen_ai.request.model=gpt-4.1
    │   ├── gen_ai.usage.input_tokens=2140
    │   ├── gen_ai.usage.output_tokens=34
    │   └── gen_ai.response.finish_reasons=["tool_calls"]
    ├── execute_tool call_specialist_agent                     420 ms  ← tool_call_span
    │   └── a2a_call orchestrator → product-discovery          410 ms  ← a2a_call_span
    │       └── POST /message:send                             408 ms  ← httpx auto-instrumentation
    │           └── [product-discovery service]
    │               └── invoke_agent product-discovery         395 ms
    │                   ├── chat openai.gpt-4.1                180 ms
    │                   ├── execute_tool semantic_search        55 ms
    │                   │   └── SELECT ... <=> embedding LIMIT 20  48 ms  ← pgvector query
    │                   └── chat openai.gpt-4.1                155 ms
    └── chat openai.gpt-4.1                                    145 ms  ← orchestrator synthesis
```

Three things worth calling out:
- Two `chat` spans on the orchestrator, two on the specialist. The orchestrator talks to the LLM once to decide the tool call, then again to synthesize the final reply around the specialist’s answer; the specialist likewise runs two `chat` spans (one for the tool call, one for the final text).
- `a2a_call` wraps the `httpx` span. `a2a_call_span` is the semantic layer this repo adds; the `POST /message:send` span below it is pure transport from the auto-instrumented `httpx` client. Both carry the full request/response latency; you can drill from “which agent called which” down to “which socket timed out.”
- The pgvector query shows up as `SELECT ... <=> embedding`. That’s the `asyncpg` auto-instrumentation — zero code in the agent. If a semantic search regresses, the DB latency is right there next to the LLM latency.
This is what “GenAI semantic attributes” (Ch07) actually gives you in practice — one consistent vocabulary across every agent, every tool call, every provider, in both Python and .NET stacks.
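For intuition on that `<=>` span: it is pgvector's cosine-distance operator, ordering rows by distance to the query embedding. The query text below is only the illustrative shape of such a statement (table and column names are assumptions, not the repo's schema), and the pure-Python helper mirrors the operator's math.

```python
# Illustrative shape of a pgvector k-NN query like the one in the trace, plus
# a pure-Python mirror of the "<=>" cosine-distance math. Table and column
# names are assumptions, not the repo's actual schema.
import math

SEMANTIC_SEARCH_SQL = """
SELECT id, name, price
FROM products
ORDER BY embedding <=> $1   -- cosine distance to the query embedding
LIMIT 20
"""

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.hypot(*a) * math.hypot(*b))
```

Identical vectors score 0, orthogonal ones score 1 — which is why `ORDER BY ... LIMIT 20` returns the twenty nearest products.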
## Versions this post was verified against

- `agent-framework` (Python) — v1.0
- `Microsoft.Agents.AI` (.NET) — v1.0
- Python — 3.12
- .NET — 9.0
- Next.js — 16.x (App Router)
- Postgres — 16 with `pgvector`
- Redis — 7
- Aspire Dashboard — the compose image pinned in `docker-compose.yml`
Everything in the file:line map is from commit main@2026-04-21. The sequence diagram and the span-tree snippet above were captured from a live trace run against gpt-4.1 on Azure OpenAI with a warm Postgres connection pool.
## Gotchas

- MAF v1.0 empty `__init__.py`. The Python `agent-framework` wheel ships an empty package init. `agents/python/patch_maf.py` re-exports the public API; it runs at Dockerfile build time. Upstream will fix this eventually, at which point the patch becomes a no-op.
- File paths drift. Every `:line` citation was verified on 2026-04-21. If you’re reading this months later, trust the file, verify the line. `grep -n` is your friend.
- A2A transport vs. orchestration. Ch14 Handoff is a routing topology; A2A is the transport underneath. When `MAF_HANDOFF_MODE=handoff`, MAF’s `HandoffBuilder` decides who speaks next; when it’s `tool` (the default), the orchestrator LLM picks a specialist via `call_specialist_agent`. Both modes use A2A on the wire.
- DevUI is not the Docker stack. DevUI (Ch20b) launches a single agent in a standalone process for interactive debugging. It is not a substitute for the full compose up — tools that depend on sibling specialists will error out against mocks.
- Aspire shows passive telemetry. If a trace is missing, check `OTEL_ENABLED=true` in `.env`, confirm the Aspire Dashboard container is running (`docker compose ps aspire-dashboard`), and look for a `setup_telemetry` log line at agent start.
## Where to go next
Microsoft Agent Framework docs (every chapter has already pointed at these — this is the consolidated index):
- Agent Framework — Overview
- Journey — From LLMs to Agents
- Journey — From Agents to Workflows
- Agents — Providers (OpenAI, Azure OpenAI, Foundry)
- Agents — Sessions
- Agents — Context Providers
- Agents — Middleware
- Agents — Observability
- Agents — MCP Tools
- Workflows — Overview
- Workflows — Orchestration Patterns
- Workflows — Human-in-the-Loop
- Workflows — Checkpointing
- Workflows — Declarative
- Workflows — Visualization
- DevUI
This series
- Top to bottom: Ch00 → Ch01 → Ch02 → … → Ch20 → Ch20b → Ch20c → this post.
- Shared palette + vocabulary: Mermaid style guide · Jargon glossary.
Where the capstone goes from here
The app in this tour is the baseline. Planned follow-ups that will slot in without reshaping the architecture:
- Ch15 Group Chat wired into a “product-launch review” flow that runs three reviewer agents over a draft listing, with the Round-Robin manager gating turn-taking.
- Ch16 Magentic wired into a “shopping concierge” that plans a multi-step gift-purchase with a facts ledger, delegates to specialists, and iterates until the user confirms.
- A multi-tenant pass across auth, data isolation, and per-tenant prompt overrides. Every tool already reads `current_user_email` — the extension is tenant scoping on top.
- A cost + rate-limit middleware layer that tracks token spend per conversation (Ch06 function middleware is the right extension point). The `usage_db` schema already exists; the wiring doesn’t.
- A deployment-focused companion series covering AKS, Managed Identity, Key Vault, and private endpoints — out of scope for a framework tutorial, squarely in scope for a production one.
Open issues and discussion welcome at github.com/nitin27may/e-commerce-agents/issues.
## A final reading order
If you want to internalise MAF past “I read the tutorials,” the order that matched what actually paid off:
- Run the full stack locally and send one prompt. Watch the Aspire trace. That’s the ground truth.
- Re-read `agents/python/orchestrator/agent.py` end-to-end. 96 lines. Every field on `Agent(...)` is a chapter.
- Re-read one specialist’s tools file (`agents/python/product_discovery/tools.py`). Every `@tool` is a chapter.
- Re-read one workflow (`agents/python/workflows/return_replace.py`). Every executor class is a chapter.
- Open the .NET twin for the same three files side by side. The framework is the same; the idioms swap.
Everything else in the series is elaboration on those three files.
## Porting between languages
The series ships every chapter twice — once in Python, once in .NET. The two stacks are 95% interchangeable, but the last 5% causes most porting bugs. Ch22 — Python ↔ .NET asymmetries collects the six surface-API differences in one reference page: streaming type, middleware shape, function-middleware methods, WorkflowContext generics, RequestPort identity, and workflow output routing. Skim before you port a chapter’s code from one language to the other.
## See also — companion essays not duplicated in this series
This series is the framework reference. The original Python-only e-commerce series still owns four topics that the framework series intentionally doesn’t cover:
- Prompt engineering: grounding, roles, and YAML configuration — the systematic prompt architecture (identity / capabilities / constraints / tool guidance / output format) that ECommerce Agents uses for every agent. Read this before writing your first non-trivial system prompt.
- Production auth, RBAC, and deployment — JWT issuance, refresh tokens, role-aware system prompts, the multi-target Dockerfile and `dev.sh` bootstrap. The HTTP layer that sits in front of every agent in the capstone.
- Agent memory: long-term + pgvector — the `memories` table design with `category` / `importance` / `embedding` / `expires_at`, the `store_memory` and `recall_memories` tools, and the pgvector foundation. Where Ch04 (Sessions) and Ch05 (Context Providers) end, this picks up.
- Evaluation framework — golden datasets, three-axis scoring (groundedness / correctness / completeness), CLI runner, CI gate. The framework reference doesn’t ship a chapter on evals; this is the canonical one.
The MAF v1 series owns how to use the framework. These four essays own patterns we built on top of MAF for the capstone. They predate v1 and use the older Python-only stack, but the patterns translate cleanly.
## Thank you
Twenty-three chapters, two languages, one working app. If the series was useful, the best thing you can do is fork the repo, change something, and see what breaks in the Aspire trace. That’s where the real learning lives.
— Nitin

