
MAF v1 — E-commerce repo guided tour (Python + .NET)

Nitin Kumar Singh
Author
I build enterprise AI solutions and cloud-native systems. I write about architecture patterns, AI agents, Azure, and modern development practices — with full source code.
MAF v1: Python and .NET - This article is part of a series.
Part 21: This Article

Series note — Final chapter of MAF v1: Python and .NET. No new code. This is a guided tour of the capstone application with pointers at the file-and-line level for every concept you learned.

Repo — Source: tutorials/21-putting-it-all-together. The capstone itself lives at the repo root — agents/, web/, docker/, scripts/.

Why this chapter
#

The previous 22 chapters drilled one concept each in isolation. This one reassembles them into the multi-agent e-commerce platform that ships in the repo — the same app that appears in the original Python-only series, now viewed through the MAF v1 lens and with a .NET twin alongside Python.

If you’ve read every prior chapter, you can now open any file under agents/python/ or agents/dotnet/ without a Rosetta stone. This tour tells you where to start and what to match against.

Prerequisites
#

  • Chapters 00–20 (plus 20b — DevUI) read end-to-end, or at least the ones whose concepts you want to locate in the live code.
  • The stack running locally. If you haven’t brought it up before, circle back to Chapter 00 — Setup and run ./scripts/verify-setup.sh followed by ./scripts/dev.sh.

The stack in one picture
#

The diagram below is the full runtime. Every box maps to a directory in the repo; every arrow is something a chapter walked you through.

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'primaryColor': '#2563eb', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#1e40af', 'lineColor': '#64748b', 'secondaryColor': '#f59e0b', 'tertiaryColor': '#10b981', 'background': 'transparent'}}}%%
flowchart LR
    classDef core fill:#2563eb,stroke:#1e40af,color:#ffffff
    classDef external fill:#f59e0b,stroke:#b45309,color:#000000
    classDef success fill:#10b981,stroke:#047857,color:#ffffff
    classDef infra fill:#64748b,stroke:#334155,color:#ffffff

    browser([Browser])

    subgraph frontend["Frontend (web/)"]
        next["Next.js 16<br/>chat/page.tsx + api.ts"]
    end

    subgraph backend["Backend agents (agents/python or agents/dotnet)"]
        orch["Orchestrator<br/>:8080"]
        pd["product-discovery<br/>:8081"]
        om["order-management<br/>:8082"]
        pp["pricing-promotions<br/>:8083"]
        rs["review-sentiment<br/>:8084"]
        inv["inventory-fulfillment<br/>:8085"]
        mcp["Inventory MCP<br/>:9000"]
    end

    subgraph data["Shared infra"]
        pg[(Postgres 16<br/>+ pgvector)]
        redis[(Redis 7)]
    end

    subgraph obs["Observability + test harness"]
        aspire["Aspire Dashboard<br/>:18888"]
        devui["DevUI<br/>:5173"]
    end

    llm(["LLM provider<br/>OpenAI / Azure OpenAI"])

    browser -->|JWT Bearer| next
    next -->|"POST /api/chat[/stream]"| orch
    orch -->|"A2A HTTP<br/>/message:send"| pd
    orch --> om
    orch --> pp
    orch --> rs
    orch --> inv
    inv -.-> mcp
    orch --> pg
    orch --> redis
    pd --> pg
    om --> pg
    orch --> llm
    pd --> llm
    om --> llm
    orch -. OTel .-> aspire
    pd -. OTel .-> aspire
    om -. OTel .-> aspire
    devui -. drives .-> orch

    class next success
    class orch,pd,om,pp,rs,inv core
    class mcp,llm external
    class pg,redis,aspire,devui infra
    class browser success
```

One orchestrator, five specialists, one MCP server. Postgres and Redis sit behind every agent; the LLM provider is a single external dependency swapped via LLM_PROVIDER. A2A (agent-to-agent) is plain HTTP POST to /message:send — no magic transport. Aspire Dashboard at :18888 is the passive telemetry view; DevUI at :5173 is the interactive test harness from Ch20b.

Each box on the diagram is one process, one Dockerfile target, one port. The specialists all share a single multi-target Dockerfile (agents/python/Dockerfile, ARG AGENT_NAME) — changing the spec of one rebuilds only its layer. The .NET twin is one project per specialist (agents/dotnet/src/ECommerceAgents.*), composed via agents/dotnet/ECommerceAgents.sln. Pick whichever stack you want; the frontend and the database schema don’t care.
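Because A2A is plain HTTP, the whole wire contract fits in a few lines. A minimal sketch of the request an orchestrator would assemble — the header and field names come from the repo's A2A description above, but `build_a2a_request` itself is a hypothetical helper, not repo code:

```python
import json

def build_a2a_request(specialist_url: str, text: str, *, secret: str,
                      user_email: str, user_role: str, session_id: str):
    """Assemble the URL, headers, and body for one A2A /message:send call."""
    url = f"{specialist_url}/message:send"
    headers = {
        "content-type": "application/json",
        "x-agent-secret": secret,      # shared-secret auth between agents
        "x-user-email": user_email,    # forwarded user identity
        "x-user-role": user_role,
        "x-session-id": session_id,
    }
    body = json.dumps({"message": text})
    return url, headers, body

url, headers, body = build_a2a_request(
    "http://product-discovery:8081", "running shoes under $100",
    secret="dev-secret", user_email="a@b.com", user_role="customer",
    session_id="s-1",
)
# The specialist answers with a JSON object of the shape {"response": "..."}.
```

Any HTTP client (httpx in Python, HttpClient in .NET) can carry this — there is genuinely no custom transport underneath.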

Jargon recap (inline, since this is the last chapter)
#

  • A2A — Agent-to-Agent HTTP protocol. Orchestrator POSTs to http://<specialist>:<port>/message:send; the specialist replies with { "response": "..." }. Authenticated with an x-agent-secret header and user identity forwarded via x-user-email / x-user-role / x-session-id.
  • ContextVar — Python’s contextvars.ContextVar, async-safe per-request state. The capstone uses current_user_email, current_user_role, current_session_id so tools don’t need to thread identity through every call signature.
  • Aspire Dashboard — .NET’s dev-time telemetry UI. Traces, metrics, structured logs from every service in one place. Runs at http://localhost:18888 in this repo.
  • DevUI — MAF-native browser dashboard for driving a single agent or workflow interactively. Chat on the left, tool calls + span tree on the right. Runs at http://localhost:5173. Covered in Ch20b.

Full definitions: jargon glossary.
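The ContextVar pattern is worth seeing concretely — a minimal stdlib sketch of how per-request identity stays async-safe without being threaded through every signature (the variable name mirrors the capstone's; the tool and handler are hypothetical):

```python
import asyncio
from contextvars import ContextVar

# Ambient per-request state, as in the capstone's shared module.
current_user_email: ContextVar[str] = ContextVar("current_user_email", default="")

async def some_tool() -> str:
    # Reads identity from ambient context -- no email parameter needed.
    return f"acting for {current_user_email.get()}"

async def handle_request(email: str) -> str:
    current_user_email.set(email)  # set once, at the middleware layer
    return await some_tool()

async def main():
    # Two concurrent requests; each task sees only its own value,
    # because every task runs in a copy of the current context.
    return await asyncio.gather(handle_request("a@x.com"), handle_request("b@x.com"))

results = asyncio.run(main())
```

This is why the capstone's tools never take a `user_email` argument: asyncio copies the context per task, so concurrent requests cannot bleed into each other.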

Request lifecycle through the live stack
#

Every chat request takes the same path. The sequence diagram below is what you see replayed in the Aspire trace view when you send one prompt.

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'primaryColor': '#2563eb', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#1e40af', 'lineColor': '#64748b', 'secondaryColor': '#f59e0b', 'tertiaryColor': '#10b981', 'background': 'transparent'}}}%%
sequenceDiagram
    autonumber
    participant U as User<br/>(browser)
    participant W as Next.js<br/>chatStream()
    participant O as Orchestrator<br/>:8080
    participant L as LLM<br/>(OpenAI / Azure)
    participant S as Specialist<br/>(e.g. product-discovery :8081)
    participant D as Postgres<br/>+ pgvector
    participant A as Aspire<br/>:18888

    U->>W: "show me running shoes under $100"
    W->>O: POST /api/chat/stream + JWT Bearer
    O->>O: AgentAuthMiddleware → ContextVar
    O->>D: load conversation history (last 50)
    O->>L: agent.run(messages, stream=True)
    L-->>O: tool call → call_specialist_agent("product-discovery", ...)
    O->>S: A2A POST /message:send + x-agent-secret
    S->>S: ECommerceContextProvider.before_run
    S->>L: agent.run(messages) with @tool schemas
    L-->>S: tool call → semantic_search(...)
    S->>D: pgvector cosine similarity query
    D-->>S: matching products
    S->>L: tool result → continue
    L-->>S: final text
    S-->>O: { "response": "..." }
    O->>L: synthesize with specialist result in context
    L-->>O: SSE chunks
    O-->>W: text/event-stream
    W-->>U: rendered markdown + product cards
    O-)A: OTel spans (invoke_agent, chat, execute_tool)
    S-)A: OTel spans
```

Every arrow is covered by a chapter. Step 1 is Ch03. Step 3 is Ch06. Step 4 loads history — Ch04. Step 5 runs with context providers — Ch05. Step 6 is the tool call — Ch02. Steps 7–8 are A2A handoff — Ch14. The dashed spans to Aspire are Ch07.

Concept → file:line map
#

Each row lists a chapter you’ve completed and where that pattern lives in the capstone today. File:line citations were verified on 2026-04-21 — re-check with grep -n before editing.

| Chapter | Where it lives today | Twin / follow-up |
| --- | --- | --- |
| Ch00 Setup | `scripts/verify-setup.sh` — single-shot toolchain check; `scripts/dev.sh` — compose up | `docker-compose.dotnet.yml` brings the .NET twin up |
| Ch01 First Agent | `agents/python/orchestrator/agent.py:86` — `Agent(client, instructions=..., tools=..., context_providers=...)` | `agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs:10` — `SpecialistAgentFactory.Create(...)` |
| Ch02 Tools | `agents/python/product_discovery/tools.py:15` — `@tool` with `Annotated[...]` hints (7 tools total) | `agents/dotnet/src/ECommerceAgents.ProductDiscovery/` — `[Description]` attributes + `AIFunctionFactory.Create` |
| Ch03 Streaming + Multi-turn | `agents/python/orchestrator/routes.py:400` — SSE `/api/chat/stream`; `shared/agent_host.py:61` — `_run_agent_native_stream` drives MAF-native streaming | `agents/dotnet/src/ECommerceAgents.Orchestrator/Routes/ChatStreamEndpoint` |
| Ch04 Sessions + Memory | `agents/python/shared/session.py:131` — `PostgresSessionHistoryProvider` backing `AgentSession`; `shared/session.py:201` — `get_history_provider()` factory | `agents/dotnet/src/ECommerceAgents.Shared/Sessions/PostgresSessionHistoryProvider.cs` |
| Ch05 Context Providers | `agents/python/shared/context_providers.py:35` — `UserProfileProvider`; `:89` — `RecentOrdersProvider`; `:151` — `AgentMemoriesProvider`; `:213` — composite `ECommerceContextProvider` | `agents/dotnet/src/ECommerceAgents.Shared/ContextProviders/` |
| Ch06 Middleware | `agents/python/shared/middleware.py:40` — `AgentRunLogger`; `:76` — `ToolAuditMiddleware`; `:127` — `PiiRedactionMiddleware`; `:164` — `default_middleware_stack()`. HTTP-layer auth sits separately at `shared/auth.py:27` | `agents/dotnet/src/ECommerceAgents.Shared/Middleware/AgentRunLogger.cs` + `ToolAuditMiddleware.cs` + `PiiRedactor.cs` |
| Ch07 Observability | `agents/python/shared/telemetry.py:30` — `setup_telemetry()`; `:184` — `agent_run_span`; `:221` — `a2a_call_span`; `:250` — `tool_call_span` | `agents/dotnet/src/ECommerceAgents.Shared/Telemetry/` |
| Ch08 MCP Tools | `agents/python/mcp/inventory_server.py:29` — standalone MCP server (FastAPI) with stock / warehouse / shipping tools | `agents/dotnet/src/ECommerceAgents.Mcp/` |
| Ch09 Executors + Edges | `agents/python/workflows/pre_purchase.py:49` — `_FanOutExecutor`; `:60` — `_ReviewsExecutor`; `:79` — `_StockExecutor`; `:98` — `_PriceHistoryExecutor`; `:117` — `_MergeAndShipExecutor`; `:148` — `_SynthesisExecutor` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/PrePurchaseWorkflow.cs` |
| Ch10 Events + Builder | `agents/python/workflows/pre_purchase.py:218` — `PrePurchaseWorkflow` builds via `WorkflowBuilder`; events flow through `WorkflowEvent` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/PrePurchaseWorkflow.cs` (`WorkflowBuilder<T>`) |
| Ch11 Agents in Workflows | `agents/python/shared/remote_agent.py` — `RemoteSpecialistChatClient` wraps A2A as an `Agent`, used by both the handoff mesh and the workflows | `agents/dotnet/src/ECommerceAgents.Shared/Agents/` |
| Ch12 Sequential | `agents/python/workflows/return_replace.py:73` — `_CheckEligibilityExecutor`; `:101` — `_InitiateReturnExecutor`; `:135` — `_SearchReplacementsExecutor`; `:206` — `_ApplyDiscountExecutor`; `:230` — `_FinalizeExecutor`; wired via `SequentialBuilder` in `ReturnAndReplaceWorkflow:243` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/ReturnAndReplaceWorkflow.cs` |
| Ch13 Concurrent | `agents/python/workflows/pre_purchase.py:49` — fan-out to reviews / stock+shipping / price; `_MergeAndShipExecutor:117` aggregates | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/PrePurchaseWorkflow.cs` (`ConcurrentBuilder`) |
| Ch14 Handoff | `agents/python/orchestrator/handoff.py:46` — `build_orchestrator_handoff_workflow` wires the mesh; `agent.py:25` keeps the tool-router fallback (`call_specialist_agent`) | `agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorTools.cs` (tool-router parity) |
| Ch15 Group Chat | Not yet wired into the capstone — follow-up for a “product-launch review” flow | Parity pending |
| Ch16 Magentic | Not yet wired into the capstone — follow-up for a “shopping concierge” flow | Parity pending |
| Ch17 HITL | `agents/python/workflows/return_replace.py:157` — `_HitlGateExecutor` gates approval above `RETURN_HITL_THRESHOLD` via `request_info` + `@response_handler` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/ReturnAndReplaceWorkflow.cs` |
| Ch18 Checkpoints | `agents/python/shared/checkpoint_storage.py:34` — `PostgresCheckpointStorage` backed by the `workflow_checkpoints` table | `agents/dotnet/src/ECommerceAgents.Shared/Checkpoints/` |
| Ch19 Declarative | `agents/python/shared/workflow_loader.py:158` — `load_workflow(path)`; `:118` — `register_op`; `:230` — `load_workflows_directory`. Specs under `agents/python/config/workflows/*.yaml` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/DeclarativeWorkflow.cs` |
| Ch20 Visualization | `scripts/visualize_workflows.py` — iterates every registered workflow and writes `docs/workflows/*.mmd` + `*.dot` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/MermaidExporter.cs` |
| Ch20b DevUI | DevUI isn’t wired into the Docker stack — it’s launched on demand via `uv run devui agents/python/orchestrator/agent.py` (or the .NET equivalent) while the backend is down; the point is to drive a single agent without the full cluster | Same surface — `dotnet devui` |
| Ch20c Production hardening | HTTP-layer hardening lives in `agents/python/shared/auth/` — `keyring.py` holds the kid-indexed signing keys; `password_reset.py` and `refresh_tokens.py` own the token tables. Phase 8 of the refactor plan tracks the wiring | `agents/dotnet/src/ECommerceAgents.Shared/Auth/` (parity via `IOptionsMonitor<JwtKeyringOptions>` and `CryptographicOperations.FixedTimeEquals`) |
| Ch21 Putting it all together | You are here. Every row above cross-references this post for “where does that concept actually live?” | — |
| Ch22 Asymmetries | Reference appendix — six surface-API differences between Python and .NET MAF stacks | — |
| Ch23 Evaluation framework | `agents/python/evals/` — golden datasets, three-axis scoring, CLI runner with smoke / full tiers, CI gate. Phase 9 of the refactor plan migrates the original Part 9 framework | `agents/dotnet/src/ECommerceAgents.Evals/` |
| Ch24 Prompt engineering | `agents/python/shared/prompt_loader.py` — YAML composition for the base prompt; `shared/context_providers.py` — `RoleContextProvider`, per-request role injection. Phase 10 of the refactor plan drops the per-role agent factory in favor of a single instance per agent | `agents/dotnet/src/ECommerceAgents.Shared/Prompts/` |
| Ch25 Deployment | `agents/python/Dockerfile` (multi-target, ARG-driven), `docker-compose.yml` (YAML anchors + health gates), `scripts/dev.sh` (poll-don’t-sleep). Phase 11 of the refactor plan tightens the `dev.sh` wait loop and splits the .NET Dockerfile into build / runtime stages | `agents/dotnet/Dockerfile` + `docker-compose.dotnet.yml` overlay |

28 rows. Every row links to the chapter and cites an exact file path; every “Twin” column shows what the .NET side of the port looks like so you can skim both stacks in one pass.

Try it yourself
#

Ten commands, from cold clone to interactive trace:

```shell
# 1. clone + environment check
git clone https://github.com/nitin27may/e-commerce-agents.git
cd e-commerce-agents
cp .env.example .env          # paste OPENAI_API_KEY or the Azure trio
./scripts/verify-setup.sh     # Ch00 smoke test

# 2. bring up the full stack
./scripts/dev.sh              # orchestrator + 5 specialists + Postgres + Redis + Aspire

# 3. open the UIs
open http://localhost:3000    # Next.js frontend — log in, chat
open http://localhost:18888   # Aspire Dashboard — Traces tab, pick a request
open http://localhost:9000/.well-known/mcp-manifest.json  # Inventory MCP server manifest

# 4. (optional) drive a single agent via DevUI
docker compose stop orchestrator
cd agents/python && uv run devui orchestrator/agent.py   # Ch20b flow
```

Then line up what you see with what you read:

  • Chat a prompt like “show me running shoes under $100” in the frontend. Watch the Aspire trace view — you’ll see invoke_agent on the orchestrator, a chat span to OpenAI, a tool call named call_specialist_agent, an a2a_call span to product-discovery, and nested execute_tool spans for search_products / semantic_search. That entire trace is what the sequence diagram above is narrating.
  • Grep for @tool in agents/python/product_discovery/tools.py — the same Annotated[type, Field(description=...)] shape you first saw in Ch02. All 7 tools in this agent are the pattern repeated.
  • Open agents/python/workflows/return_replace.py — the sequential executors (Ch12), the HITL gate (Ch17), and the discount branch all live in one file under 250 lines. It’s the densest piece of MAF in the repo and rewards a careful read.

Every concept from the series is sitting at a file path you already know how to read.

Python vs .NET — where the parity lives
#

The port reached 100% surface parity on 2026-04-21 (207 tests green, snake_case JSON on the wire). Both stacks ship the same:

| Capability | Python | .NET |
| --- | --- | --- |
| Agent factory | `agents/python/orchestrator/agent.py:86` | `agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs:10` |
| Specialist agents ×5 | `agents/python/{product_discovery,order_management,pricing_promotions,review_sentiment,inventory_fulfillment}/agent.py` | `agents/dotnet/src/ECommerceAgents.*` (five mirror projects) |
| MCP server | `agents/python/mcp/inventory_server.py` | `agents/dotnet/src/ECommerceAgents.Mcp/` |
| Middleware stack | `agents/python/shared/middleware.py` | `agents/dotnet/src/ECommerceAgents.Shared/Middleware/` |
| Session providers | `agents/python/shared/session.py` | `agents/dotnet/src/ECommerceAgents.Shared/Sessions/` |
| Checkpoint storage | `agents/python/shared/checkpoint_storage.py` | `agents/dotnet/src/ECommerceAgents.Shared/Checkpoints/` |
| Declarative workflows | `agents/python/shared/workflow_loader.py` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/DeclarativeWorkflow.cs` |
| Workflows | `agents/python/workflows/{pre_purchase,return_replace}.py` | `agents/dotnet/src/ECommerceAgents.Shared/Workflows/{PrePurchaseWorkflow,ReturnAndReplaceWorkflow}.cs` |

Switch between stacks without changing the frontend — both compose files expose the backend on :8080.

Reading paths by role
#

Not every reader needs to open every file. Pick the entry point that matches what you’re here to do:

“I want to add a new specialist agent”
#

  1. agents/python/product_discovery/agent.py:35 — the simplest template. A factory that returns Agent(client, name, description, instructions, tools, context_providers).
  2. agents/python/product_discovery/tools.py:15@tool decorator + Annotated hints + async function. Copy the shape, swap the body.
  3. agents/python/config/prompts/product_discovery.yaml — the YAML composition system. Add your system prompt here, not inline.
  4. agents/python/product_discovery/main.py — the create_agent_app(...) entry point. 30 lines of FastAPI lifespan.
  5. docker-compose.yml — copy the product-discovery service block, bump the port, change AGENT_NAME.
  6. Add your URL to AGENT_REGISTRY in .env so the orchestrator knows where to route.

Ten minutes from clone to a new specialist answering A2A calls.
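The `@tool` + `Annotated` shape from step 2 works because the decorator can read parameter descriptions straight from the type hints. A stripped-down mimic of that mechanism — not the MAF decorator, just a stdlib illustration with a hypothetical tool:

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints

def tool_schema(fn):
    """Extract a name/params description from Annotated hints, as a @tool-style decorator would."""
    hints = get_type_hints(fn, include_extras=True)
    params = {}
    for name, hint in hints.items():
        if name == "return":
            continue
        if get_origin(hint) is Annotated:
            base, *meta = get_args(hint)   # (base type, description metadata)
            params[name] = {"type": base.__name__, "description": meta[0]}
        else:
            params[name] = {"type": hint.__name__, "description": ""}
    return {"name": fn.__name__, "doc": inspect.getdoc(fn), "params": params}

async def search_products(
    query: Annotated[str, "Free-text product search"],
    max_price: Annotated[float, "Upper price bound in USD"],
) -> str:
    """Search the catalog."""
    return f"searching {query} under {max_price}"

schema = tool_schema(search_products)
```

The real decorator goes further (it builds a full JSON schema for the LLM, and the repo uses `Field(description=...)` metadata rather than bare strings), but the hint-driven extraction is the same idea.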

“I want to understand how a chat becomes a stream”
#

  1. web/src/app/(app)/chat/page.tsx:181ChatPage component. sendMessage() at :297 is where the UI kicks off the stream.
  2. web/src/lib/api.ts:192chatStream(). SSE client, JWT in Authorization, auto-refresh on 401.
  3. agents/python/orchestrator/routes.py:400/api/chat/stream endpoint. Loads history from Postgres, calls _run_agent_native_stream().
  4. agents/python/shared/agent_host.py:61 — MAF-native streaming. One async for over agent.run(..., stream=True), emit each update’s text downstream.
  5. agents/python/shared/telemetry.py:184agent_run_span(...) context manager wrapping every run so you see the span tree in Aspire.
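The streaming hop in steps 3–4 is just an async generator whose chunks get wrapped as SSE frames. A minimal sketch with a stubbed agent standing in for `agent.run(..., stream=True)` (the frame format is standard `text/event-stream`, not copied from the repo):

```python
import asyncio

async def fake_agent_stream(prompt: str):
    # Stand-in for agent.run(messages, stream=True) -- yields text updates.
    for chunk in ["Here ", "are ", "some shoes."]:
        yield chunk

async def sse_events(prompt: str):
    # Wrap each model update in a text/event-stream frame.
    async for update in fake_agent_stream(prompt):
        yield f"data: {update}\n\n"
    yield "data: [DONE]\n\n"

async def main():
    return [frame async for frame in sse_events("running shoes")]

frames = asyncio.run(main())
```

In the real endpoint, FastAPI's `StreamingResponse` consumes a generator like `sse_events` and the browser-side `chatStream()` parses the `data:` frames back out.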

“I want to understand a workflow end-to-end”
#

agents/python/workflows/return_replace.py is the densest file in the repo — 11 KB and it covers Ch09 through Ch18:

  • Executors as nodes (Ch09) — classes at :73, :101, :135, :157, :206, :230.
  • Sequential topology (Ch12) — built inside ReturnAndReplaceWorkflow:243.
  • HITL gate (Ch17) — _HitlGateExecutor:157 emits request_info, the response handler resumes on human reply.
  • Checkpointed state (Ch18) — WorkflowState:36 is the dataclass that round-trips through PostgresCheckpointStorage.

Read it alongside agents/python/workflows/pre_purchase.py for the concurrent counterpart (Ch13).

“I want to change the observability story”
#

  • agents/python/shared/telemetry.py:30setup_telemetry(service_name) configures TracerProvider + MeterProvider + LoggerProvider with OTLP exporters pointed at Aspire.
  • agents/python/shared/telemetry.py:184agent_run_span · :221 a2a_call_span · :250 tool_call_span. These are the three semantic spans that match MAF’s built-in GenAI conventions.
  • agents/python/shared/telemetry.py:356 _instrument_openai() · :380 _instrument_httpx() · :388 _instrument_asyncpg() — auto-instrumentation hooks. Zero code change in agents to get LLM, HTTP, and DB spans.

Swap the OTLP endpoint away from Aspire in .env (OTEL_EXPORTER_OTLP_ENDPOINT) to point at Azure Monitor, Grafana Tempo, or any OTel-compatible backend.
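The three semantic spans are thin wrappers over a tracer. A dependency-free mimic of the `agent_run_span`-style context manager — recording to a list instead of an OTLP exporter, with illustrative attribute names:

```python
import time
from contextlib import contextmanager

SPANS: list[dict] = []  # stand-in for an OTLP exporter

@contextmanager
def agent_run_span(agent_name: str, **attributes):
    """Open a span, let the caller enrich it, record duration on exit."""
    span = {"name": f"invoke_agent {agent_name}", "attrs": attributes}
    start = time.perf_counter()
    try:
        yield span
    finally:
        span["duration_ms"] = (time.perf_counter() - start) * 1000
        SPANS.append(span)

with agent_run_span("orchestrator", session_id="s-1") as span:
    span["attrs"]["gen_ai.request.model"] = "gpt-4.1"  # enrich mid-run
```

The real version opens an OpenTelemetry span and sets GenAI semantic-convention attributes, but the shape — context manager around the run, attributes enriched as results arrive — is exactly this.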

What Phase 7 actually changed
#

When this series started, the capstone was a working MAF app with hand-rolled pieces. Phase 7 (the 13-step refactor under plans/refactor/) replaced them with MAF-native primitives. Five are visible in the file:line map above; all have landed as of 2026-04-21:

| Before Phase 7 | After | Location |
| --- | --- | --- |
| Custom OpenAI tool-calling loop in `agent_host.py` (250 lines of manual JSON parsing) | MAF-native `agent.run()` / `agent.run(..., stream=True)` | `agents/python/shared/agent_host.py:50` `_run_agent_native`; `:61` `_run_agent_native_stream` |
| Manual history forwarding truncated to last 10 / 500 chars per message | `AgentSession` + `PostgresSessionHistoryProvider`, full history per conversation | `agents/python/shared/session.py:131` |
| Single monolithic `ECommerceContextProvider` (one class, three concerns) | Three composable providers + a backward-compat composite | `agents/python/shared/context_providers.py:35`, `:89`, `:151`, `:213` |
| HTTP-only middleware (auth); nothing inside the agent run | Three MAF middlewares — agent-run logger, tool auditor, PII redactor | `agents/python/shared/middleware.py:40`, `:76`, `:127` |
| `call_specialist_agent` tool-router as the only topology | Tool router still the default; `HandoffBuilder` mesh behind `MAF_HANDOFF_MODE=handoff` | `agents/python/orchestrator/handoff.py:46` |

The .NET port was built against the post-refactor shape, so every agents/dotnet/... path lines up with the right column above.

What the Aspire trace looks like for one prompt
#

When the frontend posts one prompt to /api/chat/stream, the Aspire trace view shows a span tree. Reading top to bottom maps exactly onto the sequence diagram earlier in this post:

```
POST /api/chat/stream                                [orchestrator]  850 ms
└── invoke_agent orchestrator                                        830 ms   ← agent_run_span
    ├── SELECT ... FROM messages WHERE conversation_id = $1          12 ms    ← asyncpg auto-instrumentation
    ├── chat openai.gpt-4.1                                         380 ms   ← MAF GenAI semantic span
    │   ├── gen_ai.request.model=gpt-4.1
    │   ├── gen_ai.usage.input_tokens=2140
    │   ├── gen_ai.usage.output_tokens=34
    │   └── gen_ai.response.finish_reasons=["tool_calls"]
    ├── execute_tool call_specialist_agent                          420 ms   ← tool_call_span
    │   └── a2a_call orchestrator → product-discovery               410 ms   ← a2a_call_span
    │       └── POST /message:send                                  408 ms   ← httpx auto-instrumentation
    │           └── [product-discovery service]
    │               └── invoke_agent product-discovery              395 ms
    │                   ├── chat openai.gpt-4.1                     180 ms
    │                   ├── execute_tool semantic_search             55 ms
    │                   │   └── SELECT ... <=> embedding LIMIT 20    48 ms   ← pgvector query
    │                   └── chat openai.gpt-4.1                     155 ms
    └── chat openai.gpt-4.1                                         145 ms   ← orchestrator synthesis
```
Three things worth calling out:

  1. Two chat spans on the orchestrator, one on the specialist. The orchestrator talks to the LLM once to decide the tool call, then again to synthesize the final reply around the specialist’s answer. The specialist itself ran two chat spans too (one for the tool call, one for the final text).
  2. a2a_call wraps the httpx span. a2a_call_span is the semantic layer this repo adds; the POST /message:send span below it is pure transport from the auto-instrumented httpx client. Both have the full request/response latency; you can drill from “which agent called which” down to “which socket timed out.”
  3. The pgvector query shows up as a SELECT ... <=> embedding. That’s the asyncpg auto-instrumentation — zero code in the agent. If a semantic search regresses, the DB latency is right there next to the LLM latency.

This is what “GenAI semantic attributes” (Ch07) actually gives you in practice — one consistent vocabulary across every agent, every tool call, every provider, in both Python and .NET stacks.
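For reference, the shape of a pgvector cosine-distance query like the one in that trace. The table and column names here are assumptions for illustration, not read from the repo's schema; `<=>` is pgvector's cosine-distance operator (smaller distance = more similar):

```python
# Hypothetical semantic-search SQL in asyncpg's positional-parameter style.
SEMANTIC_SEARCH_SQL = """
SELECT id, name, price,
       embedding <=> $1 AS distance
FROM products
WHERE price <= $2
ORDER BY embedding <=> $1
LIMIT 20
"""

def bind_args(query_embedding: list[float], max_price: float) -> tuple:
    # $1 = the query embedding, $2 = the price filter
    return (query_embedding, max_price)

args = bind_args([0.1, 0.2, 0.3], 100.0)
```

Ordering by the same `embedding <=> $1` expression used in the select list lets Postgres serve both from one index scan when an HNSW or IVFFlat index exists on `embedding`.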

Versions this post was verified against
#

  • agent-framework (Python) — v1.0
  • Microsoft.Agents.AI (.NET) — v1.0
  • Python — 3.12
  • .NET — 9.0
  • Next.js — 16.x (App Router)
  • Postgres — 16 with pgvector
  • Redis — 7
  • Aspire Dashboard — the compose image pinned in docker-compose.yml

Everything in the file:line map is from commit main@2026-04-21. The sequence diagram and the span-tree snippet above were captured from a live trace run against gpt-4.1 on Azure OpenAI with a warm Postgres connection pool.

Gotchas
#

  • MAF v1.0 empty __init__.py. The Python agent-framework wheel ships an empty package init. agents/python/patch_maf.py re-exports the public API; it runs at Dockerfile build time. Upstream will fix this eventually, at which point the patch becomes a no-op.
  • File paths drift. Every :line citation was verified on 2026-04-21. If you’re reading this months later, trust the file, verify the line. grep -n is your friend.
  • A2A transport vs. orchestration. Ch14 Handoff is a routing topology; A2A is the transport underneath. When MAF_HANDOFF_MODE=handoff, MAF’s HandoffBuilder decides who speaks next; when it’s tool (default), the orchestrator LLM picks a specialist via call_specialist_agent. Both modes use A2A on the wire.
  • DevUI is not the Docker stack. DevUI (Ch20b) launches a single agent in a standalone process for interactive debugging. It is not a substitute for the full compose up — tools that depend on sibling specialists will error out against mocks.
  • Aspire shows passive telemetry. If a trace is missing, check OTEL_ENABLED=true in .env, confirm the Aspire Dashboard container is running (docker compose ps aspire-dashboard), and look for a setup_telemetry log line at agent start.

Where to go next
#

Microsoft Agent Framework docs — every chapter has already pointed at these, and they remain the consolidated reference for anything this tour glossed over.

Where the capstone goes from here

The app in this tour is the baseline. Planned follow-ups that will slot in without reshaping the architecture:

  • Ch15 Group Chat wired into a “product-launch review” flow that runs three reviewer agents over a draft listing, with the Round-Robin manager gating turn-taking.
  • Ch16 Magentic wired into a “shopping concierge” that plans a multi-step gift-purchase with a facts ledger, delegates to specialists, and iterates until the user confirms.
  • A multi-tenant pass across auth, data isolation, and per-tenant prompt overrides. Every tool already reads current_user_email — the extension is tenant scoping on top.
  • A cost + rate-limit middleware layer that tracks token spend per conversation (Ch06 function middleware is the right extension point). The usage_db schema already exists; the wiring doesn’t.
  • A deployment-focused companion series covering AKS, Managed Identity, Key Vault, and private endpoints — out of scope for a framework tutorial, squarely in scope for a production one.
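The cost-middleware item is a natural fit for Ch06-style function middleware. A sketch of the accounting core only — budget numbers and names are illustrative, and the real extension point (MAF's function middleware hooking this per tool call) is not shown:

```python
from collections import defaultdict

class TokenSpendTracker:
    """Accumulate token spend per conversation; a middleware would call record() after each LLM hop."""

    def __init__(self, budget_tokens: int):
        self.budget = budget_tokens
        self.spend: dict[str, int] = defaultdict(int)

    def record(self, conversation_id: str, input_tokens: int, output_tokens: int) -> None:
        self.spend[conversation_id] += input_tokens + output_tokens

    def over_budget(self, conversation_id: str) -> bool:
        # A middleware could short-circuit the run when this trips.
        return self.spend[conversation_id] > self.budget

tracker = TokenSpendTracker(budget_tokens=10_000)
tracker.record("conv-1", input_tokens=2140, output_tokens=34)
```

The token counts are already on the wire as `gen_ai.usage.*` span attributes, so the middleware and the trace view would agree by construction.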

Open issues and discussion welcome at github.com/nitin27may/e-commerce-agents/issues.

A final reading order
#

If you want to internalise MAF past “I read the tutorials,” the order that matched what actually paid off:

  1. Run the full stack locally and send one prompt. Watch the Aspire trace. That’s the ground truth.
  2. Re-read agents/python/orchestrator/agent.py end-to-end. 96 lines. Every field on Agent(...) is a chapter.
  3. Re-read one specialist’s tools file (agents/python/product_discovery/tools.py). Every @tool is a chapter.
  4. Re-read one workflow (agents/python/workflows/return_replace.py). Every executor class is a chapter.
  5. Open the .NET twin for the same three files side-by-side. The framework is the same; the idioms swap.

Everything else in the series is elaboration on those three files.

Porting between languages
#

The series ships every chapter twice — once in Python, once in .NET. The two stacks are 95% interchangeable, but the last 5% causes most porting bugs. Ch22 — Python ↔ .NET asymmetries collects the six surface-API differences in one reference page: streaming type, middleware shape, function-middleware methods, WorkflowContext generics, RequestPort identity, and workflow output routing. Skim before you port a chapter’s code from one language to the other.

See also — companion essays not duplicated in this series
#

This series is the framework reference. The original Python-only e-commerce series still owns four topics that the framework series intentionally doesn’t cover:

  • Prompt engineering: grounding, roles, and YAML configuration — the systematic prompt architecture (identity / capabilities / constraints / tool guidance / output format) that ECommerce Agents uses for every agent. Read this before writing your first non-trivial system prompt.
  • Production auth, RBAC, and deployment — JWT issuance, refresh tokens, role-aware system prompts, the multi-target Dockerfile and dev.sh bootstrap. The HTTP layer that sits in front of every agent in the capstone.
  • Agent memory: long-term + pgvector — the memories table design with category / importance / embedding / expires_at, the store_memory and recall_memories tools, and the pgvector foundation. Where Ch04 (Sessions) and Ch05 (Context Providers) end, this picks up.
  • Evaluation framework — golden datasets, three-axis scoring (groundedness / correctness / completeness), CLI runner, CI gate. The framework reference doesn’t ship a chapter on evals; this is the canonical one.

The MAF v1 series owns how to use the framework. These four essays own patterns we built on top of MAF for the capstone. They predate v1 and use the older Python-only stack, but the patterns translate cleanly.

Thank you
#

Twenty-three chapters, two languages, one working app. If the series was useful, the best thing you can do is fork the repo, change something, and see what breaks in the Aspire trace. That’s where the real learning lives.

— Nitin
