Series note — This article is part of MAF v1: Python and .NET. The original Python-only walkthrough lives at Part 1 — AI Agents: Concepts and Your First Implementation. That article is still the best read for the conceptual split between chatbots and agents; this one focuses on the minimum code to get an agent running in both languages, and anchors every later chapter in the real API surface.
Repo — Runnable code for this chapter: tutorials/01-first-agent. Clone, `cd` in, follow along.
Why this chapter#
An agent in MAF is a chat client plus instructions. That’s it. Before we add tools, memory, middleware, or workflows, we need that ~40-line baseline running on both stacks — because every later chapter adds exactly one thing to this starting point.
We’ll answer one question: “What is the capital of France?”
Prerequisites#
- Completed Chapter 00 — Setup.
- `.env` at the repo root with either `OPENAI_API_KEY` or the Azure OpenAI trio (`AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_KEY`, `AZURE_OPENAI_DEPLOYMENT`).
- Read-first (optional): Get Started — Your First Agent.
The concept#
A Microsoft Agent Framework agent wraps three things:
- A chat client — the object that talks to the LLM. In .NET this is any `IChatClient`; in Python it's a `ChatClient` subclass like `OpenAIChatClient`. You never call the LLM directly; you hand the chat client to the agent and let MAF drive it.
- Instructions — the persona and guardrails the agent carries into every turn. Passed as the system prompt on your behalf. A first-class field on the agent, not a message you prepend.
- A name (optional but recommended) — used for telemetry and multi-agent routing later.
You call agent.run(question) (Python) or agent.RunAsync(question) (.NET) and you get back an AgentResponse with a .text / .Text property. No tool-calling loop yet, no session, no middleware — just the simplest thing that could possibly work.
```mermaid
sequenceDiagram
    participant User
    participant Agent as Agent<br/>(instructions + name)
    participant Client as ChatClient<br/>(IChatClient / ChatClient)
    participant LLM as LLM<br/>(OpenAI / Azure)
    User->>Agent: agent.run("What is the capital of France?")
    Agent->>Client: system + user messages
    Client->>LLM: POST /chat/completions or /responses
    LLM-->>Client: completion tokens
    Client-->>Agent: ChatResponse
    Agent-->>User: AgentResponse(.text = "Paris.")
```
The agent is the thin blue box in the middle. It never sees tokens — it sees messages in and a response out. Everything in the rest of the series decorates this flow.
Responses API vs Chat Completions — pick one#
MAF has two code paths to OpenAI-compatible APIs:
- Chat Completions (the classic `/chat/completions` endpoint) — universally supported. Every OpenAI-compatible model, every Azure OpenAI deployment, every third-party provider. Use `OpenAIChatCompletionClient` in Python / `client.GetChatClient(...)` in .NET.
- Responses API (the newer `/responses` endpoint) — richer (service-managed conversation state, structured outputs, background responses) but not yet rolled out to every Azure region. Use `OpenAIChatClient` in Python / `client.GetResponseClient(...)` in .NET.
This chapter uses Chat Completions on Azure (because it works against any deployment) and whichever OpenAI class the user’s key supports. Later chapters that need Responses-API-only features will call that out explicitly.
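The decision boils down to a one-line rule. A minimal sketch of it in plain Python (the helper name `endpoint_for` and the `prefer_responses` flag are illustrative, not part of MAF):

```python
def endpoint_for(provider: str, prefer_responses: bool = False) -> str:
    """Pick the API path per this chapter's rule: Azure deployments stay on
    the universally supported Chat Completions path; Responses is opt-in
    for keys/deployments known to support it."""
    if provider == "azure" or not prefer_responses:
        return "/chat/completions"
    return "/responses"

print(endpoint_for("azure"))                          # /chat/completions
print(endpoint_for("openai", prefer_responses=True))  # /responses
```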
Jargon recap#
- `AIAgent` (.NET) — abstract base class every agent inherits from. The polymorphic type you pass around.
- `ChatClientAgent` (.NET) / `Agent` (Python) — the concrete implementation that wraps an `IChatClient` / `ChatClient`. Produced by `.AsAIAgent(...)` on a chat client in .NET, or by `Agent(chat_client, ...)` in Python.
- `AgentResponse` — result of a run. Holds the text and (in later chapters) tool call metadata, token usage, and streaming updates.
Full definitions in the jargon glossary.
Python#
Full source: python/main.py. Key lines:
```python
# python/main.py (excerpt)
import os

from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient, OpenAIChatCompletionClient

INSTRUCTIONS = "You are a concise geography assistant. Keep answers to one short sentence."

def _default_client():
    provider = os.environ.get("LLM_PROVIDER", "openai").lower()
    if provider == "azure":
        return OpenAIChatCompletionClient(
            model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_KEY"],
            api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2024-10-21"),
        )
    return OpenAIChatClient(
        model=os.environ.get("LLM_MODEL", "gpt-4.1"),
        api_key=os.environ["OPENAI_API_KEY"],
    )

def build_agent(client=None) -> Agent:
    return Agent(client or _default_client(), instructions=INSTRUCTIONS, name="first-agent")

async def main():
    agent = build_agent()
    response = await agent.run("What is the capital of France?")
    print("A:", response.text)
```

Two points worth staring at:
- The factory returns either `OpenAIChatClient` (Responses) or `OpenAIChatCompletionClient` (Chat Completions) depending on `LLM_PROVIDER`. They're two different classes from the same package — the same agent code on top works with either.
- `build_agent(client=None)` is a seam for tests. Tests inject a stub chat client (see `tests/test_main.py`) so the integration test is the only path that touches the network.
Run it:
cd tutorials/01-first-agent/python
uv sync
uv run python main.py
# Q: What is the capital of France?
# A: The capital of France is Paris.

.NET#
Full source: dotnet/Program.cs. Key lines:
// dotnet/Program.cs (excerpt)
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Agents.AI;
using OpenAI;
using OpenAI.Chat;
```csharp
public const string Instructions =
    "You are a concise geography assistant. Keep answers to one short sentence.";

public static AIAgent BuildAgent()
{
    var provider = Environment.GetEnvironmentVariable("LLM_PROVIDER")?.ToLowerInvariant() ?? "openai";
    if (provider == "azure")
    {
        var azure = new AzureOpenAIClient(
            new Uri(Required("AZURE_OPENAI_ENDPOINT")),
            new ApiKeyCredential(Required("AZURE_OPENAI_KEY")));
        return azure.GetChatClient(Required("AZURE_OPENAI_DEPLOYMENT"))
            .AsAIAgent(instructions: Instructions, name: "first-agent");
    }

    var openAi = new OpenAIClient(new ApiKeyCredential(Required("OPENAI_API_KEY")));
    return openAi.GetChatClient(Environment.GetEnvironmentVariable("LLM_MODEL") ?? "gpt-4.1")
        .AsAIAgent(instructions: Instructions, name: "first-agent");
}

var agent = BuildAgent();
var response = await agent.RunAsync("What is the capital of France?");
Console.WriteLine($"A: {response.Text}");
```

`.AsAIAgent(...)` is the key method — it's an extension on `ChatClient` that returns an `AIAgent`. Under the hood it creates a `ChatClientAgent`, but you rarely need that concrete type; `AIAgent` is what consumers type against.
Run it:
cd tutorials/01-first-agent/dotnet
dotnet run
# Q: What is the capital of France?
# A: The capital of France is Paris.

Quick aside — multimodal input#
Your first agent is text-only, but the exact same agent accepts images once you pass a multi-content message. In Python:
from agent_framework import ChatMessage, TextContent, UriContent
```python
response = await agent.run([
    ChatMessage(role="user", contents=[
        TextContent(text="Describe this logo:"),
        UriContent(uri="https://…/logo.png", media_type="image/png"),
    ])
])
```

The .NET equivalent uses `new ChatMessage(ChatRole.User, [new TextContent("…"), new UriContent("…", "image/png")])`. No agent changes — just a richer message shape. We won't touch multimodal again until Chapter 21; file this away.
Quick aside — AgentRunOptions#
Both run() and RunAsync() accept an options object (AgentRunOptions in .NET, options= in Python) where you can set cancellation tokens, per-run temperature, and max-turn budgets. You don’t need it today; later chapters (streaming, orchestrations) will.
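Until then, plain asyncio cancellation already covers the common case on the Python side. A sketch — the slow coroutine below stands in for a real `agent.run` call:

```python
import asyncio

async def slow_agent_run(question: str) -> str:
    await asyncio.sleep(10)        # stands in for a slow LLM round-trip
    return "Paris."

async def main() -> None:
    task = asyncio.create_task(slow_agent_run("What is the capital of France?"))
    await asyncio.sleep(0.1)       # give up quickly for the demo
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("run cancelled")     # the asyncio analogue of .NET's OperationCanceledException

asyncio.run(main())
```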
Side-by-side differences#
| Aspect | Python | .NET |
|---|---|---|
| Agent type | agent_framework.Agent | Microsoft.Agents.AI.AIAgent (typically a ChatClientAgent) |
| Chat client | OpenAIChatClient (Responses) or OpenAIChatCompletionClient (Chat Completions) — same package | Raw OpenAI.Chat.ChatClient / AzureOpenAIClient.GetChatClient(...), converted via .AsAIAgent() |
| Instructions | Agent(..., instructions="...") | .AsAIAgent(instructions: "...") |
| Invocation | await agent.run("...") → .text | await agent.RunAsync("...") → .Text |
| API switch | Different class per API path | Different factory method (GetChatClient vs GetResponseClient) |
| Cancellation | asyncio task cancellation | CancellationToken in RunAsync(..., cancellationToken) |
Gotchas#
- “API version not supported” on Azure. Your deployment doesn’t expose the Responses API. Use `OpenAIChatCompletionClient` in Python / plain `GetChatClient(...).AsAIAgent()` in .NET, with an older `api_version` like `2024-10-21`.
- MAF v1.0 Python wheel has an empty `__init__.py`. The tutorials in this series call `tutorials/_shared/maf_bootstrap.py` at startup to patch it. Upstream will fix this eventually; the bootstrap becomes a no-op at that point.
- Don’t forget `using OpenAI.Chat;` in .NET — the `AsAIAgent` extension lives there. `Required("…")` throws on missing vars. If a test seems to hang on `.RunAsync`, the agent never built — check for a missing env var raised out of the factory.
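The same fail-fast idea is easy to replicate on the Python side. A hypothetical `required` helper (not part of main.py) looks like:

```python
import os

def required(name: str) -> str:
    """Read an env var or fail immediately with a pointed message --
    far easier to debug than an agent that silently never builds."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

os.environ["DEMO_VAR"] = "hello"   # demo value so the call below succeeds
print(required("DEMO_VAR"))        # hello
```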
Tests#
Both sides ship tests that exercise:
- A stub chat client returns a canned answer (no network required).
- Agent name and instructions propagate correctly to the underlying client.
- A real LLM call answering “capital of France” when credentials are present (skipped otherwise).
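The “skipped otherwise” behaviour hinges on one predicate. A sketch of that check (the function name is illustrative; the real tests pair the same idea with pytest's skip machinery):

```python
import os

AZURE_TRIO = ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_KEY", "AZURE_OPENAI_DEPLOYMENT")

def llm_credentials_present(env=None) -> bool:
    """True when either an OpenAI key or the full Azure trio is configured."""
    env = os.environ if env is None else env
    return "OPENAI_API_KEY" in env or all(k in env for k in AZURE_TRIO)

print(llm_credentials_present({}))                                # False
print(llm_credentials_present({"OPENAI_API_KEY": "sk-..."}))      # True
print(llm_credentials_present({"AZURE_OPENAI_ENDPOINT": "..."}))  # False (trio incomplete)
```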
# Python
cd tutorials/01-first-agent/python
uv run pytest -v
# .NET
cd tutorials/01-first-agent/dotnet
dotnet test tests/FirstAgent.Tests.csproj

11 tests total (6 Python, 5 .NET). Both integration tests successfully hit Azure OpenAI when `.env` is populated.
How this shows up in the capstone#
The orchestrator at agents/python/orchestrator/agent.py:86-95 is this exact pattern with more fields (tools, context providers, description). Every specialist agent starts the same way. Once you can read this chapter’s 40 lines, you can read every agent construction in the repo.
In .NET, the equivalent factory lives at agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs.
Further reading & links#
This chapter
- Source on GitHub: tutorials/01-first-agent
- Previous: Chapter 00 — Setup · Next: Chapter 02 — Adding Tools
Microsoft Agent Framework docs
- Get Started — Your First Agent
- Agents — Overview (agent types, `IChatClient` abstraction)
- Agents — Running Agents (streaming, options, responses)
- Agents — Providers (OpenAI vs Azure OpenAI vs Foundry)
- Journey — From LLMs to Agents
Where it lives in the capstone
- Python: `agents/python/orchestrator/agent.py:86-95` (orchestrator factory), `agents/python/product_discovery/agent.py:85-96` (specialist)
- .NET: `agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs`
Series shared resources
What’s next#
Chapter 02 — Adding Tools teaches the agent to call functions. The agent object doesn’t change; the tools=[...] list does.

