
MAF v1 — Your First Agent (Python + .NET)

Nitin Kumar Singh

Series note — This article is part of MAF v1: Python and .NET. The original Python-only walkthrough lives at Part 1 — AI Agents: Concepts and Your First Implementation. That article is still the best read for the conceptual split between chatbots and agents; this one focuses on the minimum code to get an agent running in both languages, and anchors every later chapter in the real API surface.

Repo — Runnable code for this chapter: tutorials/01-first-agent. Clone, cd in, follow along.

Why this chapter

An agent in MAF is a chat client plus instructions. That’s it. Before we add tools, memory, middleware, or workflows, we need that ~40-line baseline running on both stacks — because every later chapter adds exactly one thing to this starting point.

We’ll answer one question: “What is the capital of France?”

Prerequisites

You need either an OPENAI_API_KEY, or an Azure OpenAI deployment with AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_KEY, and AZURE_OPENAI_DEPLOYMENT set. The Python sample runs with uv; the .NET sample needs the .NET SDK.

The concept

A Microsoft Agent Framework agent wraps three things:

  1. A chat client — the object that talks to the LLM. In .NET this is any IChatClient; in Python it’s a ChatClient subclass like OpenAIChatClient. You never call the LLM directly; you hand the chat client to the agent and let MAF drive it.
  2. Instructions — the persona and guardrails the agent carries into every turn. Passed as the system prompt on your behalf. First-class field on the agent, not a message you prepend.
  3. A name (optional but recommended) — used for telemetry and multi-agent routing later.

You call agent.run(question) (Python) or agent.RunAsync(question) (.NET) and you get back an AgentResponse with a .text / .Text property. No tool-calling loop yet, no session, no middleware — just the simplest thing that could possibly work.

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'primaryColor': '#2563eb','primaryTextColor': '#ffffff','primaryBorderColor': '#1e40af', 'lineColor': '#64748b','secondaryColor': '#f59e0b','tertiaryColor': '#10b981', 'background': 'transparent'}}}%%
sequenceDiagram
    autonumber
    participant User as User code
    participant Agent as Agent<br/>(instructions + name)
    participant Client as ChatClient<br/>(IChatClient / ChatClient)
    participant LLM as LLM<br/>(OpenAI / Azure)
    User->>Agent: agent.run("What is the capital of France?")
    Agent->>Client: system + user messages
    Client->>LLM: POST /chat/completions or /responses
    LLM-->>Client: completion tokens
    Client-->>Agent: ChatResponse
    Agent-->>User: AgentResponse(.text = "Paris.")
```

The agent is the thin blue box in the middle. It never sees tokens — it sees messages in and a response out. Everything in the rest of the series decorates this flow.

Responses API vs Chat Completions — pick one

MAF has two code paths to OpenAI-compatible APIs:

  • Chat Completions (the classic /chat/completions endpoint) — universally supported. Every OpenAI-compatible model, every Azure OpenAI deployment, every third-party provider. Use OpenAIChatCompletionClient in Python / client.GetChatClient(...) in .NET.
  • Responses API (the newer /responses endpoint) — richer (service-managed conversation state, structured outputs, background responses) but not yet rolled out to every Azure region. Use OpenAIChatClient in Python / client.GetResponseClient(...) in .NET.

This chapter uses Chat Completions on Azure (because it works against any deployment) and whichever OpenAI class the user’s key supports. Later chapters that need Responses-API-only features will call that out explicitly.
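To make the split concrete, here is roughly what each path sends over the wire, sketched as plain dicts (no network). The Chat Completions shape is the standard one; the Responses shape follows OpenAI's public /responses documentation and is an assumption about the wire format, not something lifted from MAF's source.

```python
question = "What is the capital of France?"
instructions = "You are a concise geography assistant."

# POST {endpoint}/chat/completions  (OpenAIChatCompletionClient path)
chat_completions_body = {
    "model": "gpt-4.1",
    "messages": [
        {"role": "system", "content": instructions},
        {"role": "user", "content": question},
    ],
}

# POST {endpoint}/responses  (OpenAIChatClient path)
responses_body = {
    "model": "gpt-4.1",
    "instructions": instructions,  # system prompt is a top-level field here
    "input": question,             # a plain string or a list of input items
}
```

The agent code on top is identical either way; only the body the chat client builds differs.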

Jargon recap

  • AIAgent (.NET) — abstract base class every agent inherits from. The polymorphic type you pass around.
  • ChatClientAgent (.NET) / Agent (Python) — the concrete implementation that wraps an IChatClient / ChatClient. Produced by .AsAIAgent(...) on a chat client in .NET, or by Agent(chat_client, ...) in Python.
  • AgentResponse — result of a run. Holds the text and (in later chapters) tool call metadata, token usage, and streaming updates.

Full definitions in the jargon glossary.

Python

Full source: python/main.py. Key lines:

# python/main.py (excerpt)
import asyncio
import os

from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient, OpenAIChatCompletionClient

INSTRUCTIONS = "You are a concise geography assistant. Keep answers to one short sentence."

def _default_client():
    provider = os.environ.get("LLM_PROVIDER", "openai").lower()
    if provider == "azure":
        return OpenAIChatCompletionClient(
            model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_KEY"],
            api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2024-10-21"),
        )
    return OpenAIChatClient(
        model=os.environ.get("LLM_MODEL", "gpt-4.1"),
        api_key=os.environ["OPENAI_API_KEY"],
    )

def build_agent(client=None) -> Agent:
    return Agent(client or _default_client(), instructions=INSTRUCTIONS, name="first-agent")

async def main():
    agent = build_agent()
    question = "What is the capital of France?"
    print("Q:", question)
    response = await agent.run(question)
    print("A:", response.text)

if __name__ == "__main__":
    asyncio.run(main())

Two points worth staring at:

  • The factory returns either OpenAIChatClient (Responses) or OpenAIChatCompletionClient (Chat Completions) depending on LLM_PROVIDER. They’re two different classes from the same package — same agent code on top works with either.
  • build_agent(client=None) is a seam for tests. Tests inject a stub chat client (see tests/test_main.py) so the integration test is the only path that touches the network.
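If the seam idea is new, here is the same pattern with the framework stripped away. FakeChatClient and the minimal Agent class below are illustrative stand-ins, not the MAF API:

```python
# Library-free sketch of the test seam used by build_agent(client=None).

INSTRUCTIONS = "You are a concise geography assistant."

class FakeChatClient:
    """Stands in for OpenAIChatClient in tests: canned answer, no network."""
    def __init__(self, canned):
        self.canned = canned
        self.last_messages = None

    def complete(self, messages):
        self.last_messages = messages  # captured so tests can assert on it
        return self.canned

class Agent:
    """Minimal agent shape: chat client + instructions + name."""
    def __init__(self, client, instructions, name):
        self.client, self.instructions, self.name = client, instructions, name

    def run(self, question):
        # The agent's whole job at this stage: prepend the system prompt.
        return self.client.complete([
            {"role": "system", "content": self.instructions},
            {"role": "user", "content": question},
        ])

def build_agent(client=None):
    # The seam: tests inject a stub, production code passes nothing.
    return Agent(client or FakeChatClient("(default)"), INSTRUCTIONS, "first-agent")

stub = FakeChatClient("Paris.")
agent = build_agent(stub)
answer = agent.run("What is the capital of France?")
```

The real tests do the same thing with MAF's actual types: inject a stub, run one question, assert on the answer and the captured messages.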

Run it:

cd tutorials/01-first-agent/python
uv sync
uv run python main.py
# Q: What is the capital of France?
# A: The capital of France is Paris.

.NET

Full source: dotnet/Program.cs. Key lines:

// dotnet/Program.cs (excerpt)
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Agents.AI;
using OpenAI;
using OpenAI.Chat;

// Required(name) is a helper in the full source: it reads the env var or throws if it's missing.
public const string Instructions =
    "You are a concise geography assistant. Keep answers to one short sentence.";

public static AIAgent BuildAgent()
{
    var provider = Environment.GetEnvironmentVariable("LLM_PROVIDER")?.ToLowerInvariant() ?? "openai";

    if (provider == "azure")
    {
        var azure = new AzureOpenAIClient(
            new Uri(Required("AZURE_OPENAI_ENDPOINT")),
            new ApiKeyCredential(Required("AZURE_OPENAI_KEY")));
        return azure.GetChatClient(Required("AZURE_OPENAI_DEPLOYMENT"))
                    .AsAIAgent(instructions: Instructions, name: "first-agent");
    }

    var openAi = new OpenAIClient(new ApiKeyCredential(Required("OPENAI_API_KEY")));
    return openAi.GetChatClient(Environment.GetEnvironmentVariable("LLM_MODEL") ?? "gpt-4.1")
                 .AsAIAgent(instructions: Instructions, name: "first-agent");
}

var agent = BuildAgent();
var question = "What is the capital of France?";
Console.WriteLine($"Q: {question}");
var response = await agent.RunAsync(question);
Console.WriteLine($"A: {response.Text}");

.AsAIAgent(...) is the key method — it’s an extension on ChatClient that returns an AIAgent. Under the hood it creates a ChatClientAgent, but you rarely need that concrete type; AIAgent is what consumers type against.

Run it:

cd tutorials/01-first-agent/dotnet
dotnet run
# Q: What is the capital of France?
# A: The capital of France is Paris.

Quick aside — multimodal input

Your first agent is text-only, but the exact same agent accepts images once you pass a multi-content message. In Python:

from agent_framework import ChatMessage, TextContent, UriContent
response = await agent.run([
    ChatMessage(role="user", contents=[
        TextContent(text="Describe this logo:"),
        UriContent(uri="https://…/logo.png", media_type="image/png"),
    ])
])

The .NET equivalent uses new ChatMessage(ChatRole.User, [new TextContent("…"), new UriContent("…", "image/png")]). No agent changes — just a richer message shape. We won’t touch multimodal again until Chapter 21; file this away.

Quick aside — AgentRunOptions

Both run() and RunAsync() accept an options object (AgentRunOptions in .NET, options= in Python) where you can set cancellation tokens, per-run temperature, and max-turn budgets. You don’t need it today; later chapters (streaming, orchestrations) will.
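There is no CancellationToken on the Python side; cancelling a run is ordinary asyncio. A minimal sketch, where slow_run stands in for a long agent.run(...) call:

```python
import asyncio

async def slow_run(question):
    # Stands in for an agent.run(...) call that is taking too long.
    await asyncio.sleep(10)
    return "Paris."

async def ask_with_timeout():
    try:
        # wait_for cancels the underlying coroutine when the timeout fires
        return await asyncio.wait_for(slow_run("capital?"), timeout=0.05)
    except asyncio.TimeoutError:
        return "cancelled"

result = asyncio.run(ask_with_timeout())
```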

Side-by-side differences

| Aspect | Python | .NET |
| --- | --- | --- |
| Agent type | agent_framework.Agent | Microsoft.Agents.AI.AIAgent (typically a ChatClientAgent) |
| Chat client | OpenAIChatClient (Responses) or OpenAIChatCompletionClient (Chat Completions), same package | raw OpenAI.Chat.ChatClient / AzureOpenAIClient.GetChatClient(...), converted via .AsAIAgent() |
| Instructions | Agent(..., instructions="...") | .AsAIAgent(instructions: "...") |
| Invocation | await agent.run("..."), then .text | await agent.RunAsync("..."), then .Text |
| API switch | different class per API path | different factory method (GetChatClient vs GetResponseClient) |
| Cancellation | asyncio task cancellation | CancellationToken in RunAsync(..., cancellationToken) |

Gotchas

  • “API version not supported” on Azure. Your deployment doesn’t expose the Responses API. Use OpenAIChatCompletionClient in Python / plain GetChatClient(...).AsAIAgent() in .NET, with an older api_version like 2024-10-21.
  • MAF v1.0 Python wheel has an empty __init__.py. The tutorials in this series call tutorials/_shared/maf_bootstrap.py at startup to patch it. Upstream will fix this eventually; the bootstrap becomes a no-op at that point.
  • Don’t forget using OpenAI.Chat; in .NET — the AsAIAgent extension lives there.
  • Required("…") throws on missing vars. If a test seems to hang on .RunAsync, the agent never built — check for a missing env var raised out of the factory.

Tests

Both sides ship tests that exercise:

  1. A stub chat client returns a canned answer (no network required).
  2. Agent name and instructions propagate correctly to the underlying client.
  3. A real LLM call answering “capital of France” when credentials are present (skipped otherwise).
# Python
cd tutorials/01-first-agent/python
uv run pytest -v

# .NET
cd tutorials/01-first-agent/dotnet
dotnet test tests/FirstAgent.Tests.csproj

11 tests total (6 Python, 5 .NET). Both integration tests successfully hit Azure OpenAI when .env is populated.

How this shows up in the capstone

The orchestrator at agents/python/orchestrator/agent.py:86-95 is this exact pattern with more fields (tools, context providers, description). Every specialist agent starts the same way. Once you can read this chapter’s 40 lines, you can read every agent construction in the repo.

In .NET, the equivalent factory lives at agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs.

Further reading & links

This chapter

Microsoft Agent Framework docs

Where it lives in the capstone

  • Python: agents/python/orchestrator/agent.py:86-95 (orchestrator factory), agents/python/product_discovery/agent.py:85-96 (specialist)
  • .NET: agents/dotnet/src/ECommerceAgents.Orchestrator/Agent/OrchestratorAgentFactory.cs

Series shared resources

What’s next

Chapter 02 — Adding Tools teaches the agent to call functions. The agent object doesn’t change; the tools=[...] list does.

