
MAF v1 — Setup your dev environment (Python + .NET)

Nitin Kumar Singh

I build enterprise AI solutions and cloud-native systems. I write about architecture patterns, AI agents, Azure, and modern development practices — with full source code.

Series note — This is chapter 0 of MAF v1: Python and .NET. The original Python-only series at Building a Multi-Agent E-Commerce Platform assumed you had the Python stack ready. This series covers both languages, so you need a slightly bigger tool belt.

Repo — All runnable code for the series lives at github.com/nitin27may/e-commerce-agents. Clone it once and keep it open; every chapter folder under tutorials/ is self-contained.

Why this chapter

The rest of the series runs real code against real LLMs. You need one Python toolchain, one .NET toolchain, Docker for the supporting infra, and a key for either OpenAI or Azure OpenAI. Do this once and forget it.

If you have never heard of the Microsoft Agent Framework before, read the next section first — it will make the installation steps feel less arbitrary. If you’re already running MAF locally, skip straight to Step 1.

What is the Microsoft Agent Framework?

The Microsoft Agent Framework (MAF) is Microsoft’s SDK for building LLM-powered agents and multi-agent workflows. It ships in two flavours that share the same abstractions:

  • Python — the agent-framework package (plus agent-framework-a2a, agent-framework-openai, and friends).
  • .NET — the Microsoft.Agents.AI family of NuGet packages (built on Microsoft.Extensions.AI).

Under the hood MAF wraps three moving parts. An agent is an LLM (called a chat client in MAF) plus instructions, tools, and optional middleware. A session is how MAF remembers a multi-turn conversation. A workflow is a graph of executors you use when an LLM-driven agent is too free-form and you want deterministic orchestration instead.

This tutorial series builds a full e-commerce platform on top of those three primitives. The stack you’re installing in this chapter is a superset of what any individual chapter needs — we’re over-provisioning so later chapters don’t ask you to install anything new.

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'primaryColor': '#2563eb', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#1e40af', 'lineColor': '#64748b', 'secondaryColor': '#f59e0b', 'tertiaryColor': '#10b981', 'background': 'transparent'}}}%%
flowchart LR
    classDef core fill:#2563eb,stroke:#1e40af,color:#ffffff
    classDef external fill:#f59e0b,stroke:#b45309,color:#000000
    classDef success fill:#10b981,stroke:#047857,color:#ffffff
    classDef infra fill:#64748b,stroke:#334155,color:#ffffff
    subgraph lang["Your laptop — language toolchains"]
        py["Python 3.12 + uv<br/>agent-framework"]
        net[".NET 9 SDK<br/>Microsoft.Agents.AI"]
    end
    subgraph infra_stack["Docker Compose (local infra)"]
        pg[(Postgres 16<br/>+ pgvector)]
        redis[(Redis 7)]
        aspire[Aspire Dashboard<br/>:18888]
    end
    subgraph web["Frontend"]
        next["Next.js 16<br/>Node 20 + pnpm"]
    end
    subgraph cloud["LLM provider (pick one)"]
        openai(["OpenAI<br/>gpt-4.1"])
        azure(["Azure OpenAI<br/>gpt-4.1 deployment"])
    end
    py --> pg
    py --> redis
    py --> openai
    py --> azure
    net --> pg
    net --> redis
    net --> openai
    net --> azure
    next --> py
    next --> net
    py -. OTel .-> aspire
    net -. OTel .-> aspire
    class py,net core
    class openai,azure external
    class pg,redis,aspire infra
    class next success
```

You install the blue pieces (language toolchains) and the grey pieces (Docker + infra) locally. You bring your own orange piece (one LLM provider key). The green piece is Next.js, which talks to whichever backend you run.

Prerequisites

You need a Unix-like shell. macOS and Linux work out of the box; on Windows, use WSL2. Everything below assumes bash or zsh.

If you plan to follow the official Microsoft tutorials alongside this series, keep the MAF docs open in a browser tab; the Further reading section at the end of this chapter collects the links.

Step 1 — Install the toolchains

uv (Python package manager)

curl -LsSf https://astral.sh/uv/install.sh | sh

Verify and install Python 3.12:

uv --version          # expect: uv 0.5.x or later
uv python install 3.12

Why uv and not pip / poetry — it resolves and installs 10–100× faster, manages the virtualenv for you, and is what every tutorial in this repo pins. Treat uv as the single entry point for Python deps.

.NET 9 SDK

| Platform | Install |
| --- | --- |
| macOS | brew install --cask dotnet-sdk |
| Ubuntu | Microsoft’s official instructions |
| Windows | dotnet.microsoft.com/download |

Verify:

dotnet --list-sdks    # expect: 9.0.x or later

Docker + Compose v2

Install Docker Desktop (macOS / Windows) or Docker Engine + docker-compose-plugin on Linux. docker compose version must print a v2.x string.

Node 20 + pnpm (for the frontend)

# If you don't have Node:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
nvm install 20

# pnpm:
corepack enable pnpm

Step 2 — Clone and configure

git clone https://github.com/nitin27may/e-commerce-agents.git
cd e-commerce-agents
cp .env.example .env

Open .env and pick one provider. You only need one key to follow the series; the code reads LLM_PROVIDER and switches automatically.
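The switch itself is nothing exotic: read LLM_PROVIDER and branch. Here is a minimal stdlib-only sketch of the idea — the repo's actual settings code (pydantic-settings on the Python side) is structured differently, so treat the function and dict shape as hypothetical:

```python
import os


def resolve_llm_config() -> dict:
    """Pick chat-model settings based on LLM_PROVIDER.

    Hypothetical sketch of the provider switch described in the text;
    the repo's real settings module uses pydantic-settings instead.
    """
    provider = os.environ.get("LLM_PROVIDER", "openai")
    if provider == "openai":
        return {
            "provider": "openai",
            "api_key": os.environ["OPENAI_API_KEY"],
            "model": os.environ.get("LLM_MODEL", "gpt-4.1"),
        }
    if provider == "azure":
        return {
            "provider": "azure",
            "endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
            "api_key": os.environ["AZURE_OPENAI_KEY"],
            "deployment": os.environ["AZURE_OPENAI_DEPLOYMENT"],
            "api_version": os.environ.get(
                "AZURE_OPENAI_API_VERSION", "2025-03-01-preview"
            ),
        }
    raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
```

Because the defaults mirror the .env reference table below, an empty environment plus one API key is enough to get going.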

Option A — OpenAI

LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...           # from platform.openai.com/api-keys
LLM_MODEL=gpt-4.1
EMBEDDING_MODEL=text-embedding-3-small

Get a key at platform.openai.com/api-keys. The series assumes a model with tool-calling support — gpt-4.1, gpt-4o, or gpt-4o-mini all work.

Option B — Azure OpenAI
#

LLM_PROVIDER=azure
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_KEY=...
AZURE_OPENAI_DEPLOYMENT=gpt-4.1
AZURE_OPENAI_API_VERSION=2025-03-01-preview
AZURE_EMBEDDING_DEPLOYMENT=text-embedding-3-small

On Azure OpenAI, AZURE_OPENAI_DEPLOYMENT is the deployment name you created in the portal — not the model family. If your deployment is called gpt-4.1-prod, that’s what goes here.
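The reason this matters: Azure OpenAI routes requests by deployment name in the URL path, so a wrong name surfaces as a 404, not an auth error. A small illustrative helper (the SDK builds this URL for you; the function is only here to make the path visible):

```python
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions URL.

    Illustrative only: the SDK assembles this internally. The point is
    that the deployment name is a path segment, which is why a name
    mismatch comes back as 404 rather than 401.
    """
    base = endpoint.rstrip("/")
    return (
        f"{base}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )
```

Plug in gpt-4.1 when your portal deployment is actually called gpt-4.1-prod and the resulting path simply does not exist on your resource.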

Environment variable reference

| Variable | Purpose | Required when | Default |
| --- | --- | --- | --- |
| LLM_PROVIDER | Selects OpenAI or Azure OpenAI code paths | Always | openai |
| LLM_MODEL | Model name (OpenAI only) | LLM_PROVIDER=openai | gpt-4.1 |
| OPENAI_API_KEY | OpenAI secret key | LLM_PROVIDER=openai | |
| EMBEDDING_MODEL | Embedding model (OpenAI only) | Embeddings-aware chapters | text-embedding-3-small |
| AZURE_OPENAI_ENDPOINT | Azure resource URL | LLM_PROVIDER=azure | |
| AZURE_OPENAI_KEY | Azure resource key | LLM_PROVIDER=azure | |
| AZURE_OPENAI_DEPLOYMENT | Chat deployment name | LLM_PROVIDER=azure | |
| AZURE_OPENAI_API_VERSION | Azure REST API version | LLM_PROVIDER=azure | 2025-03-01-preview |
| AZURE_EMBEDDING_DEPLOYMENT | Embedding deployment name | Embeddings-aware chapters | |
| JWT_SECRET | Signs user JWTs (capstone auth) | Capstone only | dev default |
| AGENT_SHARED_SECRET | Inter-agent x-agent-secret header | Capstone only | dev default |

Keep JWT_SECRET and AGENT_SHARED_SECRET at their defaults for local dev. They’re rotated in production only.

Step 3 — Verify

One script checks everything at once:

./scripts/verify-setup.sh

Expected output:

Tooling
  ✓ uv (Python package manager)
  ✓ Python 3.12+
  ✓ .NET SDK 9+
  ✓ Docker
  ✓ Docker Compose v2
  ...

Summary
  All 15 checks passed.

If any check shows ✗, fix that item and re-run — the script is idempotent. See Troubleshooting below for common failures.
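If you are curious what each tooling line amounts to, it is essentially a PATH lookup plus a version probe. A stripped-down Python sketch of the pattern (the real script is bash, checks versions too, and exits on the first failure; the function names here are hypothetical):

```python
import shutil


def check_tool(name: str) -> bool:
    """Return True if `name` is on PATH, printing a checkmark line.

    Mirrors the shape of one verify-setup.sh tooling check. The real
    script is bash, also probes versions, and bails on first failure.
    """
    found = shutil.which(name) is not None
    mark = "\u2713" if found else "\u2717"  # ✓ or ✗
    print(f"  {mark} {name}")
    return found


def run_checks(tools: list[str]) -> int:
    """Run all checks; return 0 only if every one passed (CI-friendly)."""
    results = [check_tool(t) for t in tools]
    return 0 if all(results) else 1
```

Re-running after a fix is safe for exactly the same reason the real script is idempotent: each check only inspects the environment, it never mutates it.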

Step 4 — Run something

Bring up the Python stack:

./scripts/dev.sh

Open the frontend at localhost:3000, log in with any seeded test user (see the repo root README.md), and try a prompt like “show me running shoes under $100”.

To try the .NET stack (as each chapter’s dotnet/ side lands):

docker compose -f docker-compose.dotnet.yml --profile agents up --build

Both compose files expose the backend on :8080, so you only run one backend at a time.
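Before switching stacks, it can help to confirm nothing still owns :8080. Any port tool (lsof, netstat) works; here is a tiny stdlib probe for the same check — a hypothetical helper, not part of the repo:

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is listening on host:port.

    Handy before switching between the Python and .NET stacks, since
    both compose files publish the backend on :8080. Hypothetical
    helper for illustration; lsof -i :8080 tells you the same thing.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

If port_in_use(8080) is True while you are bringing up the other backend, you will hit the "address already in use" error covered in Troubleshooting below.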

Side-by-side differences

This chapter is pure tooling — no agent code yet. Two cross-cutting differences are worth flagging before Chapter 01:

| Concern | Python | .NET |
| --- | --- | --- |
| Env-var loading | pydantic-settings reads .env automatically | ASP.NET Core reads env vars + launchSettings.json; .env needs a small loader (shipped in ECommerceAgents.Shared) |
| Package management | uv sync (per-project lockfile) | dotnet restore with central package management via Directory.Packages.props |
| Runtime | Python 3.12 — single interpreter per service | .NET 9 — one runtime per service, can run self-contained |
| LLM SDK | openai + azure-openai, wrapped by agent-framework-openai | Azure.AI.OpenAI + Microsoft.Extensions.AI.OpenAI, wrapped by Microsoft.Agents.AI |
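For a sense of scale, the "small loader" the .NET side needs is a few lines in any language. Here is the general idea as a stdlib Python sketch; the shipped C# helper in ECommerceAgents.Shared may differ in detail:

```python
from pathlib import Path


def load_dotenv(path: str = ".env") -> dict[str, str]:
    """Parse KEY=VALUE lines from a .env file, skipping blanks and comments.

    General-idea sketch of the loader described above; the shipped C#
    helper may differ. Deliberately simple: it does not handle inline
    comments after a value or multi-line values.
    """
    env: dict[str, str] = {}
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Trim whitespace and optional surrounding quotes.
        env[key.strip()] = value.strip().strip("'\"")
    return env
```

The Python side never needs this because pydantic-settings does the equivalent parsing (plus type coercion) automatically.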

Troubleshooting

The most common verify-setup.sh failures and what to do about them:

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| ✗ uv (Python package manager) | uv binary not on PATH | Restart your shell, or add ~/.local/bin to PATH. |
| ✗ Python 3.12+ | uv installed but no 3.12 toolchain | uv python install 3.12, then re-run. |
| ✗ .NET SDK 9+ | Only older SDKs installed | dotnet --list-sdks; if nothing ≥ 9.0, reinstall from the vendor link above. |
| ✗ Docker Compose v2 | Standalone docker-compose (v1) installed | Use docker compose (no hyphen). Install docker-compose-plugin on Linux. |
| ✗ OPENAI_API_KEY | Placeholder still in .env | The script rejects sk-your-openai-api-key-here. Paste your real key. |
| Requests fail with 404 on Azure | Deployment name mismatch | AZURE_OPENAI_DEPLOYMENT must equal the portal deployment name exactly, not the model family. |
| docker compose freezes on macOS | Docker Desktop stalled | Quit and reopen Docker Desktop, or reinstall if it won’t start. |
| bind: address already in use on 5432 / 6379 / 8080 | Existing local Postgres / Redis / other service | Stop the host service, or change ports in docker-compose.yml. |
| ./scripts/dev.sh: Permission denied | Scripts not executable after clone | chmod +x scripts/*.sh |

Gotchas

  • Azure deployment name vs model family. Portal shows gpt-4.1-prod → put gpt-4.1-prod in .env, not gpt-4.1. This is the #1 Azure pitfall.
  • verify-setup.sh rejects the placeholder. OPENAI_API_KEY=sk-your-openai-api-key-here is a sentinel. Replace with your real key.
  • WSL2 + Docker Desktop networking. On Windows, the containers bind to WSL2’s localhost; confirm localhost:3000 reaches it.
  • Cost awareness. Every request hits a paid LLM. gpt-4.1 is cheap per-request, but tight loops (e.g. streaming tests) add up. Use gpt-4o-mini for iteration if you want to keep the meter low.

Tests

This chapter’s “test” is the verify script. Run it in CI to catch toolchain regressions:

./scripts/verify-setup.sh
echo "exit code: $?"   # 0 if all checks pass

The script exits non-zero on the first failure, making it usable as a gate in GitHub Actions.

How this shows up in the capstone

verify-setup.sh lives alongside scripts/dev.sh and is linked from the repo root README.md. It is the single source of truth for “my environment is ready” — both for newcomers on their first clone and for CI.

Further reading & links

  • This chapter
  • Microsoft Agent Framework docs
  • Supporting tools
  • Series shared resources

What’s next

Chapter 01 — Your First Agent takes the stack you just installed and builds the smallest possible MAF agent — about 40 lines per language — to prove everything is wired up.

