Visual, demo-ready multi-agent interaction patterns using Foundry Local + Microsoft Agent Framework
Run multi-agent orchestrations entirely on-device with animated graph visualizations, live message tracing, and replay capabilities.
This demo pack contains seven runnable multi-agent orchestration patterns — Sequential, Concurrent, Handoff, Group Chat, Supervisor Router, Swarm + Auditor, and Magentic One — each with a live web dashboard that animates the agent graph, streams messages in real time, and logs every event for replay. It provides full coverage of every pattern in Microsoft Agent Framework: SequentialBuilder, ConcurrentBuilder, HandoffBuilder, GroupChatBuilder, and MagenticBuilder. It runs entirely on your laptop using Foundry Local (no API keys, no cloud costs), or you can switch to Microsoft Foundry for cloud-hosted models deployed via the model-router with a single .env change. The pack is designed to help developers see how agents collaborate so they can apply the right pattern in their own projects.
- Python 3.10 or later — check your version with `python --version`. Download from python.org if needed.
- Git — to clone this repo. Download from git-scm.com if needed.
Choose one of the options below. You can switch between them at any time from the UI settings panel.
Foundry Local runs AI models entirely on your device — no internet connection required during inference.
Windows (PowerShell or Command Prompt):

```bash
winget install Microsoft.FoundryLocal
```

macOS: Support coming soon — check foundrylocal.ai for updates.
Microsoft Foundry lets you deploy and route to cloud-hosted models via a single model-router endpoint.
- Sign in at ai.azure.com
- Create or select a project
- In the left sidebar, go to My assets → Models + endpoints
- Click + Deploy model, select your model, and choose Model-Router — this creates a smart routing endpoint that can balance across multiple model deployments
- Complete the deployment wizard; note the Target URI and API key from the deployment detail page
- Add them to your `.env` file (see `.env.example`):
> **What is model-router?** The model-router in Microsoft Foundry is a managed deployment type that intelligently routes requests across multiple model deployments based on availability, latency, and quota — giving you a single stable endpoint even as you add or swap underlying models.
---
### Step 2 — Start a model *(Foundry Local — Option A only)*
If you chose **Option B (Microsoft Foundry)**, skip this step — your model is already running in the cloud.
```bash
foundry model run qwen2.5-1.5b
```

This downloads (first run only) and starts the `qwen2.5-1.5b` model — a small, fast model well-suited for demos. Leave this terminal open.
Tip: Run `foundry model list` to see all available models.
```bash
git clone https://github.com/your-org/agent-patterns-foundry-demo.git
cd agent-patterns-foundry-demo
```

A virtual environment keeps this project's Python packages separate from everything else on your system. This is a best practice that avoids version conflicts.
Windows (PowerShell):

```powershell
python -m venv .venv
.venv\Scripts\Activate.ps1
```

Windows (Command Prompt):

```bash
python -m venv .venv
.venv\Scripts\activate.bat
```

macOS / Linux:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

Once activated, your terminal prompt will show `(.venv)` at the start — this confirms the virtual environment is active.
Troubleshooting (Windows PowerShell): If you see an error about script execution being disabled, run:

```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
```

Then try activating again.
With the virtual environment active, install all required packages:

```bash
pip install -r requirements.txt
```

This installs the Microsoft Agent Framework, Foundry Local SDK, FastAPI, and all other dependencies. It typically takes 1–2 minutes.
Verify the install: Run `pip list` to see all installed packages.
```bash
python app.py
```

Open http://localhost:8765 in your browser — you'll see the demo launcher. Pick any demo card to start it.
Note: If you chose Option A (Foundry Local), make sure it is still running (from Step 2) before launching. If you chose Option B (Microsoft Foundry), ensure your `.env` has the endpoint and API key set.
The fastest way to explore all seven demos is through the unified web app:
```bash
python app.py
```

Open http://localhost:8765 in your browser. You'll see a card-based launcher showing all demos with their orchestration patterns and agents. Click any card to start that demo and open its live dashboard (graph, timeline, stream panels).
Use ← All Demos in the dashboard header to switch between demos, or Re-run to restart the current one.
You can also run any demo standalone:
```bash
python -m demos.maker_checker.run
python -m demos.hierarchical_research.run
python -m demos.handoff_support.run
python -m demos.network_brainstorm.run
python -m demos.supervisor_router.run
python -m demos.swarm_auditor.run
python -m demos.magentic_one.run
# Each opens http://localhost:8765 with the agent graph animation
```

All five Agent Framework orchestration builders are covered:
| AF Pattern | Framework Builder | Demo(s) |
|---|---|---|
| Sequential | `SequentialBuilder` | Maker–Checker PR Review |
| Concurrent | `ConcurrentBuilder` | Hierarchical Research Brief, Swarm + Auditor |
| Handoff | `HandoffBuilder` | Hand-off Customer Support, Supervisor Router |
| Group Chat | `GroupChatBuilder` | Network Brainstorm |
| Magentic One | `MagenticBuilder` | Magentic One Assessment |
| # | Demo | AF Builder | Agents | Pattern |
|---|---|---|---|---|
| 1 | Maker–Checker PR Review | `SequentialBuilder` | 2 (Worker + Reviewer) | Sequential — iterative review loop, up to 3 rounds |
| 2 | Hierarchical Research Brief | `ConcurrentBuilder` | 4 (Manager + 2 Specialists + Synthesizer) | Concurrent specialists → sequential synthesizer |
| 3 | Hand-off Customer Support | `HandoffBuilder` | 3 (Triage + Billing + Tech) | Handoff — autonomous mode with termination condition |
| 4 | Network Brainstorm | `GroupChatBuilder` | 4 peers | Group Chat — round-robin, 4 max rounds |
| 5 | Supervisor Router | `HandoffBuilder` | 4 (Supervisor + 3 Specialists) | Handoff — Supervisor transfers to matching specialist |
| 6 | Swarm + Auditor | `ConcurrentBuilder` | 5 (3 Generators + Auditor + Selector) | Concurrent generators → sequential audit + selection |
| 7 | Magentic One Assessment | `MagenticBuilder` | 4 (MagenticManager + Researcher + Strategist + Critic) | Magentic One — manager routes dynamically, not round-robin |
Each demo launches a web UI at http://localhost:8765 with:
- Graph Panel: nodes are agents, edges are interaction routes; active agent highlighted
- Live Stream: real-time messages as agents communicate
- Timeline: chronological trace with expandable event details
- Replay: load any saved run (JSONL) and replay the animation
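Replay works because every event is appended to a JSONL log as the run progresses: one JSON object per line, replayed in order. A minimal sketch of that round trip, assuming a simplified event shape (`type`, `agent`, `payload` are illustrative field names, not necessarily the repo's actual schema):

```python
import json
from pathlib import Path

def replay(log_path: str):
    """Yield trace events from a saved run, one per JSONL line, in order."""
    for line in Path(log_path).read_text().splitlines():
        if line.strip():
            yield json.loads(line)

# Write a tiny fake run log, then replay it.
run = [
    {"type": "agent_started", "agent": "Worker"},
    {"type": "agent_message", "agent": "Worker", "payload": "draft ready"},
    {"type": "handoff", "agent": "Worker", "to": "Reviewer"},
]
Path("run.jsonl").write_text("\n".join(json.dumps(e) for e in run))

events = list(replay("run.jsonl"))
print([e["type"] for e in events])
```

Because each line is independent JSON, a partially written log from an interrupted run still replays up to the last complete event.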
```
agentpatterns/
├── app.py                     # Unified web launcher (start here)
├── capture_screenshots.py     # Playwright E2E screenshot & video capture
├── validate_demos.py          # Shim → tests/test_demos.py
├── requirements.txt
├── .env.example
├── shared/
│   ├── runtime/
│   │   ├── foundry_client.py  # Foundry Local client (foundry-local-sdk)
│   │   ├── model_config.py    # Runtime-switchable provider config singleton
│   │   ├── agent_wrapper.py   # Instrumented agent wrapper emitting trace events
│   │   └── orchestrations.py  # Pattern helpers using AF orchestration builders
│   ├── events/
│   │   ├── event_types.py     # agent_started, agent_message, handoff, etc.
│   │   └── event_bus.py       # In-process pub/sub + WebSocket bridge + JSONL log
│   └── ui/
│       ├── server.py          # FastAPI + WebSocket server
│       └── static/
│           ├── launcher.html  # Demo launcher home page
│           ├── dashboard.html # Per-demo live dashboard
│           ├── graph.js       # D3.js force-directed graph with zoom
│           ├── dashboard.js   # Dashboard event handling
│           ├── timeline.js    # Timeline + trace inspector
│           ├── stream.js      # Live message stream
│           └── styles.css     # Styling
├── demos/
│   ├── maker_checker/         # Demo 1 — Sequential (SequentialBuilder)
│   ├── hierarchical_research/ # Demo 2 — Concurrent (ConcurrentBuilder)
│   ├── handoff_support/       # Demo 3 — Handoff (HandoffBuilder)
│   ├── network_brainstorm/    # Demo 4 — Group Chat (GroupChatBuilder)
│   ├── supervisor_router/     # Demo 5 — Handoff (HandoffBuilder)
│   ├── swarm_auditor/         # Demo 6 — Concurrent (ConcurrentBuilder)
│   └── magentic_one/          # Demo 7 — Magentic One (MagenticBuilder)
├── agents.md                  # Agent reference — all 7 demos, model picker docs
├── tests/
│   ├── test_demos.py          # E2E demo validation (all 7)
│   ├── test_topology.py       # Unit tests — topology.json structure
│   ├── test_event_bus.py      # Unit tests — EventBus
│   ├── test_model_config.py   # Unit tests — ModelConfig
│   └── test_api.py            # Unit tests — FastAPI endpoints incl. model picker
└── docs/
    ├── architecture.md
    ├── demo-day-checklist.md
    └── walkthrough.md
```
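The heart of the tracing layer is `shared/events/event_bus.py`, described above as in-process pub/sub plus a JSONL log. A minimal sketch of that idea, assuming hypothetical names (this is not the repo's actual `EventBus`, which also bridges events to WebSocket clients):

```python
import json
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process pub/sub with an append-only log (illustrative)."""

    def __init__(self):
        self._subscribers = defaultdict(list)
        self.log = []  # stands in for the JSONL file on disk

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        self._subscribers[event_type].append(handler)

    def emit(self, event_type: str, **data):
        event = {"type": event_type, **data}
        self.log.append(json.dumps(event))      # every event is logged for replay
        for handler in self._subscribers[event_type]:
            handler(event)                       # fan out to live listeners

bus = EventBus()
seen = []
bus.subscribe("agent_message", seen.append)
bus.emit("agent_message", agent="Triage", text="routing to Billing")
bus.emit("handoff", agent="Triage", to="Billing")  # no subscriber; logged only
print(len(seen), len(bus.log))
```

The key design point this illustrates: logging happens before fan-out, so the replay file stays complete even if a UI subscriber is slow or absent.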
| Component | Technology |
|---|---|
| Model Runtime | Foundry Local: on-device, OpenAI-compatible |
| Cloud Runtime | Microsoft Foundry: model-router deployment — swap to cloud with a single .env change |
| Orchestration | Microsoft Agent Framework — all five builders covered: SequentialBuilder, ConcurrentBuilder, HandoffBuilder, GroupChatBuilder, MagenticBuilder |
| Agent SDK | agent-framework, agent-framework-orchestrations, agent-framework-foundry-local |
| UI Backend | FastAPI + WebSocket |
| Visualization | D3.js (force-directed graph), vanilla JS (timeline/stream) |
- Install: Visit foundrylocal.ai or use `winget install Microsoft.FoundryLocal`
- List available models: `foundry model list`
- Run a model (starts the local service automatically): `foundry model run phi-4-mini`
- Check service status: `foundry service status` — this returns the actual URL (e.g. `http://localhost:47372`), since Foundry Local uses a dynamic port. The demos auto-detect this via the foundry-local-sdk.
Note: Foundry Local starts on a dynamic port — do not hardcode `5273`. The shared runtime uses `FoundryLocalManager` from the `foundry-local-sdk` to discover the correct endpoint automatically.
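The discovery pattern can be sketched as follows — a hypothetical helper (not from the repo) that prefers an explicit env override and otherwise falls back to SDK discovery; `FoundryLocalManager` is the documented entry point of the `foundry-local-sdk` package:

```python
import os

def resolve_local_endpoint(alias: str = "qwen2.5-1.5b") -> str:
    """Resolve the Foundry Local base URL without hardcoding a port.

    Illustrative helper: check FOUNDRY_LOCAL_ENDPOINT first, then ask the
    SDK, which starts/attaches to the service and reports the dynamically
    chosen endpoint.
    """
    override = os.environ.get("FOUNDRY_LOCAL_ENDPOINT")
    if override:
        return override
    # Requires the foundry-local-sdk package and a local service to attach to.
    from foundry_local import FoundryLocalManager
    return FoundryLocalManager(alias).endpoint

# With the override set, no SDK or running service is needed:
os.environ["FOUNDRY_LOCAL_ENDPOINT"] = "http://localhost:47372/v1"
print(resolve_local_endpoint())
```

The env-override branch is what makes the `FOUNDRY_LOCAL_ENDPOINT` variable in the configuration table below useful for debugging.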
Microsoft Foundry provides cloud-hosted models via the model-router — a single managed endpoint that routes requests across your deployed models.
- Sign in at ai.azure.com
- Open your project (or create one: + New project)
- Go to My assets → Models + endpoints → + Deploy model
- Choose your model (e.g. `gpt-4o-mini`, `Phi-4`, `Mistral-Large`)
- Set Deployment type to Model router
- Finish the wizard; on the deployment detail page copy:
- Target URI — the model-router endpoint
- Key — your API key
```bash
MODEL_PROVIDER=azure_foundry
AZURE_FOUNDRY_ENDPOINT=https://<your-project>.services.ai.azure.com/models
AZURE_FOUNDRY_API_KEY=<your-api-key>
AZURE_FOUNDRY_MODEL=gpt-4o-mini
# Optional: pin to a specific deployment name
# AZURE_FOUNDRY_DEPLOYMENT=my-gpt4o-deployment
```

Restart `python app.py` and all demos will route through Microsoft Foundry. No code changes needed — the ModelConfig singleton reads from `.env` at startup. You can also switch providers live from the Model Settings panel in the launcher UI (click the ⚙ gear icon or the provider chip in the header).
Tip: The model-router endpoint (`/models`) is compatible with the OpenAI Python SDK and the Microsoft Agent Framework without any additional changes.
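To make "OpenAI-compatible" concrete, here is a sketch that builds (but does not send) a chat/completions request against the model-router endpoint using only the standard library. The helper name, placeholder endpoint, and the `api-key` header are assumptions for illustration — in practice you would simply point the OpenAI SDK's `base_url` at the endpoint:

```python
import json
import urllib.request

def build_chat_request(endpoint: str, api_key: str, model: str, prompt: str):
    """Construct an OpenAI-style chat request (illustrative helper)."""
    body = json.dumps({
        # The model-router picks a deployment; this field states a preference.
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{endpoint}/chat/completions",
        data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "https://my-project.services.ai.azure.com/models",  # placeholder, not real
    "placeholder-key",
    "gpt-4o-mini",
    "Say hello",
)
print(req.full_url)
```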
The launcher includes a live model picker that lets you switch models without editing .env. Click the ⚙ icon (or the provider chip) to open the settings panel.
| Foundry Local | Microsoft Foundry |
|---|---|
| ![]() | ![]() |
Each model card shows:
- Status badge: `Loaded` (in memory), `Cached` (on disk), or `Available` (in catalog)
- Device type (GPU / CPU), size in MB, tool-calling support, publisher
Click any card to select it, then Save & Apply. The status chip in the header updates immediately.
Click ↻ Refresh to re-query the running Foundry Local service for the latest model list.
Enter the Endpoint URL and API Key from your Foundry project. Click ↻ List Models to browse available deployments, then click any entry to select it. Leave Deployment Name blank to use the model name directly.
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/models/local` | Catalog models with loaded/cached/catalog status |
| GET | `/api/models/azure` | Deployed models from the configured Foundry endpoint |
| GET | `/api/model-config` | Current provider, model, and endpoint settings |
| POST | `/api/model-config` | Update and persist provider/model settings |
Copy `.env.example` to `.env` and adjust if needed:

```bash
cp .env.example .env
```

| Variable | Default | Description |
|---|---|---|
| `MODEL_PROVIDER` | `foundry_local` | `foundry_local` or `azure_foundry` |
| `FOUNDRY_LOCAL_ENDPOINT` | (auto-detected via SDK) | Override local endpoint if needed |
| `FOUNDRY_MODEL` | `qwen2.5-1.5b` | Local model alias |
| `AZURE_FOUNDRY_ENDPOINT` | — | Microsoft Foundry endpoint URL |
| `AZURE_FOUNDRY_API_KEY` | — | Microsoft Foundry API key |
| `AZURE_FOUNDRY_MODEL` | `gpt-4o-mini` | Azure model name |
| `AZURE_FOUNDRY_DEPLOYMENT` | — | Azure deployment name |
| `UI_PORT` | `8765` | Web UI port |
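The provider-dependent defaults in this table can be sketched as a small config object read from the environment. This mirrors the table above but is illustrative only — the real `ModelConfig` singleton in `shared/runtime/model_config.py` may differ in shape:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    """Provider settings resolved from the environment at startup (sketch)."""
    provider: str
    model: str
    ui_port: int

    @classmethod
    def from_env(cls) -> "ModelConfig":
        provider = os.environ.get("MODEL_PROVIDER", "foundry_local")
        # Which model variable applies depends on the selected provider.
        if provider == "azure_foundry":
            model = os.environ.get("AZURE_FOUNDRY_MODEL", "gpt-4o-mini")
        else:
            model = os.environ.get("FOUNDRY_MODEL", "qwen2.5-1.5b")
        return cls(provider, model, int(os.environ.get("UI_PORT", "8765")))

# Clear the relevant variables so the demo shows the documented defaults.
for var in ("MODEL_PROVIDER", "FOUNDRY_MODEL", "UI_PORT"):
    os.environ.pop(var, None)

cfg = ModelConfig.from_env()
print(cfg.provider, cfg.model, cfg.ui_port)
```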
```bash
# Unit tests — no live service needed
python -m pytest tests/test_topology.py tests/test_event_bus.py tests/test_model_config.py tests/test_api.py -v

# E2E demo validation — requires Foundry Local running with a model loaded
python tests/test_demos.py

# All tests
python -m pytest tests/ -v
```
`validate_demos.py` in the project root is a backwards-compatible shim that forwards to `tests/test_demos.py`.
- Foundry Local Homepage
- Foundry Local Documentation
- Foundry Local SDK Reference
- foundry-local-sdk (PyPI)
- Foundry Local GitHub
- Microsoft Agent Framework GitHub
- Agent Framework Orchestration Patterns
- Foundry Local Workshop (Community)
See agents.md for the full agent reference — every agent across all seven demos, their roles, instructions, a pattern decision guide, and model configuration documentation.
| Home | Model Settings — Foundry Local | Model Settings — Azure Foundry |
|---|---|---|
| ![]() | ![]() | ![]() |
| Demo 1: Maker-Checker | Demo 2: Hierarchical Research |
|---|---|
| ![]() | ![]() |
| Demo 3: Hand-off Support | Demo 4: Network Brainstorm |
|---|---|
| ![]() | ![]() |
| Demo 5: Supervisor Router | Demo 6: Swarm + Auditor |
|---|---|
| ![]() | ![]() |
| Demo 7: Magentic One Assessment | |
|---|---|
| ![]() | |
Regenerate screenshots and a walkthrough video with:
```bash
python capture_screenshots.py --video
# Outputs to screenshots/ and screenshots/video/demo_walkthrough.mp4
```

MIT
Note: This project is designed for local development and demos. The web server has no authentication and should only be run on localhost. See SECURITY.md for details.










