
AutoGen Integration

Provara integrates with Microsoft AutoGen to provide safe command execution for multi-agent systems. AutoGen agents can queue commands through Provara’s approval pipeline instead of executing them directly.

Install AutoGen dependencies:

Terminal window
uv sync --extra autogen

This adds:

  • autogen-agentchat >= 0.4.0
  • autogen-ext[openai] >= 0.4.0
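
To confirm the extra resolved correctly, a quick import check works (this one-liner is only a sanity check, not part of Provara's tooling):

Terminal window
uv run python -c "import autogen_agentchat, autogen_ext; print('autogen ok')"
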
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   AutoGen    │     │   Provara    │     │    Human     │
│   Planner    │────▶│  /plan API   │────▶│   Approval   │
│    Agent     │     │              │     │              │
└──────────────┘     └──────────────┘     └──────────────┘

The AutoGen agent decides what to execute, but the command goes through Provara’s approval queue before reaching the system.

import os
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

from agents.hub_tool import plan

MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")
client = OpenAIChatCompletionClient(model=MODEL)

SYSTEM = """You are a cautious operator.
You MUST output EXACTLY one line:
CMD:<powershell command>
No other text. No quotes. Read-only diagnostics only.
Never approve/deny."""

PROMPT = """Return ONE safe read-only PowerShell command to inspect
the Provara local API status.
Prefer one of:
- Test-NetConnection 127.0.0.1 -Port 8787
- Get-Process python | Select Id,Path,StartTime
- Get-ChildItem .\\runtime\\pending | Select Name,Length,LastWriteTime
Output format MUST be exactly: CMD:<powershell command>"""

planner = AssistantAgent("planner", model_client=client, system_message=SYSTEM)

async def main():
    # Single-agent team limited to one turn: the planner answers once and stops.
    team = RoundRobinGroupChat([planner], max_turns=1)
    msg = TextMessage(content=PROMPT, source="user")
    result = await team.run(task=msg)

    # Extract the CMD: line from the planner's last message.
    cmdline = None
    for m in reversed(result.messages):
        if getattr(m, "source", None) == "planner":
            cmdline = (m.content or "").strip()
            break
    if not cmdline or not cmdline.startswith("CMD:"):
        raise SystemExit(f"Bad output from planner: {cmdline!r}")

    # Queue the command with Provara; nothing runs until a human approves it.
    command = cmdline[4:].strip()
    plan(command, note="autogen queued")

if __name__ == "__main__":
    asyncio.run(main())
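
To try it, save the script (the filename here is arbitrary) and run it with the environment variables listed at the end of this page:

Terminal window
uv run python autogen_planner.py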

The system message constrains the agent to output a single line in CMD: format. This makes parsing deterministic:

SYSTEM = """You MUST output EXACTLY one line: CMD:<command>"""

The prompt explicitly guides the agent toward read-only diagnostics:

PROMPT = """Return ONE safe read-only PowerShell command..."""

Even if the agent suggests a destructive command, Provara’s policy engine will block it.
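
If you want an additional client-side guard before anything is queued, you can screen the planner's output locally. The sketch below is illustrative and not part of Provara; the verb list is arbitrary, and the policy engine remains the authoritative check.

import re

# Illustrative pre-filter, not a Provara feature: reject output that
# contains obviously destructive PowerShell verbs before queueing it.
DESTRUCTIVE = re.compile(
    r"\b(Remove-Item|Stop-Process|Stop-Service|Set-Content|Format-Volume|Restart-Computer)\b",
    re.IGNORECASE,
)

def looks_read_only(command: str) -> bool:
    return DESTRUCTIVE.search(command) is None

# In main(), before plan(command, ...):
#   if not looks_read_only(command):
#       raise SystemExit(f"Refusing to queue: {command!r}")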

Using max_turns=1 prevents the agent from entering multi-turn loops:

team = RoundRobinGroupChat([planner], max_turns=1)

The plan() function queues the command — it does not execute it. A human must approve via the UI or /approve API.
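
Approval itself normally happens in the UI, but an operator can also call the /approve endpoint directly. The request shape below is an assumption (the field names, auth header, and use of the requests package are not confirmed by this page); check the Provara API reference for the exact contract.

import os
import requests  # assumed HTTP client; any client works

BASE = os.environ["AGENT_HUB_BASE"]    # e.g. http://127.0.0.1:8787
TOKEN = os.environ["AGENT_HUB_TOKEN"]

# Hypothetical payload and header names, for illustration only.
resp = requests.post(
    f"{BASE}/approve",
    json={"id": "<pending-command-id>"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()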

For more complex workflows, you can use multiple AutoGen agents where one generates commands and another validates them before queueing:

planner = AssistantAgent("planner", model_client=client,
                         system_message="Generate diagnostic commands in CMD: format")
reviewer = AssistantAgent("reviewer", model_client=client,
                          system_message="Review the command. Reply APPROVED or REJECTED with reason")
team = RoundRobinGroupChat([planner, reviewer], max_turns=2)

After the reviewer approves, extract the command and pass it through plan().
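
A minimal sketch of that hand-off, reusing TextMessage, PROMPT, and plan from the earlier script (the APPROVED/REJECTED convention comes from the reviewer's system message above; everything else is an assumption about how you wire the two agents):

async def plan_with_review():
    result = await team.run(task=TextMessage(content=PROMPT, source="user"))

    # Collect the planner's CMD: line and the reviewer's verdict.
    cmdline, verdict = None, None
    for m in result.messages:
        src = getattr(m, "source", None)
        text = (getattr(m, "content", "") or "").strip()
        if src == "planner" and text.startswith("CMD:"):
            cmdline = text
        elif src == "reviewer":
            verdict = text

    if not cmdline or not verdict or not verdict.upper().startswith("APPROVED"):
        raise SystemExit(f"Not queueing: cmd={cmdline!r} verdict={verdict!r}")

    # Still only queued: Provara's human approval step is unchanged.
    plan(cmdline[4:].strip(), note="autogen planner+reviewer")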

Required environment variables:

Terminal window
# Provara
AGENT_HUB_BASE=http://127.0.0.1:8787
AGENT_HUB_TOKEN=your-token
# OpenAI (for AutoGen)
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
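
If you want the script to fail fast when configuration is missing, a small startup check is enough (the variable names mirror the list above; nothing here is Provara-specific):

import os
import sys

REQUIRED = ["AGENT_HUB_BASE", "AGENT_HUB_TOKEN", "OPENAI_API_KEY"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    sys.exit(f"Missing required environment variables: {', '.join(missing)}")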