
Custom Agents

When building AI agents that interact with Provara, follow these principles (a minimal example follows the list):

  1. Agents propose, humans dispose — Agents queue commands, humans approve them
  2. Read-only first — Start with diagnostic commands before attempting modifications
  3. One command at a time — Queue individual commands, not scripts
  4. Descriptive notes — Always include a note explaining the agent’s intent
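
In practice, each agent turn is a single plan() call with a clear note. A minimal sketch (the PowerShell command is only illustrative):

from agents.hub_tool import plan

# One read-only command per turn, with a note explaining intent.
# A human reviews it in Provara; the agent only proposes.
result = plan(
    command="Get-Service | Where-Object Status -eq 'Stopped' | Measure-Object",
    note="diagnostic: count stopped services before suggesting any restarts",
)
print(f"Queued as {result['pending_id']}; waiting for a human decision.")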

Use this prompt structure to constrain agent behavior:

You have access to my local Provara server.
RULES:
- You MUST ONLY queue commands via the plan() function
- You MUST NOT call /approve or /deny, ever
- Propose ONE command at a time. Wait after queueing.
- Prefer read-only diagnostics (status, list, test) over changes.
ENV:
AGENT_HUB_BASE=http://127.0.0.1:8787
AGENT_HUB_TOKEN=<provided at runtime>
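
The ENV values are meant to be filled in at runtime, not hard-coded. One way to assemble this prompt, assuming AGENT_HUB_BASE and AGENT_HUB_TOKEN are exported in the agent's environment (a sketch, not part of Provara itself):

import os

PROMPT_TEMPLATE = """You have access to my local Provara server.
RULES:
- You MUST ONLY queue commands via the plan() function
- You MUST NOT call /approve or /deny, ever
- Propose ONE command at a time. Wait after queueing.
- Prefer read-only diagnostics (status, list, test) over changes.
ENV:
AGENT_HUB_BASE={base}
AGENT_HUB_TOKEN={token}"""

def build_system_prompt() -> str:
    """Substitute the connection details from environment variables at runtime."""
    return PROMPT_TEMPLATE.format(
        base=os.environ.get("AGENT_HUB_BASE", "http://127.0.0.1:8787"),
        token=os.environ["AGENT_HUB_TOKEN"],  # read from the environment, never hard-coded
    )
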
Whichever model you use, the flow is the same: the LLM proposes, plan() queues, and a human approves inside Provara.

┌─────────────────────────────────────┐
│ Your Agent                          │
│                                     │
│  ┌──────────┐    ┌───────────────┐  │
│  │   LLM    │    │    plan()     │  │
│  │  (any)   │───▶│ function call │──┼──▶ Provara API
│  └──────────┘    └───────────────┘  │
└─────────────────────────────────────┘
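
The plan() box above is a thin wrapper around Provara's /plan endpoint. A sketch of what such a wrapper might look like, reusing the endpoint and X-Agent-Token header shown in the polling example further down (the real agents.hub_tool may differ, and passing timeout_s in the request body is an assumption):

import os
import httpx

def plan(command: str, note: str = "", timeout_s: int = 300) -> dict:
    """Queue one command for human approval and return Provara's response."""
    base = os.environ.get("AGENT_HUB_BASE", "http://127.0.0.1:8787")
    headers = {"X-Agent-Token": os.environ["AGENT_HUB_TOKEN"]}
    r = httpx.post(
        f"{base}/plan",
        headers=headers,
        json={"command": command, "note": note, "timeout_s": timeout_s},  # timeout_s field assumed
    )
    r.raise_for_status()
    return r.json()  # includes pending_id
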
"""
A minimal agent that checks system status via Provara.
"""
from agents.hub_tool import plan
def diagnostic_agent():
"""Run a series of diagnostic checks."""
checks = [
("whoami", "Verify running user"),
("hostname", "Check hostname"),
("Get-Date", "Current system time"),
("Get-Process python | Measure-Object", "Count Python processes"),
]
for command, note in checks:
result = plan(command=command, note=f"diagnostic: {note}")
print(f"Queued: {result['pending_id']}{note}")
"""
An agent where an LLM decides what commands to run.
"""
import re
from agents.hub_tool import plan
def llm_agent(llm_client, user_request: str):
"""Let an LLM generate a command based on user request."""
prompt = f"""You are a system diagnostics assistant.
The user wants: {user_request}
Generate ONE PowerShell command to help. Output format:
CMD:<powershell command>
Rules:
- Read-only commands only
- No deletions, downloads, or modifications
- No encoded commands or Invoke-Expression"""
response = llm_client.generate(prompt)
match = re.search(r"CMD:(.+)", response)
if not match:
print(f"LLM did not produce a valid command: {response[:100]}")
return None
command = match.group(1).strip()
result = plan(command=command, note=f"llm-agent: {user_request[:100]}")
return result
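
Before handing an LLM-generated command to plan(), it is worth screening it against the same rules the prompt states. A rough deny-list check (the patterns are illustrative, not exhaustive, and no substitute for human review):

import re

# Patterns that contradict the prompt's rules; extend for your environment.
SUSPICIOUS_PATTERNS = [
    r"\bRemove-Item\b", r"\brm\b", r"\bdel\b",            # deletions
    r"\bInvoke-Expression\b", r"\biex\b",                 # dynamic execution
    r"-EncodedCommand", r"-enc\b",                        # encoded commands
    r"\bInvoke-WebRequest\b", r"\bcurl\b", r"\bwget\b",   # downloads
]

def looks_suspicious(command: str) -> bool:
    """Return True if the command matches any deny-listed pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

llm_agent() could call looks_suspicious(command) and refuse to queue anything that matches, in addition to relying on human review.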

For complex operations that require multi-line scripts:

"""
Queue a script by writing it to workspace/inbox, then executing it.
"""
from pathlib import Path
from agents.hub_tool import plan
def run_script(script_name: str, script_content: str):
"""Stage a script in workspace/inbox and queue its execution."""
# Write script to inbox
inbox = Path("workspace/inbox")
inbox.mkdir(parents=True, exist_ok=True)
script_path = inbox / script_name
script_path.write_text(script_content, encoding="utf-8")
# Queue execution via Provara
command = f'powershell -NoProfile -ExecutionPolicy Bypass -File "{script_path.resolve()}"'
result = plan(
command=command,
note=f"script: {script_name}",
timeout_s=600
)
return result
# Usage
run_script("check_disk.ps1", """
$drives = Get-PSDrive -PSProvider FileSystem
$drives | Select Name, @{N='Used(GB)';E={[math]::Round($_.Used/1GB,2)}},
@{N='Free(GB)';E={[math]::Round($_.Free/1GB,2)}} | Format-Table
""")

An agent that queues a command and polls for the result:

import os
import time
import httpx

BASE = os.environ.get("AGENT_HUB_BASE", "http://127.0.0.1:8787")
HEADERS = {"X-Agent-Token": os.environ["AGENT_HUB_TOKEN"]}  # never hard-code the token

def queue_and_poll(command: str, note: str = ""):
    """Queue a command and poll until it's been processed."""
    # Queue
    r = httpx.post(
        f"{BASE}/plan",
        headers=HEADERS,
        json={"command": command, "note": note},
    )
    pending_id = r.json()["pending_id"]
    print(f"Queued: {pending_id} — waiting for approval...")
    # Poll the pending queue until our command is no longer in it
    while True:
        r = httpx.get(f"{BASE}/pending", headers=HEADERS)
        pending_ids = [p["pending_id"] for p in r.json()]
        if pending_id not in pending_ids:
            print("Command was approved or denied")
            break
        time.sleep(2)
    # Check recent runs for the result
    r = httpx.get(f"{BASE}/runs", headers=HEADERS, params={"limit": 5})
    for run in r.json():
        if run.get("command") == command:
            return run
    return None
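
A call to the helper might look like this (the command and note are only illustrative):

# Usage
run = queue_and_poll("Get-Date", note="poll demo: current system time")
print(run)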

When designing agents:

  • Never embed tokens in source code — Use environment variables
  • Validate LLM output before passing to plan() — Check for suspicious patterns
  • Log agent decisions — Keep your own log of what the agent intended
  • Rate limit agent calls — Don’t let agents flood the approval queue (see the sketch after this list)
  • Use timeout_s appropriately — Long-running commands should have longer timeouts
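
A simple way to rate limit is to enforce a minimum interval between plan() calls, as in this sketch (the interval is arbitrary):

import time
from agents.hub_tool import plan

MIN_INTERVAL_S = 30  # at most one proposal every 30 seconds; tune as needed
_last_call = float("-inf")

def rate_limited_plan(command: str, note: str = ""):
    """Queue a command, but never faster than one every MIN_INTERVAL_S seconds."""
    global _last_call
    wait = MIN_INTERVAL_S - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return plan(command=command, note=note)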