Labels: `documentation` (Improvements or additions to documentation), `question` (Question about using the SDK)
## Summary
We've built a governance integration for the OpenAI Agents SDK in the Agent Governance Toolkit (MIT-licensed, 6,100+ tests). The adapter lives at `packages/agentmesh-integrations/openai-agents-trust/`.
## What it provides (distinct from prompt-level guardrails)
| Capability | Description |
|---|---|
| Policy enforcement | Deterministic allow/deny rules before tool execution (<0.1ms) |
| Trust guardrails | Cryptographic agent identity with trust scoring (0–1000) |
| Governance hooks | Pre/post execution hooks for policy and audit |
| Audit logging | Hash-chained audit trail for every agent action |
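The hash-chained audit trail in the last row can be sketched as follows. This is a minimal illustration of the technique, not the toolkit's actual API; the `AuditLog` class and its field names are hypothetical:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry embeds the hash of the previous
    entry, so tampering with any earlier record invalidates every later
    hash. (Hypothetical sketch, not the toolkit's real implementation.)"""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def append(self, agent_id: str, action: str, detail: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "agent_id": agent_id,
            "action": action,
            "detail": detail,
            "ts": time.time(),
            "prev_hash": prev_hash,
        }
        # Canonical JSON (sorted keys) so the hash is reproducible on verify.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every entry's hash and check each chain link."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            unhashed = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each entry's hash covers the previous entry's hash, an auditor only needs the final hash to detect modification of any earlier record.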
This is complementary to the SDK's existing guardrails — those focus on prompt/output safety, while this handles runtime governance (which tools can be called, by which agents, with what permissions).
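To make the runtime-governance distinction concrete, here is a sketch of deterministic allow/deny evaluation gated by trust score. The rule shape, the wildcard convention, and the example rules are all hypothetical, assumed for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyRule:
    """Hypothetical rule shape: which agent may call which tool,
    and the minimum trust score (0-1000) required."""
    agent: str          # agent id, or "*" wildcard
    tool: str           # tool name, or "*" wildcard
    min_trust: int = 0
    effect: str = "allow"  # "allow" or "deny"

def evaluate(rules, agent: str, tool: str, trust: int) -> bool:
    """First matching rule wins; no match means deny (closed policy)."""
    for r in rules:
        if r.agent in (agent, "*") and r.tool in (tool, "*"):
            return r.effect == "allow" and trust >= r.min_trust
    return False

# Example rule set (hypothetical tool and agent names).
rules = [
    PolicyRule(agent="*", tool="delete_database", effect="deny"),
    PolicyRule(agent="billing-agent", tool="charge_card", min_trust=800),
    PolicyRule(agent="*", tool="search_docs", min_trust=100),
]
```

Evaluation is a simple linear scan over static rules with no I/O or model call, which is what makes sub-millisecond pre-execution checks plausible.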
## Integration approach
The adapter wraps the Agent class with governance middleware, intercepting tool calls for policy evaluation. No changes to the SDK needed.
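The interception pattern can be sketched SDK-agnostically: wrap each tool callable with a pre-execution policy check and a post-execution audit hook before registering it with the agent. The class and function names below are hypothetical, not the adapter's real API:

```python
from typing import Any, Callable

class GovernedTool:
    """Wraps a tool callable with governance hooks: a policy check
    before execution and an audit record after. (Illustrative sketch.)"""

    def __init__(self, tool: Callable[..., Any], name: str,
                 policy_check: Callable[[str, dict], bool],
                 audit: Callable[[str, dict, Any], None]):
        self.tool = tool
        self.name = name
        self.policy_check = policy_check
        self.audit = audit

    def __call__(self, **kwargs: Any) -> Any:
        # Pre-execution hook: deterministic policy decision.
        if not self.policy_check(self.name, kwargs):
            raise PermissionError(f"policy denied tool call: {self.name}")
        result = self.tool(**kwargs)
        # Post-execution hook: record the call and its result.
        self.audit(self.name, kwargs, result)
        return result

def govern_tools(tools, policy_check, audit):
    """Wrap every tool before handing the list to the agent,
    so the SDK itself needs no changes."""
    return [GovernedTool(t, t.__name__, policy_check, audit) for t in tools]
```

Because governance lives in the wrapper rather than the SDK, the same pattern applies to any framework that accepts plain callables as tools.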
```bash
pip install openai-agents-trust
```

## Why this matters
- Enterprise deployment — runtime policy enforcement and audit trails are prerequisites for production
- OWASP coverage — addresses OWASP Agentic Top 10 risks at the runtime layer
- Handoff governance — trust-gated agent handoffs with accountability
## Open question
Would there be interest in listing this as a community integration or documenting a governance middleware pattern?