Guardian – Governance infrastructure for AI agents
AI agents can now execute code, call APIs, and run tools.
But most agent architectures still look like:
Agent → Tool → Execution
That raises a question:
Who approved the action?
I built an open source project called Guardian.
It introduces a deterministic governance layer between agent intent and execution.
Intent → Policy → Decision → Evidence → Execution
The goal is to make autonomous systems:
• auditable
• deterministic
• policy-governed
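To make that concrete, here is a minimal sketch of the Intent → Policy → Decision → Evidence flow in Python. All of the names here (Intent, Policy, Guardian, evidence_id) are illustrative assumptions for this post, not Guardian's actual API:

    import hashlib
    import json
    import time
    from dataclasses import dataclass

    @dataclass
    class Intent:
        """What the agent wants to do, before anything executes."""
        action: str
        params: dict

    @dataclass
    class Decision:
        allowed: bool
        rule: str          # which policy rule fired
        evidence_id: str   # hash linking the decision to its audit record

    class Policy:
        """Deterministic rules: the same intent always yields the same decision."""
        def __init__(self, allowed_actions: set[str]):
            self.allowed_actions = allowed_actions

        def evaluate(self, intent: Intent) -> tuple[bool, str]:
            if intent.action in self.allowed_actions:
                return True, f"allow:{intent.action}"
            return False, f"deny:{intent.action}"

    class Guardian:
        def __init__(self, policy: Policy):
            self.policy = policy
            self.evidence: list[dict] = []  # append-only audit trail

        def decide(self, intent: Intent) -> Decision:
            allowed, rule = self.policy.evaluate(intent)
            record = {
                "ts": time.time(),
                "intent": {"action": intent.action, "params": intent.params},
                "rule": rule,
                "allowed": allowed,
            }
            # Hash the record so each decision is tamper-evident and citable.
            evidence_id = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()[:16]
            record["evidence_id"] = evidence_id
            self.evidence.append(record)
            return Decision(allowed, rule, evidence_id)

The key property is that evaluate() is pure: no model calls, no randomness. The same intent always produces the same decision, and every decision is logged before anything executes.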
Repo:
https://github.com/xsa520/guardian
Would love feedback from people building agent systems.

Replies
Architecture overview:
LLM
↓
Agent
↓
Guardian
↓
Execution
↓
Evidence
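In code terms, Guardian sits in the call path, so a tool only runs after a logged, policy-checked decision. A rough sketch, continuing the illustrative classes from the post above (again, assumptions, not the real API):

    class PolicyViolation(Exception):
        pass

    def governed_call(guardian: Guardian, tool_fn, intent: Intent):
        """Execute the tool only if a logged decision allows it."""
        decision = guardian.decide(intent)
        if not decision.allowed:
            raise PolicyViolation(f"{decision.rule} (evidence {decision.evidence_id})")
        return tool_fn(**intent.params)

    # The agent proposes, Guardian decides and logs, then the tool runs.
    guardian = Guardian(Policy(allowed_actions={"search"}))
    result = governed_call(
        guardian,
        lambda query: f"results for {query!r}",
        Intent("search", {"query": "agent governance"}),
    )
    print(result, guardian.evidence[-1]["evidence_id"])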
https://github.com/xsa520/guardian
Hello Aria
This is a really important problem to solve. As AI agents get more autonomous, the gap between "agent decided to do X" and "someone approved X" is getting dangerously wide. Adding a deterministic governance layer that creates an audit trail is exactly what enterprise adoption needs. The Intent → Policy → Decision → Evidence → Execution flow is clean. Have you thought about integrating with existing agent frameworks like LangChain or CrewAI?