PIC Standard: AI Action Firewall - Stop prompt injection from triggering tools.
An open protocol that forces AI agents to prove their intent and back every high-impact action with verifiable evidence before anything dangerous happens.
Quick benefits:
- Stops prompt-injection disasters and hallucinations from turning into real money losses or data leaks
- Works locally: no sending sensitive data to the cloud
- Plugs right into LangGraph or your existing agent stack in minutes
- MCP-ready
- Free & open-source (Apache 2.0): audit it, fork it, own it


Replies
Hey Product Hunt 👋 I’m Fabio, maker of PIC Standard: AI Action Firewall.
Yes, the viral hook is “stop prompt injection from triggering tools”… but PIC is bigger than prompt injection.
PIC is a general standard for governing agent side effects. Open source, Apache 2.0.
Any time an agent is about to call a tool with real impact (💸 money, 🔐 privacy/data export, ☁️ infra/compute, irreversible ops), PIC forces a machine-verifiable contract before execution:
- The agent must produce an Action Proposal (PIC/1.0 schema + verifier)
- The proposal ties together: intent → impact class → provenance → claims → evidence → exact tool call
- If trust/evidence is insufficient, the verifier fails closed and blocks the action
- v0.4.1 supports deterministic, resolvable SHA-256 and Ed25519 signature evidence (evidence IDs can point to real artifacts)
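To make that contract concrete, here is a minimal sketch of an Action Proposal and a fail-closed verifier. Everything in it is an illustrative assumption, not the actual PIC/1.0 schema: the field names, the impact classes, and the verification flow are made up for the example. The SHA-256 check uses Python's stdlib `hashlib`; the Ed25519 check uses the `cryptography` package.

```python
# Illustrative sketch only. Field names and flow are assumptions,
# NOT the actual PIC/1.0 schema. Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A hypothetical Action Proposal tying intent -> impact class -> provenance
# -> claims -> evidence -> exact tool call.
artifact = b'{"invoice_id": "INV-42", "amount_usd": 120.0}'
proposal = {
    "pic_version": "1.0",
    "intent": "Pay supplier invoice INV-42",
    "impact_class": "money",  # high-impact: must be gated
    "provenance": {"source": "user_request", "trusted": True},
    "claims": ["invoice approved by finance"],
    "evidence": [
        {"type": "sha256", "id": hashlib.sha256(artifact).hexdigest()},
    ],
    "tool_call": {"name": "payments.create", "args": {"amount_usd": 120.0}},
}

HIGH_IMPACT = {"money", "privacy", "infra", "irreversible"}

def verify(proposal, resolve_artifact, signer_public_key=None) -> bool:
    """Fail-closed: every evidence ID must resolve to a real artifact and
    verify deterministically, otherwise the tool call is blocked."""
    if proposal.get("impact_class") not in HIGH_IMPACT:
        return True  # low-impact actions pass through (a policy choice)
    evidence = proposal.get("evidence", [])
    if not evidence:
        return False  # no evidence for a high-impact action -> block
    for ev in evidence:
        data = resolve_artifact(ev["id"])  # evidence IDs point to artifacts
        if data is None:
            return False  # unresolvable evidence -> block
        if ev["type"] == "sha256":
            if hashlib.sha256(data).hexdigest() != ev["id"]:
                return False  # hash mismatch -> block
        elif ev["type"] == "ed25519":
            try:
                signer_public_key.verify(bytes.fromhex(ev["sig"]), data)
            except (InvalidSignature, AttributeError):
                return False  # bad or unverifiable signature -> block
        else:
            return False  # unknown evidence type -> fail closed
    return True

# Usage: execute the exact tool call from the proposal only if verify() passes.
store = {hashlib.sha256(artifact).hexdigest(): artifact}  # toy evidence store
key = Ed25519PrivateKey.generate()
proposal["evidence"].append(
    {"type": "ed25519", "id": hashlib.sha256(artifact).hexdigest(),
     "sig": key.sign(artifact).hex()}
)
if verify(proposal, store.get, signer_public_key=key.public_key()):
    print("allowed:", proposal["tool_call"])
else:
    print("blocked (fail-closed)")
```

The design point the sketch tries to capture: the verifier never trusts the model's narrative, only artifacts it can resolve and hashes or signatures it can recompute.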
Why this matters: guardrails mostly focus on what the model says. PIC focuses on what the agent can do.
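One way to picture that difference: enforcement can live on the tool itself, so an unverified call cannot run no matter what the model outputs. A hypothetical decorator (illustrative names, not the PIC API):

```python
# Hypothetical gating decorator: names are illustrative, not the PIC API.
from functools import wraps

def requires_verified_proposal(impact_class, verifier):
    """Wrap a tool so it refuses to execute without a verified proposal."""
    def decorate(tool_fn):
        @wraps(tool_fn)
        def gated(*args, proposal=None, **kwargs):
            if proposal is None or not verifier(proposal):
                # Fail closed: no verified proposal, no side effect.
                raise PermissionError(f"{tool_fn.__name__}: blocked ({impact_class})")
            return tool_fn(*args, **kwargs)
        return gated
    return decorate

@requires_verified_proposal("money", verifier=lambda p: False)  # demo: always block
def create_payment(amount_usd: float) -> str:
    return f"paid ${amount_usd}"

# create_payment(120.0)  # raises PermissionError: blocked (money)
```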
If you are building agents with tools, I would love feedback:
- Which tool actions should be “high impact” by default?
- What integration should be next after LangGraph + MCP?
Try it quickly: the CLI and examples are in the repo.
If PIC resonates:
⭐ Star the repo so other agent builders find it.
🤝 Contributors welcome, especially for new impact classes (email, billing, CRM export) and integrations (CrewAI next).
Repo: https://github.com/madeinplutofabio/pic-standard