AIP vs prompt guardrails - why we chose cryptography
Most AI safety tools use prompt-level filters:
- "Don't do anything harmful"
- LLM-as-judge (another model watching the first one)
- Retrieval-based guardrails
The problem: all of these are probabilistic. A clever prompt injection
bypasses them. They might work 95% of the time, but a 5% failure rate
on financial transactions is a disaster.
AIP takes a different approach: cryptographic enforcement.
- Ed25519 signatures (not prompt parsing)
- Deterministic boundary checks (not "vibes-based safety")
- <1ms latency (not 500ms LLM-as-judge calls)
Curious what approach others are using for agent security?