Every AI company I looked at was focused on making LLMs more capable. Almost nobody was focused on making them secure in production.
The more I dug into it, the more obvious the gap became. Input scanning existed, but if an attack got through, there was nothing on the other side. No output verification. No behavioral monitoring. No way to know if your LLM had been compromised until a user caught it.
I kept asking: who is watching what the LLM sends back? Who is worried about adversarial AI? The answer was nobody.
24-hour update:
We shipped Phase 3 on launch day: the Agent Action Firewall.
On the same morning we launched, a Claude-powered agent publicly deleted an entire production database in 9 seconds and wrote a confession listing the safety rules it violated. We had a fix live in production by that afternoon.
POST /v1/scan/action now intercepts any action your agent wants to take before it executes. DROP TABLE is auto-blocked. DELETE FROM is held for your approval. The agent cannot proceed until you approve or deny the action.
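If you want to see what wiring this up might look like, here's a minimal Python sketch of calling the endpoint. The base URL, auth header, request body, and response fields (action, decision, reason) are my assumptions for illustration, not the documented schema; check the API docs for the real one.

```python
import requests

# Hypothetical endpoint and key; the real base URL and schema may differ.
AEVRIS_SCAN_URL = "https://api.aevris.ai/v1/scan/action"
API_KEY = "YOUR_API_KEY"

def scan_action(action_text: str) -> dict:
    """Submit a proposed agent action for scanning before executing it."""
    resp = requests.post(
        AEVRIS_SCAN_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"action": action_text},  # assumed request field
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

verdict = scan_action("DROP TABLE users;")
if verdict.get("decision") == "block":   # assumed response field
    print("Blocked:", verdict.get("reason"))
elif verdict.get("decision") == "hold":
    print("Held for human approval")
else:
    print("Allowed")
```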
This is the fourth capability no competitor has. Patent pending.
Free tier is live.
Try it: aevris.ai/?go
Update from launch day: we shipped Phase 3 today, the Agent Action Firewall.
A Claude-powered agent publicly deleted an entire production database this morning in 9 seconds. Our new POST /v1/scan/action endpoint catches exactly this: it classifies agent actions by reversibility, auto-blocks destructive operations, and holds irreversible ones for human approval before they execute.
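To make that concrete, here is one way you might gate an agent's tool execution on the scan result. The scan_action stand-in below simulates the classifier locally so the sketch runs offline; the decision values (allow, hold, block) and the hold-for-approval flow are assumptions, not documented behavior.

```python
# Stand-in for the real POST /v1/scan/action call, so this sketch runs
# without network access. Decision values are illustrative assumptions.
def scan_action(action_text: str) -> dict:
    upper = action_text.upper()
    if "DROP TABLE" in upper:
        return {"decision": "block", "reason": "irreversible: drops a table"}
    if "DELETE FROM" in upper:
        return {"decision": "hold", "reason": "destructive: needs approval"}
    return {"decision": "allow"}

def guarded_execute(action_text: str, execute):
    """Run `execute` only if the firewall allows the proposed action."""
    verdict = scan_action(action_text)
    if verdict["decision"] == "block":
        raise PermissionError(f"blocked: {verdict['reason']}")
    if verdict["decision"] == "hold":
        # Irreversible but not auto-blocked: park it for a human decision.
        print(f"held for approval: {action_text!r}")
        return None
    return execute()

# The agent proposes a destructive SQL statement; it never reaches the DB.
guarded_execute("DELETE FROM orders;", lambda: print("executed"))
# -> held for approval: 'DELETE FROM orders;'
```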
Reduced to practice and live in production today.
This is the layer nobody was building. We are now.
Quick update for anyone evaluating AEVRIS:
On the same day we launched, a Claude agent publicly deleted an entire production database. We shipped the Agent Action Firewall that same afternoon.
If you're building AI agents and want to see a live demo of the action firewall catching a destructive operation, DM me directly. Happy to walk you through it personally.
Free tier: aevris.ai/?go