


Why I built AEVRIS — the problem I kept running into
Every AI company I looked at was focused on making LLMs more capable. Almost nobody was focused on making them secure in production. The more I dug into it, the more obvious the gap became. Input scanning existed — but if an attack got through, there was nothing on the other side. No output verification. No behavioral monitoring. No way to know if your LLM had been compromised until a user...
Unpopular opinion: input-only AI security is security theater
Every major AI security product scans inputs. Lakera, PromptArmor, Rebuff — all inputs. None of them scan what the LLM sends back. That means if a jailbreak gets through — and some always do — there is zero detection at the response layer. The attack succeeded. The compromised output is already on its way to your user. Your security dashboard still shows green. That's not security. That's the...
The AI security gap nobody is talking about
Every AI security product on the market scans inputs. Nobody scans outputs. Here's why that matters: if a jailbreak succeeds and your LLM starts behaving badly, every existing security tool has already failed. The compromised response still reaches your user. You find out when someone screenshots it. Output alignment verification is the missing layer. That's what AEVRIS does — and it's the...
I built AEVRIS — ask me anything about LLM security, MCP attacks, or the detection architecture
Launching AEVRIS tomorrow at 12:01am. AEVRIS is the only AI security platform that scans both sides of your LLM — input and output. Happy to answer anything about: How the 5-agent detection pipeline works Why output alignment verification matters (and why nobody else does it) MCP tool poisoning — what it is and how we catch it The AGI Alignment Guard and what behavioral misalignment looks like...
