AEVRIS
p/aevris
Real-time AI threat detection for LLM deployments
Aevris

7h ago

Unpopular opinion: input-only AI security is security theater

Every major AI security product scans inputs. Lakera, PromptArmor, Rebuff: all of them scan inputs only. None of them scan what the LLM sends back.

That means if a jailbreak gets through (and some always do), there is zero detection at the response layer. The attack succeeded. The compromised output is already on its way to your user. Your security dashboard still shows green. That's not security. That's the illusion of security.

Output alignment verification is the layer that actually closes this gap. It's what AEVRIS does, and why we call it the first commercial product to protect both sides of your LLM. Launching tomorrow. Curious what others think: am I wrong? Is input-only scanning enough?
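To make the idea concrete, here is a minimal sketch of an output-side check. The patterns and function names are illustrative placeholders, not the AEVRIS implementation: the point is just that the response itself gets screened before it reaches the user, regardless of what got past the input filters.

```python
import re

# Illustrative red flags for a compromised response: echoed injection
# phrases or a leaked system prompt. A real detector would be far richer.
LEAK_PATTERNS = [
    re.compile(r"(?i)ignore (all|previous) instructions"),
    re.compile(r"(?i)my system prompt (is|says)"),
]

def screen_response(llm_output: str) -> tuple[bool, str]:
    """Return (allowed, text). Block responses that match a policy
    pattern; pass everything else through unchanged."""
    for pat in LEAK_PATTERNS:
        if pat.search(llm_output):
            return False, "[response withheld: policy violation detected]"
    return True, llm_output
```

Even this toy version catches a class of failure that input-only scanning is structurally blind to.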

Aevris

7h ago

Why I built AEVRIS: the problem I kept running into

Every AI company I looked at was focused on making LLMs more capable. Almost nobody was focused on making them secure in production.

The more I dug into it, the more obvious the gap became. Input scanning existed but if an attack got through, there was nothing on the other side. No output verification. No behavioral monitoring. No way to know if your LLM had been compromised until a user caught it.

I kept asking: who is watching what the LLM sends back? Who is worried about adversarial AI? The answer was nobody.

Aevris

7h ago

I built AEVRIS: ask me anything about LLM security, MCP attacks, or the detection architecture

Launching AEVRIS tomorrow at 12:01am. AEVRIS is the only AI security platform that scans both sides of your LLM: input and output.

Happy to answer anything about:

  • How the 5-agent detection pipeline works

  • Why output alignment verification matters (and why nobody else does it)

  • MCP tool poisoning: what it is and how we catch it

  • The AGI Alignment Guard and what behavioral misalignment looks like at runtime

  • How to integrate AEVRIS in under 10 minutes

Open to other questions as well.
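On the integration question, the shape of it is a thin wrapper around your existing LLM call: scan the prompt before the model, scan the response after. The sketch below is hypothetical (the `ScanResult`, `scan_input`, `scan_output`, and `guard_call` names are mine, not the actual AEVRIS SDK), but it shows where the two scans sit in the request path.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScanResult:
    allowed: bool
    reason: str = ""

def scan_input(prompt: str) -> ScanResult:
    # Placeholder for a pre-call threat scan (prompt injection, etc.).
    if "ignore previous instructions" in prompt.lower():
        return ScanResult(False, "prompt injection")
    return ScanResult(True)

def scan_output(response: str) -> ScanResult:
    # Placeholder for post-call output alignment verification.
    if "system prompt" in response.lower():
        return ScanResult(False, "possible system-prompt leak")
    return ScanResult(True)

def guard_call(llm: Callable[[str], str], prompt: str) -> str:
    """Wrap any LLM call with an input scan and an output scan."""
    pre = scan_input(prompt)
    if not pre.allowed:
        return f"[blocked before the model: {pre.reason}]"
    response = llm(prompt)
    post = scan_output(response)
    if not post.allowed:
        return f"[blocked after the model: {post.reason}]"
    return response
```

Swapping `llm` for your actual model client is the whole integration; nothing else in the request path changes.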

Aevris

7h ago

The AI security gap nobody is talking about

Every AI security product on the market scans inputs. Nobody scans outputs. Here's why that matters: if a jailbreak succeeds and your LLM starts behaving badly, every existing security tool has already failed. The compromised response still reaches your user. You find out when someone screenshots it. Output alignment verification is the missing layer.

That's what AEVRIS does, and it's the first commercial product to do it.

Aevris

4h ago

Aevris - Real-time AI threat detection for LLM deployments

AEVRIS is a multi-agent AI threat detection API, the only platform that protects both the input AND output of your LLM. Five specialized agents run in parallel on every prompt. Three industry firsts: Output Alignment Verification (scans LLM responses after generation), AGI Alignment Guard (runtime behavioral monitoring), and Live MCP Tool Inspection (detects tool poisoning before it enters your agent context). Vendor-neutral. Free tier: 500 scans/month, no credit card.
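The "agents in parallel" part can be pictured as a simple fan-out: several independent detectors run concurrently on the same text and their verdicts are merged. The detector names and checks below are toy placeholders, not the actual AEVRIS agents; this only illustrates the concurrency pattern.

```python
import concurrent.futures

def detect_injection(text: str) -> bool:
    # Toy check standing in for a prompt-injection detector.
    return "ignore previous" in text.lower()

def detect_prompt_leak(text: str) -> bool:
    # Toy check standing in for a system-prompt-leak detector.
    return "system prompt" in text.lower()

def detect_encoded_payload(text: str) -> bool:
    # Toy check standing in for an obfuscated-payload detector.
    return "base64" in text.lower()

DETECTORS = [detect_injection, detect_prompt_leak, detect_encoded_payload]

def scan(text: str) -> dict:
    """Run every detector on the same text concurrently and merge results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(DETECTORS)) as pool:
        hits = list(pool.map(lambda d: d(text), DETECTORS))
    return {d.__name__: hit for d, hit in zip(DETECTORS, hits)}
```

Because the detectors are independent, total latency is roughly the slowest single agent rather than the sum of all five.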