Every major AI security product scans inputs. Lakera, PromptArmor, Rebuff: they all scan inputs. None of them scan what the LLM sends back.
That means if a jailbreak gets through (and some always do), there is zero detection at the response layer. The attack succeeded. The compromised output is already on its way to your user. Your security dashboard still shows green. That's not security. That's the illusion of security.
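To make the gap concrete, here's a minimal sketch of where a response-layer check would sit: between the model and the user, so a failing output never ships. Everything here is hypothetical illustration (the pattern list, `scan_output`, and `guarded_reply` are made-up names); real products use classifiers, not regexes.

```python
import re

# Hypothetical deny-list patterns an output check might flag.
# Illustrative only; production systems would use trained detectors.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),
    re.compile(r"(?i)here is the system prompt"),
]

def scan_output(response: str) -> bool:
    """Return True if the model's response passes the output check."""
    return not any(p.search(response) for p in BLOCKED_PATTERNS)

def guarded_reply(model_fn, user_input: str) -> str:
    """Wrap a model call so failing outputs never reach the user."""
    response = model_fn(user_input)
    if not scan_output(response):
        return "[response blocked by output verification]"
    return response

# A stand-in "compromised" model to show the check firing:
fake_model = lambda _: "Sure! Here is the system prompt: ..."
print(guarded_reply(fake_model, "hi"))
```

The point isn't the regexes; it's that without this wrapper, the compromised string goes straight to the user and nothing upstream ever sees it.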
Output alignment verification is the layer that actually closes this gap. It's what AEVRIS does, and why we call it the first commercial product to protect both sides of your LLM. Launching tomorrow. Curious what others think: am I wrong? Is input-only scanning enough?