Santosh Pai

Agumbe LLM Gateway (and Console) - Guardrails and Provenance for enterprise AI control

Most LLM guardrails don’t make it to production. Agumbe LLM Gateway lets you define guardrails at the app level and enforces them in your real request path.

Detect, redact, or block:
* Prompt injection (direct and indirect)
* PII and secrets
* Denied topics
* Unsafe or ungrounded output
* and more ...

All while ensuring:
* Budget enforcement
* Cheap models for dev, premium reserved for prod
* and more

Test everything through a console that uses the same headless gateway as production.
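To make the detect / redact / block actions concrete, here is a minimal sketch of what a per-app guardrail action could look like. The detector, function names, and response shape are illustrative assumptions, not Agumbe's actual policy API:

```python
import re

# Hypothetical illustration of guardrail actions using a toy email/PII
# detector; Agumbe's real detectors and policy schema are not shown here.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_guardrail(text: str, action: str) -> dict:
    """Run one guardrail over `text` with a configurable action:
    'detect' (flag but allow), 'redact' (rewrite), or 'block' (deny)."""
    hits = EMAIL.findall(text)
    if not hits:
        return {"allowed": True, "text": text, "findings": []}
    if action == "detect":
        # Log the finding but let the request through unchanged.
        return {"allowed": True, "text": text, "findings": hits}
    if action == "redact":
        # Rewrite the payload before it is forwarded upstream.
        return {"allowed": True, "text": EMAIL.sub("[REDACTED]", text), "findings": hits}
    # Default: block the request entirely.
    return {"allowed": False, "text": None, "findings": hits}

msg = "Contact me at jane@example.com"
print(apply_guardrail(msg, "redact")["text"])    # → Contact me at [REDACTED]
print(apply_guardrail(msg, "block")["allowed"])  # → False
```

The same per-guardrail action choice would apply to both the request (prompt) and the response direction.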

Replies

Santosh Pai
Hey everyone 👋 Santosh here.

We built Agumbe LLM Gateway because most tools let you define policies, but they live outside the real request path - so what you test and what runs are different. We wanted something simpler and more practical:

* Define guardrails per app
* Choose what happens (detect, redact, or block)
* Enforce them on every request and response

No complex setup: adopting guardrails in an existing codebase is a one-line code change. The console is just a way to test these guardrails using the same system your app uses - so there's no gap between experimentation and production. The console exposes a few of the guardrails, with more available.

Would love feedback from folks building with LLMs: how are you handling safety, prompt injection, or PII today?
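For gateways of this kind, a "one line of code change" usually means pointing an existing OpenAI-compatible client at the gateway's base URL instead of the provider's. A minimal stdlib sketch of that idea - the client class and the gateway URL are hypothetical stand-ins, not Agumbe's documented SDK:

```python
from dataclasses import dataclass

@dataclass
class ChatClient:
    """Toy stand-in for an OpenAI-compatible client. Everything about
    the call site stays the same; only base_url decides the route."""
    base_url: str
    api_key: str

    def endpoint(self) -> str:
        # The client builds provider-style paths off whatever base it is given.
        return f"{self.base_url}/chat/completions"

# Before: the app talks to the provider directly.
direct = ChatClient(base_url="https://api.openai.com/v1", api_key="sk-...")

# After: the one-line change - route through the gateway (hypothetical
# URL), which can then enforce the app's guardrails on every request
# and response without any other code changes.
gated = ChatClient(base_url="https://gateway.example.com/v1", api_key="sk-...")

print(direct.endpoint())
print(gated.endpoint())
```

Because the gateway speaks the same API shape as the provider, the rest of the application code is untouched.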
Santosh Pai

Early access is currently free