
Cencurity
Security gateway for LLM agents
65 followers
Cencurity is a security gateway that proxies LLM/agent traffic, detects, masks, or blocks sensitive data and risky code patterns in requests and responses, and records everything as audit logs.

@vlad1323 Letting an agent touch prod APIs is where it gets scary: a prompt injection can turn into real side effects. Cencurity as a security gateway makes sense if it can fail closed on unsafe tool calls, force supervised mode for state changes, and keep an audit trail tied to user identity. Does it support per-tool, per-action scopes plus redaction at the gateway? An OpenTelemetry export to your SIEM would make adoption way smoother.
@piroune_balachandran Totally agree — prompt injection turning into real side effects is the scary part.
Fail-closed enforcement: We enforce policies at the gateway and can block requests/responses when a rule matches (including streaming paths).
Redaction: Yes — sensitive patterns can be masked/redacted at the gateway before content goes upstream or gets returned.
Per-tool / per-action scopes: We can scope enforcement by endpoint/provider/direction today, and we’re extending this into finer-grained “tool/action” policies for agent/MCP-style calls.
Audit trail + identity: We already log the enforcement decision + context (e.g., tenant, client IP, direction). We’re adding stronger actor identity binding (API key label/user identity) to make audits more actionable.
Supervised / approval mode: In progress — goal is explicit approval for state-changing/high-risk actions.
OpenTelemetry / SIEM export: Planned — structured logs exist, and OTel export is on the roadmap.
If you want, tell me your “must-have” first policy check and the tool/action you’d gate behind approval — I’ll prioritize based on real workflows.
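For concreteness, the gateway-side masking described above can be sketched like this. The pattern names, regexes, and redaction format are illustrative only; Cencurity's actual rule schema isn't shown in this thread:

```python
import re

# Illustrative sketch: mask sensitive matches before a request leaves
# the gateway. Pattern set and marker format are assumptions, not
# Cencurity's real policy configuration.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Return masked text plus the names of the rules that matched."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, hits

masked, hits = redact("key=AKIAABCDEFGHIJKLMNOP, mail dev@example.com")
```

The same hit list can feed the audit log, so every masking decision stays traceable.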
@vlad1323 Congratulations Vlad! Can you describe how Cencurity functions as a security gateway for LLM agents, what are its key components?
@vlad1323 How does this differ from prompting or training an LLM to check for security issues, and how do you resolve potential indirect issues? Also, how is it different from other code review tools?
@aman_kaushik18 Great question.
This isn’t about prompting an LLM to review code or training a security-aware model. Cencurity operates at the infrastructure layer — it sits as a proxy between the IDE/agent and the upstream LLM provider.
That means enforcement happens at runtime, not as an optional review step. If a risky tool call, sensitive pattern, or policy violation is detected, the request can be blocked or redacted before it ever reaches the model or gets executed.
Regarding indirect issues: we focus on policy-based enforcement and structured extraction (e.g., tool arguments, fenced code blocks) rather than relying purely on semantic “LLM judgment.” The goal is deterministic guardrails at the gateway layer.
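That structure-first approach can be illustrated with a tiny extractor for fenced code blocks in a model response. This is a sketch of the idea, not Cencurity's actual parser:

```python
import re

# Deterministically pull fenced code blocks out of a response so rules
# can run on the code itself, independent of any "LLM judgment".
FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

def extract_blocks(text: str) -> list[tuple[str, str]]:
    """Return (language, body) pairs for every fenced block found."""
    return [(lang or "text", body) for lang, body in FENCE.findall(text)]

blocks = extract_blocks("Run this:\n```sh\nrm -rf /tmp/x\n```\nDone.")
```

Once blocks and tool arguments are extracted as data, the same policy rules apply to them regardless of how the surrounding prose is worded.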
Compared to code review tools, this isn’t static analysis of a repository after the fact — it’s real-time control and auditing of agent/LLM traffic in production environments.
It’s less about reviewing code quality and more about controlling what AI systems are allowed to do.
Small technical note for builders interested:
Cencurity runs as a proxy in front of OpenAI-compatible APIs and inspects both inbound tool calls and outbound responses in real time.
It can block dangerous code patterns before execution and keeps full audit logs for every policy match.
Everything runs locally via Docker, so teams can self-host without sending data to external services.
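As a rough illustration of "block dangerous patterns and keep an audit log for every policy match", a check at the gateway could look like the following. The pattern list and log fields are assumptions, not Cencurity's actual rule set or log schema:

```python
import datetime
import re

# Hypothetical rule table: regexes for obviously destructive shell
# patterns, each tagged with a rule name for the audit trail.
DANGEROUS = [
    (re.compile(r"rm\s+-rf\s+/"), "destructive_shell"),
    (re.compile(r"curl[^|\n]*\|\s*(ba)?sh"), "pipe_to_shell"),
]

def inspect(code: str) -> dict:
    """Return a block decision plus an audit record on a rule match."""
    for pattern, rule in DANGEROUS:
        if pattern.search(code):
            return {
                "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "decision": "block",
                "rule": rule,
            }
    return {"decision": "allow"}
```

In a real deployment the returned record would be persisted per request alongside tenant and direction metadata, as described earlier in the thread.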
Happy to answer any technical questions.