Building the Team Version of Cencurity (Central Policies + Audit Export)
We’re currently building the team version of Cencurity.
The first version focused on protecting individual AI/LLM usage through a security proxy layer.
Now we’re extending it toward team-wide control and visibility.
New additions in progress:
• Central policy enforcement across tenants
• Tenant-level activity visibility
• Exportable audit logs (CSV)
• Aggregated threat scoring
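To make the last two bullets concrete, here's a minimal sketch of what CSV audit export and per-tenant score aggregation might look like. Everything here is illustrative: the event fields, the mean-based scoring, and the data are assumptions, not Cencurity's actual schema or API.

```python
import csv
import io
from collections import defaultdict

# Hypothetical audit events, as a security proxy layer might record them.
events = [
    {"tenant": "acme", "user": "dev1", "action": "llm.request", "threat_score": 0.2},
    {"tenant": "acme", "user": "dev2", "action": "llm.request", "threat_score": 0.9},
    {"tenant": "beta", "user": "ops1", "action": "llm.request", "threat_score": 0.1},
]

def export_csv(events):
    """Serialize audit events to a CSV string (exportable audit log)."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["tenant", "user", "action", "threat_score"]
    )
    writer.writeheader()
    writer.writerows(events)
    return buf.getvalue()

def aggregate_scores(events):
    """Aggregate threat scores per tenant (a simple mean; real scoring
    would likely weight by severity, recency, etc.)."""
    by_tenant = defaultdict(list)
    for e in events:
        by_tenant[e["tenant"]].append(e["threat_score"])
    return {t: sum(s) / len(s) for t, s in by_tenant.items()}
```

The point of the shape, not the specifics: events flow through the proxy once, and both the audit trail and the tenant-level scores are derived views over the same stream.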
The goal is simple: when AI touches production systems, visibility and accountability shouldn't be optional.
This isn’t about model performance.
It’s about runtime control and auditability.
The solo version remains open source on GitHub, and we’ll continue updating it based on feedback.
Still early. Iterating fast.
Would love thoughts from teams deploying AI in production.
Open source version:
https://github.com/cencurity/cencurity