Governed agentic AI: proposal-first execution + audit + routing
I’m Christian (RTH Italia). I’m launching Core Rth (RC1): a governed AI Control Plane for multi-LLM orchestration, tools, channels, and physical bridges.
The core idea is simple: AI proposes → Guardian audits → Owner approves → execution happens.
No “silent autonomy”, no hidden side effects.
What’s included in RC1:
- Multi-LLM routing (cost/latency/privacy aware) + route explain
- AI Village (role council: researcher / coder / critic / synth) with live run + synthesis
- Guardian with policy DSL + severity profiles + audit trail
- Security Vault (AES-256-GCM): secrets are fetched just-in-time for execution; the model never sees tokens
- Browser Swarm (Playwright + safe fallback) with SSRF protections
- Omni-channels (Telegram / WhatsApp / Mail) with replay-safe testing
- Reality bridges (IoT / robotics / vehicles) with safety interlocks + emergency endpoints
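As a flavor of what cost/latency/privacy-aware routing with “route explain” could look like, here is a small sketch. The model names, weights, and scoring function are made-up assumptions for illustration only:

```python
# Hypothetical routing sketch: score eligible models on cost + latency,
# filter on privacy, and emit an "explain" string. All values are invented.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k: float    # USD per 1k tokens (illustrative)
    p50_latency_ms: int   # median latency (illustrative)
    local: bool           # privacy: runs on-prem, data never leaves

MODELS = [
    Model("big-cloud",  cost_per_1k=0.010, p50_latency_ms=900,  local=False),
    Model("fast-cloud", cost_per_1k=0.002, p50_latency_ms=300,  local=False),
    Model("local-7b",   cost_per_1k=0.000, p50_latency_ms=1500, local=True),
]

def route(prompt_tokens: int, private: bool, w_cost=1.0, w_latency=0.001):
    """Pick the lowest-scoring eligible model and explain the choice."""
    # Privacy gate: private prompts may only go to local models.
    eligible = [m for m in MODELS if m.local or not private]
    scored = sorted(
        eligible,
        key=lambda m: w_cost * m.cost_per_1k * prompt_tokens / 1000
                      + w_latency * m.p50_latency_ms,
    )
    best = scored[0]
    explain = (f"chose {best.name}: private={private}, "
               f"eligible={[m.name for m in eligible]}")
    return best, explain

model, why = route(prompt_tokens=2000, private=True)
print(why)  # chose local-7b: private=True, eligible=['local-7b']
```

A real router would learn these weights and pull live telemetry, but the shape is the same: hard constraints (privacy) filter, soft objectives (cost, latency) rank, and the explain trace records both.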
I’d love early feedback on:
- Messaging: does “Sovereign Cognitive Kernel / Mission Control” explain it clearly?
- UX: what’s the one screen you’d want on day 1? (pending approvals? telemetry? routing?)
- Use cases: where would proposal-first governance save you from trouble?
Repo: https://github.com/rthgit/CORE-RTH
If you want to try it and give practical feedback, reply here — I’m happy to help with setup.
🙏 Thanks