Hyperion is a high-performance LLM gateway that adds ~5µs of per-request overhead (~250× lower than LiteLLM).
It handles smart routing, semantic + exact-match caching, rate limiting, budget control, PII redaction, and failover through a single OpenAI-compatible API.
Built in Go with Redis, Qdrant, and ClickHouse.
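To make the exact-match caching idea concrete, here is a minimal in-process sketch of how a gateway can key cached responses on a canonicalized request hash. This is an illustrative stand-in, not Hyperion's actual code: the class and field names are hypothetical, and the real system uses Redis rather than a Python dict.

```python
import hashlib
import json


def cache_key(request: dict) -> str:
    # Deterministic key: hash the canonicalized (sorted-keys) request body,
    # so semantically identical requests map to the same cache entry.
    canonical = json.dumps(request, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


class ExactMatchCache:
    """Toy in-memory stand-in for a Redis-backed exact-match cache."""

    def __init__(self):
        self._store = {}

    def get(self, request: dict):
        return self._store.get(cache_key(request))

    def put(self, request: dict, response: str):
        self._store[cache_key(request)] = response


cache = ExactMatchCache()
req = {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hi"}]}
assert cache.get(req) is None      # first call: cache miss, forward upstream
cache.put(req, "Hello!")           # store the upstream response
assert cache.get(req) == "Hello!"  # identical request: served from cache
```

Semantic caching extends the same pattern by matching on embedding similarity (e.g. via Qdrant) instead of an exact hash.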
SecureShell is a zero-trust execution layer for LLM agents with shell access. It prevents prompt-injection-driven command execution, enforces safety policies, and provides structured feedback for self-correcting agents. Plug-and-play with LangChain, MCP, all major providers, and local agents running on Ollama and llama.cpp.
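The policy-enforcement and structured-feedback ideas can be sketched as an allowlist check that returns a verdict object instead of raising, so an agent can read the reason and retry. This is a hedged illustration: the allowlist, blocked tokens, and `check_command` function are hypothetical and far simpler than a real zero-trust policy engine.

```python
import shlex

ALLOWED_COMMANDS = {"ls", "cat", "grep", "echo"}   # hypothetical allowlist policy
BLOCKED_TOKENS = {"rm", "sudo", "curl", ">", "|"}  # crude guard against destructive ops


def check_command(cmd: str) -> dict:
    """Return a structured verdict so a self-correcting agent can adjust and retry."""
    try:
        tokens = shlex.split(cmd)  # POSIX-style tokenization
    except ValueError as exc:
        return {"allowed": False, "reason": f"unparseable command: {exc}"}
    if not tokens:
        return {"allowed": False, "reason": "empty command"}
    if tokens[0] not in ALLOWED_COMMANDS:
        return {"allowed": False, "reason": f"'{tokens[0]}' is not on the allowlist"}
    blocked = BLOCKED_TOKENS.intersection(tokens)
    if blocked:
        return {"allowed": False, "reason": f"blocked tokens: {sorted(blocked)}"}
    return {"allowed": True, "reason": "ok"}
```

Returning a machine-readable `reason` (rather than a bare rejection) is what lets an agent loop on the feedback instead of failing opaquely.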
MemLayer: The plug-and-play memory layer for smart, contextual LLMs.
MemLayer captures key information from conversations, stores it persistently, and retrieves exactly what your model needs to stay consistent and contextual. It works with any LLM, so you can build smarter agents in minutes without managing your own memory system, and an ML-based extraction module filters and enhances what gets stored.
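The capture/store/retrieve loop can be sketched with a toy memory class: a heuristic filter stands in for the ML-based extraction module, and word-overlap scoring stands in for real semantic retrieval. All names here (`MemoryLayer`, `SALIENT_MARKERS`, `capture`, `retrieve`) are hypothetical, not MemLayer's API.

```python
import re


def _tokens(text: str) -> set[str]:
    # Lowercased word tokens, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))


class MemoryLayer:
    """Toy stand-in: filter salient facts, persist them, retrieve by word overlap."""

    SALIENT_MARKERS = ("my name is", "i work", "i prefer", "i live")  # heuristic filter

    def __init__(self):
        self._memories: list[str] = []

    def capture(self, message: str) -> bool:
        # Extraction step: only store messages the filter deems worth remembering.
        if any(marker in message.lower() for marker in self.SALIENT_MARKERS):
            self._memories.append(message)
            return True
        return False

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Rank stored memories by token overlap with the query; drop zero-score hits.
        q = _tokens(query)
        scored = [(len(q & _tokens(m)), m) for m in self._memories]
        scored = [(s, m) for s, m in scored if s > 0]
        scored.sort(key=lambda pair: -pair[0])
        return [m for _, m in scored[:k]]


mem = MemoryLayer()
mem.capture("My name is Ada and I prefer concise answers.")
mem.capture("What's the weather like?")  # not salient, filtered out
hits = mem.retrieve("What does the user prefer?")
```

A production memory layer would swap the marker heuristic for a learned extractor and the overlap score for embedding similarity, but the store/filter/retrieve shape is the same.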