
We evaluated several orchestration frameworks before choosing LangGraph for Genie's agent architecture. The alternatives either abstracted too much away - making it hard to control exactly when and how tools get called - or couldn't handle the stateful, multi-step flows we needed. LangGraph gave us the right balance: explicit control over agent logic without building everything from scratch. When your AI is touching live business data, you can't afford unpredictable behavior.
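The explicit-control pattern described above can be sketched in plain Python: each step in the agent flow is a named node, and each node decides which node runs next based on shared state. This is an illustrative sketch of the pattern, not LangGraph's actual API, and the node names (`plan`, `call_tool`, `respond`) are hypothetical.

```python
# Minimal state-machine sketch of an explicit agent flow.
# Node and tool names are illustrative placeholders.

def plan(state):
    # Decide which tool the agent should call next.
    question = state["question"].lower()
    state["tool"] = "pipeline_report" if "pipeline" in question else "summary"
    return "call_tool"

def call_tool(state):
    # Stub tool call; a real agent would hit live business data here.
    state["result"] = f"ran {state['tool']}"
    return "respond"

def respond(state):
    state["answer"] = f"Based on {state['tool']}: {state['result']}"
    return None  # terminal node: no next step

NODES = {"plan": plan, "call_tool": call_tool, "respond": respond}

def run(state, entry="plan"):
    node = entry
    while node is not None:  # walk the graph until a terminal node
        node = NODES[node](state)
    return state

final = run({"question": "Where are we losing pipeline?"})
```

Because every transition is an explicit edge in the graph, you can audit exactly when and how each tool gets called, which is the property the paragraph above is after.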
For Genie's RAG layer, we needed managed infrastructure that wouldn't become its own engineering project to maintain. Bedrock let us connect our knowledge base, manage embeddings, and keep retrieval fast - without running our own vector infrastructure. The alternatives either required too much ops overhead or didn't integrate cleanly with the rest of our stack. Bedrock just worked, and that let us stay focused on the product.
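To make concrete what a managed retrieval layer does under the hood, here is a toy sketch of embedding-based retrieval: rank stored document vectors by cosine similarity to a query vector and return the top matches. This is the mechanics a service like Bedrock runs for you at scale; the documents and vectors below are made up for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "knowledge base": (text, embedding) pairs. A managed service
# stores these vectors and performs the similarity search for you.
DOCS = [
    ("Q3 pipeline review", [0.9, 0.1, 0.0]),
    ("Onboarding guide",   [0.1, 0.8, 0.1]),
]

def retrieve(query_vec, k=1):
    # Return the k documents most similar to the query vector.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

top = retrieve([1.0, 0.0, 0.0])
```

Running and tuning this loop yourself (vector storage, index refresh, embedding updates) is exactly the ops overhead the paragraph above is avoiding.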
We run both Claude and OpenAI in Genie, and Claude earned its place for a specific reason: complex, multi-step reasoning. When a user asks something open-ended like "Where are we losing pipeline?", Genie needs to chain multiple tool calls, hold business context, and explain its thinking clearly. Claude handles that kind of depth better than the alternatives we tested. For the analytical heavy lifting, it's the right engine.
We use OpenAI alongside Anthropic's Claude in Genie's core - and OpenAI earned its place for speed and instruction-following on structured tasks. When Genie needs to parse a complex business question, select the right tool to call, and return a clean, formatted response fast, OpenAI's models are consistently reliable. For an AI analyst, where users expect instant answers, latency and precision both matter.
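The two-model split described in the last two sections boils down to a routing decision: send open-ended analytical questions to the deep-reasoning model, and short structured tasks to the fast one. Here is a hedged sketch of that idea; the keyword heuristic and model labels are illustrative placeholders, not Genie's actual routing logic.

```python
# Illustrative router for a two-model setup: one model for
# multi-step reasoning, one for fast structured tasks.
DEEP_MODEL = "claude"   # complex, multi-step analysis
FAST_MODEL = "openai"   # speed and instruction-following

# Hypothetical cues suggesting an open-ended analytical question.
ANALYTICAL_CUES = ("why", "where are we", "compare", "trend")

def route(question: str) -> str:
    q = question.lower()
    if any(cue in q for cue in ANALYTICAL_CUES):
        return DEEP_MODEL
    return FAST_MODEL

chosen = route("Where are we losing pipeline?")
```

A real router would likely classify with a model rather than keywords, but the shape is the same: pick the engine per request based on what the question demands.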




