Both the user interface and the integration of the platform are exceptionally smooth, letting you focus on what really matters in your system. Okareo is immensely helpful for seeing what's going on under the hood. I have used Okareo with RAG, and it has helped me gain a better understanding of my system's strengths and opportunities for improvement. Okareo's technical support is also impressive: they have answered all my questions while I built my evaluation pipelines, and they gave me good advice for improving my product.
Okareo
Hi Product Hunt! I'm Matt, Co-Founder & CEO of Okareo.
Thrilled to launch Okareo Error Reporting today!
If you're spending hours chasing down Agent or RAG issues from scattered traces, Okareo can help. We deliver real-time error reporting through behavioral alerts, seamlessly connected to a structured evaluation and persona-based simulation suite, so you can debug more conditions, faster, and with confidence.
Our immediate goal is to help teams ship agents to production faster and with higher confidence, but the bigger vision is a virtuous loop where agents continuously self-improve.
We'd love for you to take it for a spin and share your feedback: what's working, what's missing, and what you'd love to see next.
Thanks for checking us out!
@matt_wyman Hey Matt, interesting launch. Congrats. This seems like a big issue; these agents eat up resources when they erroneously end up in loops. Do you have any numbers to share for simple AI agents, like an AI calling app, as a common use case?
Okareo
Hello @imraju! I'm an ML engineer at Okareo, and I can give some insight here.
An agent looping is indeed a common and highly wasteful error pattern. On our error detection platform, we have a "check" (i.e., an LLM-based evaluation) called "Loop Guard" that detects when agents are stuck in repetitive patterns. For one of our development partners, we have seen looping behavior in as much as 25% of their production traffic.
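(Okareo's actual Loop Guard is an LLM-based check, but as a rough illustration of the underlying idea, a naive heuristic version might flag repeated action sequences in an agent trace. All names below are hypothetical, not Okareo's API.)

```python
from collections import Counter

def has_loop(actions, window=3, threshold=3):
    """Naive heuristic: flag a trace if any consecutive run of
    `window` actions repeats `threshold` or more times."""
    ngrams = Counter(
        tuple(actions[i:i + window])
        for i in range(len(actions) - window + 1)
    )
    return any(count >= threshold for count in ngrams.values())

# An agent stuck retrying the same tool-call sequence:
trace = ["search", "parse", "retry"] * 4 + ["answer"]
print(has_loop(trace))  # True
```

An LLM-based check can go further than this kind of heuristic by judging whether repetition is actually unproductive (e.g., polling a status endpoint may legitimately repeat).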
@matt_wyman Nice launch, Matt. Agent self-evolution is key, but how do you explain it to users? How exactly do we know it can improve?
@matt_wyman BTW, upvote to you!
Okareo
Hello there @halgod! When we apply a "check" (i.e., an LLM-based evaluation) to an incoming datapoint, the check returns both an outcome (i.e., "pass" or "fail") and an explanation. The explanation can help identify the root cause of a failure and inform the agent developer what improvements can be made to the agent (or the agent network).
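(As a rough sketch of what such a per-datapoint check result could look like; the field and class names here are hypothetical, not Okareo's actual API.)

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    check_name: str    # which check produced this result
    outcome: str       # "pass" or "fail"
    explanation: str   # LLM-generated rationale behind the outcome

# A hypothetical failing result from a loop-detection check:
result = CheckResult(
    check_name="Loop Guard",
    outcome="fail",
    explanation="The agent repeated the same tool call four times "
                "without making progress toward the user's goal.",
)
print(result.outcome)  # fail
```

The point is that each failure carries its own rationale, so a developer can aggregate explanations across datapoints to see which fixes would pay off most.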
Fewsats
Few teams in this space understand what needs to be built to solve LLM observability and reporting challenges as effectively as Okareo does.
Congratulations on the launch!
No more sifting through a mess of traces: debugging just got a whole lot clearer (and faster!).
I like the UI!
Is it hand-coded or AI-generated?
Okareo
Hello @sum! We use AI to help us out here and there, but our app is fundamentally designed and written by humans :)
Okareo is phenomenal. I was one of their first customers, and they absolutely crushed it.
ion design
Amazing launch. I can think of tons of ways this tech could be applied. Especially as tool-call chains become more complex, each execution is a surface area for errors.
Real-time monitoring is a must for AI!