Garry Tan

Cekura - Observe and analyze your voice and chat AI agents

30+ out-of-the-box predefined metrics for analysis of CX, accuracy, conversation, and voice quality. Build reliable LLM judges by annotating just ~20 conversations, then auto-improve them in Cekura Labs. Real-time, segmented dashboards to identify trends in conversational AI. Smart statistical alerts, so you get notified only when metrics shift from historical baselines. Automated system pings to catch silent production failures.
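To give a feel for the "alert only on baseline shifts" idea, here is a minimal sketch assuming a simple z-score rule over a metric's history. This is an illustration of the general technique, not Cekura's actual alerting engine, whose internals are not public.

```python
from statistics import mean, stdev

def shifted_from_baseline(history, current, z_threshold=3.0):
    """Flag a metric only when it deviates from its historical baseline.

    history: past values of one metric (e.g. daily tool-call success rate).
    current: the latest observation.
    Returns True when the z-score exceeds the threshold.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # A perfectly flat history: any change at all is a shift.
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# A stable metric stays quiet; a sharp drop triggers an alert.
baseline = [0.96, 0.95, 0.97, 0.96, 0.95, 0.96, 0.97]
shifted_from_baseline(baseline, 0.96)  # False - within normal variation
shifted_from_baseline(baseline, 0.70)  # True - far outside the baseline
```

The point of the threshold is exactly what the description claims: a noisy-but-stable metric never pings you, while a genuine regression does.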


Replies

Mykyta Semenov 🇺🇦🇳🇱

We are currently building AI support for a large corporation. In projects like this, recognizing smaller languages (for example, Swedish) is an issue. Can you analyze only English, or other languages as well?

Sidhant Kabra

@mykyta_semenov_ We are language-agnostic and have customers using us for different languages (German, French, Spanish, Arabic, etc.) - happy to chat

Nabonita Dash

Super excited for this!!

How many predefined metrics are relevant for chat as well?

Sidhant Kabra

@nabonita_dash All customer experience and accuracy metrics are applicable to chat (Response Consistency, Relevancy, Hallucination, Tool Call Success, Sentiment, etc.)

Pranay

Do you have auto-debugging features & MCP support for production issues?

Sidhant Kabra

@pranayr0 Yes, we have Cekura MCP. We also have skills in Claude Code that you can use to improve test cases or metrics

Aditya Lahiri

This is great - especially the out-of-the-box metrics. Which ones do people use most in prod?

Satvik Dixit

@aditya_lahiri Tool call accuracy and expected outcome are core - they tell you whether the agent actually did the job. Latency comes next, since delays quickly break real-time UX.

For voice agents, interruption metrics (AI interrupting user, user interrupting AI, interruption evaluation) plus silence duration are very useful too.
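As a rough illustration of how interruption and silence metrics like these can be derived from timestamped speaker turns (a generic sketch, not Cekura's implementation - the tuple format below is an assumption):

```python
def interruption_and_silence_stats(segments):
    """Compute simple overlap/silence stats from timestamped speaker turns.

    segments: list of (speaker, start, end) tuples sorted by start time,
    with speaker in {"agent", "user"} and timestamps in seconds.
    """
    ai_interrupts = user_interrupts = 0
    silence = 0.0
    for prev, cur in zip(segments, segments[1:]):
        prev_speaker, _, prev_end = prev
        cur_speaker, cur_start, _ = cur
        if cur_start < prev_end and cur_speaker != prev_speaker:
            # The next speaker started before the previous one finished.
            if cur_speaker == "agent":
                ai_interrupts += 1
            else:
                user_interrupts += 1
        elif cur_start > prev_end:
            # Nobody spoke in the gap between turns.
            silence += cur_start - prev_end
    return {"ai_interrupts": ai_interrupts,
            "user_interrupts": user_interrupts,
            "silence_seconds": round(silence, 2)}

turns = [("user", 0.0, 3.0), ("agent", 2.5, 6.0), ("user", 7.0, 9.0)]
interruption_and_silence_stats(turns)
# {'ai_interrupts': 1, 'user_interrupts': 0, 'silence_seconds': 1.0}
```

Here the agent starting at 2.5s, while the user was still talking until 3.0s, counts as the AI interrupting the user, and the 6.0s-7.0s gap counts as silence.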

Jared Salois

When Cekura flags an issue in production, what does fixing it actually look like in practice? Do teams usually retrain models, tweak prompts, or handle it more on a case‑by‑case basis?

Sidhant Kabra

@jared_salois There are 3 types of issues:

  • prompt level - you tweak

  • model level - you A/B test and measure tradeoffs

  • config level - case by case. E.g. there is abrupt silence during a certain tool call because the connection was not set up correctly

Roop Reddy

Congrats. Have you considered integrations with tools like HubSpot or Zendesk for closing the loop on CX insights?

Sidhant Kabra

@roopreddy We have integrations with CX platforms like Salesforce; HubSpot and Zendesk are also in the plans. We build custom integrations based on engineering bandwidth and requirements from enterprises. Meanwhile, customers can use our APIs

Raghav Mehra

This is such a natural evolution from QA to monitoring. Congrats on shipping.

Satvik Dixit

@ragsyme Thanks a ton!

S.S. Rahman

Would love an API-first version of this for deeper integration into internal tooling.

Sidhant Kabra

@syed_shayanur_rahman We already have APIs available for integration - you can refer here: https://docs.cekura.ai/api-reference/observability/send-calls
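For a feel of what API-first integration into internal tooling looks like, here is a hypothetical sketch of assembling a request that reports one finished call. The URL, auth header, and every payload field name below are placeholders, not Cekura's real schema - the linked docs are the authoritative reference.

```python
import json
from urllib import request

# Placeholder values - the real endpoint, auth scheme, and payload schema
# are defined in the vendor's API docs, not here.
API_URL = "https://api.example.com/observability/send-calls"
API_KEY = "YOUR_API_KEY"

def build_send_call_request(call_id, transcript, metadata):
    """Assemble an HTTP POST that reports one finished call for analysis."""
    payload = {
        "call_id": call_id,
        "transcript": transcript,   # e.g. [{"role": "user", "text": "..."}]
        "metadata": metadata,       # e.g. {"agent_version": "v12"}
    }
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_send_call_request(
    "call-123",
    [{"role": "user", "text": "Where is my order?"}],
    {"agent_version": "v12"},
)
# urllib.request.urlopen(req) would actually send it.
```

An internal tool would call something like this from its own pipeline after each conversation ends, which is what "API-first" integration amounts to in practice.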

Himani Sah

Congrats team!!! Do you support real-time streaming analysis or is it batch processed right now?

Sidhant Kabra

@himani_sah1 Currently we support post-call analysis - we can fetch the call via webhook as soon as it's over. You can also send calls in batches if preferred.
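A minimal sketch of the webhook side of this flow: a small HTTP handler that accepts a call-completed event the moment a call ends and hands it to analysis. The event field names are assumptions for illustration; the real schema comes from your agent platform.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def analyze_call(call_id, transcript):
    # Stand-in for whatever post-call analysis you run or forward to.
    print(f"queued {call_id} with {len(transcript)} turns for analysis")

class CallWebhookHandler(BaseHTTPRequestHandler):
    """Accept a POST as soon as a call ends and hand the payload onward."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        # "call_id" and "transcript" are placeholder field names.
        analyze_call(event.get("call_id"), event.get("transcript", []))
        self.send_response(200)
        self.end_headers()

def run(port=8080):
    HTTPServer(("0.0.0.0", port), CallWebhookHandler).serve_forever()
```

Calling `run()` blocks and serves forever; in production you would put this behind your usual web framework and queue the payload instead of processing it inline.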

Rishabh Sanjay

🚀 I’m so proud of the work we’ve done on Cekura Monitoring. I personally worked on the Smart Metric Alerting engine, which saves Voice and Chat AI teams from scrolling through thousands of calls. Now, you only get a ping when something actually feels off.

The best part? The customization. It allows our users to tune out the noise and focus purely on the performance metrics that define their success. It’s a total game-changer for anyone scaling AI agents.

Really helpful feature.