What's your biggest frustration with debugging Playwright test failures?
Hey Product Hunt 👋
We're launching TestRelic AI tomorrow — a platform that lets you ask your Playwright test suite questions in plain English and get back a rendered artifact: dashboards, sprint reports, root cause analysis, stakeholder slides.
But before the launch, I want to hear from the people this is built for.
What does your current debug loop look like when a test fails in CI?
For most teams I've spoken to, it goes something like: check CI logs → open Grafana → ping someone on Slack → manually create a Jira ticket → repeat.
That loop is what we're breaking. Ask AI reads across your test runs, failure logs, and production signals, then answers in plain English — no queries to write, no dashboards to configure.
Would love to know:
How many tools does your team jump between per incident?
What's the one question you wish your test suite could just answer?