TestDino is highly praised for its Playwright-focused reporting capabilities, offering seamless CI/CD integration and AI-driven insights that enhance test automation efficiency. Users appreciate its compatibility with GitHub Actions and local setups, as well as its clear documentation. The platform's ability to classify failures and provide detailed historical data aids in improving test reliability. TestDino is particularly valued by Playwright users for its intuitive interface and actionable intelligence, making it a strong choice for teams seeking to streamline their testing processes.
Roadmap curiosity: today seems Playwright-first (nice!). Is Cypress/Selenium support on the way? Also, any plans for visual-diff signals so "UI change" vs "true regression" gets auto-tagged? That would pair nicely with your AI insights.
I was one of the early users of TestDino and played a part in refining some of the reports. Glad to see how far the product has come. Good luck to the hard-working team. They are great people to work with.
@dinwal So grateful for early adopters like you, Dinesh! 🙏 Your contributions to the reporting features honestly helped us find our product-market fit. The team still remembers your detailed feedback docs!
Cool dashboard! The role-based views in particular are awesome. This could definitely bring transparency to engineering teams.
Exactly the pain point we wanted to solve! @sagar_karathiya
The dashboards automatically surface what's most relevant for each role:
- QAs get test stability trends
- Developers see specific failure details
- Managers get high-level metrics
🦕 It's smart enough to adapt without needing manual configuration.
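For anyone curious what role-based surfacing could look like under the hood, here's a minimal hypothetical sketch. This is not TestDino's actual implementation; the role names and widget keys are all illustrative:

```python
# Hypothetical sketch of role-based dashboard widget selection.
# Role names and widget keys are made up for illustration only.
ROLE_WIDGETS = {
    "qa": ["stability_trends", "flaky_rate", "retry_history"],
    "developer": ["failure_details", "stack_traces", "recent_changes"],
    "manager": ["pass_rate_overview", "release_readiness", "team_metrics"],
}

def widgets_for(role: str) -> list[str]:
    """Return the default widget set for a role, falling back to the QA view."""
    return ROLE_WIDGETS.get(role.lower(), ROLE_WIDGETS["qa"])
```

The point of the sketch is just that each role maps to a default view without the user configuring anything.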
How do you treat retries? Do you count a passed-on-retry as flaky by default, and can teams configure thresholds per suite or folder?
Looks nice! Do you provide custom labels? If my QA team overrides a label from flaky to bug, does that feedback retrain the model for our project? If yes, how long until we see improved predictions?
Love the confidence scores idea. Could you explain in detail what factors are considered (timing, retries, assertions)?
Also, can a user click to drill down and see why a test was tagged flaky vs bug?
@viraj_limbadia The score looks at the error type and message, retry pattern, timing signals, and how similar failures behaved in past runs. We also consider basic context like environment and branch. You can click into a test to see the label, the short "why failing" note, steps, logs, and the screenshot. If it looks off, you can override it, and we learn from that.
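To make the factor list above concrete, here's a hypothetical sketch of how those signals could be combined into a single score. The weights, signal names, and error categories are all made up for illustration; this is not TestDino's real scoring logic:

```python
# Hypothetical flaky-vs-bug confidence score combining the signals
# mentioned above: error type, retry pattern, timing, and history.
# All weights and names are illustrative assumptions.
FLAKY_LEANING_ERRORS = {"TimeoutError", "NetworkError"}

def flaky_confidence(error_type: str, passed_on_retry: bool,
                     duration_zscore: float, past_flaky_ratio: float) -> float:
    """Return a 0..1 score; higher means 'more likely flaky than a real bug'."""
    score = 0.0
    if error_type in FLAKY_LEANING_ERRORS:
        score += 0.3                 # infra-style errors lean flaky
    if passed_on_retry:
        score += 0.3                 # pass-on-retry is a strong flakiness signal
    if abs(duration_zscore) > 2:
        score += 0.2                 # unusual timing suggests environment noise
    score += 0.2 * past_flaky_ratio  # how often similar past failures were flaky
    return min(score, 1.0)
```

A drill-down view could then show each term's contribution next to the final label, which is roughly what the "why failing" note does.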
@pratikpatel01 is SSO on the roadmap? Okta + Google auth would be huge