Feedback wanted: AI that shows what’s really shipping into prod
Hey Product Hunt community 👋
I’m building something to solve a problem I keep running into as an engineer, and I’d love your feedback.
The problem:
AI tools like Copilot and Cursor are writing more of our code now.
PRs move faster. Reviews get shorter.
And honestly, it’s hard to tell:
- how much of that code is AI-assisted
- whether reviews are actually catching issues
- which PRs are risky before they hit production
Output is visible. Review quality isn’t.
What I built:
Cleq — a tool that connects to GitHub and tries to make this visible.
🤖 Detects AI-assisted code in PRs
⚠️ Flags PRs that look risky before merge (rough sketch of what I mean below)
🛡️ Guardian Board — highlights reviewers who consistently catch issues (not just approve fast)
📊 PR Quality Board — shows which PRs are well-reviewed vs high-risk
Instead of ranking people by output, the goal is to surface who’s protecting the codebase and where quality is quietly slipping.
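
To make "risky" concrete, here's a minimal sketch of the kind of heuristics I mean, in Python against the public GitHub REST API. To be clear: this is not Cleq's actual scoring. The weights, thresholds, and the example repo/token at the bottom are made-up placeholders for illustration.

```python
# Toy "risk" heuristics over public GitHub PR metadata.
# NOT Cleq's real scoring -- weights and thresholds are made up.
import requests

GITHUB_API = "https://api.github.com"

def fetch(path, token):
    """GET a GitHub REST API path and return the parsed JSON."""
    resp = requests.get(
        GITHUB_API + path,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def risk_score(owner, repo, number, token):
    """Score a PR from 0 (low risk) to 1 (high risk) with simple heuristics."""
    pr = fetch(f"/repos/{owner}/{repo}/pulls/{number}", token)
    reviews = fetch(f"/repos/{owner}/{repo}/pulls/{number}/reviews", token)

    score = 0.0
    # Big diffs are harder to review thoroughly.
    if pr["additions"] + pr["deletions"] > 500:
        score += 0.4
    # Wide blast radius: one PR touching many files.
    if pr["changed_files"] > 20:
        score += 0.2
    # Rubber-stamp signal: approved with zero review comments left.
    approved = any(r["state"] == "APPROVED" for r in reviews)
    if approved and pr["review_comments"] == 0:
        score += 0.3
    # No review activity at all is the loudest red flag.
    if not reviews:
        score += 0.5
    return min(score, 1.0)

if __name__ == "__main__":
    # Hypothetical values: swap in a real repo, PR number, and token.
    print(risk_score("octocat", "hello-world", 42, "ghp_example_token"))
```

The actual product layers more signals on top (review history, AI-assistance detection), but the core idea is the same: score the PR before it merges, not after it breaks.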
Where I’m at:
Early beta — onboarding a few teams and learning what’s signal vs noise.
My questions for you:
1. Does this feel like a real problem for your team?
2. What would you want this to show you that it doesn't yet?
3. What would make you not trust a tool like this?
Thanks for any feedback! I'm building this based on what real teams need 🙏
