Everyone in the software industry "knows" that code quality matters. But "knowing" isn't the same as knowing with data.
Before we built the CodeHealth MCP Server, we spent years building and validating the metric it runs on. That research is peer-reviewed, published at the International Conference on Technical Debt, and based on 39 proprietary production codebases across industries as varied as retail, finance, construction, and infrastructure, covering 40,000 source code modules in 14 programming languages.
CodeHealth MCP Server by CodeScene
Hey Product Hunt 👋
I’m Adam Tornhill, a software developer for over 30 years.
I’ve spent much of that time watching teams plan to fix technical debt... and then not do it.
Now we’ve added AI to the mix, which is fantastic at writing code fast. Unfortunately, it’s just as good at scaling your technical debt if you let it.
This is where it gets interesting: AI agents depend on code health even more than we do.
Sceptical? Here's what the research shows:
- AI increases defect risk by more than 60% when working in unhealthy code
- At low code health, AI wastes 35–50% more tokens
- Most codebases aren’t even close to AI-ready
AI is an accelerator. It amplifies both good and bad in your codebase. So AI doesn’t make technical debt less important. It makes it critical.
That’s why we built the CodeHealth MCP. It plugs code health directly into your workflow so your AI can:
- Auto-review AI-generated code before it becomes a problem
- Safeguard code health so it stays maintainable
- Help uplift unhealthy code to make it AI-ready
Generating code fast is easy.
Healthy systems at AI speed are the real challenge.
👉 Try it for free. Your code will notice: https://codescene.com/product/code-health-mcp
@adam_tornhill_cs Really resonates. MCP flips this from insight → action.
Instead of just knowing where technical debt is, teams can now operationalize it in real-time workflows, prioritizing hotspots, guiding AI agents, and preventing bad code from scaling.
AI doesn’t just need code. It needs context. That’s where MCP becomes a force multiplier.
@matti_hanell Yes, I think that's the key: Code Health provides objective signals about maintainability and risk. The MCP exposes those signals as actionable tools, turning abstract engineering principles into executable guidance that agents can follow consistently.
@adam_tornhill_cs Instructing an agent is hard enough; trying to do it in a messy codebase is impossible. CodeHealth MCP feels like 'cleaning up the room' before you ask a guest to come over. Makes the agent way more effective. Congrats on the ship!
@priya_kushwaha1 That's the perfect analogy. And the messy room problem is worse than it looks: agents don't just get confused, they confidently do the wrong thing.
@priya_kushwaha1 Thanks for your kind words! Much appreciated.
Agreed — agents require strong code quality to be effective. I'm convinced that legacy code will be a key bottleneck for enterprise adoption of agentic coding tools.
I'm happy that we can be part of the solution, too.
Thanks @priya_kushwaha1, you should try it out! ☺️
Been a CodeScene user for a while, so when the CodeHealth MCP Server dropped I jumped on it immediately and it's been a great addition to my workflow.
As someone who leans heavily into vibe-coding, having real-time CodeHealth feedback baked directly into my AI coding assistant is a game changer. It catches the kind of subtle technical debt that accumulates fast when you're moving quickly and letting the AI do the heavy lifting. Instead of ending up with a pile of "works but nobody should touch this" code, I actually ship things I'm not embarrassed by later.
If you're already a CodeScene user, this is a no-brainer. And if you're new to it this is a great entry point. The deterministic health scoring gives you something concrete to improve toward, which is way more actionable than vague AI suggestions.
@lht8 "Works but nobody should touch this", we've all shipped that code🙈, and it's even easier to do when the AI is moving fast for you. Really glad the health scoring gives you something concrete to aim at rather than just vibes-based cleanup. Thanks for being a CodeScene user and for jumping on this so quickly 🙌
@lht8 Thanks for that lovely feedback, Marcus. Super happy to hear that!
This is so important. An AI won't write "good enough" code on its own. In fact, we find that agents often operate in a kind of self-harm mode. They generate code that is inherently incompatible with, well, themselves. (A strange paradox).
With the CodeHealth MCP, we safeguard all code. It's the tool that enabled me and my team to go fully agentic. And we're not looking back 😊
Thanks @lht8, much appreciated!
This hits a nerve. When I was CTO scaling an engineering team from 15 to 120 people, code review was already our biggest bottleneck - senior engineers spending 30-40% of their time reviewing junior code. Now multiply that by AI-generated PRs that look clean on the surface but silently introduce coupling and complexity. The fact that CodeHealth MCP runs deterministic checks locally is the right call - you need something that catches structural issues before they compound, not after three sprints of building on top of them. Curious how the feedback loop works in practice: when an agent gets a CodeHealth warning, does it typically self-correct in one pass or does it tend to need multiple iterations to converge on healthy code?
@avrisimon You can instruct AI to self-correct by having instructions in your generic `AGENTS.md` or `CLAUDE.md` file (depending on agent), which the agent will read as sort of a global context. We have an example `AGENTS.md` file in our repository here if you want to take a look: https://github.com/codescene-oss/codescene-mcp-server/blob/main/AGENTS.md.
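To give a flavour, instructions along these lines work well. This is a paraphrased sketch, not the actual file from our repo; see the link above for the real one:

```markdown
## Code health (illustrative excerpt, not the real AGENTS.md)
- After creating or modifying code, run the CodeHealth MCP review on the changed files.
- If the review flags code smells, refactor and re-run the review before presenting the result.
- Never finish a task with a lower code health score than you started with.
```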
The number of iterations it needs to do to achieve healthy code depends on a few factors, so it's hard to give a concrete number. How bad is the code? The worse the code is, the harder it will be for AI to one-shot the solution. How good is the AI model you use? The better the model, the better it can understand instructions given by the CodeHealth MCP. In general though, with the latest Opus models from Claude and with code health even as low as 2 out of 10, I've personally seen it able to get to 10 out of 10 in just 2 iterations.
The MCP is also great at safeguarding already healthy code so that AI can't start introducing subtle defects or code smells. This is important because healthy code requires far fewer tokens to understand, and you spend no tokens at all on refactoring, which saves you money.
Does that answer your question?
@avrisimon Great point, and that scaling experience really puts the problem in perspective.
The speed of generating code with Claude Code or Cursor is incredible but the "did I just create six months of tech debt in 20 minutes" anxiety is real. Having an opinionated quality gate that doesn't change its mind based on how you phrase the prompt is exactly what you need when the code itself is generated by a probabilistic system. Does it catch structural issues too, like functions that are doing too many things or classes that have grown beyond a reasonable scope? Those are the kinds of problems that AI agents love to create - technically correct code that's architecturally messy.
@ben_gend Yes, those are first class citizens in the Code Health score. Functions doing too many things are caught as Brain Methods, a dedicated metric for complex functions that centralize too much behavior. Classes that have grown beyond reasonable scope show up as Brain Classes (large modules with too many responsibilities) or Low Cohesion, which specifically measures whether a class has multiple unrelated responsibilities breaking the Single Responsibility Principle.
There's also Bumpy Road, which catches functions with multiple dispersed chunks of logic that should have been extracted into their own functions.
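To make that concrete, here's a small hand-written Python sketch of the Bumpy Road pattern and its extracted counterpart. This is illustrative example code, not our detection logic:

```python
# Illustrative only: a "Bumpy Road" crams several dispersed chunks of
# logic into one function, each bump a candidate for extraction.
def process_order(order):
    # Bump 1: validation
    for item in order:
        if item["quantity"] <= 0:
            raise ValueError(f"bad quantity for {item['sku']}")
    # Bump 2: pricing with a nested conditional
    total = 0.0
    for item in order:
        price = item["unit_price"] * item["quantity"]
        if item["quantity"] >= 10:
            price *= 0.9  # bulk discount
        total += price
    # Bump 3: receipt formatting
    return total, "\n".join(f"{i['sku']} x{i['quantity']}" for i in order)

# The healthy shape: each bump extracted into its own named function.
def validate(order):
    for item in order:
        if item["quantity"] <= 0:
            raise ValueError(f"bad quantity for {item['sku']}")

def total_price(order):
    return sum(i["unit_price"] * i["quantity"] * (0.9 if i["quantity"] >= 10 else 1.0)
               for i in order)

def receipt(order):
    return "\n".join(f"{i['sku']} x{i['quantity']}" for i in order)

def process_order_refactored(order):
    validate(order)
    return total_price(order), receipt(order)
```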
You can read more about our Code health metric here: https://codescene.io/docs/guides/technical/code-health.html#code-health-identifies-factors-known-to-impact-maintenance-costs-and-delivery-risks
@ben_gend Yeah, "did I just create six months of tech debt in 20 minutes" is really worrying, as many developers don't even think about this impact. They see the larger commits, but their current task was solved...
Thanks for your question @ben_gend. CodeScene looks at some 25 different rules that drive complexity. You can read more about some of the smells here (https://docs.enterprise.codescene.io/latest/guides/technical/code-health.html#module-smells)
Really interesting timing on this. I've been using Claude Code heavily and the biggest issue isn't that the AI writes bad code per se, it's that it optimizes for "works now" without considering long-term maintainability. Functions get too long, coupling creeps in, and you don't notice until the PR is already 400 lines. Having code health checks integrated directly into the MCP layer means the AI gets feedback before it even shows you the result. Does this work as a preventive guardrail (blocking unhealthy suggestions) or more as a post-generation linter that flags issues for the developer to decide on?
@elijahbowlby It works as both. You can use it to review code health on demand (the code_health_review tool), before committing (the pre_commit_code_health_safeguard tool), and before pushing (the analyze_change_set tool). If you add instructions to your generic agent instructions file, it can do it on its own. An example `AGENTS.md` file is also available in our GitHub repository, here: https://github.com/codescene-oss/codescene-mcp-server/blob/main/docs/AGENTS-standalone.md.
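For the curious: since MCP is plain JSON-RPC under the hood, a tool invocation from the agent's side looks roughly like this. A minimal sketch in Python, where the "file_path" argument name is my assumption; the server's tools/list response gives the actual schema:

```python
import json

# Roughly what an MCP client sends when the agent invokes the review tool.
# NOTE: "file_path" is an assumed argument name for illustration only;
# query the server's tools/list endpoint for the real parameter schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "code_health_review",
        "arguments": {"file_path": "src/orders/checkout.py"},
    },
}
print(json.dumps(request, indent=2))
```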
Internally we use it during development to maintain perfect code quality, but also as a last step safety check before pushing our changes.
Does this help answer your question?
@elijahbowlby The cool thing is that the CodeHealth MCP becomes part of that inner agentic developer loop, catching any slip in maintainability early. Claude Code then self-corrects, refactors the offending code, and re-evaluates with the MCP.
I’ve used the MCP internally during development, and its safeguards kick in during virtually every coding session. That also saves a lot of tokens going forward.
Deterministic is doing a lot of work here and in the best way possible. In a world of AI-generated everything, having a non-LLM signal for code quality feels underrated. What does the scoring model actually look at — cyclomatic complexity, coupling, something proprietary?
@tadej_kosovel Deterministic is the only way in the world of non-deterministic AI, I think.
The scoring model looks at many things: module smells, function smells, and implementation smells. Those do include cyclomatic complexity and coupling, but there's a whole lot more that goes into it, and we keep continuously improving the metric as we go along. You can read more specific info on the CodeHealth metric here: https://codescene.io/docs/guides/technical/code-health.html#code-health-identifies-factors-known-to-impact-maintenance-costs-and-delivery-risks.
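To make one of those inputs concrete: cyclomatic complexity counts the independent paths through a function. A tiny illustrative sketch (my own example, not part of the scoring code):

```python
# Cyclomatic complexity = number of decision points + 1.
# This function has three decision points (one loop, two ifs), so CC = 4.
def shipping_cost(items):
    cost = 5.0
    for item in items:               # +1
        if item["fragile"]:          # +1
            cost += 2.0
        if item["weight_kg"] > 10:   # +1
            cost += 1.5
    return cost
```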
Does that help answer your question?
@tadej_kosovel Agree 100%. We really believe deterministic quality signals are key for the current LLMs.
I use AI-assisted coding a lot now. Actually, AI writes most of my code. One thing has become very clear: AI is great at producing a lot of code. But it amplifies the code quality of what is already in the code base. Bad code gets worse. Good code can stay good, but it is very much the responsibility of the developer to keep it good.
The combination of the CodeScene extension (free) and the CodeScene MCP makes this so much easier. The extension will surface potential problems instantly and show you code smells you probably want to address. The CodeScene MCP allows the coding agent to be aware of problems and get more details and context on how to fix them.
I love that the agent can end each session by asking the CodeScene MCP for a code review to see where it didn't quite clear the bar, and automatically correct itself.
I also use the MCP server to ask about code that I might think is too complex, or where I sense something is wrong but can't really put words on it. The MCP is so good at evaluating code quality and giving suggestions for improvements.
The more you work with AI assisted coding, the more important this product becomes. I highly recommend it and it is always the first thing that goes into custom instructions for the AI when I start working on a project.
@johan_nordberg Thanks a lot for your feedback!
I like that. It's a really important aspect of going agentic. Our research finds that AI requires even better code quality than humans do, not worse. The CodeHealth MCP allows us to pull that risk forward and strategically refactor code to make it AI-ready.
@johan_nordberg Couldn't agree more on the amplification effect, it's probably the most underrated risk in AI-assisted development right now.