Everyone in the software industry "knows" that code quality matters. But "knowing" isn't the same as knowing with data.
Before we built the CodeHealth MCP Server, we spent years building and validating the metric it runs on. That research is peer-reviewed, published at the International Conference on Technical Debt, and based on 39 proprietary production codebases across industries as varied as retail, finance, construction, and infrastructure, covering 40,000 source code modules in 14 programming languages.
Been using CodeScene for a while to improve code quality and keep things maintainable. Really excited to try the MCP server and see how it can take this further, especially with AI-assisted workflows. Great work on the launch!
CodeHealth MCP Server by CodeScene
@tajib_smajlovic Thank you so much for your support, our team appreciates it a lot. How reliable has AI-generated code been for you in production so far?
@romanela_p It’s quite reliable in production after a thorough review, but I still think AI-generated code needs the right tooling around it. AI-generated code tends to work well in cleaner parts of the codebase, but in more complex or legacy areas it can introduce issues that are easy to miss. That’s where CodeScene has been helpful for me, by tracking code health and helping catch problems early.
@tajib_smajlovic Hi Tajib, those are really good insights, and they match what we've seen in our research. When agents operate on unhealthy code, the defect risk increases by at least 60%. What we also saw, based on the patterns, is that the relationship is not linear. Our study included only "problematic" code rated ≥ 7.0 on our Code Health scale.
The research never touched the truly unhealthy code found in many legacy codebases, modules scoring 4, 3, or even 1. In very unhealthy code, breakage may become the default behaviour.
This is the risk the CodeHealth MCP removes when enabled in the AI workflow: the MCP is deterministic and continuously auto-reviews the generated code, flagging any potential code health issues. The agent is then "forced" into a refactoring loop until all the issues are resolved and the generated code is healthy enough. So the MCP guides the agent to ensure that the code is healthy, free from technical debt, and ready for production.
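Conceptually, that refactoring loop can be sketched as below. Note the function names (`generate`, `review`, `refactor`) are hypothetical placeholders for illustration, not the actual CodeHealth MCP API:

```python
# Conceptual sketch of a deterministic review-refactor loop.
# The callables passed in are placeholders, not the real MCP tools.

def review_refactor_loop(task, generate, review, refactor, max_rounds=5):
    """Keep refactoring until the code health review reports no issues."""
    code = generate(task)
    for _ in range(max_rounds):
        issues = review(code)          # deterministic code health review
        if not issues:
            return code                # healthy enough to ship
        code = refactor(code, issues)  # agent fixes the flagged issues
    return code                        # give up after max_rounds attempts
```

The key property is that the review step is deterministic, so the loop has a well-defined exit condition instead of relying on the agent's own judgment of "good enough".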
@tajib_smajlovic I'm glad you like the product!
@tajib_smajlovic Great to hear Tajib! Looking forward to hearing your thoughts on the MCP 🙏
@tajib_smajlovic Thanks for the feedback Tajib!
Thank you @tajib_smajlovic!
Very timely launch. A major theme at ICSE 2026 (https://conf.researchr.org/home/icse-2026) was how to add guardrails in agentic workflows. This MCP server is a meaningful step toward making structural code quality a commodity.
@mrksbrg Indeed! I'm excited for how far we can take this, and what other tools we could create to further improve software quality.
Insightful! @mrksbrg
@mrksbrg That's good news and I'm glad to hear that it's picked up as an important theme :)
I’ve tried it out and was quite happy with how easy it is to use. The installation was quick and the whole setup feels intuitive!
@freyawi We're glad you like it! If you have any feedback on how we could improve things further, we're all ears.
@freyawi Great to hear Freya 🙂
Thank you @freyawi, I'm glad to hear it!
@freyawi Thanks a lot, Freya! Happy to hear that.
I'm curious how you're actually handling this in practice: what does your workflow look like for reviewing or validating AI-generated code before it hits production?
@stefan_persson1 Everybody has a different development flow, of course, but I personally use it something like this: I create an initial prompt for the AI to work on some task. I've instructed it via `AGENTS.md` to always run a code health review after every change, and if health has degraded, to fix it on its own. This lets me focus on the task and not the code quality, which the CodeHealth MCP takes care of. Once I'm done with my task, I run the `analyze_change_set` tool to make sure that my feature branch doesn't have any degradations compared to the master branch, and if there are any, I ask the AI to fix those issues using CodeHealth MCP guidance.
This ensures the code itself is of high quality, but of course the tool can't evaluate architectural choices, so the very last review is still made by me - a human - to verify that everything looks good. I can focus on architectural analysis and no longer have to focus on the tedious code health parts, which is very liberating.
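For anyone curious, a minimal `AGENTS.md` instruction for this flow could look something like the snippet below. The wording is my own sketch, not an official template; only the `analyze_change_set` tool name comes from the MCP:

```markdown
## Code health

- After every code change, run the CodeHealth MCP review on the files you touched.
- If code health has degraded, refactor until the review passes before moving on.
- Before finishing a task, run the `analyze_change_set` tool against the master
  branch and fix any reported degradations.
```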
Asa.team
This is the right problem to be solving right now. Vibe coding is shipping a lot of code that works but that nobody will be able to maintain in 6 months.
The MCP angle is smart, putting code health signals directly in the context window where the agent can actually act on them rather than as a separate dashboard nobody checks. Does it surface refactor suggestions inline or just flag issues?
@ng_junsheng It flags issues along with accurate solutions, so the AI can act on the feedback without needing concrete code examples handed to it.
When we developed the CodeHealth MCP, we benchmarked raw Claude Code refactoring against MCP-guided refactoring. The result: a 2-5x improvement in how many code smells Claude Code could resolve. And the type of work changed too, from low-level improvements like variable renames to guided restructuring of the code.
Agents aren't lazy, they're just flying blind and have no incentive to do better.
Read the full thing here: https://codescene.com/blog/making-legacy-code-ai-ready-benchmarks-on-agentic-refactoring
@fredrik_ekstrand Indeed! What I like best is that the MCP takes away the pain of cleaning up poor AI-generated code and allows me to do my work in a more holistic way, letting me achieve more, not only in velocity but also in breadth. I no longer think in terms of code, but in terms of architectural specs, and it's been liberating to me as a generalist.
This is clearly needed. Agents are capable of writing excellent code, but left alone they choose not to.
I try to find ways to micromanage quality less and this is the best I’ve seen so far.
@johan_martinsson Interesting point about micromanaging, it actually helps you with exactly that.
@johan_martinsson1 Thank you! We think the CodeHealth MCP is the missing link in agentic programming. You should definitely give it a go!
@johan_martinsson1 Not having to micromanage quality is exactly the goal: agents should self-correct, not wait for a human to notice the mess.