CodeHealth MCP Server by CodeScene - Keep AI-generated code healthy and maintainable
CodeHealth MCP Server ensures agents and AI coding assistants write maintainable, production-ready code without introducing technical debt. Using deterministic CodeHealth feedback, it guides agents to spot risks, improve unhealthy code, and refactor toward clear quality targets. Run it locally and keep full control of your workflow while making legacy systems more AI-ready. The result is more reliable AI-generated code, safer refactoring, and greater trust in real engineering workflows.

Replies
CodeHealth MCP Server by CodeScene
A lot of developers have a negative view of AI-assisted or AI-generated code because they tried it at some point and it produced what can best be described as low-quality slop, turning them into glorified AI slop cleanup specialists. Nobody likes doing that, so they stopped using AI or formed a very negative view of it. I've been there myself.
With the CodeHealth MCP, though, you get a deterministic feedback loop that makes the AI self-correct the slop it creates, letting you think holistically about the task at hand instead of cleaning up bad AI-generated code.
I consider myself a fairly decent software engineer, but the CodeHealth MCP doesn't just remove the slop-cleaning part of my agentic workflow; it also lets me write better code than I did before. And since my pre-AI code was already fairly decent, that's saying something. I truly cannot envision doing agentic programming without the CodeHealth MCP anymore. It's either that or I'd much rather write code without AI again.
Do you have similar experiences?
CodeHealth MCP Server by CodeScene
@askonmm Totally agree, it's underrated. The "asking an LLM if LLM code is good" loop has some obvious blind spots.
I’ve tried it out and was quite happy with how easy it is to use. The installation was quick and the whole setup feels intuitive!
CodeHealth MCP Server by CodeScene
@freyawi We're glad you like it! If you have any feedback on how we could improve things further, we're all ears.
CodeHealth MCP Server by CodeScene
@freyawi Great to hear Freya 🙂
CodeHealth MCP Server by CodeScene
Thank you, @freyawi! I'm glad to hear it.
CodeHealth MCP Server by CodeScene
@freyawi Thanks a lot, Freya! Happy to hear that.
CodeHealth MCP Server by CodeScene
When we developed the CodeHealth MCP we benchmarked raw Claude Code refactoring against MCP-guided refactoring. The result: a 2-5x improvement in how many code smells Claude Code could resolve. The type of work changed too, from low-level improvements like variable renames to guided restructuring of the code.
Agents aren't lazy, they're just flying blind and have no incentive to do better.
Read the full thing here: https://codescene.com/blog/making-legacy-code-ai-ready-benchmarks-on-agentic-refactoring
CodeHealth MCP Server by CodeScene
@fredrik_ekstrand Indeed! What I like best is that the MCP takes away the pain of cleaning up poor AI-generated code and lets me work in a more holistic way, so I achieve more not only in velocity but also in breadth. I no longer think in terms of code but in terms of architectural specs, and as a generalist I've found that liberating.
Really interesting timing on this. I've been using Claude Code heavily and the biggest issue isn't that the AI writes bad code per se, it's that it optimizes for "works now" without considering long-term maintainability. Functions get too long, coupling creeps in, and you don't notice until the PR is already 400 lines. Having code health checks integrated directly into the MCP layer means the AI gets feedback before it even shows you the result. Does this work as a preventive guardrail (blocking unhealthy suggestions) or more as a post-generation linter that flags issues for the developer to decide on?
CodeHealth MCP Server by CodeScene
@elijahbowlby It works as both. You can use it to review code health on demand (code_health_review tool), before committing (pre_commit_code_health_safeguard tool), and before pushing (analyze_change_set tool). If you add instructions to your agent instructions file, it can do this on its own. An example `AGENTS.md` file is available in our GitHub repository, here: https://github.com/codescene-oss/codescene-mcp-server/blob/main/docs/AGENTS-standalone.md.
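For programmatic use, here's a minimal sketch of calling the on-demand review over MCP with the official Python SDK. The launch command and the argument name are assumptions on my part, so treat the server's own docs as the source of truth:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def review(path: str) -> None:
    # Hypothetical launch command; use whatever your MCP client config runs.
    params = StdioServerParameters(command="codescene-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool name is real (see above); the argument schema is a guess.
            result = await session.call_tool(
                "code_health_review",
                arguments={"file_path": path},
            )
            for block in result.content:
                print(block)

asyncio.run(review("src/payment_service.py"))
```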
Internally we use it during development to keep code health high, and as a last-step safety check before pushing our changes.
Does this help answer your question?
CodeHealth MCP Server by CodeScene
@elijahbowlby The cool thing is that the CodeHealth MCP becomes part of that inner agentic developer loop, catching any slip in maintainability early. Claude Code then self-corrects, refactors the offending code, and re-evaluates with the MCP.
I’ve used the MCP internally during development, and its safeguards kick in during virtually every coding session. That also saves a lot of tokens going forward.
Very timely launch. A major theme at ICSE 2026 (https://conf.researchr.org/home/icse-2026) was how to add guardrails in agentic workflows. This MCP server is a meaningful step toward making structural code quality a commodity.
CodeHealth MCP Server by CodeScene
@mrksbrg Indeed! I'm excited for how far we can take this, and what other tools we could create to further improve software quality.
CodeHealth MCP Server by CodeScene
Insightful! @mrksbrg
CodeHealth MCP Server by CodeScene
@mrksbrg That's good news and I'm glad to hear that it's picked up as an important theme :)
CodeHealth MCP Server by CodeScene
Another interesting use case for the CodeHealth MCP that we can dig deeper into is the ROI calculation.
It's built into the MCP via the code_health_refactoring_business_case tool.
The tool uses our validated statistical model and industry benchmarks to translate code health improvements into faster development and fewer defects. This makes it easier to justify refactoring investments to stakeholders!
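As a rough sketch of what calling it over MCP from Python could look like (the launch command and both argument names are hypothetical; the tool's published schema is the source of truth):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def business_case() -> None:
    # Hypothetical launch command; match your MCP client configuration.
    params = StdioServerParameters(command="codescene-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool name is from this thread; both arguments are hypothetical.
            result = await session.call_tool(
                "code_health_refactoring_business_case",
                arguments={"file_path": "src/billing.py", "target_code_health": 9.0},
            )
            print(result.content)  # estimated impact on speed and defects

asyncio.run(business_case())
```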
What are your thoughts?
@adam_tornhill_cs Anything I forgot to mention?
CodeHealth MCP Server by CodeScene
@romanela_p The built-in ROI calculation is powerful. Refactoring might be a hard sell to a PO/PM who's busy with new features. The ROI calculation puts a business value on refactoring.
And yes, it's based on CodeScene's peer-reviewed research where we developed a statistical model for translating Code Health deltas into business impact: faster and/or better.
CodeHealth MCP Server by CodeScene
@romanela_p Many engineers struggle to quantify important engineering aspects in terms of business impact, and thus often fail to convince their managers to green-light important refactoring work. The CodeHealth MCP's code_health_refactoring_business_case tool solves that problem.
Hi PH! I'm Adna, Developer Advocate at CodeScene.
I tested Claude, Copilot, and Cursor on the same legacy file and got the same result: all three passed the tests, and all three made the code worse - silently, with no signal telling them they had.
The problem isn't the model. It's that agents have no idea which parts of a codebase are already load-bearing and fragile. They write confidently into broken areas because nothing stops them.
With the MCP Server in the loop: same file, same task, 4.82 → 9.1. Iteratively. The agent verified the delta after each step before moving on. That behavioral shift, knowing where not to be reckless, is what actually changed. The server runs locally, is model-agnostic, and no code leaves your machine.
Happy to answer anything - especially if you've hit this problem yourself: how are you currently catching structural degradation in agent-assisted workflows?
CodeHealth MCP Server by CodeScene
One thing we found in our research is that AI tends to struggle the most in already complex, low Code Health codebases: it doesn't just generate code, it amplifies existing issues.
We found a 60% higher defect risk when applying AI coding tools to unhealthy code. Here is a link to our whitepaper, which is based on the research paper linked above.
Curious, how are you validating code quality when using AI tools today?
CodeHealth MCP Server by CodeScene
I'm curious how you are actually handling this in practice, what does your workflow look like for reviewing or validating AI-generated code before it hits production?
CodeHealth MCP Server by CodeScene
@stefan_persson1 Everybody has a different development flow, of course, but I personally use it something like this: I write an initial prompt for the AI to work on some task. I've instructed it via `AGENTS.md` to always run a code health review after every change and, if health has degraded, to fix it on its own. This lets me focus on the task rather than on code quality, which the CodeHealth MCP takes care of. Once I'm done with my task I run the `analyze_change_set` tool to make sure my feature branch has no degradations compared to the master branch, and if there are any, I ask the AI to fix them using CodeHealth MCP guidance.
This keeps the code itself healthy, but the MCP can't evaluate architectural choices, so the very last review is still made by me, a human, to verify that everything looks good. I can focus on architectural analysis and no longer have to sweat the tedious code health details, which is very liberating.
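For reference, here's a minimal sketch of the kind of `AGENTS.md` rules I mean. The wording is my own; the AGENTS-standalone.md example in the GitHub repository linked above is the canonical version:

```markdown
## Code health rules (CodeHealth MCP)

- After every code change, run the code_health_review tool on the files you touched.
- If code health degraded, refactor using the tool's guidance, then re-run the review.
- Before committing, run the pre_commit_code_health_safeguard tool.
- Before pushing, run the analyze_change_set tool against the target branch and fix any degradations.
```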
Code health metrics are crucial for maintainability. I'm curious how this integrates with existing CI/CD pipelines. Does it require specific build tools or can it work with any project structure?
CodeHealth MCP Server by CodeScene
@chen_amber It uses static analysis, so your project's build tools don't matter at all, and it works with any project structure.