CodeHealth MCP Server by CodeScene - Keep AI-generated code healthy and maintainable

by fmerian

CodeHealth MCP Server ensures agents and AI coding assistants write maintainable, production-ready code without introducing technical debt. Using deterministic CodeHealth feedback, it guides agents to spot risks, improve unhealthy code, and refactor toward clear quality targets. Run it locally and keep full control of your workflow while making legacy systems more AI-ready. The result is more reliable AI-generated code, safer refactoring, and greater trust in real engineering workflows.


Replies

Tijo Gaucher

deterministic feedback as the loop is the part that catches my eye — most coding agents just churn until tests pass. does CodeHealth surface the signal as a tool call result, or does it slot in as a pre-commit gate?

Asko Nõmm

@tijogaucher You can use it as both, really. In agentic programming you can instruct the agent to run a pre-commit code health safeguard tool before committing, run the analyze-changeset tool before pushing, and use the code health review tool during iterations. This ensures code health is always checked throughout the entire flow.
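That three-stage flow can be written down once in an agent instruction file so it applies on every task. A minimal sketch, assuming an `AGENTS.md` convention; the tool names below paraphrase the ones described above and are illustrative, not the server's exact identifiers:

```markdown
## Code health workflow (illustrative tool names)

- During iterations: run the code health review tool on any file you edit
  and fix flagged code health declines before moving on.
- Before committing: run the pre-commit code health safeguard tool; do not
  commit if it reports a drop in code health.
- Before pushing: run the analyze-changeset tool on the full change set and
  summarize any remaining risks.
```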

Hirokazu Yoshinaga

We have many non-engineers on our team, and they have started using AI agents to develop various tools. Whilst this is wonderful, we often find ourselves wondering whether it is appropriate to release these tools to the public. When we look at the actual products they have developed, they work perfectly well, but the database structure is a mess—it looks as though it has been cobbled together bit by bit.

Even if we ask engineers to review them, they are often too busy to find the time. In such situations, I believe CodeHealth MCP is a tool that can step in to perform reviews on behalf of engineers and help resolve these issues.

Asko Nõmm

@yoshinaga I have a non-technical friend who has managed to create entire SaaS mini-apps for his business using the CodeHealth MCP in combination with a few other tricks, such as instructing the AI in the `AGENTS.md` file to create tests and ensure high test coverage. The cool thing is that they don't have to understand what any of this means - just set it up once, prompt away towards their goals, and have a vastly reduced chance of defects. Of course, using a frontier model like Opus 4.6+ also helps, but the CodeHealth MCP keeps the code health in check and doesn't let it snowball into chaos.
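The "set it up once" part can live in that same `AGENTS.md` file. A minimal sketch of the kind of testing instructions described; the wording is illustrative, not taken from the product:

```markdown
## Testing

- For every new feature or bug fix, write automated tests alongside the change.
- Keep test coverage high; if coverage drops, add tests before moving on.
- Run the full test suite and the code health check before declaring a task done.
```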

Hirokazu Yoshinaga

@askonmm I believe this MCP is well-suited to the coming era, where even non-engineers will be able to bring their ideas to fruition on their own, provided they have the right concept.

I do have one question, though. At our company, even our engineers include ‘everything-claude-code’ and ‘skills/mattpocock’ in Claude Code—what is the main difference between these skills and the ones mentioned here?

Asko Nõmm

@yoshinaga Main difference is in the breadth of the instructions used. Developers will add lots of middleman tooling to make sure AI works the way they want, but it might not make sense to do so for a non-technical person who lacks the tooling know-how, so it's best to keep it simple. Make sure the code health is good, make sure the logic is covered with tests. That alone goes a long way. After all, the things that non-technical people make are rarely meant to go to production anyway, because at that point you need the judgement of a professional engineer, but for mini-apps for internal consumption or for themselves, it works well enough.

Hirokazu Yoshinaga

@askonmm It’s certainly wonderful that it manages to achieve so much whilst remaining so simple. I’m looking forward to seeing how it develops!

Ng Jun Sheng

This is the right problem to be solving right now. Vibe coding is shipping a lot of code that works but that nobody will be able to maintain in 6 months.

The MCP angle is smart, putting code health signals directly in the context window where the agent can actually act on them rather than as a separate dashboard nobody checks. Does it surface refactor suggestions inline or just flag issues?

Asko Nõmm

@ng_junsheng It flags issues, but with accurate solutions to those issues, so the AI can act on the feedback without needing concrete code examples.

Romain Petit

Really nice to see the great CodeScene tool as an MCP! But Kotlin seems not to be supported 😭