Everyone in the software industry "knows" that code quality matters. But "knowing" in scare quotes isn't the same as knowing with data.
Before we built the CodeHealth MCP Server, we spent years building and validating the metric it runs on. That research is peer-reviewed, published at the International Conference on Technical Debt, and based on 39 proprietary production codebases across industries as varied as retail, finance, construction, and infrastructure, covering 40,000 source code modules in 14 programming languages.
Lancepilot
CodeHealth MCP Server by CodeScene
@odeth_negapatan1 Thank you, Odeth!
It's important to have checks that verify AI-created code. You could have unit tests in place and instruct the AI to make sure those tests pass. You could also instruct the AI to always check that test coverage stays above a high threshold (at CodeScene we aim for 95%+); that way the AI can deterministically verify whether tests cover the logic it created. Finally, you could use our CodeHealth MCP, which can check for code quality issues and degradations, and perform uplifting and safeguarding.
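To make the "deterministic coverage check" idea above concrete, here is a minimal sketch of such a gate. The function name, the 95% threshold, and the pass/fail semantics are illustrative assumptions, not CodeScene specifics; in practice a tool like pytest-cov's `--cov-fail-under` flag plays this role in CI.

```python
def meets_coverage_gate(covered_lines: int, total_lines: int,
                        threshold: float = 95.0) -> bool:
    """Deterministic check: does line coverage meet the threshold?

    Hypothetical helper an AI agent could call after running the test
    suite, instead of judging coverage subjectively.
    """
    if total_lines == 0:
        return False  # nothing measurable: fail closed rather than open
    coverage_pct = 100.0 * covered_lines / total_lines
    return coverage_pct >= threshold

# Example: 96 of 100 lines covered passes a 95% gate,
# while 94 of 100 does not.
print(meets_coverage_gate(96, 100))  # True
print(meets_coverage_gate(94, 100))  # False
```

The point is that a boolean gate like this gives the AI an objective stop condition, rather than relying on it to self-assess test quality.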
Does this help answer your question?
CodeHealth MCP Server by CodeScene
Thanks @odeth_negapatan1, you should try it out!
CodeHealth MCP Server by CodeScene
@odeth_negapatan1
Thank you Odeth, really appreciate your kind words and looking forward to hearing your thoughts when you have tried it out :)
CodeHealth MCP Server by CodeScene
One thing we found in our research is that AI tends to struggle the most in already complex, low-CodeHealth codebases: it doesn't just generate code, it amplifies existing issues.
We found that there's a 60% higher defect risk when applying AI coding tools to unhealthy code. Here is a link to our whitepaper, which is based on the research paper linked above.
Curious: how are you validating code quality when using AI tools today?
Really nice to see the great CodeScene tool as an MCP! But Kotlin doesn't seem to be supported 😭