We created CodeHealth, a metric that predicts where AI coding agents will perform reliably and where they are likely to fail.
Productized in the CodeHealth™ MCP Server for safe, high-quality AI coding.
Without structural guidance, frontier models fix only ~20% of code health issues.
With MCP-augmented CodeHealth™ guidance, fix rates reach 90–100%.
60% lower defect risk when AI works on healthy code.
Run the MCP server locally, fully under your control. Model-agnostic, it supports AI assistants and agents out of the box.
As AI writes code, the server checks changes against CodeHealth™ signals, detecting risk and preventing tech debt.
Every change is checked automatically. If risk increases, feedback is returned so the AI adjusts and retries in real time.
The AI refactors for maintainability, not merely to pass tests. The code is re-evaluated until CodeHealth™ thresholds are met.
When AI completes the task, code is easier to review and evolve, so productivity lasts longer.
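The check-feedback-retry loop in the steps above can be sketched in a few lines of Python. This is an illustrative simulation only: the function names (`run_delta_review`, `apply_ai_edit`), the scoring heuristic, and the threshold are assumptions, not the actual CodeScene MCP tool API.

```python
# Hypothetical sketch of the MCP feedback loop: check a change, return
# feedback when risk increases, and let the AI adjust and retry.

def run_delta_review(code: str) -> int:
    """Stand-in Code Health scorer: a toy heuristic that penalizes length."""
    return max(1, 10 - code.count("\n") // 10)

def apply_ai_edit(code: str, feedback: str) -> str:
    """Stand-in for the AI revising its change after structured feedback."""
    # In reality the assistant would refactor; here we just shrink the change.
    lines = code.splitlines()
    return "\n".join(lines[: max(1, len(lines) // 2)])

def guarded_change(code: str, threshold: int = 9, max_retries: int = 5) -> str:
    """Re-evaluate the change until the Code Health threshold is met."""
    for _ in range(max_retries):
        score = run_delta_review(code)
        if score >= threshold:
            return code  # change accepted
        code = apply_ai_edit(code, f"Code Health {score} is below {threshold}")
    raise RuntimeError("Could not reach the Code Health threshold")

long_change = "\n".join(f"line_{i} = {i}" for i in range(100))
accepted = guarded_change(long_change)
```

The key design point is that the loop is deterministic on the measurement side: the same code always produces the same score, so the AI converges on an objective target rather than its own judgment.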
This package includes:
The CodeHealth™-aware MCP server continuously evaluates AI-generated changes against objective maintainability signals and feeds structured feedback back to the AI when risk increases. This creates a deterministic, self-correcting loop that delivers easy-to-evolve code.
AI refactoring quality improves when code is modular and easy to reason about. The MCP Server guides AI assistants through Code Health reviews, identifies targeted design issues, and enables refactoring in small, measurable steps, verified by CodeHealth™. For large legacy functions, CodeScene ACE can accelerate the initial restructuring into smaller, cohesive units.
CodeScene links CodeHealth™ scores to business outcomes via a validated statistical model. The MCP server exposes ROI impact on velocity, defect rates, and maintenance costs, so you can estimate how improving Code Health affects delivery and justify refactoring.
The CodeHealth™ MCP server is a local Model Context Protocol (MCP) service that allows AI coding assistants and agents to access CodeScene's CodeHealth analysis during development.
CodeHealth provides objective signals about maintainability and change risk, while the MCP server exposes those signals as actionable tools. This gives AI assistants the context they need to avoid introducing technical debt, safeguard AI-generated code, and propose meaningful improvements for more precise refactoring.
The CodeScene MCP Server runs fully locally. All analysis, including Code Health scoring, delta reviews, and business-case calculations, is performed on your machine, against your local repository. No source code or analysis data is sent to cloud providers, LLM vendors, or any external service.
For complete details, please see CodeScene’s full privacy and security documentation.
CodeScene MCP can work with any model your AI assistant supports, but we strongly recommend choosing a frontier model when your assistant offers a model selector (as in tools like GitHub Copilot).
Frontier models, such as Claude Sonnet, deliver far better rule adherence and refactoring quality, while legacy models like GPT-4.1 often struggle with MCP constraints. For a consistent, high-quality experience, select the newest available model.
Getting started with the CodeHealth MCP Server is straightforward.
See the documentation for detailed setup instructions.
Many legacy functions are too large and complex for reliable AI-assisted work. That leads to higher error rates, wasted tokens, and fragile changes. Without objective feedback, AI agents often end up rearranging complexity or making superficial improvements instead of meaningfully improving maintainability.
The MCP server changes that by giving the agent deterministic guidance through a code_health_review. The Code Health score provides a clear, measurable target, while the review explains the specific maintainability issues that need attention. This lets the agent build a structured refactoring plan based on evidence rather than guesswork.
The workflow is simple: review → plan → refactor → re-measure.
For very large legacy functions, the first step is often to break them into smaller, more cohesive units. That increases modularity and makes further AI-assisted refactoring far more reliable. The result is higher Code Health, clearer code intent, and a larger AI-ready surface where agents can work safely and effectively.
CodeHealth provides an objective signal about the maintainability and change risk of code. Through the MCP Server, agents can run tools like code_health_review to assess the quality of a file and identify concrete maintainability issues.
The Code Health score gives agents a measurable goal, while the review highlights specific problems to fix. This allows agents to plan structured refactorings instead of guessing at improvements.
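To make the "score as goal, issues as plan" idea concrete, here is a minimal Python sketch of how an agent could turn review findings into ordered refactoring steps. The shape of the `code_health_review` result (issue categories, score scale, target of 9.0) is an assumption for illustration, not CodeScene's actual output format.

```python
from dataclasses import dataclass

@dataclass
class ReviewIssue:
    function: str
    category: str   # e.g. "Complex Method", "Large Method" (assumed labels)
    details: str

def plan_refactoring(score: float, issues: list[ReviewIssue],
                     target: float = 9.0) -> list[str]:
    """Turn hypothetical review findings into measurable refactoring steps."""
    if score >= target:
        return []  # already healthy: nothing to plan
    steps = [f"Fix '{i.category}' in {i.function}: {i.details}" for i in issues]
    steps.append(f"Re-run code_health_review and verify score >= {target}")
    return steps

review = [
    ReviewIssue("process_order", "Complex Method", "reduce deep nesting"),
    ReviewIssue("process_order", "Large Method", "split into cohesive helpers"),
]
plan = plan_refactoring(score=6.5, issues=review)
```

Each step ends in a re-measurement, so the agent verifies progress against the score instead of guessing whether an edit helped.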
Read Agentic AI Coding: Best Practice Patterns for Speed with Quality.
The MCP protocol exposes powerful tools, but it does not define how agents should combine them into a workflow. When left on their own, AI agents often invoke tools inconsistently or skip important safeguards.
The CodeHealth™ MCP Server addresses this by using an AGENTS.md file that defines the intended workflow and decision logic for agents.
This file documents how agents should use MCP tools in sequence, for example:
Pull risk forward by running a code_health_review before making changes
Safeguard changes using pre-commit and pull-request checks
Enter refactoring loops when CodeHealth declines
By encoding these principles in AGENTS.md, teams ensure that AI agents follow consistent engineering practices instead of discovering guardrails by trial and error.
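The principles above could be encoded in an AGENTS.md along these lines. This excerpt is an illustrative sketch: the exact wording, rules, and tool sequencing are assumptions, not CodeScene's shipped file.

```markdown
# AGENTS.md (illustrative excerpt)

## Workflow
1. Before editing a file, run `code_health_review` on it to pull risk forward.
2. Make the change, then re-run the review as a pre-commit safeguard.
3. If Code Health declines, enter a refactoring loop: address the reported
   issues in small steps and re-measure until the score recovers.

## Rules
- Never finalize a change that lowers the Code Health score.
- Prefer small, verifiable refactorings over large rewrites.
```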