MCP Server

CodeHealth™ MCP Server

Enable AI coding assistants to detect and fix issues using CodeHealth™ guidance, keeping code maintainable and making legacy code AI-ready.
Copilot Chat MCP: codescene-code-health

CodeHealth™

We created CodeHealth, a metric that predicts where AI coding agents will perform reliably and where they are likely to fail.

MCP Server

Productized in the CodeHealth™ MCP Server for safe, high-quality AI coding.

20%

Without structural guidance, frontier models fix only ~20% of the code health issues.

90-100%

With MCP-augmented CodeHealth™ guidance, fix rates reach 90–100%.

60%

60% lower defect risk when AI works on healthy code.

How does it work?

STEP 1

Install the MCP server locally

Run the MCP server locally, fully under your control. Model-agnostic, it supports AI assistants and agents out of the box.

STEP 2

AI guided by CodeHealth™

As AI writes code, the server checks changes against CodeHealth™ signals, detecting risk and preventing tech debt.

STEP 3

Self-correcting feedback loop

Every change is checked automatically. If risk increases, feedback is returned so the AI adjusts and retries in real time.
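In pseudocode terms, the self-correcting loop looks roughly like this (an illustrative sketch only; `measure_code_health` and `ask_ai_to_revise` are hypothetical stand-ins, not actual MCP tool names):

```python
# Sketch of the self-correcting feedback loop (illustrative only).
# measure_code_health and ask_ai_to_revise are hypothetical stand-ins
# for the real MCP tooling and the AI assistant's revision step.

THRESHOLD = 9.5   # target Code Health for AI-ready code
MAX_RETRIES = 3   # give up and escalate after a few attempts

def self_correcting_loop(code, measure_code_health, ask_ai_to_revise):
    """Re-run the AI until the change meets the Code Health threshold."""
    for _attempt in range(MAX_RETRIES):
        score, issues = measure_code_health(code)
        if score >= THRESHOLD:
            return code  # change accepted
        # Feed the concrete maintainability issues back to the AI and retry.
        code = ask_ai_to_revise(code, feedback=issues)
    return None  # threshold not met; flag for human review
```

The key property is that feedback is structured and deterministic: the same issues produce the same guidance, so the AI converges instead of guessing.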

STEP 4

AI code is re-evaluated

The AI refactors for maintainability, not just to make tests pass. The code is re-evaluated until CodeHealth™ thresholds are met.

STEP 5

AI code you'd approve

When AI completes the task, code is easier to review and evolve, so productivity lasts longer.

Empowering the world's top engineering teams
Pricing

Keep AI-generated code maintainable

Scale safely, keep code quality high and increase delivery speed, while maintaining control.
Integrations

AI coding assistants & IDEs

Designed for agentic workflows and composable AI tooling, not tied to any single editor, assistant or model.
GitHub Copilot
Cursor
ChatGPT
Codeium
Windsurf
Claude Code
Also works with other AI coding assistants via MCP, including Amazon Q, Gemini Code Assist, Tabnine, Sourcegraph Cody and open-source tools.
JetBrains
Visual Studio Code
Visual Studio

Solving AI's 3 hardest problems

Safeguard AI output, uplift legacy code and measure the real impact of refactoring.

Safeguarding AI

The CodeHealth™-aware MCP server continuously evaluates AI-generated changes against objective maintainability signals and feeds structured feedback back to the AI when risk increases. This creates a deterministic, self-correcting loop that delivers easy-to-evolve code.

Uplift Unhealthy Code

AI refactoring quality improves when code is modular and easy to reason about. The MCP Server guides AI assistants through Code Health reviews, identifies targeted design issues, and enables refactoring in small, measurable steps, verified by CodeHealth™. For large legacy functions, CodeScene ACE can accelerate the initial restructuring into smaller, cohesive units.

ROI of Refactoring

CodeScene links CodeHealth™ scores to business outcomes via a validated statistical model. The MCP server exposes ROI impact on velocity, defect rates, and maintenance costs, so you can estimate how improving Code Health affects delivery and justify refactoring.

Get CodeScene Whitepaper

A large-scale study shows AI-generated changes fail far more often in unhealthy codebases, with defect risk rising by 30% or more. In the AI era, healthy code isn’t optional.

Frequently asked questions

Can't find the answer here?

The CodeHealth™ MCP server is a local Model Context Protocol (MCP) service that allows AI coding assistants and agents to access CodeScene's CodeHealth analysis during development. 

CodeHealth provides objective signals about maintainability and change risk, while the MCP server exposes those signals as actionable tools. This gives AI assistants the context they need to avoid introducing technical debt, safeguard AI-generated code, and propose meaningful improvements for more precise refactoring. 

In short: The CodeHealth MCP Server guides AI coding assistants using deterministic CodeHealth™ metrics so they can generate healthier code, refactor safely, and prevent technical debt.

The MCP Server connects AI tools to CodeScene’s CodeHealth™ analysis, allowing them to improve code based on objective signals about maintainability and change risk.

With the MCP Server, teams can safeguard AI-generated code, guide AI assistants toward meaningful refactoring based on objective feedback, simplify code reviews by enforcing maintainability standards, and build a business case for refactoring using built-in ROI calculations that translate code health improvements into measurable business outcomes.
Yes, the MCP server is designed to run in your local environment.

The CodeScene MCP Server runs fully locally. All analysis, including Code Health scoring, delta reviews, and business-case calculations, is performed on your machine, against your local repository. No source code or analysis data is sent to cloud providers, LLM vendors, or any external service.


For complete details, please see CodeScene’s full privacy and security documentation.

Yes. The CodeHealth™ MCP Server works with any AI coding assistant, including IDE-native tools such as GitHub Copilot, Cursor, and Windsurf, as well as agents like Claude.

Because it follows the Model Context Protocol (MCP) standard, it is compatible with any LLM, model, or agentic workflow that supports MCP. This allows teams to integrate CodeHealth™ insights into their preferred AI development setup without being tied to a specific editor, agent, or model.
Yes. The CodeHealth™ MCP server is designed for agentic workflows and composable AI tooling, not tied to any single editor, assistant, or model.
Yes. CodeScene supports 30+ programming languages, so the MCP server works across your polyglot codebase.

CodeScene MCP can work with any model your AI assistant supports, but we strongly recommend choosing a frontier model when your assistant offers a model selector (as in tools like GitHub Copilot).

Frontier models, such as Claude Sonnet, deliver far better rule adherence and refactoring quality, while legacy models like GPT-4.1 often struggle with MCP constraints. For a consistent, high-quality experience, select the newest available model.

Getting started with the CodeHealth MCP Server is straightforward:

  • Set your token as the CS_ACCESS_TOKEN environment variable. This allows the MCP Server to access CodeHealth analysis.
  • Install the MCP Server as an executable (via Homebrew for Mac/Linux, Windows installer, or manual download) or run it using Docker.
  • Connect the MCP Server to your AI assistant by following the setup instructions for your environment.
  • Add the AGENTS.md file to your repository. This file defines how AI agents should use the MCP tools and enforces safeguards for AI-assisted coding.

See the documentation for detailed setup instructions.
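As a hedged sketch, a pre-flight check along these lines can catch a missing token before the server starts. The CS_ACCESS_TOKEN variable is documented above; the executable name below is a placeholder, so substitute the binary name from your installation:

```python
# Pre-flight check before launching the MCP server (sketch).
# CS_ACCESS_TOKEN is the documented environment variable; the default
# binary name below is a placeholder, not the actual executable name.
import os
import subprocess

def launch_mcp_server(binary="codescene-mcp"):  # placeholder name
    token = os.environ.get("CS_ACCESS_TOKEN")
    if not token:
        raise RuntimeError(
            "Set CS_ACCESS_TOKEN so the MCP Server can access CodeHealth analysis"
        )
    # Hand off to the installed executable once the token is in place.
    return subprocess.Popen([binary])
```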

Yes. When you sign up for the CodeHealth MCP Server, a 4-week free trial begins automatically. You won’t be charged during the trial period. After the trial ends, the subscription continues as a paid plan unless you cancel.
The paid plan costs €8 or $9 per month. Customers in the United States are billed in USD, while customers in all other regions are billed in EUR. A 10% discount is available with annual billing.
Yes. You can cancel your CodeHealth MCP Server subscription at any time. Your access will remain active until the end of the current billing period.
The MCP Server prevents AI tools from introducing technical debt by surfacing maintainability issues such as high complexity, deep nesting, and low cohesion.

It enforces deterministic, objective code health metrics through the MCP Server, triggering a self-correcting refactoring loop when quality issues are detected. This ensures that every AI-generated change is evaluated and does not introduce additional risk. If a defect risk is detected, the MCP Server prompts the AI agent to adjust the code and then reassess it through a Code Health review, which is part of the mandatory workflow.

As a first line of defense, this automated feedback loop safeguards production code by preventing AI agents from introducing technical debt.

Many legacy functions are too large and complex for reliable AI-assisted work. That leads to higher error rates, wasted tokens, and fragile changes. Without objective feedback, AI agents often end up rearranging complexity or making superficial improvements instead of meaningfully improving maintainability.

The MCP server changes that by giving the agent deterministic guidance through a code_health_review. The Code Health score provides a clear, measurable target, while the review explains the specific maintainability issues that need attention. This lets the agent build a structured refactoring plan based on evidence rather than guesswork.

The workflow is simple: review → plan → refactor → re-measure.

For very large legacy functions, the first step is often to break them into smaller, more cohesive units. That increases modularity and makes further AI-assisted refactoring far more reliable. The result is higher Code Health, clearer code intent, and a larger AI-ready surface where agents can work safely and effectively.

CodeHealth provides an objective signal about the maintainability and change risk of code. Through the MCP Server, agents can run tools like code_health_review to assess the quality of a file and identify concrete maintainability issues.


The Code Health score gives agents a measurable goal, while the review highlights specific problems to fix. This allows agents to plan structured refactorings instead of guessing at improvements.
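To illustrate, a review result can be turned into an ordered plan along these lines (a sketch; the result shape below is hypothetical, so consult the tool's actual output schema):

```python
# Turning a code_health_review result into a refactoring plan (sketch).
# The dictionary shape below is hypothetical, not the tool's real schema.

def build_refactoring_plan(review):
    """Order maintainability issues into small, verifiable refactoring steps."""
    # Tackle the most severe issues first, one at a time,
    # re-measuring Code Health after each step.
    issues = sorted(review["issues"], key=lambda i: i["severity"], reverse=True)
    return [f"Refactor: {i['description']} (severity {i['severity']})"
            for i in issues]

review = {
    "score": 6.2,
    "issues": [
        {"description": "Deeply nested logic in parse()", "severity": 3},
        {"description": "Low cohesion in OrderService", "severity": 5},
    ],
}
plan = build_refactoring_plan(review)
# The highest-severity issue comes first in the plan.
```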


Read Agentic AI Coding: Best Practice Patterns for Speed with Quality.

Unhealthy code is not AI-ready. Low Code Health increases the likelihood that agents fail at their task or introduce defects, and at best burn excess tokens.

We know from our peer-reviewed research that AI performs better in healthy code; in fact, defect risk increases by at least 60% when the code is unhealthy. Aim for AI-ready code with a Code Health of at least 9.5, ideally 10. Code that is not yet AI-friendly needs to be refactored and uplifted before attempting to implement features via agents.

The AI-generated code is evaluated by the code_health_review tool, via the MCP server, which assesses the quality of a file and identifies concrete maintainability issues.

Read the research
The MCP Server introduces automated quality safeguards into agentic workflows.

Agents can continuously run a code_health_review, verify changes with a pre_commit_code_health_safeguard, and perform a full change analysis before opening a pull request.

These safeguards detect quality regressions and automatically push agents into refactoring loops until maintainability issues are resolved.

The MCP protocol exposes powerful tools, but it does not define how agents should combine them into a workflow. When left on their own, AI agents often invoke tools inconsistently or skip important safeguards.

The CodeHealth™ MCP Server addresses this by using an AGENTS.md file that defines the intended workflow and decision logic for agents.

This file documents how agents should use MCP tools in sequence, for example:

  • Pull risk forward by running a code_health_review before making changes

  • Safeguard changes using pre-commit and pull-request checks

  • Enter refactoring loops when CodeHealth declines

By encoding these principles in AGENTS.md, teams ensure that AI agents follow consistent engineering practices instead of discovering guardrails by trial and error.
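As an illustration, an AGENTS.md encoding this workflow might contain entries like the following (a sketch, not the file CodeScene ships; see the documentation for the actual content):

```markdown
# AGENTS.md (illustrative sketch, not the shipped file)

## Workflow for AI agents
1. Before editing a file, run `code_health_review` to pull risk forward.
2. After making changes, run `pre_commit_code_health_safeguard`.
3. If Code Health declines, enter a refactoring loop: address the
   reported issues, then re-run the review until the threshold is met.
4. Perform a full change analysis before opening a pull request.
```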

CodeScene uses CodeHealth™ as a proxy for AI readiness. CodeHealth™ is an aggregated metric based on 25+ structural maintainability factors scanned from the source code, measuring how easy code is to understand, modify, and evolve. These factors correlate with increased maintenance costs and a higher risk of defects, as evaluated and proven in our peer-reviewed research “The Business Impact of Code Quality”. Read more.

Code with high CodeHealth scores is typically modular and easier to change, which makes it safer and more reliable for AI agents to work with.

Our Code Health score ranges from 1 to 10. Healthy, low-risk code scores 9 or above; AI-ready code requires at least 9.5, ideally a perfect 10.

Problematic code, with increased maintenance effort and growing technical debt, scores between 4 and 8.9. Unhealthy code scores between 1 and 3.9 and signals severe technical debt: code that is expensive and risky to maintain.
You want to aim for a Code Health of at least 9.5, ideally a perfect 10.0 for AI-ready code.
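The bands above can be summarized in a small helper. This is a sketch for illustration; the band names follow the text, but the function is not part of the product:

```python
# Classify a Code Health score into the bands described above (sketch).
# The thresholds mirror the text: 9.5+ AI-ready, 9+ healthy,
# 4-8.9 problematic, 1-3.9 unhealthy.

def classify_code_health(score):
    if not 1.0 <= score <= 10.0:
        raise ValueError("Code Health scores range from 1 to 10")
    if score >= 9.5:
        return "AI-ready"     # aim here before agentic work
    if score >= 9.0:
        return "healthy"      # low risk
    if score >= 4.0:
        return "problematic"  # increased maintenance effort, growing debt
    return "unhealthy"        # severe technical debt
```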

In our research “Code for Machines, Not Just Humans: Quantifying AI-Friendliness with Code Health Metrics” we show that when AI-coding assistants and agents operate on unhealthy code, the defect risk increases by at least 60%. As the code health decreases, we also see that defect rates rise sharply in deeply unhealthy, tangled code. Agents get confused by the same patterns as humans.

Read the research