
Customer Health Score: How to Build One That Actually Predicts Churn

Customer health scores are among the most common CS infrastructure projects, and among the most commonly built wrong. The typical pattern: a CS leader designs a scoring model based on what feels important, assigns weights based on intuition, puts it in a spreadsheet or their CS platform, and starts reporting green/yellow/red accounts to leadership. Six months later, accounts that were green churn. Accounts that were yellow renew. The score has no predictive value and everyone knows it.

A customer health score that doesn't predict churn isn't a health score — it's a snapshot of CS team effort masquerading as customer insight. Here's how to build one that actually works.

Start With Churn, Not Signals

The fatal mistake in health score design is starting with signals ("what data do we have access to?") rather than outcomes ("what actually preceded churn in our historical data?"). A health score is a predictive model. Like any predictive model, it should be calibrated against historical outcomes.

Before you design your score, do this analysis: pull every account that churned in the last 18 months. For each one, look at the 90 days before the churn decision. What was their product usage doing? Were they submitting support tickets? Did their executive sponsor change? Were they engaging with QBRs? Were they late on payments?

The signals that appeared consistently before churn are your leading indicators. The signals that appeared randomly or not at all aren't worth weighting. This is the only way to build a health score that has actual predictive validity.
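The lookback analysis above can be sketched in a few lines of pandas. Everything here is hypothetical: the `accounts` and `events` tables, their column names, and the sample data are placeholders for whatever your CRM and product analytics actually export.

```python
from datetime import timedelta
import pandas as pd

# Hypothetical inputs: churned accounts with churn dates, and one row
# per observed signal event (usage drops, sponsor changes, etc.).
accounts = pd.DataFrame({
    "account_id": ["a1", "a2", "a3"],
    "churn_date": pd.to_datetime(["2024-06-01", "2024-08-15", "2024-11-30"]),
})
events = pd.DataFrame({
    "account_id": ["a1", "a1", "a2", "a3", "a3"],
    "event_date": pd.to_datetime(
        ["2024-04-10", "2024-05-20", "2024-08-01", "2024-10-05", "2023-01-01"]
    ),
    "signal": ["usage_drop", "sponsor_change", "usage_drop",
               "usage_drop", "usage_drop"],
})

# Keep only signals observed in the 90 days before each churn decision.
window = accounts.merge(events, on="account_id")
in_window = window[
    (window["event_date"] >= window["churn_date"] - timedelta(days=90))
    & (window["event_date"] < window["churn_date"])
]

# Share of churned accounts that showed each signal pre-churn:
# the signals with the highest share are your leading indicators.
prevalence = in_window.groupby("signal")["account_id"].nunique() / len(accounts)
print(prevalence.sort_values(ascending=False))
```

In this toy data, a usage drop preceded every churn while a sponsor change preceded only one, which is exactly the kind of ranking the weighting step needs.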

The bias trap: Your historical churn data is biased toward customers whose churn risk was already visible — the accounts that escalated, that sent complaints, that asked hard questions. The hardest churn to predict is silent churn: the customer who says nothing and simply doesn't renew. Your health score needs to catch those accounts, which means the signals need to be behavioral, not self-reported.

The Signals Worth Tracking

Not all signals are created equal. Rank them by how strongly they correlated with churn in your historical analysis. Common high-signal indicators:

- Product usage drop (frequency and depth of use trending down)
- Executive sponsor change or departure
- Support ticket patterns (a spike in escalations, or sudden silence)
- QBR engagement (declined or no-showed reviews)
- Contract and payment signals (late payments, downgrade requests)

Signals with lower predictive value that get overweighted in many health scores: NPS (good for aggregate, weak for individual account prediction), # of features used (unless depth of usage is the actual driver of value), and time since last CSM outreach (measures CS activity, not customer health).

Weighting and Scoring Mechanics

Once you have your signals, assign weights based on their historical correlation with churn. If usage drop preceded 80% of churns in your analysis, it should carry the most weight. If executive sponsor change preceded 40% of churns, it carries less weight but still matters.

A simple approach: assign each signal a maximum score (usage: 30 points, sponsor change: 20 points, support tickets: 15 points, QBR attendance: 10 points, contract signals: 25 points). Green = 75–100 points. Yellow = 50–74 points. Red = below 50.
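The point-based model above can be sketched as a small scoring function. The maximum points mirror the article's example weights; the assumption here is that each raw metric has already been normalized to a 0.0–1.0 "health" fraction per signal (1.0 = fully healthy), which is one of several reasonable ways to feed the weights.

```python
# Max points per signal, matching the example weights in the text.
MAX_POINTS = {
    "usage": 30,
    "sponsor_change": 20,
    "support_tickets": 15,
    "qbr_attendance": 10,
    "contract": 25,
}

def health_score(signal_health: dict) -> tuple:
    """Map per-signal health fractions (0.0-1.0) to a score and a band."""
    score = round(
        sum(MAX_POINTS[s] * signal_health.get(s, 0.0) for s in MAX_POINTS)
    )
    if score >= 75:
        band = "green"
    elif score >= 50:
        band = "yellow"
    else:
        band = "red"
    return score, band

# Example: healthy usage and contract, but a recent sponsor change
# and poor QBR attendance pull the account into yellow.
print(health_score({
    "usage": 1.0, "sponsor_change": 0.0, "support_tickets": 1.0,
    "qbr_attendance": 0.2, "contract": 1.0,
}))  # → (72, 'yellow')
```

Keeping the weights in one dictionary makes the quarterly recalibration step a data change rather than a code change.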

Test your model against historical data before deploying it. Take your churned accounts and score them using your model as of 90 days before they churned. What percentage were yellow or red? That's your recall rate. If it's below 60%, your model needs refinement.
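The backtest reduces to one ratio. In this sketch, `churned_scores_90d` stands in for the scores your model would have produced for each churned account as of 90 days pre-churn (the account IDs and scores are made up):

```python
# Hypothetical scores for churned accounts, computed as of 90 days
# before each account's churn date.
churned_scores_90d = {"a1": 42, "a2": 68, "a3": 81, "a4": 55, "a5": 90}

# An account counts as flagged if it was yellow or red (score < 75).
flagged = [acct for acct, score in churned_scores_90d.items() if score < 75]

recall = len(flagged) / len(churned_scores_90d)
print(f"recall: {recall:.0%}")  # below 60% means the model needs refinement
```

Here three of five churned accounts were flagged in time (60% recall), right at the threshold the text treats as the minimum acceptable.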

Making the Score Actionable

A health score only creates value when it drives intervention. Build the playbook alongside the score: for each drop into yellow or red, define who is alerted, what action they take, and on what timeline.

Track your save rate: of accounts that entered yellow or red, what percentage renewed? That's the measure of whether your intervention playbook works. Review it quarterly and adjust.
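The save-rate metric is the same shape as the recall check, just pointed at outcomes after intervention. A minimal sketch, assuming a hypothetical `at_risk` mapping of flagged accounts to their renewal outcome:

```python
# Accounts that entered yellow or red this period, mapped to whether
# they ultimately renewed (hypothetical data).
at_risk = {"a1": True, "a2": False, "a3": True, "a4": True}

save_rate = sum(at_risk.values()) / len(at_risk)
print(f"save rate: {save_rate:.0%}")  # review quarterly and adjust
```

A falling save rate with stable recall points at the playbook; falling recall with a stable save rate points back at the scoring model.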

Connect the health score to your churn prediction system — the score is the detection mechanism, the playbook is the response, and the outcome tracking is how you improve both over time.

Build a CS infrastructure that catches risk before it becomes churn.

I help RevOps and CS teams design health scoring systems, intervention playbooks, and retention infrastructure that actually works.

Talk to Gage →