Designing Useful Measurement Systems

You measure what matters, right? Revenue, user growth, engagement, efficiency. You track KPIs, build dashboards, review metrics weekly. You're data-driven. Yet decisions don't improve. Teams game the numbers. Efforts misalign. The measurement system that should guide you creates confusion instead.

The problem isn't measuring; it's measuring badly. Most measurement systems suffer from predictable failures: too many metrics (nothing stands out as important), wrong metrics (activity measured instead of outcomes), gaming-prone metrics (the number gets optimized, not the goal), or disconnected metrics (no relationship to strategy). A useful measurement system does the opposite: it focuses attention, reveals truth, resists gaming, and actually improves decisions.

Designing measurement systems that work requires understanding what makes metrics useful, how systems fail, and how to build frameworks that inform rather than mislead.


What Makes a Measurement System Useful?

The Purpose of Measurement

Not to track everything. To improve decisions and actions.

A useful measurement system:

  • Clarifies what success looks like
  • Reveals when you're on or off track
  • Informs resource allocation
  • Enables learning and improvement
  • Aligns team efforts

A useless measurement system:

  • Generates reports no one uses
  • Measures activity without outcomes
  • Creates perverse incentives
  • Obscures reality behind metrics
  • Diverts effort to gaming numbers

Characteristics of Useful Measurement Systems

| Characteristic | Why It Matters |
| --- | --- |
| Aligned with strategy | Metrics must connect to actual goals, not proxy activities |
| Actionable | Data should inform specific decisions; if no action possible, why measure? |
| Timely | Data arrives when decisions are made, not weeks later |
| Balanced | Multiple perspectives prevent over-optimization of one dimension |
| Simple | Few, clear metrics beat many confused ones |
| Gaming-resistant | Hard to manipulate without actual improvement |
| Leading and lagging | Predict future (leading) and confirm results (lagging) |

The Fundamental Tension: Comprehensiveness vs. Focus

The Comprehensive Measurement Trap

Natural impulse: Measure everything that might matter.

Result:

  • 50+ metrics tracked
  • Nobody knows which matter most
  • Cognitive overload
  • Everything measured, nothing managed

Problem: When everything is important, nothing is important.


Focus Beats Comprehensiveness

A recurring finding in the performance-measurement literature: organizations with 3-7 key metrics per goal tend to outperform those tracking 20+.

Why focus works:

| Focused System (3-7 metrics) | Comprehensive System (20+ metrics) |
| --- | --- |
| Clear priorities | Confused priorities |
| Memorable | Forgettable |
| Attention concentrated | Attention diffused |
| Gaming visible | Gaming hidden in noise |
| Actionable insights | Overwhelming data |

Rule: If you can't remember your key metrics, you have too many.


The 80/20 of Measurement

Principle: 20% of metrics provide 80% of decision value.

Implication: Identify the critical few and track them rigorously. Check the rest only occasionally, or ignore them.

Example:

| Organization | Critical Few Metrics | Secondary/Occasional |
| --- | --- | --- |
| SaaS company | MRR growth, net revenue retention, CAC:LTV | 20+ other metrics (track quarterly) |
| Hospital | Patient outcomes, readmission rate, safety incidents | Operational efficiency metrics |
| University | Graduation rate, job placement, research output | Countless process metrics |

The discipline: Resisting the urge to promote everything to "key metric" status.
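
To make the "critical few" concrete, here is a minimal sketch of how two of them, CAC and LTV, compress several raw numbers into one decision signal. The figures are hypothetical; the formulas are the conventional SaaS definitions.

```python
# Hypothetical quarterly figures; formulas are the standard SaaS definitions.
marketing_spend = 120_000   # sales + marketing spend this quarter ($)
new_customers = 300         # customers acquired this quarter
arpu = 99                   # average revenue per user per month ($)
gross_margin = 0.80         # fraction of revenue kept after cost of service
monthly_churn = 0.02        # fraction of customers lost per month

cac = marketing_spend / new_customers       # cost to acquire one customer
ltv = arpu * gross_margin / monthly_churn   # lifetime gross profit per customer

print(f"CAC = ${cac:,.0f}, LTV = ${ltv:,.0f}, LTV:CAC = {ltv / cac:.1f}")
# CAC = $400, LTV = $3,960, LTV:CAC = 9.9
```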


Step 1: Start With Strategy

Metrics Must Connect to Goals

Broken approach:

  • Pick metrics because they're measurable
  • Track metrics because competitors do
  • Measure what's easy to measure

Effective approach:

  • Define strategic goals
  • Identify drivers of those goals
  • Measure drivers

The Strategy-Metrics Cascade

| Level | Question | Example |
| --- | --- | --- |
| Mission | Why do we exist? | "Make knowledge accessible" |
| Strategic Goal | What does success look like? | "Be primary resource for 10M learners" |
| Key Driver | What causes goal achievement? | "Content quality + discoverability" |
| Metric | How do we measure the driver? | "Content depth score, organic traffic, retention rate" |

Alignment test: Can you trace each metric back to a strategic goal? If not, why measure it?
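
The alignment test can even be mechanized. A minimal sketch (all names illustrative) that flags any metric that does not trace back through a driver to a goal:

```python
# Minimal sketch of the alignment test: every metric must map to a driver,
# and every driver to a strategic goal. All names are illustrative.
goals = {"reach_10m_learners": ["content_quality", "discoverability"]}

metrics = {
    "content_depth_score": "content_quality",
    "organic_traffic": "discoverability",
    "retention_rate": "content_quality",
    "twitter_followers": None,  # traces to no driver: fails the alignment test
}

drivers = {d for driver_list in goals.values() for d in driver_list}
for metric, driver in metrics.items():
    status = "OK" if driver in drivers else "ORPHAN - why measure it?"
    print(f"{metric:20s} -> {driver} [{status}]")
```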


Common Misalignment Problems

| Problem | Example | Fix |
| --- | --- | --- |
| Activity metrics | "Articles published" | Measure outcomes: "Knowledge gained (retention, application)" |
| Vanity metrics | "Total registered users" | Measure engagement: "Active users, completion rates" |
| Lagging only | "Annual revenue" | Add leading: "Pipeline velocity, win rate" |
| One-dimensional | "Revenue only" | Add: "Customer satisfaction, product quality" |

Step 2: Identify Key Performance Drivers

What Drives Success?

Critical question: What factors, if improved, would most advance strategic goals?

Framework:

| Goal | Key Drivers | How to Identify |
| --- | --- | --- |
| Revenue growth | New customer acquisition, retention, expansion | Historical analysis, cohort studies |
| Customer satisfaction | Product quality, support responsiveness, ease of use | Surveys, correlation analysis |
| Operational efficiency | Process bottlenecks, automation level, error rates | Value stream mapping, time studies |

Leading vs. Lagging Indicators

Lagging indicators:

  • Measure results
  • Historical (what happened)
  • Hard to influence directly
  • Examples: Revenue, profit, market share

Leading indicators:

  • Predict future results
  • Forward-looking
  • Actionable
  • Examples: Sales pipeline, customer retention, product quality

A balanced system needs both:

| Lagging (Outcome) | Leading (Driver) |
| --- | --- |
| Revenue | Sales pipeline value, win rate |
| Customer satisfaction | Support ticket resolution time, product bugs |
| Employee retention | Employee engagement scores |
| Market share | Product quality ratings, brand awareness |

Rule: If a system has only lagging indicators, you know your results but have no levers to improve them.
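
To see why the pairing matters, here is a minimal sketch (hypothetical figures) of a leading indicator being used to act before the lagging one arrives:

```python
# Minimal sketch: pipeline value x win rate forecasts revenue early enough
# to intervene; booked revenue alone arrives too late. Figures are hypothetical.
pipeline_value = 500_000    # total value of open opportunities ($)
win_rate = 0.25             # historical fraction of pipeline that closes
quarterly_target = 150_000  # revenue goal ($)

expected_revenue = pipeline_value * win_rate
gap = quarterly_target - expected_revenue

if gap > 0:
    print(f"Forecast ${expected_revenue:,.0f}; ${gap:,.0f} short -> grow pipeline now")
else:
    print(f"Forecast ${expected_revenue:,.0f}; on track")
```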


Step 3: Select Core Metrics

The Selection Process

For each strategic goal:

  1. Identify 2-4 key drivers
  2. For each driver, select 1-2 metrics
  3. Result: 3-7 metrics per goal

Example: SaaS Company's Growth Goal

| Driver | Metric 1 | Metric 2 |
| --- | --- | --- |
| Acquisition | New MRR | CAC (Customer Acquisition Cost) |
| Retention | Net Revenue Retention | Churn rate |
| Expansion | Expansion MRR | % customers expanding |

Total: 6 core metrics
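
As a minimal sketch of how two of these six are computed (standard definitions, hypothetical monthly figures):

```python
# Net Revenue Retention and customer churn, using the standard definitions.
starting_mrr = 100_000  # MRR from existing customers at start of month ($)
expansion = 8_000       # upgrades from existing customers ($)
contraction = 2_000     # downgrades ($)
churned_mrr = 3_000     # MRR lost to cancellations ($)
customers_start = 500
customers_lost = 10

nrr = (starting_mrr + expansion - contraction - churned_mrr) / starting_mrr
churn_rate = customers_lost / customers_start

print(f"NRR = {nrr:.1%}")           # 103.0%: expansion outpaces churn
print(f"Churn = {churn_rate:.1%}")  # 2.0%
```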


Criteria for Good Metrics

A good metric is:

| Criterion | Definition | Example |
| --- | --- | --- |
| Understandable | Anyone can grasp meaning | "Customer retention %" vs. "Complex cohort survival index" |
| Comparable | Trends over time, benchmarks | Month-over-month, industry comparison |
| Ratio or rate | Normalized (not absolute) | "Conversion rate" better than "conversions" |
| Behavior-changing | Influences decisions | Revenue per customer → focus on expansion |

Source: Lean Analytics by Croll & Yoskovitz


The SMART Metric Test

Metrics should be:

| Attribute | Question | Bad Example | Good Example |
| --- | --- | --- | --- |
| Specific | Precisely defined? | "User engagement" | "Daily active users (logged in + action)" |
| Measurable | Can be quantified? | "Brand strength" | "Net Promoter Score" |
| Actionable | Can you influence it? | "Market conditions" | "Sales conversion rate" |
| Relevant | Connects to goal? | "Page views" (vanity) | "Content completion rate" (engagement) |
| Time-bound | Has update frequency? | "Eventually" | "Updated weekly" |
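
To show what "specific" buys you, here is a minimal sketch of the DAU definition from the table, computed from a hypothetical event log:

```python
from collections import defaultdict

# A user counts as "daily active" only if they logged in AND took at least
# one qualifying action that day. Events are hypothetical.
events = [
    ("2024-05-01", "alice", "login"), ("2024-05-01", "alice", "edit_doc"),
    ("2024-05-01", "bob",   "login"),  # login only: not active
    ("2024-05-02", "alice", "login"), ("2024-05-02", "alice", "comment"),
]
QUALIFYING = {"edit_doc", "comment", "share"}

by_day = defaultdict(lambda: defaultdict(set))
for day, user, action in events:
    by_day[day][user].add(action)

for day, users in sorted(by_day.items()):
    dau = sum(1 for acts in users.values() if "login" in acts and acts & QUALIFYING)
    print(day, "DAU =", dau)  # 2024-05-01 DAU = 1, 2024-05-02 DAU = 1
```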

Step 4: Balance Multiple Perspectives

The Balanced Scorecard Framework

Problem: Over-optimization of one dimension damages others.

Solution: Measure across multiple perspectives.

Kaplan & Norton's Balanced Scorecard (1992):

| Perspective | Questions | Example Metrics |
| --- | --- | --- |
| Financial | How do we look to shareholders? | Revenue growth, profitability, ROI |
| Customer | How do customers see us? | Satisfaction, retention, NPS |
| Internal Process | What must we excel at? | Cycle time, quality, innovation rate |
| Learning & Growth | How can we improve? | Employee skills, engagement, R&D investment |

Key insight: Excellence in all four predicts long-term success; optimizing only financial metrics often destroys value.


Example: Hospital Measurement System

Balanced approach:

| Dimension | Metric | Why |
| --- | --- | --- |
| Clinical outcomes | Mortality rate, complication rate | Core mission |
| Patient experience | Satisfaction scores, wait times | Quality of care |
| Operational | Bed utilization, procedure cost | Efficiency |
| Staff | Nurse turnover, training hours | Capability |
| Financial | Operating margin | Sustainability |

Prevents: Cutting costs at expense of outcomes, or maximizing satisfaction at expense of financial viability.


Step 5: Build Gaming Resistance

Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."

Mechanism:

  • People optimize for metric
  • Metric diverges from underlying goal
  • Metric becomes meaningless

Examples:

| Metric as Target | Gaming Behavior | True Goal Undermined |
| --- | --- | --- |
| Call center: Calls handled | Rush customers off phone | Customer satisfaction |
| Hospital: Mortality rate | Refuse high-risk patients | Patient care |
| Software: Lines of code | Write verbose code | Code quality |
| Sales: Number of deals | Close small, unprofitable deals | Revenue quality |

Strategies to Reduce Gaming

Strategy 1: Use Complementary Metrics

Approach: Pair metrics that counterbalance each other.

| Metric A (Can Be Gamed) | Metric B (Prevents Gaming) | Effect |
| --- | --- | --- |
| Quantity (calls handled) | Quality (customer satisfaction) | Can't rush if quality measured |
| Speed (response time) | Accuracy (error rate) | Can't be fast and sloppy |
| Revenue | Customer acquisition cost | Can't buy revenue at any price |
| Growth | Retention | Can't churn through customers |
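
Counterbalancing can also be made explicit in a combined score. As an illustration only (hypothetical data; the geometric mean is one possible scheme, not a standard), combining normalized quantity and quality means neither can be gamed alone:

```python
import math

# Score agents on quantity AND quality; the geometric mean penalizes imbalance,
# so maxing call volume while tanking satisfaction does not pay. Hypothetical data.
agents = {
    "rusher":   {"calls_per_day": 60, "csat": 0.45},  # games volume
    "balanced": {"calls_per_day": 35, "csat": 0.90},
}
MAX_CALLS = 60  # normalization ceiling for this illustration

for name, m in agents.items():
    quantity = m["calls_per_day"] / MAX_CALLS  # 0..1
    quality = m["csat"]                        # 0..1
    score = math.sqrt(quantity * quality)      # geometric mean
    print(f"{name:9s} quantity={quantity:.2f} quality={quality:.2f} score={score:.2f}")
# rusher    quantity=1.00 quality=0.45 score=0.67
# balanced  quantity=0.58 quality=0.90 score=0.72
```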

Strategy 2: Focus on Outcomes, Not Outputs

| Output (Gameable) | Outcome (Meaningful) |
| --- | --- |
| Features shipped | Customer problems solved |
| Marketing campaigns run | Leads generated, conversion rate |
| Training hours delivered | Skills demonstrated, performance improvement |
| Reports produced | Decisions informed, actions taken |

Principle: Measure results, not activities.


Strategy 3: Maintain Qualitative Judgment

Don't rely solely on quantitative metrics.

Hybrid approach:

| Quantitative Metric | Qualitative Assessment |
| --- | --- |
| Sales conversion rate | Win/loss analysis: why we won/lost |
| Customer satisfaction score | Customer interviews: what matters |
| Code quality metrics | Peer code review: actual quality judgment |

Reason: Numbers are gameable; human judgment (properly structured) is harder to fool.


Strategy 4: Rotate or Evolve Metrics

When a metric becomes a target:

  • Gaming strategies develop
  • Metric loses predictive power

Solution: Periodically change what you measure

Example: Google reportedly rotates quality metrics to prevent SEO gaming.


Step 6: Set Appropriate Measurement Frequency

Match Frequency to Decision Cycle

Principle: Measure as often as you need to make decisions, no more.

| Metric | Typical Frequency | Why |
| --- | --- | --- |
| Financial results | Monthly/Quarterly | Slow-moving; decision cycle is monthly |
| Website traffic | Daily/Weekly | Fast-moving; can react quickly |
| Customer satisfaction | Quarterly | Changes slowly; surveys have cost |
| Employee engagement | Annually/Biannually | Slow to change; survey fatigue issue |

The Noise vs. Signal Trade-off

High-frequency measurement:

  • Pro: Detect changes quickly
  • Con: Noise overwhelms signal; random variation looks meaningful

Low-frequency measurement:

  • Pro: Clearer trends
  • Con: Miss timely intervention opportunities

Example:

| Daily Revenue Tracking | Monthly Revenue Tracking |
| --- | --- |
| See random fluctuations | See clear trends |
| Panic over noise | Respond to actual changes |
| Constant reaction | Thoughtful response |

Best practice: Track high-frequency, decide at lower frequency (moving averages, trend lines).
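
A minimal sketch of that best practice, smoothing hypothetical daily revenue with a 7-day moving average:

```python
# Track daily, decide on the smoothed trend: raw values swing widely day to
# day, while the 7-day moving average moves gently. Figures are hypothetical.
daily_revenue = [980, 1150, 900, 1230, 1010, 870, 1190,
                 1020, 1160, 940, 1250, 1080, 910, 1240]

WINDOW = 7
for i in range(WINDOW, len(daily_revenue) + 1):
    avg = sum(daily_revenue[i - WINDOW:i]) / WINDOW
    print(f"day {i:2d}: 7-day avg = {avg:7.1f} (raw: {daily_revenue[i - 1]})")
```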


Step 7: Test and Iterate

Metrics Are Hypotheses

Initial metrics are guesses about what matters.

Test:

  • Do improvements in metric correlate with actual goal progress?
  • Do teams make better decisions with this metric?
  • Is metric being gamed?

If not, change the metric.


The Validation Process

| Question | How to Test | Action If It Fails |
| --- | --- | --- |
| Does the metric predict the outcome? | Correlation analysis | Replace with a better predictor |
| Do decisions improve? | Decision audit | Simplify or reframe the metric |
| Is it gamed? | Behavior observation | Add a counterbalancing metric |
| Is it used? | Review meeting analysis | Remove the metric if unused |
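
The first test in the table takes only a few lines. A minimal sketch with hypothetical quarterly figures (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Does last quarter's leading indicator predict this quarter's outcome?
pipeline_last_q = [400, 520, 610, 480, 700, 650]  # leading, lagged one quarter ($k)
revenue_this_q = [110, 150, 170, 135, 200, 180]   # lagging outcome ($k)

r = correlation(pipeline_last_q, revenue_this_q)
print(f"r = {r:.2f}")  # near 1.0 here, so the metric has predictive value
if abs(r) < 0.3:
    print("Weak relationship: replace with a better predictor")
```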

Evolution Over Time

As organization matures:

| | Early Stage | Growth Stage | Mature Stage |
| --- | --- | --- | --- |
| Focus | Survival, product-market fit | Scaling, efficiency | Optimization, innovation |
| Metrics | Cash runway, user feedback | Growth rate, unit economics | Market share, profitability |

Measurement system must evolve with strategy.


Common Measurement System Mistakes

Mistake 1: Too Many Metrics

Problem: 50+ metrics tracked

Result:

  • No clear priorities
  • Gaming hidden in complexity
  • Analysis paralysis

Fix: Ruthlessly prune to 3-7 per major goal


Mistake 2: Measuring Only Lagging Indicators

Problem: Only track outcomes (revenue, profit)

Result: Know when you've failed, but can't prevent failure

Fix: Add leading indicators (pipeline, quality, engagement)


Mistake 3: No Connection to Strategy

Problem: Metrics chosen because they're available

Result: Measure things that don't matter

Fix: Start with strategy, derive metrics


Mistake 4: One-Dimensional Measurement

Problem: Financial metrics only

Result: Short-term optimization, long-term value destruction

Fix: Balanced scorecard approach


Mistake 5: Static Metrics

Problem: Never change what you measure

Result: Gaming develops, metrics lose meaning

Fix: Periodic review and evolution


Mistake 6: Targets Without Context

Problem: "Increase X by 20%"

Result: Gaming, sandbagging, arbitrary goals

Fix: Understand drivers; set targets based on what's achievable and valuable


Advanced Concepts

Diagnostic vs. Prescriptive Metrics

Diagnostic metrics: Tell you what happened.
Prescriptive metrics: Tell you what to do.

Example:

| Diagnostic | Prescriptive |
| --- | --- |
| "Revenue dropped 10%" | "Win rate decreased because competitive pricing changed; need new positioning" |
| "Churn increased" | "Customers churning lack feature X; prioritize development" |

Best systems: Provide both diagnosis and prescription.


Metrics at Different Organizational Levels

Different levels need different metrics:

| Level | Focus | Metric Examples |
| --- | --- | --- |
| Executive | Strategic progress | Market share, brand strength, financial health |
| Department | Function performance | Sales conversion, product quality, support satisfaction |
| Team | Operational execution | Story points completed, bugs fixed, calls handled |
| Individual | Personal contribution | Tasks completed, skills developed, feedback scores |

Alignment: Individual → Team → Department → Executive metrics should cascade.


Real-Time vs. Periodic Dashboards

Real-time dashboards:

  • For operational metrics (website uptime, system load)
  • When immediate action required

Periodic reporting:

  • For strategic metrics (market position, brand)
  • When thoughtful analysis needed

Mistake: Making everything real-time creates noise and urgency bias.


Case Study: Redesigning a Failed Measurement System

The Problem

Software company with broken metrics:

| Old Metric | Problem |
| --- | --- |
| Lines of code written | Incentivized verbose, low-quality code |
| Features shipped | Quantity over quality; features nobody used |
| Bug count | Hid bugs by not reporting them |
| Sprint velocity | Inflated story point estimates |

Result: Metrics looked good, but product quality was terrible and customers were churning.


The Redesign Process

Step 1: Strategy clarity

  • Goal: Build product customers love and retain

Step 2: Identify drivers

  • Product quality
  • Customer value delivered
  • Team capability

Step 3: New metrics

| Old Metric | New Metric | Why Better |
| --- | --- | --- |
| Lines of code | Code quality score (peer review + automated analysis) | Measures quality |
| Features shipped | Features adopted (% customers using) | Measures value |
| Bug count | Customer-reported bugs, time to fix | Can't hide; measures impact |
| Sprint velocity | Delivered value (customer outcome) | Focuses on outcomes |
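
As a minimal sketch of the adoption metric (hypothetical customers and features):

```python
# Percentage of active customers using each shipped feature, instead of a raw
# count of features shipped. All data is hypothetical.
active_customers = {"c1", "c2", "c3", "c4", "c5"}
feature_usage = {
    "export_pdf": {"c1", "c2", "c3", "c4"},
    "dark_mode":  {"c2"},
    "ai_summary": set(),  # shipped but never used: invisible to "features shipped"
}

for feature, users in feature_usage.items():
    adoption = len(users & active_customers) / len(active_customers)
    print(f"{feature:11s} adoption = {adoption:.0%}")
# export_pdf  adoption = 80%
# dark_mode   adoption = 20%
# ai_summary  adoption = 0%
```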

Step 4: Balance

  • Added customer satisfaction (quarterly NPS)
  • Added team health (engagement survey)

Step 5: Gaming resistance

  • Multiple complementary metrics
  • Qualitative review (demos, code review)
  • Metric rotation (change technical quality metrics annually)

The Results

After 6 months:

  • Code quality improved (fewer production bugs)
  • Feature adoption increased (only valuable features built)
  • Customer retention improved
  • Team satisfaction increased (not gaming metrics)

Key insight: Fewer, better metrics focused on outcomes beat many activity metrics.


Practical Implementation

Building Your Measurement System

Timeline:

| Phase | Duration | Activities |
| --- | --- | --- |
| 1. Strategy | 1-2 weeks | Clarify goals, identify drivers |
| 2. Metric design | 2-3 weeks | Select metrics, define calculation |
| 3. Infrastructure | 4-8 weeks | Build data collection, dashboards |
| 4. Pilot | 1-3 months | Test with one team/function |
| 5. Refine | 2-4 weeks | Fix issues discovered in pilot |
| 6. Rollout | 4-8 weeks | Extend to organization |
| 7. Ongoing | Continuous | Review quarterly, evolve as needed |

The Measurement System Document

Create a written document:

| Section | Contents |
| --- | --- |
| Strategy | Goals, key drivers |
| Core metrics | 3-7 per major goal, with definitions |
| Calculation | Exactly how each metric is computed |
| Frequency | How often measured and reported |
| Ownership | Who is responsible for each metric |
| Targets | Expected ranges (not rigid) |
| Review process | How often the system itself is reviewed |

Purpose: Clarity, alignment, reference.
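
One way to keep the definitions unambiguous is to express each entry as a data structure. A minimal sketch, with illustrative field values:

```python
from dataclasses import dataclass

# One entry in the measurement system document, expressed as a data structure
# so the definition cannot drift. Field values are illustrative.
@dataclass
class MetricDefinition:
    name: str
    goal: str                # strategic goal it traces to
    calculation: str         # exactly how it is computed
    frequency: str           # how often measured and reported
    owner: str               # who is responsible
    expected_range: tuple    # guidance, not a rigid target

nrr = MetricDefinition(
    name="Net Revenue Retention",
    goal="Durable revenue growth",
    calculation="(start MRR + expansion - contraction - churn) / start MRR",
    frequency="Monthly, reviewed quarterly",
    owner="VP Customer Success",
    expected_range=(1.00, 1.20),
)
print(nrr)
```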


Communication and Adoption

Measurement systems fail without adoption.

Keys to adoption:

| Factor | How |
| --- | --- |
| Clarity | Everyone understands what metrics mean |
| Relevance | Metrics connect to daily work |
| Visibility | Dashboards accessible, discussed in meetings |
| Action | Metrics inform actual decisions |
| Trust | Metrics seen as fair, not punitive |

Conclusion: Measurement as a System

Key principles:

  1. Focus beats comprehensiveness (3-7 metrics per goal)
  2. Start with strategy (metrics derive from goals)
  3. Balance dimensions (financial, customer, process, growth)
  4. Resist gaming (complementary metrics, qualitative judgment)
  5. Match frequency to decisions (measure when you can act)
  6. Iterate (metrics are hypotheses; test and evolve)

Good measurement systems:

  • Clarify priorities
  • Reveal truth
  • Inform decisions
  • Resist manipulation
  • Evolve with strategy

Bad measurement systems:

  • Obscure priorities
  • Create gaming
  • Generate reports nobody uses
  • Persist unchanged
  • Disconnect from goals

The difference is design. Measurement is too important to do accidentally.


References

  1. Kaplan, R. S., & Norton, D. P. (1992). "The Balanced Scorecard: Measures That Drive Performance." Harvard Business Review, 70(1), 71–79.

  2. Kaplan, R. S., & Norton, D. P. (1996). The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press.

  3. Croll, A., & Yoskovitz, B. (2013). Lean Analytics: Use Data to Build a Better Startup Faster. O'Reilly Media.

  4. Goodhart, C. A. E. (1975). "Problems of Monetary Management: The U.K. Experience." In Papers in Monetary Economics (Vol. 1). Reserve Bank of Australia.

  5. Hubbard, D. W. (2014). How to Measure Anything: Finding the Value of Intangibles in Business (3rd ed.). Wiley.

  6. Marr, B. (2012). Key Performance Indicators: The 75+ Measures Every Manager Needs to Know. FT Press.

  7. Austin, R. D. (1996). Measuring and Managing Performance in Organizations. Dorset House.

  8. Parmenter, D. (2015). Key Performance Indicators: Developing, Implementing, and Using Winning KPIs (3rd ed.). Wiley.

  9. Behn, R. D. (2003). "Why Measure Performance? Different Purposes Require Different Measures." Public Administration Review, 63(5), 586–606.

  10. Kerr, S. (1975). "On the Folly of Rewarding A, While Hoping for B." Academy of Management Journal, 18(4), 769–783.

  11. Meyer, M. W., & Gupta, V. (1994). "The Performance Paradox." Research in Organizational Behavior, 16, 309–369.

  12. Haas, M. R., & Kleingeld, A. (1999). "Multilevel Design of Performance Measurement Systems: Enhancing Strategic Dialogue Throughout the Organization." Management Accounting Research, 10(3), 233–261.

  13. De Waal, A. A. (2003). "Behavioral Factors Important for the Successful Implementation and Use of Performance Management Systems." Management Decision, 41(8), 688–697.

  14. Eccles, R. G. (1991). "The Performance Measurement Manifesto." Harvard Business Review, 69(1), 131–137.

  15. Neely, A., Gregory, M., & Platts, K. (2005). "Performance Measurement System Design: A Literature Review and Research Agenda." International Journal of Operations & Production Management, 25(12), 1228–1263.


About This Series: This article is part of a larger exploration of measurement, metrics, and evaluation. For related concepts, see [Why Metrics Often Mislead], [Goodhart's Law Breaks Metrics], [Vanity Metrics vs Meaningful Metrics], and [KPIs Explained Without Buzzwords].