What Should Be Measured and Why
Your organization tracks 47 metrics. Every department has a dashboard. Monthly reports present dozens of charts. Everyone feels data-driven. Yet when it's time to make a decision, no one knows which metrics actually matter. Teams debate endlessly about what to measure, argue over metric definitions, and spend more time collecting data than using it.
The real question isn't "Can we measure this?" (you probably can measure almost anything), but "Should we measure this?" and more importantly, "Why should we measure this?" Most measurement efforts fail not from lack of data, but from measuring the wrong things for the wrong reasons.
Good measurement starts with clarity about purpose: What decision does this metric inform? What outcome does it predict? What action becomes clearer with this data? Without clear answers, measurement becomes ritual—data collection for its own sake, creating noise instead of signal.
This guide provides a framework for deciding what deserves measurement, why certain things matter more than others, and how to design a measurement system that actually improves decisions and outcomes.
Start with Goals, Not Metrics
The Backward Approach (Wrong)
Common pattern:
- List everything you can measure
- Track all of it
- Hope some of it is useful
- Drown in data without insight
Why it fails:
- No connection between metrics and decisions
- Too many metrics dilute focus
- Measures activities, not outcomes
- Leads to "metric theater" (reporting without impact)
The Forward Approach (Right)
Effective pattern:
- Define what success looks like (goals)
- Identify what drives success (drivers)
- Measure the drivers (metrics)
- Validate metrics predict success (testing)
- Act on metrics (decision-making)
Why it works:
- Every metric has clear purpose
- Limited set of vital metrics
- Measures outcomes and their drivers
- Enables action
Example: SaaS Company
Wrong approach:
- Track: signups, page views, followers, downloads, features shipped, support tickets, blog posts, email sends...
- No clear connection to goals
- 30+ metrics, none clearly actionable
Right approach:
Goal: Sustainable profitable growth
What drives sustainable growth?
- Acquire customers efficiently
- Retain them (low churn)
- Expand revenue from existing customers
What should we measure?
| Driver | Metric | Why It Matters |
|---|---|---|
| Efficient acquisition | CAC payback period | Shows months to recover acquisition cost |
| Activation | % completing first value action | Predicts retention |
| Retention | Net revenue retention (NRR) | Captures churn + expansion in one number |
| Product value | Weekly active users / Monthly actives | Shows engagement depth |
Four metrics. Each tied to the goal. Each actionable.
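To make two of these concrete, here is a minimal Python sketch of the CAC payback and NRR calculations. The function names and input figures are hypothetical, chosen only to illustrate the arithmetic:

```python
def cac_payback_months(cac, monthly_gross_margin_per_customer):
    """Months to recover the cost of acquiring one customer."""
    return cac / monthly_gross_margin_per_customer

def net_revenue_retention(starting_mrr, expansion, contraction, churned):
    """Revenue retained from existing customers over a period, including expansion."""
    return (starting_mrr + expansion - contraction - churned) / starting_mrr

# Hypothetical monthly figures, for illustration only
print(cac_payback_months(1200, 150))                        # 8.0 months
print(net_revenue_retention(100_000, 8_000, 2_000, 3_000))  # 1.03 -> 103% NRR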
The Measurement Hierarchy
Level 1: Outcomes (What You Care About)
Definition: The ultimate results you want to achieve
Examples:
- Revenue, profit, market share
- Customer satisfaction, retention
- Mission impact (for nonprofits)
- Health outcomes (for healthcare)
Characteristics:
- What you actually care about
- Lagging indicators (show what already happened)
- Slow to change
- Directly tied to success
Limitation: By the time outcomes change, it's often too late to adjust course
Level 2: Outputs (What You Produce)
Definition: The direct results of your activities
Examples:
- Features shipped
- Sales calls made
- Content published
- Services delivered
Characteristics:
- Under your control
- Leading indicators (happen before outcomes change)
- Faster to change
- Activities, not results
Limitation: Outputs don't guarantee outcomes (you can ship features no one uses)
Level 3: Drivers (What Causes Outcomes)
Definition: The measurable factors that predict and cause outcomes
Examples:
- Conversion rates at each funnel stage
- Customer engagement scores
- Net Promoter Score (if validated)
- Time-to-value for new customers
Characteristics:
- Predictive of outcomes
- Actionable (can influence them)
- Leading indicators
- Require validation (ensure they actually predict outcomes)
This is where most meaningful measurement happens.
The Hierarchy in Practice
Example: E-commerce Company
| Level | Metric | Why It Matters | Limitation |
|---|---|---|---|
| Outcome | Monthly revenue | Ultimate goal | Lagging, slow to change |
| Driver | Conversion rate | Predicts revenue, faster to impact | Requires traffic |
| Driver | Average order value | Predicts revenue, faster to change | Can be manipulated short-term |
| Output | Products listed | Activity, not result | Doesn't predict revenue |
| Output | Marketing emails sent | Activity | Doesn't mean engagement or sales |
Focus: Measure outcome (revenue) to know if you're succeeding. Measure drivers (conversion, AOV) to understand and influence success. Don't mistake outputs for drivers.
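One way to see why conversion rate and AOV sit at the driver level: monthly revenue decomposes as sessions × conversion rate × average order value, so moving either driver moves revenue directly. A minimal sketch with hypothetical numbers:

```python
# Hypothetical monthly figures, for illustration only
sessions = 200_000
conversion_rate = 0.025    # driver: % of sessions that place an order
avg_order_value = 48.00    # driver: revenue per order

revenue = sessions * conversion_rate * avg_order_value
print(f"${revenue:,.0f}")  # $240,000
```

By contrast, "products listed" and "emails sent" appear nowhere in this identity, which is the formal reason they are outputs rather than drivers.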
Criteria for Measurement
Should You Measure It? The Five Tests
Before adding a metric, it should pass all five tests:
Test 1: Aligned with Goals
Question: Does this metric relate to a goal we actually care about?
Example:
- Goal: Increase customer lifetime value
- Metric: Email open rate
- Test: Does email open rate predict or influence LTV? If not, don't prioritize it.
Red flag: A metric that's interesting but disconnected from strategy
Test 2: Actionable
Question: Can we take meaningful action based on changes in this metric?
Test: If metric goes up/down, what would we do differently?
Example:
| Metric | If Increases | If Decreases | Actionable? |
|---|---|---|---|
| Churn rate | Investigate quality issues, survey churned users | Document retention drivers, scale | Yes |
| Industry news mentions | Celebrate | Unclear what to do | No |
If you can't articulate clear actions for metric changes, don't measure it.
Test 3: Measurable Reliably
Question: Can we collect this data consistently and accurately?
Problems that kill reliability:
- Data not consistently available
- Requires manual collection (error-prone)
- Definition ambiguous (people measure differently)
- Measurement changes behavior (Hawthorne effect)
Example:
- Reliable: Conversion rate (automated tracking, clear definition)
- Unreliable: "Employee happiness" (subjective, hard to define, measurement affects result)
If you can't measure it reliably, either improve the measurement method or choose a different metric.
Test 4: Predictive or Outcome
Question: Does this metric either:
- Predict an outcome we care about (leading indicator), OR
- Measure an outcome we care about (lagging indicator)?
Leading indicators predict:
- Trial starts predict paid conversions
- Engagement predicts retention
- NPS predicts growth (if validated in your context)
Lagging indicators measure:
- Revenue
- Customer retention
- Profit
Neither:
- Page views (doesn't predict outcomes in most contexts)
- Social media followers (weak predictor)
If a metric neither predicts nor measures an outcome, it's noise.
Test 5: Non-Redundant
Question: Is this captured by another metric we're already tracking?
Why redundancy is wasteful:
- Clutters dashboards
- Dilutes focus
- Creates confusion ("Which metric do we optimize?")
Example:
- Tracking: Monthly recurring revenue (MRR) and annual run rate (ARR = MRR × 12)
- Problem: Redundant; both convey the same information
- Fix: Pick one
If a metric is redundant, eliminate it or consolidate.
What to Measure: Domain Examples
Product Development
| What to Measure | Why | What to Avoid |
|---|---|---|
| Activation rate (% completing first key action) | Predicts retention | Total signups (many never activate) |
| Feature adoption rate | Shows value of features | Features shipped (doesn't mean usage) |
| Time-to-value (days to first success) | Predicts retention, satisfaction | Time to ship features (output, not outcome) |
| Weekly active users / Monthly actives | Shows engagement depth | Total user count (includes inactive) |
| Retention cohorts (% active after 30/60/90 days) | Core product health | Vanity metrics (downloads, views) |
Focus: Measure whether users get value, not just whether they show up.
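As a sketch of what "retention cohorts" look like in practice, here is a minimal pandas example that groups users by signup month and computes 30-day retention per cohort. The column names and data are hypothetical:

```python
import pandas as pd

# Hypothetical activity log: one row per user per active day
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 3],
    "signup":    pd.to_datetime(["2024-01-05", "2024-01-05", "2024-01-20",
                                 "2024-01-20", "2024-02-02"]),
    "active_on": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-01-20",
                                 "2024-01-25", "2024-02-02"]),
})

events["days_since_signup"] = (events["active_on"] - events["signup"]).dt.days

# % of each monthly signup cohort still active 30+ days after signup
retention_30d = events.groupby(events["signup"].dt.to_period("M")).apply(
    lambda g: g.loc[g["days_since_signup"] >= 30, "user_id"].nunique()
              / g["user_id"].nunique()
)
print(retention_30d)  # e.g. 2024-01: 0.5, 2024-02: 0.0
```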
Marketing
| What to Measure | Why | What to Avoid |
|---|---|---|
| Customer acquisition cost (CAC) | Determines profitability | Total ad spend (no context) |
| CAC payback period | Shows time to recover investment | Impressions (doesn't mean engagement) |
| Conversion rate by channel | Identifies effective channels | Traffic (doesn't mean quality) |
| Marketing-attributed revenue | Shows ROI | Activity metrics (emails sent, posts published) |
| Lead-to-customer rate | Shows pipeline efficiency | MQLs without conversion context |
Focus: Measure cost-effectiveness and revenue impact, not activity.
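A minimal sketch of channel-level cost-effectiveness, using hypothetical spend and conversion figures; the point is that CAC and conversion rate are computed per channel, not in aggregate:

```python
import pandas as pd

# Hypothetical channel figures, for illustration only
channels = pd.DataFrame({
    "channel":       ["search", "social", "content"],
    "spend":         [50_000, 30_000, 10_000],
    "new_customers": [100, 40, 25],
    "visits":        [40_000, 60_000, 15_000],
})

channels["cac"] = channels["spend"] / channels["new_customers"]
channels["conversion_rate"] = channels["new_customers"] / channels["visits"]
print(channels[["channel", "cac", "conversion_rate"]])
# content wins on CAC ($400) despite the smallest budget
```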
Customer Success / Support
| What to Measure | Why | What to Avoid |
|---|---|---|
| Customer churn rate | Core retention metric | Total support tickets (ambiguous: could signal problems or engagement) |
| Net revenue retention (retained revenue plus expansion, net of churn) | Shows growth from existing customers | Customer satisfaction alone (doesn't predict behavior) |
| Time-to-resolution | Affects satisfaction | First response time (doesn't mean problem solved) |
| Customer health score (engagement, usage, satisfaction) | Predicts churn | Tickets closed (doesn't mean quality) |
| Expansion revenue rate | Shows upsell success | Support team size (input, not outcome) |
Focus: Measure retention and expansion, not just support activity.
Sales
| What to Measure | Why | What to Avoid |
|---|---|---|
| Win rate (deals closed / total opportunities) | Shows close effectiveness | Sales calls made (activity) |
| Sales cycle length | Affects capital efficiency | Opportunities created (doesn't mean quality) |
| Average contract value | Revenue per deal | Pipeline value (doesn't account for close rate) |
| Customer acquisition cost | Profitability per customer | Demos given (activity) |
| Lead-to-close rate | End-to-end efficiency | Meetings booked (doesn't predict revenue) |
Focus: Measure conversion efficiency and deal quality, not activity volume.
Content / Media
| What to Measure | Why | What to Avoid |
|---|---|---|
| Engaged time (active reading/viewing) | Shows actual consumption | Page views (doesn't mean reading) |
| Conversion rate (content → email/trial/purchase) | Shows business impact | Social shares (doesn't predict behavior) |
| Return visitor rate | Shows value delivered | Bounce rate (often misleading) |
| Content-attributed revenue | Shows ROI | Articles published (output) |
| Subscriber growth rate (from high-engagement sources) | Shows audience building | Total followers (many inactive) |
Focus: Measure engagement depth and business impact, not vanity metrics.
How Many Metrics to Track
The 3-7 Rule
For any given goal, focus on 3-7 key metrics.
Why this range:
- Fewer than 3: Incomplete picture, missing important drivers
- More than 7: Diluted focus, too many metrics to act on
Example: Product Team's Key Metrics
- Weekly active users (engagement)
- Activation rate (new user success)
- 60-day retention (long-term stickiness)
- Net Promoter Score (satisfaction)
- Feature adoption rate (value realization)
Five metrics. Manageable. Each actionable.
Organize by Layer
Create measurement hierarchy:
| Layer | Metric Count | Purpose | Review Frequency |
|---|---|---|---|
| North Star | 1 | Captures core value | Weekly |
| Primary | 3-5 | Key drivers of North Star | Weekly |
| Secondary | 5-10 | Supporting metrics, diagnostics | Monthly |
| Operational | 10-20 | Detailed tracking | As needed |
Example: SaaS Company
- North Star: Net Revenue Retention (captures retention + expansion)
- Primary: Activation rate, engagement score, churn rate, expansion rate
- Secondary: CAC, LTV, feature adoption, NPS, support satisfaction
- Operational: Funnel conversion rates, A/B test results, traffic sources
Focus daily on North Star and Primary. Check Secondary monthly. Review Operational when diagnosing issues.
Common Measurement Mistakes
Mistake 1: Measuring Everything
Problem: "If we track everything, we'll understand everything"
Reality: Too many metrics create noise, not signal
Symptoms:
- 30+ metric dashboards
- No one knows which metrics matter
- Decisions still made on gut feel
- Analysis paralysis
Fix: Prioritize radically; identify the vital few and ignore the rest
Mistake 2: Measuring What's Easy
Problem: Tools auto-generate metrics, so you track them
Reality: Easy to measure ≠ important to measure
Example:
- Easy: Page views, session duration, bounce rate (default analytics)
- Hard but important: Activation rate (requires defining "activated"), cohort retention, customer lifetime value
Fix: Measure what matters, not what's automatic
Mistake 3: Outputs Instead of Outcomes
Problem: Tracking activities, not results
Example:
| Team | Output (What You Do) | Outcome (What It Achieves) |
|---|---|---|
| Marketing | Blog posts published | Content-attributed revenue |
| Sales | Calls made | Win rate, revenue |
| Product | Features shipped | Feature adoption, retention |
| Support | Tickets closed | Customer satisfaction, retention |
Fix: Measure outcomes first, use outputs to understand drivers
Mistake 4: Metrics Without Context
Problem: Absolute numbers without comparison
Example:
- "We have 100K users" (Is that good? Growing? Engaged?)
- Better: "We have 100K users, up 15% MoM, with 40% weekly active"
Fix: Always provide context, as in the sketch after this list:
- Comparison (vs. last period, vs. goal)
- Rates and ratios (not just absolutes)
- Segmentation (averages hide patterns)
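A tiny sketch of reporting with context rather than absolutes, using hypothetical numbers:

```python
users_now, users_last_month = 100_000, 86_957
weekly_active = 40_000

mom_growth = users_now / users_last_month - 1
wau_share = weekly_active / users_now

# "100,000 users" alone says little; the rates carry the signal
print(f"{users_now:,} users, up {mom_growth:.0%} MoM, {wau_share:.0%} weekly active")
```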
Mistake 5: No Validation
Problem: Assuming metric predicts outcomes without testing
Example:
- Assume: High NPS → Growth
- Reality: In some businesses, NPS doesn't correlate with revenue or retention
Fix: Validate predictive metrics:
- Track both metric and ultimate outcome
- Analyze correlation over time
- If the metric doesn't predict the outcome, drop or replace it
Mistake 6: Gaming and Goodhart's Law
Problem: "When a measure becomes a target, it ceases to be a good measure" (Goodhart's Law)
Example:
- Metric: Support tickets closed
- Gaming: Close tickets quickly without solving problems
- Result: Metric looks good, customer satisfaction terrible
Fix:
- Use complementary metrics (tickets closed + satisfaction score)
- Mix outputs and outcomes
- Rotate metrics periodically
- Include qualitative feedback
Designing Your Measurement System
Step 1: Define Success
Clarify what you're trying to achieve.
Questions:
- What does success look like in 1 year? 3 years?
- What outcomes actually matter?
- How will we know if we're succeeding?
Output: 1-3 core goals
Step 2: Identify Drivers
What causes goal achievement?
Method: Work backward
Example Goal: Increase revenue
Ask: What drives revenue?
- More customers (acquisition)
- Higher value per customer (expansion)
- Longer customer relationships (retention)
Ask again: What drives each of those?
- Acquisition: Traffic quality × conversion rate
- Expansion: Product value × upsell process
- Retention: Product satisfaction × customer success
Output: Map of causal drivers
Step 3: Select Metrics
For each driver, choose 1-2 metrics.
Apply five tests:
- Aligned with goals?
- Actionable?
- Measurable reliably?
- Predictive or outcome?
- Non-redundant?
Output: 3-7 key metrics per goal
Step 4: Define Metrics Precisely
Avoid ambiguity.
For each metric, document:
- Name: What it's called
- Definition: Exact calculation
- Data source: Where numbers come from
- Frequency: How often measured
- Owner: Who's responsible
- Target: Goal value
- Action triggers: What changes trigger what actions
Example: Activation Rate
- Definition: % of signups who complete [specific action] within 7 days
- Data source: Product analytics (Mixpanel event: "First Value Action")
- Frequency: Weekly
- Owner: Product team
- Target: 40% (current: 32%)
- Action triggers: <30% = investigate onboarding; >45% = document what's working
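One way to keep these definitions unambiguous is to store them as structured records rather than prose. A minimal sketch, using a hypothetical MetricSpec type whose fields mirror the checklist above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricSpec:
    name: str
    definition: str      # exact calculation
    data_source: str     # where the numbers come from
    frequency: str       # how often measured
    owner: str           # who's responsible
    target: float        # goal value
    low_trigger: float   # below this: investigate
    high_trigger: float  # above this: document what's working

activation_rate = MetricSpec(
    name="Activation rate",
    definition="% of signups completing the first value action within 7 days",
    data_source="Product analytics event: 'First Value Action'",
    frequency="weekly",
    owner="Product team",
    target=0.40,
    low_trigger=0.30,
    high_trigger=0.45,
)
```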
Step 5: Validate Predictive Power
Test whether metrics actually predict outcomes.
Method:
- Track metric and outcome for 3-6 months
- Analyze correlation
- Look for leading relationship (metric changes before outcome)
Example:
- Hypothesis: Activation rate predicts 60-day retention
- Test: Track both for 6 months
- Result: 0.82 correlation, activation changes precede retention changes by 3-4 weeks
- Conclusion: Valid leading indicator
If a metric doesn't predict outcomes, replace it.
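Here is a minimal sketch of this validation step, using short hypothetical weekly series; in practice you would pull 3-6 months of data from your warehouse. Correlating the metric against the outcome shifted by k weeks shows whether the metric leads:

```python
import pandas as pd

# Hypothetical weekly series, for illustration only
activation = pd.Series([0.30, 0.31, 0.33, 0.35, 0.34, 0.36,
                        0.38, 0.37, 0.39, 0.40, 0.41, 0.40])
retention  = pd.Series([0.52, 0.53, 0.52, 0.54, 0.55, 0.57,
                        0.59, 0.58, 0.60, 0.62, 0.61, 0.63])

# Correlate activation at week t with retention at week t + lag
for lag in range(5):
    r = activation.corr(retention.shift(-lag))
    print(f"activation vs retention {lag} weeks later: r = {r:.2f}")
```

A correlation that peaks at a positive lag is evidence the metric leads the outcome; no correlation at any lag suggests the metric is noise for this purpose.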
Step 6: Create Dashboards and Rhythms
Make metrics visible and actionable.
Dashboard principles:
- Focus: North Star + Primary metrics on main view
- Context: Always show trends, targets, comparisons
- Segmentation: Enable drilling into segments
- Action-oriented: Link metrics to action items
Review rhythms:
- Daily: North Star (for critical products)
- Weekly: Primary metrics
- Monthly: Secondary metrics, deep dives
- Quarterly: Metric system review (add/remove/refine)
Step 7: Act on Metrics
Metrics only matter if they drive action.
For each metric:
- Green (on track): Document what's working, scale
- Yellow (warning): Investigate, test improvements
- Red (off track): Root cause analysis, action plan
If a metric doesn't trigger action within 90 days, eliminate it.
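A minimal sketch of how these green/yellow/red triggers might be encoded, assuming a hypothetical 10% warning band below target:

```python
def status(value, target, warning_band=0.10):
    """Traffic-light status for a higher-is-better metric."""
    if value >= target:
        return "green: document what's working, scale"
    if value >= target * (1 - warning_band):
        return "yellow: investigate, test improvements"
    return "red: root-cause analysis, action plan"

print(status(0.42, target=0.40))  # green
print(status(0.37, target=0.40))  # yellow (within 10% of target)
print(status(0.30, target=0.40))  # red
```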
Conclusion: Measure What Matters
The temptation: Measure everything possible
The reality: More metrics = more noise
The solution: Ruthless focus on the vital few
Principles:
- Start with goals (not available metrics)
- Measure drivers (not just activities)
- Validate predictions (test whether metrics correlate with outcomes)
- Limit quantity (3-7 metrics per goal)
- Act on metrics (if not actionable, don't measure)
- Review regularly (eliminate metrics that don't drive decisions)
Good measurement:
- Clarifies what matters
- Informs decisions
- Predicts outcomes
- Enables action
Bad measurement:
- Drowns teams in data
- Measures wrong things
- Creates illusion of understanding
- Leads nowhere
Measure less. Measure better. Act more.
Your decisions—and outcomes—will improve.
References
Kaplan, R. S., & Norton, D. P. (1996). The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press.
Hubbard, D. W. (2014). How to Measure Anything: Finding the Value of "Intangibles" in Business (3rd ed.). John Wiley & Sons.
Croll, A., & Yoskovitz, B. (2013). Lean Analytics: Use Data to Build a Better Startup Faster. O'Reilly Media.
Marr, B. (2012). Key Performance Indicators (KPI): The 75 Measures Every Manager Needs to Know. Financial Times/Prentice Hall.
Parmenter, D. (2015). Key Performance Indicators: Developing, Implementing, and Using Winning KPIs (3rd ed.). John Wiley & Sons.
Goodhart, C. (1975). "Problems of Monetary Management: The U.K. Experience." Papers in Monetary Economics (Reserve Bank of Australia).
Kerr, S. (1975). "On the Folly of Rewarding A, While Hoping for B." Academy of Management Journal, 18(4), 769–783.
Davenport, T. H., & Harris, J. G. (2007). Competing on Analytics: The New Science of Winning. Harvard Business School Press.
Ries, E. (2011). The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.
Hope, J., & Fraser, R. (2003). Beyond Budgeting: How Managers Can Break Free from the Annual Performance Trap. Harvard Business School Press.
Neely, A., Adams, C., & Kennerley, M. (2002). The Performance Prism: The Scorecard for Measuring and Managing Business Success. Financial Times/Prentice Hall.
Eckerson, W. W. (2010). Performance Dashboards: Measuring, Monitoring, and Managing Your Business (2nd ed.). John Wiley & Sons.
Skok, D. (2015). "SaaS Metrics 2.0 – A Guide to Measuring and Improving What Matters." For Entrepreneurs (blog).
Ellis, S., & Brown, M. (2017). Hacking Growth: How Today's Fastest-Growing Companies Drive Breakout Success. Crown Business.
Maurya, A. (2012). Running Lean: Iterate from Plan A to a Plan That Works. O'Reilly Media.
About This Series: This article is part of a larger exploration of measurement, metrics, and evaluation. For related concepts, see [Vanity Metrics vs Meaningful Metrics], [KPIs Explained Without Buzzwords], [Designing Useful Measurement Systems], and [Why Metrics Often Mislead].