Every organization measures performance. Some track five metrics. Others track fifty. Most call them all "KPIs"—Key Performance Indicators. But if you have fifty "key" indicators, nothing is key. If everything is a priority, nothing is.

As the adage often attributed to Peter Drucker goes, "What gets measured gets managed." But the corollary is equally true: when you measure everything, you manage nothing well.

The term KPI has become corporate noise. It's slapped on any metric in a dashboard, diluting meaning until "KPI" just means "number we track." But real KPIs—the actual key performance indicators—are different. They're the vital few metrics that genuinely reflect strategic progress, the ones that, if you improve them, you win.

Understanding what KPIs actually are, how they differ from regular metrics, and how to select them effectively strips away buzzword fog and reveals a practical tool for focusing organizational attention.


What KPIs Actually Are

The Simple Definition

KPI (Key Performance Indicator): A metric directly tied to strategic goals that measures whether you're succeeding at what matters most.

Three components:

  1. Key: Most important, not comprehensive
  2. Performance: Measures actual results, not activity
  3. Indicator: Shows status and progress toward goal

KPIs vs. Metrics

All KPIs are metrics. Not all metrics are KPIs.

| Characteristic | KPIs | Regular Metrics |
|---|---|---|
| Number | Very few (3-7 per goal) | Many (dozens to hundreds) |
| Connection to strategy | Direct, explicit link | May be operational only |
| Decision-making | Directly inform major decisions | Provide context or detail |
| Visibility | Reviewed by leadership regularly | Tracked by specific teams |
| Importance | Truly critical | Nice to know |

Example: E-commerce company

| KPIs (Vital Few) | Regular Metrics (Many) |
|---|---|
| Monthly Recurring Revenue (MRR) | Page views |
| Customer Lifetime Value (LTV) | Email open rates |
| Customer Acquisition Cost (CAC) | Social media followers |
| Net Promoter Score (NPS) | Blog traffic |

The KPIs tell you if the business is healthy. The metrics provide operational detail.


The "Key" in KPI

Why "Key" Matters

Key means: Most critical, indispensable, can't-ignore-this.

Problem: Organizations forget this and create 50 "key" indicators.

Result: Nothing is actually key.


The 3-7 Rule

Guideline: 3-7 KPIs per strategic goal

Why?

| Too Few (1-2) | Right Number (3-7) | Too Many (20+) |
|---|---|---|
| Oversimplified; misses important dimensions | Balanced view; memorable; actionable | Overwhelming; diluted focus; nothing stands out |

Human limitation: Can't hold more than ~7 items in working memory. If you can't remember your KPIs without looking them up, you have too many.

Andy Grove, former CEO of Intel and pioneer of OKRs, put it plainly: "If you try to focus on everything, you focus on nothing." The discipline of the 3-7 rule is not about oversimplification—it is about forcing the hard choices that reveal what genuinely drives success.


The Pareto Principle Applied

80/20 rule: 20% of metrics provide 80% of strategic insight.

KPIs should be that 20%.

Exercise: If you could only track 3-5 metrics, which would tell you most about business health? Those are your true KPIs.


Good KPIs vs. Bad KPIs

Characteristics of Good KPIs

| Attribute | Description | Example |
|---|---|---|
| Strategic alignment | Directly tied to organizational goal | If goal is "customer satisfaction," KPI is NPS or CSAT |
| Actionable | Influences decisions and behavior | Conversion rate → invest in UX improvement |
| Measurable | Consistently quantifiable | Revenue (clear) vs "brand strength" (vague) |
| Understandable | Everyone knows what it means | "Customer churn rate" is clear; "Engagement Index v2.3" isn't |
| Owned | Someone accountable | Each KPI has an owner who can influence it |
| Timely | Updated frequently enough to inform action | Real-time for ops KPIs; monthly for strategic |

Bad KPI Examples and Why

| Bad KPI | Why It's Bad | Better Alternative |
|---|---|---|
| "Number of meetings" | Activity, not outcome | "Decisions made per week" or remove entirely |
| "Total registered users" | Vanity metric; doesn't show value | "Active users (logged in + action in 30 days)" |
| "Lines of code written" | Incentivizes volume over quality | "Features shipped that customers use" |
| "Revenue" (alone) | One-dimensional | "Revenue" + "Customer acquisition cost" + "Customer retention rate" |
| "Employee headcount" | More isn't always better | "Revenue per employee" or "Customer satisfaction" |

Types of KPIs

Leading vs. Lagging KPIs

Lagging KPIs: Measure past results

| Characteristics | Examples |
|---|---|
| Historical | Revenue, profit, market share |
| Hard to influence directly | Can't change last quarter's revenue |
| Confirm success/failure | Tell you if strategy worked |
| Easy to measure | Clear, objective |

Leading KPIs: Predict future results

| Characteristics | Examples |
|---|---|
| Forward-looking | Sales pipeline, customer engagement, product quality |
| Actionable | Can influence through current efforts |
| Early warning | Problems show up here first |
| Harder to measure | May require proxies |

A balanced KPI system uses both:

| Goal | Lagging KPI (Result) | Leading KPI (Driver) |
|---|---|---|
| Revenue growth | Quarterly revenue | Sales pipeline value, conversion rate |
| Customer satisfaction | Net Promoter Score | Support response time, product bug rate |
| Profitability | Operating margin | Cost per acquisition, operational efficiency |
| Market position | Market share | Brand awareness, product reviews |

Why both: Lagging KPIs tell you if you won. Leading KPIs tell you if you're going to win, giving time to adjust.


Input, Process, Output, Outcome KPIs

Different levels of measurement:

| Level | What It Measures | Example |
|---|---|---|
| Input | Resources invested | Marketing spend, staff hours |
| Process | Activities performed | Campaigns run, calls made |
| Output | Direct results of activity | Leads generated, features shipped |
| Outcome | Ultimate impact | Revenue, customer satisfaction, market share |

Best KPIs focus on outcomes and outputs, not inputs and processes.

Why: Activity ≠ results. You can do lots of work (input/process) without achieving goals (outcomes).


How to Select KPIs

Step 1: Clarify Strategic Goals

Before choosing KPIs, be clear on what success looks like.

Framework:

| Level | Question | Example |
|---|---|---|
| Mission | Why do we exist? | "Make high-quality education accessible" |
| Vision | Where are we going? | "Be the leading online learning platform in STEM" |
| Strategic Goal | What does success look like in 3-5 years? | "10 million active learners with 85%+ completion rate" |
| KPI | How do we measure progress? | "Monthly Active Learners," "Course Completion Rate," "Net Promoter Score" |

KPIs cascade from strategy. No clear strategy = no clear KPIs.


Step 2: Identify Key Drivers

Ask: What factors drive strategic goal achievement?

Example: Goal is Revenue Growth

| Potential Drivers | Why They Matter |
|---|---|
| New customer acquisition | More customers = more revenue |
| Customer retention | Keeping customers costs less than acquiring new ones |
| Pricing | Higher prices (if sustainable) = more revenue per customer |
| Product quality | Quality drives retention and word-of-mouth |

For each driver, select 1-2 KPIs.


Step 3: Apply Selection Criteria

For each candidate KPI, ask:

| Question | If No, Discard |
|---|---|
| Does it directly tie to a strategic goal? | Not actually key |
| Can we influence it through actions? | Can't manage what you can't control |
| Is it measurable consistently? | Can't track unreliable data |
| Does everyone understand it? | Won't drive aligned behavior |
| Will it inform actual decisions? | Just reporting theater |
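
The five criteria work as a strict filter: a single "no" discards the candidate. A minimal sketch of that logic, where the criterion names and the candidate metrics are hypothetical examples, not a standard schema:

```python
# Illustrative sketch: applying the five selection criteria as a strict filter.
# Criterion keys and candidate data below are hypothetical examples.

CRITERIA = [
    "tied_to_goal",       # Does it directly tie to a strategic goal?
    "influenceable",      # Can we influence it through actions?
    "measurable",         # Is it measurable consistently?
    "understood",         # Does everyone understand it?
    "informs_decisions",  # Will it inform actual decisions?
]

def is_kpi_candidate(metric: dict) -> bool:
    """A metric qualifies only if every criterion holds; one 'no' discards it."""
    return all(metric.get(criterion, False) for criterion in CRITERIA)

candidates = [
    {"name": "Customer churn rate", "tied_to_goal": True, "influenceable": True,
     "measurable": True, "understood": True, "informs_decisions": True},
    {"name": "Social media followers", "tied_to_goal": False, "influenceable": True,
     "measurable": True, "understood": True, "informs_decisions": False},
]

kpis = [m["name"] for m in candidates if is_kpi_candidate(m)]
print(kpis)  # ['Customer churn rate']
```

The `all(...)` expression encodes the table's rule directly: there is no partial credit, which is what keeps the KPI list short.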

"The first step is to measure whatever can be easily measured. This is okay as far as it goes. The second step is to disregard that which can't be easily measured, or to give it an arbitrary quantitative value. This is artificial and misleading." — a formulation of the "McNamara fallacy," often attributed to W. Edwards Deming

Doug Hubbard reinforces this in How to Measure Anything: the goal of measurement is to reduce uncertainty about decisions—not to produce numbers for their own sake. If a KPI candidate does not reduce uncertainty about whether you are achieving your strategy, it fails this test.


Step 4: Balance Perspectives

Use a framework like the Balanced Scorecard:

| Perspective | KPI Examples |
|---|---|
| Financial | Revenue growth, operating margin, ROI |
| Customer | NPS, retention rate, customer acquisition cost |
| Internal Process | Cycle time, defect rate, innovation pipeline |
| Learning & Growth | Employee engagement, skills coverage, training completion |

Prevents: Over-optimizing one dimension at expense of others.


Step 5: Test and Iterate

KPIs are hypotheses about what matters.

Test:

  • Does improving this KPI actually advance strategic goals?
  • Does the team make better decisions with this KPI?
  • Is it being gamed?

If no to any, revise the KPI.


KPI Examples by Organization Type

SaaS Company

| Strategic Goal | KPI |
|---|---|
| Growth | Monthly Recurring Revenue (MRR) growth rate |
| Efficiency | Customer Acquisition Cost (CAC) |
| Retention | Net Revenue Retention (NRR) |
| Product-Market Fit | Net Promoter Score (NPS) |
| Unit Economics | LTV:CAC ratio (lifetime value to customer acquisition cost) |
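
These SaaS KPIs reduce to simple arithmetic. A sketch using widely used formula shapes (margin-adjusted LTV, churn-based lifetime); the input figures are hypothetical:

```python
# Illustrative SaaS KPI arithmetic. Formulas follow common definitions;
# all input numbers are hypothetical.

def cac(sales_marketing_spend: float, new_customers: int) -> float:
    """Customer Acquisition Cost: spend divided by customers acquired."""
    return sales_marketing_spend / new_customers

def ltv(avg_monthly_revenue: float, gross_margin: float, monthly_churn: float) -> float:
    """Customer Lifetime Value: margin-adjusted revenue over expected lifetime
    (expected lifetime in months ~= 1 / monthly churn rate)."""
    return avg_monthly_revenue * gross_margin / monthly_churn

def net_revenue_retention(start_mrr: float, expansion: float,
                          contraction: float, churned: float) -> float:
    """NRR: recurring revenue retained from existing customers, incl. expansion."""
    return (start_mrr + expansion - contraction - churned) / start_mrr

acquisition_cost = cac(50_000, 100)    # $500 per customer
lifetime_value = ltv(100, 0.80, 0.02)  # $4,000
print(lifetime_value / acquisition_cost)  # LTV:CAC = 8.0
print(net_revenue_retention(100_000, 12_000, 3_000, 5_000))  # 1.04 -> 104%
```

Note how the ratio combines two KPIs: a great LTV with a runaway CAC, or vice versa, still shows up as weak unit economics.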

E-commerce

| Strategic Goal | KPI |
|---|---|
| Revenue | Gross Merchandise Value (GMV) |
| Profitability | Gross margin |
| Customer Value | Average Order Value (AOV) |
| Retention | Repeat purchase rate |
| Efficiency | Customer acquisition cost |

Healthcare Provider

| Strategic Goal | KPI |
|---|---|
| Clinical Quality | Patient outcomes (mortality, complications) |
| Patient Experience | Patient satisfaction (HCAHPS scores) |
| Safety | Adverse event rate |
| Efficiency | Average length of stay |
| Financial | Operating margin |

Nonprofit

| Strategic Goal | KPI |
|---|---|
| Mission Impact | Lives improved (specific to mission) |
| Reach | People served |
| Efficiency | Cost per person served |
| Sustainability | Funding diversification (% from various sources) |
| Donor Satisfaction | Donor retention rate |

Common KPI Mistakes

Mistake 1: Too Many "Key" Indicators

Problem: Calling 50 metrics KPIs

Result: Nothing is actually key; focus is diluted

Fix: Brutal prioritization. 3-7 per major goal.


Mistake 2: Measuring Activity Instead of Results

Problem: KPIs that measure effort, not outcomes

| Activity KPI (Bad) | Outcome KPI (Better) |
|---|---|
| Calls made | Deals closed |
| Marketing campaigns run | Leads generated, conversion rate |
| Features developed | Customer problems solved, adoption rate |
| Training hours | Skills demonstrated, performance improvement |

Fix: Always ask "So what?" If the activity doesn't lead to a valued outcome, don't make it a KPI.


Mistake 3: No Strategic Connection

Problem: KPIs chosen because they're easy to measure or "everyone tracks them"

Result: Tracking things that don't matter

Fix: Trace every KPI back to a strategic goal. If you can't, it's not a KPI.


Mistake 4: Only Lagging Indicators

Problem: All KPIs measure past results

Result: Know when you've failed, but can't prevent failure

Fix: Balance lagging KPIs (results) with leading KPIs (drivers/predictors)


Mistake 5: Gaming-Prone KPIs

Problem: KPIs that can be improved without actually advancing goals

This is the domain of Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." Economist Charles Goodhart identified this dynamic in monetary policy, but it applies universally to organizational KPIs.

Examples:

  • Call center: "Calls handled" → rushed customers
  • Hospital: "Mortality rate" → reject high-risk patients
  • Sales: "Number of deals" → close tiny, unprofitable deals

Fix: Use complementary KPIs (e.g., calls handled and customer satisfaction)
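
One way to operationalize complementary KPIs is to require both the volume metric and the quality metric to clear their bar before reporting "healthy." A sketch using the call-center example; the threshold values are hypothetical, not industry standards:

```python
# Sketch: pairing a volume KPI with a quality KPI so neither can be gamed
# alone. Thresholds and inputs below are hypothetical examples.

def call_center_status(calls_handled: int, csat: float,
                       min_calls: int = 80, min_csat: float = 4.0) -> str:
    """Both KPIs must clear their bar; rushing calls tanks CSAT and fails."""
    if calls_handled >= min_calls and csat >= min_csat:
        return "healthy"
    if calls_handled >= min_calls and csat < min_csat:
        return "volume gamed at quality's expense"
    return "underperforming"

print(call_center_status(120, 4.5))  # healthy
print(call_center_status(150, 3.1))  # volume gamed at quality's expense
```

The middle branch is the Goodhart detector: volume up while quality is down is exactly the signature of a gamed target.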


Mistake 6: Lack of Ownership

Problem: No one clearly responsible for each KPI

Result: Nobody acts to improve it

Fix: Assign each KPI an owner who has authority to influence it


Mistake 7: Static KPIs

Problem: KPIs never change, even as strategy evolves

Result: Measuring yesterday's priorities

Fix: Review KPIs regularly (at least annually); adjust as strategy changes


KPIs at Different Organizational Levels

Executive/Strategic Level

Focus: Overall organizational health and strategic progress

| KPI Type | Examples |
|---|---|
| Financial | Revenue, profitability, cash flow |
| Market position | Market share, brand strength |
| Strategic milestones | Product launches, market expansion |

Frequency: Monthly or quarterly review


Departmental/Tactical Level

Focus: Functional performance

| Department | KPI Examples |
|---|---|
| Sales | Revenue per rep, win rate, pipeline value |
| Marketing | Lead generation, cost per lead, conversion rate |
| Product | Feature adoption, user engagement, product quality |
| Customer Success | Churn rate, expansion revenue, NPS |
| Operations | Cycle time, cost per unit, defect rate |

Frequency: Weekly or monthly review


Team/Operational Level

Focus: Day-to-day execution

| Team | KPI Examples |
|---|---|
| Support team | Ticket resolution time, customer satisfaction |
| Development team | Sprint velocity, deployment frequency, bug rate |
| Content team | Content published, engagement rate |

Frequency: Daily or weekly review


Cascade Principle

KPIs should cascade:

  • Executive KPIs (organizational goals)
    • Department KPIs (functional contributions)
      • Team KPIs (operational execution)

Alignment: Team success → Department success → Organizational success


Individual KPIs: Proceed With Caution

The Case Against Individual KPIs

Problems:

| Issue | Why It Happens |
|---|---|
| Gaming | Easier to game individual metrics |
| Perverse competition | Hoarding information, sabotaging others |
| Short-term focus | Optimize personal metrics at expense of team |
| Collaboration suffers | Individual metrics incentivize solo work |

Research: Individual performance metrics often backfire (Kerr, 1975; "On the Folly of Rewarding A, While Hoping for B").


When Individual KPIs Work

Appropriate contexts:

  • Roles with clear, independent outputs (e.g., sales)
  • When individual contribution easily isolated
  • Paired with team/organizational KPIs

Best practice: Weight team KPIs higher than individual KPIs (e.g., 70% team, 30% individual)
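
The 70/30 weighting is a one-line calculation. A sketch with the weights taken from the text above; the scores themselves are hypothetical:

```python
# Sketch of the 70/30 team/individual weighting described above.
# The 0.7 weight comes from the text; the scores are hypothetical.

def blended_score(team_score: float, individual_score: float,
                  team_weight: float = 0.7) -> float:
    """Weight team performance above individual performance."""
    return team_weight * team_score + (1 - team_weight) * individual_score

print(blended_score(90, 60))  # 81.0 -> team results dominate the blend
```

Because the team component dominates, an individual cannot score well by optimizing personal metrics while the team fails.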


Implementing a KPI System

Phase 1: Design (4-6 weeks)

| Step | Activity |
|---|---|
| 1. Strategy clarity | Document strategic goals |
| 2. Driver identification | What drives goal achievement? |
| 3. KPI selection | 3-7 per major goal, using criteria |
| 4. Definition | Precise calculation, ownership, targets |
| 5. Data infrastructure | How will we measure? |

Phase 2: Pilot (2-3 months)

Test with one team or function:

  • Build dashboards
  • Track data
  • Use in decision-making
  • Identify issues

Phase 3: Rollout (3-6 months)

Expand to organization:

  • Training on KPI meaning and use
  • Regular review cadence
  • Integration into decision processes

Phase 4: Sustain

Ongoing:

  • Monthly KPI reviews
  • Quarterly KPI system review
  • Annual KPI revision (align with strategy changes)

KPI Dashboards and Reporting

Dashboard Best Practices

| Principle | How |
|---|---|
| Clarity | Each KPI clearly labeled with target |
| Context | Show trend over time, not just current value |
| Visual hierarchy | Most important KPIs prominent |
| Actionability | Include drill-down to understand drivers |
| Timeliness | Real-time or near-real-time updates |

What to Show

For each KPI:

  • Current value
  • Target or goal
  • Trend (up/down, improving/declining)
  • Historical comparison (vs. last month, last year)
  • Status indicator (green/yellow/red)
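
A status indicator is just a threshold rule on current value versus target. A minimal sketch; the 95% yellow band is a hypothetical convention, not a standard, and real dashboards would also account for direction (some KPIs are better when lower):

```python
# Sketch of a green/yellow/red status rule for a dashboard tile, assuming a
# higher-is-better KPI. The 95% yellow band is a hypothetical convention.

def kpi_status(current: float, target: float, yellow_band: float = 0.95) -> str:
    """Green at/above target, yellow within 95% of it, red below."""
    if current >= target:
        return "green"
    if current >= target * yellow_band:
        return "yellow"
    return "red"

print(kpi_status(102, 100))  # green
print(kpi_status(96, 100))   # yellow
print(kpi_status(80, 100))   # red
```
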

What Not to Do

Avoid:

  • 50 metrics on one dashboard (overwhelming)
  • Vanity metrics (look good but don't matter)
  • Metrics without context (is 1,000 users good or bad?)
  • Stale data (outdated dashboards ignored)
  • No ownership (unclear who acts on KPIs)

Maintaining KPI Effectiveness

Regular Reviews

Monthly: Review KPI performance, discuss actions

Quarterly: Review whether KPIs still predict success

Annually: Reassess KPI selection aligned with strategy


Signs Your KPIs Need Revision

| Warning Sign | What It Means | Action |
|---|---|---|
| KPIs not discussed in meetings | Not actually informing decisions | Replace with metrics people use |
| Consistent gaming | Metrics optimized without goal progress | Add counterbalancing KPIs |
| Strategy changed | KPIs no longer aligned | Redesign KPI set |
| KPIs always green/red | Targets wrong or KPI not sensitive | Adjust targets or change KPI |
| Too many KPIs added over time | "Key" lost meaning | Ruthlessly prune to vital few |

Conclusion: Key Means Key

The discipline of KPIs is the discipline of focus.

Real KPIs:

  • Few in number (3-7 per goal)
  • Directly tied to strategy
  • Inform major decisions
  • Balance multiple perspectives
  • Evolve with strategy

Fake KPIs:

  • Dozens or hundreds
  • Measured because they can be
  • Generate reports nobody uses
  • Optimize for one dimension
  • Never change

The hard part isn't measuring. It's deciding what matters most and having the discipline to focus on only that.

If you can't name your 3-7 most important KPIs right now, you don't have KPIs. You have metrics. And the first step to effective KPIs is admitting that difference.


What Research Shows About KPI Effectiveness

The science of organizational performance measurement has produced several well-validated findings about what makes KPIs actually work -- and what causes them to fail.

Robert Kaplan and David Norton at Harvard Business School developed the Balanced Scorecard framework in a 1992 Harvard Business Review article and expanded it in their 1996 book. Their research, drawn from collaboration with dozens of major companies, showed that organizations relying exclusively on financial KPIs consistently underperformed those that measured across multiple dimensions. The core insight was that financial results are lagging indicators -- they confirm that something worked or failed long after the critical decisions were made. Leading indicators in customer satisfaction, internal process quality, and employee learning predicted financial outcomes 12 to 18 months in advance, giving leadership time to course-correct rather than simply observe failure.

Kaplan and Norton documented a specific failure pattern: when companies cut training budgets to hit short-term profitability KPIs, the learning and growth dimension of their scorecards deteriorated, followed by internal process quality 6 months later, followed by customer satisfaction at 12 months, followed by revenue decline at 18 to 24 months. The financial KPI looked fine right up until it didn't. Organizations that tracked only lagging financial KPIs were structurally blind to this cascade until it was expensive to reverse.

Andy Grove at Intel independently developed a similar framework in the 1970s, which later became the foundation for Objectives and Key Results (OKRs). Grove's contribution, documented in his book High Output Management (1983), was the distinction between output and activity measurement. Grove argued that managers instinctively track activities -- calls made, meetings attended, reports submitted -- because activities are visible and controllable. But activities do not guarantee outcomes. A salesperson can make 100 calls and close no deals. An engineer can write 10,000 lines of code that no one uses. KPIs, Grove insisted, must measure deliverable output at a specific time: not "making calls" but "revenue closed by end of quarter." This insistence on time-bounded outcome measurement became foundational to how companies from Google to LinkedIn structure their goal-setting.

Steven Kerr's 1975 paper "On the Folly of Rewarding A, While Hoping for B" in the Academy of Management Journal provided the clearest academic articulation of why organizations end up with KPIs that measure the wrong things. Kerr documented case after case in which organizational reward systems were formally tied to one set of outcomes while leadership verbally emphasized different outcomes entirely. Universities said they valued teaching but rewarded only research publications. Hospitals said they valued quality care but measured and rewarded throughput speed. The KPIs that got tracked and tied to compensation were the ones that drove behavior, regardless of stated priorities.

Kerr's conclusion was that KPI dysfunction is not primarily a measurement design problem -- it is an organizational honesty problem. Leaders often know which metrics genuinely predict success, but those metrics are harder to hit, more politically uncomfortable, or less impressive in reports. The solution is not better measurement technology but organizational willingness to measure what actually matters, even when the numbers are smaller or more ambiguous.


Real-World Case Studies: KPIs That Worked and Failed

Google's OKR system. When John Doerr introduced OKRs to Google in 1999 -- drawing on his work with Andy Grove at Intel -- the company had fewer than 50 employees. The system required each quarter that every team set 3 to 5 objectives, each with 3 to 5 measurable key results. Key results had to be quantifiable and binary: either achieved or not, without ambiguity. Google's adoption of this framework has been credited by Doerr (in his 2018 book Measure What Matters) and by Google's leadership as a significant factor in maintaining strategic alignment during periods of explosive growth. By the time Google had 50,000 employees, teams from advertising sales to infrastructure engineering were using the same OKR structure. The critical feature was specificity: "improve search quality" was not a KPI; "reduce click-through rate on first result by 10 percent for navigational queries" was.

Wells Fargo's account-opening KPI. Between 2002 and 2016, Wells Fargo used accounts opened per employee as a primary performance KPI, tied directly to compensation and performance reviews. The underlying strategic goal was to deepen customer relationships -- more accounts meant more touchpoints, more cross-selling opportunities, more customer stickiness. The KPI was logical in design. What happened in practice was that employees opened approximately 3.5 million accounts without customer knowledge or consent, and applied for 500,000 credit cards customers never requested. The KPI hit record highs while the actual strategic goal -- genuine customer relationship depth -- deteriorated. When the fraud was exposed in 2016, Wells Fargo paid $185 million in fines and eventually settled for $3 billion. The case is now the canonical example of Charles Goodhart's observation that when a measure becomes a target, it ceases to be a good measure.

The NHS waiting time targets. The British National Health Service introduced maximum waiting time targets in the early 2000s as KPIs to reduce patient waits for elective surgery and emergency treatment. The 4-hour emergency department target and 18-week elective surgery target addressed genuine problems -- patients were waiting far too long -- and the targets created legitimate pressure to improve. However, as the KPIs became high-stakes (linked to hospital funding and executive careers), documented gaming emerged: ambulances were held outside emergency departments until staff could guarantee the patient would be seen within the 4-hour window from arrival. Waiting lists were "paused" (the clock stopped but the patient still waited). Patients were reclassified into lower-urgency categories with longer permitted wait times. The KPIs improved substantially on paper while independent patient experience surveys showed more mixed results. The NHS waiting time case is now standard reading in public sector management education as an example of KPI design that creates perverse incentives at scale.

Intel's revenue-per-employee KPI. Andy Grove used revenue per employee as one of Intel's primary strategic health indicators. The metric was elegant: it captured both revenue growth and operational efficiency in a single ratio. When revenue per employee rose, the company was growing revenue faster than headcount -- a sign of productivity improvement. When it fell, it was an early warning that the cost structure was growing unsustainably. Grove's insight was that this metric was harder to game than revenue alone (which could be boosted by simply hiring more salespeople) or profit alone (which could be boosted by cutting investment). The ratio required genuine productivity improvement to move in the right direction. This KPI design principle -- using ratios that capture trade-offs between dimensions rather than single-dimension absolute numbers -- remains one of Grove's lasting contributions to management practice.
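
Grove's ratio principle is easy to see in numbers. A sketch with hypothetical figures, showing why the ratio resists the "just hire more salespeople" game that an absolute revenue KPI invites:

```python
# Sketch of Grove's ratio principle: revenue per employee rises only when
# revenue grows faster than headcount. All figures are hypothetical.

def revenue_per_employee(revenue: float, headcount: int) -> float:
    return revenue / headcount

baseline = revenue_per_employee(10_000_000, 100)      # 100,000 per employee
# A hiring push lifts absolute revenue, but the ratio exposes the added cost:
hiring_push = revenue_per_employee(11_000_000, 125)   # 88,000 -> ratio fell
# Genuine productivity improvement moves the ratio the right way:
productivity = revenue_per_employee(12_000_000, 100)  # 120,000 -> ratio rose
print(baseline, hiring_push, productivity)
```
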


Evidence-Based Principles for KPI Design

Forty years of research on organizational performance measurement yields converging principles about what makes KPIs effective.

Principle 1: Fewer is better, and the limit is lower than most organizations think. Kaplan and Norton recommended 15 to 25 measures across the four Balanced Scorecard perspectives for an entire organization's strategic measurement system. That averages to roughly 4 to 6 per perspective. When organizations go above this, research consistently shows that decision quality declines even as measurement activity increases. The mechanism is cognitive: human working memory can hold approximately 7 items simultaneously. When KPI systems exceed this limit, people cannot hold the full set in mind during decisions, and begin selectively attending to the metrics they find most salient -- which may not be the most strategically important.

Principle 2: KPIs must be owned by someone who can actually move them. W. Edwards Deming identified "management by objective" as one of his "seven deadly diseases" of management -- not because goals are bad, but because they are typically assigned to people who lack the authority or tools to achieve them. A customer satisfaction KPI assigned to a customer success team that cannot influence product quality, pricing, or support staffing is a recipe for gaming and frustration. Effective KPIs require a clear owner with the authority and resources to influence the metric.

Principle 3: Leading and lagging indicators must both be present. An exclusive focus on lagging KPIs (revenue, profit, customer churn) produces organizations that know they have failed but do not know why until it is expensive to fix. An exclusive focus on leading KPIs (pipeline value, engagement scores, customer health) produces organizations that optimize proxies without confirming that the proxies actually predict outcomes. Kaplan and Norton's strategy maps were designed to make the causal chain explicit: specific leading indicators were hypothesized to drive specific lagging outcomes, and the hypothesis was tested over time. When a leading indicator improved but the expected lagging outcome did not follow, the causal assumption was revised.

Principle 4: KPIs should be revised when strategy changes, not preserved for historical continuity. One of the most consistent findings in performance measurement research is that organizations preserve KPIs long past their usefulness because changing them disrupts historical trend analysis and implies that previous measurement was inadequate. Kaplan and Norton documented organizations using the same scorecard metrics a decade after their strategy had fundamentally changed. The solution is to treat KPIs as hypotheses tied to specific strategic assumptions. When the strategy changes, the hypotheses change, and the KPIs should follow.


References

  1. Kaplan, R. S., & Norton, D. P. (1992). "The Balanced Scorecard: Measures That Drive Performance." Harvard Business Review, 70(1), 71–79.

  2. Parmenter, D. (2015). Key Performance Indicators: Developing, Implementing, and Using Winning KPIs (3rd ed.). Wiley.

  3. Marr, B. (2012). Key Performance Indicators: The 75+ Measures Every Manager Needs to Know. FT Press.

  4. Kerr, S. (1975). "On the Folly of Rewarding A, While Hoping for B." Academy of Management Journal, 18(4), 769–783.

  5. Hope, J., & Fraser, R. (2003). "Who Needs Budgets?" Harvard Business Review, 81(2), 108–115.

  6. Neely, A., Adams, C., & Kennerley, M. (2002). The Performance Prism: The Scorecard for Measuring and Managing Business Success. FT Press.

  7. Eckerson, W. W. (2010). Performance Dashboards: Measuring, Monitoring, and Managing Your Business (2nd ed.). Wiley.

  8. Austin, R. D. (1996). Measuring and Managing Performance in Organizations. Dorset House.

  9. Croll, A., & Yoskovitz, B. (2013). Lean Analytics: Use Data to Build a Better Startup Faster. O'Reilly Media.

  10. Hubbard, D. W. (2014). How to Measure Anything: Finding the Value of Intangibles in Business (3rd ed.). Wiley.

  11. Drucker, P. F. (1954). The Practice of Management. Harper & Row.

  12. Goldratt, E. M. (1990). Theory of Constraints. North River Press.

  13. Davenport, T. H., & Harris, J. G. (2007). Competing on Analytics: The New Science of Winning. Harvard Business School Press.

  14. Sull, D., Homkes, R., & Sull, C. (2015). "Why Strategy Execution Unravels—and What to Do About It." Harvard Business Review, 93(3), 57–66.

  15. Reddy, R. (2015). "How to Create a KPI." In The KPI Book (pp. 12–35). The KPI Institute.


About This Series: This article is part of a larger exploration of measurement, metrics, and evaluation. For related concepts, see [Designing Useful Measurement Systems], [Vanity Metrics vs Meaningful Metrics], [Why Metrics Often Mislead], and [What Should Be Measured and Why].

Frequently Asked Questions

What are KPIs in simple terms?

KPIs (Key Performance Indicators) are the specific metrics most critical to your strategic goals—the vital few that actually matter.

How are KPIs different from regular metrics?

All KPIs are metrics, but not all metrics are KPIs. KPIs are the handful most directly tied to strategic success.

How many KPIs should you have?

Typically 3-7 per strategic goal. Having too many dilutes focus; 'key' means most important, not comprehensive.

What makes a good KPI?

Directly tied to strategic goals, actionable, measurable consistently, understood by everyone, and actually influences decisions.

Can KPIs change over time?

Yes, and they should. As strategy evolves or context changes, KPIs must adapt to remain aligned with what matters most.

What's the difference between leading and lagging KPIs?

Leading KPIs predict future performance; lagging KPIs measure past results. Effective systems use both.

Why do KPI initiatives often fail?

Too many KPIs, disconnection from strategy, poor data quality, lack of accountability, or treating KPIs as theater instead of tools.

Should every person have individual KPIs?

Not necessarily. KPIs work best at team or organizational level. Individual metrics can create perverse competition and gaming.