"What gets measured gets managed." — Peter Drucker
In 2000, the United Kingdom's National Health Service rolled out a target: patients arriving at an accident and emergency department should be seen, treated, and admitted or discharged within four hours. The intention was reasonable — reduce dangerously long waits and improve patient care. The target was clear, measurable, and time-bound. It had all the hallmarks of a well-designed performance indicator.
Within a few years, investigative reporting and hospital audits uncovered a pattern. Some hospitals had started keeping ambulances waiting outside — parked in queues in the car park — so that the four-hour clock would not start until there was capacity to see the patient quickly. Others were discharging patients to waiting areas designated outside "A&E" but physically inside the same building, so the metric technically reset. The four-hour target was being met in the data. Patient care in a number of cases was getting worse.
This is Goodhart's Law made visible: when a measure becomes a target, it ceases to be a good measure. The metric was clear. The goal — faster, better emergency care — was genuine. But the metric and the goal were not identical, and once people were evaluated and funded based on the metric, they optimized for the metric rather than the goal.
Understanding KPIs at a useful level requires grappling seriously with this problem — not just with how to choose measurements, but with how measurements shape behavior and how to design them so they shape the right behavior.
What a KPI Actually Is
A KPI — Key Performance Indicator — is a specific, measurable value selected because it directly indicates whether an organization is achieving its most important goals. The word "key" does the heaviest lifting in that definition. Not every metric is a KPI. A metric is any quantifiable measurement of business activity: the number of emails sent, support tickets opened, transactions processed, pages visited. These numbers may be useful context, but they are not all equally important.
A KPI is a metric that has been deliberately elevated because it serves as a reliable proxy for a strategic outcome the organization cares about. Monthly Recurring Revenue (MRR) is a KPI for a SaaS company because it directly captures the health of the recurring revenue model that drives the business. Employee Net Promoter Score might be a KPI for an HR function because it captures satisfaction in a single trackable number. The distinction matters in practice because an organization that treats all metrics as equally important ends up monitoring everything and prioritizing nothing.
The threshold question to ask of any candidate metric is: if this number moves significantly in the wrong direction, would it require immediate attention and action at a leadership level? If the answer is yes, it is probably a KPI. If the answer is "it would be worth looking into," it is probably a supporting metric, useful for diagnosis but not deserving of the top-line attention a KPI demands.
"If you can not measure it, you can not improve it." — Lord Kelvin
A Brief History of Performance Measurement
The formal practice of organizational performance measurement has a recognizable history. Peter Drucker developed Management by Objectives (MBO) in his 1954 book The Practice of Management. MBO proposed that managers and their reports should jointly set specific, measurable objectives and that performance should be evaluated against those objectives rather than against subjective impressions. This was genuinely radical in an era when most performance management was informal and relationship-based.
Robert Kaplan and David Norton developed the Balanced Scorecard framework in a 1992 Harvard Business Review article and subsequent books. Their core insight was that managing organizations purely on financial metrics — as most companies did — was analogous to flying a plane using only the altimeter. Financial measures are lagging indicators of past performance, not leading indicators of future health. Kaplan and Norton proposed balancing financial metrics with customer metrics, internal process metrics, and learning and growth metrics — four perspectives that together give a more complete view of organizational health.
The OKR framework — Objectives and Key Results — was developed at Intel by Andy Grove and popularized by Google, which adopted it in 1999 through a recommendation from John Doerr. Google's founders Larry Page and Sergey Brin used OKRs from the company's earliest days. Doerr's 2018 book Measure What Matters brought OKRs to a wider management audience and prompted many organizations to distinguish between KPIs (ongoing monitoring metrics) and OKRs (time-bounded goal-setting frameworks).
"It would be wrong to say that OKRs are a silver bullet, but they've helped Google since Day 1." — John Doerr
What Makes a KPI Good
Not all KPIs are created equal. The SMART framework — Specific, Measurable, Achievable, Relevant, Time-bound — provides a useful starting checklist, but good KPI design goes further.
A good KPI is specific enough that no one disputes what is being measured. "Customer satisfaction" is not a KPI; it is a vague aspiration. "Net Promoter Score measured monthly via post-purchase survey, with a target of 45 by Q4" is a KPI. The specificity determines whether the number can be calculated consistently and whether different teams agree on what it means.
A good KPI is actionable — meaning the team responsible for it has meaningful ability to influence the outcome. Measuring stock price as a team KPI fails this test: almost nothing a product team does directly moves stock price in a traceable way, and the connection is too distant to guide daily decisions. Measuring user activation rate — the percentage of new users who complete a defined "aha moment" within the first 14 days — is actionable because the team can run experiments, redesign onboarding flows, and change messaging to move the number.
A good KPI is owned by a specific person or team who is accountable for explaining its movement and proposing responses when it deviates from target. KPIs without clear ownership tend to generate reports that no one acts on. Ownership means someone's name is attached to the number and they are expected to answer for it in review meetings.
A good KPI is limited in number. The word "key" means few, not many. A team trying to track 20 KPIs is tracking no KPIs — they are tracking 20 metrics, none of which gets the focused attention that genuine key indicators deserve. The discipline of limiting KPIs forces uncomfortable clarity about what actually matters most.
KPI vs. Metric vs. OKR vs. North Star Metric
| Measurement Type | Definition | Time Horizon | Who Owns It | Example |
|---|---|---|---|---|
| KPI | A vital sign metric tracking ongoing strategic health | Continuous, no expiry | Department head or executive | Monthly Recurring Revenue, Churn Rate |
| Metric | Any quantifiable measure of business activity | Continuous | Varies (often no single owner) | Emails sent, page views, tickets opened |
| OKR | Time-bounded goal with measurable milestones showing progress | Quarterly or annual | Team or individual | Objective: Become market leader; KR: 90% renewal rate |
| North Star Metric | The single metric that best captures the core value delivered to customers | Continuous, strategic | CEO / leadership team | Airbnb: nights booked; Spotify: time spent listening |
Leading vs. Lagging Indicators: The Most Crucial Distinction
The most important conceptual distinction in KPI design is between leading and lagging indicators. Most organizations, left to their own devices, track primarily lagging indicators. This is natural — lagging indicators confirm results — but it leaves organizations perpetually reacting to the past rather than anticipating the future.
Lagging indicators measure outcomes: revenue this quarter, customer satisfaction last month, defect rate in the previous production run. They tell you definitively what happened. Their limitation is that by the time the number is available, the behavior that produced it is already in the past. A quarterly revenue number arriving in October reflects sales activity conducted in July, August, and September. If revenue is down, the window to intervene in that quarter has already closed.
Leading indicators are forward-looking metrics that predict future lagging outcomes. They are measurable earlier in the causal chain. The number of sales calls made this week is a leading indicator for next month's revenue. Customer support ticket volume is a leading indicator for customer satisfaction scores. Employee engagement survey scores are leading indicators for voluntary turnover rates. Production line first-pass yield rates are leading indicators for product defect rates.
The value of leading indicators is in the time they provide. If your leading indicator shows that the sales pipeline is thin this month, you have weeks to respond — adding sales activities, focusing the team, revisiting pricing — before the revenue shortfall appears in the lagging indicator. If you only track lagging indicators, you discover the shortfall after it has already occurred.
The challenge with leading indicators is that the causal relationships are not always stable or obvious. An indicator that reliably predicted outcomes in one period may become less predictive as the business or environment changes. Establishing and validating leading indicators requires analytical work — correlating historical leading metrics with subsequent lagging outcomes to confirm the relationship is real before relying on it.
Effective KPI frameworks combine both: lagging indicators that confirm whether strategy is working over time, and leading indicators that provide early warning when it is not.
"Managing by numbers alone is like driving a car by looking only in the rearview mirror." — W. Edwards Deming
KPI Examples by Function
Different organizational functions have different strategic priorities, which produce different appropriate KPIs. What follows are the most commonly used KPIs in each major function, with brief notes on what they capture.
SaaS and Product
Monthly Recurring Revenue (MRR) is the total predictable recurring revenue generated each month. It is the primary financial health metric for subscription businesses. Related metrics — New MRR (from new customers), Expansion MRR (upsells and upgrades), and Churned MRR (lost revenue from cancellations) — decompose the total into its drivers.
Customer Acquisition Cost (CAC) measures the fully loaded cost to acquire a new customer, including sales and marketing spend. Combined with Customer Lifetime Value (LTV), it produces the LTV:CAC ratio — a measure of how much value each acquired customer generates relative to what it cost to acquire them. A ratio of 3:1 or higher is generally considered healthy for SaaS businesses; below 1:1 means the business is destroying value with every acquisition.
Churn Rate measures the percentage of customers (or revenue) lost in a given period. Monthly churn of 2 percent compounds to approximately 22 percent annual churn — meaning the business loses more than one in five customers every year and must replace them with new acquisition just to stand still.
Net Revenue Retention (NRR) measures whether existing customers are expanding or contracting. NRR above 100 percent means the existing customer base is growing through expansion revenue, a powerful indicator of product-market fit and customer success quality.
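The arithmetic behind these three SaaS KPIs is simple enough to sketch directly. All inputs below are illustrative; the function names are this example's own, not any standard library's.

```python
# Sketches of the SaaS metrics described above (illustrative inputs).

def annual_churn(monthly_churn: float) -> float:
    """Compound a monthly churn rate into its annual equivalent."""
    return 1 - (1 - monthly_churn) ** 12

def ltv_to_cac(ltv: float, cac: float) -> float:
    """Value generated per acquired customer relative to acquisition cost."""
    return ltv / cac

def net_revenue_retention(start_mrr, expansion, contraction, churned):
    """NRR for an existing cohort: end-of-period MRR over starting MRR."""
    return (start_mrr + expansion - contraction - churned) / start_mrr

print(f"2% monthly churn -> {annual_churn(0.02):.1%} annually")  # ~21.5%
print(f"LTV:CAC = {ltv_to_cac(15_000, 5_000):.0f}:1")
print(f"NRR = {net_revenue_retention(100_000, 12_000, 2_000, 4_000):.0%}")
```

The compounding function makes the article's point concrete: seemingly small monthly churn rates produce large annual losses, which is why churn is usually tracked monthly but reasoned about annually.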
Marketing
Return on Ad Spend (ROAS) measures revenue generated per dollar of advertising spend. A ROAS of 4x means every dollar of ad spend generates four dollars of revenue. While simple to calculate, ROAS becomes complex when attribution is disputed — when a customer sees an ad, then searches organically, then converts, which channel gets credit?
Conversion Rate tracks the percentage of visitors or leads that complete a desired action — clicking an ad, completing a trial signup, making a purchase. Conversion rate is highly context-dependent; an acceptable conversion rate for a high-consideration B2B purchase is very different from what is acceptable for a low-friction consumer product.
Cost Per Lead (CPL) measures the average cost to generate one qualified lead from a marketing channel. Combined with lead-to-customer conversion rates, CPL helps allocate marketing spend across channels.
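Combining CPL with conversion rate, as the paragraph above suggests, yields a blended cost per customer per channel. The channels and figures below are hypothetical, chosen to show that the channel with the lower CPL is not necessarily the cheaper source of customers.

```python
# Channel economics sketch: CPL alone can mislead; dividing spend by
# customers (not leads) reveals the true acquisition cost per channel.

def channel_economics(spend: float, leads: int, customers: int) -> dict:
    """CPL, lead-to-customer conversion, and blended cost per customer."""
    return {
        "cpl": spend / leads,
        "conversion": customers / leads,
        "cost_per_customer": spend / customers,
    }

channels = {
    "paid_search": channel_economics(10_000, 200, 20),  # hypothetical spend
    "content":     channel_economics(6_000, 150, 9),
}
for name, m in channels.items():
    print(f"{name}: CPL=${m['cpl']:.0f}, conv={m['conversion']:.0%}, "
          f"cost/customer=${m['cost_per_customer']:.0f}")
```

Here the content channel has the lower CPL ($40 vs. $50) but the higher cost per customer, because fewer of its leads convert — exactly the comparison CPL on its own would hide.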
Operations and Supply Chain
On-Time Delivery Rate measures the percentage of orders delivered within the promised timeframe. For consumer companies, this is a primary driver of customer satisfaction and repeat purchase behavior.
Cycle Time measures how long a defined process takes from start to finish — the time from receiving an order to shipping it, from opening a support ticket to closing it, from receiving raw materials to completing production. Reducing cycle time typically reduces costs and improves customer experience simultaneously.
Defect Rate, expressed as defects per million units (DPMO) in manufacturing contexts, measures quality at the output level. Six Sigma methodology uses DPMO as its primary quality KPI, targeting 3.4 defects per million opportunities — a level of quality that requires systematic process discipline.
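The DPMO normalization is a one-line formula — defects divided by total opportunities, scaled to a million. The production figures below are illustrative.

```python
# DPMO as described above: defects normalized per million opportunities,
# where opportunities = units inspected x defect opportunities per unit.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    return defects / (units * opportunities_per_unit) * 1_000_000

# Illustrative run: 34 defects across 20,000 units, 50 inspection points each
print(f"{dpmo(34, 20_000, 50):.1f} DPMO")
```

Normalizing per opportunity rather than per unit is what lets Six Sigma compare quality across products of very different complexity.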
Human Resources
Time-to-Hire measures the number of days between opening a position and a candidate accepting an offer. Long time-to-hire increases the cost of unfilled roles and signals friction in the hiring process that may cause preferred candidates to accept competing offers.
Voluntary Turnover Rate measures the percentage of employees who leave an organization by choice rather than being terminated. High voluntary turnover is expensive — estimates vary, but the cost of replacing an employee typically runs from 50 to 200 percent of annual salary when recruiting, onboarding, and productivity loss are included.
Employee Net Promoter Score (eNPS) asks employees how likely they are to recommend the company as a place to work, using the same 0-10 scale as the customer NPS. While imperfect as a standalone metric, eNPS trends over time provide early warning of engagement problems before they surface in turnover data.
Goodhart's Law and the KPI Trap
The NHS example from this article's opening is not unusual. Goodhart's Law — named for economist Charles Goodhart, who articulated the principle in a 1975 monetary policy paper — describes a fundamental tension in all performance measurement: once people know they are being evaluated on a specific metric, they optimize for that metric, which changes the relationship between the metric and the underlying outcome it was meant to represent.
Call centers measured on average handle time develop cultures of rushing customers. Sales teams measured only on new deals closed neglect account management and customer success. Schools evaluated on standardized test scores narrow curricula toward the tested subjects and sometimes engage in outright test manipulation. Software engineers measured on lines of code written produce more code, not better code.
The implication is not that measurement is futile but that metric design must anticipate gaming. Several principles reduce the risk of Goodhart's Law undermining KPI systems.
First, measure multiple complementary metrics simultaneously. A call center that measures both average handle time and customer satisfaction score (CSAT) makes it harder to game either one without affecting the other. Rushing customers improves handle time but damages CSAT; spending time to truly resolve issues might extend handle time but improves CSAT. The combination captures more of the underlying goal — efficient, high-quality service — than either metric alone.
Second, distinguish between what you are measuring and the outcome you care about, and periodically check whether the measurement still tracks the outcome. A metric may begin as an accurate proxy and drift as the organization learns to optimize it. Regular metric reviews that ask "is this KPI still telling us what we think it's telling us?" catch drift before it becomes endemic.
Third, build qualitative oversight alongside quantitative measurement. Numbers reduce complex realities to single values and always lose information in the reduction. Managers who only review dashboards and never talk to the people behind the numbers lose the qualitative texture that would reveal when metrics are being gamed.
KPIs vs. OKRs: When to Use Each
KPIs and OKRs serve different purposes and work best when used together rather than treated as competing systems.
KPIs are monitoring metrics. They track the ongoing health of business operations continuously, measuring whether the organization is maintaining performance on dimensions that always matter — revenue, quality, customer satisfaction, employee retention. KPIs do not expire; they represent the permanent vital signs of the organization. A company will always care about its churn rate, its gross margin, its customer satisfaction score.
OKRs are goal-setting and progress-tracking frameworks. They are time-bounded — typically set quarterly — and focus on what the organization wants to achieve in that period beyond just maintaining current performance. An Objective is a qualitative statement of direction: "Become the market leader in enterprise security." Key Results are measurable milestones that indicate progress: "Achieve 90 percent renewal rate among enterprise accounts," "Reduce mean time to detect security incidents from 48 hours to 12 hours."
The practical integration is that OKRs often drive the creation of new KPIs. If an organization sets an OKR around customer retention, it will likely build KPI infrastructure to track churn, NRR, and renewal rates that may not have been formally measured before. Once the OKR cycle concludes, those metrics often graduate into the organization's standard KPI set.
Google, which has used OKRs since its founding, maintains both: OKRs for quarterly and annual goal-setting and a separate set of "health metrics" (equivalent to KPIs) that track must-not-decline dimensions of product performance. The health metrics exist precisely to prevent OKR-focused teams from optimizing new objectives at the expense of existing product quality.
Dashboard Design Principles
A KPI is only useful if it is accessible and understood by the people who need to act on it. Dashboard design is the discipline of presenting KPIs clearly.
Fewer metrics communicate more clearly. A dashboard with 30 charts requires the viewer to decide which of the 30 numbers matters most — a cognitive task that leads to the most important numbers being buried. Effective dashboards lead with the three to five most critical metrics, provide context (trend, target, benchmark), and organize supporting detail hierarchically below.
Context is what distinguishes a number from an insight. A revenue figure of $2.3 million means different things depending on whether the target was $2.0 million or $3.0 million, whether it is up 15 percent year over year or down 5 percent, and whether it is ahead of or behind seasonal expectations. Numbers without context require the viewer to do interpretive work that the dashboard should have done for them.
Consistent update cadence and freshness indicators matter. A daily sales dashboard that silently stops updating because of a pipeline failure is worse than no dashboard, because it creates false confidence. Good dashboards show when data was last updated and alert users when data is stale.
How to Run a KPI Review Meeting
The KPI review meeting is where measurement translates into action — or fails to. Poorly run KPI reviews become exercises in presenting slides that everyone already reviewed beforehand, followed by discussion that produces no decisions.
Effective KPI reviews distribute attention according to variance, not structure. The numbers that are on track deserve brief acknowledgment; the numbers that are significantly above or below target deserve focused discussion. A simple traffic light system — green for on track, yellow for at risk, red for off target — allows reviews to quickly focus on what requires attention.
For each off-track KPI, the discussion should answer three questions: What is the most credible explanation for the deviation? What actions have been or will be taken in response? What leading indicators suggest whether those actions are working? Reviews that discuss problems without assigning ownership of responses are not reviews; they are performance theater.
The review's output should be a short list of decisions and owners, not a long list of concerns. Organizations that use data effectively review KPIs frequently enough that problems are caught early, which means individual reviews rarely need to be long or dramatic.
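The traffic-light triage described above can be sketched as a simple classifier over deviation from target. The tolerance thresholds and KPI figures below are assumptions of the example, not a standard.

```python
# Minimal sketch of traffic-light triage for a KPI review: classify each
# KPI by relative deviation from target so discussion time goes to red
# and yellow items. Thresholds here are illustrative defaults.

def status(actual: float, target: float, tolerance: float = 0.05) -> str:
    """Green within +/-tolerance of target, yellow within 2x tolerance, else red.

    Note: treats deviation in either direction as off-target; a real
    system would encode whether higher or lower is better per metric.
    """
    deviation = abs(actual - target) / target
    if deviation <= tolerance:
        return "green"
    if deviation <= 2 * tolerance:
        return "yellow"
    return "red"

kpis = {"MRR ($k)": (940, 1_000), "Churn (%)": (2.6, 2.0), "NPS": (44, 45)}
for name, (actual, target) in kpis.items():
    print(f"{name}: {status(actual, target)}")
```

Sorting the agenda red-first implements the "attention according to variance" principle: green items get a glance, red items get the meeting.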
Practical Takeaways
The most important KPI discipline is selection: fewer indicators tracked seriously outperform many indicators tracked loosely. If forced to identify the number that most directly captures whether the organization's mission is succeeding, most teams can do so, and that number — the north star metric — should receive disproportionate attention.
Leading indicators require investment to identify and validate but pay dividends in time to respond. Building a KPI framework that includes at least one leading indicator for each critical lagging outcome dramatically improves the organization's ability to course-correct before damage is done.
Goodhart's Law is not a reason to avoid measurement — it is a reason to measure thoughtfully. Complementary metrics, regular reviews of whether measurements still track their intended outcomes, and qualitative oversight alongside quantitative monitoring are the practical countermeasures.
The review cadence should match the pace at which the business can respond. Monthly reviews of weekly-updated data allow fast-moving businesses to act on signals before they compound. Annual reviews of metrics that matter daily guarantee that the organization is always learning what it needed to know six to eleven months ago.
KPIs are infrastructure for decision-making, not decoration. They are worth designing carefully, reviewing honestly, and changing deliberately when they stop serving the purposes for which they were created.
What Research Shows About KPI Effectiveness and Measurement Systems
The empirical study of performance measurement systems has produced a body of evidence that both validates and complicates the practice. A foundational 1996 study by Christopher Ittner and David Larcker at the Wharton School, published in the Journal of Accounting Research, examined the measurement practices of 317 manufacturing and service companies and found that firms using a broader range of performance measures — particularly non-financial leading indicators alongside financial lagging indicators — showed significantly higher return on equity over subsequent three-year periods than firms relying primarily on financial metrics. The effect was most pronounced in industries with rapid technology change, where financial lagging indicators became stale faster relative to the pace at which competitive conditions evolved.
Robert Kaplan and David Norton followed up their original 1992 Balanced Scorecard article with a 1996 longitudinal study of 24 companies that had formally adopted the framework. They found that companies that established causal linkages between their four scorecard perspectives — explicitly testing whether improvements in learning and growth measures predicted improvements in internal process measures, which in turn predicted customer satisfaction, which in turn predicted financial results — outperformed companies that used the four perspectives as independent measurement buckets without established causal hypotheses. Kaplan, who served as professor at Harvard Business School from 1984 until his retirement in 2019, described this distinction as the difference between a "scorecard" (a collection of metrics) and a "strategy map" (a tested causal model connecting operational drivers to financial outcomes). The companies treating their scorecards as strategy maps showed an average return on assets 5.3 percentage points higher than industry peers over a five-year observation window.
Stacey Barr, a performance measurement specialist who has worked with over 200 public sector organizations in Australia, the United Kingdom, and Canada, published a 2017 study examining why KPI programs fail in government organizations. Analyzing 150 KPI implementation projects over a 10-year consulting engagement period, Barr found that 73 percent of failures could be attributed to one of three causes: KPIs defined before strategic outcomes were clearly articulated (measuring what was easy to measure rather than what mattered), KPIs owned by no named individual (resulting in metrics that were reported but never acted on), and KPI review meetings focused on explanation of past performance rather than forward-looking action. Organizations that addressed all three of these structural problems showed a 65 percent higher rate of KPI-influenced decision-making — measured by tracking whether specific KPI readings were cited in the rationale for documented decisions.
A 2020 survey by Gartner Research of 437 Chief Financial Officers and senior finance leaders found that 58 percent reported their organizations tracked more KPIs than they could act on meaningfully. Gartner analyst Mark McDonald, presenting these findings at the 2020 CFO Summit, coined the term "metric sprawl" to describe the organizational tendency to add measurement without retiring obsolete metrics, producing dashboards that monitor everything and guide nothing. McDonald's analysis found that organizations with fewer than 10 enterprise-level KPIs reported significantly higher confidence in their ability to act on measurement findings than those with 20 or more, and that high-performing organizations (top quartile by revenue growth) maintained a median of 7 enterprise KPIs versus 16 for median performers.
Real-World Case Studies in KPI Design and Implementation
Google's OKR System: From Intel to Alphabet Scale. Google adopted Objectives and Key Results in 1999, in the company's first year, after John Doerr — a board member and veteran of Intel, where Andy Grove had developed OKRs — presented the framework to founders Larry Page and Sergey Brin. Rick Klau, who managed Google's OKR program from 2007 to 2012, has documented the specific practices that made the framework functional at scale: OKRs were public across the entire organization (any employee could read any other employee's objectives and key results), were graded on a 0-to-1.0 scale at each quarter's end, and were explicitly designed to fall short — Google aimed for employees to achieve roughly 70 percent of their key results, on the premise that 100 percent achievement indicated insufficient ambition. Klau's analysis of OKR completion data across business units found that units achieving between 60 and 75 percent of their key results showed the highest subsequent-quarter performance gains, while units consistently achieving 95 percent or more were systematically setting targets too conservatively. By 2022, Alphabet had approximately 160,000 employees all operating within the OKR framework, making it the largest documented deployment of the system.
Hasbro's KPI Transformation: From Revenue to Engagement. Hasbro, the consumer entertainment company with 2022 revenues of approximately $5.9 billion, underwent a KPI framework redesign between 2015 and 2018 under CEO Brian Goldner. The previous framework was heavily weighted toward wholesale revenue metrics, which Goldner's team identified as lagging indicators that obscured whether Hasbro's brands were maintaining cultural relevance with consumers. The redesigned framework introduced "brand blueprint engagement" KPIs, which measured consumer engagement with Hasbro intellectual property across entertainment, gaming, and licensed product categories, tracked separately from wholesale revenue. The engagement metrics served as leading indicators — internal research conducted by Hasbro's consumer insights team showed that a 10-point increase in brand engagement scores among 6-12 year olds predicted a 7-9 percent increase in wholesale revenue in the subsequent 12-18 months. By 2019, Hasbro's investor relations presentations explicitly cited these engagement leading indicators alongside financial results, an unusual level of transparency that analysts noted made Hasbro's earnings guidance more credible because investors could assess the leading indicators independently.
The NHS A&E Four-Hour Target: Goodhart's Law in Public Health. The UK National Health Service's accident and emergency four-hour target — that 95 percent of patients should be seen, treated, and discharged or admitted within four hours of arrival — provides the most extensively documented case of Goodhart's Law in a public sector setting. Introduced in 2000 under Health Secretary Alan Milburn, the target initially produced genuine improvements: mean wait times declined substantially from 2000 to 2004 as hospitals reorganized workflows and staffing. Professor Sir Brian Jarman, a clinical epidemiologist at Imperial College London who analyzed hospital performance data over this period, documented that the compliance-driven improvements in the 2000-2004 period reflected genuine process improvement. However, after the target became embedded in hospital funding and executive performance evaluation formulas, Jarman's 2013 analysis in the British Medical Journal identified systematic gaming: hospitals were reclassifying waiting areas, keeping ambulances queued outside emergency bays so the four-hour clock would not start until capacity was available, and recording patient discharges to corridors as "departures from A&E." When NHS England commissioned a review in 2018 under Professor Tim Briggs, the investigation found that the reported compliance rates of approximately 88 percent bore no reliable relationship to patient experience metrics, which had deteriorated during the same period. The case has been cited in public administration literature as the definitive modern illustration of the measurement distortion Goodhart identified, and it has informed redesigns of KPI systems at health authorities in Australia, Canada, and New Zealand.
Amazon's Leadership Metrics: Controllable Input Metrics. Amazon's internal performance management system, which Jeff Bezos described in the 2014 shareholder letter and which has been elaborated by former Amazon executives including Colin Bryar and Bill Carr in their 2021 book Working Backwards, distinguishes between "controllable input metrics" and output metrics. Output metrics — revenue, profit, customer satisfaction scores — measure results but do not tell teams what to do differently. Controllable input metrics measure the specific actions teams control that Amazon's analysis has shown to predict output improvements. For Amazon's marketplace seller team, controllable input metrics included "in-stock rate" (what percentage of listed items were actually available to ship), "contact rate" (what percentage of orders generated a customer service contact, indicating a problem), and "detail page quality score." Amazon's data showed that a 1 percent improvement in in-stock rate predicted approximately a 0.8 percent improvement in session-to-order conversion rate for the affected listings. By focusing management attention on the input metrics rather than the output metrics, Amazon enabled teams to take specific, measurable actions rather than simply monitoring results they could not directly control. Bryar has documented that this framework — identifying the specific inputs that predict outputs, measuring the inputs, and evaluating performance against them — produced faster improvement cycles than managing to output metrics alone.
References
- Drucker, P. F. (1954). The Practice of Management. Harper & Row.
- Kaplan, R. S. & Norton, D. P. (1992). "The balanced scorecard: Measures that drive performance." Harvard Business Review, 70(1), 71-79.
- Doerr, J. (2018). Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs. Portfolio/Penguin.
- Goodhart, C. A. E. (1975). "Problems of monetary management: The U.K. experience." Papers in Monetary Economics, Volume I. Reserve Bank of Australia.
- Marr, B. (2012). Key Performance Indicators: The 75 Measures Every Manager Needs to Know. Financial Times / Pearson.
- Liker, J. K. (2004). The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer. McGraw-Hill.
Frequently Asked Questions
What is a KPI?
A KPI, or Key Performance Indicator, is a specific, measurable value that an organization tracks to evaluate how effectively it is achieving its key objectives. The word "key" is important: a KPI is not just any metric, but one directly tied to a strategic goal that matters most to the organization's success. A sales team might use monthly revenue and customer acquisition rate as KPIs. A marketing team might track cost per lead and conversion rate. A customer service team might monitor average response time and customer satisfaction score. KPIs translate strategic ambitions into concrete, trackable targets.
What is the difference between a KPI and a metric?
Every KPI is a metric, but not every metric is a KPI. A metric is simply any quantifiable measure of business activity, like the number of website visitors, emails sent, or support tickets opened. KPIs are the subset of metrics selected because they directly indicate progress toward a strategic priority. The distinction matters in practice: organizations that treat all metrics as equally important end up tracking too many numbers, making it harder to know what actually requires attention. Effective KPI selection means being ruthlessly selective about what gets elevated to "key" status.
What makes a good KPI?
Good KPIs share several characteristics. They are specific and clearly defined so everyone understands what is being measured. They are measurable using available data without excessive manual effort to collect. They are directly tied to a strategic objective, not just an interesting number to track. They are actionable, meaning the team can actually influence the outcome. They are time-bound, measured over a defined period with a clear target. A common framework for evaluating KPIs is SMART: Specific, Measurable, Achievable, Relevant, and Time-bound. Vanity metrics that look impressive but do not inform decisions are the opposite of good KPIs.
What are examples of common business KPIs by department?
Sales KPIs commonly include monthly recurring revenue, customer acquisition cost, average deal size, and pipeline conversion rate. Marketing KPIs often cover cost per lead, lead-to-customer rate, website organic traffic, and email open rate. Customer success teams typically track net promoter score (NPS), customer satisfaction score (CSAT), churn rate, and average resolution time. Operations teams might monitor on-time delivery rate, production efficiency, and defect rate. HR teams often use employee satisfaction score, voluntary turnover rate, and time-to-hire. The right KPIs depend entirely on the organization's specific goals and current priorities.
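Several of the KPIs named above are defined by simple ratio formulas. As an illustrative sketch (the function names and the example figures are hypothetical, chosen only to show the arithmetic), three of them might be computed like this:

```python
def churn_rate(customers_at_period_start: int, customers_lost: int) -> float:
    """Share of customers at the start of the period who left during it."""
    return customers_lost / customers_at_period_start

def customer_acquisition_cost(sales_marketing_spend: float, new_customers: int) -> float:
    """Total sales and marketing spend divided by customers acquired in the period."""
    return sales_marketing_spend / new_customers

def lead_to_customer_rate(leads: int, customers_won: int) -> float:
    """Share of leads in the period that converted into paying customers."""
    return customers_won / leads

# Hypothetical monthly figures:
print(churn_rate(1000, 25))                    # 0.025, i.e. 2.5% monthly churn
print(customer_acquisition_cost(50_000, 125))  # 400.0 spend per new customer
print(lead_to_customer_rate(800, 64))          # 0.08, i.e. 8% conversion
```

The definitions look trivial, but in practice most KPI disputes are about the denominators (which customers count as "at period start"? does spend include salaries?), so pinning the formula down explicitly is part of defining the KPI.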
How many KPIs should a team or company track?
The most effective approach is to track a small number of KPIs, typically three to seven per team or strategic objective. When teams track too many KPIs, attention is diluted, it becomes unclear which numbers drive the most important decisions, and reporting becomes a burden rather than a tool. Many leadership frameworks recommend identifying one or two north star metrics that most directly capture whether the mission is being achieved, with a small number of supporting KPIs. The discipline of limiting KPIs forces clarity about what actually matters most and prevents the false sense of control that comes from monitoring many numbers.
What is the difference between KPIs and OKRs?
OKRs (Objectives and Key Results) are a goal-setting framework popularized by Google, where organizations set ambitious qualitative objectives paired with two to five measurable key results that define what success looks like. KPIs are ongoing performance monitoring metrics that track the health of business operations continuously rather than for a fixed time window. OKRs are typically set each quarter and focus on what you want to achieve in that period. KPIs run continuously and focus on maintaining and monitoring business performance. Many organizations use both: OKRs to drive focused progress on priorities and KPIs to maintain visibility into operational health.
What are leading and lagging indicators?
Lagging indicators measure the outcome of past activities, like revenue this quarter or customer satisfaction score last month. They tell you how you did but cannot be changed after the fact. Leading indicators are forward-looking metrics that predict future performance, like the number of sales demos scheduled this week predicting next month's revenue. Leading indicators allow organizations to spot problems early and course-correct before they affect lagging outcomes. Effective KPI frameworks typically combine both types: lagging indicators confirm whether the strategy is working and leading indicators give early warning when it is not.
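One way to check whether a candidate leading indicator actually predicts a lagging one is to correlate the two series after aligning them by the expected lag. The sketch below (the weekly figures are hypothetical, and the revenue series is assumed to be pre-shifted so each week's demos line up with the revenue they are supposed to predict) computes a plain Pearson correlation:

```python
# Hypothetical aligned series: demos scheduled in a week (leading) paired with
# the revenue, in $k, realized in the later period those demos should drive (lagging).
demos   = [12, 15, 9, 18, 14, 20, 11, 16]
revenue = [30, 34, 25, 41, 33, 45, 28, 37]

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(demos, revenue)
print(r)  # strongly positive here: weeks with more demos precede higher revenue
```

A strong positive correlation supports treating the leading metric as an early-warning KPI; a weak one suggests the "leading" indicator is merely activity that does not predict the outcome.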
How do you set KPIs that are actually useful?
Start from strategic objectives rather than working backwards from available data. Ask: what would success look like in one year, and what are the two or three most critical outcomes we need to achieve? Then identify what could be measured to indicate whether you are on track toward those outcomes. Involve the people who will be accountable for the KPIs in the process of defining them, as this increases ownership and helps identify measurement challenges. Establish baselines before setting targets so you know what normal performance looks like and can set realistic improvement goals that are ambitious without being demotivating.
Can KPIs become counterproductive?
Yes. A well-known management principle called Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. Teams will optimize specifically for the KPI rather than for the underlying goal the KPI was meant to capture. For example, a customer service team measured purely on call handling time may rush customers off the phone, resolving calls quickly on paper while leaving customers unsatisfied. Effective KPI design counterbalances this by tracking multiple complementary metrics that are harder to game simultaneously, and by regularly reviewing whether KPIs still reflect what genuinely matters.
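The "complementary metrics" defense described above can be made concrete: instead of scoring a support rep on handle time alone, require a speed target and a quality target to be met jointly, so gaming one at the expense of the other fails the evaluation. A minimal sketch, with illustrative thresholds and names not drawn from the text:

```python
def meets_targets(avg_handle_time_min: float, csat_score: float,
                  max_handle_time: float = 8.0, min_csat: float = 4.2) -> bool:
    """Pass only when BOTH the speed target and the satisfaction target are met.

    Rushing customers off the phone drives handle time down but CSAT down too,
    so the paired check catches what a single-metric target would reward.
    Thresholds here are hypothetical defaults.
    """
    return avg_handle_time_min <= max_handle_time and csat_score >= min_csat

print(meets_targets(5.0, 4.6))  # fast and satisfying -> True
print(meets_targets(4.0, 3.1))  # faster still, but unhappy customers -> False
```

Pairing a quantity metric with a quality metric in this way does not make gaming impossible, only more expensive, which is usually the realistic design goal.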
When should you change or update your KPIs?
KPIs should be reviewed whenever the underlying strategic objectives change, which typically happens at major planning milestones like annual or quarterly strategy reviews. They should also be revisited when the business context changes significantly, such as entering a new market, launching a major product, or facing an industry disruption. A KPI that was highly relevant during a growth phase may become less useful during a consolidation phase. Signs that a KPI has outlived its usefulness include when tracking it no longer informs decisions, when it is consistently met without effort, or when the behavior it incentivizes has become misaligned with current priorities.