What Is a KPI: Key Performance Indicators Explained

"What gets measured gets managed." — Peter Drucker

In the early 2000s, the United Kingdom's National Health Service rolled out a target: no patient should wait longer than four hours in an accident and emergency department before being seen. The intention was reasonable — reduce dangerously long waits and improve patient care. The target was clear, measurable, and time-bound. It had all the hallmarks of a well-designed performance indicator.

Within a few years, investigative reporting and hospital audits uncovered a pattern. Some hospitals had started keeping ambulances waiting outside — parked in queues in the car park — so that the four-hour clock would not start until there was capacity to see the patient quickly. Others were discharging patients to waiting areas designated outside "A&E" but physically inside the same building, so the metric technically reset. The four-hour target was being met in the data. Patient care in a number of cases was getting worse.

This is Goodhart's Law made visible: when a measure becomes a target, it ceases to be a good measure. The metric was clear. The goal — faster, better emergency care — was genuine. But the metric and the goal were not identical, and once people were evaluated and funded based on the metric, they optimized for the metric rather than the goal.

Understanding KPIs at a useful level means grappling seriously with this problem: not just how to set measurements, but how measurements shape behavior, and how to design them so they shape the right behavior.

What a KPI Actually Is

A KPI — Key Performance Indicator — is a specific, measurable value selected because it directly indicates whether an organization is achieving its most important goals. The word "key" does the heaviest lifting in that definition. Not every metric is a KPI. A metric is any quantifiable measurement of business activity: the number of emails sent, support tickets opened, transactions processed, pages visited. These numbers may be useful context, but they are not all equally important.

A KPI is a metric that has been deliberately elevated because it serves as a reliable proxy for a strategic outcome the organization cares about. Monthly Recurring Revenue (MRR) is a KPI for a SaaS company because it directly captures the health of the recurring revenue model that drives the business. Employee Net Promoter Score might be a KPI for an HR function because it captures satisfaction in a single trackable number. The distinction matters in practice because an organization that treats all metrics as equally important ends up monitoring everything and prioritizing nothing.

The threshold question to ask of any candidate metric is: if this number moves significantly in the wrong direction, would it require immediate attention and action at a leadership level? If the answer is yes, it is probably a KPI. If the answer is "it would be worth looking into," it is probably a supporting metric, useful for diagnosis but not deserving of the top-line attention a KPI demands.

"If you cannot measure it, you cannot improve it." — Lord Kelvin

A Brief History of Performance Measurement

The formal practice of organizational performance measurement has a recognizable history. Peter Drucker developed Management by Objectives (MBO) in his 1954 book The Practice of Management. MBO proposed that managers and their reports should jointly set specific, measurable objectives and that performance should be evaluated against those objectives rather than against subjective impressions. This was genuinely radical in an era when most performance management was informal and relationship-based.

Robert Kaplan and David Norton developed the Balanced Scorecard framework in a 1992 Harvard Business Review article and subsequent books. Their core insight was that managing organizations purely on financial metrics — as most companies did — was analogous to flying a plane using only the altimeter. Financial measures are lagging indicators of past performance, not leading indicators of future health. Kaplan and Norton proposed balancing financial metrics with customer metrics, internal process metrics, and learning and growth metrics — four perspectives that together give a more complete view of organizational health.

The OKR framework — Objectives and Key Results — was developed at Intel by Andy Grove and popularized by Google, which adopted it in 1999 through a recommendation from John Doerr. Google's founders Larry Page and Sergey Brin used OKRs from the company's earliest days. Doerr's 2018 book Measure What Matters brought OKRs to a wider management audience and prompted many organizations to distinguish between KPIs (ongoing monitoring metrics) and OKRs (time-bounded goal-setting frameworks).

"It would be wrong to say that OKRs are a silver bullet, but they've helped Google since Day 1." — John Doerr

What Makes a KPI Good

Not all KPIs are created equal. The SMART framework — Specific, Measurable, Achievable, Relevant, Time-bound — provides a useful starting checklist, but good KPI design goes further.

A good KPI is specific enough that no one disputes what is being measured. "Customer satisfaction" is not a KPI; it is a vague aspiration. "Net Promoter Score measured monthly via post-purchase survey, with a target of 45 by Q4" is a KPI. The specificity determines whether the number can be calculated consistently and whether different teams agree on what it means.

A good KPI is actionable — meaning the team responsible for it has meaningful ability to influence the outcome. Measuring stock price as a team KPI fails this test: almost nothing a product team does directly moves stock price in a traceable way, and the connection is too distant to guide daily decisions. Measuring user activation rate — the percentage of new users who complete a defined "aha moment" within the first 14 days — is actionable because the team can run experiments, redesign onboarding flows, and change messaging to move the number.

A good KPI is owned by a specific person or team who is accountable for explaining its movement and proposing responses when it deviates from target. KPIs without clear ownership tend to generate reports that no one acts on. Ownership means someone's name is attached to the number and they are expected to answer for it in review meetings.

A good KPI is limited in number. The word "key" means few, not many. A team trying to track 20 KPIs is tracking no KPIs — they are tracking 20 metrics, none of which gets the focused attention that genuine key indicators deserve. The discipline of limiting KPIs forces uncomfortable clarity about what actually matters most.

KPI vs. Metric vs. OKR vs. North Star Metric

KPI
  Definition: A vital-sign metric tracking ongoing strategic health
  Time horizon: Continuous, no expiry
  Owner: Department head or executive
  Example: Monthly Recurring Revenue, Churn Rate

Metric
  Definition: Any quantifiable measure of business activity
  Time horizon: Continuous
  Owner: Varies (often no single owner)
  Example: Emails sent, page views, tickets opened

OKR
  Definition: Time-bounded goal with measurable milestones showing progress
  Time horizon: Quarterly or annual
  Owner: Team or individual
  Example: Objective: Become market leader; KR: 90% renewal rate

North Star Metric
  Definition: The single metric that best captures the core value delivered to customers
  Time horizon: Continuous, strategic
  Owner: CEO / leadership team
  Example: Airbnb: nights booked; Spotify: time spent listening

Leading vs. Lagging Indicators: The Most Crucial Distinction

The most important conceptual distinction in KPI design is between leading and lagging indicators. Most organizations, left to their own devices, track primarily lagging indicators. This is natural — lagging indicators confirm results — but it leaves organizations perpetually reacting to the past rather than anticipating the future.

Lagging indicators measure outcomes: revenue this quarter, customer satisfaction last month, defect rate in the previous production run. They tell you definitively what happened. Their limitation is that by the time the number is available, the behavior that produced it is already in the past. A quarterly revenue number arriving in October reflects sales activity conducted in July, August, and September. If revenue is down, the window to intervene in that quarter has already closed.

Leading indicators are forward-looking metrics that predict future lagging outcomes. They are measurable earlier in the causal chain. The number of sales calls made this week is a leading indicator for next month's revenue. Customer support ticket volume is a leading indicator for customer satisfaction scores. Employee engagement survey scores are leading indicators for voluntary turnover rates. Production line first-pass yield rates are leading indicators for product defect rates.

The value of leading indicators is in the time they provide. If your leading indicator shows that the sales pipeline is thin this month, you have weeks to respond — adding sales activities, focusing the team, revisiting pricing — before the revenue shortfall appears in the lagging indicator. If you only track lagging indicators, you discover the shortfall after it has already occurred.

The challenge with leading indicators is that the causal relationships are not always stable or obvious. An indicator that reliably predicted outcomes in one period may become less predictive as the business or environment changes. Establishing and validating leading indicators requires analytical work — correlating historical leading metrics with subsequent lagging outcomes to confirm the relationship is real before relying on it.

Effective KPI frameworks combine both: lagging indicators that confirm whether strategy is working over time, and leading indicators that provide early warning when it is not.

"Managing by numbers alone is like driving a car by looking only in the rearview mirror." — W. Edwards Deming

KPI Examples by Function

Different organizational functions have different strategic priorities, which produce different appropriate KPIs. What follows are the most commonly used KPIs in each major function, with brief notes on what they capture.

SaaS and Product

Monthly Recurring Revenue (MRR) is the total predictable recurring revenue generated each month. It is the primary financial health metric for subscription businesses. Related metrics — New MRR (from new customers), Expansion MRR (upsells and upgrades), and Churned MRR (lost revenue from cancellations) — decompose the total into its drivers.
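The decomposition is simple arithmetic. A sketch with invented figures, using only the three drivers named above (some teams also track contraction MRR from downgrades, which is omitted here):

```python
# Hypothetical month-over-month MRR movement, all figures in dollars.
prior_mrr = 500_000
new_mrr = 30_000         # from new customers
expansion_mrr = 12_000   # upsells and upgrades
churned_mrr = 18_000     # lost to cancellations

mrr = prior_mrr + new_mrr + expansion_mrr - churned_mrr
print(f"MRR: ${mrr:,}")  # $524,000
```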

Customer Acquisition Cost (CAC) measures the fully loaded cost to acquire a new customer, including sales and marketing spend. Combined with Customer Lifetime Value (LTV), it produces the LTV:CAC ratio — a measure of how much value each acquired customer generates relative to what it cost to acquire them. A ratio of 3:1 or higher is generally considered healthy for SaaS businesses; below 1:1 means the business is destroying value with every acquisition.
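As a sketch, with invented figures and the common simplification that LTV is margin-adjusted monthly revenue divided by monthly churn:

```python
# Hypothetical quarterly figures.
sales_marketing_spend = 400_000   # fully loaded sales and marketing spend
new_customers = 200
cac = sales_marketing_spend / new_customers        # $2,000 per customer

arpa_monthly = 150     # average revenue per account per month
gross_margin = 0.80
monthly_churn = 0.02
# Simplified LTV: margin-adjusted monthly revenue / monthly churn rate.
ltv = arpa_monthly * gross_margin / monthly_churn  # $6,000 lifetime value

print(f"LTV:CAC = {ltv / cac:.1f}:1")  # 3.0:1, right at the healthy threshold
```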

Churn Rate measures the percentage of customers (or revenue) lost in a given period. Monthly churn of 2 percent compounds to approximately 22 percent annual churn — meaning the business loses more than one in five customers every year and must replace them with new acquisition just to stand still.
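The compounding is worth computing rather than intuiting, since what multiplies month over month is retention, not churn:

```python
monthly_churn = 0.02
# Survival compounds: 98% retained each month, twelve months in a row.
annual_churn = 1 - (1 - monthly_churn) ** 12
print(f"Annual churn: {annual_churn:.1%}")  # 21.5%, commonly rounded to 22%
```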

Net Revenue Retention (NRR) measures whether existing customers are expanding or contracting. NRR above 100 percent means the existing customer base is growing through expansion revenue, a powerful indicator of product-market fit and customer success quality.
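A sketch of the standard calculation, with invented figures; contraction (downgrades short of outright cancellation) enters the formula alongside churn:

```python
# Hypothetical figures for one cohort of existing customers, in dollars.
starting_mrr = 100_000
expansion = 8_000      # upsells and upgrades within the cohort
contraction = 2_000    # downgrades
churned = 3_000        # cancellations

nrr = (starting_mrr + expansion - contraction - churned) / starting_mrr
print(f"NRR: {nrr:.0%}")  # 103%: the base grows without any new customers
```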

Marketing

Return on Ad Spend (ROAS) measures revenue generated per dollar of advertising spend. A ROAS of 4x means every dollar of ad spend generates four dollars of revenue. While simple to calculate, ROAS becomes complex when attribution is disputed — when a customer sees an ad, then searches organically, then converts, which channel gets credit?
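The base calculation is a one-liner; the hard part, as noted, is deciding which revenue counts as attributed:

```python
ad_spend = 25_000
attributed_revenue = 100_000  # whatever the attribution model credits to this channel
roas = attributed_revenue / ad_spend
print(f"ROAS: {roas:.1f}x")   # 4.0x
```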

Conversion Rate tracks the percentage of visitors or leads that complete a desired action — clicking an ad, completing a trial signup, making a purchase. Conversion rate is highly context-dependent; an acceptable conversion rate for a high-consideration B2B purchase is very different from what is acceptable for a low-friction consumer product.

Cost Per Lead (CPL) measures the average cost to generate one qualified lead from a marketing channel. Combined with lead-to-customer conversion rates, CPL helps allocate marketing spend across channels.

Operations and Supply Chain

On-Time Delivery Rate measures the percentage of orders delivered within the promised timeframe. For consumer companies, this is a primary driver of customer satisfaction and repeat purchase behavior.

Cycle Time measures how long a defined process takes from start to finish — the time from receiving an order to shipping it, from opening a support ticket to closing it, from receiving raw materials to completing production. Reducing cycle time typically reduces costs and improves customer experience simultaneously.

Defect Rate, expressed as defects per million opportunities (DPMO) in manufacturing contexts, measures quality at the output level. Six Sigma methodology uses DPMO as its primary quality KPI, targeting 3.4 defects per million opportunities — a level of quality that requires systematic process discipline.
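The DPMO arithmetic, with invented figures; an "opportunity" is a distinct way a unit can be defective, so the denominator is units times opportunities per unit:

```python
# Hypothetical production run.
defects = 17
units_inspected = 10_000
opportunities_per_unit = 5   # distinct ways each unit can fail inspection

dpmo = defects / (units_inspected * opportunities_per_unit) * 1_000_000
print(f"DPMO: {dpmo:.0f}")   # 340, two orders of magnitude above the 3.4 target
```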

Human Resources

Time-to-Hire measures the number of days between opening a position and a candidate accepting an offer. Long time-to-hire increases the cost of unfilled roles and signals friction in the hiring process that may cause preferred candidates to accept competing offers.

Voluntary Turnover Rate measures the percentage of employees who leave an organization by choice rather than being terminated. High voluntary turnover is expensive — estimates vary, but the cost of replacing an employee typically runs from 50 to 200 percent of annual salary when recruiting, onboarding, and productivity loss are included.

Employee Net Promoter Score (eNPS) asks employees how likely they are to recommend the company as a place to work, using the same 0-10 scale as the customer NPS. While imperfect as a standalone metric, eNPS trends over time provide early warning of engagement problems before they surface in turnover data.
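The scoring rule is the same as for customer NPS. A minimal sketch (the survey responses are invented):

```python
def enps(scores):
    """Percent promoters (9-10) minus percent detractors (0-6).
    Passives (7-8) count only in the denominator."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(enps([10, 9, 9, 8, 7, 6, 5, 10, 8, 3]))  # 10, on the usual -100..100 scale
```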

Goodhart's Law and the KPI Trap

The NHS example from this article's opening is not unusual. Goodhart's Law — named for economist Charles Goodhart, who articulated the principle in a 1975 monetary policy paper — describes a fundamental tension in all performance measurement: once people know they are being evaluated on a specific metric, they optimize for that metric, which changes the relationship between the metric and the underlying outcome it was meant to represent.

Call centers measured on average handle time develop cultures of rushing customers. Sales teams measured only on new deals closed neglect account management and customer success. Schools evaluated on standardized test scores narrow curricula toward the tested subjects and sometimes engage in outright test manipulation. Software engineers measured on lines of code written produce more code, not better code.

The implication is not that measurement is futile but that metric design must anticipate gaming. Several principles reduce the risk of Goodhart's Law undermining KPI systems.

First, measure multiple complementary metrics simultaneously. A call center that measures both average handle time and customer satisfaction score (CSAT) makes it harder to game either one without affecting the other. Rushing customers improves handle time but damages CSAT; spending time to truly resolve issues might extend handle time but improves CSAT. The combination captures more of the underlying goal — efficient, high-quality service — than either metric alone.

Second, distinguish between what you are measuring and the outcome you care about, and periodically check whether the measurement still tracks the outcome. A metric may begin as an accurate proxy and drift as the organization learns to optimize it. Regular metric reviews that ask "is this KPI still telling us what we think it's telling us?" catch drift before it becomes endemic.

Third, build qualitative oversight alongside quantitative measurement. Numbers reduce complex realities to single values and always lose information in the reduction. Managers who only review dashboards and never talk to the people behind the numbers lose the qualitative texture that would reveal when metrics are being gamed.

KPIs vs. OKRs: When to Use Each

KPIs and OKRs serve different purposes and work best when used together rather than treated as competing systems.

KPIs are monitoring metrics. They track the ongoing health of business operations continuously, measuring whether the organization is maintaining performance on dimensions that always matter — revenue, quality, customer satisfaction, employee retention. KPIs do not expire; they represent the permanent vital signs of the organization. A company will always care about its churn rate, its gross margin, its customer satisfaction score.

OKRs are goal-setting and progress-tracking frameworks. They are time-bounded — typically set quarterly — and focus on what the organization wants to achieve in that period beyond just maintaining current performance. An Objective is a qualitative statement of direction: "Become the market leader in enterprise security." Key Results are measurable milestones that indicate progress: "Achieve 90 percent renewal rate among enterprise accounts," "Reduce mean time to detect security incidents from 48 hours to 12 hours."

The practical integration is that OKRs often drive the creation of new KPIs. If an organization sets an OKR around customer retention, it will likely build KPI infrastructure to track churn, NRR, and renewal rates that may not have been formally measured before. Once the OKR cycle concludes, those metrics often graduate into the organization's standard KPI set.

Google, which has used OKRs since its founding, maintains both: OKRs for quarterly and annual goal-setting and a separate set of "health metrics" (equivalent to KPIs) that track must-not-decline dimensions of product performance. The health metrics exist precisely to prevent OKR-focused teams from optimizing new objectives at the expense of existing product quality.

Dashboard Design Principles

A KPI is only useful if it is accessible and understood by the people who need to act on it. Dashboard design is the discipline of presenting KPIs clearly.

Fewer metrics communicate more clearly. A dashboard with 30 charts requires the viewer to decide which of the 30 numbers matters most — a cognitive task that leads to the most important numbers being buried. Effective dashboards lead with the three to five most critical metrics, provide context (trend, target, benchmark), and organize supporting detail hierarchically below.

Context is what distinguishes a number from an insight. A revenue figure of $2.3 million means different things depending on whether the target was $2.0 million or $3.0 million, whether it is up 15 percent year over year or down 5 percent, and whether it is ahead of or behind seasonal expectations. Numbers without context require the viewer to do interpretive work that the dashboard should have done for them.

Consistent update cadence and freshness indicators matter. A daily sales dashboard that silently stops updating because of a pipeline failure is worse than no dashboard, because it creates false confidence. Good dashboards show when data was last updated and alert users when data is stale.
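A freshness check of the kind described can be a few lines. A sketch; the 24-hour threshold is an assumption suited to a daily dashboard:

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_updated, now=None, max_age=timedelta(hours=24)):
    """True if the data behind a dashboard is older than its refresh window."""
    now = now or datetime.now(timezone.utc)
    return now - last_updated > max_age

# Example with fixed timestamps:
now = datetime(2024, 3, 2, 9, 0, tzinfo=timezone.utc)
fresh = datetime(2024, 3, 2, 6, 0, tzinfo=timezone.utc)    # 3 hours old
broken = datetime(2024, 2, 28, 6, 0, tzinfo=timezone.utc)  # pipeline silently failed
print(is_stale(fresh, now), is_stale(broken, now))  # False True
```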

How to Run a KPI Review Meeting

The KPI review meeting is where measurement translates into action — or fails to. Poorly run KPI reviews become exercises in presenting slides that everyone already reviewed beforehand, followed by discussion that produces no decisions.

Effective KPI reviews distribute attention according to variance, not structure. The numbers that are on track deserve brief acknowledgment; the numbers that are significantly above or below target deserve focused discussion. A simple traffic light system — green for on track, yellow for at risk, red for off target — allows reviews to quickly focus on what requires attention.
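The traffic light rule is easy to make explicit. A sketch assuming higher-is-better KPIs and a 10 percent at-risk band (both assumptions; invert the comparisons for cost-style metrics like churn):

```python
def status(actual, target, at_risk_band=0.10):
    """Green at or above target, yellow within the band just below it, red otherwise."""
    if actual >= target:
        return "green"
    if actual >= target * (1 - at_risk_band):
        return "yellow"
    return "red"

for name, actual, target in [("MRR ($M)", 2.35, 2.30),
                             ("NPS", 41, 45),
                             ("On-time delivery %", 84, 95)]:
    print(f"{name}: {status(actual, target)}")
```

Sorting the review agenda by this status, reds first, is one mechanical way to put variance ahead of structure.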

For each off-track KPI, the discussion should answer three questions: What is the most credible explanation for the deviation? What actions have been or will be taken in response? What leading indicators suggest whether those actions are working? Reviews that discuss problems without assigning ownership of responses are not reviews; they are performance theater.

The review's output should be a short list of decisions and owners, not a long list of concerns. Organizations that use data effectively review KPIs frequently enough that problems are caught early, which means individual reviews rarely need to be long or dramatic.

Practical Takeaways

The most important KPI discipline is selection: fewer indicators tracked seriously outperform many indicators tracked loosely. If forced to identify the number that most directly captures whether the organization's mission is succeeding, most teams can do so, and that number — the north star metric — should receive disproportionate attention.

Leading indicators require investment to identify and validate but pay dividends in time to respond. Building a KPI framework that includes at least one leading indicator for each critical lagging outcome dramatically improves the organization's ability to course-correct before damage is done.

Goodhart's Law is not a reason to avoid measurement — it is a reason to measure thoughtfully. Complementary metrics, regular reviews of whether measurements still track their intended outcomes, and qualitative oversight alongside quantitative monitoring are the practical countermeasures.

The review cadence should match the pace at which the business can respond. Monthly reviews of weekly-updated data allow fast-moving businesses to act on signals before they compound. Annual reviews of metrics that matter daily guarantee that the organization is always learning what it needed to know many months too late.

KPIs are infrastructure for decision-making, not decoration. They are worth designing carefully, reviewing honestly, and changing deliberately when they stop serving the purposes for which they were created.

References

  1. Drucker, P. F. (1954). The Practice of Management. Harper & Row.
  2. Kaplan, R. S. & Norton, D. P. (1992). "The balanced scorecard: Measures that drive performance." Harvard Business Review, 70(1), 71-79.
  3. Doerr, J. (2018). Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs. Portfolio/Penguin.
  4. Goodhart, C. A. E. (1975). "Problems of monetary management: The U.K. experience." Papers in Monetary Economics, Volume I. Reserve Bank of Australia.
  5. Marr, B. (2012). Key Performance Indicators: The 75 Measures Every Manager Needs to Know. Financial Times / Pearson.
  6. Liker, J. K. (2004). The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer. McGraw-Hill.

Frequently Asked Questions

What is a KPI?

A KPI, or Key Performance Indicator, is a specific, measurable value that an organization tracks to evaluate how effectively it is achieving its key objectives. The word 'key' is important: a KPI is not just any metric, but one directly tied to a strategic goal that matters most to the organization's success. A sales team might use monthly revenue and customer acquisition rate as KPIs. A marketing team might track cost per lead and conversion rate. A customer service team might monitor average response time and customer satisfaction score. KPIs translate strategic ambitions into concrete, trackable targets.

What is the difference between a KPI and a metric?

Every KPI is a metric, but not every metric is a KPI. A metric is simply any quantifiable measure of business activity, like the number of website visitors, emails sent, or support tickets opened. A KPI is a subset of metrics, selected because it directly indicates progress toward a strategic priority. The distinction matters in practice: organizations that treat all metrics as equally important end up tracking too many numbers and making it harder to know what actually requires attention. Effective KPI selection means being ruthlessly selective about what gets elevated to 'key' status.

What makes a good KPI?

Good KPIs share several characteristics. They are specific and clearly defined so everyone understands what is being measured. They are measurable using available data without excessive manual effort to collect. They are directly tied to a strategic objective, not just an interesting number to track. They are actionable, meaning the team can actually influence the outcome. They are time-bound, measured over a defined period with a clear target. A common framework for evaluating KPIs is SMART: Specific, Measurable, Achievable, Relevant, and Time-bound. Vanity metrics that look impressive but do not inform decisions are the opposite of good KPIs.

What are examples of common business KPIs by department?

Sales KPIs commonly include monthly recurring revenue, customer acquisition cost, average deal size, and pipeline conversion rate. Marketing KPIs often cover cost per lead, lead-to-customer rate, website organic traffic, and email open rate. Customer success teams typically track net promoter score (NPS), customer satisfaction score (CSAT), churn rate, and average resolution time. Operations teams might monitor on-time delivery rate, production efficiency, and defect rate. HR teams often use employee satisfaction score, voluntary turnover rate, and time-to-hire. The right KPIs depend entirely on the organization's specific goals and current priorities.

How many KPIs should a team or company track?

The most effective approach is to track a small number of KPIs, typically three to seven per team or strategic objective. When teams track too many KPIs, attention is diluted, it becomes unclear which numbers drive the most important decisions, and reporting becomes a burden rather than a tool. Many leadership frameworks recommend identifying one or two north star metrics that most directly capture whether the mission is being achieved, with a small number of supporting KPIs. The discipline of limiting KPIs forces clarity about what actually matters most and prevents the false sense of control that comes from monitoring many numbers.

What is the difference between KPIs and OKRs?

OKRs (Objectives and Key Results) are a goal-setting framework popularized by Google, where organizations set ambitious qualitative objectives paired with two to five measurable key results that define what success looks like. KPIs are ongoing performance monitoring metrics that track the health of business operations continuously rather than for a fixed time window. OKRs are typically set each quarter and focus on what you want to achieve in that period. KPIs run continuously and focus on maintaining and monitoring business performance. Many organizations use both: OKRs to drive focused progress on priorities and KPIs to maintain visibility into operational health.

What are leading and lagging indicators?

Lagging indicators measure the outcome of past activities, like revenue this quarter or customer satisfaction score last month. They tell you how you did but cannot be changed after the fact. Leading indicators are forward-looking metrics that predict future performance, like the number of sales demos scheduled this week predicting next month's revenue. Leading indicators allow organizations to spot problems early and course-correct before they affect lagging outcomes. Effective KPI frameworks typically combine both types: lagging indicators confirm whether the strategy is working and leading indicators give early warning when it is not.

How do you set KPIs that are actually useful?

Start from strategic objectives rather than working backwards from available data. Ask: what would success look like in one year, and what are the two or three most critical outcomes we need to achieve? Then identify what could be measured to indicate whether you are on track toward those outcomes. Involve the people who will be accountable for the KPIs in the process of defining them, as this increases ownership and helps identify measurement challenges. Establish baselines before setting targets so you know what normal performance looks like and can set realistic improvement goals that are ambitious without being demotivating.

Can KPIs become counterproductive?

Yes. A well-known management principle called Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. Teams will optimize specifically for the KPI rather than for the underlying goal the KPI was meant to capture. For example, a customer service team measured purely on call handling time may rush customers off the phone, resolving calls quickly on paper while leaving customers unsatisfied. Effective KPI design counterbalances this by tracking multiple complementary metrics that are harder to game simultaneously, and by regularly reviewing whether KPIs still reflect what genuinely matters.

When should you change or update your KPIs?

KPIs should be reviewed whenever the underlying strategic objectives change, which typically happens at major planning milestones like annual or quarterly strategy reviews. They should also be revisited when the business context changes significantly, such as entering a new market, launching a major product, or facing an industry disruption. A KPI that was highly relevant during a growth phase may become less useful during a consolidation phase. Signs that a KPI has outlived its usefulness include when tracking it no longer informs decisions, when it is consistently met without effort, or when the behavior it incentivizes has become misaligned with current priorities.