A minimum viable product (MVP) is the simplest version of a new product that allows a team to collect the maximum amount of validated learning about customers with the least effort. Coined by Frank Robinson in 2001 and popularized by Eric Ries in The Lean Startup (2011), the MVP is not a half-built product or a rough prototype -- it is a disciplined experiment designed to test whether a business idea is worth pursuing before committing significant resources. The concept has reshaped how startups, corporations, and product teams around the world approach innovation, turning product development from a guessing game into a science of iterative discovery.

The Story That Changed Product Development

In 2008, Drew Houston was convinced people needed a better way to sync files across devices. He had the idea, the technical skills, and a burning conviction that the problem was real. What he did not have was evidence that anyone else cared enough to pay for a solution.

Rather than spending months building the product, Houston made a three-minute screencast demonstrating what Dropbox would do. He seeded it to Digg, a community of technology early adopters. Overnight, his beta waiting list grew from 5,000 people to 75,000.

Houston had not built Dropbox. He had built evidence that Dropbox was worth building.

This is the logic of the minimum viable product -- one of the most misunderstood and most useful concepts in modern product development. It represents a fundamental shift in how entrepreneurs think about risk, learning, and the relationship between building and knowing.

"The only way to win is to learn faster than anyone else." -- Eric Ries, The Lean Startup, 2011


The Origins: From Manufacturing Thinking to Learning Machines

To understand the MVP, you need to understand the problem it was designed to solve -- and that problem has roots stretching back to the dawn of industrial management.

The Waterfall Fallacy

For most of the 20th century, product development followed a model borrowed from manufacturing. Companies researched markets, wrote comprehensive business plans, developed complete products, and launched. This approach -- sometimes called the waterfall model -- assumed that the primary challenge was execution: getting the plan right upfront and then implementing it efficiently.

The waterfall model worked reasonably well in stable environments where customer needs were understood and technology was predictable. It failed catastrophically in environments of uncertainty. A 2011 study by Shikhar Ghosh at Harvard Business School found that 75 percent of venture-backed startups fail. The Standish Group's CHAOS Report, updated regularly since 1994, has consistently found that only about 29 percent of software projects are completed on time, on budget, and with the required features. The fundamental problem was the same in both cases: teams were building products based on assumptions that turned out to be wrong, and they discovered the error only after investing heavily.

Steve Blank and Customer Development

Steve Blank, a serial entrepreneur who taught at Stanford and Berkeley, identified the root cause. Startups, he argued in The Four Steps to the Epiphany (2005), are not small versions of large companies. They are organizations searching for a repeatable and scalable business model. Unlike established companies, which execute known business models, startups operate under conditions of extreme uncertainty. They do not know who their customer is, what problems matter most, or how those customers will respond to a product.

The solution, Blank proposed, was customer development -- a parallel process to product development, in which entrepreneurs get out of the building and talk to customers before and during the building process to validate or invalidate their assumptions. This was a radical departure from the prevailing wisdom, which held that founders should guard their ideas in secrecy and build in isolation until launch.

Eric Ries and the Lean Startup

Eric Ries, a student of Blank's who had co-founded a startup called IMVU using these ideas, synthesized and extended them in The Lean Startup (2011). Drawing on Blank's customer development methodology and the principles of lean manufacturing pioneered by Toyota's Taiichi Ohno, Ries introduced the MVP as a formal concept and the build-measure-learn loop as the core operating cycle of a lean startup.

The book sold over one million copies and spawned a global movement. By 2013, the Lean Startup methodology was being taught at Harvard Business School, adopted by General Electric's FastWorks program under CEO Jeff Immelt, and used by the US government's innovation teams. The MVP had moved from Silicon Valley jargon to mainstream business vocabulary.


What an MVP Actually Is

Ries defined the minimum viable product precisely:

"That version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort." -- Eric Ries, The Lean Startup, 2011

This definition is widely misquoted and more widely misunderstood. The key phrases are validated learning and least effort.

Validated learning is not the same as subjective feedback, downloads, or signups. It is a specific insight about customer behavior, willingness to pay, or product-market fit that is grounded in real user actions rather than self-reported preferences. Research in behavioral economics by Daniel Kahneman and Amos Tversky (1979) demonstrated that people consistently misjudge their own future behavior. What people do with a real product -- even an imperfect one -- is far more informative than what they say they would do in a survey.

Least effort does not mean the lowest possible quality. It means building only what is needed to test the specific hypothesis at hand. An MVP that tests whether people will pay for a service should include a payment mechanism. An MVP that tests whether people find value in a feature does not need billing infrastructure.

The MVP is not a product strategy. It is an experiment strategy. Every MVP should test a specific, falsifiable hypothesis: "We believe [customer segment] will [take this action] because [our understanding of their need/motivation]."
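To make that discipline concrete, here is a minimal sketch, in Python, of a hypothesis written as a falsifiable, measurable statement. The class, field names, and the 2 percent threshold are hypothetical illustrations, not part of any formal Lean Startup tooling:

```python
from dataclasses import dataclass

# A minimal sketch of an MVP hypothesis as a falsifiable statement.
# All names and thresholds here are hypothetical illustrations.

@dataclass
class Hypothesis:
    segment: str       # who we believe will act
    action: str        # the observable behavior we expect
    rationale: str     # our understanding of their motivation
    metric: str        # what we will measure
    threshold: float   # pre-registered bar for "validated"

    def statement(self) -> str:
        return (f"We believe {self.segment} will {self.action} "
                f"because {self.rationale}.")

    def validated(self, observed: float) -> bool:
        # Compare the observed metric against the bar set *before* the test.
        return observed >= self.threshold


h = Hypothesis(
    segment="busy parents aged 30-45",
    action="pay $8/month for a school-calendar-synced task manager",
    rationale="coordinating family schedules is their top pain point",
    metric="paid conversion rate from landing page",
    threshold=0.02,  # e.g., 2% of visitors choose a paid plan
)
print(h.statement())
print("validated" if h.validated(0.031) else "invalidated")
```

The point of encoding the threshold up front is that "validated" gets decided before the data arrives, not rationalized afterward.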


The Build-Measure-Learn Loop

The Lean Startup's core process is a feedback cycle designed to minimize wasted investment in unvalidated assumptions. It has three stages, each with specific requirements that most teams get wrong.

Build: Create the Minimum Necessary

Create the minimum necessary to test the hypothesis. This might be a landing page, a paper prototype, a manual service, a demo video, or a working but feature-limited product. The key discipline is resisting the urge to build more than the experiment requires.

Reid Hoffman, co-founder of LinkedIn, captured this tension memorably: "If you are not embarrassed by the first version of your product, you've launched too late." The point is not that embarrassment is desirable -- it is that premature polish consumes resources that should be spent on learning.

Measure: Collect Actionable Data

Collect data on actual user behavior. The emphasis is on actionable metrics -- metrics that predict long-term business outcomes -- rather than vanity metrics that look good but do not drive decisions.

| Vanity Metric | Actionable Metric | Why It Matters |
| --- | --- | --- |
| Total registered users | Daily/monthly active users | Registration without usage indicates a retention problem |
| Downloads | Activation rate (completed key setup) | Downloads without activation mean the onboarding is broken |
| Page views | Time on page for key content | Views without engagement suggest misleading headlines |
| Social media followers | Conversion from follower to customer | Followers who never buy are an audience, not a market |
| App installs | Day 7 and Day 30 retention | Installs without retention mean the product lacks stickiness |
| Total revenue | Revenue per user / lifetime value (LTV) | Total revenue masks unsustainable unit economics |

A 2015 analysis of data from mobile intelligence firm Quettra, published by investor Andrew Chen, found that the average app loses 77 percent of its daily active users within the first three days after installation. This statistic underscores why retention metrics matter far more than download counts.
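The table's definitions translate directly into computations over raw event logs. The following sketch uses made-up events and deliberately simplified definitions (real products typically use day ranges and activity windows rather than a single day) to show how activation rate and day-7 retention might be computed:

```python
from datetime import date

# Hypothetical event log. Each event: (user_id, event_name, date).
events = [
    ("u1", "install",  date(2024, 1, 1)),
    ("u1", "activate", date(2024, 1, 1)),
    ("u1", "open",     date(2024, 1, 8)),
    ("u2", "install",  date(2024, 1, 1)),
    ("u3", "install",  date(2024, 1, 2)),
    ("u3", "activate", date(2024, 1, 2)),
]

installs = {u: d for u, e, d in events if e == "install"}
activated = {u for u, e, d in events if e == "activate"}

# Map each user to the set of days-since-install on which they were active.
active_days = {}
for u, e, d in events:
    if u in installs and e != "install":
        active_days.setdefault(u, set()).add((d - installs[u]).days)

# Activation rate: installs that completed the key setup step.
activation_rate = len(activated & installs.keys()) / len(installs)

# Day-7 retention: installs with any activity exactly 7 days later
# (a simplified definition for the sketch).
d7_retained = sum(1 for u in installs if 7 in active_days.get(u, set()))
d7_retention = d7_retained / len(installs)

print(f"activation rate: {activation_rate:.0%}")  # 67%
print(f"day-7 retention: {d7_retention:.0%}")     # 33%
```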

Learn: Extract the Insight

Analyze the data to determine whether the hypothesis was validated or invalidated. This is the step most frequently skipped. Teams ship an MVP, look at the numbers, and move on to building the next feature without extracting the insight the experiment was designed to generate.

The learning informs the decision to persevere (the hypothesis was validated; continue in this direction) or pivot (the hypothesis was not validated; change a fundamental assumption while retaining what was learned).


Famous MVP Examples: What They Actually Tested

The most instructive MVP examples are not just stories of scrappy beginnings. They are case studies in hypothesis testing -- each one designed to answer a specific question with minimal investment.

Dropbox: The Demo Video MVP

Drew Houston's Dropbox explainer video is perhaps the most cited MVP example. The video showed potential users what Dropbox would do before it did it, while the product was still being built. The test was simple: does anyone want this? The result -- 75,000 signups from a three-minute video -- was unambiguous.

The Dropbox MVP tested demand before building supply. This approach is appropriate when the engineering challenge is significant and you are uncertain whether customers care enough to change their behavior. Houston later noted that the technical challenge of reliable file syncing across platforms was immense. Building first and testing demand later would have risked years of engineering on an unvalidated assumption.

Zappos: The Concierge MVP

In 1999, Nick Swinmurn wanted to test whether people would buy shoes online without trying them on. Rather than building an e-commerce platform, he took photos of shoes at local shoe stores, posted them on a rudimentary website, and when someone ordered, he went to the store, bought the shoes at retail, and shipped them.

Zappos' first "product" was not software. It was a concierge service -- manual fulfillment of what the eventual product would automate. The test validated a genuine hypothesis: people would pay for shoes online even without trying them on first. The company was acquired by Amazon in 2009 for approximately $1.2 billion.

Airbnb: The Air Mattress MVP

Brian Chesky and Joe Gebbia did not build a platform to test whether travelers would stay in strangers' homes. They rented air mattresses in their own San Francisco apartment during a design conference when hotels were sold out, built a minimal website called "AirBed and Breakfast," and photographed their space themselves. They validated the core hypothesis personally before writing the algorithms that would power a marketplace.

The early Airbnb also provides a lesson about what "minimum" means in practice. Chesky has discussed how the company gained early traction by going to New York and personally helping hosts improve their listings -- taking professional photos, writing better descriptions. This non-scalable, manual work taught them what made listings convert, which informed the features they eventually built into the platform.

Food on the Table: The Ultimate Concierge MVP

Manuel Rosso built Food on the Table -- a service that generates weekly meal plans and grocery lists based on sales at local stores -- by personally calling one customer every week to ask what she liked to eat, checking her local grocery store's sales, and manually generating a meal plan and shopping list for her. He had one customer for months before writing a line of code.

This example is extreme but instructive: the hypotheses being tested (Do people want this? Will they pay for it? Can I generate meal plans people find useful?) are all answerable without software. The approach also provided deep qualitative insight into what customers actually valued, which shaped every subsequent product decision.

Buffer: The Two-Page MVP

In 2010, Joel Gascoigne wanted to test whether people would pay for a social media scheduling tool. His MVP was two web pages. The first described the product concept and had a "Plans and Pricing" button. The second showed pricing tiers; if someone clicked a tier, the page asked for their email address. The entire experiment took seven hours to build and validated both interest (people clicked) and willingness to pay (people chose a paid tier before being told the product did not exist yet).
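A fake-door test like Buffer's is simple enough to sketch end to end. What follows is a hypothetical Python/Flask illustration, not Buffer's actual code; the routes, plan names, and log file are invented for the example:

```python
# A minimal sketch of a Buffer-style "fake door" pricing test using Flask.
# Routes, plan names, and the log file are hypothetical illustrations.
import json
import time
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def landing():
    # Page 1: the concept, plus a "Plans and Pricing" link.
    return '<h1>Schedule your posts</h1><a href="/pricing">Plans and Pricing</a>'

@app.route("/pricing")
def pricing():
    # Page 2: pricing tiers. Clicking a paid tier signals willingness to pay.
    return " ".join(
        f'<a href="/interest?plan={p}">{p}</a>' for p in ("free", "standard", "pro")
    )

@app.route("/interest")
def interest():
    # Record which tier was chosen; the product does not exist yet.
    with open("interest.log", "a") as f:
        f.write(json.dumps({"plan": request.args.get("plan"),
                            "ts": time.time()}) + "\n")
    return "We're not quite ready -- leave your email and we'll let you know."

if __name__ == "__main__":
    app.run(debug=True)
```

The signal that matters -- a click on a paid tier -- is captured before any product exists, which is exactly what made Buffer's version a test of willingness to pay rather than mere interest.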


What "Minimum" Actually Means

The "minimum" in MVP is relative to the hypothesis being tested, not to some absolute standard of simplicity. This is the source of most MVP confusion.

A useful framework for deciding what is minimum:

Step 1: Write down the core assumption your MVP is testing. Be specific. "People want a better task manager" is not a testable hypothesis. "Busy parents aged 30-45 will pay $8/month for a task manager that syncs with their children's school calendar" is.

Step 2: Identify the riskiest assumption in that hypothesis -- the one that, if wrong, would most damage your strategy. Test that assumption first. This concept, which Ries calls the leap of faith assumption, determines the entire experiment design.

Step 3: Identify the cheapest, fastest way to test that assumption with real users. The answer determines what to build.

Step 4: Remove anything from the build that does not directly bear on testing the hypothesis.

This process often reveals that less needs to be built than founders initially assume. It also reveals that sometimes the right MVP is not a software product at all -- it might be a spreadsheet, a phone call, a landing page, or a manual service.


MVP vs MLP: The Minimum Lovable Product Debate

A growing critique of the MVP concept emerged in the mid-2010s. Henrik Kniberg, an agile coach who worked with Spotify, argued that "viable" is not sufficient in competitive consumer markets. A product that merely works and collects data may not spread organically in a market full of polished alternatives. Users have to like it enough to share it and return to it.

The Minimum Lovable Product (MLP) reframes the goal: not the smallest product that tests a hypothesis, but the smallest product that a specific user could love.

The distinction has practical implications:

| Dimension | MVP | MLP |
| --- | --- | --- |
| Primary goal | Validated learning | User delight and word-of-mouth |
| UX standard | Functional, possibly rough | Polished enough to create an emotional response |
| Ideal context | B2B, early validation, uncertain markets | Consumer products, competitive markets |
| Optimization target | Speed of learning | Organic growth and retention |
| Risk tolerance | High (expects many experiments to fail) | Lower (invests more per experiment) |

The tension between the two concepts is genuine. Early-stage startups with limited resources may not be able to build an MLP without spending more than they can afford to validate a hypothesis that turns out to be wrong. The answer is usually context-dependent: in markets where user experience is table stakes for adoption, MLP thinking is necessary; in markets where you can recruit users directly and survive rough edges during early tests, MVP thinking conserves resources.

Laurence McCahill of The Happy Startup School has suggested a useful synthesis: "Start with MVP thinking to find the problem worth solving, then switch to MLP thinking to build the solution worth spreading."


What Pivot Actually Means

The Lean Startup popularized the term pivot to describe a structured course correction based on validated learning. The word has been widely abused to mean anything from "we changed our minds" to "our first idea failed" -- but the lean definition is more specific.

A pivot is a change to one fundamental assumption in your strategy while retaining what you have learned. Common types include:

Customer segment pivot: The product solves the problem you thought, but for a different customer than you originally targeted. Instagram pivoted from a location check-in app called Burbn by noticing that users primarily used the photo-sharing feature. Slack pivoted from a gaming company called Tiny Speck when the team realized their internal communication tool was more valuable than the game they were building.

Problem pivot: You are serving the right customer, but you have discovered a different problem they care more about.

Business model pivot: The product is right, the customer is right, but the way you charge for it needs to change. Netflix pivoted from DVD-by-mail to streaming. Adobe pivoted from perpetual software licenses to a subscription model, increasing its market capitalization from approximately $25 billion in 2012 to over $250 billion by 2023.

Channel pivot: The product and customer are right, but the distribution channel that reaches them needs to change.

The key distinction between a pivot and a failure is whether the pivot is based on specific validated learning. A team that discovers through systematic testing that their hypothesis was wrong and adjusts accordingly is practicing lean methodology. A team that ships, gets poor results, and then randomly changes direction is not -- they are flailing.

Research by the Startup Genome Project (2011), which analyzed over 3,200 startups, found that startups that pivoted once or twice raised 2.5 times more money, had 3.6 times better user growth, and were 52 percent less likely to scale prematurely than those that never pivoted or pivoted more than twice.


Common MVP Mistakes

1. Forgetting to Define the Hypothesis

The most fundamental mistake is building an MVP without first writing down what specific assumption it is testing. Without a hypothesis, you cannot design the right experiment, measure the right things, or determine what the results mean. This is equivalent to running a scientific experiment without a research question.

2. Building Too Much

"Minimum" is harder than it sounds. Engineers want to build robust systems. Designers want to build beautiful interfaces. Product managers want to include important features. All of these impulses produce MVPs that are not minimum and delay the learning cycle without proportional benefit.

A useful constraint: can you test your core hypothesis in two weeks? If not, scope down. Marty Cagan, partner at the Silicon Valley Product Group and author of Inspired (2017), recommends that teams ask: "What is the smallest experiment that would change our minds?"

3. Measuring Vanity Metrics

A landing page with a "Sign Up" button gets 10,000 visitors and 200 signups. Is that good? It depends on what the hypothesis was. If the hypothesis was "people will pay for this," then email signups measure awareness, not willingness to pay -- and the MVP should have included a price point, even a fake one.
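Whether a result like 200 signups from 10,000 visitors "is good" can only be judged against a bar set before the test. Here is a minimal sketch, assuming a hypothetical pre-registered 2 percent threshold and a simple normal-approximation confidence interval:

```python
import math

# Numbers mirror the example above; the 2% threshold is hypothetical
# and must be chosen *before* the test, based on the business model.
visitors, signups = 10_000, 200
threshold = 0.02

p = signups / visitors
# Normal-approximation 95% confidence interval for the conversion rate.
half_width = 1.96 * math.sqrt(p * (1 - p) / visitors)
low, high = p - half_width, p + half_width

print(f"conversion: {p:.1%}  (95% CI {low:.1%} - {high:.1%})")
if low >= threshold:
    print("validated: even the low estimate clears the bar")
elif high < threshold:
    print("invalidated: even the high estimate misses the bar")
else:
    print("inconclusive: collect more data or refine the test")
```

With these numbers the result is inconclusive -- the interval straddles the threshold -- which is itself a useful finding: the experiment needs either more traffic or a sharper test.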

4. Skipping the Learning Phase

Many teams build an MVP, ship it, look at the numbers briefly, and immediately start building the next version. This treats the MVP as a release process, not a learning process. The learning phase -- analyzing results, updating the hypothesis, deciding whether to persevere or pivot -- is the whole point.

5. Confusing Early Adopters with the Mass Market

Early users who find a rough MVP are usually self-selected enthusiasts who are more tolerant of imperfection and more motivated to engage than typical future customers. Geoffrey Moore described this gap in Crossing the Chasm (1991) -- the transition from early adopters to the mainstream market is the most dangerous phase of a technology product's life cycle. Learning from early adopters is valuable, but extrapolating their behavior to the broader market requires caution.


Beyond the MVP: When to Stop Experimenting

The build-measure-learn cycle is not meant to continue indefinitely. At some point, core hypotheses are validated, product-market fit is established, and the organization's challenge shifts from learning to scaling.

Signs that you are beyond MVP stage and into growth:

  • Users are actively pulling you forward -- asking for more features, sharing unprompted, complaining when the product is unavailable
  • Key metrics (retention, LTV, NPS) are strong and improving
  • You understand your acquisition, activation, retention, referral, and revenue mechanics well enough to invest confidently in scaling each
  • The primary bottleneck is distribution, not product-market fit

Marc Andreessen, co-founder of Andreessen Horowitz, defined product-market fit as "being in a good market with a product that can satisfy that market." Before that point, the MVP and the lean loop are the right tools. After it, the right question changes from "should we build this?" to "how fast can we scale it?"

A 2019 survey by First Round Capital of 869 startup founders found that 72 percent of those who achieved product-market fit said they pivoted at least once to get there. The MVP was the mechanism that enabled those pivots -- providing the evidence that made course corrections possible before resources were exhausted.

The minimum viable product is ultimately not a product strategy -- it is an epistemology, a method for converting startup uncertainty into actionable knowledge efficiently. Founders who internalize that insight build better products, waste less money, and make better decisions than those who treat the MVP as simply a scrappy first launch.



References and Further Reading

  • Ries, Eric. (2011). The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.
  • Blank, Steve. (2005). The Four Steps to the Epiphany: Successful Strategies for Products That Win. K&S Ranch.
  • Maurya, Ash. (2012). Running Lean: Iterate from Plan A to a Plan That Works. O'Reilly Media.
  • Cagan, Marty. (2017). Inspired: How to Create Tech Products Customers Love. Wiley.
  • Moore, Geoffrey. (1991). Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers. Harper Business.
  • Ghosh, Shikhar. (2011). The Venture Capital Secret: 3 Out of 4 Start-Ups Fail. Harvard Business School Working Knowledge. https://hbswk.hbs.edu/item/the-venture-capital-secret-3-out-of-4-start-ups-fail
  • Startup Genome Project. (2011). Startup Genome Report Extra on Premature Scaling. https://startupgenome.com
  • Kahneman, Daniel & Tversky, Amos. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291.
  • Grove, Andrew S. (1983). High Output Management. Random House.

Frequently Asked Questions

What is a minimum viable product (MVP)?

A minimum viable product is the simplest version of a product that allows a team to collect the maximum amount of validated learning about customers with the least effort. The concept, popularized by Eric Ries in 'The Lean Startup' (2011), emphasizes that the goal of an MVP is not to build a small product but to test a specific hypothesis about customer behavior or market need with real users as quickly as possible.

What are famous examples of MVPs?

Dropbox launched as a demo video before building any product, gaining 75,000 email signups overnight to validate demand. Zappos founder Nick Swinmurn listed shoes from local stores online and manually fulfilled orders to test whether people would buy shoes without trying them first. Airbnb's founders rented out air mattresses in their San Francisco apartment to test whether strangers would pay to sleep in someone else's home. Each tested a core assumption cheaply before building infrastructure.

What is the build-measure-learn loop?

The build-measure-learn loop is the core feedback cycle of the Lean Startup methodology. Teams build the minimum required to test a hypothesis, measure the actual outcomes (not vanity metrics), and learn whether to persevere with the current approach or pivot to a new strategy. The goal is to complete this loop as quickly as possible, minimizing wasted investment in unvalidated assumptions.

What is the difference between an MVP and an MLP?

An MVP (minimum viable product) focuses on learning — it is optimized to test hypotheses and gather data, often at the cost of polish or full functionality. An MLP (minimum lovable product) focuses on user experience — it is the smallest product that users would actually enjoy and recommend. MLP thinking is often more appropriate for consumer products in competitive markets where user experience drives adoption, while MVP thinking is more appropriate for early-stage validation before significant investment.

What are the most common MVP mistakes?

The most common mistakes include building too much (overdefining 'minimum'), not defining what hypothesis the MVP is testing before building it, measuring vanity metrics (downloads, signups) rather than behavior metrics (activation, retention, revenue), building for the wrong customers, and confusing a prototype or pilot with an MVP. The most fundamental mistake is skipping the learning phase — building, shipping, and moving on without systematically extracting insight.