The product management discipline has a frameworks problem. In the past two decades, practitioners and consultants have generated a dense canon of frameworks, models, matrices, and methodologies — jobs-to-be-done, opportunity solution trees, north star metrics, RICE, ICE, Kano models, impact mapping, story mapping, continuous discovery, product-led growth, build-measure-learn, dual-track agile, product trio, and dozens more. Most PMs have encountered at least ten of these. Fewer than half apply any of them consistently. A small minority have deeply internalized the ones that changed how they think, while dismissing the ones that provided vocabulary without insight.

The frameworks problem has two failure modes. The first is ignorance: PMs who have never been exposed to the frameworks cannot benefit from the genuine insights embedded in the best ones. The second is cargo cult: PMs who have collected frameworks as credentials — who can name every model but use none of them to make actual decisions — are sometimes more dangerous than the ignorant, because they mistake vocabulary for understanding and process theater for product discipline.

The resolution is a principled filter: understand the frameworks that encode genuine insight, discard the ones that are mostly vocabulary, and apply the ones that survive scrutiny as thinking tools rather than execution checklists. This article applies that filter systematically — explaining the most important PM frameworks, assessing their actual utility, identifying where they are commonly misapplied, and distinguishing the small set of frameworks that genuinely improve product decisions from the larger set that mostly generate slide decks.

A note on framework proliferation: a 2023 survey by the product community Mind the Product found that the average product manager had been "trained in" or "introduced to" 8.4 distinct frameworks during their career, yet used only 2.1 regularly in their actual decision-making. The gap between frameworks encountered and frameworks internalized is not a knowledge problem — it is a relevance problem. Most frameworks answer questions that practitioners only encounter occasionally. The tier-one frameworks below are the ones that answer questions product managers face every day.

"The point of a framework is to make better decisions. If your team can recite the framework but cannot make better decisions because of it, the framework has failed." — Teresa Torres, author of 'Continuous Discovery Habits'


Key Definitions

Framework: A structured way of thinking about a class of problems — a set of questions to ask, dimensions to consider, or steps to follow. Frameworks are tools for organizing thought, not algorithms for generating correct answers.

Cargo cult: A term from anthropology referring to imitation of surface behaviors without understanding the underlying principles that make those behaviors effective. In product management, cargo cult describes teams that adopt framework vocabulary and rituals without the substance behind them.

Outcome vs output: Outputs are things a team produces — features, code, documents. Outcomes are changes in behavior or metrics that result from those outputs. "We shipped 12 features in Q3" is an output. "We increased activation rate by 14% in Q3" is an outcome. The distinction is fundamental to product management; most of the best frameworks are organized around outcomes.

Discovery: The work of understanding what to build before committing to building it — including customer interviews, prototype testing, market analysis, and hypothesis formulation. Discovery-oriented frameworks are designed to reduce the risk of building the wrong thing.

Delivery: The work of building and shipping something. Delivery-oriented frameworks (sprints, Kanban, continuous deployment) are designed to ensure that committed work gets done reliably and efficiently.

Opportunity: A customer need, pain point, or desire that, if addressed, would move a measurable product outcome. Distinguishing between opportunities (customer-grounded) and solutions (team-invented) is a conceptual discipline that several tier-one frameworks reinforce.


Framework Quality Assessment

Framework | Category | Genuine Utility | Common Misuse | Tier
Jobs-to-be-done | Discovery | High — changes how you see competition | Ignoring what customers actually say | 1 — Foundational
Opportunity solution tree | Discovery | High — connects outcome to action | Used as leadership slide deck | 1 — Foundational
Outcome vs output distinction | Mindset | Very high — changes what you measure | Treated as obvious without behavioral change | 1 — Foundational
North star metric | Alignment | High — creates team focus | Optimized to the point of gaming | 2 — Operational
RICE / ICE scoring | Prioritization | Moderate — forces explicit tradeoffs | Applied mechanically, becomes theater | 2 — Operational
Product strategy hierarchy | Alignment | High — prevents incoherent roadmaps | Strategy is too vague to be actionable | 2 — Operational
OKRs | Goal-setting | Moderate — when used for outcomes | Turned into task lists with output KRs | 3 — Contextual
Kano model | Feature research | Moderate — useful in competitive context | Applied to every feature regardless of fit | 3 — Contextual
Build-measure-learn | Iteration | High — when rigorously practiced | Undirected iteration without hypotheses | 3 — Contextual
'PM as CEO' metaphor | Mindset | Low — accountability framing only | Directive behavior toward engineers/design | 4 — Skeptical
SAFe / PI Planning | Scaled delivery | Context-dependent | Process overhead that replaces thinking | 4 — Skeptical

The Frameworks That Matter

Jobs-to-Be-Done (JTBD)

Developed by Clayton Christensen at Harvard Business School and popularized through his 2003 book "The Innovator's Solution" and the 2016 HBR article "Know Your Customers' Jobs to Be Done," JTBD reframes the question of who your customer is. Instead of defining customers by demographics or psychographics, JTBD focuses on the job the customer is trying to accomplish.

The canonical example: Christensen's team analyzed why people were buying milkshakes in the morning at McDonald's. Demographically, morning milkshake buyers looked random. But understanding the job — they had a long, boring commute and needed something that would keep them occupied and satiated until lunch — revealed the real competition: not other milkshakes, but bananas, bagels, and energy bars. The product improvement implied by demographics (better flavors) was different from the product improvement implied by JTBD (easier to consume one-handed while driving, more filling).

Tony Ulwick, who developed the closely related outcome-driven innovation methodology and wrote "Jobs to Be Done: Theory to Practice" (Idea Bite Press, 2016), adds a structural layer to JTBD: customers have functional jobs (the practical task they are trying to accomplish), emotional jobs (how they want to feel while doing it), and social jobs (how they want to be perceived by others). A product that addresses only the functional job while ignoring the emotional and social dimensions often finds lower adoption than its technical quality would predict.

The practical implications of JTBD for product strategy are substantial:

  • Competitor mapping changes: Competition is defined by the job, not the category. Notion competes with Google Docs, physical notebooks, Trello, and personal email drafts — all tools people use to "capture and organize thinking."
  • Feature prioritization changes: Features that better address the core job are more valuable than features that add variety within the existing job category.
  • Customer segmentation changes: Demographics become less relevant than behavioral segmentation around who has the job most urgently and most frequently.

JTBD is genuinely useful when: you are trying to understand non-obvious competition, you are building a new category rather than improving an existing one, or your team is too focused on feature requests rather than underlying user motivation.

JTBD is commonly misapplied when: teams use it as a customer segmentation tool rather than a problem framing tool, or when it becomes an excuse to ignore what customers actually say they want in favor of what the PM believes they "really" want.

Opportunity Solution Tree

Teresa Torres introduced the opportunity solution tree in her book "Continuous Discovery Habits" (2021) as a visual framework for connecting product goals to action. The tree structure is:

  • Outcome: The top-level business or product goal (e.g., "increase trial-to-paid conversion")
  • Opportunities: Unmet customer needs, pain points, or desires that, if addressed, would move the outcome
  • Solutions: Possible approaches to addressing each opportunity
  • Experiments: Specific tests to evaluate whether a solution works

The framework's value is structural: it forces explicit connection between action and outcome, prevents solution-jumping before problem exploration, and creates a shareable artifact that makes product team reasoning visible.
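As a working illustration only — the class names and helper method below are mine, not part of Torres' method — the four levels map onto a simple nested data model:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    description: str                          # a specific test of a solution

@dataclass
class Solution:
    description: str                          # a possible approach
    experiments: list = field(default_factory=list)

@dataclass
class Opportunity:
    need: str                                 # a customer-grounded pain point
    solutions: list = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    outcome: str                              # the top-level goal
    opportunities: list = field(default_factory=list)

# A tiny tree for the outcome from the example above.
tree = OpportunitySolutionTree(outcome="increase trial-to-paid conversion")
opp = Opportunity(need="users don't reach a first success in session one")
opp.solutions.append(Solution(
    description="guided onboarding checklist",
    experiments=[Experiment("prototype test with 5 trial users")],
))
tree.opportunities.append(opp)
```

The structural constraint is the point: a Solution cannot exist in this model without a parent Opportunity, which mirrors the discipline the tree is meant to enforce.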

A critical discipline in the opportunity solution tree is the distinction between opportunity space exploration (understanding all the ways the outcome could be moved) and solution space exploration (evaluating specific approaches to a chosen opportunity). Most teams jump straight to solutions because solutions feel productive — there is something to build. The opportunity solution tree forces the discipline of first mapping the space of customer problems before committing to any particular approach.

Torres found in her research that teams using the opportunity solution tree consistently identified higher-impact solutions than teams that began with solution brainstorming — not because the tree generates better ideas, but because it ensures that solution ideas are grounded in specific, well-understood customer problems.

The opportunity solution tree is most powerful when used by a product trio (PM, designer, engineer) in ongoing discovery sessions. It degenerates quickly when used as a presentation tool for leadership rather than a working document for the team.

The Outcome vs Output Distinction

This is less a framework than a conceptual foundation that most other product frameworks depend on. Melissa Perri articulates it most clearly in "Escaping the Build Trap" (O'Reilly, 2018): outputs are things a team produces — features shipped, code written, documents delivered. Outcomes are changes in customer behavior or business metrics that result from those outputs.

The reason this distinction matters operationally: organizations that measure themselves on output have an incentive to keep producing output, regardless of whether that output creates value. Organizations that measure themselves on outcomes have an incentive to understand the relationship between what they build and the customer behavior they are trying to change — and to stop building when the relationship is unclear.

"Companies optimise for what they measure. If you measure features shipped, your team will ship features. If you measure customer outcomes, your team will be forced to understand what actually changes customer behavior." — Melissa Perri, Escaping the Build Trap, O'Reilly Media, 2018

In practice, shifting from output to outcome measurement requires changing how quarterly goals are written, how product team success is evaluated, how roadmaps are structured, and what questions are asked in product reviews. This is a cultural and organizational change, not just a vocabulary update — which is why so many teams claim to be "outcome-focused" while continuing to measure themselves on features shipped per quarter.

North Star Metric

The north star metric concept, popularized by Sean Ellis and refined by writers including John Cutler ("North Star Playbook," Amplitude, 2019) and Lenny Rachitsky, is a single top-level metric that best captures the core value a product delivers to customers. The hypothesis is that aligning an entire product organization around one metric creates focus and prevents the metric cherry-picking that allows teams to look productive while not driving real value.

Famous examples:

  • Spotify: time spent listening
  • Airbnb: nights booked
  • Facebook: daily active users
  • Slack: daily active users who send messages in the first 30 days (originally)
  • Duolingo: daily active users completing at least one lesson

Cutler's framework adds important structure: the north star metric should represent the leading indicator of long-term business success, not a lagging indicator like revenue. Revenue is a consequence of delivering value; the north star captures whether value is being delivered.

The north star metric is most useful as an alignment tool. It is most dangerous when treated as a management reporting metric: optimizing for any single metric eventually produces distorted behavior. The antidote is a counter-metric system — define not just the north star but the guardrail metrics that prevent gaming it. If Spotify's north star is time listening, guardrails might include satisfaction scores (to prevent low-quality content that people half-listen to) and subscription retention (to ensure listening is translating into revenue).

A 2021 study by Amplitude of 20 high-growth consumer products found that companies with clearly defined north star metrics grew 2.3x faster on average than comparable companies without them — though the causal mechanism likely runs in both directions: companies with clear strategy are both more likely to define north stars and more likely to grow.

Product Strategy vs Roadmap vs Backlog

These three concepts are frequently conflated, and the conflation produces organizations that are busy but not coherent.

Product strategy answers: who are we building for, what problem are we solving uniquely well, what are the big bets we are making about the market, and what do we need to be true for those bets to pay off? A product strategy is short — one to three pages — and relatively stable over 12-18 months. Roger Martin's "Playing to Win" (Harvard Business Review Press, 2013) provides the clearest framework for strategy as a set of integrated choices: where to play (which customers, which problems) and how to win (what capabilities and approaches differentiate you).

Product roadmap answers: given our strategy, what are we building over the next 6-18 months, in what order, and why? A roadmap is a plan, not a commitment. The most effective roadmaps are outcome-oriented and explicitly provisional. Janna Bastow, co-founder of ProdPad, popularized the now/next/later roadmap format — a three-column structure that shows current focus, near-term priorities, and long-term direction without committing to specific delivery dates for items beyond the current quarter.

Product backlog answers: what specific work have we committed to doing, and in what order? The backlog is a living prioritized list of engineering and design work, much more granular than the roadmap.

Strategy drives roadmap. Roadmap informs backlog. Organizations that build roadmaps without underlying strategy produce an incoherent collection of feature bets. Organizations that treat backlog management as product strategy are the ones that fall into the build trap.


Prioritization Frameworks in Depth

Prioritization is the PM's most repeated decision. The frameworks below exist to make that decision more rigorous — not to automate it.

RICE Scoring

Developed by Sean McBride at Intercom and published in a widely shared 2016 blog post, RICE scores each candidate initiative across four dimensions:

  • Reach: How many users will this affect per time period?
  • Impact: How much will this move the needle for each user reached? (Scored 0.25 to 3)
  • Confidence: How confident are you in your reach and impact estimates? (Scored as a percentage)
  • Effort: How many person-months of work will this require?

The RICE score is: (Reach x Impact x Confidence) / Effort

The value of RICE is that it makes assumptions explicit and comparable. When two initiatives are debated, RICE forces the team to articulate specifically: how many users does this reach, and how confident are we? This surfaces hidden disagreements about assumptions that would otherwise be obscured by competing opinions. The discussion about how to score an initiative is more valuable than the score itself.
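The formula translates directly into code. A minimal sketch — the two initiatives and their numbers are hypothetical, chosen to show how the dimensions trade off:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE score: (Reach * Impact * Confidence) / Effort.

    reach      -- users affected per time period
    impact     -- per-user impact, scored 0.25 to 3
    confidence -- confidence in the estimates, as a fraction (0-1)
    effort     -- person-months of work
    """
    return (reach * impact * confidence) / effort

# Hypothetical comparison: a deep improvement for a smaller audience (A)
# versus a shallow, less certain improvement for a larger one (B).
score_a = rice_score(reach=500, impact=2.0, confidence=0.8, effort=2)   # 400.0
score_b = rice_score(reach=2000, impact=1.0, confidence=0.5, effort=4)  # 250.0
```

Writing the inputs down is what makes the disagreement productive: anyone who disputes score_a must now dispute a specific estimate rather than the conclusion.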

ICE Scoring

ICE (Impact, Confidence, Ease) is a simplified version of RICE that drops the Reach dimension. Each dimension is scored on a 1-10 scale, and the ICE score is the average of the three. ICE is faster to apply and suits teams early in their prioritization discipline, or cases where reach is roughly equivalent across initiatives.
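The calculation is a plain average; a minimal sketch (the range check is an illustrative choice, not part of the method):

```python
def ice_score(impact, confidence, ease):
    """ICE score: the average of three 1-10 ratings."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("each ICE dimension is scored 1-10")
    return (impact + confidence + ease) / 3

# Strong impact (8), moderate confidence (6), middling ease (4).
score = ice_score(8, 6, 4)  # 6.0
```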

Opportunity Scoring (Outcome-Driven Innovation)

Tony Ulwick's opportunity scoring methodology (described in "Jobs to Be Done: Theory to Practice," 2016) is more research-intensive than RICE or ICE but more directly grounded in customer data. The process surveys customers on two dimensions for each desired outcome: importance ("how important is it that you can do X?") and satisfaction ("how satisfied are you with your current ability to do X?"). Outcomes with high importance and low satisfaction are the highest-value opportunities. The scoring surfaces where customers are underserved (high importance, low satisfaction) and where they are overserved (features that may not need continued investment).
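Ulwick's published "opportunity algorithm" combines the two survey dimensions into a single score. A sketch of the commonly cited formulation (inputs on a 0-10 scale; the example outcomes are hypothetical):

```python
def opportunity_score(importance, satisfaction):
    """Opportunity algorithm, as commonly published:
    opportunity = importance + max(importance - satisfaction, 0),
    with both inputs on a 0-10 scale. Underserved outcomes (high
    importance, low satisfaction) score highest; the max() clamp
    means overserved outcomes never score above their importance."""
    return importance + max(importance - satisfaction, 0)

# An important, poorly served outcome scores far above an
# important but already well-served one.
underserved = opportunity_score(importance=9, satisfaction=3)  # 15
overserved = opportunity_score(importance=8, satisfaction=9)   # 8
```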


The Frameworks Frequently Misapplied or Overused

OKRs

OKRs (Objectives and Key Results), developed at Intel by Andy Grove and popularized by John Doerr's 2018 book "Measure What Matters," are one of the most widely adopted and most consistently misapplied frameworks in the technology industry.

Properly used, OKRs set ambitious outcome goals (Objectives) and measure them with specific, time-bound indicators of progress (Key Results). In practice, OKRs are most commonly misapplied by:

  • Treating Key Results as tasks: "KR: ship the redesign by end of Q2" is an output, not an outcome. The correct Key Result would be "increase activation rate by 15 percent by end of Q2."
  • Setting too many OKRs: Three to five Objectives per team, maximum. Companies that set fifteen OKRs have no strategic focus; they have a comprehensive to-do list.
  • Using OKRs for performance management: Doerr explicitly warned against using OKR scores to evaluate individual employee performance. When OKRs affect compensation, teams set conservative targets that are easy to hit rather than ambitious bets.

The practical test for OKR quality: are the Key Results things you could achieve without actually making progress on the Objective? If yes, they are outputs, not outcomes.

Lean Startup's Build-Measure-Learn

Eric Ries' "The Lean Startup" (Crown Business, 2011) introduced the build-measure-learn cycle as a framework for iterative product development under uncertainty. The core insight — that you should validate assumptions with the smallest possible experiment before committing to full build — is sound and widely applicable.

The misapplication is treating the loop as a literal sequence rather than a principle. Genuine build-measure-learn requires: a specific hypothesis formulated before building, a minimum viable experiment designed to test that hypothesis, a clear success criterion established in advance, and honest evaluation of results including willingness to pivot based on the data.

Most teams that claim to be "doing Lean Startup" are building incrementally without the hypothesis discipline that makes iteration scientifically productive. They cycle quickly, but they cycle without learning — each iteration produces output without informing the next bet.

Ries acknowledged in a 2019 retrospective that the most commonly misunderstood element of the Lean Startup method was not the loop but the validated learning concept: learning counts only when it changes a specific belief you held before the experiment. If the experiment could not have changed your decision, it was not a real test.

Agile and Scrum

Agile — the family of iterative software development practices codified in the 2001 Agile Manifesto — and Scrum — the specific sprint-based implementation framework developed by Jeff Sutherland and Ken Schwaber — are delivery frameworks, not product frameworks. They address how teams manage and execute committed work, not how they decide what work to commit to.

The confusion arises because product managers are deeply embedded in Agile/Scrum processes: they own or contribute to the product backlog, participate in sprint planning, and often serve as the "product owner" in Scrum's role definition. But Scrum's product owner is a delivery coordination function, not a full product management function. A team running perfect Scrum can still build the wrong product with impeccable organizational discipline.

The practical implication: Agile practices are necessary but not sufficient for good product outcomes. Teams need both delivery discipline (Agile) and discovery discipline (continuous customer research, outcome measurement, hypothesis testing) to build products that work.

The 'PM as CEO of the Product' Metaphor

This phrase, usually traced to Ben Horowitz's 1996 essay "Good Product Manager/Bad Product Manager," is rhetorically motivating and practically misleading. PMs are not CEOs. CEOs have formal authority over everyone in their organization; PMs have formal authority over no one. CEOs can allocate budget and headcount; PMs cannot.

The metaphor is useful for understanding PM accountability: like a CEO, the PM is responsible for the product's success even though many things that determine that success are outside their direct control. The metaphor becomes harmful when PMs internalize it as a license for directive behavior — treating engineers, designers, and stakeholders as employees rather than partners.

The better mental model, articulated by Marty Cagan in "Empowered" (Wiley, 2021), is PM as coach and context-setter: providing the team with clear problem definitions, customer context, and success criteria, then trusting engineers and designers to find the best solutions to those problems.


Less-Known Frameworks Worth Understanding

Dual-Track Agile

Dual-track agile, developed by Jeff Patton and described in his book "User Story Mapping" (O'Reilly, 2014), addresses the tension between discovery work and delivery work by running them as parallel tracks rather than sequential phases. In a single-track model, discovery happens before a sprint and delivery happens during the sprint, with constant pressure to skip discovery when sprints need to be filled. In dual-track agile, the discovery track runs one to two sprints ahead of the delivery track, continuously generating validated problems and solutions that the delivery track can commit to building with confidence.

Impact Mapping

Impact mapping, developed by Gojko Adzic and described in "Impact Mapping" (Provoking Thoughts, 2012), is a visual collaborative planning technique that connects product deliverables to business goals by mapping four levels: goal (what business outcome), actors (who can help or hinder), impacts (what behaviors do we need from these actors), and deliverables (what can we build to trigger those behaviors). Impact mapping is most useful at the beginning of a product initiative to ensure the entire team understands not just what is being built but why it is expected to create business value.

Story Mapping

User story mapping, also developed by Jeff Patton, organizes user stories along two axes: the horizontal axis represents the user's journey (activities in sequence), while the vertical axis represents priority (most essential stories at the top, enhancements below). The structure makes it possible to see which stories together constitute a viable release — a horizontal slice across the map — rather than losing context when working from a flat backlog.

The Kano Model

Developed by Noriaki Kano in 1984 ("Attractive Quality and Must-Be Quality," Journal of the Japanese Society for Quality Control), the Kano model categorizes product features into five types based on how customers respond to their presence or absence:

  • Must-be (basic) features: Expected as baseline; their presence creates no delight, but their absence creates strong dissatisfaction. Security, uptime, and basic usability fall here.
  • Performance (linear) features: The more you have, the more satisfied customers are — and the less you have, the less satisfied they are. Speed, storage, and accuracy are typical examples.
  • Excitement (delighter) features: Unexpected features that create strong positive reactions when present but whose absence causes no dissatisfaction because they were not expected.
  • Indifferent features: Customers neither notice nor care whether they are present.
  • Reverse features: Present in the product, but some customers actively prefer their absence.

The Kano model is most useful when doing competitive feature analysis — understanding which features are table stakes in your category (must-be) versus where differentiation opportunity exists (excitement features that competitors lack). It is misapplied when used as a universal prioritization framework regardless of whether the feature categories are relevant to the specific decision at hand.
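In Kano's survey method, each feature is classified from a pair of answers: how the customer feels if the feature is present (the functional question) and if it is absent (the dysfunctional question). A deliberately simplified sketch of the standard evaluation table — the full table distinguishes a few more edge cases than this version does:

```python
# Survey answer codes, from most to least positive.
LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = range(5)

def kano_category(functional, dysfunctional):
    """Classify a feature from a (functional, dysfunctional) answer
    pair. A simplified version of Kano's evaluation table;
    contradictory pairs are flagged as questionable responses."""
    if functional == dysfunctional and functional in (LIKE, DISLIKE):
        return "questionable"   # liked (or hated) both present and absent
    if functional == LIKE and dysfunctional == DISLIKE:
        return "performance"    # more is better, less is worse
    if functional == LIKE:
        return "excitement"     # delight when present, no penalty absent
    if dysfunctional == DISLIKE:
        return "must-be"        # expected when present, hated when absent
    if functional == DISLIKE:
        return "reverse"        # some customers prefer its absence
    return "indifferent"

# Uptime: its presence is simply expected, its absence is hated.
category = kano_category(EXPECT, DISLIKE)  # "must-be"
```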


A Framework Selection Hierarchy

Not all frameworks deserve equal attention. A practical prioritization:

Tier 1 — Foundational (understand deeply):

  • Jobs-to-be-done: changes how you see customer problems
  • Opportunity solution tree: improves team discovery discipline
  • Outcome vs output distinction: the conceptual foundation of modern product management

Tier 2 — Operational (use regularly):

  • RICE/ICE: prioritization conversations
  • North star + guardrail metrics: alignment and focus
  • Product strategy to roadmap to backlog hierarchy: organizational clarity

Tier 3 — Contextual (use when appropriate):

  • Kano model: feature categorization in competitive analysis
  • OKRs: goal-setting if used as outcomes, not outputs
  • Opportunity scoring: customer-grounded prioritization research
  • Dual-track agile: when discovery and delivery conflicts are creating friction
  • Impact mapping: at the start of major initiatives

Tier 4 — Skeptical (apply selectively):

  • Any framework you cannot explain in plain English without jargon
  • Any framework whose primary output is a slide rather than a decision

How to Evaluate a Framework Before Adopting It

When a new framework is presented — in a conference talk, a newsletter, a book — apply this filter before investing time in it:

  1. What decision does it improve? A good framework answers a specific question better than you could answer it without the framework. If you cannot identify the specific decision it helps, it is probably decorative.

  2. What does it require you to do differently? Frameworks that feel obvious and require no behavioral change are not frameworks — they are reassurances that you are already doing the right thing. Real frameworks create friction because they force you to do something you would not naturally do.

  3. Is the value in the artifact or the process? The best frameworks generate value through the thinking they require, not through the document they produce. An opportunity solution tree drawn together in a team workshop creates shared understanding. An opportunity solution tree assembled by the PM and presented as a completed artifact creates polished slides and minimal learning.

  4. Can you test it quickly? Frameworks that require six-month implementation cycles before producing evidence they work are high-risk investments. Prefer frameworks you can try on a single initiative within one sprint.


Practical Takeaways

The frameworks that most reliably improve product decisions are the ones that change how you see the problem — particularly JTBD and the outcome vs output distinction. These are not process tools; they are conceptual shifts. Once you genuinely understand that customers hire products to do jobs, and once you genuinely distinguish between measuring output and measuring outcome, the other frameworks follow more naturally.

When evaluating whether to adopt a framework, ask: does this help me make a decision I would otherwise make poorly, or does it give me a way to describe a decision I already know how to make? If the former, adopt it. If the latter, it is probably decorative.

Apply frameworks in working sessions, not in slide decks. A framework that lives in a presentation is a credential. A framework that is drawn on a whiteboard mid-conversation to resolve a disagreement is a tool. The difference matters more than which specific frameworks you claim to use.

The best product managers in practice use a small number of frameworks fluently. They can move in and out of JTBD framing during a customer interview, apply opportunity solution tree structure when a team discussion gets solution-heavy too quickly, and reach for RICE when a prioritization debate has gone on too long without explicit assumptions. Fluency — not breadth of framework vocabulary — is the goal.


References

  1. Christensen, C. M., Hall, T., Dillon, K., & Duncan, D. S. "Know Your Customers' Jobs to Be Done." Harvard Business Review, September 2016.
  2. Torres, T. Continuous Discovery Habits. Product Talk, 2021.
  3. Ellis, S. & Brown, M. Hacking Growth. Crown Business, 2017.
  4. Cutler, J. "North Star Playbook." Amplitude, 2019.
  5. Ries, E. The Lean Startup. Crown Business, 2011.
  6. Doerr, J. Measure What Matters. Portfolio/Penguin, 2018.
  7. Ulwick, A. Jobs to Be Done: Theory to Practice. Idea Bite Press, 2016.
  8. Cagan, M. Inspired: How to Create Tech Products Customers Love. Wiley, 2018.
  9. Cagan, M. & Jones, C. Empowered: Ordinary People, Extraordinary Products. Wiley, 2021.
  10. Perri, M. Escaping the Build Trap. O'Reilly Media, 2018.
  11. Rachitsky, L. "North Star Metric: A Framework for Aligning Your Team." Lenny's Newsletter, 2021.
  12. Kano, N. "Attractive Quality and Must-Be Quality." Journal of the Japanese Society for Quality Control, 1984.
  13. Horowitz, B. & Andreessen, M. "Good Product Manager / Bad Product Manager." Andreessen Horowitz, 2010.
  14. Patton, J. User Story Mapping. O'Reilly Media, 2014.
  15. Adzic, G. Impact Mapping. Provoking Thoughts, 2012.
  16. Martin, R. & Lafley, A.G. Playing to Win. Harvard Business Review Press, 2013.
  17. McBride, S. "RICE: Simple Prioritization for Product Managers." Intercom Blog, 2016.
  18. Bastow, J. "Now-Next-Later Roadmaps." ProdPad, 2014.
  19. Amplitude. "North Star Playbook: Data on Metric Adoption and Growth." Amplitude, 2021.
  20. Mind the Product. "State of Product Management Survey." Mind the Product, 2023.

Frequently Asked Questions

What is the jobs-to-be-done framework?

JTBD reframes customers by the job they are trying to accomplish rather than their demographics. It helps identify non-obvious competition and shifts product thinking from features to the underlying motivation behind product use.

What is an opportunity solution tree?

Teresa Torres' framework visually connects a desired outcome to opportunities (customer needs), solutions, and experiments — preventing teams from jumping to solutions before fully exploring the problem space.

What is a north star metric?

A single top-level metric that best captures the core value a product delivers — like Airbnb's nights booked or Spotify's time spent listening. It aligns the team around one success measure instead of competing metrics.

What is the difference between a product strategy and a product roadmap?

Strategy defines why you are building what you are building — target customer, differentiated value, and key market bets. A roadmap shows what you plan to build and when. Strategy should drive the roadmap, not the reverse.

Which product frameworks are overused or misapplied?

OKRs are routinely turned into task lists rather than outcome goals. Build-measure-learn is often applied as undirected iteration without real hypothesis discipline. Any framework that primarily produces a slide rather than a decision is likely cargo cult.