Most teams that do retrospectives do not improve from them. They meet. They list things. They write action items on a digital whiteboard. They close the meeting. Nothing changes. Two weeks later, they do it again.

This is not because retrospectives are a bad idea. It is because running a retrospective that produces actual change is harder than running a meeting that feels productive. The difference between the two is significant and specific.

This article explains what retrospectives are for, the major formats, the psychological conditions required for them to work, the most common failure modes, how to make action items stick, and how to run them effectively with remote teams.


What Retrospectives Are For

The agile retrospective (often shortened to "retro") is a structured team event in which the team examines its own working practices — not the product, not the deliverables, but the process of how the team works together — and identifies specific improvements.

In Scrum, the retrospective is one of the five core ceremonies (alongside the sprint itself, sprint planning, the daily standup, and the sprint review). The Scrum Guide specifies that the retrospective is held at the end of each sprint and focuses on three questions: what went well, what did not go well, and what changes will the team commit to?

The underlying theory is simple: continuous improvement requires regular reflection. Teams that do not periodically examine how they work tend to accumulate small inefficiencies, unaddressed interpersonal frictions, and habitual practices that made sense at one point but no longer do. The retrospective is the scheduled space for surfacing and addressing these.

But the theory only works if the meeting produces actual behavior change. A retrospective that surfaces problems without changing anything is not just ineffective — it is actively corrosive to team morale, because it demonstrates that reflection does not lead to improvement, which makes future reflection feel pointless.

"The retrospective is the most important ceremony in Scrum — and the one most often done badly. When it works, the team gets better every sprint. When it doesn't, it's just a meeting about a meeting." -- Lyssa Adkins, Coaching Agile Teams (2010)


The Research Case for Retrospectives

The case for regular team retrospectives is not just theoretical — it is supported by substantial organizational research.

Edmondson, Bohmer, and Pisano (2001), in a landmark study of cardiac surgery teams, found that teams that reflected after procedures — discussing what went well and what could be improved — showed measurably faster skill acquisition and better long-term outcomes than teams that moved immediately from one case to the next. The teams that debriefed were not more skilled at the outset; they became more skilled faster because of the reflection habit.

Tannenbaum and Cerasoli (2013) conducted a meta-analysis of 46 studies on after-action reviews (the military equivalent of retrospectives) and found that teams that conducted structured post-performance debriefs improved performance by an average of 20 to 25 percent compared to teams that did not. The effect was consistent across industries, team sizes, and types of work.

Ellis and Davidi (2005), studying Israel Defense Forces squads, found that after-event reviews improved not only performance on the specific tasks reviewed but also transfer of learning to new tasks — suggesting that the reflective habit itself, not just the specific insights, contributes to performance improvement.

These findings suggest that the value of retrospectives extends beyond any specific process change they generate. Teams that retrospect regularly develop a shared orientation toward learning that makes them more adaptable, more honest with each other, and faster at identifying and solving problems — qualities that compound over time.

The challenge, as all practitioners know, is that knowing this does not make retrospectives easy to run well.


When to Run a Retrospective

Retrospectives are typically held:

  • At the end of each sprint (standard Scrum cadence — weekly, biweekly, or monthly depending on sprint length)
  • At the end of a project or phase (milestone retrospective)
  • After a significant incident or failure (post-mortem or blameless retrospective)
  • When a team is newly formed (early retrospective to establish norms)
  • When team performance is clearly suffering (emergency retrospective)

The sprint retrospective is the recurring operational form. The post-mortem is the incident-specific form. Both serve the same basic purpose — learning from experience — but differ in scope and emotional intensity.

Frequency and Duration

Research on debrief effectiveness suggests a clear relationship between frequency and impact: more frequent, shorter retrospectives outperform less frequent, longer ones. Teams that meet every two weeks for 60 minutes improve faster than teams that meet monthly for three hours, even when the total time investment is similar.

The Scrum Guide allocates a maximum of three hours for a retrospective in a one-month sprint, and proportionally less for shorter sprints. For most two-week sprint teams, 60-90 minutes is sufficient. The temptation to expand retrospectives to fill available time should be resisted; longer meetings tend to produce more venting and less focused decision-making.
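The proportional rule can be expressed as simple arithmetic. A minimal sketch (the function name and the linear scaling are illustrative conveniences, not anything the Scrum Guide itself specifies):

```python
def retro_timebox_minutes(sprint_weeks: float, max_minutes: int = 180) -> int:
    """Scale the three-hour maximum for a one-month sprint down
    proportionally for shorter sprints. Illustrative, not normative."""
    if not 0 < sprint_weeks <= 4:
        raise ValueError("sprint length should be between 0 and 4 weeks")
    return round(max_minutes * sprint_weeks / 4)

# A two-week sprint scales to a 90-minute maximum,
# consistent with the 60-90 minute guidance above:
retro_timebox_minutes(2)   # 90
retro_timebox_minutes(1)   # 45
```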


The Core Retrospective Formats

Format choice matters. Different formats surface different information, work better at different levels of team trust, and suit different specific objectives. The best facilitators choose formats deliberately rather than defaulting to the same template every time.

Start / Stop / Continue

The most commonly used format. The team generates items in three categories:

  • Start: Things the team is not currently doing but should begin
  • Stop: Things the team is doing that are not serving it well
  • Continue: Things that are working and should be preserved

Best for: Teams newer to retrospectives, situations where a structured template helps participation, when you want broad coverage of process issues.

Limitation: The three-bucket structure can feel generic and produce generic responses. It does not naturally prompt deeper analysis of why things are happening.

The 4Ls

Developed by Mary Gorman and Ellen Gottesdiener, the 4Ls asks team members to sort reflections into four categories:

  • Liked: Things the team appreciated
  • Learned: Things the team discovered or came to understand
  • Lacked: Things the team needed but did not have
  • Longed For: Things the team wishes it had had

Best for: Teams that need to surface learning and aspirations, not just problems. The "Longed For" category often reveals systemic issues (support, tooling, clarity) that "Stop/Start" formats miss.

The Sailboat (or Speed Boat)

A visual metaphor format:

  • Wind at your back (or sails): Things that helped the team move forward
  • Anchors: Things that slowed the team down or held it back
  • Rocks ahead (in some versions): Risks or upcoming obstacles
  • Sunny island (destination): Goals or aspirations

Best for: Teams that respond well to visual metaphors, mixed groups including less technical participants, when you want to make the discussion feel less clinical. The metaphor can lower emotional temperature around difficult topics.

Mad / Sad / Glad

Participants categorize experiences into three emotional states:

  • Mad: Things that frustrated or angered them
  • Sad: Things that disappointed them or that they will miss
  • Glad: Things they appreciated or that went well

Best for: Surfacing emotional undercurrents that more analytical formats miss. Useful after difficult sprints, following a project cancellation, or with teams that have experienced significant stress. Requires reasonable psychological safety.

Five Whys (Root Cause Analysis)

Rather than collecting a broad inventory of issues, the Five Whys focuses on one problem and drills into its root cause. The facilitator asks "why?" iteratively (typically five times) until arriving at a systemic cause rather than a surface symptom.

Example:

  • Problem: We shipped three bugs that should have been caught in QA.
  • Why? Because the QA review was rushed.
  • Why? Because it happened the day before the sprint deadline.
  • Why? Because testing is always scoped last and cut first when time is short.
  • Why? Because there is no explicit time allocation for testing in sprint planning.
  • Why? Because testing is considered implicit rather than explicit work.
  • Root cause: Testing is not treated as a first-class sprint item in planning.

Best for: Recurring problems that keep appearing without resolution. Teams that are ready to move from symptom identification to genuine root cause analysis.

The Five Whys was originally developed by Taiichi Ohno as part of the Toyota Production System — the same framework that produced Kanban. Its insight is that the first explanation of a problem is almost never the true cause; the true cause is typically several layers of "why?" deeper.

The Lean Coffee Retrospective

A participant-driven format where team members propose topics, vote on priorities, and discuss in order of vote count with time-boxing. No predetermined agenda.

Best for: Teams with high psychological safety who have clear issues they want to discuss. Empowers participants; reduces facilitator control. Requires maturity to execute well.

The FLAP Retrospective

A format developed by David Horowitz (co-founder of Retrium) designed to create clearer connections between outcomes and process:

  • Future Considerations: What's coming up that will affect the team?
  • Lessons Learned: What did we learn this period?
  • Accomplishments: What did we achieve?
  • Problem Areas: What got in our way?

Best for: Teams at project milestones or end-of-phase reviews where forward-looking considerations are as important as backward-looking reflection.

Format               Structure            Best use case               Psychological safety required
Start/Stop/Continue  Three buckets        General process review      Moderate
4Ls                  Four categories      Learning-oriented teams     Moderate
Sailboat             Visual metaphor      Mixed or newer teams        Moderate
Mad/Sad/Glad         Emotional            Post-difficulty processing  High
Five Whys            Root cause drilling  Recurring problems          High
Lean Coffee          Participant-driven   Self-directed teams         High
FLAP                 Forward-looking      Milestone reviews           Moderate
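One way to operationalize these selection heuristics is a simple lookup keyed on the team's objective and current safety level. The mapping below is a sketch of the guidance in this section — the key names are my own, and falling back to Start/Stop/Continue when safety is insufficient is an assumption, not a rule from any framework:

```python
# Hypothetical mapping from (objective, safety level) to a suggested format,
# following the per-format guidance described above.
FORMATS = {
    ("general_review", "moderate"):       "Start/Stop/Continue",
    ("learning", "moderate"):             "4Ls",
    ("mixed_audience", "moderate"):       "Sailboat",
    ("emotional_processing", "high"):     "Mad/Sad/Glad",
    ("recurring_problem", "high"):        "Five Whys",
    ("self_directed", "high"):            "Lean Coffee",
    ("milestone", "moderate"):            "FLAP",
}

def suggest_format(objective: str, safety: str) -> str:
    # When no heuristic matches — including when safety is too low for the
    # format the objective would suggest — default to the most general template.
    return FORMATS.get((objective, safety), "Start/Stop/Continue")

suggest_format("recurring_problem", "high")   # 'Five Whys'
suggest_format("recurring_problem", "low")    # 'Start/Stop/Continue'
```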

The Non-Negotiable: Psychological Safety

No retrospective format overcomes insufficient psychological safety. This is the most important factor in retrospective effectiveness, and the one most often treated as a given when it is not.

Psychological safety — the team climate in which people believe they can speak up, share concerns, and identify problems without fear of punishment, ridicule, or marginalization — was documented by Amy Edmondson at Harvard Business School as the primary predictor of team learning behavior. Her 1999 study found that psychologically safe teams surfaced errors and reported more problems — not because they made more errors, but because they reported them more honestly. Teams without psychological safety appeared to perform well on narrow measures while hiding the information that would have made them genuinely better.

"Psychological safety is not about being nice. It's about being honest. Teams with psychological safety don't sugarcoat bad news or shield the boss from reality. They say the hard thing because saying the hard thing is how the team gets better." -- Amy Edmondson, The Fearless Organization (2018)

In a retrospective, the most valuable information is also the most risky to raise: the interpersonal conflict everyone is tiptoeing around, the manager behavior that is slowing the team down, the process change that should happen but implicates someone with authority. In low-psychological-safety environments, these topics do not surface. The retrospective produces safe commentary on safe topics and changes nothing that matters.

Edmondson and Lei's 2014 review of psychological safety research, published in the Annual Review of Organizational Psychology and Organizational Behavior, found that the single strongest predictor of psychological safety is leader behavior — specifically, whether the leader models intellectual humility, acknowledges their own uncertainty and mistakes, and responds non-defensively to criticism. Teams take their cue from how the leader behaves, and they behave accordingly in retrospectives.

This has a direct implication: retrospectives facilitated by a manager who responds defensively to feedback about their own decisions will produce sanitized feedback indefinitely, regardless of which format is used.

Signs of Low Psychological Safety in Retrospectives

  • The same themes appear every sprint without resolution
  • Participation is uneven, with quiet members and a few dominant voices
  • Feedback is abstract rather than specific ("communication could be better" instead of "we need to decide who communicates with the stakeholder before sending messages")
  • No one ever raises concerns about the manager or team lead
  • Action items are consistently not completed, without anyone acknowledging this
  • Positive observations significantly outnumber negative ones even after difficult sprints

Building Psychological Safety

Safety is built over time through consistent behavior, not through a single exercise or icebreaker. Contributing factors include:

  • Facilitators modeling vulnerability: Sharing their own mistakes or uncertainties
  • No-blame norms explicitly stated and enforced: Distinguishing between systemic problems and individual failures
  • Managers responding non-defensively when their own behavior is discussed
  • Demonstrated follow-through: Teams that see action items acted on trust the process more, which increases willingness to raise real issues
  • Regular retrospectives: Frequency builds familiarity and trust over time
  • Anonymous input channels: Digital tools that allow anonymous sticky notes can bridge safety gaps while trust is building

Project Aristotle, Google's extensive research into team effectiveness (2016), found that psychological safety was the most important factor in team performance across all 180 teams studied — more important than having the right mix of skills, clear roles, or strategic direction. The finding reinforced Edmondson's earlier research and extended it to product and engineering teams in a contemporary setting.


Common Failure Modes

The Action Item Graveyard

Teams generate five, eight, or ten action items in a retrospective and review none of them at the next one. New items are added. The list grows. Nothing is ever implemented. The team learns that retrospectives are where good ideas go to die.

Fix: Limit action items to two or three per retrospective. Start every retrospective by reviewing progress on the previous items before generating new ones. If previous items are not done, understand why before adding more.

The Sanitized Session

In environments with low psychological safety, retrospectives produce inoffensive observations that do not touch the real issues. "We could improve our documentation" is a safe statement. "The product manager is providing requirements too late for us to scope them properly" is the same problem, stated usefully.

Fix: This requires psychological safety work, not format adjustment. Consider anonymous input collection (sticky notes, digital tools like EasyRetro or Parabol) as a bridge while safety is being built.

Facilitator Capture

When the manager or team lead facilitates their own team's retrospective, the team often produces feedback that is acceptable to that person rather than feedback that is accurate. The implicit power dynamic constrains honesty.

Fix: Rotate facilitation. Use external facilitators for teams where manager-as-facilitator dynamics are clearly distorting the output. Consider separating the facilitator role from the manager role explicitly.

The Complaint Session

Retrospectives without forward-looking structure can devolve into venting without resolution — a list of frustrations with no analysis of causes or commitment to specific changes.

Fix: Structure formats to require identification of potential actions, not just problems. Ensure the "what should we do differently" portion is as structured as the "what went wrong" portion.

Missing the "What Went Well"

Teams in difficult periods, under pressure, or with high negativity bias may spend the entire retrospective on problems and never identify what is working and should be protected. This produces a distorted picture and erodes morale.

Fix: Enforce time allocation for positive identification. Start with "what went well" to establish a balanced tone before moving to problems.

The Never-Ending Retro

Retrospectives that run long produce diminishing returns and leave teams feeling drained rather than energized. Past the 75-minute mark for most teams, the quality of input degrades and decisions made in the last portion of the meeting are often reversed or ignored.

Fix: Use a timer. Time-box each section explicitly. A structured agenda with allocated times makes it easy to protect the most important sections — action item generation in particular — from being squeezed by unfocused discussion earlier.

The One-Size-Fits-All Format

Using the same format every sprint regardless of what the team needs produces familiarity and boredom rather than insight. Teams learn to generate the expected responses to the expected questions without genuine reflection.

Fix: Vary formats deliberately. After a difficult sprint with high emotional content, use Mad/Sad/Glad. After a sprint where a specific recurring problem appeared again, use Five Whys. The format should serve the need, not the other way around.


Making Action Items Stick

The single most common reason retrospectives fail to produce improvement is the action item problem. Here is what works:

Limit to 2-3 items per retrospective. A short list of items with genuine follow-through beats a long list of aspirations no one acts on. Research on goal-setting by Locke and Latham (2002), published in American Psychologist, confirms that specificity and manageability of goals are primary predictors of achievement — broadly defined or excessive goal sets consistently underperform narrow, specific ones.

Assign a named owner, not "the team." "The team will document our deployment process" assigns responsibility to no one. "Alex will write the first draft of the deployment runbook by next Friday" assigns it to a specific person with a deadline.

Make items specific and behavioral. "Improve communication" is not an action item. "Hold a 10-minute alignment meeting every Tuesday before the sprint review" is. The test: can you tell whether the action was taken?

Review explicitly at the start of the next retrospective. Before generating new items, spend 5-10 minutes on the previous cycle's items. Were they done? If not, why not? Was the item unclear? Did priorities change? Did the owner not have capacity? The answers to these questions are as valuable as any new insight the current retrospective might generate.

Track somewhere visible. Action items that live only in retrospective notes get forgotten. A team task board, a running document the team regularly consults, or a standing agenda item in weekly standups keeps items visible.

Distinguish between experiment and commitment. Some action items are better framed as time-boxed experiments: "For the next two sprints, we will hold a 15-minute design review before any new story enters the sprint." This framing reduces resistance (it is temporary, not a permanent change) and builds in a natural evaluation point.
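The criteria above — named owner, due date, verifiable description, experiment versus commitment, a strict quantity cap — map naturally onto a small record type. A sketch, with field names and example items of my own choosing:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    description: str              # specific enough to verify completion
    owner: str                    # a named person, never "the team"
    due: date
    is_experiment: bool = False   # time-boxed trial vs. permanent change
    done: bool = False

def validate_retro_items(items: list[ActionItem], max_items: int = 3) -> None:
    """Enforce the follow-through rules: a short list, every item owned."""
    if len(items) > max_items:
        raise ValueError(f"limit action items to {max_items} per retrospective")
    for item in items:
        if item.owner.strip().lower() in ("", "the team", "everyone"):
            raise ValueError(f"unowned item: {item.description!r}")

items = [
    ActionItem("Alex drafts the deployment runbook", "Alex", date(2024, 6, 14)),
    ActionItem("15-minute design review before new stories (two-sprint trial)",
               "Priya", date(2024, 6, 28), is_experiment=True),
]
validate_retro_items(items)   # passes: two items, both owned, both dated
```

The `done` flag supports the review step: at the start of the next retrospective, the previous cycle's items are walked through before any new ones are created.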

The Retrospective Health Check

Henrik Kniberg and Spotify's engineering teams popularized a regular team health check — a structured self-assessment of team functioning across multiple dimensions — as a complement to retrospectives. Rather than waiting for problems to surface organically in retrospective discussion, the health check provides a systematic prompt across dimensions like sprint planning quality, delivery confidence, codebase health, and team morale.

Teams rate themselves on each dimension using a simple traffic-light system (green/yellow/red) and track changes across sprints. This approach ensures that important dimensions are not consistently overlooked simply because they do not organically arise in discussion — a common problem in teams where certain topics are implicitly treated as off-limits.
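A traffic-light health check is easy to track programmatically across sprints. The sketch below (the dimension names and numeric scoring are illustrative, not part of the Spotify model) flags dimensions whose latest rating is worse than the previous one:

```python
# Map traffic-light ratings to numbers so trends can be compared.
RATING = {"green": 2, "yellow": 1, "red": 0}

def declining_dimensions(history: dict[str, list[str]]) -> list[str]:
    """Return dimensions whose latest rating dropped since the prior check.

    history maps a dimension name to its ratings, oldest first.
    """
    flagged = []
    for dimension, ratings in history.items():
        if len(ratings) >= 2 and RATING[ratings[-1]] < RATING[ratings[-2]]:
            flagged.append(dimension)
    return flagged

history = {
    "delivery confidence": ["green", "green", "yellow"],
    "codebase health":     ["yellow", "yellow", "yellow"],
    "team morale":         ["green", "yellow", "red"],
}
declining_dimensions(history)   # ['delivery confidence', 'team morale']
```

A declining dimension is a prompt for the next retrospective's agenda — which is exactly the systematic coverage the health check is meant to provide.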


Remote Retrospectives

Distributed teams face specific challenges with retrospectives: lower social cue richness, more difficult facilitation, greater risk of participation inequality, and the awkwardness of silences on video calls.

Tools

Several tools are designed specifically for remote retrospectives:

  • Parabol: Free, open-source, integrates with Jira and GitHub
  • EasyRetro (formerly FunRetro): Simple, visual, widely used for distributed teams
  • Miro / MURAL: General whiteboard tools that work well for visual retrospective formats
  • Metro Retro: Specifically retrospective-focused with good real-time collaboration features
  • Reetro: Free option with good format variety
  • Retrium: Paid tool with strong facilitation guides built in

Adapting for Remote

Use asynchronous input collection. Ask team members to add sticky notes before the synchronous meeting. This equalizes participation (introverts contribute as much as extroverts), reduces groupthink (people form independent views before seeing others'), and makes the synchronous time more productive. Tools like Parabol and EasyRetro support asynchronous pre-population.

Protect against silence penalties. Video call silence is more uncomfortable than in-person silence. Build in explicit quiet time for individual thinking; do not rush to fill pauses.

Rotate facilitation deliberately. In remote settings, a dominant facilitator has more control over who speaks and how long topics run. Rotation distributes this influence.

Keep remote retros shorter than in-person ones. The fatigue of video calls reduces productive engagement time. A tight 45-60 minute remote retrospective often produces better output than a sprawling 90-minute one.

Time zone awareness for global teams. Teams spanning multiple time zones should rotate meeting times rather than consistently disadvantaging participants in earlier or later zones. A team with members in New York, London, and Singapore has no ideal time — but rotating the disadvantage distributes it fairly.

Use video deliberately. For retrospectives specifically, video-on norms matter more than for other meetings: the emotional content of retrospectives benefits from seeing faces. Teams that run retrospectives with cameras off lose significant signal about team morale and individual response.

A 2021 study by Cizek and colleagues in the Journal of Applied Psychology found that remote teams with explicit structured communication protocols — defined turn-taking, explicit time-boxing, and required participation signals (thumbs up/down reactions) — significantly outperformed teams using unstructured remote discussion on both participation equity and decision quality. These protocols matter more in retrospectives than in almost any other meeting type, because the retrospective's value depends on all voices being heard.


Running the Retrospective: A Session Structure

A functional sprint retrospective for a team of 5-10 people can follow this structure:

Check-in (5 minutes): A brief, non-work question to shift from task mode to reflection mode. "What is one word that describes your experience of the last sprint?" A mood thermometer. Something that activates presence and signals that this meeting is different from a status update.

Review previous action items (5-10 minutes): Status of last cycle's items. What was done? What was not, and why? This section should precede new discussion — teams that skip it reinforce the message that action items are not serious commitments.

Data gathering (15-20 minutes): The core format activity. Participants add input individually (silently) before sharing and discussing. The silence phase equalizes participation and prevents early dominant voices from anchoring the group's thinking.

Insight generation (10 minutes): Identifying themes, patterns, root causes. Moving from observations to understanding. This is the analytical layer that most teams skip, jumping directly from "here is what happened" to "here is what we should do" without asking why things happened.

Action item generation (10 minutes): Translating insights into specific, owned, time-bound commitments. Strict quantity limits apply. Each item must have a name, a description specific enough to verify completion, and a due date.

Close (5 minutes): Brief check-out. "What is one word for how you are leaving this meeting?" Rate the retrospective itself on a simple 1-5 scale. The retrospective is a practice that benefits from its own feedback loop — teams that track retrospective quality over time can identify when the practice is deteriorating and intervene before it becomes purely ceremonial.

The total: 50-60 minutes for a sprint retrospective. Post-mortems and milestone retrospectives may run longer, but most sprint retrospectives should not.
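The agenda is straightforward to encode as explicit time-boxes, which makes it easy to protect the sections that matter most (action item generation in particular). A sketch of the structure above, with each allocation taken from the midpoint of the ranges given:

```python
# Time-boxed agenda for a 5-10 person sprint retrospective,
# following the session structure described above.
AGENDA = [
    ("Check-in",                      5),
    ("Review previous action items",  8),
    ("Data gathering",               18),
    ("Insight generation",           10),
    ("Action item generation",       10),
    ("Close",                         5),
]

def schedule(agenda: list[tuple[str, int]]) -> list[tuple[str, int, int]]:
    """Return (section, start_minute, end_minute) for each section."""
    out, t = [], 0
    for name, minutes in agenda:
        out.append((name, t, t + minutes))
        t += minutes
    return out

total = sum(m for _, m in AGENDA)   # 56 minutes, inside the 50-60 target
```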


Blameless Post-Mortems: A Special Case

The blameless post-mortem is the incident-specific retrospective form, developed and popularized in engineering culture primarily by John Allspaw at Etsy and later codified in Google's Site Reliability Engineering practices.

The blameless post-mortem operates on a foundational principle: people do not come to work to do a bad job. When something fails, the failure is almost always a system failure — inadequate tooling, insufficient monitoring, unclear processes, unrealistic expectations — not an individual failure of will or competence. Attributing failures to individuals not only fails to fix the system but also prevents the honest diagnosis that would produce real improvement.

"Blameless post-mortems exist not to prove that no one was at fault, but to recognize that if we want to learn from failure, we must create environments where people can speak truthfully about what happened without fear that the truth will be used against them." -- John Allspaw, Blameless PostMortems and a Just Culture (2012)

Blameless post-mortems require significant psychological safety — arguably more than any other retrospective form. They also require explicit leadership commitment: if management says "blameless" but then disciplines individuals for actions revealed in post-mortems, the practice collapses immediately and permanently.

The structure of a blameless post-mortem differs from a sprint retrospective:

  1. Timeline reconstruction: A factual, chronological account of what happened. No blame language; just "at 14:23, the deployment was initiated."
  2. Contributing factors: What conditions made this failure possible? What could have gone differently?
  3. Detection: How was the problem discovered? How could detection have been faster?
  4. Response: What was the response? What worked and what did not?
  5. Action items: Specific, owned, time-bound changes to system, process, or tooling.
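The five-part structure lends itself to a simple record type, with the timeline kept factual and chronological. A sketch — the class and field names are my own, not a standard post-mortem schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PostMortem:
    """Blameless post-mortem record following the five-part structure above."""
    incident: str
    timeline: list[tuple[datetime, str]] = field(default_factory=list)
    contributing_factors: list[str] = field(default_factory=list)
    detection: str = ""
    response: str = ""
    action_items: list[str] = field(default_factory=list)

    def add_event(self, when: datetime, what: str) -> None:
        # Timeline entries are neutral statements of fact ("the deployment
        # was initiated"), kept chronological regardless of entry order.
        self.timeline.append((when, what))
        self.timeline.sort(key=lambda e: e[0])

pm = PostMortem("Checkout outage")
pm.add_event(datetime(2024, 3, 7, 14, 31), "Error rate alert fired")
pm.add_event(datetime(2024, 3, 7, 14, 23), "Deployment initiated")
[what for _, what in pm.timeline]
# ['Deployment initiated', 'Error rate alert fired']
```

Note what the record has no field for: responsible individuals. The schema itself encodes the blameless principle that failures are attributed to systems, not people.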

The blameless post-mortem is arguably the retrospective form with the strongest research support for organizational learning. Edmondson (2002) in Managing the Risk of Learning found that organizations that institutionalized blameless review processes after incidents improved their safety records and their operational performance significantly faster than those that used traditional root-cause analysis approaches that identified responsible individuals.


Retrospectives as Organizational Learning

The deeper purpose of retrospectives extends beyond fixing individual sprints. Teams that run effective retrospectives develop a learning culture — a shared orientation toward reflection, experimentation, and adjustment. This is harder to measure than a specific process improvement but more valuable.

Peter Senge's concept of the learning organization in The Fifth Discipline (1990) describes organizations that continuously expand their capacity to create results they truly desire. He identifies five disciplines: systems thinking, personal mastery, mental models, shared vision, and team learning. The retrospective, run well and consistently, is one of the few organizational practices that directly operationalizes team learning at the working-team level — where Senge's vision most often fails to reach.

The Tannenbaum and Cerasoli meta-analysis (2013) identified several factors that distinguish effective debrief practices from ineffective ones. Effective debriefs:

  • Focus on specific, behavioral observations rather than general impressions
  • Connect observations to outcomes (not just "what happened" but "what did that cause")
  • Involve all team members, not just leaders
  • Generate concrete, verifiable commitments to change
  • Are conducted close in time to the events being reviewed

Ineffective debriefs — which describe many retrospectives run in practice — do the opposite: they deal in generalities, involve some voices more than others, generate vague aspirations, and are often conducted weeks after the events that prompted them.

The retrospective is not a ceremony. It is a discipline. Run it badly — as a box to check, a complaint session, a list-generating exercise without follow-through — and it produces nothing. Run it well — with psychological safety, honest input, root cause thinking, and rigorous action item ownership — and it is one of the most reliable tools for sustained team improvement available.

The difference between teams that improve steadily and teams that plateau is not usually talent or technical skill. It is whether they have developed the discipline of looking honestly at how they work, generating specific hypotheses about what would make them work better, testing those hypotheses, and learning from what they find. The retrospective, at its best, is that discipline in institutional form.

Frequently Asked Questions

What is a retrospective in agile?

An agile retrospective is a structured team meeting held at the end of a sprint, project phase, or defined time period, in which the team reflects on how it worked together -- what went well, what did not, and what specific changes it will make. Originating in Scrum methodology, the retrospective is one of the five Scrum ceremonies and is designed to drive continuous improvement of team process, communication, and collaboration rather than reviewing the product itself.

What are the most common retrospective formats?

The most widely used formats include Start/Stop/Continue (what should we start doing, stop doing, and continue doing), the 4Ls (Liked, Learned, Lacked, Longed For), the Sailboat (wind in your sails represents what is helping you; anchors are what is holding you back), Mad/Sad/Glad (sorting feelings into three emotional categories), and the Five Whys (drilling down to root causes). Format choice depends on team maturity, the specific issues being addressed, and how much psychological safety exists in the team.

Why do most retrospectives fail to produce real change?

The most common failure is generating action items without ownership, deadlines, or follow-up. Teams list improvements enthusiastically, leave the meeting with a long list, and revisit none of it at the next retrospective. Other failure modes include insufficient psychological safety (people say what they think is safe rather than what is true), facilitator dominance (the meeting becomes a top-down feedback session rather than genuine team reflection), and focusing only on negatives (missing what is working and should be preserved or amplified).

Why is psychological safety essential for retrospectives?

Psychological safety -- the belief that one can speak up without fear of punishment or humiliation -- is a prerequisite for retrospectives to surface real problems. Without it, teams perform retrospectives: they say things that are socially acceptable rather than diagnostically useful. The most valuable information in a retrospective (the interpersonal conflict no one is addressing, the process that slows everyone down, the decision that was clearly wrong) is also the most risky to raise. Teams with low psychological safety produce sanitized retrospectives that change nothing.

How should retrospective action items be structured to ensure follow-through?

Effective retrospective action items should be specific (a concrete behavior or change, not a vague aspiration), assigned to a named person (not 'the team'), given a deadline or target date, limited in number (two or three maximum per retrospective), and reviewed explicitly at the start of the next retrospective before new items are generated. The SMART framework (Specific, Measurable, Achievable, Relevant, Time-bound) is a useful check. Teams that carry forward incomplete items before adding new ones create accountability that teams with unlimited lists never achieve.