Data-Driven Decision Making Explained: Using Data to Improve Outcomes
In 2007, Netflix faced a pivotal decision: how to allocate their content budget in the streaming era. They had data showing which DVDs customers rented, but streaming was new territory. Instead of relying on Hollywood intuition about what "audiences want," Netflix built sophisticated data systems tracking every interaction: what users watched, when they paused, what they abandoned, what they binged.
This data revealed surprising patterns. David Fincher films had loyal followings. Political dramas engaged specific demographics intensely. Kevin Spacey had strong appeal. The data suggested a combination—political drama directed by Fincher starring Spacey—would find an audience.
The result: House of Cards, Netflix's first major original series. It became a cultural phenomenon, validating Netflix's data-driven approach and launching their transformation from DVD rental to content production powerhouse.
But the story has a crucial nuance. The decision to greenlight House of Cards wasn't purely data-driven. Netflix executives combined data insights with creative judgment, industry expertise, and strategic vision. Data informed—but didn't dictate—the decision.
This balance between data and judgment is the essence of effective data-driven decision making. Pure data fetishism—blindly following analytics without context, expertise, or critical thinking—fails as reliably as pure intuition-based decision making. The goal isn't replacing human judgment with data. It's augmenting human judgment with data insights, creating a decision-making approach superior to either alone.
This article explains data-driven decision making comprehensively: what it means, why it works, its benefits and pitfalls, how to implement it effectively, what types of decisions it suits, how to balance data with intuition, what infrastructure and culture changes are required, and practical frameworks for integrating data into decision processes.
Defining Data-Driven Decision Making
Data-driven decision making uses quantitative data and systematic analysis to guide choices, rather than relying primarily on intuition, opinions, hierarchy, or tradition.
Core Components
1. Data collection: Gathering relevant information systematically
2. Analysis: Identifying patterns, trends, correlations, and insights
3. Hypothesis testing: Evaluating assumptions against evidence
4. Insight integration: Using findings to inform decisions
5. Outcome measurement: Tracking whether decisions achieved goals
6. Learning loops: Improving future decisions based on results
What Data-Driven Doesn't Mean
Common misconceptions to dispel:
| Misconception | Reality |
|---|---|
| Data replaces judgment | Data informs judgment; experts interpret and apply insights |
| All decisions need extensive analysis | Use data proportional to decision stakes and availability |
| Intuition is invalid | Expertise and intuition remain essential, especially where data is limited |
| More data is always better | Quality and relevance matter more than quantity |
| Data eliminates uncertainty | Data reduces uncertainty but rarely eliminates it |
| Data-driven means only quantitative | Qualitative data (interviews, observations) also matters |
Data-driven decision making is a spectrum, not binary. Some decisions warrant extensive data analysis; others require quick judgment with minimal data. The art is matching analytical investment to decision importance and data availability.
Why Data-Driven Decision Making Works
Understanding the mechanisms reveals when and how to apply it effectively.
Reason 1: Reduces Cognitive Bias
Human intuition is systematically biased. Psychologist Daniel Kahneman documented dozens of cognitive biases affecting judgment:
- Confirmation bias: Seeking information supporting existing beliefs
- Availability bias: Overweighting recent or vivid examples
- Anchoring: Over-relying on first information encountered
- Overconfidence: Overestimating accuracy of judgments
- Hindsight bias: Believing outcomes were predictable after the fact
Data counters these biases by forcing confrontation with objective evidence. Well-analyzed data doesn't care about your intuition—it reflects reality.
Example: Manager believes sales training program is working (confirmation bias—remembers success stories). Data shows no statistically significant difference in sales between trained and untrained reps. Data contradicts intuition, prompting reevaluation.
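A minimal sketch of how such a check might look, using a two-sample (Welch's) t-test on hypothetical monthly sales figures; the numbers and the 5% threshold are illustrative assumptions:

```python
# Compare trained vs. untrained reps with a two-sample (Welch's) t-test.
# The sales figures below are hypothetical, not real data.
from scipy import stats

trained_sales = [52, 48, 61, 55, 49, 58, 53, 50, 57, 54]    # deals closed per rep
untrained_sales = [50, 47, 59, 56, 48, 55, 52, 51, 54, 53]

t_stat, p_value = stats.ttest_ind(trained_sales, untrained_sales, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Statistically significant difference at the 5% level.")
else:
    print("No significant difference - the training effect is unproven.")
```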
Reason 2: Detects Patterns Humans Miss
Human brains excel at pattern recognition in evolutionarily familiar contexts but struggle with:
- Large datasets: Can't mentally process thousands of data points
- Weak signals: Subtle patterns buried in noise
- Non-linear relationships: Complex interactions between variables
- Statistical significance: Distinguishing real patterns from random variation
Data analysis tools excel here: regression analysis, clustering algorithms, time-series analysis, and machine learning models can identify patterns invisible to human observation.
Example: Retailer analyzes transaction data, discovers customers who buy product A also frequently buy product B (not intuitively obvious). Cross-promotion increases sales of both.
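A minimal sketch of this kind of co-purchase analysis on hypothetical transaction baskets, computing support and lift for product pairs (all basket contents are invented for illustration):

```python
# Count how often product pairs appear together and compute "lift":
# observed co-occurrence vs. what independent purchasing would predict.
from collections import Counter
from itertools import combinations

baskets = [
    {"A", "B"}, {"A", "B", "milk"}, {"A", "B", "bread"}, {"milk", "bread"},
    {"A", "B"}, {"milk"}, {"bread"}, {"A", "milk"},
]

n = len(baskets)
item_counts = Counter(item for basket in baskets for item in basket)
pair_counts = Counter(
    pair for basket in baskets for pair in combinations(sorted(basket), 2)
)

for (x, y), together in pair_counts.items():
    support = together / n                       # P(x and y bought together)
    lift = support / ((item_counts[x] / n) * (item_counts[y] / n))
    if lift > 1.2:                               # bought together more often than chance
        print(f"{x} + {y}: support={support:.2f}, lift={lift:.2f}")
```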
Reason 3: Enables Experimentation
A/B testing and controlled experiments allow systematic comparison of alternatives.
Traditional approach: Implement the change, hope it works, and struggle to know whether outcomes are due to the change or to other factors.
Data-driven approach: Test change with subset, measure outcomes against control group, verify improvement before full rollout.
Example: Website redesign. Instead of replacing entire site based on designer opinion, test new design with 10% of traffic. Data shows conversion rate improves 15%. Roll out confidently. If data showed decline, avoid costly mistake.
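A minimal sketch of how that comparison might be evaluated, using a two-proportion z-test on hypothetical traffic figures (visitor counts, conversion numbers, and the significance threshold are assumptions):

```python
# Two-proportion z-test comparing conversion rates of control vs. variant.
from statistics import NormalDist
from math import sqrt

control_visitors, control_conversions = 10_000, 800   # 8.0% baseline
variant_visitors, variant_conversions = 10_000, 920   # 9.2% with new design

p1 = control_conversions / control_visitors
p2 = variant_conversions / variant_visitors
p_pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)

se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_visitors + 1 / variant_visitors))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

print(f"lift = {(p2 - p1) / p1:+.1%}, z = {z:.2f}, p = {p_value:.4f}")
# Roll out only if the lift is positive AND the p-value clears your threshold.
```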
Reason 4: Provides Accountability
Intuition-based decisions are hard to evaluate objectively. "I felt this was right" is unfalsifiable.
Data-driven decisions create clear rationale: "We chose option A because data showed X, Y, and Z." This enables:
- Learning: When decisions fail, analyze why the data misled you (bad data? bad interpretation? changed circumstances?)
- Confidence: Defend decisions to stakeholders with evidence
- Improvement: Systematic feedback loops improve future decisions
Reason 5: Scales Decision Quality
Expert intuition doesn't scale—it's locked in individual experts' heads. Data-driven approaches can be:
- Codified: Documented so others can apply
- Automated: Routine decisions made by algorithms
- Distributed: Non-experts can make better decisions with data access
- Consistent: Same data produces same decision, reducing variance
Example: Loan approval. Expert bankers make intuitive judgments—inconsistent, slow, doesn't scale. Data-driven credit scoring models make consistent, fast decisions at scale, with better default prediction than average human judgment.
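A minimal sketch of such a scoring model on synthetic data; the features, generated labels, and approval threshold are all illustrative assumptions, not a real scorecard:

```python
# Fit a simple logistic-regression credit scorer on synthetic applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
income_k = rng.normal(50, 15, n)        # annual income in $ thousands
debt_ratio = rng.uniform(0.0, 0.8, n)   # debt-to-income ratio

# Synthetic ground truth: higher debt ratio and lower income raise default risk.
default_prob = 1 / (1 + np.exp(-(4 * debt_ratio - income_k / 25)))
defaulted = rng.random(n) < default_prob

X = np.column_stack([income_k, debt_ratio])
model = LogisticRegression(max_iter=1000).fit(X, defaulted)

applicant = [[42, 0.55]]                # hypothetical new applicant
risk = model.predict_proba(applicant)[0, 1]
print(f"Estimated default risk: {risk:.1%} -> {'decline' if risk > 0.30 else 'approve'}")
```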
The Benefits of Data-Driven Decision Making
Beyond theoretical advantages, empirical evidence shows concrete benefits.
Benefit 1: Improved Business Performance
Research consistently shows data-driven organizations outperform competitors:
- MIT study (Brynjolfsson & McElheran, 2016): Firms adopting data-driven decision making had 5-6% higher productivity and profitability than expected based on other investments
- Harvard study: Top third of companies in data-driven decision making were, on average, 5% more productive and 6% more profitable than competitors
- McKinsey research: Data-driven organizations 23x more likely to acquire customers, 6x more likely to retain customers, 19x more likely to be profitable
Mechanisms: Better resource allocation, faster identification of what works, reduced waste on ineffective initiatives.
Benefit 2: Faster Learning Cycles
Data creates feedback loops accelerating organizational learning.
Without data: Implement strategy → wait months/years → subjective assessment of success → slow iteration.
With data: Implement strategy → measure outcomes continuously → identify what's working/failing quickly → rapid iteration.
Example: Amazon tests thousands of changes annually through A/B tests. Winning variants deployed immediately; losers killed fast. Compounding effect of thousands of small improvements drives massive advantage.
Benefit 3: Alignment Across Organization
Data provides common ground for discussions, reducing organizational politics.
Traditional: Decisions influenced by who has power, who argues most persuasively, organizational politics.
Data-driven: Decisions grounded in evidence accessible to all. Harder (though not impossible) for politics to override clear data.
Cultural effect: Shifts the norm from "defer to the highest-paid person's opinion" to "let's look at the data."
Benefit 4: Risk Reduction
Data-driven approaches reduce costly mistakes:
- Testing before full rollout: Identify failures small-scale before expensive company-wide deployment
- Quantified uncertainty: Models provide probability ranges, not false certainty
- Early warning signals: Dashboard metrics flag problems before they escalate
Example: Netflix tests content with small audiences before major marketing spend. Low engagement = cut losses. High engagement = invest heavily.
Benefit 5: Competitive Advantage
In many industries, data itself becomes competitive moat:
- Network effects: More users → more data → better product → more users
- Proprietary insights: Data no competitor has
- Operational excellence: Data-optimized processes competitors can't match without equivalent data
Examples: Google's search quality, Amazon's recommendation engine, Spotify's music discovery—all built on data scale competitors can't replicate.
The Pitfalls: When Data-Driven Goes Wrong
Data-driven decision making has failure modes. Recognizing them prevents costly mistakes.
Pitfall 1: Measuring What's Easy, Not What Matters
Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
Problem: Organizations measure proxies—easily quantifiable variables—mistaking them for true goals.
Example: Call center measured "calls handled per hour" to improve productivity. Employees rushed calls, providing poor service to hit targets. Customer satisfaction plummeted. The measurable proxy (call volume) diverged from actual goal (customer problem resolution).
Fix: Define goals clearly before metrics. Ask: "If this metric improves but our real goal doesn't, what would that look like?" If such scenarios are plausible, the metric is the wrong one.
Pitfall 2: Analysis Paralysis
Problem: Endless data gathering and analysis delays decisions. Perfect information never arrives.
Example: Startup spends six months analyzing market data before launching product. Competitor launches in six weeks with minimal data, learns from real customers, iterates, captures market while startup is still analyzing.
Fix: Time-box analysis. Set decision deadline. Gather best available data in that window. Decide. Implement. Learn from outcomes. Iteration beats pre-analysis for many decisions.
Pitfall 3: Ignoring Context and Causation
Problem: Data shows correlation; decision-makers assume causation without investigating mechanisms.
Example: Data shows customers who use feature X have higher retention. Company invests heavily promoting feature X. Retention doesn't improve. Why? Power users (already highly engaged) used feature X. Using X didn't cause retention—being a power user caused both X usage and retention. Correlation confused with causation.
Fix: Always ask "Why?" and "How?" Don't accept correlations at face value. Test causal hypotheses when possible through experiments.
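A minimal sketch of one such check: stratify hypothetical retention data by a suspected confounder (power-user status) and see whether the raw correlation survives. The data and column names are invented:

```python
# Compare retention by feature-X usage overall, then within engagement segments.
import pandas as pd

df = pd.DataFrame({
    "power_user":     [True] * 6 + [False] * 6,
    "used_feature_x": [True, True, True, True, False, False,
                       True, True, False, False, False, False],
    "retained":       [True, True, True, True, True, True,
                       True, False, True, True, False, False],
})

# Naive view: feature-X users look better retained.
print(df.groupby("used_feature_x")["retained"].mean())

# Controlled view: within each segment the feature makes no difference,
# suggesting engagement (power_user) drives both X usage and retention.
print(df.groupby(["power_user", "used_feature_x"])["retained"].mean())
```

In this toy data the feature looks beneficial overall but makes no difference within either segment, which is the signature of a confounded correlation.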
Pitfall 4: Historical Bias
Problem: Data reflects past, but decisions are about future. When circumstances change, historical data misleads.
Example: Pre-COVID data suggested office space investments were wise (growing demand). COVID radically changed context. Historical data became poor predictor.
Fix: Scenario planning. Ask: "Under what conditions would historical patterns break?" Monitor for those conditions. Weight recent data more heavily when contexts shift rapidly.
Pitfall 5: Ignoring the Unmeasurable
Problem: Important factors often resist quantification—culture, morale, creativity, strategic positioning, brand perception.
Data-driven organizations sometimes neglect these because they're not in dashboards.
Example: Company optimizes everything for short-term metrics (revenue, conversion, engagement). Neglects brand building, employee development, product quality—hard to measure immediately but crucial long-term. Short-term data optimization destroys long-term value.
Fix: Explicitly discuss qualitative factors. Ask: "What important considerations aren't in our data?" Create mechanisms capturing qualitative input (regular surveys, interviews, team discussions).
Pitfall 6: Data Quality Issues
"Garbage in, garbage out." Bad data produces bad decisions.
Common issues:
- Selection bias: Data not representative
- Measurement error: Instruments or processes are inaccurate
- Missing data: Important variables not captured
- Data manipulation: People gaming metrics distorts data
- Technical errors: Bugs, pipeline failures, calculation mistakes
Fix: Data quality processes. Validate data sources. Check for anomalies. Document assumptions. Build data literacy so people recognize quality issues.
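A minimal sketch of routine data-quality checks, using pandas on a small hypothetical orders table (column names, example rows, and the checks themselves are assumptions):

```python
# Flag duplicates, missing values, and out-of-range rows before analysis.
import pandas as pd

# Hypothetical orders extract; in practice this would come from your warehouse.
orders = pd.DataFrame({
    "order_id": [101, 102, 102, 104],
    "amount": [59.0, None, 23.5, -10.0],
    "created_at": ["2024-03-01", "2024-03-02", "2024-03-02", "2099-01-01"],
})

issues = {
    "duplicate_order_ids": int(orders["order_id"].duplicated().sum()),
    "missing_amounts": int(orders["amount"].isna().sum()),
    "negative_amounts": int((orders["amount"] < 0).sum()),
    "future_dates": int((pd.to_datetime(orders["created_at"]) > pd.Timestamp.now()).sum()),
}

for check, count in issues.items():
    print(f"{check}: {count}")
# Investigate or quarantine flagged rows before they feed a decision.
```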
Pitfall 7: Cherry-Picking Data
Problem: Selecting data supporting predetermined conclusions while ignoring contradictory evidence.
Example: CEO wants to acquire company. Presents data on synergies and growth potential. Ignores data on cultural misfit, integration challenges, financial risks. Board approves based on incomplete picture.
Fix: Pre-specify analysis. Define what data you'll examine and how you'll decide before looking at data. Reduces opportunity for motivated reasoning. Assign devil's advocate to challenge interpretations.
Implementing Data-Driven Decision Making: A Practical Framework
Transitioning to data-driven approaches requires systematic change across infrastructure, processes, and culture.
Phase 1: Build Data Foundation
Step 1: Define key decisions and questions
Start by identifying what decisions you want to improve. For each:
- What question needs answering?
- What decision will result?
- What data would inform this?
Example: "Should we expand to new market?" needs data on: market size, competition, customer demand, required investment, expected ROI, risks.
Step 2: Identify required data and gaps
Map current data availability:
- What do we already collect?
- What's accessible but not analyzed?
- What's not collected but needed?
- What's impossible to collect?
Prioritize gaps by: decision importance, ease of collection, and potential insight value.
Step 3: Build data infrastructure
Technical requirements:
- Data storage: Databases, data warehouses
- Data pipelines: Automated collection and cleaning
- Analysis tools: SQL, Python/R, BI platforms
- Dashboards: Visualizations for decision-makers
- Data governance: Quality processes, documentation, security
Start simple: Don't build an enterprise data warehouse before you've proven value. Begin with spreadsheets and simple tools. Scale as the value becomes clear.
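A minimal "start simple" sketch: a local SQLite table plus one query can stand in for a first data warehouse and dashboard. Table, column, and metric names here are assumptions:

```python
# A tiny self-contained "warehouse": one table, one routine dashboard query.
import sqlite3

conn = sqlite3.connect(":memory:")  # swap for a file path to persist
conn.execute("CREATE TABLE signups (signup_date TEXT, channel TEXT, converted INTEGER)")
conn.executemany(
    "INSERT INTO signups VALUES (?, ?, ?)",
    [("2024-03-01", "ads", 1), ("2024-03-01", "organic", 0),
     ("2024-03-02", "ads", 0), ("2024-03-02", "organic", 1)],
)

# Conversion rate by channel: the kind of routine question a dashboard answers.
for channel, rate in conn.execute(
    "SELECT channel, AVG(converted) FROM signups GROUP BY channel"
):
    print(f"{channel}: {rate:.0%}")
```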
Phase 2: Create Analytical Capabilities
Step 4: Build or acquire analytical talent
Options:
- Hire analysts: Data analysts, data scientists, business intelligence roles
- Train existing employees: Data literacy programs
- Partner with external experts: Consultants or agencies for specialized analysis
- Use vendor tools: Many platforms include built-in analysis
The right approach depends on scale, complexity, budget, and the strategic importance of a data advantage.
Step 5: Establish analysis standards
Prevent inconsistent or poor-quality analysis:
- Methodology documentation: How to analyze common questions
- Review processes: Peer review of important analyses
- Best practices: Statistical rigor, avoiding common errors
- Tool standardization: Reduce fragmentation
Phase 3: Integrate Data into Decision Processes
Step 6: Embed data in workflows
Make data accessible at decision points:
- Dashboards for routine decisions: Real-time metrics informing operational choices
- Briefing documents: Analyses prepared for strategic decisions
- Self-service tools: Enable decision-makers to explore data themselves
- Meeting templates: Require data review as standard agenda item
Step 7: Start with low-stakes decisions
Build confidence and capability:
- Choose decisions with: clear metrics, fast feedback, limited downside if wrong
- Run experiments and measure outcomes
- Document learnings
- Celebrate data-driven successes
- Analyze failures to improve
Step 8: Establish feedback loops
Pre-decision: What data informs decision? What do we expect to happen?
Post-decision: What actually happened? Did outcomes match expectations? Why or why not?
Learning: What does this teach about our decision-making? How should we adjust?
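One way to make this loop concrete is a lightweight decision log. A minimal sketch, with hypothetical fields and an invented example entry:

```python
# Record expectations before a decision and outcomes after, so results can be compared.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision: str
    data_used: str
    expectation: str          # pre-decision: what we expect to happen
    outcome: str = "pending"  # post-decision: what actually happened
    lesson: str = ""

log = [
    DecisionRecord(
        decision="Shift 20% of ad budget to channel B",
        data_used="Last quarter's cost per acquisition by channel",
        expectation="Blended CPA drops at least 10% within 8 weeks",
    )
]

# After the review date, close the loop.
log[0].outcome = "CPA dropped 6%"
log[0].lesson = "Channel B saturated faster than historical data suggested"
print(log[0])
```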
Phase 4: Build Data-Driven Culture
Step 9: Leadership modeling
Culture changes when leaders demonstrate new behaviors:
- Publicly use data in decisions
- Ask for data in meetings ("What does the data show?")
- Acknowledge when data contradicts intuition—and follow data
- Reward employees who make data-informed decisions
- Create psychological safety to report negative data
Step 10: Democratize data access
Prevent data from becoming a source of power hoarding and gatekeeping:
- Make dashboards and reports accessible
- Train people to interpret data
- Create data champions in each team
- Document data sources and definitions
- Encourage experimentation and analysis
Step 11: Balance data with other inputs
Emphasize data as input, not replacement for judgment:
- Explicitly discuss qualitative factors
- Encourage debate when data conflicts with expertise
- Acknowledge uncertainty
- Document reasoning including non-data factors
- Review decisions holistically, not just quantitative outcomes
Matching Decisions to Appropriate Analytical Approaches
Not all decisions need the same level of data analysis. Matching approach to context improves efficiency.
High-Stakes, High-Uncertainty Decisions
Characteristics: Major strategic choices, large resource commitments, uncertain outcomes, one-time or rare.
Examples: Market entry, M&A, major product launches, strategic pivots.
Appropriate approach:
- Extensive data gathering and analysis
- Multiple analytical methods
- External benchmarks and comparisons
- Scenario modeling (best/base/worst cases; see the sketch after this list)
- Expert consultation
- Robust debate integrating data with judgment
- Post-decision tracking to learn
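A minimal sketch of the scenario-modeling step on hypothetical figures; the probabilities and NPVs are invented for illustration:

```python
# Weight best/base/worst outcomes by probability to get an expected value,
# and check the downside separately.
scenarios = {
    "best":  {"probability": 0.2, "npv_musd": 40},
    "base":  {"probability": 0.5, "npv_musd": 12},
    "worst": {"probability": 0.3, "npv_musd": -15},
}

expected_npv = sum(s["probability"] * s["npv_musd"] for s in scenarios.values())
downside = min(s["npv_musd"] for s in scenarios.values())

print(f"Expected NPV: ${expected_npv:.1f}M, worst case: ${downside}M")
# A positive expected value with a survivable worst case supports proceeding;
# a catastrophic worst case argues for staging the bet or gathering more data.
```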
Repeated Operational Decisions
Characteristics: High volume, similar structure, measurable outcomes, made frequently.
Examples: Pricing, inventory, hiring, resource allocation, marketing spend.
Appropriate approach:
- Build decision models or algorithms
- Use historical data to optimize
- Automate routine cases
- Review aggregate performance periodically
- Iterate on model as outcomes measured
- Human review only for edge cases or anomalies
Experimental Decisions
Characteristics: Trying new approaches, testing hypotheses, learning orientation.
Examples: Product features, marketing messages, process changes, growth tactics.
Appropriate approach:
- A/B testing or controlled experiments
- Small-scale pilots before full rollout
- Clear metrics defined upfront
- Statistical rigor in analysis
- Fast iteration based on results
- Culture encouraging experimentation
Irreversible or Ethical Decisions
Characteristics: Can't be undone, involve values, affect people significantly, no clear metric.
Examples: Layoffs, ethical dilemmas, mission changes, legal issues.
Appropriate approach:
- Data as one input, not primary driver
- Heavy emphasis on values, ethics, stakeholder input
- Careful consideration of long-term consequences
- Qualitative analysis and discussion
- Leadership judgment essential
- External perspective valuable
Balancing Data with Intuition and Expertise
The best decision-making integrates both—but how?
When to Weight Data More Heavily
Situations favoring data:
- Large sample sizes available
- Repeated similar decisions (patterns exist)
- Low stakes (experimentation cheap)
- Historical patterns likely to hold
- Personal experience limited
- Cognitive biases likely (emotionally charged decisions)
Example: Optimizing ad spend. Lots of data, patterns clear, experimentation feasible. Weight data heavily.
When to Weight Expertise More Heavily
Situations favoring intuition/expertise:
- Novel situations (no historical precedent)
- Rapidly changing contexts (historical data stale)
- Complex qualitative factors
- Ethical or value-laden choices
- Data quality poor or unavailable
- Long-term strategic vision required
Example: Founder deciding company culture and values. Limited relevant data, deeply personal, requires vision. Weight judgment heavily.
Integration Framework
Step 1: Start with data—what does evidence suggest?
Examine relevant data systematically. What patterns exist? What do analyses recommend?
Step 2: Apply expertise—does this align with domain knowledge?
Do the data's conclusions make sense given your understanding? If the data suggests something surprising, is there a plausible explanation, or is it likely an artifact?
Step 3: Consider context—what's missing from data?
What relevant factors aren't captured in data? How might current situation differ from historical patterns?
Step 4: Check assumptions—are data and analysis sound?
Data quality good? Appropriate methods used? Causal logic valid? Potential biases in data or analysis?
Step 5: Weight confidence—how reliable is data vs. strength of intuition?
Large, high-quality dataset with clear signal? Weight data heavily. Small, noisy data with weak signal? Weight expertise more.
Step 6: Test when possible—use experiments before full commitment
If data and intuition conflict, can you test small-scale? Learn cheaply before big bets.
Step 7: Document reasoning—explain both data and intuition factors
Record what data showed, what expertise suggested, how you weighed them, what you decided. Enables learning from outcomes.
Step 8: Review outcomes—did data or intuition prove correct?
Track whether decisions worked. Over time, learn when to trust data vs. intuition in different contexts.
Essential Metrics and KPI Frameworks
What to measure depends on context, but general principles apply.
Characteristics of Good Metrics
1. Aligned with goals: Directly relates to what you care about
2. Actionable: You can influence it through decisions
3. Understandable: Clear what it measures, why it matters
4. Timely: Updated frequently enough for decision-making
5. Accurate: Measured reliably and correctly
6. Cost-effective: Value of insight exceeds measurement cost
Types of Metrics
| Type | Description | Example |
|---|---|---|
| Leading indicators | Predict future outcomes | Sales pipeline, user engagement |
| Lagging indicators | Measure past results | Revenue, profit, customer retention |
| Input metrics | What you control directly | Ad spend, hours worked, features shipped |
| Output metrics | Results achieved | Customers acquired, revenue, quality |
| Efficiency metrics | Ratio of outputs to inputs | Cost per acquisition, revenue per employee |
| Health metrics | Organizational capability | Employee satisfaction, technical debt, brand strength |
Balanced Metric Systems
Avoid over-indexing on any single metric. Frameworks for balance:
Balanced Scorecard (Kaplan & Norton):
- Financial: Profitability, growth, shareholder value
- Customer: Satisfaction, retention, acquisition
- Internal processes: Quality, efficiency, innovation
- Learning and growth: Employee development, capabilities, culture
AARRR Framework (Pirate Metrics for startups):
- Acquisition: How users find you
- Activation: First positive experience
- Retention: Users return
- Referral: Users tell others
- Revenue: Monetization
OKRs (Objectives and Key Results):
- Objectives: Qualitative goals
- Key Results: Quantitative measures of progress
- Links strategy to measurable outcomes
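As a concrete illustration of the AARRR framework above, a minimal sketch computing stage-to-stage conversion from hypothetical weekly counts (stage numbers are invented):

```python
# Stage-to-stage conversion rates through an AARRR funnel.
funnel = {
    "acquisition": 10_000,   # visitors
    "activation": 2_500,     # completed onboarding
    "retention": 1_200,      # returned within 7 days
    "referral": 300,         # invited someone
    "revenue": 180,          # paid
}

stages = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: {count / prev_count:.1%} conversion")
```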
Case Studies: Data-Driven Decisions in Practice
Case 1: LinkedIn's Data-Driven Growth
Challenge: Scale user growth and engagement
Approach:
- Built sophisticated experimentation platform
- Every product change tested with controlled experiments
- Measured engagement, retention, virality
- Iterated rapidly based on data
Key insight: Data showed that connection recommendations were the most impactful feature for engagement—users who connected with 5+ people in their first week had far higher retention.
Action: Focused new user experience on driving early connections. Simple data-driven insight drove explosive growth.
Lesson: High-velocity experimentation enables systematic optimization.
Case 2: Target's Pregnancy Prediction
Challenge: Identify pregnant customers early to market relevant products
Approach:
- Analyzed purchase patterns of known pregnant customers (registry data)
- Built predictive model identifying pregnancy from shopping behavior
- Scored all customers on pregnancy likelihood
Result: Successfully identified pregnant customers before they announced publicly, enabling targeted marketing.
Controversy: Raised privacy concerns when a teenager's pregnancy was revealed to her family through targeted ads.
Lesson: Data-driven decisions can be technically successful but ethically problematic. Consider broader implications beyond narrow metrics.
Case 3: Google's People Analytics
Challenge: Retain high-performing employees, improve management quality
Approach:
- Analyzed data on employee performance, satisfaction, retention
- Identified characteristics of best managers
- Tested management training interventions
Key finding: Eight behaviors distinguished great managers (provide support and career development, empower team, communicate clearly, have technical expertise, etc.).
Action: Training programs focused on these eight behaviors. Measurable improvement in employee satisfaction and retention.
Lesson: Even "soft" people management can be improved through data-driven approaches.
Conclusion: Data Informs, Humans Decide
The future belongs to organizations that effectively combine data insights with human judgment—not to those that blindly follow algorithms or stubbornly reject evidence.
The key insights:
1. Data-driven doesn't mean data-only—the goal is augmenting human decision-making with data insights, not replacing judgment with algorithms. Best decisions integrate quantitative evidence with qualitative context, domain expertise, and values.
2. Benefits are real and significant—organizations that effectively use data consistently outperform competitors on productivity, profitability, customer acquisition, and retention. The competitive advantage of data capabilities compounds over time.
3. Pitfalls are common and costly—measuring proxies instead of goals, analysis paralysis, ignoring causation, historical bias, neglecting unmeasurable factors, and poor data quality undermine data-driven efforts. Awareness prevents these failure modes.
4. Implementation requires systematic change—infrastructure (data systems), capabilities (analytical talent), processes (embedding data in workflows), and culture (leadership modeling, psychological safety, balanced perspective).
5. Match analytical approach to decision type—high-stakes strategic decisions warrant extensive analysis; repeated operational decisions should be automated; experimental decisions need rigorous testing; ethical decisions require judgment beyond data.
6. Balance evolves with context—situations differ in data availability, stakes, uncertainty, and time pressure. Skilled decision-makers weight data vs. intuition appropriately for each context, and improve this judgment through deliberate practice and feedback.
7. Culture is the ultimate driver—technical capabilities mean nothing if culture doesn't value evidence, if leaders don't model data use, if people fear negative data, or if politics override evidence. Cultural transformation enables everything else.
Netflix didn't become dominant purely through data—Ted Sarandos' creative judgment, Reed Hastings' strategic vision, and organizational culture of innovation were equally essential. But data gave them advantages competitors lacked: understanding audiences deeply, testing efficiently, learning rapidly, allocating resources optimally.
As W. Edwards Deming observed: "In God we trust. All others must bring data." But George Box added crucial nuance: "All models are wrong, but some are useful."
The synthesis: Bring data. Use it to inform, challenge, and enhance judgment. But never let data absolve humans from the responsibility to think, question, and decide. That responsibility cannot be delegated to dashboards—nor should it be.
References
Brynjolfsson, E., & McElheran, K. (2016). The rapid adoption of data-driven decision-making. American Economic Review, 106(5), 133–139. https://doi.org/10.1257/aer.p20161016
Davenport, T. H. (2006). Competing on analytics. Harvard Business Review, 84(1), 98–107.
Davenport, T. H., & Harris, J. G. (2007). Competing on analytics: The new science of winning. Harvard Business School Press.
Duhigg, C. (2012, February 16). How companies learn your secrets. The New York Times Magazine. https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating strategy into action. Harvard Business School Press.
Kohavi, R., Tang, D., & Xu, Y. (2020). Trustworthy online controlled experiments: A practical guide to A/B testing. Cambridge University Press. https://doi.org/10.1017/9781108653985
LaValle, S., Lesser, E., Shockley, R., Hopkins, M. S., & Kruschwitz, N. (2011). Big data, analytics and the path from insights to value. MIT Sloan Management Review, 52(2), 21–32.
McAfee, A., & Brynjolfsson, E. (2012). Big data: The management revolution. Harvard Business Review, 90(10), 60–68.
Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. Basic Books.
Provost, F., & Fawcett, T. (2013). Data science for business: What you need to know about data mining and data-analytic thinking. O'Reilly Media.
Shah, S., Horne, A., & Capellá, J. (2012). Good data won't guarantee good decisions. Harvard Business Review, 90(4), 23–25.