Why Data Privacy Matters Now
Every search you perform, every purchase you make, every app you open, and every website you visit generates data. Some of that data identifies you directly. Much of it can identify you indirectly when combined with other data. An enormous industry exists to collect, aggregate, buy, sell, and analyze it.
Data privacy is not an abstract legal concept. It is the question of who knows what about you, who decides what gets collected, and whether you have meaningful control over that information. The answers have become more consequential as data influences what credit you receive, what prices you pay, what jobs you are offered, and what medical coverage you qualify for.
The gap between what people believe about their privacy and what actually happens to their data is substantial. A 2019 Pew Research Center survey found that 81% of Americans felt they had little or no control over data collected about them — and they were largely correct (Auxier et al., 2019). A follow-up 2023 Pew survey reinforced this finding, with 67% of adults saying they understand very little or nothing at all about what companies do with their data (Pew Research Center, 2023).
The financial scale of the data economy makes this more than an abstract concern. The global data broker market was valued at approximately $264 billion in 2022 and is projected to surpass $545 billion by 2031, according to Allied Market Research. The product being sold, in nearly every case, is information about people who have no knowledge their data is being traded.
The Stakes Are Getting Higher
Privacy failures have concrete, documented consequences. In 2018, researchers at Princeton University demonstrated that location data sold by data brokers could be used to identify physicians' medical specialties, religious affiliations, and attendance at political events — from supposedly anonymous datasets (Narayanan & Felten, 2018). A landmark study in Scientific Reports found that just four spatio-temporal points are enough to uniquely re-identify 95% of individuals in an anonymized mobility dataset (de Montjoye et al., 2013).
Health data breaches in particular have expanded dramatically. The HHS Office for Civil Rights reported 725 major healthcare data breaches in 2023 — a record high — affecting over 133 million individuals. The consequences range from insurance discrimination to targeted fraud to emotional harm.
"Privacy is not about having something to hide. It is about having something to protect — your autonomy, your freedom to make decisions without surveillance, and your ability to control the story told about you." — Woodrow Hartzog, Privacy's Blueprint (2018)
What Data Is and How It Becomes Personal
Types of Personal Data
Personal data — called personally identifiable information (PII) in US contexts and defined more broadly under GDPR — refers to any information that can identify an individual, directly or indirectly.
| Category | Examples | Risk Level |
|---|---|---|
| Direct identifiers | Name, email, Social Security number, phone number | Critical |
| Quasi-identifiers | ZIP code, birth date, gender (which together re-identify 87% of Americans, per Sweeney 2000) | High |
| Behavioral data | Browsing history, app usage, purchase patterns, location traces | High |
| Inferred data | Credit score, health risk predictions, political affiliation estimates | High |
| Sensitive categories | Health data, financial data, biometrics, sexual orientation, religion | Critical |
| Device data | IP address, device fingerprint, advertising ID, cookies | Medium-High |
The re-identification problem is critical. Data that appears anonymized often is not. Latanya Sweeney's landmark 2000 research demonstrated that 87% of Americans could be uniquely identified by combining just three data points: ZIP code, birth date, and sex. Netflix's "anonymized" movie rating dataset was re-identified by Narayanan and Shmatikov (2008) by cross-referencing it with public IMDb ratings.
More recent work has only deepened this finding. A 2019 study in Nature Communications by Luc Rocher, Julien Hendrickx, and Yves-Alexandre de Montjoye found that 99.98% of Americans could be correctly re-identified in any dataset using just 15 demographic attributes. The authors concluded that even heavily sampled anonymized datasets are unlikely to satisfy modern standards for anonymization.
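The arithmetic behind these findings is easy to reproduce: group records by their quasi-identifier combination and count how many combinations occur exactly once. A toy sketch of Sweeney-style uniqueness measurement (the data is invented for illustration):

```python
from collections import Counter

# Toy records: (ZIP, birth_year, sex) — the classic quasi-identifier triple.
records = [
    ("02138", 1965, "F"),
    ("02138", 1965, "F"),
    ("02139", 1971, "M"),
    ("02142", 1980, "F"),
    ("02142", 1983, "M"),
]

def uniqueness_rate(rows):
    """Fraction of records whose quasi-identifier combination is unique —
    i.e., the share of people re-identifiable from these attributes alone."""
    counts = Counter(rows)
    unique = sum(1 for row in rows if counts[row] == 1)
    return unique / len(rows)

print(uniqueness_rate(records))  # 0.6 — three of five records stand alone
```

With real census-scale data and the ZIP/birth-date/sex triple, this same count is what yields Sweeney's 87% figure.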
How Your Data Gets Collected
Data collection happens through channels most people never consciously engage with:
First-party collection: Directly by companies you interact with — when you create an account, make a purchase, or fill out a form.
Tracking pixels and cookies: Invisible 1x1 images embedded in emails and webpages that report back to servers when loaded. Third-party cookies track users across different websites. Google repeatedly delayed its planned phase-out of third-party cookies in Chrome before announcing in 2024 that it would retain them, and fingerprinting-based tracking methods proliferated in the meantime.
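Mechanically, a tracking pixel is just an image URL with identifiers baked into its query string; the act of fetching the "image" is the report. A sketch with a hypothetical tracker endpoint and parameter names (all illustrative, not any real vendor's API):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def pixel_url(user_id: str, campaign: str) -> str:
    """Build the kind of image URL a tracking pixel embeds in an email.
    Endpoint and parameter names are hypothetical."""
    params = urlencode({"uid": user_id, "c": campaign, "ev": "open"})
    return f"https://tracker.example.com/px.gif?{params}"

url = pixel_url("u-48151623", "spring-sale")
# When a mail client loads this URL, the server logs the uid, the campaign,
# the "open" event, plus the requester's IP address and User-Agent —
# no cookie required.
print(url)
```

Blocking remote image loading in an email client defeats exactly this mechanism, which is why most clients now offer that setting.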
Fingerprinting: Browsers expose a unique combination of characteristics — screen resolution, installed fonts, timezone, hardware configuration, installed plugins — that creates a fingerprint identifiable even without cookies. Research by EFF's Cover Your Tracks project found that over 80% of browsers tested had a unique or nearly unique fingerprint. Fingerprinting is significantly harder to block than cookies.
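The core mechanism is simple: concatenate the exposed attributes in a canonical order and hash them, so the same browser yields the same identifier on every visit without storing anything on the device. A toy sketch (attribute names are illustrative; real fingerprinters combine dozens of signals such as canvas and audio rendering):

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a browser's exposed attribute set into a stable identifier.
    Sorting keys makes the digest independent of collection order."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "fonts": "Arial,Calibri,Helvetica",
    "platform": "Win32",
    "language": "de-DE",
}
print(fingerprint(browser))  # same attributes -> same ID, visit after visit
```

This is also why fingerprinting is hard to block: any one attribute is innocuous, and changing one (a new timezone, one extra font) simply produces a new trackable identifier.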
Mobile device identifiers: Advertising ID (IDFA on iOS, AAID on Android) assigned to every device, used to link app behavior across apps and match with offline purchase data. Apple's 2021 App Tracking Transparency framework requires explicit permission to track via IDFA, reducing iOS-based ad tracking substantially. Android's equivalent controls have been slower to roll out.
Location data: GPS data from apps, cell tower triangulation, and Wi-Fi probe requests. Mobile apps with location permission routinely sell this data to brokers. A 2018 New York Times investigation found that at least 75 companies received precise, time-stamped location data from apps whose privacy policies did not clearly disclose this practice.
Purchase data: Retailers, banks, and payment processors sell aggregated and often individual transaction records to data brokers. JPMorgan Chase, Capital One, and Mastercard all operate data analytics divisions that monetize transaction data.
The Data Inference Problem
Beyond collected data, organizations generate inferred data — conclusions drawn about you from patterns in collected data. This is where data privacy intersects most directly with discrimination and harm.
Insurance companies have used inferred data to price policies. Retail chains have famously inferred pregnancy from purchase patterns (the Target pregnancy prediction case, documented by Charles Duhigg in the New York Times in 2012, remains one of the most cited examples of inference-based profiling). Credit scoring increasingly incorporates non-traditional data. Employers use social media inference to screen applicants.
The legal treatment of inferred data is inconsistent. GDPR provides some protection by requiring a lawful basis for processing and applying the same rights to inferred data as to collected data. US law is patchier: FCRA governs credit-related inference; FHA covers discriminatory inference in housing; but inferred behavioral profiles used in advertising exist in a largely unregulated space.
The Legal Landscape: GDPR, CCPA, and HIPAA
GDPR: The Global Standard
The European Union's General Data Protection Regulation (GDPR), effective May 2018, is the most comprehensive data protection law in force and has had global effect — most major companies worldwide changed their privacy practices in response, whether or not they are directly subject to it.
Key GDPR requirements:
- Lawful basis for processing: Organizations must have a specific legal justification (consent, legitimate interest, contract, legal obligation, vital interest, or public task)
- Data minimization: Collect only what is necessary for the stated purpose
- Purpose limitation: Use data only for the purpose for which it was collected
- Individual rights: Access, rectification, erasure ("right to be forgotten"), portability, objection, and restriction of processing
- Data breach notification: Within 72 hours to supervisory authority if breach poses risk to individuals
- Privacy by design: Technical measures to protect privacy must be built into systems, not added later
GDPR applies to any organization processing data of EU residents, regardless of where the organization is headquartered. This extraterritorial reach is why even American companies post GDPR-compliant privacy notices.
Enforcement has been significant. In 2023 alone: Meta was fined €1.2 billion by Ireland's Data Protection Commission for unlawful transfers of EU user data to the United States. TikTok was fined €345 million for violating children's data protection rules. Amazon received a €746 million fine from Luxembourg in 2021. The total GDPR fines issued through mid-2024 exceeded €4.5 billion across all companies, according to the GDPR Enforcement Tracker.
CCPA: California's Consumer Privacy Law
The California Consumer Privacy Act (CCPA), effective 2020 and significantly expanded by the California Privacy Rights Act (CPRA) in 2023, gives California residents rights including:
- The right to know what personal data is collected and how it is used
- The right to delete personal data
- The right to opt out of the sale of personal data
- The right to non-discrimination for exercising privacy rights
- Under CPRA: the right to correct inaccurate data, and stronger protections for sensitive personal information
CCPA applies to for-profit businesses that meet certain thresholds (annual revenue over $25 million, processing data of 100,000+ consumers per year, or deriving 50%+ of annual revenue from selling personal data). It does not cover non-profits or government entities.
California has been joined by Virginia (VCDPA), Colorado (CPA), Connecticut (CTDPA), Texas, Florida, Montana, Oregon, and over a dozen other states with comprehensive privacy laws as of 2024. A federal US privacy law remains elusive, though the American Privacy Rights Act (APRA) was advanced in Congress in 2024 before stalling.
HIPAA: Health Data Specifically
The Health Insurance Portability and Accountability Act (HIPAA), and specifically its Privacy Rule, governs how protected health information (PHI) can be used and disclosed by covered entities (healthcare providers, health plans, and healthcare clearinghouses) and their business associates.
HIPAA's critical limitation: it applies only to covered entities. Health data collected by fitness apps, wearable devices, direct-to-consumer genetic tests, and wellness platforms generally is not covered by HIPAA, even when highly sensitive. A 2023 FTC study found that most health apps shared data with third parties, with many sharing data with advertising platforms.
In 2023, the FTC took enforcement action against GoodRx and BetterHelp for sharing health-related personal data with Facebook and other ad platforms without user consent — relying on the FTC Act's prohibition on deceptive practices rather than HIPAA, which did not apply to these companies.
A Comparison of Major Privacy Frameworks
| Framework | Jurisdiction | Applies To | Key Rights | Enforcement |
|---|---|---|---|---|
| GDPR | EU/EEA (+ extraterritorial) | Any processor of EU resident data | Access, erasure, portability, objection | Supervisory authorities; fines up to 4% global revenue |
| CCPA/CPRA | California residents | For-profit businesses meeting thresholds | Know, delete, opt out of sale | CA AG; private right of action for data breaches |
| HIPAA | United States | Covered healthcare entities | Access, amendment, accounting of disclosures | HHS OCR; criminal and civil penalties |
| COPPA | United States | Operators targeting children under 13 | Parental consent required; deletion | FTC; civil penalties |
| PIPEDA | Canada | Private-sector organizations | Access, correction, consent | OPC; Federal Court |
Data Brokers: The Hidden Industry
Data brokers (also called information brokers or data dealers) are companies whose primary business is collecting, aggregating, and selling information about individuals. The global data broker industry exceeds $200 billion annually and largely operates outside public awareness.
How Data Brokers Operate
Data brokers assemble profiles from dozens of sources:
- Public records: Voter registrations, property records, court records, business filings, birth and death records
- Purchase data: Credit card transaction records sold by banks and payment processors
- Loyalty program data: Retail loyalty cards explicitly collected for this purpose
- App data: Mobile app SDKs embedded in apps report behavior back to brokers
- Social media scraping: Publicly accessible profile and post information
- Other broker purchases: Brokers buy from other brokers, creating a compounding data pool
The resulting profiles can be remarkably detailed. Major brokers like Acxiom, LexisNexis, and Epsilon maintain profiles on most American adults including estimated income, health conditions, political affiliation, consumer interests, family composition, and behavioral predictions. Acxiom's CEO acknowledged in a 2013 Senate hearing that Acxiom held data on approximately 700 million people worldwide, with roughly 1,500 data points per person on average.
Who Buys This Data
Data broker customers include:
- Advertisers seeking targeted audiences
- Insurance companies assessing risk
- Employers and background check services
- Government agencies (including law enforcement, which can purchase data that would require a warrant to obtain directly — a practice documented by the Electronic Frontier Foundation in 2023 as "data broker loophole" policing)
- Researchers and academics
- Financial services companies for credit risk assessment
- Stalkers and abusers (a documented problem; a 2019 Vice/Motherboard investigation found that phone location data was resold through aggregators to bounty hunters and other buyers with no verified legitimate purpose)
The law enforcement use case deserves specific attention. A declassified 2023 report from the Office of the Director of National Intelligence confirmed that federal agencies, including ICE, CBP, and the FBI, had purchased location data from commercial brokers — data that would typically require a court order to obtain through carriers directly. The FTC's enforcement action against Kochava, filed in 2022, highlighted the safety and national security implications of unregulated location data sales.
Opting Out
Most data brokers offer opt-out processes, but the experience is intentionally difficult: each broker requires a separate request, processes differ, and information often reappears over time as brokers re-source from other brokers. A 2023 Consumer Reports study found that even after completing opt-out requests for the 22 largest brokers, residual data reappeared on several within three to six months.
Services like DeleteMe, Privacy Bee, and Kanary automate opt-out submissions but require ongoing subscription because removals are not permanent. Costs range from $100 to $200 per year depending on service level.
California's Delete Act (SB 362), signed in 2023, will require data brokers to honor deletion requests submitted through a single centralized portal operated by the California Privacy Protection Agency — a significant simplification when it takes effect in 2026.
Practical Privacy Tools: What They Do and Do Not Do
VPNs
A Virtual Private Network (VPN) encrypts your internet traffic and routes it through a server operated by the VPN provider, masking your IP address from the websites you visit and hiding your browsing activity from your ISP.
What VPNs protect against: ISP surveillance, network-level surveillance on public Wi-Fi, IP-based geolocation, and some forms of government traffic analysis.
What VPNs do not protect against: Tracking via cookies and browser fingerprinting, social media tracking regardless of network, DNS leaks if misconfigured, and the VPN provider itself (which sees all your traffic). The VPN provider's privacy policy and jurisdiction matter enormously.
"A VPN shifts trust from your ISP to the VPN provider. It is not a privacy guarantee — it is a trust transfer." — Common framing by EFF security researchers
A 2021 study by vpnMentor found that 26 of 283 popular VPN services logged user connection data despite no-log claims. Jurisdiction matters: VPNs operating in 14 Eyes alliance countries (US, UK, Australia, etc.) are subject to government data requests. Providers registered in countries with no data retention laws — Switzerland, Panama, British Virgin Islands — offer more structural protection, though audit verification is still required.
DNS Privacy
Your DNS queries — the lookups that translate domain names to IP addresses — are by default sent unencrypted to your ISP's DNS resolvers, revealing every site you visit by name. DNS over HTTPS (DoH) and DNS over TLS (DoT) encrypt these queries.
Providers like Cloudflare (1.1.1.1), NextDNS, and others offer encrypted DNS with varying privacy policies. Cloudflare's 1.1.1.1 has been independently audited by KPMG. NextDNS allows custom blocking lists and detailed query logging controls. Using encrypted DNS significantly reduces surveillance at the ISP level at no cost and with minimal technical friction.
A 2021 APNIC study estimated that over 80% of global DNS traffic remained unencrypted. Enabling DoH in Firefox, Chrome, or at the OS level takes under five minutes and provides meaningful protection.
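Under the hood, a DoH GET request (RFC 8484) is just an ordinary DNS wire-format query, base64url-encoded into a `dns=` parameter. A sketch that builds such a URL for Cloudflare's public resolver without sending anything over the network:

```python
import base64
import struct

def doh_get_url(name, resolver="https://cloudflare-dns.com/dns-query"):
    """Build an RFC 8484 DoH GET URL for an A-record lookup.
    Header: ID 0 (recommended for HTTP caching), RD flag set, one question."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE=A, QCLASS=IN.
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question += struct.pack("!HH", 1, 1)
    wire = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={wire}"

print(doh_get_url("example.com"))
```

Fetching that URL over HTTPS returns the DNS answer in wire format, and the ISP sees only a TLS connection to the resolver, not the queried name.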
Tor
The Tor network routes traffic through three volunteer-operated relays, with encryption between each hop, making it extremely difficult to link your traffic back to your IP address. It is used by journalists, activists, and others requiring strong anonymity.
Tor's limitations: It is significantly slower than regular browsing, the exit node can see unencrypted traffic, browser fingerprinting can still identify Tor users if they behave unusually, and timing correlation attacks by adversaries who control large portions of network infrastructure can theoretically de-anonymize users.
Tor is appropriate for high-stakes situations involving adversaries with significant surveillance capabilities. It is overkill for ordinary consumer privacy concerns and adds friction that leads most people to abandon it.
A Tool Comparison
| Tool | Protects Against | Does Not Protect Against | Appropriate For |
|---|---|---|---|
| VPN | ISP surveillance, IP tracking | Cookie/fingerprint tracking, VPN-provider surveillance | General privacy, bypassing geo-restrictions |
| DNS encryption | ISP DNS surveillance | All other tracking | Everyone; minimal friction |
| Tor | IP tracking, traffic analysis | Browser fingerprinting, exit-node surveillance | High-stakes anonymity |
| Privacy browser (Firefox, Brave) | Cookie tracking, fingerprinting | IP-based tracking, account-linked tracking | Daily browsing |
| Password manager | Credential reuse attacks | Network-level surveillance | Everyone |
| Email encryption (ProtonMail, S/MIME) | Email content interception | Metadata (who you email, when) | Sensitive communications |
Encrypted Messaging
For private communications, end-to-end encrypted messaging apps protect message content from the service provider and from surveillance. Signal is the gold standard: fully open source, peer-reviewed, and audited. The Signal Protocol has been formally analyzed and is considered cryptographically sound (Cohn-Gordon et al., 2017). iMessage provides end-to-end encryption between Apple devices, but iCloud backups, unless Advanced Data Protection is enabled, are accessible to Apple. WhatsApp uses the Signal Protocol but is owned by Meta and shares metadata with the parent company.
Threat Modeling: Choosing the Right Tools
Threat modeling is the practice of thinking systematically about your specific privacy situation before choosing tools. The Electronic Frontier Foundation (EFF) recommends a five-question framework:
- What do I want to protect? (Assets)
- Who do I want to protect it from? (Adversaries)
- How bad are the consequences if they get it? (Impact)
- How likely is it that I need to protect it? (Likelihood)
- How much trouble am I willing to go through? (Cost/feasibility)
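One way to make the five questions concrete is to write them down as a structure that forces an answer for each. The scoring rule below is purely illustrative — an invented heuristic, not an EFF method:

```python
from dataclasses import dataclass

@dataclass
class ThreatModel:
    """EFF's five questions as fields — writing them down is the exercise."""
    assets: list          # what do I want to protect?
    adversaries: list     # who do I want to protect it from?
    impact: int           # how bad if they get it? (1=low .. 5=severe)
    likelihood: int       # how likely do I need protection? (1..5)
    effort_budget: int    # how much trouble will I tolerate? (1..5)

    def priority(self) -> int:
        # Crude ranking: risk = impact x likelihood, capped by the
        # effort the person is actually willing to spend.
        return min(self.impact * self.likelihood, self.effort_budget * 5)

consumer = ThreatModel(
    assets=["browsing history", "purchase data"],
    adversaries=["ad networks", "data brokers"],
    impact=2, likelihood=5, effort_budget=2,
)
survivor = ThreatModel(
    assets=["home address", "daily location"],
    adversaries=["a specific individual"],
    impact=5, likelihood=4, effort_budget=5,
)
print(consumer.priority(), survivor.priority())  # 10 20
```

The point is not the numbers but the discipline: two people with identical devices can have entirely different answers, and therefore entirely different tool choices.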
The answer shapes tool selection dramatically. A domestic violence survivor hiding from an abusive ex-partner has different threat modeling requirements than a consumer trying to reduce ad targeting, which differs again from an activist in an authoritarian country. There is no universal correct answer — only the answer appropriate to your specific threat model.
Consumer-level threats (ad targeting, data broker profiles, credit scoring) are addressed by browser privacy settings, ad blockers, encrypted DNS, and periodic data broker opt-outs.
Mid-level threats (employer surveillance, health data leaks, financial account protection) additionally require strong unique passwords via a password manager, hardware or app-based MFA, careful app permission management, and encrypted messaging apps.
High-stakes threats (government surveillance, targeted corporate espionage, abusive partners) require comprehensive operational security: Tor for sensitive browsing, compartmentalized devices, encrypted storage, and possibly legal consultation.
Children's Privacy: A Special Case
The Children's Online Privacy Protection Act (COPPA) restricts the collection of personal data from children under 13 by operators of websites and online services directed at children. In practice, enforcement has been inconsistent and the law has struggled to keep pace with social media and app ecosystems.
A 2022 FTC report, Protecting Kids from Stealth Advertising in Social Media, found extensive data collection from minors on major platforms. YouTube paid $170 million in 2019 to settle FTC charges of COPPA violations. TikTok paid $5.7 million in 2019 and faced additional investigations in 2023.
The EU's GDPR sets the age of digital consent at 16 (with member states able to lower it to 13). The UK's Age Appropriate Design Code, effective 2021, requires platforms to turn on the highest privacy settings by default for users under 18 — a more protective approach than US law.
Senator Markey and Representative Castor's Children and Teens' Online Privacy Protection Act (COPPA 2.0) was introduced in 2023, proposing to raise the age threshold to 16 and prohibit targeted advertising to minors. It had not been enacted as of late 2024.
The Most Practical Privacy Improvements
Most people will have the greatest impact from a small number of high-leverage actions:
Use a password manager (Bitwarden, 1Password, etc.) to generate and store unique strong passwords for every account. Credential reuse is the most common vector for account compromise. Bitwarden is open source and free; 1Password has strong enterprise features.
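What a password manager does when it "generates" a password is a draw from a cryptographically secure RNG over a large alphabet — Python's stdlib `secrets` module does the same thing. A sketch (the alphabet choice is illustrative):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """CSPRNG draw over a 72-character alphabet — roughly
    length * log2(72) ≈ 6.17 bits of entropy per character,
    so ~123 bits at the default length."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The security property is uniqueness per site, not memorability — which is exactly why the manager, not the human, should hold the result.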
Enable two-factor authentication on all critical accounts (email, banking, primary social media) using an authenticator app rather than SMS where possible. SMS 2FA is vulnerable to SIM-swapping attacks. Google Authenticator, Aegis (Android), and Raivo (iOS) are reliable choices.
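Authenticator-app codes are not magic: they are RFC 6238 TOTP, an HMAC-SHA1 over a 30-second counter derived from a secret shared at enrollment — which is also why they work offline and cannot be SIM-swapped. A minimal stdlib sketch, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short decimal code."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test key; at t=59s the 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Because the code depends only on the shared secret and the clock, an attacker who intercepts one code gains at most a 30-second window — a far stronger property than SMS delivery.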
Review and reduce app permissions on mobile devices. Most apps request far more permissions than they need. Location access in particular should be granted only when necessary, and never on an "always allow" basis for apps that do not require it for core functionality.
Use encrypted DNS (Cloudflare 1.1.1.1 or equivalent) at the device level. Takes five minutes to configure and meaningfully reduces ISP-level surveillance.
Use a privacy-respecting browser (Firefox with uBlock Origin, or Brave) for daily browsing. Disable third-party cookies in settings. Brave blocks tracking by default; Firefox with uBlock Origin provides more fine-grained control.
Submit data broker opt-out requests for the major brokers at least annually: Spokeo, Whitepages, BeenVerified, Acxiom, LexisNexis, and similar services. Consider a paid removal service if time is limited.
Be selective about what you share on social platforms — profile information on public or semi-public social media is a primary source for data brokers and social engineering attacks. A 2021 Stanford Internet Observatory study found that 73% of data broker profiles included social media-derived data.
Review app data permissions regularly on both mobile and browser. Browser extensions in particular can have sweeping access to your browsing activity. Remove extensions you no longer use; audit what existing ones can access.
The Limits of Individual Action
A candid assessment: individual privacy hygiene is necessary but insufficient. The data economy at scale operates through corporate practices, legal structures, and technical infrastructure that individual opt-outs cannot meaningfully affect. A consumer who meticulously opts out of every data broker will still have data collected on them through channels they cannot access.
This does not make individual action pointless — it reduces your specific exposure meaningfully. But it does mean that structural approaches (regulation, enforcement, platform design requirements) and collective action matter alongside individual behavior.
The GDPR demonstrates that structural intervention works: enforcement actions have materially changed corporate data practices in ways that individual consumer choices cannot replicate. Meta's 2023 €1.2 billion fine directly resulted in structural changes to trans-Atlantic data transfer mechanisms. The ongoing expansion of privacy legislation in the US — California, Virginia, Colorado, Connecticut, and more than a dozen other states — represents the same dynamic.
Privacy-protective design — building systems that collect minimal data by default, rather than building in data collection and adding privacy controls later — is the most durable structural solution. GDPR's "privacy by design" requirement, and similar provisions in newer legislation, push toward this outcome through regulatory pressure.
Conclusion
Data privacy is not a solved problem, and the tools for protecting it are imperfect and evolving. What the research and regulatory experience make clear is that the problem is real, that individual data has genuine value and risk, and that both individual hygiene and structural intervention matter.
The practical starting point is threat modeling: understanding who your realistic adversaries are and what they can actually do, then applying appropriate countermeasures. For most people, basic hygiene — password manager, MFA, browser privacy settings, reduced app permissions — addresses the most likely and impactful threats. More sophisticated adversaries require more sophisticated responses.
The most important insight may be the simplest: data you do not share cannot be breached, sold, or misused. Minimizing what you share, not just securing what you have, is the most durable privacy strategy.
References
- Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M., & Turner, E. (2019). Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information. Pew Research Center.
- Pew Research Center. (2023). How Americans View Data Privacy.
- Sweeney, L. (2000). "Simple demographics often identify people uniquely." Carnegie Mellon University, Data Privacy Working Paper 3.
- Narayanan, A., & Shmatikov, V. (2008). "Robust de-anonymization of large sparse datasets." IEEE Symposium on Security and Privacy.
- Rocher, L., Hendrickx, J. M., & de Montjoye, Y.-A. (2019). "Estimating the success of re-identifications in incomplete datasets using generative models." Nature Communications, 10, 3069.
- Hartzog, W. (2018). Privacy's Blueprint: The Battle to Control the Design of New Technologies. Harvard University Press.
- Allied Market Research. (2023). Data Broker Market — Global Opportunity Analysis and Industry Forecast, 2022–2031.
- FTC. (2023). Commercial Surveillance and Data Security. Federal Trade Commission.
- Duhigg, C. (2012). "How companies learn your secrets." The New York Times Magazine, February 16.
- Marlinspike, M., & Perrin, T. (2016). The Double Ratchet Algorithm. Signal Foundation specification.
- Cohn-Gordon, K., Cremers, C., Dowling, B., Garratt, L., & Stebila, D. (2017). "A formal security analysis of the Signal messaging protocol." IEEE European Symposium on Security and Privacy.
- European Data Protection Board. (2024). Annual Report 2023. EDPB.
- HHS Office for Civil Rights. (2024). HIPAA Breach Report 2023. US Department of Health and Human Services.
- Electronic Frontier Foundation. (2023). The Worldwide Guide to Privacy Law. EFF.
- Consumer Reports. (2023). I Tried to Opt Out of Data Brokers. Here's What Happened. Consumer Reports Digital Lab.
Frequently Asked Questions
What is the difference between data privacy and data security?
Data security focuses on protecting data from unauthorized access — it is about keeping data safe from breaches, theft, and external attack. Data privacy focuses on the appropriate collection, use, and sharing of personal data — it is about whether data should be collected and how it should be handled even by authorized parties. Security is a prerequisite for privacy but does not guarantee it.
What does GDPR actually require?
The General Data Protection Regulation (GDPR) requires organizations to have a lawful basis for processing personal data, provide transparent notice about data collection, honor rights including access, correction, deletion, and portability, appoint data protection officers in some cases, report breaches within 72 hours, and conduct impact assessments for high-risk processing. It applies to any organization processing data of EU residents, regardless of where the organization is based.
What are data brokers and how do they get my information?
Data brokers are companies that collect personal information from public records, purchase transaction data from retailers and apps, aggregate it with social media and behavioral data, and sell detailed profiles to marketers, insurers, employers, and others. They obtain data from public records (voter registrations, property records), loyalty programs, mobile app tracking, social media scraping, and purchases from other data brokers. Most people have profiles at hundreds of data brokers without ever interacting with them.
Does a VPN protect my privacy?
A VPN (Virtual Private Network) hides your internet traffic from your ISP and changes your apparent IP address, which protects against some forms of surveillance and tracking. However, it does not protect you from browser fingerprinting, cookie tracking, social media tracking, or the VPN provider itself, which can see all your traffic. VPNs are useful tools but are frequently oversold as comprehensive privacy solutions.
What is threat modeling and why does it matter for privacy?
Threat modeling is the practice of identifying your specific adversaries (who wants your data), their capabilities (what they can do), and your most sensitive assets (what you most need to protect), then choosing countermeasures appropriate to that specific threat profile. It prevents the mistake of using the same privacy tools for all situations — the privacy needs of a journalist in an authoritarian country are different from a consumer trying to avoid ad targeting.