Privacy is often talked about as a personal preference — something that matters to cautious or private people, while bolder, more modern individuals simply accept that living online means sharing data. This framing does a disservice to what privacy actually is. Privacy is not about hiding things you are ashamed of. It is about maintaining the conditions under which autonomy, authentic relationships, and political freedom are possible. When information about you is collected, analyzed, and acted upon without your knowledge or meaningful consent, the power in the relationship between you and the organizations that hold that data shifts in ways that have concrete consequences.
The scale of modern data collection is genuinely difficult to comprehend. Every app you install, every website you visit, every search you make, every loyalty card you swipe, every location ping from your phone, every television program you watch through a streaming service — these generate data that is collected, aggregated, sold, and used to build profiles that are often more detailed and accurate than anything you have consciously provided about yourself. This is not abstract: these profiles are used to make decisions about what you pay for insurance, whether you receive a job interview, what political advertising you see, and what news reaches you.
Understanding data privacy requires engaging with its regulatory dimensions (what GDPR and CCPA require), its technical dimensions (what companies actually collect and how), its economic dimensions (the data broker industry), and its philosophical dimensions (why privacy is a collective rather than purely individual concern). This article addresses all of these, with attention to where the language of "privacy as preference" systematically obscures what is actually at stake.
"Surveillance is the business model of the internet. Data is not the new oil — it is the new means of behavioral control." -- Shoshana Zuboff, adapted from The Age of Surveillance Capitalism, 2019
Key Definitions
Personal data: Any information that relates to an identified or identifiable individual. Under GDPR, this is defined broadly — including name, email, IP address, location data, cookie identifiers, and inferred attributes.
GDPR: The General Data Protection Regulation, a European Union law effective since 2018 that governs the collection, processing, and storage of personal data of EU residents. Applies globally to any organization that processes EU residents' data.
CCPA: The California Consumer Privacy Act, in effect since 2020, providing California residents with rights to know what personal data is collected, to opt out of its sale, and to request deletion.
Data broker: A company that collects personal data from multiple sources, aggregates it into profiles, and sells or licenses those profiles to third parties. Typically operates without a direct relationship with the individuals whose data it holds.
Differential privacy: A mathematical framework providing formal privacy guarantees by adding calibrated noise to data analyses, ensuring that individual participation in a dataset cannot meaningfully increase the risk of that individual's data being revealed.
Data minimization: The principle, embedded in GDPR and other frameworks, that organizations should collect only the personal data strictly necessary for their stated purpose. A core check on the tendency to collect data "just in case."
Consent: In the GDPR framework, consent must be freely given, specific, informed, and unambiguous. Pre-ticked boxes, bundled consent (agree to everything or use nothing), and inactivity as consent are all invalid under this definition.
The Global Regulatory Landscape
Data privacy regulation has expanded dramatically since GDPR came into force in 2018. What was once a patchwork of sector-specific rules in the United States and a stricter European framework has become a genuinely global body of law, with major jurisdictions implementing their own frameworks.
| Regulation | Jurisdiction | Key Rights Granted | Applies To |
|---|---|---|---|
| GDPR | European Union | Access, correction, deletion, portability, objection | Any org processing EU residents' data |
| CCPA/CPRA | California, USA | Know, delete, opt out of sale, non-discrimination | Businesses above revenue/data thresholds |
| LGPD | Brazil | Similar to GDPR with local variations | Organizations processing Brazilian residents' data |
| PIPEDA | Canada | Access, correction, accountability principles | Private sector organizations in Canada |
| PDPA | Singapore | Access, correction, data portability (partial) | Organizations processing Singapore residents' data |
| UK GDPR | United Kingdom | Same as EU GDPR (post-Brexit) | Organizations processing UK residents' data |
| APPI | Japan | Access, correction, deletion, consent for sensitive data | Private sector organizations in Japan |
| PIPL | China | Access, correction, deletion, portability, objection | Organizations processing Chinese citizens' data |
The proliferation of frameworks reflects both genuine momentum toward recognizing data privacy as a right and the fragmented, jurisdictional nature of international law. Organizations operating globally must navigate multiple overlapping frameworks simultaneously — a compliance challenge that has given rise to an entire privacy engineering and legal profession.
What Companies Actually Collect
The Visible Collection
Some data collection is visible and expected. When you create an account, you provide your name and email. When you make a purchase, you provide payment and shipping information. When you post on social media, you provide the content you choose to share. These forms of collection are straightforward, and most users are aware they are happening.
The privacy concern with visible collection is about what happens to that data after collection: how long it is retained, who it is shared with, how it is combined with other data, and what decisions it informs. A health app that asks for your date of birth, weight, and fitness goals has legitimate use for that data to provide its service — but the same data sold to insurance companies or employers creates entirely different implications.
Data retention is an often-overlooked dimension of visible collection. GDPR's storage limitation principle requires that data be retained only as long as necessary for its stated purpose. In practice, many organizations retain data indefinitely or for periods far longer than any operational purpose requires, creating liability and risk.
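To make the storage limitation principle concrete, here is a minimal sketch of a retention sweep in Python, assuming a hypothetical in-memory record store and invented purpose-based retention periods; a production system would run against a real database, log its deletions, and derive the periods from a documented retention policy.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per stated processing purpose.
# Real values must come from a documented retention policy.
RETENTION_PERIODS = {
    "order_fulfillment": timedelta(days=365 * 2),
    "marketing_consented": timedelta(days=365),
    "support_tickets": timedelta(days=180),
}

def expired(record: dict, now: datetime) -> bool:
    """True if the record has outlived the retention period for its purpose."""
    period = RETENTION_PERIODS.get(record["purpose"])
    if period is None:
        # Unknown purpose: flag for deletion rather than keeping it forever.
        return True
    return now - record["collected_at"] > period

def retention_sweep(records: list[dict]) -> list[dict]:
    """Return only the records still within their retention window."""
    now = datetime.now(timezone.utc)
    return [r for r in records if not expired(r, now)]

if __name__ == "__main__":
    records = [
        {"user": "a", "purpose": "marketing_consented",
         "collected_at": datetime(2020, 1, 1, tzinfo=timezone.utc)},
        {"user": "b", "purpose": "support_tickets",
         "collected_at": datetime.now(timezone.utc) - timedelta(days=10)},
    ]
    print(retention_sweep(records))  # only the recent support record survives
```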
The Invisible Collection
The more consequential data collection is largely invisible. Cross-site tracking through cookies and fingerprinting follows your behavior across websites you visit, building a behavioral profile that includes every category of content you read, every product you browse without buying, and your patterns of interest over time. Mobile advertising identifiers (the IDFA on iOS and GAID on Android) link your app usage to your device identity, enabling cross-app tracking.
Browser fingerprinting does not require cookies at all. By collecting dozens of technical attributes of your browser — screen resolution, fonts installed, plugins, time zone, hardware specifications — websites can create a fingerprint that identifies your specific browser with high probability even if you clear cookies. Research by the Electronic Frontier Foundation's Panopticlick project found that the vast majority of browsers are unique or nearly unique based on fingerprint alone.
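To make the mechanism concrete, the following Python sketch shows how a tracker might derive a stable identifier by hashing a handful of reported attributes. The attribute names and values here are purely illustrative; real fingerprinting scripts run in the browser and collect far more signals (canvas rendering, audio stack, WebGL parameters).

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Hash a canonical serialization of reported browser attributes.

    The point is not the hash itself but that a combination of ordinary,
    individually innocuous attributes is highly identifying.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative attributes only; real scripts gather dozens more.
visitor = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "language": "en-US",
    "fonts": ["Helvetica Neue", "Menlo", "Arial"],
    "hardware_concurrency": 10,
}

print(fingerprint(visitor))  # same browser attributes yield the same hash, no cookie needed
```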
Location data is particularly revealing. From GPS data captured by apps and sold to data brokers, researchers and journalists have repeatedly reconstructed highly sensitive behavioral patterns: attendance at medical clinics, places of worship, political events, and immigration lawyer offices. In 2019, The New York Times's Privacy Project published location data analyses demonstrating that even "anonymized" location datasets allowed individual identification with straightforward techniques (Valentino-DeVries & Dance, 2019).
Third-party pixels and SDKs (software development kits) embedded by app developers cause data to be sent to advertising networks, analytics companies, and data brokers whenever the app is used — often without the app developer's full knowledge of what data is being transmitted. A 2019 study by Reardon and colleagues published at the USENIX Security Symposium found that 1,325 of the roughly 88,000 Android apps studied circumvented the Android permission system, obtaining data such as location and device identifiers without the permissions needed to access it.
The Inferred Data
Beyond what you explicitly share and what is invisibly collected, data brokers and advertising platforms generate inferred attributes — predictions about characteristics you have never disclosed. These include: estimated income and net worth, political affiliation, religious views, health conditions (inferred from search and purchase patterns), sexual orientation (inferred from app usage and content engagement), relationship status, and psychological traits.
A 2013 study by Kosinski, Stillwell, and Graepel at Cambridge University demonstrated that Facebook "likes" could predict a user's sexual orientation with 88% accuracy, political affiliation with 85% accuracy, and religion and other sensitive attributes at similarly high rates from data users considered innocuous. This work later informed the Cambridge Analytica scandal, in which personality profiles derived from Facebook data were used to micro-target political advertising without users' meaningful consent.
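A toy sketch of the underlying idea, using synthetic data rather than any real profile information: fit an ordinary classifier on a binary item matrix and predict an undisclosed attribute. This is not the pipeline Kosinski and colleagues used (they applied dimensionality reduction to millions of real Facebook profiles); it only illustrates how individually innocuous signals become predictive in aggregate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 5,000 "users", 200 binary "likes".
n_users, n_items = 5_000, 200
likes = rng.integers(0, 2, size=(n_users, n_items))

# An undisclosed attribute that happens to correlate with a handful of items.
signal = likes[:, :10].sum(axis=1)
attribute = (signal + rng.normal(0, 1.5, n_users) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, attribute, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```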
Inferred attributes may be wrong — and when used in consequential decisions, errors are harmful. But there is no systematic mechanism for individuals to know what has been inferred about them, challenge incorrect inferences, or prevent those inferences from being used.
GDPR vs. CCPA: Rights and Limits in Practice
GDPR's Framework
The General Data Protection Regulation, effective May 2018, represents the most comprehensive data protection framework currently in force globally. Its core principles require: lawful basis for processing (typically explicit, informed, freely given consent or demonstrated legitimate interest); data minimization (collecting only what is necessary); purpose limitation (using data only for the purpose it was collected for); storage limitation (retaining data only as long as necessary); and data security.
GDPR grants individuals enforceable rights: the right to access data held about them, the right to correction, the right to erasure (right to be forgotten), the right to data portability, the right to object to processing, and rights related to automated decision-making. Violations can result in fines of up to 20 million euros or 4% of global annual revenue, whichever is higher.
GDPR enforcement has been uneven — Ireland's Data Protection Commission, which acts as lead supervisory authority for many of the largest technology companies because their European headquarters are in Dublin, has been criticized for slow and weak enforcement. But the regulation has had real effects: cookie consent requirements changed the web's user experience globally, data breach notification became standard, and the GDPR framework influenced privacy legislation worldwide.
The largest GDPR fine to date was 1.2 billion euros against Meta (Facebook) by the Irish DPC in May 2023, related to transfers of EU user data to the United States without adequate protection mechanisms. Previous landmark fines include 746 million euros against Amazon (Luxembourg DPA, 2021) and 225 million euros against WhatsApp (Irish DPC, 2021).
GDPR's Legitimate Interest Provision: A Contested Carveout
One of GDPR's more contested provisions is Article 6(1)(f), which allows data processing without explicit consent when a controller has a "legitimate interest" that is not overridden by the individual's interests or rights. This provision has been used broadly by the advertising industry to justify behavioral tracking and profiling without consent, on the grounds that targeted advertising is a legitimate business interest.
Privacy advocates and data protection authorities have pushed back on this expansive interpretation. The CJEU's 2019 Planet49 ruling on consent and subsequent national DPA guidance have narrowed the room for relying on legitimate interest as a basis for advertising tracking, requiring a genuine balancing of interests rather than blanket claims.
CCPA's Approach
California's Consumer Privacy Act, effective January 2020 and strengthened by the California Privacy Rights Act (CPRA) in 2023, takes a different approach. Where GDPR is comprehensive and rights-based, CCPA is more targeted and opt-out oriented. It gives California residents the right to know what personal information businesses collect about them and why, the right to delete personal information held by businesses, the right to opt out of the sale of their personal information, and the right not to be discriminated against for exercising these rights.
CCPA applies only to for-profit businesses that meet threshold criteria (annual revenue above $25 million, or that buy/sell/receive/share personal information on more than 100,000 consumers or households annually). Many small businesses and nonprofit organizations are exempt.
The limitation of the opt-out model is behavioral: most people never exercise opt-out rights, either because they are unaware of them, because the process is made deliberately inconvenient, or because the interface design (dark patterns) makes opting out more difficult than staying opted in. Research by Lorrie Faith Cranor and colleagues at Carnegie Mellon has documented extensively how interface design affects privacy choice exercise rates (Cranor, 2012).
Dark Patterns in Privacy Design
Dark patterns are user interface design choices that nudge users toward choices that benefit the service provider at the user's expense. In the context of privacy, common dark patterns include: burying opt-out options in multiple nested menus, making "Accept All" buttons large and prominent while making "Reject All" buttons small or absent, requiring more clicks to opt out than to opt in, using confusing double negatives in consent language, and presenting consent choices after onboarding when users are already committed.
A 2022 European Data Protection Board study of cookie consent banners found that dark patterns were present in approximately 80% of surveyed websites that technically offered a consent choice. Legal compliance and genuine informed consent are not the same thing when the presentation of choices is manipulative.
The Data Broker Industry
Scale and Structure
The data broker industry processes and sells information on hundreds of millions of individuals. The largest players — Acxiom, Experian, Equifax (through a consumer data division separate from its credit reporting function), Epsilon, Nielsen, LexisNexis, and Verisk — are relatively well-known within the industry. Below them are thousands of smaller brokers specializing in specific data types or markets.
A typical Acxiom profile may include over 3,000 data attributes per individual: name, address history, phone numbers, email addresses, vehicle ownership, estimated income, household composition, political affiliation, religious affiliation, purchase history categories, health interest categories, media consumption patterns, and more. These profiles are sold to direct marketers, financial institutions, healthcare marketers, political campaigns, and — increasingly — law enforcement agencies.
The Federal Trade Commission's 2014 report on the data broker industry, Data Brokers: A Call for Transparency and Accountability, estimated that the nine largest data brokers alone held data on hundreds of millions of Americans and generated revenues of over $426 million annually from consumer data sales. The industry has grown substantially since that report.
The Absence of Transparency
A defining feature of the data broker industry is its opacity to the individuals whose data it holds. Most people have never heard of Acxiom or Epsilon, have no relationship with them, and have no mechanism to discover what data those companies hold about them. Unlike credit bureaus, which are regulated under the Fair Credit Reporting Act and must provide consumers with free annual credit reports, data brokers selling marketing profiles operate without comparable transparency requirements in the United States.
The Vermont Data Broker Law (2018) was the first U.S. state law requiring data brokers to register with the state and describe their data practices. California's Delete Act (2023), effective from 2026, will create a single mechanism for California residents to request deletion of their data from all registered data brokers simultaneously — a significant simplification of what is currently a burdensome, company-by-company process.
Law Enforcement Use
The Fourth Amendment to the U.S. Constitution protects against unreasonable search and seizure and generally requires a warrant for law enforcement to obtain communications content. That protection does not extend to information voluntarily shared with third parties — the third-party doctrine established in the Supreme Court cases Smith v. Maryland (1979) and United States v. Miller (1976).
Law enforcement agencies have exploited this by purchasing data from data brokers rather than seeking warrants. The purchase of location data, purchase histories, and behavioral profiles from commercial data brokers requires no judicial oversight. Several agencies including Immigration and Customs Enforcement (ICE) and the Defense Intelligence Agency have contracted directly with data brokers for access to commercial data pools, a practice that the Electronic Frontier Foundation and ACLU have challenged as an unconstitutional workaround to warrant requirements.
Justice Sotomayor's concurrence in United States v. Jones (2012) and Justice Gorsuch's dissent in Carpenter v. United States (2018) both raised concerns about the third party doctrine's adequacy for digital-age surveillance, suggesting the legal framework may need evolution.
Differential Privacy: A Technical Approach
The Privacy-Utility Tradeoff
Traditional statistical disclosure limitation techniques — publishing aggregated data, suppressing small cell counts, generalizing geographic information — provide imperfect privacy protection because sufficiently motivated analysts can often reconstruct individual records through combination attacks.
Latanya Sweeney demonstrated this in the late 1990s: by linking "anonymized" hospital discharge data with publicly available voter registration records, she re-identified the medical records of Massachusetts Governor William Weld, and her subsequent analysis of census data estimated that 87% of Americans could be uniquely identified using only three data points: ZIP code, birth date, and sex. Re-identification attacks on datasets claimed to be anonymous have since become a recurring theme: Netflix Prize data (Narayanan & Shmatikov, 2008), AOL search logs (Barbaro & Zeller, 2006), and mobility data have all been de-anonymized by researchers.
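The arithmetic behind such combination attacks is easy to reproduce. The sketch below measures how many records in a synthetic population are unique on the quasi-identifier triple of ZIP code, birth date, and sex; the population parameters are invented, but the pattern it shows (a few coarse attributes isolating most individuals) is the point.

```python
import random
from collections import Counter
from datetime import date, timedelta

random.seed(0)

# Synthetic population with invented parameters, purely illustrative.
N_PEOPLE = 20_000
ZIP_CODES = [f"02{n:03d}" for n in range(100)]  # 100 ZIP codes
BIRTH_START = date(1950, 1, 1)
BIRTH_DAYS = 50 * 365                            # roughly a 50-year range

def random_person():
    return (
        random.choice(ZIP_CODES),
        BIRTH_START + timedelta(days=random.randrange(BIRTH_DAYS)),
        random.choice(["F", "M"]),
    )

population = [random_person() for _ in range(N_PEOPLE)]
counts = Counter(population)

unique = sum(1 for person in population if counts[person] == 1)
print(f"{unique / N_PEOPLE:.1%} of records are unique on (ZIP, birth date, sex)")
```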
Differential privacy, developed formally by Cynthia Dwork and colleagues at Microsoft Research in 2006, provides a mathematically rigorous framework. A mechanism satisfies differential privacy if the output of the mechanism is essentially the same whether or not any single individual's data is included. This is achieved by adding calibrated random noise to query results — the noise is chosen to be large enough to mask individual contributions but small enough to preserve aggregate accuracy.
"Differential privacy ensures that the removal or addition of a single database item does not significantly affect the outcome of any analysis. In this way, it provides a mathematically precise, strong notion of privacy protection." -- Cynthia Dwork and Aaron Roth, The Algorithmic Foundations of Differential Privacy, 2014
Real-World Implementations
Apple has used differential privacy since iOS 10 to collect aggregate usage statistics — which keyboard words are most common, which emoji are used most, which websites use the most battery — without being able to attribute any individual usage pattern to a specific user. Apple's implementation uses local differential privacy, where noise is added on the device before transmission, providing stronger privacy guarantees than server-side approaches.
The U.S. Census Bureau deployed differential privacy for the 2020 Census, using it to protect individual household records in published tabulations while preserving accuracy for aggregate statistics. The transition was controversial among researchers who relied on Census data, as the added noise reduced accuracy for small geographic areas and subpopulations. The episode illustrated that differential privacy involves genuine tradeoffs between privacy protection and data utility.
Google's RAPPOR system uses differential privacy to collect statistics from Chrome users' browser settings, and Google has used related techniques in its federated learning infrastructure. Federated learning is a machine learning approach where model training happens on devices rather than on centralized servers, keeping raw user data local while contributing only model updates — a privacy-preserving architecture that is increasingly used in healthcare and finance.
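The simplest local mechanism, and the ancestor of systems like RAPPOR, is randomized response: each device flips a biased coin before reporting, and the aggregator debiases the tally. The sketch below is a simplified classroom version, not Google's or Apple's production algorithm.

```python
import random

random.seed(0)

def report(true_bit: bool, p: float = 0.75) -> bool:
    """Each user reports truthfully with probability p, lies otherwise.

    This satisfies local differential privacy with epsilon = ln(p / (1 - p)).
    """
    return true_bit if random.random() < p else not true_bit

def estimate_rate(reports: list[bool], p: float = 0.75) -> float:
    """Debias the observed frequency to estimate the true population rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Simulate 100,000 users, 30% of whom truly have the sensitive attribute.
true_rate = 0.30
reports = [report(random.random() < true_rate) for _ in range(100_000)]
print(f"estimated rate: {estimate_rate(reports):.3f} (true rate {true_rate})")
```

No individual report is reliable (any single user can plausibly deny their answer), yet the aggregate estimate converges on the true rate, which is exactly the privacy-utility bargain local mechanisms offer.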
Why Privacy Is a Power Issue
Surveillance Capitalism
Shoshana Zuboff, professor emerita at Harvard Business School, developed the concept of surveillance capitalism in her 2019 book of the same name. Her argument is that the major internet platforms discovered a new economic logic: behavioral data generated by users' online activities has value not just for improving services but as raw material for predicting — and influencing — behavior. The product sold to advertisers is not just ad placement; it is access to a behavioral modification apparatus.
Zuboff distinguishes between behavioral data (what users do, willingly shared to improve services) and behavioral surplus (the excess data collected beyond what is needed for service improvement, analyzed to build predictive models and sold). The company that hosts your search queries needs some data to improve search quality; it does not need to track your physical location to do so. The behavioral surplus is the raw material of surveillance capitalism.
The power asymmetry this creates is significant: users generate the behavioral data, companies analyze it, and the resulting predictions are used to influence user behavior in ways users do not see and cannot easily resist.
Algorithmic Decision-Making and Discrimination
Data collected about individuals feeds automated decision-making systems that determine consequential life outcomes. Credit scoring models use purchase histories and behavioral data to predict creditworthiness. Insurance pricing algorithms incorporate data beyond traditional risk factors. Hiring screening systems use automated resume filtering and behavioral assessments. Advertising systems decide who sees job postings, housing advertisements, and loan offers.
Research by ProPublica's Julia Angwin and colleagues (2016) on the COMPAS criminal recidivism scoring system demonstrated that Black defendants were nearly twice as likely as white defendants to be falsely flagged as future criminals by an algorithmic tool used in bail and sentencing decisions. This finding sparked a broad scholarly debate about algorithmic fairness and the ways in which historical data, reflecting historical discrimination, can perpetuate and automate that discrimination.
Cathy O'Neil's 2016 book Weapons of Math Destruction documented similar dynamics across education, lending, and employment: models trained on data that reflects past inequalities, operating without transparency or accountability, systematically disadvantaging already-disadvantaged populations.
Privacy and Political Freedom
Surveillance enables repression at the political level. Knowing who communicates with whom, what people believe, and where they gather allows authoritarian governments to identify and preemptively suppress opposition. The same tools and data infrastructure that enable commercial behavioral targeting can be repurposed for political targeting.
Edward Snowden's 2013 revelations about NSA surveillance programs documented the scope of state surveillance in democratic countries: the collection of telephone metadata for hundreds of millions of Americans, the PRISM program accessing data from major technology companies, and the Five Eyes intelligence-sharing arrangement. The revelations prompted significant public debate, contributed to the passage of the USA FREEDOM Act (2015), and gave impetus to GDPR's development.
Journalist Glenn Greenwald, who reported on the Snowden revelations, argued that surveillance has a chilling effect on behavior even when people are doing nothing wrong — the knowledge of observation changes what people say, search, and associate with. This chilling effect is the mechanism by which surveillance constrains freedom even absent direct punishment.
Academic research has confirmed this intuition. Studies following the Snowden revelations documented measurable declines in Wikipedia searches for terrorism-related topics (Penney, 2016) — not because interest changed, but because people were afraid to be observed having such interests. Privacy protection, in this frame, is not about individual preference but about maintaining the structural conditions for free society: the ability to think, communicate, assemble, and dissent without systematic behavioral monitoring.
The Historical Roots of Privacy as a Right
The philosophical and legal case for privacy as a right has a longer history than the current surveillance debate might suggest.
Samuel Warren and Louis Brandeis published "The Right to Privacy" in the Harvard Law Review in 1890, responding to what they described as the "too enterprising press" and photographic technology that enabled individuals to be documented without consent. They argued that common law recognized an individual's "right to be let alone" — a formulation that influenced decades of privacy jurisprudence.
Alan Westin's 1967 book Privacy and Freedom provided a sociological framework: privacy is "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." This control-based definition maps directly onto contemporary data rights frameworks.
Daniel Solove's work, particularly Understanding Privacy (2008) and Nothing to Hide: The False Tradeoff Between Privacy and Security (2011), challenged the "nothing to hide" argument directly. Solove argued that privacy is not about secrecy but about preventing a range of specific harms: aggregation of individually innocuous data into sensitive profiles, surveillance that produces chilling effects, and loss of control that undermines autonomy even absent any specific harm.
Practical Takeaways
For individuals, understanding what rights apply in your jurisdiction creates some measure of practical control. Under GDPR, EU residents can submit data access requests, deletion requests, and portability requests to any organization processing their data. Under CCPA, California residents can opt out of data sales and request deletion. The Global Privacy Control browser signal is a standardized way to automatically communicate opt-out preferences to websites.
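On the receiving side, honoring the Global Privacy Control signal mostly amounts to checking one request header. The sketch below uses Flask and treats the Sec-GPC: 1 header defined by the GPC specification as an opt-out of sale and sharing; the record_opt_out function and the session identifier are hypothetical placeholders for whatever a real service would persist.

```python
from flask import Flask, request

app = Flask(__name__)

def record_opt_out(user_id: str) -> None:
    """Hypothetical: persist the opt-out so downstream sharing pipelines respect it."""
    print(f"opt-out recorded for {user_id}")

@app.route("/")
def index():
    # Per the GPC specification, a participating browser sends "Sec-GPC: 1".
    gpc = request.headers.get("Sec-GPC") == "1"
    if gpc:
        record_opt_out(user_id="anonymous-session")
    return {"gpc_opt_out": gpc}

if __name__ == "__main__":
    app.run(port=5000)
```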
Tools that reduce the volume of tracking data generated: browser extensions (uBlock Origin, Privacy Badger), privacy-focused search engines (DuckDuckGo, Brave Search), privacy-respecting email providers (Proton Mail, Fastmail), and VPNs from reputable providers each reduce exposure at the individual level.
The deeper point is structural. Meaningful privacy in the current environment requires both individual practice and collective action — regulatory pressure, enforcement, and norms that treat data collection with genuine scrutiny rather than passive acceptance. Individual opt-outs, however diligent, cannot address the systemic dynamics of surveillance capitalism. The aggregate of individual decisions creates the commercial and political infrastructure for surveillance. Changing it requires policy, not just personal hygiene.
References
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
- European Union. (2016). Regulation (EU) 2016/679 (GDPR). Official Journal of the European Union.
- California Attorney General. (2020). California Consumer Privacy Act: Text of Statute. California DOJ.
- Dwork, C., & Roth, A. (2014). "The algorithmic foundations of differential privacy." Foundations and Trends in Theoretical Computer Science, 9(3-4), 211-407.
- Valentino-DeVries, J., & Dance, G. (2019). "One nation, tracked." The New York Times Privacy Project, December 19.
- Cranor, L. F. (2012). "Necessary but not sufficient: Standardized mechanisms for privacy notice and choice." Journal on Telecommunications and High Technology Law, 10(2), 273-308.
- Reardon, J., Feal, Á., Wijesekera, P., Elazari Bar On, A., Vallina-Rodriguez, N., & Egelman, S. (2019). "50 ways to leak your data: An exploration of apps' circumvention of the Android permissions system." USENIX Security Symposium.
- Electronic Frontier Foundation. (2023). Government Purchases of Personal Data. EFF Deeplinks.
- Greenwald, G. (2014). No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State. Metropolitan Books.
- Solove, D. J. (2011). Nothing to Hide: The False Tradeoff Between Privacy and Security. Yale University Press.
- Irish Data Protection Commission. (2023). Decision on Meta (Facebook) Data Transfer. DPC.ie.
- Warren, S. D., & Brandeis, L. D. (1890). "The right to privacy." Harvard Law Review, 4(5), 193-220.
- Kosinski, M., Stillwell, D., & Graepel, T. (2013). "Private traits and attributes are predictable from digital records of human behavior." PNAS, 110(15), 5802-5805.
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). "Machine bias." ProPublica, May 23.
- O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
- Penney, J. W. (2016). "Chilling effects: Online surveillance and Wikipedia use." Berkeley Technology Law Journal, 31(1), 117-182.
- Federal Trade Commission. (2014). Data Brokers: A Call for Transparency and Accountability. FTC Report.
- Sweeney, L. (1997). "Weaving technology and policy together to maintain confidentiality." Journal of Law, Medicine and Ethics, 25(2-3), 98-110.
- Narayanan, A., & Shmatikoff, V. (2008). "Robust de-anonymization of large sparse datasets." IEEE Symposium on Security and Privacy.
- Westin, A. F. (1967). Privacy and Freedom. Atheneum.
Frequently Asked Questions
What is the difference between GDPR and CCPA?
GDPR (General Data Protection Regulation) is a European Union regulation that applies to any organization processing the personal data of EU residents, regardless of where the organization is located. It requires explicit, informed consent for data collection, gives individuals rights to access, correct, and delete their data, mandates data breach notification within 72 hours, and imposes significant fines for violations (up to 4% of global annual revenue). CCPA (California Consumer Privacy Act) is a California state law with similar principles but narrower scope: it applies only to businesses that meet certain revenue or data-volume thresholds, and it protects California residents. GDPR is generally considered more comprehensive and stringent; CCPA is seen as more business-friendly with broader exemptions.
What are data brokers and why are they a privacy concern?
Data brokers are companies that collect, aggregate, and sell personal data about individuals, often without those individuals' knowledge. They compile information from public records, social media, loyalty card programs, mobile apps, website tracking, and purchased datasets into detailed profiles covering names, addresses, phone numbers, income estimates, political views, health interests, relationship status, and behavioral patterns. Companies like Acxiom, Experian, and LexisNexis maintain profiles on hundreds of millions of people. This data is sold to advertisers, employers, insurers, landlords, and law enforcement. The privacy concern is that people have no meaningful ability to see, correct, or remove information that is used to make consequential decisions about them.
What is differential privacy?
Differential privacy is a mathematical framework for analyzing datasets in ways that protect individual privacy. It works by adding carefully calibrated statistical noise to results, ensuring that the output of an analysis does not change meaningfully whether or not any specific individual's data is included. This gives individuals a formal privacy guarantee: their participation in a dataset does not meaningfully increase the risk of their specific information being revealed. Apple uses differential privacy to collect aggregate usage statistics from devices, and the U.S. Census Bureau deployed differential privacy for the 2020 Census to protect individual records while preserving the accuracy of aggregate statistics. It represents the most rigorous technical approach to the privacy-utility tradeoff.
What is the right to be forgotten?
The right to be forgotten (formally called the 'right to erasure' in GDPR Article 17) gives individuals the right to request that an organization delete their personal data when it is no longer necessary for the original purpose, when consent is withdrawn, or when the data has been processed unlawfully. The concept gained major legal standing through the 2014 European Court of Justice ruling in Google Spain v. AEPD and Mario Costeja Gonzalez, which established that individuals could request search engines to de-index certain results about them. The right is not absolute — it can be overridden by public interest, freedom of expression, or legal obligations — but it represents a significant check on permanent digital records and has been exercised millions of times against major search engines.
Why is privacy described as a power issue rather than just a preference?
Privacy as a preference frames it as a matter of personal comfort — some people care about it, others do not. Privacy as a power issue recognizes that information asymmetry creates control. When companies or governments know far more about you than you know about them, they can predict and influence your behavior in ways you are unaware of. Shoshana Zuboff's concept of 'surveillance capitalism' describes how behavioral data is used not just to predict but to modify behavior at scale. At a political level, surveillance enables repression: knowing who communicates with whom, what people believe, and where they go allows authoritarian governments to identify and target dissidents. Privacy is a precondition for autonomy, political freedom, and the ability to develop ideas and identities without constant observation.