Google processes approximately 8.5 billion searches per day. For each one, it attempts to retrieve the most relevant, authoritative, and useful results from an index containing hundreds of billions of web pages — in a fraction of a second. The mechanism by which it does this has evolved continuously since Larry Page and Sergey Brin published their foundational paper in 1998, through hundreds of updates and the integration of large-scale machine learning, into a system of formidable complexity. Yet its core purpose has remained consistent: find and rank the pages most likely to satisfy the person searching.
Understanding how the algorithm works has obvious practical relevance for anyone who publishes content online. It also matters for understanding the information environment: Google's ranking decisions determine what information people find, which voices receive amplification, and which kinds of content are economically viable to produce. When Google updates its algorithm to reward longer, more comprehensive content, the web shifts accordingly. When it penalizes content farms, the media economics of cheap article production change. The algorithm is not neutral infrastructure — it actively shapes what the internet looks like.
This article traces the history of Google's algorithm from PageRank through the modern multi-signal, machine-learning-driven system. It explains what E-E-A-T actually means and how it is applied, reviews major algorithm updates and their effects, addresses common SEO myths, examines what Google is genuinely trying to optimize for at the level of intent, and discusses the practical implications for content creators, publishers, and businesses competing for organic visibility.
"Google is not trying to rank the best pages. It is trying to rank the pages that the greatest number of users will find most satisfying. These are related but not identical." — Common framing among search quality researchers
Key Definitions
PageRank: The original link-analysis algorithm developed by Larry Page and Sergey Brin, which measured a webpage's importance by the number and quality of links pointing to it. Still a component of Google's system, but now heavily supplemented by other signals.
E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness. Google's Search Quality Rater Guidelines framework for evaluating content quality, particularly for topics with high stakes for users.
Core update: A broad change to Google's ranking systems that can cause significant fluctuations in search rankings across many topics. Announced publicly but without specific guidance about what changed.
BERT: Bidirectional Encoder Representations from Transformers. A natural language processing model deployed by Google in 2019 to better understand the meaning of search queries, particularly long-tail and conversational queries.
Search intent: The underlying goal behind a search query — informational (wanting to learn), navigational (looking for a specific site), transactional (wanting to buy or download), or commercial investigation (researching before buying).
RankBrain: Google's first machine-learning ranking component, deployed in 2015, which helps interpret novel and ambiguous queries by relating them to semantically similar queries that have been seen before.
YMYL (Your Money or Your Life): Google's category for topics where inaccurate content could cause real harm to health, finances, legal standing, or safety. E-E-A-T standards are applied most rigorously to YMYL content.
Core Web Vitals: A set of real-world, user-centered performance metrics that Google uses as ranking signals: Largest Contentful Paint (loading performance), Cumulative Layout Shift (visual stability), and Interaction to Next Paint (interactivity responsiveness).
Zero-click search: A search session in which the user finds the answer directly on the search results page — through a featured snippet, knowledge panel, or other SERP feature — without visiting any website.
Google's Major Algorithm Updates Timeline
| Update | Year | Target | Effect |
|---|---|---|---|
| Panda | 2011 | Thin, low-quality, duplicate content | Penalized content farms; introduced site-wide quality signals |
| Penguin | 2012 | Manipulative link building | Penalized artificial links; transformed link building industry |
| Hummingbird | 2013 | Semantic search intent | Enabled understanding of conversational and long-form queries |
| Mobile-Friendly | 2015 | Mobile page usability | Boosted mobile-optimized pages in mobile search |
| RankBrain | 2015 | Novel query interpretation | Machine learning applied to ranking for the first time |
| BERT | 2019 | Natural language understanding | Deep contextual parsing of query meaning |
| Core Web Vitals | 2021 | Page experience signals | Loading speed, visual stability, interactivity added as signals |
| Helpful Content | 2022 | Search-engine-first content | Site-wide classifier down-ranking unhelpful, search-engine-first content |
| March 2024 Core | 2024 | Low-quality scaled content | Large declines for many high-volume content sites |
The PageRank Foundation
The Original Insight
Larry Page and Sergey Brin's key insight was that the structure of the web itself contained information about the relative importance of pages. Prior search engines relied primarily on keyword matching — how many times a search term appeared on a page. This was trivially gameable and produced low-quality results.
Page and Brin's 1998 paper, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," introduced PageRank as a way to measure a page's importance based on the link graph. The intuition was that a link from one page to another represents an editorial judgment — the linking page is effectively endorsing the linked page. Pages with more endorsements (links) are more important. But not all endorsements are equal: a link from a widely-endorsed page carries more weight than a link from an obscure one.
Formally, PageRank simulates a random web surfer who follows links at random, occasionally teleporting to a random page. The PageRank of each page is proportional to the probability that the surfer is on that page at any given time. Pages that many high-PageRank pages link to receive high PageRank scores in a self-consistent, iterative calculation.
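To make the iteration concrete, here is a minimal power-iteration sketch in Python. The toy graph, the damping factor of 0.85 (the value used in the original paper), and the convergence threshold are all illustrative; production systems run over graphs with hundreds of billions of nodes using sparse-matrix machinery, so treat this as the idea, not the implementation.

```python
def pagerank(links, damping=0.85, tol=1e-9, max_iter=100):
    """Compute PageRank by power iteration over a link graph.

    links: dict mapping each page to the list of pages it links to.
    damping: probability the random surfer follows a link rather than
             teleporting to a uniformly random page.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution

    for _ in range(max_iter):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # dangling page: spread its rank uniformly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        if sum(abs(new_rank[p] - rank[p]) for p in pages) < tol:
            return new_rank
        rank = new_rank
    return rank

# Toy graph: B and C both endorse A; A endorses only C.
print(pagerank({"A": ["C"], "B": ["A"], "C": ["A"]}))
```

A ends up with the highest score not because of anything on the page itself but because the pages linking to it carry rank of their own: the "endorsement from an endorsed page" property falls directly out of the iteration.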
Why PageRank Was a Breakthrough
The innovation was not just technical — it was epistemological. PageRank used the collective judgment encoded in the link graph as a quality signal, turning the web's structure into a distributed editorial process. This meant that to rank well in early Google, you needed to produce content that real sites would actually link to — which was harder to fake than keyword stuffing.
The result was a dramatic improvement in search quality relative to contemporary competitors. Google's results in 1998-2002 were noticeably better than those of AltaVista, Yahoo, or Ask Jeeves for most queries, and this quality advantage drove rapid adoption. Within four years of its founding, Google had become the dominant search engine globally.
PageRank remains part of Google's system, though its weight relative to other signals has changed significantly. Google no longer publicly reports PageRank scores for individual pages, having retired the public PageRank toolbar in 2016.
The Limitations PageRank Revealed
PageRank's elegance also contained its vulnerabilities. Because it treated inbound links as editorial votes, anyone who could acquire links — through purchases, exchanges, or manufactured networks — could artificially inflate a page's apparent importance. By the mid-2000s, link manipulation had become a substantial industry, with thousands of businesses offering paid links, private blog networks, and comment-spam campaigns designed to accumulate PageRank rather than reflect genuine editorial endorsement.
This arms race between link manipulators and Google's detection systems defined much of search optimization from approximately 2003 to 2012, and its resolution required a fundamental rethinking of how link signals were interpreted and weighted.
How the Algorithm Has Evolved
Panda (2011): Penalizing Thin Content
By 2011, a significant portion of the web consisted of low-quality "content farms" — sites that produced large volumes of thin, keyword-stuffed articles designed to rank rather than to be useful. Demand Media's eHow, Associated Content, and similar properties published millions of pages that ranked highly for specific search terms while providing little genuine value.
The Panda update in February 2011 targeted this directly, introducing quality signals that penalized sites with large proportions of low-quality, thin, or duplicated content. The update reduced organic search visibility for many content farm sites dramatically and had a significant effect on the media economics of low-quality article production. Demand Media's stock price fell approximately 40 percent in the weeks following Panda, a measure of how completely the update disrupted the economics of content farm publishing.
Panda introduced the concept of site-wide quality signals — a site with a large proportion of poor content could see all of its pages downranked, not just the specific poor-quality pages. This created an incentive to maintain quality across a domain rather than allowing low-quality content to accumulate. It remains one of the most consequential single updates in Google's history.
Penguin (2012): Penalizing Manipulative Links
PageRank's reliance on links had created a thriving industry of artificial link building — comment spam, paid links, private blog networks, and link exchanges designed to manipulate PageRank rather than reflect genuine editorial endorsement.
The Penguin update in April 2012 targeted link manipulation, applying penalties to sites with large proportions of clearly artificial inbound links. Combined with the introduction of the Google Disavow Tool, Penguin fundamentally changed the link building industry: toxic link building became not just ineffective but actively harmful.
The update was initially applied as a periodic refresh (sites could recover only when Penguin recrawled and recalculated). Google moved Penguin to real-time processing in 2016, meaning recoveries and penalties now happen continuously as Google recrawls the web.
The crackdown that culminated in Penguin also produced one of the most significant SEO case studies in the industry: the JC Penney link scheme uncovered by The New York Times in February 2011, more than a year before Penguin itself, in which the retailer had paid for thousands of manipulative links and ranked first for hundreds of competitive terms as a result. Following media exposure and a Google manual action, JC Penney's rankings dropped overnight. The episode illustrated both the effectiveness of large-scale link manipulation and the severity of Google's penalty response.
Hummingbird (2013): Understanding Intent
Hummingbird in August 2013 was less a penalty-driven update than a fundamental re-architecture of how Google processed queries. It replaced the core query parsing system with one capable of understanding natural language — the meaning of a query as a whole, not just the individual keywords.
This enabled Google to handle conversational queries ("what's the closest pizza place open now"), multi-part questions, and queries where the important words are not the keywords themselves ("what do I need to bring to a job interview" — the intent, not the word "interview," is what Google needs to match). Hummingbird was the foundation on which subsequent language understanding advances (RankBrain, BERT, MUM) were built.
BERT (2019) and MUM (2021): Language Understanding
RankBrain in 2015 was Google's first use of machine learning to process search queries, particularly novel queries that had not been seen before. It helped Google better match queries to relevant pages even when the exact query terms were not present in the page text.
BERT (Bidirectional Encoder Representations from Transformers), deployed in late 2019, represented a more fundamental shift. BERT is a large language model that understands context and meaning in language far more deeply than keyword-based approaches. It enables Google to parse the meaning of queries more accurately — understanding that "can you get medicine for someone pharmacy" is asking about pharmacy policies, not about the word "medicine" in isolation. Google described BERT as affecting one in ten English-language queries at launch.
MUM (Multitask Unified Model), announced in 2021, is described by Google as 1,000 times more powerful than BERT and capable of understanding information across text, images, and video simultaneously. It is used for complex, multi-step queries and can draw on information published in multiple languages.
The practical implication of these language model integrations is significant: Google now understands queries semantically, not syntactically. Writing content that "matches keywords" is no longer the operative model of how pages connect to queries. What matters is whether a page addresses the topic comprehensively enough that the language model can represent it as an answer to the range of queries the topic encompasses.
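A rough way to see the difference between syntactic and semantic matching is to compare a query against candidate pages by embedding similarity. The sketch below uses the open-source sentence-transformers library and the small all-MiniLM-L6-v2 model purely as a stand-in: Google's internal models are proprietary and far larger, so this illustrates the principle, not Google's system.

```python
# pip install sentence-transformers
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open model, stand-in only

query = "can you get medicine for someone pharmacy"
pages = [
    "Pharmacy policies on picking up a prescription for another person",
    "The history of medicine from antiquity to the present day",
]

vectors = model.encode([query] + pages)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for page, vec in zip(pages, vectors[1:]):
    print(f"{cosine(vectors[0], vec):.3f}  {page}")
# The prescription-pickup page should score noticeably higher: the model
# matches the query's intent, not just its vocabulary.
```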
The Helpful Content System (2022-2024)
Google's Helpful Content Update in August 2022, and its expansion through 2023, introduced a new system specifically designed to reward "content created for people" and down-rank "content created for search engines." The update targeted what Google called "search engine-first" content — articles written to match search query formats and word counts rather than to genuinely serve readers.
The system introduced a site-wide classifier (similar in concept to Panda's site-wide signal) that could affect a domain's overall visibility based on the proportion of content that appeared to be primarily search-engine-optimized rather than genuinely helpful. The March 2024 core update significantly expanded this system, causing substantial ranking volatility and significant declines for many sites that had relied on high-volume AI-generated or formulaic content production.
The update's effect was most pronounced on sites that had grown primarily through SEO-optimized content rather than brand reputation or direct audience relationships — a signal that Google was explicitly devaluing the link between content volume and search visibility.
Independent analysis by Lily Ray of Amsive found that the March 2024 core update produced ranking declines of 50 percent or more for dozens of large content sites that had relied on SEO-first production models. Several sites with hundreds of thousands of pages saw near-total loss of organic search visibility, the most dramatic illustration of site-wide quality signals in action since Panda.
E-E-A-T: What It Actually Means
The Quality Rater Guidelines
Google works with approximately 10,000 to 16,000 external Search Quality Raters: human evaluators who assess search results using a set of guidelines that Google publishes publicly. Quality Raters do not directly change search rankings; their evaluations are used as training data for the machine learning systems that do affect rankings.
The Quality Rater Guidelines articulate the E-E-A-T framework as the primary lens for evaluating content quality. The four components are:
- Experience (added in December 2022): whether the content creator has direct, first-hand experience with the subject — a product review written by someone who has actually used the product versus one written from secondary sources.
- Expertise: subject-matter knowledge relevant to the topic — a financial article written by someone with demonstrated finance background versus a generalist.
- Authoritativeness: reputation and recognition within the field — an institution or author that other authoritative sources cite and reference.
- Trustworthiness: the foundational factor — accuracy, transparency about sources, clear editorial policies, secure and properly functional site.
Trustworthiness is weighted most heavily. A highly expert author who regularly publishes inaccurate information is less trustworthy than a moderately expert author who is scrupulously accurate.
How E-E-A-T Signals Are Detected
A common misunderstanding is that E-E-A-T is a direct ranking factor — a score Google computes and applies to individual pages. The more accurate description is that E-E-A-T is a quality framework that Google's machine learning systems are trained to approximate by detecting signals that correlate with it.
These signals include: author bylines and biographical information linked to a subject-matter profile; structured data markup identifying author credentials; inbound links from authoritative sources in the same domain of knowledge; press mentions and citations from recognized institutions; editorial transparency pages (about us, editorial policy, corrections process); accurate contact information and physical addresses; and the consistency and accuracy of factual claims across the page.
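The structured-data item on that list is the most concrete. The sketch below emits schema.org Article markup with author credentials as JSON-LD, the format Google's documentation recommends for machine-readable metadata; every name and URL here is a placeholder, and markup is an input Google can read, not a ranking guarantee.

```python
import json

# Illustrative schema.org Article markup; all names and URLs are placeholders.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Understanding 401(k) Rollover Rules",
    "datePublished": "2024-03-01",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Certified Financial Planner",
        "url": "https://example.com/authors/jane-doe",
        "sameAs": [  # profiles that corroborate the author's identity
            "https://www.linkedin.com/in/janedoe",
        ],
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Finance",
        "url": "https://example.com",
    },
}

# Embed in the page head as: <script type="application/ld+json"> ... </script>
print(json.dumps(article_markup, indent=2))
```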
For practical purposes, E-E-A-T signals are more detectable — and more consequential for rankings — on competitive YMYL topics than on low-stakes informational queries. A recipe blog does not need the same level of demonstrated expertise as a medical information site; a travel destination guide operates under different quality expectations than a financial advice page.
YMYL Topics
The E-E-A-T framework is applied with particular rigor to YMYL (Your Money or Your Life) topics — areas where inaccurate or untrustworthy content could have significant negative effects on users' health, finances, legal standing, or safety. Medical advice, financial guidance, legal information, safety-critical content, and news about important public matters are all YMYL categories.
For YMYL topics, Google's guidelines explicitly call for high standards of expertise and authority. A medical article written by an anonymous author with no credentials will receive a lower quality rating than one written by a medical professional, affiliated with a recognized medical institution, with clear sourcing. This is why established health information sites — Mayo Clinic, NHS, Cleveland Clinic — tend to rank highly for health queries while unbranded content farm articles typically rank less well.
The introduction of "Experience" to the existing E-A-T framework (making it E-E-A-T) was notable. It addressed a specific problem: genuinely expert authors who write about things they have not personally experienced versus less credentialed authors who have direct first-hand accounts. For certain queries — product reviews, travel recommendations, practical how-to guides — first-hand experience has signal value independent of formal expertise credentials.
What Google Is Actually Optimizing For
Stated Goals vs. Practical Reality
Google's stated mission is to "organize the world's information and make it universally accessible and useful." In search, this translates to: return results that satisfy user needs. The company describes its ranking systems as designed to reward content that is "helpful, reliable, people-first."
The practical measurement of this goal is user satisfaction signals: whether searchers click on a result and stay on the page (or return to search quickly, indicating the result was not satisfying — "pogosticking"), whether they refine their search after seeing results (indicating the initial results were not helpful), and implicit signals from click patterns and session behavior.
Google explicitly states it uses behavioral signals to evaluate the quality of its ranking systems in aggregate — not to rank individual pages (which would be gameable). But the signals that correlate with user satisfaction in aggregate influence the machine learning systems that do affect rankings.
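As a back-of-envelope illustration of aggregate evaluation, the sketch below computes a crude satisfaction proxy from hypothetical session logs: a click followed by a quick return to the results page (a pogostick) counts against a result, anything else counts for it. The log format, the 30-second threshold, and the metric itself are invented for illustration; Google has never published its actual signal definitions.

```python
from collections import defaultdict

# Hypothetical session log: (query, clicked_url, dwell_seconds, returned_to_serp)
sessions = [
    ("best running shoes", "https://a.example/review", 240, False),
    ("best running shoes", "https://b.example/listicle", 8, True),
    ("best running shoes", "https://b.example/listicle", 12, True),
    ("best running shoes", "https://a.example/review", 180, True),
]

LONG_CLICK_SECONDS = 30  # invented threshold, purely illustrative

def satisfaction_rate(log):
    """Per-URL fraction of clicks that do not look like pogosticking."""
    counts = defaultdict(lambda: [0, 0])  # url -> [satisfied, total]
    for _query, url, dwell, returned in log:
        pogostick = returned and dwell < LONG_CLICK_SECONDS
        counts[url][0] += 0 if pogostick else 1
        counts[url][1] += 1
    return {url: sat / total for url, (sat, total) in counts.items()}

print(satisfaction_rate(sessions))
# {'https://a.example/review': 1.0, 'https://b.example/listicle': 0.0}
```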
Search Intent Classification
A critical component of what Google is optimizing for is search intent — matching not just the words in a query but the underlying task the searcher is trying to accomplish. Google classifies queries into four primary intent types:
Informational intent: The searcher wants to learn something. "How does photosynthesis work," "what is a 401k," "history of the Roman Empire." The best results for informational queries are comprehensive, accurate, and well-structured for reading. Featured snippets and Knowledge Panels surface key information directly on the SERP for simple informational queries.
Navigational intent: The searcher wants to reach a specific site or page. "Facebook login," "Nike official website," "Gmail." The best result is the specific site they are looking for. Google typically returns it at the top of results regardless of optimization signals.
Transactional intent: The searcher wants to buy or download something. "Buy running shoes," "download Photoshop," "iPhone 16 Pro deals." The best results include Shopping listings, brand pages, and major retailer product pages. High commercial relevance and product availability signals matter significantly here.
Commercial investigation: The searcher is researching before buying. "Best running shoes for flat feet," "iPhone vs Samsung 2024," "Klaviyo review." The best results combine editorial expertise with practical specificity. Comparison guides, expert reviews, and in-depth product analyses tend to rank well.
Understanding which intent type applies to a target query is a prerequisite for producing content that can rank for it. A long-form informational guide ranks well for "how to start a podcast" (informational intent) but will typically not appear for "buy podcast microphone" (transactional intent), regardless of how comprehensive it is.
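A toy rule-of-thumb classifier makes the taxonomy concrete. Real intent classification is learned from user behavior at scale rather than keyword lists, so the trigger patterns below are invented purely for illustration.

```python
import re

# Invented trigger patterns; real systems learn intent from behavior at scale.
INTENT_PATTERNS = [
    ("transactional", r"\b(buy|download|deal|deals|coupon|price|cheap|order)\b"),
    ("commercial",    r"\b(best|top|review|reviews|vs|versus|compare|alternative)\b"),
    ("navigational",  r"\b(login|sign in|official site|official website|homepage)\b"),
    ("informational", r"\b(how|what|why|when|who|guide|tutorial|history|definition)\b"),
]

def classify_intent(query: str) -> str:
    q = query.lower()
    for intent, pattern in INTENT_PATTERNS:
        if re.search(pattern, q):
            return intent
    return "informational"  # default bucket for ambiguous queries

for q in ["buy running shoes", "best running shoes for flat feet",
          "facebook login", "how does photosynthesis work"]:
    print(f"{q!r:42} -> {classify_intent(q)}")
```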
The Commercial Dimension
Search is Google's core revenue driver, generating the majority of Alphabet's annual revenue through advertising. Organic search rankings are adjacent to, but distinct from, paid search results, and Google maintains that organic ranking cannot be purchased; there is no credible evidence to the contrary.
However, the commercial dimension of search shapes the results page in less direct ways. Google has steadily expanded the space occupied by paid results and knowledge panels above organic results, reducing click-through rates to organic positions. Featured snippets, Local Packs, Shopping results, and People Also Ask boxes all appear above traditional organic results for many queries, changing what "ranking first" actually means for visibility.
A widely shared SparkToro analysis by Rand Fishkin, based on 2022 and 2023 clickstream data, estimated that approximately 58 percent of US Google searches in 2022 resulted in zero clicks: the user got the answer from the SERP (Search Engine Results Page) itself without visiting any website. That proportion had grown steadily as Google expanded SERP features, and the zero-click trend has significant implications for publishers relying on organic search traffic.
The Current Landscape: AI and the Future of Search
Google's AI Overviews
In May 2024, Google launched AI Overviews (previously called Search Generative Experience) in the United States, deploying large language model-generated summaries at the top of search results for a significant proportion of queries. AI Overviews represent the most significant change to Google's results page format since the introduction of featured snippets in 2014.
Early analysis by Semrush, BrightEdge, and others found that AI Overview prevalence varied significantly by query type: informational queries in health, finance, and how-to categories showed the highest AI Overview rates, while transactional and navigational queries showed lower rates. Sites cited within AI Overviews do not necessarily gain significant organic traffic from the citation — several publishers reported that AI Overview citations generated negligible click-through despite appearing prominently in results.
The implications for content strategy are still emerging, but early evidence suggests that the queries most likely to be answered by AI Overviews are also the queries with highest informational intent — the same queries that historically drove the most organic traffic to informational content sites. This development represents a structural challenge to content businesses that monetize through organic search traffic, regardless of content quality.
What Remains Durable
Despite the acceleration of change in Google's systems, several principles have remained consistent across twenty-five years of algorithm evolution: content that genuinely addresses user needs outperforms content that mimics the surface signals of useful content. Earned links from authoritative sources outperform manufactured links. Technical performance signals (page speed, mobile usability, stable layout) matter at the margins. Demonstrated expertise and trustworthiness matter significantly for high-stakes topics.
The trajectory of updates — from Panda forward through Helpful Content — has been consistently in the direction of raising the floor for what constitutes rankable content. Each major update has made it harder to rank through surface optimization and easier to rank through genuine quality. The direction has been consistent even when individual updates have been unpredictable.
Common SEO Myths Debunked
Myth: Social media shares boost rankings. Google has stated explicitly that social signals (shares, likes, followers) are not a direct ranking factor. Social engagement may indirectly generate links, which do affect rankings, but social signals themselves are not used.
Myth: Longer content always ranks better. Length is not a ranking factor. The appropriate length depends entirely on what the topic requires to be genuinely answered. Padded content that is long without being useful is exactly what the Helpful Content System targets.
Myth: Exact keyword density matters. Google's language models have moved entirely beyond keyword counting. Writing naturally about a topic, using related terms and covering the subject comprehensively, is what signals topical relevance — not hitting a specific keyword percentage.
Myth: Domain age is a major ranking factor. While an established domain with a history of links has some advantage, domain age itself is a weak signal. New domains can rank quickly for topics they cover comprehensively and authoritatively.
Myth: Meta keywords affect rankings. Google confirmed publicly in 2009 that it ignores the meta keywords tag for ranking, and it had done so for years before that. Meta descriptions do not directly affect rankings either, though they influence click-through rates.
Myth: You must submit your sitemap to rank. Google crawls the web autonomously and will discover pages through links. Sitemaps help with efficient crawling and indexing of large sites or newly published content, but they are not prerequisites for appearing in search results.
Myth: Publishing more content faster improves rankings. The Helpful Content System explicitly targets sites that publish faster than they can maintain quality. A smaller number of genuinely comprehensive, well-researched pages consistently outperforms a large volume of thin, quickly produced pages on the same domain.
Practical Takeaways
For content publishers, the clearest guidance from Google's documented systems and algorithm history points in a consistent direction: produce content that genuinely addresses what searchers are looking for, demonstrate clear expertise and trustworthiness (especially for YMYL topics), earn links through content quality rather than manipulation, and ensure the technical experience of the page meets modern performance standards (Core Web Vitals: Largest Contentful Paint, Cumulative Layout Shift, Interaction to Next Paint).
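For the Core Web Vitals piece, the Chrome UX Report (CrUX) API exposes the same field data that feeds Google's page experience signals. The sketch below is a minimal query, assuming an API key from the Google Cloud console; the endpoint and metric names follow the public CrUX documentation, but verify them against the current docs before depending on this.

```python
# pip install requests
import requests

API_KEY = "YOUR_API_KEY"  # obtain from the Google Cloud console
ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def core_web_vitals(origin: str) -> dict:
    """Fetch p75 field values for the three Core Web Vitals metrics."""
    response = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        json={
            "origin": origin,
            "metrics": [
                "largest_contentful_paint",
                "cumulative_layout_shift",
                "interaction_to_next_paint",
            ],
        },
        timeout=10,
    )
    response.raise_for_status()
    metrics = response.json()["record"]["metrics"]
    return {name: data["percentiles"]["p75"] for name, data in metrics.items()}

print(core_web_vitals("https://example.com"))
# e.g. {'largest_contentful_paint': 2100, 'cumulative_layout_shift': '0.05',
#       'interaction_to_next_paint': 180}
```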
The most durable SEO strategy is alignment with what Google is actually trying to measure — user satisfaction — rather than attempts to optimize for current algorithmic signals that may be updated. Every significant core update has moved in the direction of rewarding content that is genuinely useful to humans and penalizing content optimized primarily for algorithmic signals. The trajectory is consistent even when individual updates are unpredictable.
Understanding that E-E-A-T signals (authorship credentials, institutional affiliation, editorial standards, source citations) are increasingly measurable by algorithms — through structured data, author pages, citations, and link patterns — means that demonstrating expertise in discoverable ways has both practical and strategic value for content that wants to rank on competitive, high-stakes topics.
The emergence of AI Overviews introduces new uncertainty for publishers whose model depends on organic search traffic from informational queries. The most resilient position is building a direct audience relationship — email lists, subscriptions, brand recognition — that does not depend entirely on Google sending traffic. Search visibility matters, but dependence on any single traffic source is structurally fragile regardless of current algorithm dynamics.
References
- Brin, S., & Page, L. (1998). "The anatomy of a large-scale hypertextual web search engine." Computer Networks and ISDN Systems, 30(1-7), 107-117.
- Google. (2023). Search Quality Rater Guidelines. Google LLC.
- Sullivan, D. (2022). "Google's helpful content update." Google Search Central Blog, August 18.
- Cutts, M. (2012). "Another step to reward high-quality sites." Google Webmaster Central Blog, April 24. (Penguin announcement)
- Google. (2019). "Understanding searches better than ever before (BERT)." Google Blog, October 25.
- Nayak, P. (2021). "A breakthrough for understanding searches." Google Blog, May 18. (MUM announcement)
- Google. (2023). How Google Search Works: Core Ranking Systems. developers.google.com.
- Fishkin, R. (2023). Zero-Click Searches Study: 2022 and 2023 Data. SparkToro Research.
- SparkToro & Datos. (2023). Zero-Click Searches: The Growing Trend and What It Means for SEO. SparkToro Research.
- Google. (2021). Introducing the Multitask Unified Model. ai.googleblog.com.
- Illyes, G. (2016). "Rolling out mobile-friendly update." Google Webmaster Central Blog.
- Singhal, A. (2011). "Finding more high-quality sites in search." Google Blog, February 24. (Panda announcement)
- Semrush. (2024). AI Overviews: Impact on Click-Through Rates and Organic Traffic. semrush.com.
- Ray, L. (2024). March 2024 Core Update: Analysis of Impacted Sites. Amsive Research.
- Google. (2021). Evaluating Page Experience for a Better Web. developers.google.com. (Core Web Vitals announcement)
- Schwartz, B. (2023). "JC Penney paid for links that duped Google." Search Engine Land.
- Sullivan, D. (2024). "Google introduces AI Overviews in Search." Google Search Central Blog, May 14.
Frequently Asked Questions
What is PageRank and how did it work originally?
PageRank, developed by Larry Page and Sergey Brin in 1998, measured a page's importance by the number and quality of links pointing to it — treating each link as an editorial endorsement weighted by the linking page's own authority. It was a breakthrough because it used the web's own link structure as a distributed quality signal, making results far harder to game than keyword-count approaches.
What is E-E-A-T and how does it affect rankings?
E-E-A-T stands for Experience, Expertise, Authoritativeness, and Trustworthiness — the framework Google's Search Quality Raters use to evaluate content quality, especially for health, finance, and legal topics where inaccurate content causes real harm. Rater assessments are used to train the ranking systems, so E-E-A-T signals (author credentials, institutional affiliation, source citation, accuracy) indirectly shape what ranks.
What are Google core updates and how often do they happen?
Core updates are broad changes to Google's ranking systems — typically two to four per year — that can cause significant ranking shifts across many topics simultaneously. Unlike targeted updates (spam, helpful content), core updates represent Google reassessing how it evaluates overall quality, and Google does not provide site-specific guidance on what to fix after one.
Does keyword stuffing still work for SEO?
No. Keyword stuffing has violated Google's spam policies since long before Panda (2011), and it is doubly irrelevant now that Google's BERT and MUM language models understand meaning rather than keyword frequency. Writing naturally about a topic and covering its substance comprehensively is what signals relevance to modern Google.
How many signals does Google use to rank pages?
Google uses hundreds of signals, including content quality, link authority, Core Web Vitals performance, mobile-friendliness, structured data, and query-specific intent matching. Machine learning systems such as RankBrain (2015), BERT (2019), and MUM (2021) interpret query and content meaning, so no single signal dominates and the system resists simple optimization.