On the evening of October 29, 1969, a student at UCLA typed the letters 'l' and 'o' on a terminal connected to a new experimental network. The message was supposed to read 'login.' Instead, the system at the other end — Stanford Research Institute, 350 miles away — crashed after receiving those first two characters. The first message ever sent over ARPANET, the network that would eventually become the internet, was an accidental fragment: 'lo.' It reads, in retrospect, like an omen. What followed that crashed terminal would be one of the most consequential technological developments in human history — a system that has remade commerce, politics, culture, knowledge, and the texture of daily life on a scale that its creators did not remotely anticipate.

The history of the internet is not a single story. It is several overlapping stories: a Cold War military communications experiment; an academic project in distributed computing; a commercial explosion that produced both extraordinary wealth and extraordinary dysfunction; a social revolution that put publishing power in everyone's hands; and an ongoing struggle between competing visions of what the network should be and who should control it. Understanding where the internet came from illuminates both its design characteristics — why it is decentralised, why it was built without security — and the tensions that shape its present and future.

We are now more than five decades past that first crashed transmission. The internet has more than five billion users. It carries an incomprehensible volume of human communication, commerce, and creative work. It has also concentrated enormous power in a small number of technology companies, enabled surveillance at a scale previously impossible, accelerated the spread of misinformation, and disrupted institutions that took centuries to build. The internet did not promise any of these outcomes. It simply provided a substrate and let humanity loose on it. The result has been everything human beings are, compressed and accelerated.

"The web is more a social creation than a technical one. I designed it for a social effect — to help people work together — and not as a technical toy." -- Tim Berners-Lee, 'Weaving the Web' (1999)


Development | Year | Significance
Paul Baran's distributed networks papers | 1962 | Conceptual foundation of packet-switched, resilient networks
ARPANET first message | 1969 | First packet-switched network; predecessor of the internet
Email invented (Ray Tomlinson) | 1971 | First killer application of networked computing
TCP/IP protocol described | 1974 | Standard language enabling networks to interconnect
DNS (domain names) | 1983 | Human-readable addresses replace numeric IPs
World Wide Web proposed | 1989 | Berners-Lee's hypertext proposal at CERN
World Wide Web goes public | 1991 | First website; hypertext links; accessible to anyone
Mosaic browser released | 1993 | First graphical browser to reach a mass audience; web goes mainstream
Netscape IPO | 1995 | Dot-com era begins; commercial internet investment
Google founded | 1998 | Search transforms web navigation
Dot-com crash (NASDAQ peak) | 2000 | Roughly $5 trillion in market value destroyed
Broadband and Web 2.0 | 2004-2008 | Social media; YouTube; user-generated content
iPhone introduction | 2007 | Mobile internet; always-on connectivity
Mobile surpasses desktop globally | 2014 | Majority of internet access via smartphones
GDPR takes effect (EU) | 2018 | Comprehensive EU data-protection rules; data rights

Key Definitions

Packet switching: The communication method, developed independently by Paul Baran at RAND (in a series of papers beginning in 1962) and Donald Davies at the UK's National Physical Laboratory (from 1965), in which data is broken into discrete packets that travel independently across a network and are reassembled at the destination. Packet switching made resilient, decentralised networks possible and remains the foundational principle of internet data transmission.
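
The mechanism can be sketched in a few lines of Python. Everything in the sketch below is illustrative rather than any real protocol: a message is split into numbered packets, the packets arrive in arbitrary order as if each had been routed independently, and the receiver restores the original message by sequence number.

```python
import random

# Illustrative packet switching: split a message into numbered packets,
# deliver them out of order, reassemble by sequence number. The packet
# size and message are arbitrary choices for the demonstration.
def to_packets(message: str, size: int = 8):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets) -> str:
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets("independent packets, reassembled at the destination")
random.shuffle(packets)  # simulate packets taking different routes
assert reassemble(packets) == "independent packets, reassembled at the destination"
```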

TCP/IP: The Transmission Control Protocol / Internet Protocol, developed by Vint Cerf and Bob Kahn and described in their landmark 1974 paper. TCP/IP is the technical standard that allows different networks to communicate with each other — the 'internet protocol' in the literal sense of connecting separate networks into an interconnected whole. Its adoption as a universal standard is what transformed multiple separate networks into a single internet.

The World Wide Web: Invented by Tim Berners-Lee at CERN in 1989 and first made available to the public in 1991, the web is a system of interlinked documents (web pages) accessed via the internet using hypertext links and a standard addressing system (URLs). The web is not the internet — it is one application running on the internet — but for most users they are functionally synonymous.

Web 2.0: The second phase of web development, roughly 2004-2012, characterised by user-generated content, social networking, interactive applications, and the shift from read-only to read-write web experiences. The term was popularised by Tim O'Reilly and captures the shift from websites as broadcast media to platforms as participatory spaces.

Platform economy: The economic model in which technology companies provide infrastructure (platforms) on which third parties create value, with the platform company capturing a share of that value. Google, Facebook, Amazon, Apple, and Microsoft exemplify the platform model. The platform economy has produced enormous wealth concentration and has been the central economic consequence of the Web 2.0 era.

Net neutrality: The principle that internet service providers should treat all internet traffic equally, regardless of its source, destination, or content type. Net neutrality has been one of the most contested regulatory questions in internet governance, with advocates arguing that it preserves the open, permissionless character that made the internet innovative, and opponents arguing that differentiated service quality is commercially necessary and economically efficient.

Surveillance capitalism: A term coined by Shoshana Zuboff in 'The Age of Surveillance Capitalism' (2019) to describe the economic logic underpinning the dominant internet business model. In this model, human experience is the raw material: platforms extract behavioural data from users and process it into predictions of future behaviour, which are sold to advertisers. Zuboff argues this represents a new economic form distinct from industrial capitalism, with profound implications for human autonomy.


The Military Origins: ARPANET (1969)

Cold War Logic

The intellectual foundations of the internet emerged from a specific Cold War anxiety. Beginning in 1962, RAND Corporation analyst Paul Baran wrote a series of papers for the US Air Force studying how military communications could survive a nuclear attack. Baran's key insight was that centralised communication networks — with a hub-and-spoke structure where all traffic routes through a few central nodes — are catastrophically vulnerable to targeted attack. A decentralised, distributed network, where messages can route around destroyed nodes, would be far more resilient.

This insight, combined with the packet-switching work of Baran and Davies, provided the conceptual architecture for what became ARPANET. The Defense Department's Advanced Research Projects Agency funded a network connecting computers at four research sites in 1969: UCLA, the Stanford Research Institute, UC Santa Barbara, and the University of Utah. The first message was sent on October 29, 1969. By 1973, ARPANET had 35 nodes and had been demonstrated to work across continents.

The Cold War origin story, while accurate, can obscure another crucial element of ARPANET's origins: the academic research community. The researchers who built and used ARPANET were not primarily motivated by military requirements. They were computer scientists exploring the possibilities of distributed computing, time-sharing, and computer-to-computer communication. Hafner and Lyon's authoritative history 'Where Wizards Stay Up Late' (1996) documents a culture of open collaboration and shared technical development that was, in many respects, the opposite of a military programme.

Email: The First Killer Application

In 1971, Ray Tomlinson, a programmer working on ARPANET at Bolt, Beranek and Newman, implemented the first email system — sending messages between different computers on the network by using the @ symbol to separate the user's name from the machine's name. The choice of @ was, Tomlinson later recalled, somewhat arbitrary; it happened to be available on the keyboard. Email rapidly became the dominant application on ARPANET, accounting for most network traffic by 1973. The asynchronous, text-based, addressable character of email established patterns of digital communication that persist in every subsequent messaging system.

The Accidental Openness

One of the most consequential design choices in internet history was scarcely deliberate. ARPANET's designers created an open system because openness made it easy for new nodes to connect. There was no authentication system, no security architecture, no mechanism for controlling who could join. The network was designed for a small community of trusted researchers. Security was simply not a design requirement, because the designers could not have imagined the network being used by the entire world.

This original openness is the source of many of the internet's most persistent problems — spam, phishing, distributed denial-of-service attacks, and the difficulty of verifying identity online — and also of many of its greatest strengths: low barriers to entry, interoperability across systems, and the absence of gatekeepers that allowed the explosive innovation of the 1990s and 2000s.


TCP/IP and the Birth of the Internet (1974)

Cerf and Kahn

In 1974, Vint Cerf and Bob Kahn published 'A Protocol for Packet Network Intercommunication' in IEEE Transactions on Communications. This paper described TCP/IP, the protocol that would eventually allow different networks to communicate with each other and that defines the internet in the technical sense: not a single network but a network of networks connected by a common protocol.

The genius of TCP/IP was its neutrality: it defined how data should be packaged and addressed without dictating anything about the physical network carrying it, the applications running on top of it, or the content it carried. This neutrality was architecturally embodied in what became known as the end-to-end principle — the idea that the intelligence in a network should reside at the edges (in the computers connecting to the network) rather than in the network itself. This design choice, advocated by Cerf and Kahn and later formalised by the network theorists Jerome Saltzer, David Reed, and David Clark, produced the innovation-friendly network architecture that allowed the web, email, streaming video, and every other internet application to emerge without requiring permission from any central authority.
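
A short sketch can make the end-to-end idea concrete. The illustrative Python below uses the standard library's socket API on the loopback interface (the port number is arbitrary): all of the application logic sits in the two endpoints, TCP delivers the bytes reliably and in order, and nothing in between needs to understand them. It is a toy demonstration of the principle, not of the 1974 protocol design itself.

```python
import socket
import threading

# Two endpoints exchanging bytes over TCP; the network between them
# carries packets without knowing what the application is doing.
srv = socket.create_server(("127.0.0.1", 9099))  # bind and listen first

def serve() -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the received bytes back

threading.Thread(target=serve, daemon=True).start()

with socket.create_connection(("127.0.0.1", 9099)) as client:
    client.sendall(b"hello, internet")
    print(client.recv(1024))  # b'hello, internet'

srv.close()
```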

The NSFNET and the Academic Internet

Through the 1980s, the internet expanded as a primarily academic resource. The National Science Foundation funded NSFNET in 1985, creating a higher-capacity backbone connecting research universities. By 1989, ARPANET carried less traffic than NSFNET. Email remained the dominant application — simple, asynchronous, addressable. The internet that existed before the web was largely a text-based environment of email, file transfer, and discussion forums (Usenet groups), accessible only to those with institutional connections and technical knowledge.

The first Usenet newsgroups, created in 1979, were proto-social networks: topically organised public discussion forums where users posted and responded to text messages. By the early 1990s, Usenet had tens of thousands of groups covering every conceivable topic. The cultural norms of the early internet — a presumption of good faith, the expectation of civil discourse, the hostility to commercial activity — were largely formed in this pre-web academic context. They would prove difficult to maintain when the network opened to the general public.


Tim Berners-Lee and the World Wide Web (1989)

A Problem at CERN

Tim Berners-Lee was a software engineer at CERN, the European particle physics laboratory, in 1989 when he wrote a proposal titled 'Information Management: A Proposal.' CERN had a problem: it employed thousands of researchers who produced enormous volumes of information that was difficult to find and share. Berners-Lee's proposal was for a system of documents connected by hypertext links that could be stored on any computer connected to the network and accessed from any other.

His manager, Mike Sendall, wrote 'Vague but exciting' on the cover page and approved modest support for the project. By 1991, Berners-Lee had created the first web browser (called WorldWideWeb, later Nexus), the first web server, and the first website. He published the web's technical standards freely and without patent protection, making the most consequential contribution to the architecture of information since Gutenberg's press.

The conceptual core of the web was hypertext — the ability to embed links within documents that pointed to other documents anywhere on the network. The idea of hypertext predates the web by decades: Ted Nelson coined the term in 1963 and envisioned 'Xanadu,' a universal hypertext system, in the 1960s and 1970s. Berners-Lee's contribution was to combine hypertext with the internet's existing infrastructure and to make the combination practically simple enough to implement and use. The web's simplicity — HTML is not a programming language in any serious sense — was its essential virtue. Anyone could learn to make a web page.
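
That simplicity can be demonstrated directly. The sketch below, written in Python for convenience, emits a complete, working hypertext page of the kind an early-1990s user could type by hand; the filename and the linked address are invented placeholders.

```python
# A complete hypertext document is just a short text file. The linked
# URL below is a placeholder, not a real historical address.
page = """<html>
<head><title>My first page</title></head>
<body>
<p>This sentence contains a
<a href="http://example.org/another.html">link to another document</a>,
which may live on any machine on the network.</p>
</body>
</html>"""

with open("index.html", "w") as f:
    f.write(page)  # any browser can render the page and follow the link
```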

The Decision Not to Patent

Berners-Lee's decision not to patent the web or extract commercial value from it is one of the most significant acts of technological generosity in history. CERN's decision to release the web's underlying technology into the public domain in 1993 allowed anyone to build web servers, browsers, and websites without paying licensing fees. The consequences are difficult to overstate: without that decision, the explosive innovation of the 1990s web would simply not have happened, or would have happened in a far more constrained form.

The counterfactual is not abstract. In the same period, proprietary online services — CompuServe, AOL, Prodigy — were building walled-garden networks that charged for access, controlled content, and required their own proprietary software. Had the web been proprietary, these services might have dominated online information for a generation. Berners-Lee's choice of openness was also an architectural choice with competitive consequences: it produced a platform on which no single company could establish control, which is why the web generated so much more innovation than the walled gardens of its contemporaries.


The Browser Wars and the Commercial Internet (1993-2001)

Mosaic and the Public Web

The first web browser that made the internet visually accessible to ordinary users was Mosaic, released in April 1993 by Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications at the University of Illinois. Mosaic was the first browser to display images inline with text rather than in separate windows, and the difference made the web visually comprehensible at a glance. Within months of Mosaic's release, web traffic was growing at a rate of 341,634 percent per year, according to data cited by Hafner and Lyon (1996).

Andreessen left Illinois to co-found Netscape, which released Navigator in 1994 — the first commercial web browser. Netscape's 1995 initial public offering, for a company that had never turned a profit, is widely regarded as the opening act of the dot-com era. The stock price more than doubled on its first day of trading. The gold rush had begun.

The Mosaic browser was also the moment at which the internet's demographics began to shift. The academic internet had been the domain of technically sophisticated researchers and students. The graphical, image-capable browser made the web accessible to people with no technical background. Internet service providers — America Online, CompuServe, Prodigy — began offering consumer dial-up internet access. Tim Wu, in 'The Master Switch' (2010), identifies this commercialisation as the point at which the internet began to follow the historical pattern of other communications technologies: an open, innovative early phase followed by progressive concentration and control.

Microsoft and the Antitrust Battle

Microsoft recognised the threat that Netscape posed to its operating system business. If the browser became the primary computing environment, the operating system mattered less — and Microsoft's monopoly was at risk. Microsoft's response was to bundle Internet Explorer with Windows, ensuring that virtually every Windows PC shipped with IE installed. Through a combination of technical integration and licensing pressure on computer manufacturers, Microsoft achieved approximately 90 percent browser market share by 2001.

The Department of Justice's antitrust case against Microsoft, United States v. Microsoft Corporation (2001), found that Microsoft had illegally maintained its monopoly through anticompetitive practices. The remedy was eventually limited to conduct restrictions rather than the structural breakup that the trial court initially ordered, but the case established important precedent about platform monopoly that would become relevant again in the era of Google, Amazon, and Meta. The browser wars established a pattern: in networked markets, first-mover and bundling advantages tend toward monopoly, and the resulting monopolies are difficult to dislodge through market competition alone.


The Dot-Com Boom and Bust (1995-2001)

The commercial web attracted speculative investment on an extraordinary scale. Companies with no revenue model, no clear path to profit, and often no clear purpose attracted hundreds of millions in venture capital and public market investment simply by adding '.com' to their names and projecting exponential user growth. Pets.com, Webvan, Kozmo.com, and hundreds of others burned through capital at rates that seemed justified only if the internet would remake all commerce in five years.

The numbers were staggering. Between 1998 and 2000, venture capital investment in US internet companies totalled approximately $120 billion. The NASDAQ Composite Index rose from approximately 1,500 in early 1998 to a peak of 5,048 on March 10, 2000 — more than a threefold increase in two years. When it crashed, it fell 78 percent from peak to trough, bottoming out at 1,114 in October 2002. Approximately $5 trillion in market capitalisation was destroyed.

The crash that began in March 2000 was painful for investors and company employees but ultimately did not disrupt the underlying growth of internet infrastructure and usage. The companies that survived — Amazon, Google, eBay — emerged with stronger competitive positions because their weaker competitors had been eliminated. Amazon's share price fell from $107 in 1999 to $7 in 2001 before beginning its long ascent to become one of the most valuable companies in history. The crash also produced a period of sober reassessment that set the stage for Web 2.0.

The dot-com boom's lasting legacy was not its failures but its infrastructure. The enormous capital expenditure of the late 1990s laid fibre-optic cable across the United States and between continents on a scale that rational expectations would never have financed. When that capital was destroyed in the crash, the physical infrastructure remained, providing cheap bandwidth that enabled the streaming, social media, and cloud computing revolutions of the following decade.


Web 2.0 and Social Media (2004)

Wikipedia, Facebook, YouTube

The web of the early 2000s was dominated by the idea of user-generated content — the notion that the web's value would increasingly come from what ordinary users created rather than from professional media organisations. Wikipedia, launched in 2001 by Jimmy Wales and Larry Sanger, demonstrated that collaborative knowledge production at scale was possible. By 2024, it contained more than 60 million articles across more than 300 language editions, maintained by volunteers, and remained one of the most visited sites on the internet.

Wikipedia's production model — open contribution, consensus-based editing, no payment for contributors — was a radical departure from every existing model of knowledge production. Research by Giles (2005), published in Nature, compared Wikipedia's accuracy on 42 scientific topics to the Encyclopaedia Britannica and found similar error rates per article. The finding was not that Wikipedia was as reliable as Britannica (the study had significant methodological limitations) but that a crowdsourced encyclopedia was in the same ballpark as a professionally produced one — which was remarkable enough.

Facebook, launched by Mark Zuckerberg and co-founders in 2004, and YouTube, founded by Chad Hurley, Steve Chen, and Jawed Karim in 2005, defined the social media phase of Web 2.0. Twitter followed in 2006. The common thread was platforms that aggregated user-created content, facilitated social connection, and monetised the resulting attention through advertising. By 2010, social media had become the dominant form of online activity for most internet users globally.

The Advertising Model

The advertising model that Google pioneered with AdWords in 2000 and that Facebook refined with social graph targeting became the economic foundation of the commercial internet. In exchange for free services, users provided behavioural data that platforms used to target advertising with unprecedented precision. This model proved extraordinarily profitable for platform companies and produced services that billions of people used daily — but it also created incentive structures that aligned platform interests with user engagement rather than user wellbeing.

Frances Haugen's 2021 testimony before the United States Senate, supported by internal Facebook research documents she had obtained as a product manager, documented that Facebook's own researchers had found that the platform's recommendation algorithm amplified divisive and emotionally activating content because such content generated higher engagement. The company's internal research had found that Instagram use was associated with negative body image and mental health outcomes in teenage girls. Yet the platform continued optimising for engagement because engagement drove advertising revenue. The incentive structure of surveillance capitalism, Zuboff (2019) argued, is systematically opposed to user wellbeing.

Google's founding in 1998 by Larry Page and Sergey Brin, and its subsequent dominance of internet search, represented a qualitative change in how the internet was navigated. The PageRank algorithm, which ranked pages by the number and quality of links pointing to them, produced dramatically better search results than the directory-based navigation of early web portals. By 2004, Google handled approximately 75 percent of all web search queries globally, counting the queries it served through partner portals. By 2024, that share had grown to approximately 91 percent.
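
The core of PageRank is simple enough to sketch. In the toy Python example below, the three-page 'web', the 0.85 damping factor, and the fixed iteration count follow the conventions of the published algorithm, not Google's production system. Each page repeatedly passes a share of its score along its outbound links, so pages that attract links from well-ranked pages accumulate higher scores.

```python
# Toy PageRank by power iteration. links maps each page to the list of
# pages it links to; the graph and iteration count are illustrative.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform score
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages  # a page with no links spreads evenly
            share = damping * rank[page] / len(targets)
            for t in targets:
                new[t] += share  # each link passes on a share of rank
        rank = new
    return rank

web = {"home": ["about", "blog"], "about": ["home"], "blog": ["home", "about"]}
print(pagerank(web))  # home ≈ 0.43, about ≈ 0.33, blog ≈ 0.23
```

The same power-iteration idea, with very substantial engineering, scales to graphs of billions of pages.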

The concentration of search in a single company's algorithm created an extraordinary dependency: the entire architecture of online information, commerce, and publishing came to be shaped by the requirements of ranking well in one company's proprietary ranking system. Google's algorithmic updates — Panda (2011), Penguin (2012), Hummingbird (2013), and dozens of subsequent iterations — could destroy or create entire online industries with a single release. The power this represented was qualitatively different from any previous media power: not the power to publish or broadcast, but the power to determine what was findable.


Mobile Internet and the Smartphone Revolution

The introduction of the iPhone in 2007, followed by Android-based smartphones from 2008, transformed the internet from a destination people visited on computers to an environment that surrounded them continuously. Mobile internet access surpassed desktop access globally by 2014. The smartphone made the internet personal and location-aware in ways that fixed computers never were, enabling services — navigation, food delivery, ride-sharing, mobile payments — that depend on continuous connectivity and location data.

The app economy that emerged from the smartphone era represented a partial re-centralisation of the web's open architecture. Apple's App Store and Google's Play Store became mandatory intermediaries for most smartphone software distribution, taking 30 percent commissions on digital sales and reserving the right to remove any application from their platforms. The permissionless innovation model of the open web — where anyone could publish a website without seeking approval from any gatekeeper — was replaced, for mobile software, by a curator model where two companies controlled access to the dominant computing platforms.

The mobile internet also dramatically expanded internet access globally. In regions of Africa, South Asia, and Southeast Asia where fixed-line infrastructure was limited, mobile networks provided the primary path to internet connectivity. By 2023, approximately 4.32 billion people accessed the internet primarily through mobile devices, a figure that represented the majority of the global internet population. The International Telecommunication Union reported in 2023 that approximately 2.6 billion people remained offline — a figure that, while still large, represented a dramatically more connected world than the 600 million internet users of 2002.


Governance, Power, and the Internet's Future

Platform Concentration

The architecture of the internet that Tim Berners-Lee helped create is increasingly strained by the concentration of power that its openness inadvertently enabled. A small number of platform companies — Google, Meta, Amazon, Apple, Microsoft — control the infrastructure, search, social connection, commerce, and operating systems through which most people experience the internet. Scott Galloway, in 'The Four' (2017), argued that these companies had achieved the kind of market dominance and societal integration that, in previous eras, was associated with public utilities or nation-states.

European competition authorities have been more aggressive than their American counterparts in challenging platform concentration. The EU's Digital Markets Act (2022) designated certain platform operators as gatekeepers subject to mandatory interoperability requirements, prohibitions on self-preferencing, and requirements to share data with competitors. The Digital Services Act (2022) imposed transparency requirements on algorithmic content curation and prohibited certain forms of targeted advertising. Whether these regulatory interventions will materially alter the structural economics of platform dominance remains an open question.

The Information Disorder

The internet's role in what scholars have called the information disorder — the proliferation of misinformation, disinformation, and malinformation — has been one of the defining public concerns of the 2010s and 2020s. Research by Soroush Vosoughi, Deb Roy, and Sinan Aral, published in Science (2018), found that false news stories diffused on Twitter significantly faster, more broadly, and more deeply than true stories — and that this effect was driven by human sharing behaviour, not by bots. False information was more novel and emotionally engaging than true information, and the internet's architecture rewarded novelty and emotional engagement.

The 2016 Brexit referendum and the 2016 US presidential election brought these dynamics to public attention, with subsequent research documenting the role of targeted misinformation campaigns, algorithmic amplification, and deliberately designed information environments in shaping political outcomes. The long-term implications for democratic deliberation of information environments designed to maximise engagement rather than accuracy remain among the most important and least resolved questions in internet governance.

What Comes Next

Tim Berners-Lee's Solid project proposes decentralised personal data storage ('pods') that would allow users to control their data independently of specific platforms. The 'Web3' concept proposes blockchain-based alternatives to centralised platforms. The EU's Digital Markets Act and Digital Services Act represent the most significant regulatory challenge to platform concentration attempted anywhere in the world. The emergence of generative AI — large language models, image generators, video synthesis — is adding another layer of complexity to an already strained information environment.

How these forces resolve will determine whether the internet's third phase more closely resembles its founders' open vision or the concentrated, surveilled infrastructure it has partially become. The choice is genuine: the internet's architecture, unlike geography or physics, is a human creation. It was built by choices, and it can be rebuilt by different ones.


Practical Takeaways

The internet's history is a lesson in how design choices compound over decades. The decision to build an open, permissionless network produced extraordinary innovation and extraordinary vulnerability. The decision to fund the commercial web through advertising created services billions rely on and incentive structures that often harm users. The decision not to patent the web created the most important open communication infrastructure in history.

Understanding this history matters for current debates about internet governance, platform regulation, and the principles that should guide the next phase of network development. The internet was built by choices. Its future will also be determined by choices — about openness, about data ownership, about the relationship between platform power and democratic governance. Those choices are being made now, in legislative chambers and corporate boardrooms, and their effects will compound for decades in the same way that the choices of 1974 and 1991 compounded to produce the internet we inhabit today.


References

  1. Berners-Lee, T. (1999). Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. HarperCollins.
  2. Cerf, V., & Kahn, B. (1974). A protocol for packet network intercommunication. IEEE Transactions on Communications, 22(5), 637-648.
  3. Hafner, K., & Lyon, M. (1996). Where Wizards Stay Up Late: The Origins of the Internet. Simon & Schuster.
  4. Andreessen, M. (2011). Why software is eating the world. The Wall Street Journal, August 20, 2011.
  5. Wu, T. (2010). The Master Switch: The Rise and Fall of Information Empires. Alfred A. Knopf.
  6. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
  7. Battelle, J. (2005). The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture. Portfolio.
  8. O'Reilly, T. (2005). What is Web 2.0: Design patterns and business models for the next generation of software. O'Reilly Media.
  9. Galloway, S. (2017). The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Portfolio/Penguin.
  10. Kirkpatrick, D. (2010). The Facebook Effect: The Inside Story of the Company That Is Connecting the World. Simon & Schuster.
  11. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
  12. Keen, A. (2007). The Cult of the Amateur: How Today's Internet Is Killing Our Culture. Doubleday.
  13. Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438, 900-901.
  14. International Telecommunication Union. (2023). Measuring Digital Development: Facts and Figures 2023. ITU.
  15. European Commission. (2022). Digital Markets Act. Official Journal of the European Union.
  16. European Commission. (2022). Digital Services Act. Official Journal of the European Union.

Frequently Asked Questions

Who invented the internet?

The internet does not have a single inventor, but several key figures made foundational contributions. Vint Cerf and Bob Kahn developed TCP/IP, the fundamental communication protocol that makes the internet possible, in a landmark 1974 paper. Their work built on ARPANET, the US Defense Department network established in 1969. Tim Berners-Lee invented the World Wide Web in 1989-1991 while working at CERN — the system of hyperlinked documents that transformed the internet from an academic tool into a public resource. Paul Baran and Donald Davies independently developed packet-switching theory in the 1960s, which provided the conceptual foundation for how data travels across networks.

What was ARPANET?

ARPANET was the Advanced Research Projects Agency Network, funded by the US Defense Department and launched in 1969. It was the first operational packet-switched network and the direct predecessor of the internet. The first ARPANET message was sent on October 29, 1969, from UCLA to Stanford Research Institute. The system crashed after two letters — 'lo' of 'login' — making the first internet message accidentally poetic. ARPANET was designed to be resilient to partial destruction, a Cold War consideration that drove the decentralised architecture that still characterises the internet today. It was decommissioned in 1990 as the commercial internet took over its functions.

What were the browser wars?

The browser wars were primarily the contest, from 1995 to 2001, between Netscape Navigator and Microsoft Internet Explorer for dominance of the web browser market. Netscape had pioneered commercial web browsing with its Navigator browser in 1994. Microsoft responded by bundling Internet Explorer with Windows, leveraging its operating system monopoly to capture market share. By 2001, Internet Explorer had achieved approximately 90 percent market share, effectively ending Netscape as a commercial entity. The Department of Justice antitrust case against Microsoft was partly triggered by these tactics. A second browser war began in the mid-2000s as Firefox, Chrome, and Safari challenged Internet Explorer's dominance.

What is Web 2.0?

Web 2.0 refers to the second phase of web development, characterised by user-generated content, social networking, and participatory platforms, as distinguished from the largely static, read-only websites of the early web. The term was coined by Darcy DiNucci in 1999 and popularised by Tim O'Reilly at a 2004 conference. The Web 2.0 era produced Wikipedia, YouTube, Facebook, Twitter, Flickr, and blogs — platforms where users were producers of content as well as consumers. Web 2.0 represented a shift from the web as an information broadcast medium to the web as a social and participatory space, with profound consequences for media, politics, and culture.

What comes after the current internet?

Several competing visions for the internet's next phase are actively contested. 'Web3' proponents argue for a decentralised internet built on blockchain technology, with users owning their data and digital assets rather than being tenants of platform companies. Critics argue that Web3 proposals largely reproduce existing power structures while adding speculative assets. The 'metaverse' concept, heavily promoted by Meta beginning in 2021, envisions persistent three-dimensional digital environments. AI integration is reshaping how information is retrieved and generated online. The Solid project, developed by Tim Berners-Lee himself, proposes giving users control of their personal data through decentralised storage pods. Which of these visions — or what combination — shapes the next phase of the internet remains genuinely uncertain.