Common Security Failures Explained: How Organizations Get Breached
Equifax knew about the vulnerability. On March 7, 2017, the Apache Software Foundation disclosed a critical flaw in its Struts web framework--CVE-2017-5638--along with a patch. Equifax's security team received the alert. Internal policy required patching within 48 hours. They didn't. For 143 days, the vulnerability sat unpatched on a public-facing web application. During that window, attackers exploited it to steal the personal data of 147 million Americans: names, Social Security numbers, birth dates, addresses, and driver's license numbers.
The Equifax breach was not sophisticated. It did not involve a nation-state actor deploying novel malware. It involved a known vulnerability with a known fix that was not applied. This is the uncomfortable truth about most security breaches: they exploit basic, preventable failures, not advanced techniques.
Study after study confirms this pattern. The 2024 Verizon Data Breach Investigations Report analyzed over 30,000 security incidents and found that the vast majority traced back to a small set of recurring failures: stolen credentials, unpatched vulnerabilities, misconfigured systems, and human error. The attackers weren't geniuses. The defenders were making the same mistakes, year after year.
This article catalogs those recurring failures--not as an abstract checklist, but through the specific incidents that exposed them, the organizational dynamics that perpetuate them, and the practical steps that would have prevented them.
Failure Category One: Credential Compromise
The Problem That Won't Go Away
Weak, stolen, or mismanaged credentials are the single most common cause of breaches. The Verizon DBIR consistently finds that over 80% of hacking-related breaches involve credentials in some form.
The mechanics are straightforward. People reuse passwords across services. When one service is breached, those credentials are tested against every other service in automated credential stuffing attacks. The success rate is typically 0.1-2%, which sounds small until you realize attackers test millions of credentials simultaneously.
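The arithmetic behind that "sounds small" intuition can be made concrete. A minimal sketch, using illustrative campaign sizes rather than figures from any specific incident:

```python
# Expected yield of a credential stuffing campaign at typical success rates.
# The 5,000,000-credential campaign size is an invented illustration; the
# 0.1%-2% success-rate range is the one cited above.

def expected_compromises(credentials_tested: int, success_rate: float) -> int:
    """Expected number of accounts compromised for a given success rate."""
    return round(credentials_tested * success_rate)

for rate in (0.001, 0.02):
    hits = expected_compromises(5_000_000, rate)
    print(f"{rate:.1%} success on 5,000,000 attempts -> {hits:,} accounts")
```

Even at the bottom of the range, a single automated campaign yields thousands of compromised accounts for essentially zero marginal cost.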
Example: In January 2023, Norton LifeLock--ironically, a cybersecurity company--disclosed a credential stuffing campaign that targeted roughly 925,000 accounts. Attackers tested username/password pairs from external breaches against Norton accounts; users who reused passwords and hadn't enabled MFA were exposed.
Beyond password reuse, three other credential failures recur:

1. Default credentials. Devices and applications ship with factory-set usernames and passwords ("admin/admin," "root/password"). The Mirai botnet in 2016 hijacked over 600,000 IoT devices by trying a list of just 62 default username/password combinations. It then launched a DDoS attack that took down major internet services including Twitter, Netflix, and Reddit.
2. Shared credentials. Teams share service accounts or passwords through Slack messages, spreadsheets, or sticky notes. When someone leaves the organization, those credentials often aren't rotated. The 2021 Colonial Pipeline ransomware attack--which shut down fuel distribution across the eastern United States--was traced to a compromised VPN password for an account that didn't use MFA and may no longer have been actively used.
3. Hardcoded credentials. Developers embed API keys, database passwords, or service account credentials directly in source code. These credentials end up in version control systems, container images, and deployment artifacts. GitHub reported scanning over 100 million commits for exposed secrets in 2023, finding millions of leaked credentials.
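The secret scanning described above can be sketched in a few lines. The regex patterns here are simplified approximations for illustration--real scanners such as GitHub's use far larger, vendor-maintained rule sets:

```python
import re

# Illustrative secret patterns: an AWS-style access key ID, a quoted
# password assignment, and a PEM private-key header. Simplified sketches,
# not the exact rules any scanning product uses.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_password": re.compile(r'''(?i)(password|passwd|pwd)\s*=\s*['"][^'"]{4,}['"]'''),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_source(text: str) -> list[str]:
    """Return the names of secret patterns found in a source file."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

# A config file that should never have been committed.
leaked = 'db_password = "hunter2-prod"\naws_key = "AKIAIOSFODNN7EXAMPLE"'
print(scan_source(leaked))  # -> ['aws_access_key', 'generic_password']
```

Running a check like this as a pre-commit hook or CI step catches credentials before they ever reach version control history, where they are effectively permanent.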
"We don't have a technology problem. We have a credential management problem. Fix that, and you eliminate the majority of breaches overnight." -- Brian Krebs, investigative cybersecurity journalist
Failure Category Two: Unpatched Vulnerabilities
The Known Risks That Organizations Ignore
A patch is a fix for a known vulnerability. When a vendor releases a patch, they're essentially publishing both the existence of the vulnerability and the fix for it. This creates a race: organizations must apply the patch before attackers exploit the vulnerability. Organizations consistently lose this race.
The mean time to patch a critical vulnerability across enterprises is measured in weeks to months, not hours or days. Meanwhile, exploit code for disclosed vulnerabilities often appears within 24-48 hours of publication.
Example: The WannaCry ransomware attack of May 2017 exploited a Windows vulnerability (EternalBlue) for which Microsoft had released a patch two months earlier. WannaCry infected over 200,000 computers across 150 countries, shutting down hospitals, factories, and government agencies. Every infected system was one that hadn't applied a two-month-old patch.
Why Patching Fails
1. Fear of breaking things. Production systems are complex, and patches can introduce incompatibilities. Organizations delay patching because testing takes time and they fear downtime.
2. Poor asset inventory. You can't patch systems you don't know exist. Shadow IT, forgotten servers, and unmanaged devices create blind spots. The Equifax breach occurred on a system that the security team didn't know was running the vulnerable software.
3. Legacy systems. Old systems that can't be easily patched--or that run on unsupported software--persist in organizations for years because replacing them is expensive and risky.
4. Patch fatigue. Security teams face thousands of vulnerability alerts monthly. Without risk-based prioritization, critical patches get lost in the noise.
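The risk-based prioritization missing in item 4 can be sketched as a simple scoring pass over the vulnerability backlog. The weights and CVE entries here are invented for illustration; real programs use richer signals such as known-exploited-vulnerability catalogs:

```python
# Rank vulnerabilities by severity *and* exposure, not raw CVSS alone.
# Multipliers are illustrative assumptions, not an industry standard.

def risk_score(vuln: dict) -> float:
    score = vuln["cvss"]
    if vuln["internet_facing"]:
        score *= 2.0   # reachable by anyone, like Equifax's Struts server
    if vuln["exploit_public"]:
        score *= 1.5   # working exploit code already in the wild
    return score

backlog = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": True,  "exploit_public": True},
    {"id": "CVE-B", "cvss": 9.9, "internet_facing": False, "exploit_public": False},
    {"id": "CVE-C", "cvss": 6.5, "internet_facing": True,  "exploit_public": True},
]
for v in sorted(backlog, key=risk_score, reverse=True):
    print(v["id"], round(risk_score(v), 1))
```

Note the outcome: a 6.5-severity flaw on an exposed, actively exploited system outranks a 9.9-severity flaw on an internal one. Sorting by CVSS alone would get the order wrong.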
| Breach | Vulnerability | Patch Available Before Breach | Unpatched Duration |
|---|---|---|---|
| Equifax (2017) | Apache Struts CVE-2017-5638 | Yes, 2 months | 143 days |
| WannaCry (2017) | EternalBlue MS17-010 | Yes, 2 months | Varies (many still unpatched) |
| Capital One (2019) | WAF misconfiguration + SSRF | Mitigations available | N/A (configuration, not patch) |
| Log4Shell (2021) | Log4j CVE-2021-44228 | Yes, within days | Months for many organizations |
| MOVEit (2023) | SQL injection CVE-2023-34362 | Zero-day initially | Weeks after patch |
| Citrix Bleed (2023) | CVE-2023-4966 | Yes | Weeks for many targets |
Failure Category Three: Misconfiguration
The Invisible Gaps in Your Defenses
Misconfiguration means systems are set up in ways that inadvertently expose data or create vulnerabilities. Cloud environments have made this problem dramatically worse because the number of configurable settings has exploded and the consequences of getting them wrong are immediate and global.
Example: In 2019, Capital One suffered a breach exposing 106 million credit card applications. The root cause wasn't a software vulnerability--it was a misconfigured web application firewall (WAF) that allowed a server-side request forgery (SSRF) attack to access AWS metadata, which contained temporary credentials with excessive permissions. A former AWS employee exploited this chain of misconfigurations.
The most common misconfiguration patterns:

1. Public cloud storage. Amazon S3 buckets, Azure Blob Storage containers, and Google Cloud Storage buckets set to public access have exposed billions of records. In 2017 alone, security researchers found exposed S3 buckets belonging to Verizon (14 million customer records), the U.S. Department of Defense (1.8 billion scraped social media posts), and Accenture (internal credentials and decryption keys).
2. Database exposure. MongoDB, Elasticsearch, and Redis instances deployed with default configurations and no authentication, directly accessible from the internet. Shodan, a search engine for internet-connected devices, consistently finds tens of thousands of exposed databases.
3. Overly permissive access controls. Cloud IAM roles with wildcard permissions ("Action": "*"), security groups allowing all inbound traffic, and service accounts with administrative privileges. The principle of least privilege is simple in theory and routinely violated in practice.
4. Debug and development settings in production. Stack traces displayed to users, debug endpoints left accessible, verbose error messages revealing internal architecture, and test credentials active in production environments.
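The wildcard-permission problem in item 3 is mechanical enough to check automatically. A minimal sketch: the policy format mirrors AWS IAM JSON, but the checker itself is a simplified illustration of what cloud security posture tools do at scale:

```python
# Flag Allow statements that grant wildcard actions or resources.

def find_wildcard_grants(policy: dict) -> list[dict]:
    """Return Allow statements using a bare "*" action or resource."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

risky_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-logs/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # the misconfiguration
    ],
}
print(find_wildcard_grants(risky_policy))  # flags only the second statement
```

The first statement scopes access to one prefix in one bucket; the second grants everything to everything, which is exactly the kind of over-permissioned role the Capital One attackers leveraged.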
Failure Category Four: Social Engineering and Phishing
Attacking the Human Layer
Social engineering bypasses technical controls entirely by targeting the humans who operate the systems. Phishing--the most common form--uses deceptive emails, messages, or websites to trick people into revealing credentials, installing malware, or transferring money.
The 2023 MGM Resorts breach, which cost the company an estimated $100 million in lost revenue and remediation, began with a phone call. Attackers from the Scattered Spider group called MGM's IT help desk, impersonated an employee they found on LinkedIn, and convinced the support agent to reset the employee's MFA credentials. That phone call gave the attackers a foothold that they used to deploy ransomware across MGM's hotel and casino operations.
The major variants:

1. Spear phishing. Targeted phishing aimed at specific individuals using personal information gathered from social media, company websites, or previous breaches. Unlike mass phishing campaigns, spear phishing emails are carefully crafted and often nearly indistinguishable from legitimate communications.
2. Business email compromise (BEC). Attackers impersonate executives or trusted partners to request wire transfers or sensitive data. The FBI reports that BEC has caused over $50 billion in global losses since 2013, making it the single most financially destructive form of cybercrime.
3. Vishing and smishing. Voice phishing (phone calls) and SMS phishing. The MGM and Uber breaches both used these vectors. Help desk social engineering is particularly effective because support agents are trained to be helpful, creating a cognitive tension between security protocols and customer service instincts.
"Amateurs hack systems. Professionals hack people." -- Bruce Schneier, cryptographer and security researcher
Failure Category Five: Insider Threats
The Attacker Is Already Inside
Insider threats come from people with legitimate access: employees, contractors, business partners. They fall into two categories: malicious insiders who deliberately steal data or sabotage systems, and negligent insiders who cause damage through carelessness, ignorance, or poor judgment.
Example: Edward Snowden, a contractor for the National Security Agency, exfiltrated an estimated 1.5 million classified documents in 2013. He had legitimate access to the systems he pulled data from. The NSA's monitoring controls--which should have flagged a contractor accessing volumes of data far outside his job function--failed to detect the activity.
Example: In 2022, a former Amazon employee was convicted for the 2019 Capital One breach described earlier. While the initial access exploited a misconfiguration, the fact that a former cloud-provider employee could weaponize insider knowledge of customer environments highlights the inherent risks of third-party access.
Detecting insider threats is fundamentally harder than detecting external attacks because insider actions look like normal authorized activity. This requires data-driven approaches to behavioral analysis--establishing baselines of normal activity and flagging deviations.
Failure Category Six: Supply Chain Compromise
When Your Trusted Software Becomes the Weapon
Supply chain attacks compromise software, hardware, or services that organizations trust, turning them into vectors for attacking downstream targets. The attacker doesn't breach you directly--they breach something you depend on.
The SolarWinds attack of 2020 remains the most instructive example. Russian intelligence operatives compromised the build system for SolarWinds' Orion network monitoring software. They inserted malware into a legitimate software update that was digitally signed by SolarWinds and distributed to approximately 18,000 customers, including multiple U.S. government agencies and Fortune 500 companies. The victims installed the malware themselves, trusting the update because it came from a verified vendor through official channels.
Supply chain compromise takes several forms:

1. Open-source dependency attacks. The 2024 XZ Utils backdoor (CVE-2024-3094) demonstrated that even critical open-source infrastructure is vulnerable. An attacker spent two years building trust as a contributor to the XZ compression library before inserting a backdoor that would have compromised SSH authentication on millions of Linux systems. It was discovered accidentally by a Microsoft engineer who noticed a 500-millisecond performance regression.
2. Managed service provider (MSP) compromise. Attackers target IT service providers because compromising one MSP gives access to hundreds of client organizations. The 2021 Kaseya VSA attack deployed ransomware to approximately 1,500 businesses through a single compromised remote management tool.
3. Hardware supply chain risks. Firmware implants, counterfeit components, and tampered devices introduce vulnerabilities before systems are even deployed. While harder to execute, hardware attacks are also harder to detect and remediate.
Why Organizations Keep Making the Same Mistakes
The Organizational Dynamics Behind Chronic Failures
Technical explanations for breaches are necessary but insufficient. The deeper question is why organizations repeatedly fail at security basics despite knowing the risks. The answer lies in organizational dynamics, not technology.
1. Security as cost center. Security spending doesn't generate revenue, making it vulnerable to budget cuts. Until a breach occurs, the return on security investment is invisible--you're paying for things that didn't happen. This creates a persistent underinvestment bias.
2. Misaligned incentives. Developers are rewarded for shipping features quickly, not for writing secure code. Executives are rewarded for growth metrics, not security posture. Security teams are seen as obstacles rather than enablers. These incentive structures make security someone else's problem.
3. Complexity. Modern organizations operate hundreds of applications, thousands of endpoints, and millions of configuration settings across hybrid cloud environments. The attack surface is vast, and the cognitive load on security teams exceeds human capacity without automation.
4. Compliance theater. Organizations conflate compliance with security. Passing an audit or earning a certification becomes the goal rather than actually being secure. Equifax had passed its security audits before the breach.
5. Normalization of deviance. Small security shortcuts that don't immediately cause problems become accepted practice. Skipping patch windows, leaving test credentials active, granting overly broad permissions "temporarily"--each deviation normalizes the next until the cumulative exposure is enormous.
The Anatomy of Breach Detection Failure
Why Breaches Go Unnoticed for Months
IBM's 2024 Cost of a Data Breach report found the average time to identify and contain a breach was 258 days--over eight months. This number has barely improved in a decade despite enormous investment in security tools.
Detection fails because:
1. Alert fatigue. Security tools generate thousands of alerts daily, most of which are false positives. Overwhelmed analysts triage mechanically, and genuine threats drown in noise. This is a direct manifestation of Goodhart's Law--when the metric becomes "alerts responded to," the incentive shifts from detecting real threats to closing tickets.
2. Insufficient logging. Organizations don't collect the logs needed to detect sophisticated attacks, or they don't retain them long enough, or they store them in silos that can't be correlated.
3. Lack of behavioral baselines. Without understanding what normal looks like, anomalous behavior is invisible. An attacker who moves slowly and mimics legitimate user patterns will evade signature-based detection.
4. Outsourced monitoring without context. Managed security service providers (MSSPs) monitor alerts but lack the organizational context to distinguish genuinely suspicious activity from unusual-but-legitimate business operations.
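The baseline-and-deviation approach in item 3 can be sketched with basic statistics. Real user and entity behavior analytics (UEBA) systems model far richer features; the access counts here are invented for illustration:

```python
import statistics

# Model a user's normal daily record-access volume, then flag days that
# deviate from the baseline by more than a few standard deviations.

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it exceeds mean + threshold * stdev of history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid zero-division on a flat history
    return (today - mean) / stdev > threshold

# An analyst who normally touches 40-60 records a day suddenly pulls 5,000.
baseline = [52, 47, 55, 44, 60, 49, 51]
print(is_anomalous(baseline, 58))    # False: within normal variation
print(is_anomalous(baseline, 5000))  # True: flag for review
```

This is the kind of check that would have surfaced a contractor bulk-downloading documents far outside his job function--signature-based detection never would, because every individual access was authorized.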
Building Defenses That Actually Work
The Controls That Prevent the Most Common Failures
Given that most breaches exploit basic failures, the most effective defenses are correspondingly basic--but must be implemented consistently and comprehensively.
1. Enforce MFA universally. Not just for executives, not just for email, not just for new hires. Every user, every system, every access path. Prioritize phishing-resistant methods (hardware keys, passkeys) over SMS-based MFA.
2. Automate patching for standard systems. Desktop operating systems, browsers, and common applications should be patched automatically. Reserve manual patch testing for critical production systems, and hold those to strict SLAs measured in days, not months.
3. Implement infrastructure as code. Define all cloud configurations in version-controlled code. Use automated scanning to detect misconfigurations before deployment. This prevents the configuration drift that creates security gaps.
4. Deploy phishing-resistant authentication. Hardware security keys or passkeys eliminate the most common social engineering vector. Google, Cloudflare, and other organizations that deployed hardware keys report zero successful phishing attacks afterward.
5. Practice least privilege aggressively. Audit permissions quarterly. Implement just-in-time access for administrative functions. Remove standing privileges for sensitive operations. Use separate accounts for administrative and daily work.
6. Monitor for credential exposure. Subscribe to breach notification services. Scan code repositories for exposed secrets. Check employee credentials against known breach databases. Implement automated credential rotation.
7. Test your defenses. Conduct regular penetration testing, red team exercises, and tabletop simulations of incident response. The time to discover your response plan doesn't work is during a drill, not during a real breach.
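The pre-deployment scanning mentioned in item 3 can be sketched as a policy gate in the deployment pipeline. The config schema below is an invented simplification, not any real IaC tool's format:

```python
# Block deployments that declare publicly readable storage buckets.

def check_buckets(config: dict) -> list[str]:
    """Return names of buckets whose configuration allows public reads."""
    return [
        name
        for name, settings in config.get("buckets", {}).items()
        if settings.get("public_read", False)
    ]

proposed = {
    "buckets": {
        "customer-exports": {"public_read": True, "encrypted": True},
        "internal-logs": {"public_read": False, "encrypted": True},
    }
}
violations = check_buckets(proposed)
if violations:
    print(f"Blocking deploy: public buckets {violations}")
```

Because the check runs against version-controlled configuration before anything reaches production, the misconfiguration is caught at review time rather than discovered later by a researcher--or an attacker--scanning the internet.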
The Pattern Recognition Problem
Every post-breach report reads like the one before it. The vulnerabilities change--Apache Struts, EternalBlue, Log4j, MOVEit--but the underlying failures are the same. Known vulnerabilities left unpatched. Credentials mismanaged. Configurations left open. People manipulated. Monitoring absent.
The organizations that break this cycle are the ones that treat security as a continuous operational discipline rather than a project with a completion date. They invest in basics before buying advanced tools. They measure their mean time to patch, their MFA coverage, their permission sprawl--and they hold themselves accountable to those numbers.
The Equifax breach cost the company $1.4 billion in settlements, penalties, and remediation. The patch that would have prevented it took minutes to apply. The question was never whether they could prevent the breach. It was whether the organizational will existed to do the boring, unglamorous, relentless work of maintaining security hygiene.
For most organizations, it still doesn't.
References
- Verizon. "2024 Data Breach Investigations Report." Verizon Enterprise Solutions, 2024.
- IBM Security. "Cost of a Data Breach Report 2024." IBM, 2024.
- U.S. Government Accountability Office. "Equifax Data Breach: Actions Taken by Equifax and Federal Agencies." GAO-18-559, 2018.
- Cloudflare Blog. "The mechanics of the sophisticated phishing scam and how we stopped it." Cloudflare, August 2022.
- Krebs, Brian. "SolarWinds Hack Could Affect 18K Customers." Krebs on Security, December 2020.
- FBI Internet Crime Complaint Center. "2023 Internet Crime Report." IC3, 2024.
- Antonakakis, Manos et al. "Understanding the Mirai Botnet." USENIX Security Symposium, 2017.
- Goodin, Dan. "What we know about the xz Utils backdoor that almost infected the world." Ars Technica, April 2024.
- MGM Resorts International. "SEC Filing: Cybersecurity Incident Disclosure." SEC, October 2023.
- National Cyber Security Centre (UK). "WannaCry Ransomware: Lessons Learned." NCSC, 2018.
- GitHub. "Secret Scanning: Protecting Your Repository." GitHub Security, 2023.