Equifax knew about the vulnerability. On March 7, 2017, the Apache Struts framework disclosed CVE-2017-5638, a critical remote code execution vulnerability, along with a patch. Equifax's security team received the alert. Internal policy required patching critical vulnerabilities within 48 hours. They did not patch. For 143 days, the vulnerability sat unpatched on a public-facing web application handling credit dispute inquiries. During that window, attackers exploited it to steal the personal data of 147 million Americans: Social Security numbers, birth dates, home addresses, driver's license numbers, and in some cases credit card numbers.
The Equifax breach was not sophisticated. It did not involve a nation-state actor with exotic capabilities or novel malware that bypassed defenses. It involved a known vulnerability with a known fix that was not applied. This is the uncomfortable truth about most security breaches: they exploit basic, preventable failures, not advanced techniques.
The 2024 Verizon Data Breach Investigations Report analyzed over 30,000 security incidents across industries and found that the overwhelming majority traced back to a small set of recurring failures: stolen credentials, unpatched vulnerabilities, misconfigured systems, and human error. The attackers were not geniuses. The defenders were making the same mistakes, year after year, in different organizations, using different technologies, and generating different headlines---but following the same underlying script.
This article catalogs those recurring failures: the specific incidents that exposed them, the organizational dynamics that perpetuate them, and the practical measures that would have prevented them.
The most common security breaches do not involve exotic techniques or nation-state adversaries. They involve known vulnerabilities that were not patched, credentials that were not protected, and configurations that were never reviewed. Most breaches are preventable with basic hygiene applied consistently.
Failure Category One: Credential Compromise
The Problem That Won't Go Away
Stolen, weak, or mismanaged credentials are the single most common cause of security breaches. The Verizon DBIR consistently finds that over 80% of hacking-related breaches involve credentials in some form. This figure has been stable for a decade despite enormous industry investment in security tools, training, and frameworks.
The mechanics are straightforward and brutal. People reuse passwords across services. When one service is breached and its password database is stolen, those username/password pairs are tested against every other major service in automated credential stuffing attacks. The attack requires no special skill: tools that automate the testing of millions of credential pairs are freely available. Success rates of 0.1-2% sound small until you realize that testing 10 million credentials yields 10,000-200,000 compromised accounts per campaign.
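The arithmetic above can be made concrete in a few lines of Python; the volumes and success rates are the illustrative figures already cited, not measurements:

```python
def stuffing_yield(pairs_tested: int, success_rate: float) -> int:
    """Expected accounts compromised in a credential stuffing campaign,
    given the number of leaked pairs tested and a per-pair success rate."""
    return int(pairs_tested * success_rate)

# The 0.1%-2% range, applied to a 10-million-pair leaked list:
low = stuffing_yield(10_000_000, 0.001)   # 10,000 accounts
high = stuffing_yield(10_000_000, 0.02)   # 200,000 accounts
```

A "small" success rate against a large enough credential dump still produces a five- or six-figure haul of working accounts, which is why defenders cannot rely on the failure rate of any single attempt.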
Example: In January 2023, Norton LifeLock---a cybersecurity company---disclosed that nearly 925,000 accounts had been targeted in a credential stuffing attack. Attackers used username/password pairs from external breaches to access Norton accounts. Users who had reused passwords and had not enabled MFA were exposed. The irony of a security company experiencing a credential-based breach was not lost on observers; it illustrated that the problem is organizational and human, not technical.
Default Credentials
Devices and applications ship with factory-set usernames and passwords. These defaults ("admin/admin," "root/password," "admin/1234") are publicly documented in vendor manuals and known to every attacker.
The 2016 Mirai botnet scanned the internet for IoT devices and attempted to authenticate using a list of 62 default credential pairs. This simple approach infected over 600,000 devices: security cameras, home routers, DVRs, and other internet-connected devices that their owners had never thought to reconfigure. Mirai then used this army of compromised devices to launch distributed denial-of-service attacks that knocked Twitter, Netflix, Reddit, Spotify, and large portions of the internet offline for hours on October 21, 2016.
The lesson was not that Mirai was technically sophisticated---it was the opposite. The lesson was that hundreds of thousands of devices on the internet were secured by credentials everyone already knew.
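A defensive inventory audit for this failure mode can be sketched in a few lines. The credential list and device records below are illustrative, not Mirai's actual 62-pair dictionary:

```python
# Hypothetical audit sketch: flag devices still using factory credentials.
DEFAULT_PAIRS = {
    ("admin", "admin"),
    ("root", "password"),
    ("admin", "1234"),
    ("root", "root"),
}

def flag_default_credentials(devices):
    """Return the devices whose (username, password) matches a known default."""
    return [d for d in devices if (d["user"], d["password"]) in DEFAULT_PAIRS]

inventory = [
    {"host": "cam-01", "user": "admin", "password": "admin"},
    {"host": "dvr-07", "user": "svc",   "password": "X9!kq2#moQ"},
]
flagged = flag_default_credentials(inventory)  # flags cam-01 only
```

The same dictionary an attacker scans with can be run defensively against your own asset inventory before the attacker gets there.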
Shared and Reused Service Account Credentials
Teams share service account passwords through Slack messages, email threads, shared spreadsheets, or sticky notes on monitors. When team members leave the organization, those credentials are often not rotated because nobody is certain who has them or where they are written down.
The 2021 Colonial Pipeline ransomware attack, which shut down fuel distribution to the entire eastern United States for six days and caused gasoline shortages, was traced to a compromised VPN password for an account that was no longer in active use and did not have MFA enabled. The account had a single factor of authentication: a password that had been exposed in a previous breach of a different service. The pipeline company had not known the account was still active or accessible.
Colonial Pipeline paid a $4.4 million ransom to the DarkSide ransomware group to restore operations. The attack could have been stopped at the very first step by a password that should have been changed, on an account that should have been deactivated, with MFA that should have been required.
Hardcoded Credentials
Developers embed API keys, database passwords, signing certificates, and service account credentials directly in source code. The credentials end up in version control systems, container images, deployment pipelines, and documentation repositories. They spread across systems as code is copied, branches are forked, and repositories are cloned.
GitHub reported in 2023 that their secret scanning service, which monitors public repositories for exposed credentials, detected over 12.6 million exposed secrets in the previous year. These included AWS access keys (which grant cloud infrastructure access), database connection strings, private keys, and API tokens for payment processors, communication platforms, and identity providers.
Many organizations have significant secret exposure in private repositories as well. The 2023 CircleCI breach led the company to recommend that all customers immediately rotate secrets stored in their CI/CD environment---because the attackers had access to the environment where secrets were used for automated builds and deployments.
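A minimal sketch of the pattern matching these scanners perform follows. The regexes are simplified stand-ins for real rule sets, and AKIAIOSFODNN7EXAMPLE is AWS's documented placeholder key, not a live credential:

```python
import re

# Simplified secret-scanner rules; production scanners such as GitHub's
# secret scanning or git-secrets ship far more extensive pattern sets.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs for suspected secrets."""
    hits = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\ndb_password = "hunter2hunter2"'
scan_text(sample)  # finds the AWS key and the password assignment
```

Running a check like this in a pre-commit hook or CI pipeline stops secrets before they ever reach version control history, which is far cheaper than rotating them after exposure.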
Failure Category Two: Unpatched Vulnerabilities
The Race Organizations Consistently Lose
A patch is a fix for a known vulnerability. When a vendor releases a patch, they simultaneously publish the existence of the vulnerability and a description of how it works. This creates an asymmetric race: defenders must apply the patch to every affected system across their environment, while attackers need to exploit just one unpatched system. Organizations consistently lose this race.
The mean time to patch a critical vulnerability across enterprises is measured in weeks to months. Exploit code for disclosed vulnerabilities frequently appears within 24-72 hours of the patch release, because security researchers and attackers can analyze the patch itself to understand what it fixed.
Example: The WannaCry ransomware attack of May 2017 exploited EternalBlue (MS17-010), a vulnerability in Microsoft's SMB protocol. Microsoft had released a patch two months earlier, in March 2017, a month before the Shadow Brokers group publicly leaked the NSA-developed EternalBlue exploit. WannaCry infected over 200,000 computers across 150 countries, shutting down hospitals in the UK's National Health Service (forcing cancellation of 19,000 appointments and operations), Telefonica in Spain, FedEx, and factory operations at Renault and Nissan. Every infected system was one that had not applied a patch that had been available for two months.
Why Patching Fails
Fear of breaking production systems: Patches can introduce incompatibilities. Production systems are complex. Organizations delay patching because testing takes time, scheduled maintenance windows are infrequent, and the disruption of an unplanned outage from a bad patch feels more immediate than the abstract risk of a breach from an unpatched vulnerability.
Poor asset inventory: You cannot patch systems you do not know exist. Shadow IT (technology deployed without IT's knowledge), forgotten servers, acquired company systems, and unmanaged endpoints create blind spots. The Equifax breach occurred on a system the security team reportedly did not know was still running the vulnerable version of Apache Struts.
Legacy systems: Old systems built on unsupported software cannot receive patches because the vendor no longer provides them. Organizations that run Windows XP, end-of-life versions of Linux distributions, or legacy enterprise applications face an impossible choice: upgrade at significant cost and disruption, or accept indefinite vulnerability. Many accept the vulnerability.
Patch fatigue: Enterprise environments receive thousands of vulnerability alerts monthly. Without risk-based prioritization, security teams cannot distinguish the critical vulnerabilities in actively exploited, internet-facing systems from the informational findings in internal tooling. Critical patches get lost in the noise.
| Breach | Vulnerability | Patch Available | Window |
|---|---|---|---|
| Equifax (2017) | Apache Struts CVE-2017-5638 | 2 months before breach | 143 days unpatched |
| WannaCry (2017) | EternalBlue MS17-010 | 2 months before breach | Months for many systems |
| Log4Shell (2021) | Log4j CVE-2021-44228 | Days after disclosure | Months for many organizations |
| MOVEit (2023) | SQL injection CVE-2023-34362 | Zero-day, patch released rapidly | Days to weeks |
| Citrix Bleed (2023) | CVE-2023-4966 | Days after disclosure | Weeks for many targets |
Effective Patch Management
Automated patching for standard systems (desktops, laptops, common applications, operating systems) dramatically reduces the patching burden and eliminates the most common delays. Windows Update, macOS Software Update, and Linux package managers can be configured to apply security updates automatically.
For production server infrastructure, automated patching requires testing pipelines: apply patches to staging environments first, run automated tests, and promote to production on a defined schedule. The goal is measured in days (for critical, actively exploited vulnerabilities) to weeks (for high-severity vulnerabilities)---not months.
Patching should be risk-stratified: critical vulnerabilities in internet-facing systems with known active exploitation require emergency patching regardless of schedule. High-severity vulnerabilities in non-exposed systems can follow the standard patch cycle. Understanding risk management in security provides the framework for this prioritization.
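One way to encode this stratification is a simple policy function. The day counts below are example SLA values for illustration, not an industry standard:

```python
# Illustrative risk-stratified patch SLAs. Severity plus exposure plus
# exploitation status determine how long a patch may wait.
def patch_sla_days(severity: str, internet_facing: bool,
                   actively_exploited: bool) -> int:
    """Return the maximum days allowed before the patch must be applied."""
    if severity == "critical" and internet_facing and actively_exploited:
        return 2          # emergency: patch out of cycle
    if severity == "critical":
        return 7
    if severity == "high":
        return 14 if internet_facing else 30
    return 90             # medium/low follow the routine cycle

patch_sla_days("critical", internet_facing=True, actively_exploited=True)  # 2
patch_sla_days("high", internet_facing=False, actively_exploited=False)    # 30
```

Encoding the policy as code lets a vulnerability management pipeline assign deadlines automatically instead of leaving triage to ad hoc judgment under alert fatigue.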
Failure Category Three: Misconfiguration
The Invisible Gaps
Misconfiguration means systems are set up in ways that inadvertently expose data or create attack paths. Cloud environments have made this problem dramatically worse because the number of configurable settings has exploded and the consequences of errors are immediate: a misconfigured S3 bucket is globally accessible from the moment the change is saved.
Example: In 2019, Capital One suffered a breach exposing 106 million credit card applications. The root cause was not a software vulnerability---it was a misconfigured web application firewall (WAF) that permitted server-side request forgery (SSRF) attacks. The SSRF allowed retrieval of AWS instance metadata, which contained temporary IAM credentials. Those credentials had excessive permissions (violating least privilege). The chain: WAF misconfiguration → SSRF attack → metadata service → temporary credentials → data exfiltration. A former AWS employee who understood this chain exploited it. Every link in the chain was a misconfiguration.
Public Cloud Storage Exposures
Amazon S3 buckets, Azure Blob Storage containers, and Google Cloud Storage buckets set to public access have exposed billions of records. In 2017 alone, security researchers discovered exposed S3 buckets containing:
- 14 million Verizon customer records including names, account details, and PINs
- 1.8 billion posts scraped from social media on behalf of the U.S. Department of Defense
- Internal Accenture credentials and decryption tools
- Internal documents and source code from dozens of other organizations
Each of these exposures required a single configuration change to fix: marking the bucket as private. Each had been publicly accessible because someone created the bucket, uploaded data, and checked a checkbox they did not fully understand.
AWS subsequently introduced S3 Block Public Access, a setting that overrides any bucket-level or object-level permission grants to the public. Organizations that enable this at the account level cannot accidentally make an S3 bucket public regardless of how they configure individual buckets. This is a good example of secure-by-default design: making the secure option the default option rather than the option users have to find and enable.
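This guardrail can also be verified in an audit script. The sketch below models the four flags of the real PublicAccessBlockConfiguration; the field names follow the AWS API, while the surrounding check logic is illustrative:

```python
# The four flags of AWS's S3 Block Public Access feature. All four must be
# enabled for the account-level guardrail to be fully effective.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def block_public_access_enforced(config: dict) -> bool:
    """True only if every Block Public Access flag is explicitly enabled."""
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)

safe = {flag: True for flag in REQUIRED_FLAGS}
risky = {**safe, "BlockPublicPolicy": False}
block_public_access_enforced(safe)   # True
block_public_access_enforced(risky)  # False
```

Treating a missing flag the same as a disabled one (via `is True`) keeps the check fail-closed, mirroring the secure-by-default principle the feature embodies.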
Database Exposure
MongoDB, Elasticsearch, and Redis instances deployed with default configurations---no authentication required, listening on all network interfaces---are directly accessible from the internet. The search engine Shodan, which crawls the internet for open ports and services, consistently finds tens of thousands of such databases. Attackers have automated scripts that find, dump, and delete exposed databases, leaving ransom notes demanding payment to restore the data. Thousands of organizations have paid these ransoms for data that they left publicly accessible by default.
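A configuration check for these two failure modes might look like the following sketch. The keys mirror mongod.conf's net.bindIp and security.authorization settings; the checker itself is illustrative:

```python
# Hedged sketch: evaluate a parsed database config for the two exposure
# patterns described above (open network binding, no authentication).
def exposure_findings(config: dict) -> list[str]:
    """Return human-readable findings for an exposed-database config."""
    findings = []
    bind_ip = config.get("net", {}).get("bindIp", "127.0.0.1")
    if bind_ip in ("0.0.0.0", "::"):
        findings.append("listening on all interfaces")
    if config.get("security", {}).get("authorization") != "enabled":
        findings.append("authentication not required")
    return findings

unsafe = {"net": {"bindIp": "0.0.0.0"}}  # auth section absent entirely
exposure_findings(unsafe)
# ['listening on all interfaces', 'authentication not required']
```

Note that the unsafe example never disables anything explicitly; the danger is in what was never configured, which is exactly why these exposures go unnoticed.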
Overly Permissive Access Controls
Cloud IAM roles with wildcard permissions ("Action": "*"), security groups that allow all inbound traffic from anywhere (0.0.0.0/0), and service accounts with administrative privileges "for convenience" create enormous attack surfaces. When attackers compromise a single system with excessive permissions, they inherit the full scope of those permissions for lateral movement and data access.
The principle of least privilege---each identity should have only the permissions necessary for its specific function---is well understood in theory and routinely violated in practice. The typical reason is operational convenience: granting broad permissions is faster than carefully enumerating exactly what is needed, and the cost of over-permission is invisible until a breach occurs.
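A minimal least-privilege lint over IAM-policy-shaped JSON illustrates the check. The statement structure follows AWS's policy grammar; the rules are a small illustrative subset of what tools like Checkov enforce:

```python
# Flag the two wildcard patterns described above in an IAM-style policy.
def wildcard_findings(policy: dict) -> list[str]:
    """Return findings for Allow statements granting Action: * or Resource: *."""
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and "*" in actions:
            findings.append("statement allows Action: *")
        if stmt.get("Effect") == "Allow" and stmt.get("Resource") == "*":
            findings.append("statement allows Resource: *")
    return findings

policy = {"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
wildcard_findings(policy)
# ['statement allows Action: *', 'statement allows Resource: *']
```

Running a lint like this in the deployment pipeline converts the invisible cost of over-permission into a visible, blocking finding before the role ever exists.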
Failure Category Four: Social Engineering and Phishing
Attacking the Human Layer
Social engineering bypasses technical controls entirely by targeting the people who operate systems. Phishing---the most common form---uses deceptive communications to trick people into revealing credentials, installing malware, approving fraudulent transactions, or taking other actions that benefit the attacker.
The 2023 MGM Resorts breach, which cost the company an estimated $100 million, began with a phone call. Attackers from the Scattered Spider group called MGM's IT help desk, impersonated an employee they had identified on LinkedIn, and convinced a support agent to reset the employee's MFA credentials. That single phone call provided a foothold for deploying ransomware across MGM's hotel and casino infrastructure, including systems controlling slot machines, digital room keys, and the reservation system.
Example: Business Email Compromise (BEC) is the most financially destructive form of cybercrime. Attackers compromise or spoof executive email accounts and direct employees to transfer funds, change vendor payment details, or share sensitive information. The FBI's 2023 Internet Crime Complaint Center report found that BEC caused over $2.9 billion in losses in the United States alone in 2023. A single successful BEC attack can transfer millions of dollars to attacker-controlled accounts within hours, before the fraud is detected.
Spear Phishing
Targeted phishing attacks use personal information from social media, company websites, press releases, and previous breaches to craft convincing, personalized deceptions. A spear phishing email might reference a real project the target is working on, come from a spoofed address resembling a colleague's email, or reference a real meeting on the target's calendar if the attacker has already compromised a less-privileged account.
Spear phishing accounts for a small percentage of phishing volume but a disproportionate share of successful compromises. Generic phishing campaigns succeed on 1-3% of targets. Well-crafted spear phishing succeeds on dramatically higher percentages because recipients have no strong signal to distinguish the malicious communication from legitimate ones.
The technical defense against phishing is phishing-resistant authentication. FIDO2 hardware security keys and passkeys are cryptographically bound to the legitimate domain: they will not authenticate to a fake website regardless of how convincing it appears. Google's deployment of hardware keys to all employees eliminated phishing-based account compromises entirely. Cloudflare's use of hardware keys made them immune to the same attack that breached Twilio in 2022.
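The mechanism behind this phishing resistance is an origin check: the browser embeds the page's origin in the signed client data, and the relying party rejects any mismatch. The simplified fragment below follows the WebAuthn clientDataJSON field names; a real verifier also checks the challenge and the cryptographic signature:

```python
import json

def origin_matches(client_data_json: bytes, expected_origin: str) -> bool:
    """Check the origin embedded by the browser in a WebAuthn assertion.

    Simplified fragment of relying-party verification: a lookalike phishing
    domain produces a different origin, so its assertion is rejected even if
    the user is fully deceived.
    """
    data = json.loads(client_data_json)
    return data.get("type") == "webauthn.get" and data.get("origin") == expected_origin

legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://accounts.example.com"}).encode()
phish = json.dumps({"type": "webauthn.get",
                    "origin": "https://accounts-example.com.evil.io"}).encode()

origin_matches(legit, "https://accounts.example.com")  # True
origin_matches(phish, "https://accounts.example.com")  # False
```

Because the origin is supplied by the browser rather than typed or judged by the user, the check holds even when the phishing page is pixel-perfect.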
Help Desk Social Engineering
IT help desk personnel are particularly vulnerable to social engineering because their job is to help people solve problems. Social engineers exploit this helpfulness by claiming to be locked out of accounts, urgently needing access before an important meeting, or pretending to be executives requiring immediate assistance.
The Uber breach (2022) and the MGM breach (2023) both used help desk social engineering as the initial access vector. In both cases, attackers used information available on LinkedIn to impersonate real employees persuasively enough that support staff overrode security controls.
Defenses: require strict identity verification for all account recovery and MFA reset requests. Call back on numbers from the corporate directory, never on numbers the caller provides. Require approval from multiple people for high-privilege actions. Train support staff on social engineering techniques and empower them to refuse requests that do not follow verification procedures, even under apparent urgency.
Failure Category Five: Insider Threats
The Attacker Is Already Inside
Insider threats originate from people with legitimate access: current employees, former employees with lingering access, contractors, partners, and vendors. They divide into malicious insiders who deliberately steal data or sabotage systems, and negligent insiders who cause harm through carelessness, ignorance, or poor judgment.
Example: Edward Snowden, a National Security Agency contractor, exfiltrated an estimated 1.5 million classified documents in 2013 using his legitimate access credentials. The NSA's access monitoring, which should have flagged a contractor retrieving volumes of data far outside his job function and copying it to external media, failed to catch the activity in time. The breach revealed systemic failures in access monitoring at one of the world's most security-conscious organizations.
Negligent insiders cause more breaches than malicious ones. An employee who clicks a phishing link, sends sensitive data to their personal email "to work from home," attaches the wrong file to an email to a customer, or misconfigures a system due to a lack of training is a security threat as real as a deliberate attacker, and far more common.
Detecting insider threats is fundamentally harder than detecting external attacks because insider actions resemble normal authorized activity. Effective detection requires behavioral baselines: understanding what normal activity looks like for each role and flagging significant deviations. A customer service representative who normally accesses 50 customer records per day suddenly accessing 5,000 records is suspicious regardless of whether their access credentials are valid.
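A toy version of that baseline check follows, with illustrative thresholds and data; real user and entity behavior analytics model many more signals than a single count:

```python
import statistics

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's access count if it sits far above the historical norm.

    Uses a simple z-score against the identity's own history; the 3-sigma
    threshold is an illustrative default, not a recommendation.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (today - mean) / stdev > z_threshold

baseline = [48, 52, 50, 47, 55, 49, 51]  # ~50 customer records per day
is_anomalous(baseline, 52)     # False: within normal variation
is_anomalous(baseline, 5000)   # True: valid credentials, suspicious behavior
```

The key property is that the flag fires on behavior, not on authentication: the 5,000-record day is suspicious even though every access used valid credentials.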
Failure Category Six: Supply Chain Compromise
When Your Trusted Software Becomes the Weapon
Supply chain attacks compromise software, hardware, or services that organizations trust, turning them into attack vectors against downstream targets. The attacker does not breach your organization directly---they breach something you use, and use that trust relationship to attack you.
The SolarWinds attack of 2020 is the canonical example. Russian SVR operatives (Cozy Bear) compromised the build pipeline for SolarWinds' Orion network monitoring software. They inserted SUNBURST malware into a legitimate software update, which was digitally signed by SolarWinds and distributed to approximately 18,000 customers as a routine update. The victims installed the malware themselves, trusting the update because it came through official channels with valid cryptographic signatures. Affected organizations included the U.S. Departments of Treasury, Commerce, State, Homeland Security, and Defense.
Example: The 2024 XZ Utils backdoor (CVE-2024-3094) showed that patient, sophisticated attackers target critical open-source infrastructure. An attacker using the identity "Jia Tan" spent two years contributing legitimately to the XZ compression library, building a reputation as a trustworthy contributor, before inserting a backdoor designed to compromise SSH authentication on Linux systems. The backdoor was discovered accidentally by a Microsoft engineer who noticed a 500-millisecond performance regression in SSH on a Fedora test build. Had it not been caught, it would have backdoored SSH authentication on millions of Linux systems worldwide.
Managed Service Provider Attacks
Attackers target IT service providers because a single compromise provides access to hundreds of client organizations. The 2021 Kaseya VSA attack used a zero-day vulnerability in Kaseya's remote monitoring and management software to deploy REvil ransomware to approximately 1,500 businesses across 17 countries---all by compromising a single service provider.
Supply chain risk requires extending security scrutiny beyond your organization's perimeter: vetting vendor security practices, requiring evidence of security controls in contracts, monitoring the behavior of software and services after installation, and maintaining the ability to detect and respond when trusted sources are compromised.
Why Organizations Keep Making the Same Mistakes
The Organizational Dynamics Behind Chronic Failures
Technical explanations for breaches are necessary but insufficient. The harder question is why organizations repeatedly fail at security basics that have been well-understood for decades. The answer lies in organizational incentives and dynamics, not technology.
Security as invisible cost center: Security spending does not generate revenue, making it vulnerable to budget pressure. Until a breach occurs, the return on security investment is invisible---you are paying to prevent events that do not happen. This creates persistent underinvestment bias. The Equifax breach cost the company $1.4 billion in settlements, fines, and remediation. The patch that would have prevented it was free and took minutes to apply. The calculation was clear in retrospect; it was not clear before the breach to people whose performance was measured on other metrics.
Misaligned incentives across the organization: Developers are rewarded for shipping features, not for writing secure code. Executives are measured on growth and margin, not security posture. Security teams are seen as obstacles rather than enablers. When security requirements slow feature development, the pressure to find exceptions is organizational and structural.
Compliance theater: Organizations conflate passing security audits with actually being secure. Achieving SOC 2, ISO 27001, or PCI-DSS certification becomes the goal rather than actual security improvement. Equifax had SOC 2 certification at the time of its breach. Certification demonstrates that security controls exist; it does not guarantee they are effective.
Normalization of deviance: Small security shortcuts that do not immediately cause problems become accepted practice. Skipping a patch cycle, leaving a test credential active, granting broad permissions "temporarily," disabling an alert that generates too many false positives---each deviation normalizes the next. The cumulative exposure grows invisibly until an incident reveals the full extent of the drift.
Building Defenses That Actually Work
Given that most breaches exploit basic failures, the most effective defenses address those basics consistently and systematically.
Enforce MFA universally and use phishing-resistant methods: Hardware security keys or passkeys for all users, particularly administrative accounts. Prioritize these over SMS-based MFA, which is vulnerable to SIM swapping and real-time phishing interception.
Automate patching for standard systems: Desktops, laptops, browsers, and common applications should patch automatically. Production server infrastructure should follow defined SLAs: days for critical vulnerabilities in internet-facing systems, not months.
Enforce infrastructure as code with automated security scanning: Define all cloud configurations in code. Use tools like Checkov and Terraform Sentinel to enforce security policies before resources are deployed. Detect and remediate configuration drift through continuous monitoring.
Implement credential rotation and management: Store all credentials in dedicated secret management systems (AWS Secrets Manager, HashiCorp Vault). Automate rotation. Monitor code repositories for exposed secrets using tools like git-secrets and GitHub's secret scanning.
Practice least privilege with regular review: Start with no permissions and add only what is demonstrably required. Review permissions quarterly. Implement just-in-time access for high-privilege operations. Remove access when roles change.
Build detection capability: Collect and retain comprehensive logs. Implement behavioral analytics to detect anomalies. Configure alerts for specific high-value indicators. Ensure alerting volume is manageable so genuine signals are not lost in noise.
Understanding security tradeoffs helps organizations make informed decisions about where to invest limited security resources for maximum risk reduction.
References
- Verizon. "2024 Data Breach Investigations Report." verizon.com, 2024. https://www.verizon.com/business/resources/reports/dbir/
- IBM Security. "Cost of a Data Breach Report 2024." ibm.com, 2024. https://www.ibm.com/reports/data-breach
- U.S. Government Accountability Office. "Equifax Data Breach: Urgent Need for Improved Oversight of Credit Reporting Agencies." GAO-19-196, 2019. https://www.gao.gov/products/gao-19-196
- CISA. "SolarWinds and Active Directory/M365 Compromise: Detection Guidance." cisa.gov, 2021. https://www.cisa.gov/news-events/alerts/2021/01/08/detecting-post-compromise-threat-activity-microsoft-cloud-environments
- FBI Internet Crime Complaint Center. "2023 Internet Crime Report." ic3.gov, 2024. https://www.ic3.gov/Media/PDF/AnnualReport/2023_IC3Report.pdf
- NCSC (UK). "WannaCry Ransomware Malware Analysis Report." ncsc.gov.uk, 2017. https://www.ncsc.gov.uk/report/wannacry-ransomware-malware-analysis-report
- Antonakakis, Manos et al. "Understanding the Mirai Botnet." USENIX Security Symposium, 2017. https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-antonakakis.pdf
- CISA. "StopRansomware: Colonial Pipeline." cisa.gov. https://www.cisa.gov/news-events/news/attack-colonial-pipeline-what-we-know
- GitHub. "GitHub's Secret Scanning Approach." github.blog. https://github.blog/2023-02-14-open-source-security-how-github-is-protecting-the-world-from-a-trillion-lines-of-code/
- Openwall. "XZ Utils Backdoor Analysis." openwall.com, 2024. https://www.openwall.com/lists/oss-security/2024/03/29/4
The Quantified Cost of Basic Security Failures
Research from academic institutions, government agencies, and the insurance industry has established the financial and operational impact of the recurring security failures described in this article, providing evidence-based grounding for security investment decisions.
IBM Security and Ponemon Institute, "Cost of a Data Breach Report" (2024): The annual study, based on analysis of 604 organizations that experienced breaches between March 2023 and February 2024, found that the global average cost of a data breach reached $4.88 million, a 10% increase from 2023 and the highest figure in the study's 19-year history. Credential-based breaches had an average cost of $4.81 million. Phishing-initiated breaches averaged $4.88 million. The study also found that organizations with fully deployed AI and automation in security operations detected and contained breaches 98 days faster than organizations without these capabilities---a gap that directly corresponds to dwell time and breach scope. For each category of failure documented in this article, the data provides specific cost estimates that support the business case for prevention over remediation.
Rand Corporation, Lillian Ablon and Andy Bogart (2017), "Zero Days, Thousands of Nights": Analysis of a dataset of zero-day vulnerabilities found that the median lifespan of a zero-day is 6.9 years before it is independently discovered and patched. However, this study also found that many exploited vulnerabilities are not zero-days at all---they are known vulnerabilities with available patches that have not been applied. The research reinforces the finding from the Verizon DBIR that most successful attacks exploit vulnerabilities for which fixes have existed for months or years, making patch management the highest-leverage defensive investment for most organizations.
Cyentia Institute and RiskRecon, "A Calculated Risk" (2022): Analysis of 1,000 organizations found that patch application speed was the single strongest predictor of overall security posture. Organizations that patched critical vulnerabilities within 14 days suffered breaches at a rate 3.5 times lower than organizations that took more than 30 days. The research also found that asset inventory completeness---knowing what systems exist and what software they run---correlated more strongly with security outcomes than security spending per employee, suggesting that visibility is a prerequisite for effective defense.
Carnegie Mellon CERT, "Common Sense Guide to Mitigating Insider Threats" (2022): Based on analysis of over 1,500 insider threat cases collected over two decades, CERT researchers identified that the majority of malicious insiders took actions that were detectable through access log analysis before significant damage occurred. In 65% of cases, the insider had used legitimate access to steal or sabotage data, meaning technical access controls functioned correctly while behavioral monitoring failed to flag anomalous patterns. The study found that organizations with formal insider threat programs detected incidents in an average of 77 days, compared to 197 days for organizations without such programs---a difference that corresponds directly to breach scope and cost.
CISA and ENISA, "Top Routinely Exploited Vulnerabilities" (2023): A joint publication identifying the most commonly exploited vulnerabilities in 2022 found that the majority were disclosed vulnerabilities with patches available at the time of exploitation. More significantly, many were the same vulnerabilities that appeared in the same agencies' previous years' lists---organizations were failing to patch vulnerabilities they had already been warned about in prior years. This documented pattern of repeated, preventable failures across multiple years provides empirical evidence for the "normalization of deviance" dynamic described in this article.
Sector-Specific Case Studies: How Different Industries Fail Differently
Security failures manifest differently across industries because threat actors, data types, regulatory environments, and operational constraints vary---but the underlying failure categories remain consistent.
Healthcare: Change Healthcare (2024): The ransomware attack on Change Healthcare, a subsidiary of UnitedHealth Group that processes approximately 15 billion healthcare transactions annually, was initiated through a Citrix remote access portal that lacked multi-factor authentication. The BlackCat/ALPHV ransomware group obtained valid credentials (how they were acquired was not publicly disclosed) and moved laterally for approximately nine days before deploying ransomware. The attack disrupted healthcare payment processing across the United States for weeks, affecting hospitals, pharmacies, and medical practices that could not submit claims or receive reimbursements. UnitedHealth Group disclosed paying a $22 million ransom. Congressional testimony revealed that a system processing a third of all U.S. healthcare transactions was accessible through a remote access portal without MFA---a control failure that would be considered basic in any other industry.
Financial Services: SWIFT Heists (2016-2019): A series of attacks against financial institutions using the SWIFT inter-bank messaging network resulted in hundreds of millions of dollars in fraudulent transfers. The Bangladesh Bank heist of 2016 alone transferred $81 million to accounts in the Philippines and Sri Lanka before the fraud was detected. FBI investigation attributed the attacks to the Lazarus Group, a North Korean state-sponsored actor. The common pattern across SWIFT heists was attackers compromising bank internal networks, learning SWIFT transaction patterns and authentication procedures, then using legitimate credentials to initiate fraudulent transfer requests that appeared authentic to SWIFT's message validation systems. The failures combined credential compromise (gaining network access), insufficient monitoring (failing to detect unusual transfer patterns), and inadequate authorization controls (single-person initiation of large international transfers without secondary approval requirements).
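The single-person-initiation failure in the SWIFT heists has a standard countermeasure: dual control. A minimal sketch (hypothetical API, not SWIFT's actual controls) in which transfers above a threshold cannot be released until two distinct approvers, neither of whom is the initiator, have signed off:

```python
DUAL_CONTROL_THRESHOLD = 100_000  # amounts above this require two approvers

class Transfer:
    def __init__(self, amount, initiator):
        self.amount = amount
        self.initiator = initiator
        self.approvals = set()

    def approve(self, approver):
        # separation of duties: the person who creates the request cannot approve it
        if approver == self.initiator:
            raise ValueError("initiator cannot approve their own transfer")
        self.approvals.add(approver)

    def releasable(self):
        needed = 2 if self.amount > DUAL_CONTROL_THRESHOLD else 1
        return len(self.approvals) >= needed

t = Transfer(81_000_000, initiator="ops-terminal-1")
t.approve("officer-a")
print(t.releasable())  # False: one approval is not enough for a large transfer
t.approve("officer-b")
print(t.releasable())  # True
```

A control this simple would not have stopped the network compromise, but it would have forced the attackers to compromise multiple independent approvers before any $81 million left the building.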
Retail: Target (2013): The breach of 40 million credit card records and 70 million customer records began with stolen network credentials from Fazio Mechanical, an HVAC vendor that Target had granted remote network access for monitoring heating and cooling systems. The vendor's credentials provided access to Target's vendor portal; from there, attackers moved to point-of-sale systems through insufficient network segmentation. Target's security monitoring tool (FireEye) actually detected and alerted on the malware; the alerts were reviewed and dismissed by Target's security team as false positives. The breach cost Target $292 million in breach-related costs and an $18.5 million multistate settlement. The failures: vendor access not segmented from sensitive systems, monitoring alerts not acted upon, and credentials not properly scoped to the minimum access required for the vendor's function.
Government: U.S. Office of Personnel Management (2015): Chinese state-sponsored hackers exfiltrated personnel records of 4.2 million current and former federal employees, security clearance investigation files for 21.5 million individuals including 1.8 million non-applicants who were spouses or cohabitants of applicants, and 5.6 million fingerprint records. The breach was detected not by OPM's own monitoring but by a commercial product demonstration that happened to identify unusual data exfiltration. OPM's systems lacked multi-factor authentication for remote access, the network lacked adequate segmentation, and the agency had been warned by the Inspector General in prior years that security controls were inadequate. The breach compromised sensitive information about nearly every person who had applied for a federal security clearance in the previous fifteen years---precisely the population with access to classified information and systems. The data value to a foreign intelligence service was incalculable.
Frequently Asked Questions
What are the most common causes of security breaches?
Top breach causes: (1) Weak or stolen credentials—default passwords, password reuse, (2) Phishing attacks—employees tricked into revealing access, (3) Unpatched vulnerabilities—known security flaws left unfixed, (4) Misconfigured security settings—exposed databases, open S3 buckets, (5) Insider threats—malicious or negligent employees, (6) Third-party/supply chain compromises—trusted vendors breached, (7) Poor access controls—overly broad permissions, (8) Lack of encryption—sensitive data stored in plaintext, (9) Missing or ineffective monitoring—breaches undetected for months, and (10) Social engineering—manipulating people rather than hacking systems. Most breaches exploit basic security failures, not sophisticated attacks.
Why do organizations often discover breaches months after they happen?
Delayed detection happens because: insufficient logging and monitoring systems, no one actively analyzing security logs, alerts going to unmonitored inboxes, lack of baseline understanding of normal behavior, attackers deliberately moving slowly to avoid detection, compromised systems looking like normal activity, insufficient staff to investigate anomalies, and organizations not looking for breaches unless something obviously breaks. Average time-to-detect is still measured in months for many organizations. Effective security requires active monitoring, not just reactive response—assume you're being attacked and look for evidence continuously.
What is the role of human error in security failures?
Human error contributes to 80-95% of security incidents through: clicking phishing links, using weak passwords, misconfiguring systems, sharing credentials inappropriately, falling for social engineering, ignoring security warnings, leaving devices unlocked, accidentally exposing data, and not following security procedures. This isn't because people are stupid—humans aren't designed to make perfect security decisions continuously. Effective security designs systems that are secure by default, makes secure choices easier than insecure ones, provides clear security training, and assumes humans will make mistakes rather than blaming them when it happens.
How do configuration errors lead to major breaches?
Configuration failures include: databases exposed to the internet without authentication, cloud storage buckets set to public access, overly permissive firewall rules, disabled security features (logging, MFA), default credentials not changed, unnecessary services running and exposed, debug modes left enabled in production, encryption not configured, and backup systems accessible from compromised networks. These errors are common because: cloud services default to flexibility not security, configuration is complex with many options, changes aren't reviewed by security experts, misconfigurations are invisible until exploited, and testing doesn't catch security issues. Automated security scanning and infrastructure-as-code reviews help prevent configuration drift.
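Several of these failure modes can be caught mechanically before an attacker finds them. A minimal sketch of a configuration audit (hypothetical setting names, not tied to any real cloud provider's API) that checks a deployed configuration against a secure baseline:

```python
# Each rule: (setting key, predicate the value must satisfy, finding description)
BASELINE_RULES = [
    ("public_access",  lambda v: v is False, "storage must not be publicly readable"),
    ("mfa_enabled",    lambda v: v is True,  "MFA must be enabled on remote access"),
    ("debug_mode",     lambda v: v is False, "debug mode must be off in production"),
    ("admin_password", lambda v: v not in ("admin", "changeme", ""),
                       "default credentials must be changed"),
    ("logging",        lambda v: v == "enabled", "audit logging must be enabled"),
]

def audit(config):
    """Return a finding for every setting that is missing or violates the baseline."""
    findings = []
    for key, ok, description in BASELINE_RULES:
        if key not in config or not ok(config[key]):
            findings.append(f"{key}: {description}")
    return findings

prod = {"public_access": True, "mfa_enabled": True, "debug_mode": True,
        "admin_password": "s3cr3t-rotated", "logging": "enabled"}
for finding in audit(prod):
    print(finding)
```

Production tooling (cloud security posture management, infrastructure-as-code linters) is this same idea at scale: a machine-readable baseline, evaluated continuously, so misconfigurations surface in a report instead of a breach notification.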
What are supply chain security failures and why are they increasing?
Supply chain attacks compromise trusted third-party software, services, or vendors to breach multiple downstream targets. Examples: malicious code inserted into popular libraries, cloud service provider breach exposing customer data, compromised software update mechanism distributing malware, and vendor access used to attack customers. They're increasing because: organizations rely on numerous third-party services, vendors often have excessive access, supply chains are complex and difficult to audit, attackers get more value compromising one supplier than many individual targets, and responsibility boundaries are unclear. Defense requires: vendor security assessments, least-privilege vendor access, monitoring vendor activities, and incident response plans for supply chain scenarios.
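One concrete defense against a compromised update mechanism is verifying downloaded artifacts against checksums published through an independent channel, so the download server alone cannot vouch for itself. A minimal sketch using Python's standard hashlib and hmac modules (the pinned digest is illustrative; in practice it comes from the vendor's out-of-band signing or release page):

```python
import hashlib
import hmac

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """True only if the artifact's SHA-256 digest matches the pinned value."""
    actual = hashlib.sha256(data).hexdigest()
    # constant-time comparison avoids leaking digest prefixes through timing
    return hmac.compare_digest(actual, expected_sha256.lower())

package = b"example package contents"
pinned = hashlib.sha256(package).hexdigest()  # in practice: published out of band
print(verify_artifact(package, pinned))                # True
print(verify_artifact(package + b"backdoor", pinned))  # False: artifact was tampered with
```

Checksum pinning would not have stopped an attacker who signs the malicious build with the vendor's own key, which is why it belongs alongside, not instead of, vendor assessment and least-privilege access.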
Why do organizations fail to patch known vulnerabilities promptly?
Patching failures happen because: patches might break production systems (fear of downtime), testing patches takes time and resources, organizations don't know all systems requiring patches (poor asset inventory), patching requires maintenance windows for critical systems, legacy systems can't be easily patched, patches released faster than teams can apply them, organizations lack automated patch management, and security teams lack authority to mandate downtime for patching. This creates windows where attackers exploit published vulnerabilities. Solutions: automated patching for non-critical systems, regular maintenance schedules, testing environments, and risk-based prioritization focusing on critical vulnerabilities and internet-facing systems.
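The risk-based prioritization mentioned above can be reduced to a simple scoring pass over the vulnerability backlog. A sketch with illustrative weights (real programs typically key off CVSS plus exploitation evidence such as CISA's Known Exploited Vulnerabilities catalog, and the exact weights are an assumption here, not a standard):

```python
def patch_priority(vuln):
    """Higher score = patch sooner. Weights are illustrative, not a standard."""
    score = vuln["cvss"]          # base severity, 0-10
    if vuln["internet_facing"]:
        score += 5                # reachable by any attacker on the internet
    if vuln["known_exploited"]:
        score += 10               # exploitation already observed in the wild
    return score

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": False, "known_exploited": False},
    {"id": "CVE-B", "cvss": 7.5, "internet_facing": True,  "known_exploited": True},
    {"id": "CVE-C", "cvss": 5.3, "internet_facing": True,  "known_exploited": False},
]
queue = sorted(vulns, key=patch_priority, reverse=True)
print([v["id"] for v in queue])  # exploited, internet-facing flaws jump the CVSS-only order
```

Note how the medium-severity but internet-facing, actively exploited flaw outranks the critical-severity internal one: this is exactly the reordering that would have put Equifax's Struts instance at the top of the queue on day one rather than day 143.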
What security lessons do successful breaches teach?
Key lessons: (1) Basic security hygiene prevents most breaches—patching, strong authentication, encryption, (2) Assume breach—focus on detection and response, not just prevention, (3) Security must be organizational priority with executive support and budget, (4) People are both the weakest link and strongest defense—invest in training, (5) Third-party risk is your risk—vet vendors thoroughly, (6) Incident response plans must be tested before incidents, (7) Transparency about breaches builds trust despite short-term pain, (8) Compliance ≠ security—meeting requirements doesn't guarantee protection, (9) Security requires continuous investment, not one-time fixes, and (10) Learn from others' breaches—most attack patterns repeat.