Breach Case Studies

Real Breaches Caused by Attack Surface Exposure

Every breach below exploited a gap in external visibility. These are not hypothetical scenarios. They are documented incidents where organizations were compromised through assets they didn't know they had, or exposures they didn't know existed.

The Common Thread: Invisible Assets

The breaches documented on this page span different industries, attack vectors, and threat actors, but they share a common thread: each organization was breached through an asset or exposure it could not see.

A misconfigured cloud instance that wasn't in the asset inventory. A forgotten Exchange server from a pre-cloud era. An S3 bucket created by a developer for testing that still held production data. Credentials hardcoded in a script that ended up in a public repository.

External Attack Surface Management exists to close these gaps. By continuously discovering and monitoring every internet-facing asset, regardless of whether the organization knows about it, EASM turns invisible exposures into visible, prioritized findings before attackers find them first.

Major Breaches Traced to Attack Surface Exposure

Each case study below documents a real breach, what went wrong, and specifically what an EASM platform would have caught.

Capital One (2019): Misconfigured AWS WAF

A former AWS employee exploited a misconfigured Web Application Firewall to access Capital One's cloud infrastructure. The breach exposed over 100 million customer records including names, addresses, credit scores, and Social Security numbers. The vulnerable server was not part of Capital One's known asset inventory. It was a misconfigured instance that slipped through manual cloud governance.

What EASM would have caught: EASM platforms continuously discover and inventory cloud assets across AWS, Azure, and GCP, including instances, buckets, and services not tracked in internal CMDBs. A misconfigured WAF on an unknown server would have been flagged during automated discovery and configuration assessment.

Microsoft Exchange (2021): ProxyLogon Zero-Day

Nation-state actors (Hafnium) exploited a chain of zero-day vulnerabilities in on-premises Microsoft Exchange servers. Over 30,000 organizations were compromised before patches were widely applied. Many victims didn't know they still had internet-facing Exchange servers: legacy installations forgotten during cloud migrations or running in acquired subsidiaries.

What EASM would have caught: Continuous external scanning detects internet-facing services like Exchange (ports 443, 25, 587) across all owned IP ranges and domains. EASM would have identified forgotten or unknown Exchange servers and flagged them as high-risk the moment the CVE was published.
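
As a rough illustration of the external check described above (real EASM scanners also grab banners and fingerprint versions), a minimal TCP probe of Exchange-associated ports might look like this; the port list and host are assumptions for the sketch:

```python
import socket

# Ports commonly exposed by internet-facing Exchange:
# SMTP (25), HTTPS/OWA (443), and mail submission (587).
EXCHANGE_PORTS = (25, 443, 587)

def open_ports(host: str, ports=EXCHANGE_PORTS, timeout: float = 2.0) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found
```

Run across every owned IP range and domain on a schedule, a probe like this is how forgotten mail servers resurface in an inventory.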

Log4Shell / Log4j (2021): Ubiquitous RCE

A critical remote code execution vulnerability (CVE-2021-44228) in the Apache Log4j logging library affected virtually every organization running Java. The challenge wasn't patching. It was finding every instance. Organizations took weeks to locate all external-facing assets running Log4j because they lacked a centralized inventory of what technologies their internet-facing services used.

What EASM would have caught: EASM platforms maintain a continuously updated technology fingerprint for every discovered asset. When Log4Shell dropped, organizations with EASM could query their inventory to identify every external asset running Java / Log4j within hours, not weeks.
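
A sketch of that inventory query, assuming a hypothetical asset-to-technology mapping (the `INVENTORY` dict and example hostnames are invented for illustration); versions of log4j-core below 2.15.0 were vulnerable to CVE-2021-44228:

```python
# Hypothetical technology-fingerprint inventory: asset -> {technology: version}.
INVENTORY = {
    "app.example.com":  {"nginx": "1.25.3", "log4j-core": "2.14.1"},
    "api.example.com":  {"tomcat": "9.0.82", "log4j-core": "2.17.1"},
    "blog.example.com": {"wordpress": "6.4"},
}

def version_tuple(v: str) -> tuple[int, ...]:
    """Turn '2.14.1' into (2, 14, 1) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))

def vulnerable_assets(inventory: dict, tech: str, fixed_in: str) -> list[str]:
    """Assets running `tech` at a version below `fixed_in`."""
    fixed = version_tuple(fixed_in)
    return [
        asset for asset, stack in inventory.items()
        if tech in stack and version_tuple(stack[tech]) < fixed
    ]

# CVE-2021-44228 was first fixed in log4j-core 2.15.0.
print(vulnerable_assets(INVENTORY, "log4j-core", "2.15.0"))  # → ['app.example.com']
```

The point is not the ten lines of code but the data behind them: without a continuously maintained fingerprint per asset, there is nothing to query when the next Log4Shell lands.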

MOVEit Transfer (2023): Supply Chain Zero-Day

The Cl0p ransomware group exploited a zero-day SQL injection vulnerability in Progress MOVEit Transfer, a widely used file transfer application. The attack compromised over 2,500 organizations and exposed data on 67 million individuals. Most victims weren't running MOVEit themselves. Their data was exposed through vendors and partners in the supply chain.

What EASM would have caught: Supply chain monitoring and third-party technology detection identifies when vendors in your ecosystem run vulnerable software. EASM would have flagged MOVEit instances across partner organizations and alerted to the supply chain exposure before data exfiltration.

Samsung / Lapsus$ (2022): Exposed Endpoints & Leaked Credentials

The Lapsus$ group stole 190GB of Samsung source code, including proprietary algorithms and device security implementations. The attackers gained access through exposed endpoints and credentials leaked in previous breaches that were never rotated. The exfiltrated data included source code for TrustZone, biometric unlock algorithms, and bootloader components.

What EASM would have caught: Credential monitoring and exposed repository detection would have identified leaked Samsung credentials circulating on dark web markets and paste sites. Exposed endpoints used as initial access vectors would have been discovered during continuous external scanning.

Okta Support System (2023): Stolen Service Credentials

Attackers used stolen credentials to access Okta's customer support management system. They were able to view customer-uploaded HAR files that contained session tokens, which were then used to hijack active sessions at downstream companies including Cloudflare and 1Password. The breach cascaded through trust relationships.

What EASM would have caught: Credential and dark web monitoring for service accounts would have detected the compromised credentials before they were used. EASM platforms that monitor for exposed support portals and authentication tokens in public data sources would have flagged the risk early.

Uber (2022): Hardcoded Credentials in Exposed Code

An attacker used social engineering to bypass MFA, then found hardcoded credentials for Uber's privileged access management platform in a PowerShell script stored on an internal network share. This gave them access to Uber's entire internal toolset: Slack, Google Workspace, AWS, and vulnerability reports in HackerOne.

What EASM would have caught: Code repository scanning and credential detection would have found the hardcoded PAM credentials before an attacker did. EASM platforms that scan for exposed internal tools, admin panels, and code repositories on the public internet would have flagged the exposure chain.

Twitch (2021): Misconfigured Server Leaks Everything

The entire Twitch platform was leaked: 125GB of data, including full source code, internal tools, creator payout data, and unreleased products. The breach occurred through a misconfigured server that allowed unauthorized access to internal repositories. The configuration error went undetected for an extended period.

What EASM would have caught: Misconfiguration detection and continuous server exposure monitoring would have identified the improperly configured server. EASM platforms flag publicly accessible assets with overly permissive configurations, especially those hosting source code repositories or internal tools.


Common Exposure Patterns EASM Catches Daily

The breaches above made headlines. But most attack surface exposure is quieter: the same categories of risk repeated across thousands of organizations every day. These are the recurring patterns EASM platforms detect before they become incidents.

Forgotten Staging Servers

Staging environments with production data, left running and internet-accessible after the project ships.

Open Cloud Storage

Publicly readable S3 buckets, Azure Blob containers, and GCS objects containing sensitive data or backups.
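
For S3 specifically, an unauthenticated request against a bucket's virtual-hosted URL is enough to triage exposure: a 200 means anonymous listing is enabled, a 403 means the bucket exists but denies anonymous access, and a 404 means no such bucket. A minimal sketch of that check (the fetch helper is illustrative, not a full scanner):

```python
import urllib.request
import urllib.error

def classify_bucket(status: int) -> str:
    """Interpret the HTTP status of an unauthenticated S3 listing request."""
    if status == 200:
        return "public-listing"   # anyone can enumerate objects
    if status == 403:
        return "exists-private"   # bucket exists, anonymous access denied
    if status == 404:
        return "not-found"
    return "unknown"              # e.g. 301 redirect to another region

def check_bucket(name: str) -> str:
    """Anonymous probe of an S3 bucket's virtual-hosted URL."""
    url = f"https://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return classify_bucket(resp.status)
    except urllib.error.HTTPError as err:
        return classify_bucket(err.code)
```

Azure Blob containers and GCS buckets have analogous unauthenticated probes, each with provider-specific status semantics.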

Exposed Admin Panels

Admin interfaces on non-standard ports (8080, 8443, 9090) with default credentials or no authentication.

Dangling CNAME Records

Subdomain takeover via DNS records pointing to deprovisioned cloud services, claimable by anyone.
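
The detection logic is simple in outline: a CNAME that points at a takeover-prone cloud service whose target no longer resolves is a candidate for takeover. A sketch, assuming the suffix list is illustrative (real takeover fingerprints are per-provider and usually check the HTTP error body, not just DNS):

```python
# Suffixes of services where deleting a resource leaves the name claimable.
# Illustrative subset; production rule sets cover dozens of providers.
TAKEOVER_PRONE_SUFFIXES = (
    ".s3.amazonaws.com",
    ".azurewebsites.net",
    ".github.io",
    ".herokuapp.com",
)

def is_dangling(cname_target: str, target_resolves: bool) -> bool:
    """Flag a CNAME pointing at a claimable service whose target is gone.

    `target_resolves` would come from a live DNS lookup (NXDOMAIN -> False).
    """
    target = cname_target.rstrip(".").lower()
    points_at_cloud = target.endswith(TAKEOVER_PRONE_SUFFIXES)
    return points_at_cloud and not target_resolves
```

EASM platforms run this class of check continuously because the dangerous state is transient: the record is safe until the day the backing resource is deprovisioned.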

Leaked API Keys in Git

API keys, tokens, and secrets committed to public repositories. Even if deleted, they persist in Git history.
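
Secret detection is mostly pattern matching against well-known token formats. A minimal sketch with three illustrative rules (real scanners ship hundreds of provider-specific patterns plus entropy checks):

```python
import re

# Illustrative rules; AWS access key IDs are 'AKIA' + 16 uppercase
# alphanumerics, and GitHub personal access tokens start with 'ghp_'.
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-pat":        re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private-key":       re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, match) pairs for every secret-like string in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Because Git history preserves deleted blobs, a scanner has to walk every reachable commit, not just the working tree, and any key that ever matched must be rotated, not merely removed.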

Expired SSL Certificates

Expired or misconfigured certificates on payment pages, login portals, and API endpoints eroding trust and security.

Debug Endpoints in Production

Debug routes, stack traces, phpinfo() pages, and profiling endpoints left enabled on production servers.

Shadow SaaS Adoption

SaaS tools onboarded with corporate SSO but without security review, creating unmonitored data flows.

The Cost of an Invisible Attack Surface

Every unknown asset is an unmanaged risk. The financial, reputational, and operational impact of breaches caused by attack surface exposure continues to climb.

$4.88M

Average cost of a data breach in 2024 (IBM)

68%

Of breaches involve a non-malicious human element: misconfigurations, errors, exposure

194 days

Average time to identify a data breach in 2024 (IBM)

30%+

Of external assets are unknown to the organization's security team

Don't Wait for a Breach to Discover Your Attack Surface

Every breach on this page could have been prevented with continuous external visibility. See which EASM platforms detect the exposure patterns that matter.