Vendor Evaluation Guide
Questions to Ask EASM Vendors Before You Buy
9 critical questions that separate mature EASM platforms from surface-level scanners. Use this guide during vendor demos, RFP processes, and POC evaluations.
What is your discovery model: passive, active, or both?
Why this matters
Passive discovery aggregates data from public sources (CT logs, DNS records, WHOIS, OSINT) without directly touching assets. Active discovery probes assets directly, scanning ports, crawling web applications, fingerprinting services. The best platforms combine both: passive for breadth and stealth, active for depth and validation.
What to look for
- Platforms that combine passive and active techniques can discover several times more assets (vendors often cite 2-5x) than passive-only approaches
- Active scanning should include full IPv4/IPv6 port scanning, DNS brute-forcing, web crawling, and service fingerprinting
- Passive sources should include Certificate Transparency logs, passive DNS aggregation, WHOIS/RDAP data, and OSINT feeds
Ask this in the demo:
“What percentage of a typical customer's asset inventory was found via active vs. passive discovery? Can I see a breakdown?”
Red flag
If a vendor relies entirely on passive data aggregation, they're missing assets that only active scanning reveals: exposed services, open ports, running applications.
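One passive technique worth understanding before the demo is Certificate Transparency mining: every publicly trusted TLS certificate is logged, so subdomains show up in CT data without any asset ever being touched. The sketch below parses a crt.sh-style JSON response (the `name_value` field and the sample payload are illustrative assumptions) and extracts unique subdomains of an apex domain:

```python
import json

def subdomains_from_ct(ct_json: str, apex: str) -> set[str]:
    """Extract unique subdomains of `apex` from a crt.sh-style JSON response.

    Each entry's name_value field may hold several newline-separated names,
    including wildcard entries like *.example.com.
    """
    names = set()
    for entry in json.loads(ct_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.").lower()  # drop wildcard prefix
            if name == apex or name.endswith("." + apex):
                names.add(name)
    return names

# Illustrative payload mimicking the crt.sh JSON shape.
sample = json.dumps([
    {"name_value": "www.example.com\napi.example.com"},
    {"name_value": "*.example.com"},
    {"name_value": "unrelated.org"},
])
print(sorted(subdomains_from_ct(sample, "example.com")))
# → ['api.example.com', 'example.com', 'www.example.com']
```

A real pipeline would combine this with passive DNS, WHOIS/RDAP, and OSINT feeds, then hand candidates to active scanning for validation.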
Do you operate your own internet-scale scanning infrastructure?
Why this matters
Some EASM platforms scan the entire IPv4 address space and ingest terabytes of internet-wide data on a regular basis. Others perform scoped lookups limited to known seed domains. Internet-scale scanning finds assets tied to your organization that scoped approaches miss entirely: assets on unexpected IP ranges, forgotten cloud instances, and infrastructure from acquisitions.
What to look for
- Own scanning infrastructure vs. relying on third-party data providers (Shodan, Censys datasets)
- Scan frequency: how often is the full internet re-scanned?
- Coverage of non-standard ports beyond the common top 1,000
Ask this in the demo:
“Do you maintain your own internet-scale scanning infrastructure, or do you rely on third-party data? How frequently do you re-scan the full internet?”
Red flag
Vendors relying entirely on third-party internet scan data have less control over freshness, coverage, and the ability to customize scanning for specific protocols.
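The building block behind internet-scale scanning is nothing exotic: a TCP connect check repeated across billions of address/port pairs with careful rate control. A minimal sketch of that primitive (the demo listener is only there to make the example self-contained):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Active-discovery primitive: attempt a TCP connection and report success.

    Internet-scale scanners parallelize this (or use raw SYN packets) across
    the full IPv4 space; the core check is the same.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener we control, so nothing external is scanned.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
print(tcp_port_open("127.0.0.1", port))   # True: something is listening
listener.close()
```

The differentiator between vendors is not this check but the infrastructure around it: re-scan cadence, port coverage beyond the top 1,000, and protocol-specific fingerprinting after the connect succeeds.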
How accurate is your asset attribution?
Why this matters
Discovery is only useful if assets are correctly attributed to your organization. The internet is full of shared infrastructure (CDNs, cloud hosting, SaaS platforms) where many organizations share the same IPs or domains. False positives waste analyst time; false negatives leave real assets unmonitored.
What to look for
- Multi-signal attribution using domain registration, SSL certificates, hosting patterns, code analysis, and organizational metadata
- Confidence scoring on each discovered asset
- Ability to handle shared infrastructure, CDN-hosted assets, and multi-tenant cloud environments
- Customer-tunable attribution rules for edge cases
Ask this in the demo:
“What is your false positive rate for asset attribution? How do you handle shared infrastructure and CDN-hosted assets? Can I see the attribution confidence score for each asset?”
Red flag
If a vendor can't tell you their false positive rate or doesn't have confidence scoring, their attribution model may create more noise than signal.
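To make "multi-signal attribution with confidence scoring" concrete, here is a toy model of how such a score might be composed. The signal names and weights are entirely hypothetical; real platforms use many more signals and tuned, often learned, weights:

```python
# Hypothetical attribution signals and weights -- illustrative only.
SIGNALS = {
    "whois_org_match":   0.4,  # registrant organization matches
    "cert_san_match":    0.3,  # certificate SAN overlaps known domains
    "hosting_asn_match": 0.2,  # hosted in an ASN/range the org uses
    "tracker_id_match":  0.1,  # shared analytics/tag IDs in page source
}

def attribution_confidence(observed: set[str]) -> float:
    """Sum the weights of the signals observed for a candidate asset."""
    return round(sum(w for s, w in SIGNALS.items() if s in observed), 2)

print(attribution_confidence({"whois_org_match", "cert_san_match"}))  # 0.7
print(attribution_confidence({"hosting_asn_match"}))                  # 0.2
```

The point of asking to see per-asset scores in the demo is exactly this: an asset attributed only on shared-hosting evidence (0.2 here) deserves far more skepticism than one confirmed by registration and certificate data.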
Do you model asset relationships as a graph?
Why this matters
Modern EASM platforms model the external attack surface as a graph, showing how domains connect to IPs, IPs to certificates, certificates to services, services to technologies, and so on. Graph-based models enable impact analysis ("if this IP is compromised, what else is affected?"), lateral connection mapping, and more intelligent prioritization.
What to look for
- Visual graph exploration showing domain → IP → certificate → service → technology chains
- Ability to trace relationships across the full attack surface
- Impact analysis capabilities based on graph traversal
- Graph-based deduplication and correlation
Ask this in the demo:
“Can I see the relationship graph between a domain, its IPs, the services running on them, and associated certificates? Can I trace the blast radius of a compromised asset?”
Red flag
Platforms that present findings as flat lists without relationship context make it harder to understand the true scope and impact of exposures.
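The "blast radius" question above reduces to graph traversal: from a compromised node, walk outgoing relationships and collect everything reachable. A minimal sketch with an illustrative adjacency map (node names are hypothetical):

```python
from collections import deque

# Illustrative attack-surface graph: domain → IP → certificate/service → ...
edges = {
    "example.com":   ["203.0.113.10"],
    "203.0.113.10":  ["cert:abc123", "svc:https:443"],
    "cert:abc123":   ["api.example.com"],   # cert shared with another domain
    "svc:https:443": ["tech:nginx"],
}

def blast_radius(start: str, graph: dict[str, list[str]]) -> set[str]:
    """Breadth-first traversal: everything reachable from a compromised node."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen - {start}

print(sorted(blast_radius("203.0.113.10", edges)))
# → ['api.example.com', 'cert:abc123', 'svc:https:443', 'tech:nginx']
```

Note how the shared certificate pulls `api.example.com` into scope, the kind of lateral connection a flat finding list never surfaces.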
Do you validate exposures, or only report raw findings?
Why this matters
There's a critical difference between "this asset might be vulnerable" and "this vulnerability is confirmed exploitable." Exposure validation means the platform actually confirms whether a finding is real and exploitable, not just theoretically risky based on version matching. Platforms that validate dramatically reduce noise and false positives.
What to look for
- Active validation of discovered vulnerabilities (safe exploitation checks, not just version matching)
- Distinction between confirmed, likely, and potential findings in the UI
- Reduction in false positive rate compared to raw CVE-to-version matching
- Validation of misconfigurations (not just flagging, but confirming exploitability)
Ask this in the demo:
“Do you validate findings for exploitability, or do you only report potential risk based on version detection? What percentage of your findings are validated?”
Red flag
Vendors that report every CVE-to-version match without validation will flood your team with noise. A high-volume, low-accuracy feed is worse than no feed at all.
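The confirmed/likely/potential distinction can be summarized as a triage rule: validated evidence always outranks version matching. A schematic sketch (field names are hypothetical, not any vendor's schema):

```python
def triage(finding: dict) -> str:
    """Rank a finding by evidence quality, not CVE severity alone.

    - confirmed: a safe exploitation check actually succeeded
    - likely: version matches a known CVE and the service is reachable
    - potential: version match only -- the noisiest tier
    """
    if finding.get("validated"):
        return "confirmed"
    if finding.get("version_match") and finding.get("service_reachable"):
        return "likely"
    return "potential"

print(triage({"validated": True}))                                  # confirmed
print(triage({"version_match": True, "service_reachable": True}))   # likely
print(triage({"version_match": True}))                              # potential
```

A platform that cannot express this kind of distinction in its UI is, in effect, emitting everything at the "potential" tier.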
What credential and dark web sources do you monitor?
Why this matters
Leaked credentials are one of the most direct attack vectors, often more dangerous than any technical vulnerability. EASM platforms should monitor breach databases, dark web forums and marketplaces, stealer malware logs, paste sites, and public code repositories for credentials tied to your organization.
What to look for
- Breach database coverage: how many sources and how quickly are new breaches ingested?
- Dark web forum and marketplace monitoring (not just surface-level paste sites)
- Stealer malware log analysis (Redline, Raccoon, Vidar, etc.)
- Public code repository scanning for exposed API keys and secrets (GitHub, GitLab, Bitbucket)
- Speed of detection: time from credential appearance to customer alert
Ask this in the demo:
“What specific sources do you monitor for leaked credentials? How quickly after a breach appears are affected credentials surfaced to customers? Do you monitor stealer malware logs?”
Red flag
If credential monitoring is limited to "we check HaveIBeenPwned," the coverage is insufficient. Look for vendors with direct dark web collection capabilities.
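For the code-repository piece of this coverage, secret scanning is typically pattern-driven. The sketch below shows the idea with two deliberately simplified rules; production scanners layer on entropy analysis, hundreds of provider-specific patterns, and commit-history scanning (the example key is fake):

```python
import re

# Simplified illustrative patterns -- real scanners use far more rules.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key":   re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# Fake, non-functional key for demonstration.
print(scan_for_secrets('aws_key = "AKIAABCDEFGHIJKLMNOP"'))
# → ['aws_access_key_id']
```

Repository scanning is only one source, though; the demo questions above about dark web collection and stealer logs probe coverage that no regex can provide.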
How do you handle third-party SaaS and supply chain visibility?
Why this matters
Your external attack surface extends to every SaaS vendor, API integration, and third-party script loaded on your properties. A misconfigured vendor endpoint can expose your customer data. Third-party JavaScript loaded on your site can be compromised in a supply chain attack.
What to look for
- Discovery of third-party SaaS tools connected to your organization
- Monitoring of vendor-side misconfigurations and vulnerabilities
- Tracking of third-party JavaScript and external dependencies on your web properties
- Supply chain risk scoring for critical vendors
Ask this in the demo:
“Can you show me the third-party SaaS tools and vendor integrations that are part of my external attack surface? Do you track third-party JavaScript loaded on my web properties?”
Red flag
Many EASM platforms focus exclusively on first-party assets and have no visibility into the third-party supply chain.
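Tracking third-party JavaScript, the last bullet above, starts with inventorying which external hosts serve scripts on your pages. A minimal stdlib sketch (the sample page and hostnames are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcCollector(HTMLParser):
    """Collect the external hosts that serve <script src=...> on a page."""

    def __init__(self) -> None:
        super().__init__()
        self.hosts: set[str] = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            if host:                      # relative paths are first-party
                self.hosts.add(host)

# Illustrative page: one third-party script, one first-party script.
page = ('<script src="https://cdn.example-analytics.com/t.js"></script>'
        '<script src="/local.js"></script>')
parser = ScriptSrcCollector()
parser.feed(page)
print(sorted(parser.hosts))   # → ['cdn.example-analytics.com']
```

An EASM platform should do this continuously across your web properties and alert when a new, unvetted host appears, since that is exactly how script-injection supply chain attacks surface.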
What AI-specific exposure vectors do you detect?
Why this matters
AI exposure is the fastest-growing category of external attack surface risk. EASM platforms should detect organizational data in AI training sets, exposed ML model endpoints, shadow AI tool usage, and AI-powered SaaS data flows. This capability is relatively new. Most vendors don't have it yet.
What to look for
- Detection of organizational data in LLM training sets and AI outputs
- Scanning for exposed ML model inference APIs and vector databases
- Shadow AI tool detection: which AI services are employees using with corporate data?
- AI SaaS integration analysis: which vendors are sending your data through AI pipelines?
Ask this in the demo:
“What AI-specific exposure vectors do you detect? Can you show me if our organizational data appears in any AI training sets or model outputs? Do you detect exposed vector databases?”
Red flag
If a vendor says AI exposure is "on the roadmap," it means they don't have it today. This is a gap that grows every month as AI adoption accelerates.
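Of the vectors listed above, shadow AI detection is the most mechanically straightforward: match egress traffic (DNS or proxy logs) against known AI-service domains. A toy sketch; the domain list is a small illustrative sample, and real detection also needs SNI/TLS inspection and app-level telemetry:

```python
# Illustrative sample of known AI SaaS endpoints -- a real list is much larger
# and maintained continuously.
AI_SAAS_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def shadow_ai_hits(proxy_log_hosts: list[str]) -> list[str]:
    """Which AI services appear in observed egress traffic?"""
    return sorted(set(proxy_log_hosts) & AI_SAAS_DOMAINS)

print(shadow_ai_hits(["intranet.corp", "claude.ai", "api.openai.com",
                      "claude.ai"]))
# → ['api.openai.com', 'claude.ai']
```

This only answers "which AI tools are in use"; detecting organizational data inside training sets or model outputs is a much harder problem, which is exactly why it separates vendors.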
What API and automation capabilities do you offer?
Why this matters
Enterprise security teams need programmatic access to EASM data. The platform should offer a comprehensive API, webhook support, CI/CD integration points, and the ability to build custom automation workflows on top of EASM findings.
What to look for
- Full REST API covering all platform capabilities (not just a data export endpoint)
- Webhook support for real-time event-driven workflows
- CI/CD integration: can new deployments trigger an EASM rescan?
- SIEM/SOAR integration (Splunk, Sentinel, Cortex XSOAR, etc.)
- Ticketing integration (Jira, ServiceNow) with bi-directional status sync
Ask this in the demo:
“Can I integrate EASM findings into our CI/CD pipeline? Is the API comprehensive enough to build custom workflows, or is it limited to data export? Do you support bi-directional ticketing integration?”
Red flag
A "read-only" API or CSV export is not sufficient for enterprise security operations. Look for full CRUD API access and event-driven webhook capabilities.
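One concrete thing to verify about webhook support is payload authentication: event-driven workflows are only safe if your receiver can prove a delivery came from the vendor. HMAC signing over the raw body is the common pattern; the sketch below shows the receiving side (the secret, payload fields, and signature format are assumptions, since vendors differ):

```python
import hashlib
import hmac
import json

# Hypothetical shared secret -- in practice, issued per-webhook by the vendor.
SECRET = b"demo-shared-secret"

def verify_webhook(body: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare in constant time."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Simulate a delivery: the sender signs the exact bytes it transmits.
body = json.dumps({"event": "new_exposure", "asset": "api.example.com"}).encode()
sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig))            # True: authentic delivery
print(verify_webhook(b"tampered", sig))     # False: body was modified
```

Asking a vendor how webhook deliveries are signed (and whether replays are protected against, e.g. with timestamps) is a quick litmus test for how mature their automation story really is.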
How to use this guide
During demos: Ask at least 3-4 of these questions during each vendor demo. Pay attention to whether the vendor can show the capability live or only talks about it in slides.
In RFPs: Include these questions (or a subset) as formal evaluation criteria. Weight discovery model, attribution accuracy, and exposure validation higher than feature checklists.
During POCs: Run a time-boxed proof of concept against your own infrastructure. Compare how many assets each vendor discovers, their false positive rate, and the actionability of findings.