BUYER BRIEF  ·  VENDOR-NEUTRAL  ·  UPDATED 2026-04-27
RFP framework  ·  Last verified April 2026

XDR vendor evaluation: an independent RFP rubric.

Five domains. Forty-three checklist items. One scoring template. A vendor-neutral rubric for running an XDR RFP without defaulting to the vendor's own evaluation criteria.

Every vendor’s RFP checklist is optimised for their own product. A platform that leads on native telemetry asks you to weight native telemetry; a platform that leads on open integration asks you to weight integration breadth. The result is that running three vendor-authored RFPs produces three biased evaluations, each of which its author wins. A neutral rubric lets you evaluate all three on the same terms.

The five evaluation domains

The rubric covers architecture (how the platform is built), telemetry coverage (what it sees), detection quality (how well it detects), commercial terms (how it prices and contracts), and operational fit (how it integrates into your team). No single domain should dominate; the scoring template below suggests default weights, which you can adjust to your priorities.

01 · Architecture

How the platform is structured decides integration cost, vendor lock-in, and upgrade path.

  • 01  Native XDR, open XDR, or hybrid architecture
  • 02  Deployment model: cloud-only, on-premises, or hybrid
  • 03  Data residency and sovereignty (GDPR, regional compliance requirements)
  • 04  API completeness: can your team automate workflows against the platform? (A hypothetical smoke test follows this list.)
  • 05  Integration with existing SIEM, SOAR, and ticketing tools
  • 06  Support for third-party telemetry ingestion (open-XDR sources, legacy logs)
  • 07  Agent deployment and update model (centralised vs distributed)
  • 08  Upgrade cadence: how often does the platform release breaking changes?
  • 09  High availability and disaster recovery commitments (SLA, RTO, RPO)
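
A quick way to probe item 04 during a proof of concept is a read-only API smoke test. The sketch below is a minimal Python example; the host, endpoint path, and bearer-token scheme are placeholders, not any vendor's actual API, so substitute the vendor's documented equivalents before running.

    # Hypothetical read-only smoke test for API completeness (item 04).
    # BASE, the endpoint path, and the auth scheme are placeholders;
    # substitute the vendor's documented API before running.
    import json
    import urllib.request

    BASE = "https://api.example-xdr.invalid"  # placeholder host
    TOKEN = "REDACTED"                        # placeholder credential

    req = urllib.request.Request(
        f"{BASE}/v1/incidents?limit=5",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        incidents = json.load(resp)

    # A complete API should let you pull incidents, push enrichment,
    # and trigger response actions, not just read alerts.
    print(f"Retrieved {len(incidents)} incidents")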

02 · Telemetry coverage

Six telemetry sources matter. For each, verify what the platform ingests and how richly; two cross-cutting items on retention and schema close the list.

  • 01  Endpoint: workstations, servers, mobile. First-party agent vs bring-your-own.
  • 02  Network: flow logs, packet capture, proxy logs, DNS. Native sensors vs third-party ingestion.
  • 03  Email: message metadata, attachments, URLs, authentication headers. Native integration depth.
  • 04  Identity: IdP events, MFA, session management, privilege changes. Native vs third-party.
  • 05  Cloud workload: VM, container, serverless, managed-database. Coverage per cloud provider.
  • 06  Application: SaaS apps, API gateways, custom application logs. Integration breadth.
  • 07  Retention policy per source (queryable hot vs archive cold, default windows).
  • 08  Ingest schema: normalisation across sources, preservation of source-specific fields. (A sketch of such a record follows this list.)
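
To make item 08 concrete, here is a hypothetical normalised event record: shared fields promoted to the top level, source-specific fields preserved intact rather than dropped. The field names are illustrative, not any vendor's schema; ask each vendor to show you theirs.

    # Hypothetical normalised event record (item 08). Field names are
    # illustrative only; request each vendor's actual ingest schema.
    event = {
        "timestamp": "2026-04-01T09:00:00Z",
        "source": "email",                  # originating telemetry layer
        "entity": "alice@example.com",      # normalised principal
        "action": "message.delivered",
        "severity": 3,
        "raw": {                            # source-specific fields kept intact
            "spf": "pass",
            "dkim": "fail",
            "attachment_count": 1,
        },
    }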

03 · Detection quality

Detection is what you pay for. These signals distinguish platforms that detect well from those that merely market well.

  • 01  MITRE ATT&CK coverage: which techniques are covered by out-of-the-box detections
  • 02  Detection engineering model: locked content vs customisable rules vs fully bring-your-own
  • 03  False-positive rate claims and the methodology used to measure them
  • 04  Detection content update cadence and customer-facing transparency
  • 05  Threat intelligence integration: vendor-owned feed, third-party feeds, custom feeds
  • 06  Behavioural analytics: user and entity behaviour analytics, anomaly detection approach
  • 07  Machine learning models: what data they are trained on, how they are evaluated, and how the tuning process works
  • 08  Incident correlation logic: how events across layers are grouped into incidents (a naive baseline sketch follows this list)
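
Item 08 is easier to interrogate with a baseline in mind. The sketch below implements the naive version (group events that share an entity within a rolling time window) purely as a reference point; production correlation engines layer graph analysis and confidence weighting on top, and the events shown are invented.

    # Naive entity + time-window correlation, as a baseline for
    # questioning item 08. Real XDR correlation is far richer; this
    # only illustrates the simplest possible grouping.
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=30)

    def correlate(events):
        """Group events sharing an entity within a rolling time window."""
        incidents = []
        for event in sorted(events, key=lambda e: e["ts"]):
            for incident in incidents:
                if (event["entity"] == incident["entity"]
                        and event["ts"] - incident["last_ts"] <= WINDOW):
                    incident["events"].append(event)
                    incident["last_ts"] = event["ts"]
                    break
            else:
                incidents.append({"entity": event["entity"],
                                  "last_ts": event["ts"],
                                  "events": [event]})
        return incidents

    events = [  # hypothetical cross-layer events for one user
        {"ts": datetime(2026, 4, 1, 9, 0), "entity": "alice", "layer": "email"},
        {"ts": datetime(2026, 4, 1, 9, 10), "entity": "alice", "layer": "endpoint"},
        {"ts": datetime(2026, 4, 1, 14, 0), "entity": "alice", "layer": "identity"},
    ]

    for i in correlate(events):
        print(i["entity"], [e["layer"] for e in i["events"]])

Ask each vendor how their grouping improves on this baseline: what links events beyond shared entity and time, and how analysts can inspect or override the logic.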

04 · Commercial terms

Per-unit rate is only the start. The contract terms are where the year-two surprise usually lives; a worked example follows the checklist.

  • 01  Primary pricing axis (per endpoint, per user, per workload, per GB) and blended rate
  • 02  Multi-year discount structure and commitment obligations
  • 03  Ingest overage rate and overage billing cadence (monthly vs annual true-up)
  • 04  Retention tier pricing beyond the default hot window
  • 05  Onboarding fee and what it covers (deployment vs migration vs training vs integration)
  • 06  Renewal escalation cap (often 5-10% annually) and renegotiation rights
  • 07  Minimum commitment: can you reduce endpoints mid-term if the environment shrinks?
  • 08  Data egress terms: what happens to your data if you terminate
  • 09  Assignment and change-of-control clause (matters for M&A)
  • 10  Service-level agreement: platform uptime, support response time, remediation credits
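
The worked example below makes the year-two surprise concrete. Every figure is a hypothetical assumption for illustration, not a market benchmark: a per-endpoint contract with a 7% renewal escalation and a metered ingest overage.

    # Hypothetical year-two cost model; every figure below is an
    # assumption for illustration, not a market benchmark.
    endpoints = 2_000
    rate_per_endpoint = 60.0        # year-one annual rate, USD
    escalation = 0.07               # renewal escalation (cap often 5-10%)
    committed_ingest_gb = 50_000    # annual committed ingest
    actual_ingest_gb = 62_000      # actual ingest after new log sources
    overage_per_gb = 0.45           # metered overage rate

    year_one = endpoints * rate_per_endpoint
    overage = max(0, actual_ingest_gb - committed_ingest_gb) * overage_per_gb
    year_two = endpoints * rate_per_endpoint * (1 + escalation) + overage

    print(f"Year one: ${year_one:,.0f}")
    print(f"Year two: ${year_two:,.0f} (incl. ${overage:,.0f} ingest overage)")

Under these assumed numbers, year two costs about 11.5% more than year one even though the escalation cap reads as 7%; the overage line is the difference.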

05 · Operational fit

The best platform on paper is still wrong if your analysts cannot use it well.

  • 01  Analyst console usability: incident workflow, triage efficiency
  • 02  Integration with your existing SOAR, ticketing, and collaboration tools
  • 03  Reporting: executive dashboards, compliance reports, trend analysis
  • 04  Training: included hours, certification path, documentation quality
  • 05  Customer support: tiered support model, named technical account manager (TAM) availability, escalation path
  • 06  Community: customer advisory groups, user conferences, content-sharing ecosystem
  • 07  Professional services: deployment, tuning, detection engineering engagements
  • 08  References: customers of similar size, industry, threat model

Scoring three vendors against the rubric

A simple scoring approach: mark each item within a domain yes, partial, or no, then average the marks and scale them so each domain gets a score of 0 to 5. Give each domain an equal weight initially, then reweight by strategic priority. The final vendor score is the sum of weighted domain scores.
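
A minimal sketch of that arithmetic in Python, with the yes/partial/no values (1.0 / 0.5 / 0.0) and the example marks as illustrative assumptions rather than part of the rubric:

    # Sketch: convert per-item marks into a weighted vendor score.
    # The mark values and example marks are illustrative assumptions;
    # substitute your own rubric results.
    MARK_VALUES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

    def domain_score(marks):
        """Average the item marks and scale to the 0-5 domain score."""
        return 5 * sum(MARK_VALUES[m] for m in marks) / len(marks)

    def vendor_score(domains, weights):
        """Weighted sum of domain scores; weights should sum to 1.0."""
        return sum(weights[d] * domain_score(marks) for d, marks in domains.items())

    weights = {
        "architecture": 0.20, "telemetry": 0.25, "detection": 0.25,
        "commercial": 0.20, "operational": 0.10,
    }

    # Hypothetical marks for one vendor (9, 8, 8, 10, 8 items per domain).
    vendor_a = {
        "architecture": ["yes"] * 6 + ["partial"] * 2 + ["no"],
        "telemetry":    ["yes"] * 5 + ["partial"] * 3,
        "detection":    ["yes"] * 4 + ["partial"] * 2 + ["no"] * 2,
        "commercial":   ["yes"] * 7 + ["partial"] * 3,
        "operational":  ["yes"] * 6 + ["no"] * 2,
    }

    print(f"Vendor A weighted total: {vendor_score(vendor_a, weights):.2f} / 5")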

Domain                Weight    Vendor A    Vendor B    Vendor C
Architecture            20%        __/5        __/5        __/5
Telemetry coverage      25%        __/5        __/5        __/5
Detection quality       25%        __/5        __/5        __/5
Commercial terms        20%        __/5        __/5        __/5
Operational fit         10%        __/5        __/5        __/5
Weighted total         100%        ____        ____        ____

The weighting above is a reasonable default for a mid-market RFP where telemetry coverage and detection quality matter most. Reweight for your context: regulated industries often push commercial terms and retention higher; resource-constrained teams push operational fit higher.

Use the question bank in vendor calls

The rubric above is for evaluating what each vendor submits. The question bank is for forcing the vendor to submit honestly. The two tools work together.
