Mean Time to Remediate (MTTR): The Security Leader’s Guide
The longer a vulnerability stays open, the more likely it gets exploited. That simple truth makes Mean Time to Remediate one of the most consequential metrics in cybersecurity. Yet most organizations measure MTTR in ways that overstate their performance and understate their risk.
The gap between patching a vulnerability and actually closing the attack path is where breaches happen. An organization might report an MTTR of 15 days for critical vulnerabilities based on when patches were applied, while the underlying exposure persists for months because the patch was incomplete, the configuration was wrong, or the finding required more than a patch to resolve.
This guide explains how to measure MTTR in a way that reflects actual risk, what benchmarks matter, and why validated MTTR from offensive testing is becoming the metric that boards, regulators, and cyber insurers care about most.
What MTTR Actually Measures
MTTR in cybersecurity measures the average elapsed time between vulnerability discovery and confirmed remediation. The complete lifecycle includes four phases:
Discovery. When the vulnerability is first identified, whether through scanning, penetration testing, bug reports, or threat intelligence.
Triage. When the finding is assessed for severity, exploitability, and business impact, and assigned to a responsible team for remediation.
Remediation. When the fix is implemented, whether through patching, configuration change, code fix, architectural change, or compensating control.
Verification. When the fix is confirmed effective through retesting. This is the phase most organizations skip, and it is the phase that matters most.
Without verification, MTTR measures when you think you fixed something, not when you actually did. The difference between perceived MTTR and actual MTTR can be weeks or months.
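The gap between perceived and actual MTTR can be made concrete with a small sketch. This example (hypothetical findings, illustrative dates) computes MTTR twice: once stopping the clock at patch time, once at verified closure.

```python
from datetime import datetime

# Hypothetical finding records with timestamps for each lifecycle phase.
# MTTR should run from discovery to *verified* closure, not patch time.
findings = [
    {"id": "F-101", "discovered": datetime(2024, 3, 1),
     "patched": datetime(2024, 3, 10), "verified": datetime(2024, 3, 18)},
    {"id": "F-102", "discovered": datetime(2024, 3, 5),
     "patched": datetime(2024, 3, 9), "verified": datetime(2024, 3, 12)},
]

def mttr_days(findings, end_field="verified"):
    """Average elapsed days from discovery to the given end event."""
    spans = [(f[end_field] - f["discovered"]).days for f in findings]
    return sum(spans) / len(spans)

perceived = mttr_days(findings, end_field="patched")   # 6.5 days
actual = mttr_days(findings, end_field="verified")     # 12.0 days
```

Even in this toy dataset, patch-based MTTR understates the real exposure window by nearly half.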
Why Most MTTR Measurements Are Wrong
Several common practices inflate MTTR performance and create false confidence.
Measuring Patch Application, Not Closure
The most common error is equating patch deployment with remediation. A patch addresses one specific vulnerability, but many findings require additional steps: configuration changes, service restarts, dependency updates, or architectural modifications. Measuring when the patch was applied rather than when the vulnerability was confirmed closed systematically understates your actual exposure window.
Ignoring Verification
If you mark a finding “remediated” without retesting, you are reporting an assumption, not a fact. Across thousands of assessments, offensive security teams consistently find that 10-20% of “remediated” findings remain exploitable after the initial fix attempt. The patch was applied to the wrong server, the configuration change was overwritten by automation, or the fix addressed the symptom but not the root cause.
Security validation through retesting is the only way to confirm that MTTR reflects actual closure. The Praetorian Guard platform includes retesting as part of the continuous testing cycle, ensuring that MTTR metrics reflect verified remediation.
Averaging Across All Findings
Reporting a single average MTTR across all severity levels obscures the metric that matters most: how quickly you close critical, exploitable findings. An average MTTR of 30 days sounds reasonable, but if that average includes thousands of low-severity findings remediated quickly and a handful of critical findings that took six months, the average masks your actual risk.
Always segment MTTR by severity, and give particular attention to MTTR for validated exploitable findings: those confirmed through offensive testing to represent real attack paths.
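The masking effect of a blended average is easy to demonstrate. This sketch (illustrative numbers) compares an overall MTTR against per-severity figures:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (severity, mttr_days) pairs for closed findings.
closed = [
    ("critical", 180), ("critical", 150),
    ("low", 5), ("low", 7), ("low", 4), ("low", 6),
]

def mttr_by_severity(closed):
    """Average MTTR per severity bucket."""
    buckets = defaultdict(list)
    for severity, days in closed:
        buckets[severity].append(days)
    return {sev: mean(days) for sev, days in buckets.items()}

overall = mean(days for _, days in closed)   # ~58.7 days: looks tolerable
segmented = mttr_by_severity(closed)         # critical: 165, low: 5.5
```

The blended 58.7-day figure hides a 165-day average for critical findings, which is the number that actually describes the organization's risk.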
Starting the Clock at Triage, Not Discovery
Some organizations measure MTTR from when a finding enters their ticketing system rather than when it was first discovered. If a vulnerability scan runs on Monday and findings are not triaged until Thursday, three days of exposure are excluded from the measurement. Start the clock at discovery, not triage.
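The Monday-to-Thursday example works out like this (illustrative dates):

```python
from datetime import date

discovered = date(2024, 6, 3)   # Monday: the scan finds the vulnerability
triaged = date(2024, 6, 6)      # Thursday: the finding enters ticketing
closed = date(2024, 6, 20)      # fix verified

from_triage = (closed - triaged).days        # 14 days: the reported MTTR
from_discovery = (closed - discovered).days  # 17 days: the real exposure
```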
MTTR Benchmarks
Benchmarks provide context, but your own trend over time matters more than hitting an external number.
Industry Standards
CISA Binding Operational Directive 22-01 requires federal agencies to remediate known exploited vulnerabilities within 14 days. This has become a de facto benchmark for critical findings across both public and private sectors.
FedRAMP and NIST guidance generally recommends 30 to 90 days for critical findings, depending on the system categorization and the specific vulnerability.
PCI DSS 4.0 requires that critical and high-severity vulnerabilities be addressed in a timely manner, with specific timelines defined by the organization and validated during assessments. PCI DSS compliance programs should define explicit MTTR targets.
Practical Targets
Based on industry research and assessment experience, reasonable targets for validated MTTR are:
| Severity | Target MTTR | Context |
|---|---|---|
| Critical (exploitable, path to critical asset) | 7-15 days | These findings represent active risk to your most important assets |
| High (exploitable, significant impact) | 15-30 days | Real vulnerabilities that require prompt attention |
| Medium (exploitable, limited impact) | 30-90 days | Genuine findings balanced against operational priorities |
| Low (validated but minimal risk) | 90-180 days | Real findings with minimal business impact |
These targets apply to validated findings from offensive testing. Scanner-only findings without exploitation confirmation may warrant different timelines based on your organization’s risk appetite.
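Targets like these are straightforward to enforce programmatically. This sketch uses the upper bound of each range from the table above (an assumption; your organization may set stricter values):

```python
# Upper-bound MTTR targets in days, taken from the severity table above.
TARGETS = {"critical": 15, "high": 30, "medium": 90, "low": 180}

def sla_status(severity, mttr_days):
    """Report whether a finding's validated MTTR met its severity target."""
    target = TARGETS[severity]
    return "met" if mttr_days <= target else f"missed by {mttr_days - target}d"

sla_status("critical", 12)   # "met"
sla_status("high", 42)       # "missed by 12d"
```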
The Real Benchmark: Your Own Trend
External benchmarks are useful for context, but your most important benchmark is whether your MTTR is improving or degrading over time. A consistent downward trend in MTTR for critical findings indicates that your remediation processes are maturing. A flat or increasing trend signals a problem that needs attention, regardless of how the absolute number compares to industry averages.
Validated MTTR: The Metric That Matters
The distinction between MTTR for scanner findings and MTTR for validated exploitable findings is the difference between measuring noise and measuring risk.
What Validation Changes
When a penetration tester or red team validates a finding, they confirm three things: the vulnerability is real (not a false positive), the vulnerability is exploitable in your specific environment, and the vulnerability leads to a meaningful impact.
MTTR for these validated findings measures your response to confirmed risk. This is the number your board should see, the number your cyber insurer will ask about, and the number that actually correlates with breach probability.
The Retesting Loop
True validated MTTR includes retesting after remediation. The cycle looks like this:
- Offensive testing identifies exploitable vulnerability (clock starts)
- Finding reported with exploitation evidence
- Organization implements remediation
- Offensive team retests to verify fix
- Fix confirmed effective (clock stops) or finding returned for additional remediation
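The loop above can be modeled as a small lifecycle tracker. This is a simplified sketch, not an implementation of any particular platform; the key property is that the clock stops only when a retest confirms the fix.

```python
from datetime import datetime

class Finding:
    """Tracks one finding from discovery to verified closure."""
    def __init__(self, discovered):
        self.discovered = discovered
        self.closed = None
        self.fix_attempts = 0

    def attempt_fix(self):
        """The organization implements a remediation attempt."""
        self.fix_attempts += 1

    def retest(self, still_exploitable, when):
        """Offensive retest: stop the clock only if the fix held.
        Otherwise the finding returns to remediation, clock still running."""
        if not still_exploitable:
            self.closed = when
        return self.closed is not None

    @property
    def mttr_days(self):
        return None if self.closed is None else (self.closed - self.discovered).days
```

A finding whose first fix fails retest keeps accruing exposure time until a later retest confirms closure, so its MTTR reflects the full window, not the first attempt.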
This retesting loop is built into continuous penetration testing programs. The Praetorian Guard platform tracks the complete lifecycle from discovery through verified remediation, producing MTTR metrics that reflect actual closure.
Why Insurers and Regulators Care
Cyber insurance carriers are increasingly requesting MTTR data as part of their underwriting process. Carriers want to see evidence that the organization responds quickly to known vulnerabilities, particularly validated ones. An organization with a 12-day MTTR for critical validated findings represents a materially different risk than one with a 90-day MTTR.
SEC cybersecurity disclosure rules require companies to describe their risk management processes. MTTR for validated findings is one of the most concrete data points an organization can provide to demonstrate that their risk management is operational, not just aspirational.
Improving MTTR
Reducing MTTR requires improvements across the entire discovery-to-verification lifecycle, not just faster patching.
Reduce Triage Time
The gap between discovery and actionable assignment is often the largest hidden contributor to MTTR. Automate severity classification where possible, ensure findings route directly to responsible teams, and establish SLA triggers that escalate untriaged findings after defined thresholds.
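An SLA trigger of this kind can be sketched in a few lines. The thresholds here are illustrative assumptions, not prescribed values:

```python
from datetime import datetime, timedelta

# Hypothetical escalation thresholds: hours a finding may sit untriaged,
# keyed by scanner-assigned severity.
TRIAGE_SLA_HOURS = {"critical": 24, "high": 72, "medium": 168}

def overdue_for_triage(findings, now):
    """Return IDs of findings that have exceeded their triage SLA."""
    overdue = []
    for f in findings:
        limit = timedelta(hours=TRIAGE_SLA_HOURS[f["severity"]])
        if f["triaged_at"] is None and now - f["discovered_at"] > limit:
            overdue.append(f["id"])
    return overdue
```

Wired into a ticketing system, a check like this turns triage delay from an invisible MTTR contributor into an escalation event.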
Prioritize by Exploitability
Not all findings deserve the same urgency. Focus remediation effort on validated exploitable findings first, particularly those that form attack paths to critical assets. Risk-based vulnerability management frameworks help organizations prioritize based on real risk rather than CVSS scores alone.
Fix Root Causes, Not Symptoms
Recurring findings indicate that remediation is addressing symptoms rather than root causes. If the same class of vulnerability keeps appearing across assessments, invest in the underlying process fix: developer training, secure defaults, architectural changes, or DevSecOps pipeline improvements that catch the issue earlier.
Automate Where Possible
Automated remediation for well-understood vulnerability classes (missing patches, configuration drift, certificate expiration) can dramatically reduce MTTR for those categories, freeing human attention for complex findings that require manual investigation.
Close the Verification Loop
Implement a process for retesting every remediated finding above a defined severity threshold. Without retesting, your MTTR reflects when you attempted a fix, not when the fix actually worked. The Praetorian Guard platform automates the retesting workflow as part of continuous engagement.
MTTR as a Board-Level KPI
MTTR is increasingly appearing in board reporting because it answers a question every director understands: when we find a problem, how quickly do we fix it?
How to Present MTTR to the Board
Segment by severity. Show MTTR for critical, high, and medium findings separately. The board cares most about how quickly you close the worst problems.
Show the trend. Quarter-over-quarter MTTR trends tell a clear story of improvement or regression. A downward trend demonstrates that security investments are producing faster response.
Connect to exposure window. Frame MTTR in terms of business risk: “Our MTTR for critical findings dropped from 45 days to 12 days this quarter, reducing our exposure window by 73%. Each day of exposure represents approximately $X in annualized risk.”
Distinguish validated from unvalidated. Make clear whether your MTTR includes retesting verification. Validated MTTR is a stronger metric that demonstrates more rigorous risk management.
Benchmark against targets. Show progress toward defined MTTR targets by severity level. Cybersecurity metrics frameworks should include MTTR targets as core KPIs.
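The exposure-window framing is simple arithmetic; the 45-to-12-day example above reduces to a one-line calculation (hypothetical helper):

```python
def exposure_reduction(prev_mttr, curr_mttr):
    """Percent reduction in the exposure window between two periods."""
    return round(100 * (prev_mttr - curr_mttr) / prev_mttr)

exposure_reduction(45, 12)  # 73
```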
When combined with cyber risk quantification, MTTR becomes a financial metric: each day of reduced MTTR translates to reduced annualized loss expectancy, making the ROI of security investments quantifiable.
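As a sketch of that translation, assuming a flat per-day exposure cost per open critical finding (a deliberate simplification; real cyber risk quantification models weight by likelihood and impact):

```python
def ale_reduction(daily_exposure_cost, mttr_days_saved, critical_findings_per_year):
    """Estimated annualized loss expectancy reduction from a faster MTTR,
    under a flat per-day exposure cost assumption."""
    return daily_exposure_cost * mttr_days_saved * critical_findings_per_year

# Illustrative: $2,000/day exposure, 33 days saved per finding,
# 10 critical findings per year.
ale_reduction(2000, 33, 10)  # 660000
```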