Introduction:

Incident MTTx metrics split an outage or service incident into time-based stages. MTTD measures how long detection took, MTTA measures how long acknowledgement took, MTTC measures how long containment took, and MTTR measures how long recovery or resolution took. Each number answers a different operational question, so a single average rarely explains the whole response.

These metrics are most useful when teams agree on the timeline first. The same incident can look faster or slower depending on whether the clock starts at impact, alert creation, incident declaration, or ticket opening. Clear timestamp definitions keep post-incident reviews from turning into arguments about the denominator.

[Figure: Incident lifecycle stages used for MTTx metrics. A horizontal incident timeline from Started to Detected, Acknowledged, Contained, and Resolved, with the MTTD (impact to alert), MTTA (pickup delay), MTTC (mitigation path), and MTTR (recovery path) spans shown above it.]

Mean values make the timeline easy to summarize, but incident data is often uneven. One long recovery can pull the mean upward while most incidents are handled quickly. Median and tail readings help separate the normal response path from the few cases that deserve deeper review.

MTTx metrics should guide learning, staffing, runbook tuning, and alert routing. They should not be treated as a scorecard by themselves. Severity, customer impact, missing timestamps, and whether containment happened before final resolution all affect what the numbers can honestly say.

Technical Details:

The calculation uses a base timestamp for each incident. When a detected or opened time exists, that becomes the base for acknowledgement, containment, and resolution metrics. If detected or opened is missing but a started time exists, the started time can be used as the base for those later stages. MTTD is only calculated when both started and detected times are available.

Each metric is calculated in minutes from parsed timestamps. Rows with missing stages can still contribute to the metrics they support. For example, a row without a contained timestamp can still contribute to MTTA and MTTR if it has acknowledgement and resolution times.
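The base-timestamp fallback and per-stage exclusion rules above can be sketched in JavaScript. The field names and the fallback order are taken from this description, not from the tool's source, so treat this as an illustration rather than the actual implementation:

```javascript
// Minutes between two ISO-8601 timestamps; null when either side is
// unparseable or the result would be negative (out-of-order timestamps
// are excluded rather than corrected, as described below).
function minutesBetween(startIso, endIso) {
  const start = Date.parse(startIso);
  const end = Date.parse(endIso);
  if (Number.isNaN(start) || Number.isNaN(end)) return null;
  const minutes = (end - start) / 60000;
  return minutes >= 0 ? minutes : null;
}

// Per-row metrics: detected/opened is the base; started is the fallback.
// MTTD is only produced when both started and detected exist.
function rowMetrics(row) {
  const base = row.detected ?? row.opened ?? row.started ?? null;
  return {
    mttd: row.started && row.detected ? minutesBetween(row.started, row.detected) : null,
    mtta: base && row.acknowledged ? minutesBetween(base, row.acknowledged) : null,
    mttc: base && row.contained ? minutesBetween(base, row.contained) : null,
    mttr: base && row.resolved ? minutesBetween(base, row.resolved) : null,
  };
}
```

A row without a contained timestamp returns `mttc: null` but still yields MTTA and MTTR, which is the partial-contribution behavior described above.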

Incident MTTx metric formulas used by the calculator

Metric | Formula | Interpretation
MTTD | detected - started | Detection lag from impact start to alert, report, or opening time.
MTTA | acknowledged - base | Responder pickup time from detected or opened time to acknowledgement.
MTTC | contained - base | Time to containment, mitigation, or stabilization from the response start point.
MTTR | resolved - base | Time to recovery or resolution from detected, opened, or fallback started time.

The summary table reports count, mean, median, and P85 for each metric. P85 is a tail view: after durations are sorted, it selects the first value at or above the 85th percentile position. That makes it useful for spotting a slow minority of incidents even when the average still looks acceptable.
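The nearest-rank reading described above can be sketched as follows (a sketch of the stated rule, not the tool's source):

```javascript
// Nearest-rank P85: sort ascending, then take the first value at or
// above the 85th percentile position (1-based rank = ceil(0.85 * n)).
function p85(durations) {
  if (durations.length === 0) return null;
  const sorted = [...durations].sort((a, b) => a - b);
  const rank = Math.ceil(0.85 * sorted.length);
  return sorted[rank - 1];
}
```

With five durations the rank is ceil(4.25) = 5, so P85 lands on the slowest value, which is why small samples put P85 on the single worst incident.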

CSV fields recognized for incident MTTx analysis

Field | Accepted meaning | Metric impact
id | Incident key, ticket, or row identifier. | Labels rows, chart bars, exports, and copied row details.
started | Impact start, outage start, or incident begin time. | Enables MTTD when paired with detected time.
detected or opened | Alert, report, created, triggered, or opened time. | Default base timestamp for MTTA, MTTC, and MTTR.
acknowledged | Responder acknowledgement, assignment, or response start time. | Enables MTTA and the acknowledgement segment in the stage chart.
contained | Contained, mitigated, stabilized, or containment time. | Enables MTTC and separates containment from final recovery.
resolved | Resolved, closed, recovered, fixed, or ended time. | Enables MTTR and final recovery reporting.
severity | SEV, priority, impact, or level label. | Groups the severity breach queue and sorts higher-impact labels first.

Target checks treat individual incidents and aggregate metric rows differently. A per-incident breach is marked when that row's MTTA, MTTC, or MTTR exceeds its configured target. The brief status checks both the mean and the P85 against the target, so a metric can show a healthy average while still warning that the tail is too slow.
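A minimal sketch of that aggregate status rule follows; the status labels here are invented for illustration and may not match the tool's exact wording:

```javascript
// Aggregate status: a metric passes only when both its mean and its P85
// are at or under the target. A target of 0 disables the check.
function metricStatus(meanMin, p85Min, targetMin) {
  if (!targetMin) return 'no target';
  if (meanMin > targetMin) return 'breach';        // the average itself is too slow
  if (p85Min > targetMin) return 'tail warning';   // mean passes, tail does not
  return 'ok';
}
```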

Negative durations are excluded rather than corrected. If an acknowledged, contained, or resolved time appears before the base timestamp, that stage is left out and a warning is shown. This protects the aggregate from timestamp-order errors while keeping the row visible for cleanup.

Everyday Use & Decision Guide:

Start with a small, consistent incident population such as one service, queue, team, or postmortem period. Name that slice in Service or queue, then set targets that match your internal response goals. The default targets are 15 min for MTTA, 120 min for MTTC, and 4 hr for MTTR.

The best first pass is a CSV with id, started, detected, acknowledged, contained, resolved, severity. Legacy rows with opened, acknowledged, resolved, and severity can still work, but they leave less room to separate detection lag and containment time from full recovery.
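A first pass in that recommended shape might look like the rows below. The values are invented for illustration; blank fields are allowed where a stage was not tracked:

```csv
id,started,detected,acknowledged,contained,resolved,severity
INC-1,2026-04-30T09:50Z,2026-04-30T10:00Z,2026-04-30T10:08Z,2026-04-30T11:10Z,2026-04-30T12:40Z,SEV2
INC-2,2026-04-30T14:00Z,2026-04-30T14:05Z,2026-04-30T14:12Z,,2026-04-30T15:30Z,SEV3
```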

  • Use Load sample to confirm the expected shape before pasting production rows.
  • Use Normalize rows after copying from a spreadsheet, ticket export, or chat transcript.
  • Fix warnings about unparseable or out-of-order timestamps before treating the mean as report-ready.
  • Check MTTx Brief first, then use Incident Ledger to find the rows behind a target breach.
  • Use Severity Breach Queue when high-impact incidents should drive improvement work before low-impact noise.
  • Open Stage Latency Chart when the question is where the response time is accumulating.

A rough export is still useful when some fields are missing. If started time is unavailable, MTTD will not appear for that row, but MTTA and MTTR can still be calculated from opened or detected time. If containment is not tracked in your workflow, MTTC will be excluded and the warning will tell you how many rows were affected.

Parsing, metric calculation, chart preparation, and JSON output happen in the browser. Avoid sharing an address bar that contains pasted incident data, and treat downloaded CSV, DOCX, chart, and JSON files as incident records if they include internal tickets or service names.

Step-by-Step Guide:

  1. Enter the Service or queue name so summaries, exports, and filenames identify the incident population.
  2. Set MTTA target, MTTC target, and MTTR target. Use 0 only when you want a metric measured without target status.
  3. Paste CSV rows into Incident timestamps CSV, drag in a CSV or TXT file, or use Browse CSV. Files over 1 MiB are rejected for browser-side parsing.
  4. Click Normalize rows if the source contains blank lines or irregular spacing. The action does not invent missing timestamps.
  5. Resolve warnings in Review the incident rows. Missing detected or opened times can remove an entire row, while missing contained or resolved times exclude only those metrics.
  6. Read MTTx Brief for mean, median, P85, target, and status. A tail warning means P85 exceeds the target even if the mean is still within range.
  7. Use Incident Ledger for per-incident MTTD, MTTA, MTTC, MTTR, breach labels, and next action text.
  8. Export the view that matches the handoff: copy or download table CSV, export table DOCX, download the stage chart as PNG/WebP/JPEG/CSV, or copy/download the JSON payload for further analysis.

Interpreting Results:

The summary headline uses mean MTTR as the large figure because full recovery is usually the broadest operational signal. That does not make MTTR the only number worth reading. A healthy MTTR with slow MTTA may point to paging or ownership friction, while slow MTTC can point to runbook, rollback, or mitigation delays before final recovery.

How to read the incident MTTx result tabs

Result area | What to read first | Useful caution
MTTx Brief | Mean, median, P85, target, and status for MTTD, MTTA, MTTC, and MTTR. | Do not stop at the mean when P85 shows a slow tail.
Incident Ledger | Rows with breach labels and concrete next action guidance. | A row can be missing one metric while still contributing to another.
Severity Breach Queue | Severity groups sorted by rank, breached incidents, and average MTTR. | Severity labels are normalized, but your internal severity policy still controls impact meaning.
Stage Latency Chart | Stacked detect, acknowledge, contain, and resolve segments for the slowest incidents. | The chart shows up to 12 incidents, sorted by MTTR.
JSON | Structured metrics, severity groups, incidents, warnings, and errors. | JSON is useful for audit trails, but it may contain raw incident identifiers.

A target breach is a prompt for review, not proof that a team acted poorly. A high MTTA can come from alert fatigue, routing, escalation policy, or ambiguous ownership. A high MTTC can come from missing mitigation steps. A high MTTR can come from validation waits, data repair, customer communication, or a deliberate decision to avoid risky changes during impact.

Compare like with like. A mixed queue of SEV1 outages, SEV4 support issues, and auto-resolved alerts can flatten the story into a misleading average. Use the service name, date window, severity labels, and excluded-row warnings to decide whether the slice is clean enough to report.

Worked Examples:

Sample incident set with one slow tail

The sample rows use five incidents with targets of 15 min MTTA, 120 min MTTC, and 4 hr MTTR. The mean MTTA is about 9 min, mean MTTC is about 1.5 hr, and mean MTTR is about 3.1 hr. The average looks acceptable, but the P85 value reaches the slowest row for each metric, so the brief warns that the tail exceeds the target.

Legacy opened-to-resolved rows

A row shaped like INC-7,2026-04-30T10:00Z,2026-04-30T10:08Z,2026-04-30T11:20Z,SEV2 can calculate MTTA and MTTR from the opened time. It cannot calculate MTTD because there is no separate started timestamp, and it cannot calculate MTTC because there is no contained or mitigated time.
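Working that legacy row through by hand confirms the two metrics it can produce. The positional field order here is assumed from the example row's shape:

```javascript
// Legacy row: id, opened, acknowledged, resolved, severity.
const [id, opened, acknowledged, resolved, severity] =
  'INC-7,2026-04-30T10:00Z,2026-04-30T10:08Z,2026-04-30T11:20Z,SEV2'.split(',');

const minutes = (a, b) => (Date.parse(b) - Date.parse(a)) / 60000;
const mtta = minutes(opened, acknowledged); // 8 min from opened to acknowledged
const mttr = minutes(opened, resolved);     // 80 min from opened to resolved
// MTTD and MTTC stay undefined: the row has no started or contained timestamp.
```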

Severity-driven review

If the breach queue shows SEV1 with fewer incidents but more breaches than SEV3, prioritize the SEV1 rows before chasing the largest raw count. The queue is meant to point at the highest-impact bottlenecks first, then the ledger and chart explain which stage made those rows slow.

FAQ:

Does incident data leave the browser?

The parsing and calculations run in the browser. Still treat the page URL, copied text, and downloaded files carefully because incident identifiers, service names, and pasted rows can be sensitive.

Why is MTTD shown as n/a?

MTTD needs both a started timestamp and a detected or opened timestamp. If the row begins at alert creation, the detection lag is unknown and the metric is excluded for that row.

Why can the mean pass while P85 fails?

A few slow incidents can sit in the tail while most rows are quick. The mean can stay under target, but P85 can still show that a meaningful minority is too slow.
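A small numeric illustration of that split, with invented durations and a 15 min target:

```javascript
// Five acknowledgement durations in minutes: four fast, one slow tail.
const d = [5, 6, 7, 8, 40];
const mean = d.reduce((sum, x) => sum + x, 0) / d.length;                 // 13.2 min, under target
const tail = [...d].sort((a, b) => a - b)[Math.ceil(0.85 * d.length) - 1]; // 40 min, over target
```

The mean passes a 15 min target while P85 lands on the 40 min outlier, so the brief would flag the tail.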

What happens when timestamps are out of order?

The affected duration is excluded and a warning is shown. For example, an acknowledged time before detected time cannot produce a valid MTTA.

What does MTTC mean in this calculator?

MTTC means time from detected or opened time to contained, mitigated, or stabilized time. It is separate from final resolution so teams can see whether impact was stopped before all cleanup was complete.

Can these numbers be compared across teams?

Only when the teams use similar timestamp definitions, severity rules, and incident inclusion rules. Otherwise, the comparison may reflect process differences more than response performance.

Glossary:

MTTx
A family name for mean-time metrics such as MTTD, MTTA, MTTC, and MTTR.
MTTD
Mean time to detect, measured from incident start to detection or opening when both timestamps exist.
MTTA
Mean time to acknowledge, measured from detected or opened time to responder acknowledgement.
MTTC
Mean time to contain, measured from detected or opened time to containment, mitigation, or stabilization.
MTTR
Mean time to resolve, measured from detected or opened time to recovery, closure, or resolution.
P85
The 85th percentile duration, used here as a tail signal for slower incidents.
Base timestamp
The detected or opened timestamp used as the start point for MTTA, MTTC, and MTTR, with started time used as a fallback when needed.
