Allowed Downtime Budget
Introduction

Availability targets are easier to manage when they stop being abstract percentages and become actual minutes or seconds. A reliability team deciding whether to schedule maintenance, ship a risky change, or explain an outage to stakeholders usually needs that translation first, because the difference between 99.9% and 99.99% is not a rounding detail. It is the difference between roughly three quarters of an hour each month and only a few minutes.

This calculator turns a target availability percentage and a time window into an allowed downtime budget, then compares that budget with the downtime you record for the same period. It reports how much budget was available, how much has been consumed, how much remains, how far over budget you are when the target has been missed, and what achieved availability looks like under the policy settings you chose.

The package is especially useful when planned maintenance is not treated the same way as unplanned downtime. Some teams exempt maintenance from the budget entirely, some count it in full, and some apply only part of it. The maintenance-weighting control lets you model all three cases, so the result can match the operational rule you use for internal SLO tracking or for a customer-facing commitment review.

It also adds practical review aids instead of stopping at one percentage. The metrics view gives a line-by-line summary, the lookup view converts the same target into preset windows such as 1 hour, 1 week, 30 days, 90 days, and 1 year, the budget chart shows burned versus remaining budget, the reliability table adds quick interpretation, and the JSON view and export actions make it easier to move the same state into incident notes or service review documents.

The result is still a policy model, not a legal reading of a contract. The arithmetic is exact for the values you enter, but the meaning of downtime, the treatment of scheduled work, and the choice of calendar versus rolling window still come from your service policy. If those definitions change between reviews, a month-to-month comparison can look better or worse without the service itself changing.

Everyday Use & Decision Guide

Start with the target and the period you actually report against. The package defaults to a 30-day window and also offers presets for common uptime commitments such as 99.0%, 99.5%, 99.9%, 99.95%, and 99.99%. That matters because the same outage can be harmless under one target and a serious miss under another. A six-minute interruption feels small in isolation, yet it sits comfortably inside some weekly budgets and is already a serious overrun under tighter ones.

Next, enter the downtime you want charged to the period. Unplanned downtime is always counted directly. Planned maintenance is entered separately so you can decide whether it contributes 0%, 100%, or something in between to the error-budget burn. That makes the calculator useful in organizations where maintenance windows are operationally visible but not always scored the same way in reliability reviews.

If you know how many distinct incidents occurred, add that count as well. The calculator then derives two review metrics from the current window: mean time to repair (MTTR) for unplanned downtime only, and mean time between incidents for the whole period. Those numbers do not change the budget, but they help explain whether a poor month came from one long failure or many short interruptions.

The most useful reading pattern is simple. Look first at allowed downtime and remaining budget, then at error-budget burn, then at the reliability signals table. If the remaining budget is small but still positive, you are in a caution zone where another incident could end the period below target. If the over-budget field is nonzero, the current window has already missed the entered commitment even if the raw outage minutes do not seem dramatic.

This makes the tool a good fit for monthly service reviews, pre-maintenance risk checks, post-incident writeups, and side-by-side comparisons of stricter versus looser targets. It is less useful as a forecasting engine because it does not predict future traffic, future error rates, or rolling alert thresholds. It tells you how the selected window looks right now under the rules you entered.

Technical Details

In service reliability work, a service level objective (SLO) sets a target and an error budget is the gap between perfect availability and that target for the compliance period. This calculator uses that same structure even though the main field is labeled as an SLA target. In other words, the math works for any availability commitment, but whether the number is an internal operating target or a contractual promise is up to your process rather than to the calculator itself.

The core calculation is straightforward. The selected period is converted to seconds. Allowed downtime is the period length multiplied by (100 - target) / 100. Effective downtime is the sum of unplanned downtime and the maintenance duration after applying the maintenance-impact percentage. Error-budget burn is effective downtime divided by allowed downtime. Achieved availability is the remaining share of the period after subtracting that same effective downtime.
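The steps above can be sketched in a few lines. This is an illustrative reimplementation of the described arithmetic, not the package's actual API; the function and parameter names are assumptions.

```python
def budget_metrics(target_pct, period_s, unplanned_s, maint_s=0.0, maint_weight=1.0):
    """Sketch of the core budget arithmetic (names are illustrative)."""
    allowed_s = period_s * (100.0 - target_pct) / 100.0          # error budget in seconds
    effective_s = unplanned_s + maint_s * maint_weight           # weighted maintenance counts here
    burn = effective_s / allowed_s                               # 1.0 means budget fully consumed
    achieved_pct = (period_s - effective_s) / period_s * 100.0   # scored, not raw, availability
    return allowed_s, effective_s, burn, achieved_pct

# 99.9% over 30 days, 18 min unplanned, 12 min maintenance counted at 50%
allowed, effective, burn, achieved = budget_metrics(
    99.9, 30 * 24 * 3600, 18 * 60, 12 * 60, 0.5
)
# allowed ≈ 2592 s (43 min 12 s), effective = 1440 s, burn ≈ 0.556
```

Note that `maint_weight` is applied before `achieved_pct` is computed, which is exactly the policy-following behavior discussed next.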

One detail matters more than it first appears: the achieved-availability figure follows the maintenance weighting you choose. If you mark maintenance as 50% counted toward burn, the achieved percentage is also based on that weighted maintenance time rather than on the full wall-clock maintenance duration. That makes the result consistent with the selected policy model, but it also means the figure is a scored availability result, not a raw observation of literal system interruption.

Incident analytics are intentionally limited and easy to audit. MTTR is computed as unplanned downtime divided by incident count. Mean time between incidents is period seconds divided by incident count. Planned maintenance does not affect MTTR, and no incident count means both metrics remain unavailable. That matches how many teams use these terms in lightweight review notes, but it should not be mistaken for a full failure-analysis model with partial restorations or per-service weighting.
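Those two incident metrics can be written down just as plainly. Again this is a sketch under the definitions stated above, not the package's own code:

```python
def incident_signals(unplanned_s, period_s, incident_count):
    """Illustrative MTTR and mean-time-between-incidents, as defined above."""
    if incident_count <= 0:
        return None, None                      # no count -> both metrics unavailable
    mttr_s = unplanned_s / incident_count      # unplanned downtime only; maintenance excluded
    mtbi_s = period_s / incident_count         # incident spacing across the whole window
    return mttr_s, mtbi_s

# Three incidents totalling 18 minutes of unplanned downtime in a 30-day window
mttr, mtbi = incident_signals(18 * 60, 30 * 24 * 3600, 3)
# mttr = 360.0 s (6 minutes); mtbi = 864000.0 s (10 days)
```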

Derived value How the package computes it Why it matters
Allowed downtime Period seconds × (100 − target) ÷ 100 Shows the total tolerance for the selected window.
Effective downtime Unplanned downtime + weighted maintenance Defines what actually burns the budget in this model.
Error-budget burn Effective downtime ÷ allowed downtime Shows how aggressively the window is consuming tolerance.
Achieved availability (Period − effective downtime) ÷ period Shows the scored availability result for this period.
MTTR Unplanned downtime ÷ incident count Separates one long failure from several short ones.
Mean time between incidents Period seconds ÷ incident count Shows incident spacing across the whole window.

The output panes are built from that same state. The metrics table lists the target, period, allowed downtime, planned maintenance, weighted maintenance, effective downtime, achieved availability, burn, remaining budget, over-budget amount, incident count, MTTR, and mean time between incidents. The lookup table recalculates allowed downtime for fixed preset windows. The chart compresses the budget into burned in-budget time, remaining budget, and an overrun slice when the target is already missed.

The package is also fully export-oriented. It can copy or download the metrics table as CSV, export metrics and lookup tables as DOCX, download the budget chart as PNG, WebP, JPEG, or CSV, and copy or download the JSON payload that includes raw inputs, totals in seconds, readable summaries, and reliability signals. That makes the tool suitable for both quick decisions and recordkeeping.

Step-by-Step Guide

  1. Set the availability target directly or choose a preset that matches the commitment you want to evaluate.
  2. Choose the period length and unit so the window matches the way your team or contract measures compliance.
  3. Enter unplanned downtime, then enter planned maintenance and decide how much of that maintenance should count toward burn.
  4. Add incident count if you want MTTR and mean time between incidents included in the output.
  5. Read the summary, then use the metrics, lookup, chart, reliability, and JSON views or exports for the form that best fits your review workflow.

Interpreting Results

A positive remaining-budget value means the selected window still satisfies the entered target. A nonzero over-budget value means the window has already crossed the line, even if the achieved percentage looks close to the goal. That distinction matters because a target miss is binary at the threshold, while the achieved percentage still describes how far below or above the goal the window landed.

Error-budget burn gives the fastest sense of pressure. Around 50% means you have used half of the available tolerance for the period. Near 100% means there is almost no room left. Above 100% means the service has consumed more downtime than the target allows. The reliability table reinforces that reading by showing whether availability is above or below target, how maintenance was weighted, and what incident density looked like in the same window.

The lookup pane is best treated as a translation aid, not as an alternative evaluation of the same incident set. It answers a different question: if the target were applied to 1 hour, 1 day, 1 week, 30 days, 90 days, or 1 year, how much downtime would be allowed? That is helpful when explaining why “four nines” feels generous on paper but becomes very tight in the periods people actually operate against.
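That translation is just the allowed-downtime formula applied to fixed windows. A minimal sketch, assuming the preset labels listed above:

```python
# Preset windows matching the lookup pane described in the text (labels assumed)
PRESET_SECONDS = {
    "1 hour": 3600, "1 day": 86400, "1 week": 7 * 86400,
    "30 days": 30 * 86400, "90 days": 90 * 86400, "1 year": 365 * 86400,
}

def allowed_seconds(target_pct, period_s):
    return period_s * (100.0 - target_pct) / 100.0

# "Four nines" translated into each preset window
lookup = {label: allowed_seconds(99.99, s) for label, s in PRESET_SECONDS.items()}
# e.g. 30 days allows about 259 s (4 min 19 s); 1 year about 3154 s (~53 min)
```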

Worked Examples

Suppose a service has a 99.9% target over 30 days. The allowed downtime is 43 minutes and 12 seconds. If the service had 18 minutes of unplanned downtime, 12 minutes of maintenance, and the review policy counts maintenance at 50%, the effective downtime becomes 24 minutes. The period is still compliant, the remaining budget is 19 minutes and 12 seconds, the burn is a little over half of the budget, and a three-incident month yields an MTTR of 6 minutes.

Now consider a stricter weekly target of 99.99%. For 7 days, the allowed downtime rounds to about 1 minute in this package. A single 6-minute outage with no maintenance weighting already pushes the window well over budget. That example is useful because it shows why percentage targets alone can hide operational reality: the second target sounds only slightly stricter, but it cuts the available downtime to a fraction of the first case.
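Both worked examples can be checked directly against the formulas from the Technical Details section. This is arithmetic only, using no package internals:

```python
# Example 1: 99.9% over 30 days, 18 min unplanned, 12 min maintenance at 50%
period_30d = 30 * 24 * 3600
allowed_30d = period_30d * (100 - 99.9) / 100       # ~2592 s = 43 min 12 s
effective = 18 * 60 + 12 * 60 * 0.5                 # weighted maintenance -> 1440 s = 24 min
remaining = allowed_30d - effective                 # ~1152 s = 19 min 12 s, still compliant

# Example 2: 99.99% over 7 days, one 6-minute outage, no maintenance
period_7d = 7 * 24 * 3600
allowed_7d = period_7d * (100 - 99.99) / 100        # ~60.48 s, about 1 minute
burn_7d = (6 * 60) / allowed_7d                     # roughly 6x the weekly budget
```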

FAQ

Does incident count change the error budget?

No. Incident count affects only MTTR and mean time between incidents. The budget itself depends on the target, the selected period, unplanned downtime, and the weighted share of planned maintenance.

Should planned maintenance always count?

Not necessarily. Many organizations treat maintenance differently depending on whether the work was announced, exempted by policy, or still visible to users. The maintenance-impact setting exists precisely because there is no universal rule.

Is the achieved-availability figure raw uptime?

It is a scored result for the model you selected. Because maintenance can be partially weighted, the achieved percentage can differ from a simple raw ratio of total wall-clock interruption to total period length.

What if I need a contractual answer?

Use the calculator to test the arithmetic, then compare the inputs with the actual service commitment language. Definitions for excluded events, measurement boundaries, and compliance periods can change the final interpretation even when the time math is correct.

Glossary

Term Meaning in this package
Availability target The percentage goal the selected period is measured against.
Error budget The share of the period that can be unavailable before missing the target.
Burn How much of that budget the entered downtime has consumed.
Weighted maintenance The maintenance time after applying the chosen maintenance-impact percentage.
MTTR Mean time to repair, computed here from unplanned downtime only.
Mean time between incidents Period length divided by incident count for the current window.