Introduction:

Backup restore time is the elapsed time between starting a recovery effort and reaching a usable, verified service state. The data copy is only one part of that window. A real restore can also include waiting for archive media, locating a recovery point, mounting storage, rehydrating compressed data, validating application behavior, and completing a final cutover.

That timing matters because Recovery Time Objective, or RTO, is a promise about how long a workload can remain unavailable before the business impact becomes unacceptable. A database restore that moves data in two hours may still miss a four-hour RTO if catalog work, cloud retrieval, validation, and operator approval consume the remaining time. The useful planning question is whether the whole recovery path fits, not whether the storage path looks fast in isolation.

[Figure: Restore window timeline with archive wait, catalog setup, transfer, staging, validation, cutover, and an RTO target]

Restore estimates are strongest when the inputs come from previous restore tests, not from interface line rates or marketing throughput. Backup appliances, target storage, dedupe rehydration, cloud archive retrieval, and application validation can each add delay. Parallel streams help only when the source, network, restore appliance, and target storage can all keep up.

Compression also needs careful reading. A 2.0 compression ratio means the restore reads or transfers about half the logical data size. If the backup catalog already reports the transferred or stored size, using that number with a compression ratio above 1.0 would count the reduction twice. A conservative estimate keeps unit labels, size meaning, and tested throughput consistent from one recovery plan to the next.
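The double-counting risk is easiest to see with numbers. This is a minimal sketch with hypothetical sizes, not values from any particular catalog:

```python
logical_tib = 10.0   # catalog's logical (pre-compression) size, hypothetical
ratio = 2.0          # logical-to-transferred compression ratio

transferred_tib = logical_tib / ratio        # 5.0 TiB actually moved on restore
# Mistake: feeding the already-reduced transferred size back in with the
# same ratio halves the estimate a second time.
double_counted_tib = transferred_tib / ratio  # 2.5 TiB, half the real work
```

If the catalog already reports the 5.0 TiB transferred size, enter that size with a compression ratio of 1.0 instead.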

Technical Details:

A restore window combines variable data movement with fixed recovery phases. Data movement depends on transferred bytes and aggregate throughput. Fixed phases include archive retrieval, catalog and mount work, validation, and cutover. Staging or rehydration behaves like a percentage of transfer time, so it rises when the transfer takes longer and falls when sustained throughput improves.

RTO comparison is a total elapsed-time check. If fixed phases already exceed the target, a faster data path cannot make the plan fit without also reducing archive delay, catalog work, validation, or cutover. If fixed phases leave enough time for data movement, the required aggregate rate can be calculated from the remaining budget.

Formula Core:

The calculation starts by converting the entered backup size to bytes, reducing it by the compression ratio, then dividing by aggregate restore throughput. Decimal GB/TB and binary GiB/TiB are treated as different byte counts.

B_logical = entered size converted to bytes
B_transferred = B_logical / C
R_aggregate = R_stream × N
T_transfer = B_transferred / (R_aggregate × 1,000,000 / 8)
T_total = T_archive + T_catalog + T_transfer + T_staging + T_validation + T_cutover

Here, C is the compression ratio and N is the number of whole parallel streams. Aggregate throughput is expressed in megabits per second, so the transfer formula converts megabits to bits and then bits to bytes. Staging time is the entered staging-overhead percentage applied to the transfer time.
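The chain above can be sketched in Python. This is an illustrative model of the formulas, not the calculator's actual code; the function name and argument layout are assumptions:

```python
def restore_window_hours(
    logical_bytes,         # B_logical: entered size already converted to bytes
    compression_ratio,     # C, at least 1.0
    stream_mbps,           # R_stream: sustained restore rate per stream, Mbps
    streams,               # N: parallel streams
    fixed_hours,           # T_archive + T_catalog + T_validation + T_cutover
    staging_fraction=0.0,  # S: staging overhead as a fraction of transfer time
):
    transferred_bytes = logical_bytes / compression_ratio
    aggregate_mbps = stream_mbps * int(streams)        # round down to whole streams
    bytes_per_second = aggregate_mbps * 1_000_000 / 8  # Mbps -> bytes per second
    transfer_hours = transferred_bytes / bytes_per_second / 3600
    staging_hours = transfer_hours * staging_fraction
    return fixed_hours + transfer_hours + staging_hours

# First worked example below: 4.5 TiB logical, ratio 1.8, 2 x 2500 Mbps,
# 20 min catalog + 45 min validation of fixed time -> about 2.31 hours.
total = restore_window_hours(4.5 * 2**40, 1.8, 2500, 2, 65 / 60)
```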

Restore model inputs and how they affect elapsed time
Quantity | Meaning | Effect on result
Logical backup size | Protected data size before compression, entered as GB, TB, GiB, or TiB. | Larger size increases transferred bytes and transfer time.
Compression ratio | Logical-to-transferred data ratio. A value of 1.0 means no reduction. | Higher ratio lowers transferred bytes.
Restore throughput per stream | Sustained restore rate for one stream, measured in Mbps. | Higher rate lowers transfer time when transfer is the main delay.
Parallel streams | Whole concurrent streams that can run without saturating shared limits. | Multiplies aggregate throughput after rounding down to whole streams.
Fixed recovery phases | Archive retrieval, catalog and mount, validation, and cutover allowances. | Add elapsed time even if throughput improves.
Staging overhead | Extra rehydration, unpacking, or copy work as a percentage of transfer time. | Scales with the transfer duration.
RTO target | Recovery-time objective in hours. A value of 0 turns off the fit check. | Creates the RTO fit, spare time, shortfall, and required-rate outputs.

The required aggregate rate is only meaningful when the RTO leaves time for data movement. The fixed-time budget is subtracted first, then staging overhead is included because every transfer second may create extra staging seconds.

R_required = (B_transferred × 8) / (((T_RTO − T_fixed) / (1 + S)) × 1,000,000)

When Tfixed is already greater than the RTO, the required rate becomes unattainable from throughput alone. When the current total is inside the target, the same formula still gives a useful ceiling check: it shows the aggregate rate that would be needed to barely meet the entered RTO.
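The same budget logic can be written as a small helper. This is a sketch under the unit conventions above; the function name is illustrative:

```python
def required_aggregate_mbps(transferred_bytes, rto_hours, fixed_hours,
                            staging_fraction=0.0):
    """Aggregate Mbps needed so transfer plus staging fits inside the RTO.

    Returns None when fixed phases alone already exceed the target,
    meaning no throughput increase can make the plan fit.
    """
    budget_hours = (rto_hours - fixed_hours) / (1 + staging_fraction)
    if budget_hours <= 0:
        return None  # unattainable from throughput alone
    bits_to_move = transferred_bytes * 8
    return bits_to_move / (budget_hours * 3600 * 1_000_000)

# Second worked example below: 6 TiB transferred, 8 h RTO,
# 3.75 h of fixed phases, 20% staging -> about 4139 Mbps.
rate = required_aggregate_mbps(6 * 2**40, 8, 3.75, 0.20)
```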

Everyday Use & Decision Guide:

Start with one workload or restore runbook, not a whole estate. Enter the logical backup size from the backup catalog, choose the matching GB, TB, GiB, or TiB unit, and set Compression ratio to 1.0 if the catalog already reports the amount that will be read or transferred. Then use a sustained restore-test rate for Restore throughput per stream rather than a network port speed.

Parallel streams should describe streams that can really run together. Two streams at 2500 Mbps each are useful only if the backup appliance, network, and target storage can deliver about 5000 Mbps in aggregate during a restore. If the target disk group is the limit, raising stream count can make the estimate look better without changing the actual recovery plan.

  • Use Catalog and mount time for recovery-point selection, target provisioning, and job startup work before bulk data moves.
  • Use Archive retrieval delay when cold or archive storage must make data available before restore starts.
  • Add Staging overhead when rehydration, unpacking, temporary landing, or cross-tier copy extends the data-movement work.
  • Use Validation time for integrity checks, smoke tests, and operator verification before the service is declared restored.
  • Use Cutover allowance for endpoint changes, restarts, DNS work, or approval handoff when that is not already inside validation.

The summary badges are the fastest sanity check. RTO fits means the modeled total is at or below the entered target. RTO risk means the restore window exceeds it. The needs ... Mbps badge is most useful when the shortfall is transfer-bound. If the badge says fixed phases exceed RTO, a faster network is not enough because archive, catalog, validation, or cutover time already consumes the target.

Use Restore Phases to see where the hours are going, then compare RTO Checkpoints with the runbook. Throughput Scenarios and Throughput Sensitivity Curve are helpful when deciding whether faster media, more streams, or pre-staging would materially reduce the total. The result is a planning estimate, so a close pass should still be tested with a restore drill before it becomes a service commitment.

Step-by-Step Guide:

Use the calculator to build a restore-window estimate, then compare the modeled checkpoints with the recovery plan.

  1. Enter Logical backup size and choose the exact unit used by the backup catalog. The summary should show a size badge that matches the intended logical dataset after unit conversion.
  2. Set Compression ratio. Use 1.0 for no reduction or for a catalog value that already reflects transferred size. The summary line should show transferred bytes lower than logical bytes only when the ratio is above 1.0.
  3. Enter Restore throughput per stream and Parallel streams. The transfer badge should show aggregate Mbps as per-stream throughput multiplied by whole streams.
  4. Add Catalog and mount time, Validation time, and RTO target. If RTO is 0, the result should say the target is off rather than pass or fail the plan.
  5. Open Advanced when the runbook includes a Workload name, Archive retrieval delay, Staging overhead, or Cutover allowance. Restore Phases should then show those items as separate rows with elapsed time.
  6. If a field rejects a value or the result looks impossible, check the accepted ranges first: compression ratio must be at least 1.0, streams must be at least 1, throughput must be positive, and staging overhead is capped at 300%.
  7. Read RTO Checkpoints before copying the result. Required aggregate rate, 2x throughput check, and Throughput sanity show whether the shortfall is mainly transfer speed or fixed recovery work.

Interpreting Results:

The headline Restore window is the modeled end-to-end duration. Read it with RTO fits or RTO risk, not by itself. A 2.31-hour restore is comfortable against a 4-hour RTO, but the same result may be unacceptable for a one-hour application objective.

How to interpret backup restore time result states
Result cue | Meaning | Check next
RTO fits | Total restore time is less than or equal to the entered RTO target. | Keep restore-test evidence with the runbook, especially if spare time is small.
RTO risk | Total restore time is greater than the entered RTO target. | Use RTO margin and Required aggregate rate to size the shortfall.
fixed phases exceed RTO | Archive, catalog, validation, and cutover time already exceed the target before transfer speed can help. | Shorten fixed recovery work or change the recovery design before adding throughput.
transfer-bound | The 2x throughput check shows that faster data movement materially reduces the window. | Test more streams, faster media, or pre-staged data with the same validation requirements.
fixed-time bound | Doubling throughput saves little because fixed phases dominate the total. | Review archive retrieval, catalog setup, validation, and cutover allowances.

Do not treat a passing result as proof that recovery will work during an incident. The model does not verify backup integrity, credentials, target capacity, application consistency, operator access, or disaster-time congestion. Use Restore Phases to check the timing story, then validate the important path with a real restore test.

Throughput Scenarios is useful for what-if planning, but it should not replace measured restore throughput. If the sensitivity curve says 2x throughput would save only a few minutes, spend the next review on fixed work. If it saves hours, the next evidence to collect is a tested aggregate restore rate.

Worked Examples:

Production database restore inside target

A 4.5 TiB logical restore with a 1.8 compression ratio transfers about 2.50 TiB. At 2500 Mbps per stream with two streams, the aggregate rate is 5000 Mbps and bulk transfer takes about 1.22 hours. With 20 minutes of catalog work and 45 minutes of validation, Restore window is about 2.31 hours. Against a 4-hour RTO target, the summary shows RTO fits with about 1.69 hours of spare time.

Archive delay that pushes a plan over RTO

A 12 TiB restore at a 2.0 compression ratio transfers about 6.00 TiB. Three 1200 Mbps streams give 3600 Mbps aggregate throughput, so the transfer takes about 4.07 hours. Add a 2-hour Archive retrieval delay, 30 minutes of catalog work, 20% Staging overhead, 60 minutes of validation, and 15 minutes of cutover, and Restore window becomes about 8.64 hours. An 8-hour target becomes RTO risk, and Required aggregate rate is about 4139 Mbps.
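These numbers can be rechecked with plain arithmetic that mirrors the Formula Core equations:

```python
TIB = 2**40                            # binary tebibyte in bytes

transferred = 12 * TIB / 2.0           # 6.00 TiB after the 2.0 compression ratio
transfer_h = transferred / (3600 * 1_000_000 / 8) / 3600  # 3 x 1200 Mbps aggregate
staging_h = 0.20 * transfer_h          # 20% staging overhead scales with transfer
fixed_h = 2 + 0.5 + 1 + 0.25           # archive, catalog, validation, cutover
total_h = fixed_h + transfer_h + staging_h  # about 8.64 hours
```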

Transfer-bound shortfall

A 20 TiB logical backup with only a 1.2 compression ratio leaves about 16.67 TiB to transfer. Two 800 Mbps streams create 1600 Mbps aggregate throughput, which stretches bulk transfer to about 1.06 days before staging, catalog, validation, or cutover. The total reaches about 1.32 days against a 6-hour RTO target. RTO Checkpoints points to a required aggregate rate near 13,380 Mbps, so this case needs a different recovery design or much faster tested data movement.

Fixed phases that throughput cannot solve

A small 1 TiB restore with a 2.0 compression ratio and four 5000 Mbps streams transfers about 512 GiB in only 3.7 minutes. If the plan also has a 2-hour archive wait, 30 minutes of catalog work, and 30 minutes of validation, Restore window is still about 3.06 hours. Against a 1-hour RTO, Required aggregate rate becomes unattainable from rate alone because fixed phases exceed the target before transfer time matters.
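The last two examples can be rechecked the same way, again following the Formula Core equations:

```python
TIB = 2**40  # binary tebibyte in bytes

# Transfer-bound case: 20 TiB logical, 1.2 ratio, 2 x 800 Mbps = 1600 Mbps aggregate
transferred = 20 * TIB / 1.2                                  # about 16.67 TiB
transfer_days = transferred / (1600 * 1_000_000 / 8) / 86400  # about 1.06 days

# Fixed-phase case: 1 TiB logical, 2.0 ratio, 4 x 5000 Mbps = 20000 Mbps aggregate
transferred2 = 1 * TIB / 2.0                                  # 512 GiB
transfer_min = transferred2 / (20000 * 1_000_000 / 8) / 60    # about 3.7 minutes
total_hours = 2 + 0.5 + 0.5 + transfer_min / 60               # about 3.06 hours
```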

FAQ:

What restore throughput should I enter?

Use measured sustained restore throughput per stream from backup logs, restore tests, or storage counters. A 10 Gbps interface is not the same as a tested restore rate, especially when the appliance, dedupe store, target storage, or validation process is the slow point.

Should backup size be logical size or stored size?

Use logical backup size when you also know the logical-to-transferred Compression ratio. If your catalog already reports transferred or stored size, enter that size and set compression ratio to 1.0 so the estimate does not reduce the data twice.

Why does adding streams sometimes look too optimistic?

Parallel streams multiplies the per-stream rate after rounding down to whole streams. That is useful only when the restore system can sustain those streams together. If target I/O or appliance slots are capped, enter the rate and stream count that matched a real test.

What does RTO target off mean?

An RTO target of 0 skips the pass or fail comparison. The calculator still reports Restore window, phase durations, throughput scenarios, and JSON output, but it will not label the plan as inside or outside a recovery objective.

Does the calculator send my restore details to a server?

The restore math runs in the browser and no server-side helper is used for the calculation. Treat workload names and shared URLs carefully, because any values placed in a URL can be exposed wherever that URL is copied, logged, or shared.

Glossary:

Recovery Time Objective
The maximum planned time a workload can remain in recovery before the outage harms mission or business needs.
Logical backup size
The protected data size before compression or transfer reduction is applied.
Compression ratio
The ratio between logical bytes and the bytes that need to be transferred or read during restore.
Aggregate throughput
The combined restore rate after per-stream throughput is multiplied by the number of usable parallel streams.
Staging overhead
Extra recovery work, such as rehydration or temporary copy, modeled as a percentage of transfer time.
Cutover allowance
Final handoff time for service restart, endpoint changes, DNS work, or approval after validation.
