{{ result.summaryTitle }}
{{ result.primaryDisplay }}
{{ result.secondaryText }}
{{ result.statusText }} {{ badge.text }}
Example: 12 TiB or 0.5 PB.
Typical ranges: 1-5% steady, 10%+ for churn-heavy workloads.
%
Use incremental for most jobs; use differential when each copy grows from the last full.
Enter a whole number of full backups to keep.
copies
Enter days, for example 14, 30, or 45.
days
Use 7 for weekly fulls, 14 for biweekly.
days
Count only tiers that store their own backup data.
Start with a profile, then tune individual values.
Longer horizons expose growth-driven peaks.
Cloud gateway adds staged-upload reserve.
Match the method your backup platform uses.
Use observed Mbps, not nominal port speed.
Mbps
Example: 8 for an overnight job window.
hours
Use supplier plus internal change lead time.
days
Leave 0 when sizing a new repository.
Use percent per year, for example 20.
%
Leave 0 unless you know a spike rate.
%
Enter x:1, for example 1.7.
x:1
Enter x:1, for example 1.6.
x:1
Use percent of stored backup data.
%
Common planning margin: 20-30%.
%
Percent of one full copy to reserve.
% of full
Leave none if raw capacity is handled elsewhere.
80% keeps 20% operational slack.
%
Enter 0 when locks do not affect storage.
days
Enter count of monthly fulls to keep.
Enter count of yearly fulls to keep.
Enter 0 if no restore-test cadence is planned.
days
Metric Value Copy
{{ row.label }} {{ row.value }}
Layer Value Note Copy
{{ row.label }} {{ row.value }} {{ row.note }}
Checkpoint Primary Usable Build Reserve Stage Reserve Composite Usable Gap vs Current Composite Raw Copy
{{ row.checkpoint }} {{ row.primaryFootprint }} {{ row.buildReserve }} {{ row.stageReserve }} {{ row.recommendedUsable }} {{ row.gapVsCurrent }} {{ row.recommendedRaw }}
Lever Usable Delta Raw Delta Effect Tradeoff Copy
{{ row.label }} {{ row.usableDelta }} {{ row.rawDelta }} {{ row.effect }} {{ row.tradeoff }}
Current settings are already tight on the modeled levers, so the next gains will likely need a different retention target, a different repository architecture, or a broader copy-policy change.
Priority Focus Trigger Action Why Copy
{{ row.priority }} {{ row.focus }} {{ row.trigger }} {{ row.action }} {{ row.reason }}
Field Value Copy
{{ row.label }} {{ row.value }}

      

Backup capacity planning starts with a deceptively plain question: how much repository space is needed once retention, growth, full-backup cycles, and offsite copies all compete for it. A 12 TiB dataset is not stored once and then forgotten. It changes every day, ages through full and incremental restore points, may be copied to another tier, and often needs extra room for synthetic fulls, immutability, staging, and restore tests.

The useful result is not only a retained-data total. Operations teams need a capacity target that includes working reserves, a raw-storage estimate after parity or replication, and a warning when the current repository budget is already inside the procurement window. This calculator models those pieces together so a backup design can be reviewed before the repository is full.

Flow from protected dataset through retained backups, copy tiers, reserves, and raw storage budget

The estimate is a planning model, not a replacement for vendor repository telemetry. Compression, deduplication, changed-block tracking, immutability locks, and synthetic-full behavior vary by product. Treat the output as a sizing baseline, then compare it with backup-system reports after several real job cycles.

Technical Details:

The calculation converts the protected dataset to bytes, then simulates a daily timeline across the selected forecast horizon plus procurement lead time. Each day can create a full copy, an incremental copy, or a differential copy depending on the selected change-tracking mode and full-backup interval.

Incremental mode stores each day as the current day's changed data. Differential mode grows the daily copy from the last full backup, so it can consume much more repository space between full resets. Monthly and yearly archive fulls are modeled separately from the standard backup chain.
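The difference between the two modes can be sketched as a simple per-day size function. This is a minimal illustration of the behavior described above, not the calculator's internal code; the function and argument names are assumptions.

```python
def daily_copy_sizes(full, change_rate, days, mode):
    """Per-day backup copy sizes for one full-backup cycle.

    full: size of one full backup (any unit); change_rate: daily change
    as a fraction; days: days between fulls; mode: "incremental" or
    "differential".
    """
    sizes = []
    for day in range(1, days + 1):
        if mode == "incremental":
            # Each day stores only that day's changed data.
            sizes.append(full * change_rate)
        else:
            # Differential: changes accumulate since the last full,
            # so the daily copy grows until the next full resets it.
            sizes.append(full * change_rate * day)
    return sizes

# 10 TiB dataset, 3% daily change, 6 days between fulls:
inc = daily_copy_sizes(10.0, 0.03, 6, "incremental")
diff = daily_copy_sizes(10.0, 0.03, 6, "differential")
# Incremental stays flat at 0.3 TiB/day; the differential chain ends
# at 1.8 TiB and consumes far more repository space in total.
```

The growing differential tail is why the same retention policy can need a much larger repository when differential mode is selected.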

Backup capacity model components
Component | How it affects sizing
Reduction ratio | Compression and dedupe divide logical full and delta sizes before repository overhead is added.
Retention and immutability | Restore points remain counted while either the retention rule or the lock period keeps them from expiring.
Copy tiers | The retained single-copy footprint is multiplied for local, offsite, immutable, or multi-region copies.
Working reserve | Active fulls and some synthetic fulls can require temporary workspace beyond the retained restore points.
Raw encoding | Fill ceiling and parity or replication factors convert usable capacity into a raw-storage target.

The central capacity stack can be read as retained backup bytes plus operational reserves, divided by the usable fill ceiling, then multiplied by the physical encoding factor.

Raw target = (retained copies + overhead + headroom + scratch + staging) ÷ fill ceiling × encoding factor
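The stack can be expressed as a short function. This is an illustrative sketch of the relationship described above, with assumed names; all byte quantities share one unit, and the ratios are unitless.

```python
def raw_target(retained, overhead, headroom, scratch, staging,
               fill_ceiling, encoding_factor):
    """Convert a usable capacity stack into a raw-storage target."""
    usable = retained + overhead + headroom + scratch + staging
    # Divide by the fill ceiling (e.g. 0.80 keeps 20% operational
    # slack), then apply the parity/replication encoding factor.
    return usable / fill_ceiling * encoding_factor

# 60 units of retained data plus reserves, 80% fill ceiling, and a
# 1.5x erasure-coding overhead:
target = raw_target(50, 4, 3, 2, 1, 0.80, 1.5)  # -> 112.5 raw
```

Note how the fill ceiling and encoding factor compound: 60 usable units become 112.5 raw units, nearly double, before a single byte of parity data is visible to the backup application.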

Throughput checks compare the largest forecast full backup with the entered backup link and backup window. A plan can have enough long-term storage but still fail operationally if a periodic full cannot land inside the allowed job window.
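That window check reduces to a unit conversion. The sketch below assumes a full backup measured in TiB and an observed link speed in Mbps, matching the input fields; the function name is illustrative.

```python
def full_fits_window(full_tib, link_mbps, window_hours):
    """True when the largest full can land inside the job window."""
    bits = full_tib * (1024 ** 4) * 8            # TiB -> bits
    seconds_needed = bits / (link_mbps * 1e6)    # 1 Mbps = 1e6 bits/s
    return seconds_needed <= window_hours * 3600

# A 12 TiB full over a 2,500 Mbps link with an 8-hour window:
full_fits_window(12, 2500, 8)  # -> False: the full needs about
                               #    11.7 hours at that speed
```

This is why the guidance says to use observed Mbps rather than nominal port speed: a 10 GbE port that sustains only 2,500 Mbps of real backup traffic fails the same window a naive line-rate estimate would pass.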

Everyday Use & Decision Guide:

Use the balanced profile for a first pass, then change the fields that are known in your environment. The most important early inputs are protected dataset size, daily change rate, retention, full interval, reduction ratios, and copy tiers. If those are uncertain, run a cautious case with higher change, lower dedupe, and more headroom.

  • Use incremental mode when the backup system stores only changed blocks since the last successful job.
  • Use differential mode when each daily restore point grows from the last full backup.
  • Set immutability days to the lock period that prevents deletion, even if normal retention is shorter.
  • Set procurement lead time to the time it takes to approve, buy, rack, and present new capacity.
  • Use the current usable budget field when you want a direct runway warning against deployed repository space.

The output is most useful when compared across scenarios. For example, lowering the full-backup interval may improve restore-chain length but raise temporary reserve needs. Increasing monthly archives may be cheaper than extending daily retention if long-term restore points do not need daily granularity.

Step-by-Step Guide:

  1. Enter the protected dataset and pick TB, TiB, PB, or PiB carefully.
  2. Set daily change, retention, full interval, and copy tiers to match the backup policy.
  3. Open Advanced for growth, compression, dedupe, headroom, immutability, archive fulls, and raw-storage encoding.
  4. Review Capacity Target for the headline sizing number, then Reserve Layers for what is driving it.
  5. Use Capacity Levers and the action runbook to decide whether policy, reduction, growth, or procurement timing needs attention.

Interpreting Results:

The primary figure is the modeled peak target, not the average repository size. A peak can occur near a periodic full, an archive boundary, or the end of the growth horizon. If the action runbook flags a procurement issue, the lead-time horizon is already exposing a capacity gap.

Reserve rows explain why the target is high. Restore scratch sets aside temporary space for test restores. Staging reserve matters for cloud-gateway landing patterns. Physical encoding explains why raw disk or object-storage consumption can be higher than usable repository capacity.

Large sensitivity to daily change means the first validation task is measuring real changed-block rate from backup reports. Large sensitivity to dedupe or compression means the plan should be checked against representative production data, not vendor sample ratios.

Worked Examples:

Weekly synthetic full repository. A 12 TiB dataset with 3% daily change, four full restore points, 30 daily incrementals, two copy tiers, 1.7:1 compression, 1.6:1 dedupe, 20% headroom, and fast-clone synthetic fulls usually points to repository capacity well above a simple four-full estimate. The second copy tier doubles retained data before overhead and headroom are applied.
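A back-of-envelope version of this example, simplified from the daily timeline the calculator actually runs, shows why the target lands well above a four-full estimate. The arithmetic below is illustrative, not the tool's exact output, and it ignores growth, scratch, and encoding.

```python
full, change, fulls, incs = 12.0, 0.03, 4, 30
reduction = 1.7 * 1.6                           # combined compression x dedupe

logical = fulls * full + incs * full * change   # 48 + 10.8 = 58.8 TiB retained
stored = logical / reduction                    # after 2.72:1 reduction
two_tiers = stored * 2                          # second copy tier doubles it
with_headroom = two_tiers * 1.20                # 20% planning margin, ~51.9 TiB
```

Compare that with the naive estimate of four reduced fulls (48 / 2.72, about 17.6 TiB): the incremental chain, the second tier, and the margin roughly triple the requirement before any working reserve is added.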

Compliance archive case. A lower 2% daily change rate can still produce a bigger capacity target when differential backups, long lock periods, monthly archives, and yearly archives are selected. In that case, retention policy dominates the estimate more than day-to-day churn.

Runway check. If the current usable repository budget is 80 TiB and the forecast peak reaches that range inside the procurement lead time, the warning is not about today's fill level alone. It means the modeled policy can reach the limit before replacement capacity is realistically available.
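The runway warning reduces to two comparisons. The sketch below uses assumed field names, not the calculator's API: the warning fires only when the forecast peak crosses the budget and does so inside the procurement lead time.

```python
def runway_warning(current_budget, peak_forecast, peak_day, lead_time_days):
    """True when the modeled peak exceeds the deployed usable budget
    before replacement capacity could realistically arrive."""
    return peak_forecast >= current_budget and peak_day <= lead_time_days

# 80 TiB deployed, modeled peak of 82 TiB on day 35, 45-day procurement:
runway_warning(80, 82, 35, 45)  # -> True: the policy can hit the limit
                                #    before new capacity is available
```

The same peak on day 60 would not trigger the warning, because a 45-day procurement cycle started today would land first.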

FAQ:

Why is the result higher than my protected data? Retention keeps multiple restore points, copy tiers multiply them, and reserves account for overhead, working space, fill ceiling, and restore scratch.

Should dedupe and compression be multiplied together? The calculator uses both as a combined reduction ratio. Confirm the resulting effective reduction against your backup platform, because some products report logical, protected, transferred, and stored sizes differently.
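A small illustration of treating the two ratios as one combined reduction. This matches the behavior stated in the answer above; it is not a claim about how any specific backup product reports its own savings.

```python
compression, dedupe = 1.7, 1.6
combined = compression * dedupe   # 2.72:1 effective reduction
stored = 100 / combined           # 100 TiB logical -> ~36.8 TiB stored
```

If your platform reports, say, "3.5x total savings" on representative data, enter ratios whose product approximates that figure rather than the two vendor sample numbers separately.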

Does this prove a backup policy is recoverable? No. Capacity is only one part of recoverability. Test restore time, repository health, immutability, credentials, and offsite availability separately.

Glossary:

Protected dataset
The logical source data before backup reduction and repository overhead.
Incremental backup
A restore point containing changes since the previous backup job.
Differential backup
A restore point containing all changes since the last full backup.
Usable fill ceiling
The maximum planned fill percentage before raw storage is treated as exhausted.