Introduction

Backup rotation is the practice of keeping recent restore points alongside a smaller set of older copies so you can recover a fresh mistake without storing every version forever. The tradeoff is simple to describe but hard to size: longer history improves resilience, yet each added week, month, or year increases storage consumption and pushes more policy weight onto the backup repository.

This planner turns that tradeoff into a concrete estimate. It models daily incrementals plus weekly, monthly, and yearly full copies, then reports total backup count, raw capacity, dedupe-adjusted footprint, coverage horizon, a per-tier capacity table, planner notes, a tier chart, and JSON export.

The preset list reflects common retention conversations rather than one vendor's exact implementation language. Two Grandfather-Father-Son (GFS) starting points sit beside a longer compliance-oriented recipe and a shorter cloud snapshot recipe, which makes the page useful when you want to compare a short rollback window against a long-archive policy without rebuilding the plan from scratch.

A modest file server shows why this matters. Two weeks of daily incrementals may look manageable on their own, but once monthly and yearly full copies are layered in, the older tiers often become the largest share of storage even before growth is considered. The result is a better answer to "how far back can we recover, and what will that really cost?" than a raw copy count alone.

The output is still a planning model, not proof that restores will succeed. Compression, dedupe, and growth are entered as assumptions, while real backup products can behave differently by workload, encryption, repository design, and chain handling. Use the numbers to compare policy choices, then confirm the winning plan with product-specific restore tests and operational runbooks.

Everyday Use & Decision Guide

Start with the preset that feels closest to your environment, then replace the default daily change volume and full-backup size with your own measurements. Because the totals update as the inputs change, the page is strongest as a comparison workspace: keep one assumption fixed, change one policy variable, and watch which tier starts to dominate the footprint.

  • Keep the size inputs stable when you compare retention policies. If the dataset size and the retention counts both change at once, you lose sight of which policy decision actually drove the storage jump.
  • Read Raw and net capacity together. A plan that only looks comfortable after a very optimistic dedupe percentage is a plan that deserves a more conservative storage purchase.
  • Treat Monthly full and Yearly archive counts as the main coverage levers. They usually move the farthest rollback point much more than adding a few extra daily copies.
  • Revisit the plan whenever churn grows. The monthly change-rate factor quietly inflates older tiers, so a policy that fit last quarter can drift upward even if the retention counts never move.

The Planner Notes tab works well as a triage view after you settle on a draft policy. It calls out which tier holds the largest share of net storage, how much capacity the dedupe assumption removes from the raw total, and whether short-term restore points or long-retention copies are doing most of the work in the current model.

Technical Details

This package calculates the plan in the page itself. The entered size values are converted to bytes with binary units, so MB, GB, and TB are treated as powers of 1024 rather than decimal storage marketing units.
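The binary-unit convention can be pictured with a tiny sketch; the helper name `to_bytes` is invented for illustration and is not the package's actual code:

```python
# Convert a size with a unit suffix to bytes, treating MB, GB, and TB
# as powers of 1024 (MiB, GiB, TiB) rather than decimal units.
UNIT_EXPONENT = {"MB": 2, "GB": 3, "TB": 4}

def to_bytes(value: float, unit: str) -> int:
    """Binary-unit conversion: 1 GB here means 1024**3 bytes."""
    return int(value * 1024 ** UNIT_EXPONENT[unit])

print(to_bytes(500, "GB"))  # 536870912000
```

Note that a decimal interpretation of 500 GB would give 500,000,000,000 bytes, roughly 7 percent less, which is why the convention matters for capacity planning.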

The model has two base sizes: one compressed incremental copy and one compressed full copy. Daily retention uses the incremental base. Weekly, monthly, and yearly tiers all use the full-backup base. Each retained tier then receives a simple growth factor based on how long that tier reaches back in time, which means older copies can become larger even when the entered full size stays constant.

Planner formula core

  • Effective incremental size = incremental bytes / incremental compression. Sets the starting size for daily restore points.
  • Effective full size = full bytes / full compression. Sets the starting size for weekly, monthly, and yearly tiers.
  • Coverage days: daily = n, weekly = 7n, monthly = 30n, yearly = 365n. Defines both the displayed horizon and the age used by the growth model.
  • Growth factor = 1 + (monthly change rate / 100) x (coverage days / 30) / 2. Approximates the idea that older retained copies reflect a larger dataset on average.
  • Raw tier storage = size per backup x copies. Shows the footprint before any cross-copy savings are applied.
  • Net tier storage = raw tier storage x (1 - dedupe percent / 100). Produces the headline footprint shown in the summary box.
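A single tier under these rules can be sketched as a short function; this is a minimal illustration under the stated formulas, not the package's actual code, and the names are invented:

```python
# One retention tier: growth factor, raw storage, then dedupe-adjusted net.
def tier_storage(base_gb, copies, coverage_days, change_rate_pct, dedupe_pct):
    """Return (raw_gb, net_gb) for a single retention tier."""
    if copies <= 0:
        return 0.0, 0.0
    # Older tiers reach further back, so they receive a larger growth factor.
    growth = 1 + (change_rate_pct / 100) * (coverage_days / 30) / 2
    per_backup_gb = base_gb * growth
    raw_gb = per_backup_gb * copies
    net_gb = raw_gb * (1 - dedupe_pct / 100)
    return raw_gb, net_gb

# Daily tier example: 60 GB incremental at 2.0x compression, 14 copies,
# 14-day coverage, 5 %/month change rate, 30 % dedupe.
raw, net = tier_storage(60 / 2.0, 14, 14, 5, 30)
print(round(raw), round(net))  # 425 297
```

The same function covers the full-backup tiers by swapping in the effective full size and the 7-, 30-, or 365-day coverage multipliers.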

The bounds are deliberately light but not unlimited in every direction. Negative values are floored out by the calculations, the dedupe percentage is clamped to a 0 to 80 range, and the change-rate slider runs from 0 to 50 percent per month. Compression ratios begin at 1, so the page never models expansion as a valid compression outcome.
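The bounds described above amount to simple clamps; this sketch mirrors the stated limits and is not taken from the package's source:

```python
# Input bounds expressed as clamps (values chosen only to show the limits).
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

dedupe_pct = clamp(95, 0, 80)        # dedupe savings capped at 80 percent
change_rate_pct = clamp(-10, 0, 50)  # change rate floored at 0, capped at 50
compression = max(1.0, 0.8)          # ratios below 1 never model expansion
print(dedupe_pct, change_rate_pct, compression)  # 80 0 1.0
```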

Key inputs and limits

  • Preset: Custom, GFS 7-4-12-2, GFS 14-8-12-4, Compliance 30-12-7-5, or Cloud snapshot 14-6-6-0. Loads a full retention recipe and advanced assumptions.
  • Daily incremental size: numeric value with MB, GB, or TB units. Represents the usual changed data captured between restore points.
  • Full backup size: numeric value with MB, GB, or TB units. Represents one complete protected copy before compression.
  • Retention counts: integer copy counts for the daily, weekly, monthly, and yearly tiers. Define how many restore points are kept in each layer.
  • Monthly change rate: 0 to 50 percent. Expands older retained copies as the protected dataset grows.
  • Compression ratios: full and incremental ratios, minimum 1. Reduce the pre-storage size of each copy type independently.
  • Global dedupe savings: 0 to 80 percent. Applies a single repository-wide savings assumption after compression.

The result area is split into four practical views. Capacity Breakdown lists each tier with copy count, per-backup size, raw storage, net storage, and coverage. Planner Notes highlights dominant tiers and savings. Capacity Trends charts net storage by tier with raw values and coverage in the tooltip. JSON exports the modeled totals and tier rows in machine-readable form. The table supports CSV and DOCX export, while the chart can be saved as PNG, WebP, JPEG, or CSV.

A few interpretation limits matter. The displayed Coverage horizon is the longest retained tier, not a promise of uninterrupted day-by-day restore points across the whole window. Month and year conversions use fixed 30-day and 365-day approximations. Most importantly, the model applies one effective size to each tier rather than simulating every backup chain event individually, so it is best viewed as policy sizing rather than repository accounting.
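Under those fixed 30-day and 365-day approximations, the displayed horizon is simply the longest tier's coverage span. A small sketch with illustrative retention counts:

```python
# Coverage days per tier: daily = n, weekly = 7n, monthly = 30n, yearly = 365n.
retention = {"daily": 14, "weekly": 4, "monthly": 12, "yearly": 2}
days_per_copy = {"daily": 1, "weekly": 7, "monthly": 30, "yearly": 365}

coverage_days = {tier: retention[tier] * days_per_copy[tier] for tier in retention}
horizon_days = max(coverage_days.values())
print(horizon_days, round(horizon_days / 365, 1))  # 730 2.0
```

Two yearly archives dominate here, so the page would report a 2.0 year horizon even though daily granularity ends after two weeks.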

Step-by-Step Guide

  1. Choose a Preset that roughly matches the policy discussion you are having. If none fit, switch to Custom and build the schedule from scratch.
  2. Enter Daily incremental size and Full backup size with the correct units. These two inputs drive almost everything else in the model.
  3. Set the retained copy counts for Daily incrementals, Weekly full backups, Monthly full backups, and Yearly archives.
  4. Open Advanced and adjust Monthly change rate, Full compression ratio, Incremental compression, and Global dedupe savings so the assumptions reflect your environment.
  5. Read the summary box first. It shows the net backup footprint, total copy count, effective incremental and full sizes, raw capacity, dedupe label, and the farthest rollback horizon.
  6. Use Capacity Breakdown to find which tier dominates storage, Planner Notes to review the summary narrative, Capacity Trends for a visual comparison, and JSON or the export buttons when you need to share the plan.

Interpreting Results

The large Backup Footprint number is the net estimate after the dedupe assumption has been applied. The Raw badge beside it is just as important because it shows how much of the apparent efficiency depends on the dedupe percentage you entered.

  • Total backup copies is the sum of every retained copy across all included tiers. It is a count of stored restore points, not a count of backup jobs.
  • Coverage horizon is the farthest modeled rollback point reached by the longest tier. It is useful for policy comparison, but it does not mean every date inside that window has the same restore granularity.
  • If Monthly full or Yearly archive rows dominate net storage, long-term history is driving cost more than short-term recovery speed.
  • If the gap between raw and net is very large, challenge the dedupe assumption before you treat the footprint as procurement-ready.
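One quick way to challenge the dedupe assumption is to recompute the net figure across a range of savings percentages; the 8 TB raw figure below is purely illustrative:

```python
# Sensitivity of the net footprint to the entered dedupe percentage.
raw_tb = 8.0
net_by_dedupe = {pct: raw_tb * (1 - pct / 100) for pct in (0, 30, 50, 80)}
for pct, net_tb in net_by_dedupe.items():
    print(f"{pct}% dedupe -> {net_tb:.1f} TB net")
```

If the plan only fits the available repository at the optimistic end of that range, the procurement number should come from a more conservative row.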

The planner notes are especially helpful when the totals look reasonable but the mix does not. A policy can show an acceptable overall footprint while still putting too much weight on a single tier, which is a signal to revisit retention counts, storage class, or full-backup strategy.

Worked Examples

Default GFS balance

The built-in GFS 7-4-12-2 style starting point in this package pairs a 60 GB daily change rate and a 500 GB full backup with 14 daily copies, 4 weekly fulls, 12 monthly fulls, 2 yearly archives, 5 percent monthly growth, 1.5x full compression, 2.0x incremental compression, and 30 percent dedupe. Those inputs produce about 7.9 TB raw and 5.5 TB net with a 2.0 year horizon.
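Those totals can be reproduced with a short loop, assuming the growth and dedupe rules described in Technical Details; the tier tuples and names are illustrative:

```python
# Per tier: (size_gb, compression_ratio, copies, calendar_days_per_copy).
tiers = {
    "daily":   (60,  2.0, 14, 1),
    "weekly":  (500, 1.5, 4,  7),
    "monthly": (500, 1.5, 12, 30),
    "yearly":  (500, 1.5, 2,  365),
}
change_rate_pct, dedupe_pct = 5, 30

raw_gb = 0.0
for size_gb, ratio, copies, span_days in tiers.values():
    coverage_days = copies * span_days
    growth = 1 + (change_rate_pct / 100) * (coverage_days / 30) / 2
    raw_gb += (size_gb / ratio) * growth * copies
net_gb = raw_gb * (1 - dedupe_pct / 100)
print(f"raw {raw_gb / 1024:.1f} TB, net {net_gb / 1024:.1f} TB")  # raw 7.9 TB, net 5.5 TB
```

Running the same loop per tier also confirms the narrative below: the monthly tier alone contributes about 5.1 TB raw, far more than all daily incrementals combined.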

The interesting part is not the headline total. It is that the monthly and yearly full tiers carry most of the weight. The plan is telling you that long-history coverage, not daily restore points, is the real storage cost driver.

Shorter cloud-style retention

The cloud snapshot preset trims history and leans harder on efficiency: 30 GB of daily change, a 150 GB full copy, 14 daily copies, 6 weekly fulls, 6 monthly fulls, no yearly archive, 4 percent monthly growth, 1.7x full compression, 2.5x incremental compression, and 50 percent dedupe.

That lands at about 1.3 TB raw and 653 GB net with a 6.0 month horizon. The shorter history and stronger efficiency assumptions dramatically reduce the footprint, but the tradeoff is obvious: there is no multi-year archive layer to fall back on.

Long-retention compliance pressure

The compliance preset shows how quickly archive-heavy policies can grow. With 100 GB of daily change, a 1.5 TB full copy, 30 daily copies, 12 weekly fulls, 7 monthly fulls, 5 yearly archives, 10 percent monthly growth, 1.4x full compression, 1.8x incremental compression, and 20 percent dedupe, the package estimates about 48.1 TB raw and 38.5 TB net.

The horizon stretches to 5.0 years, but the yearly archive tier becomes a major capacity commitment. That is the kind of result that usually pushes a real storage discussion toward archive media, separate repositories, or a policy review with compliance stakeholders.

FAQ

Does this create a dated backup calendar?

No. The package models copy counts, effective sizes, and coverage windows. It does not schedule actual backup dates or simulate a product-specific retention engine.

Does coverage horizon mean I have daily restore points for the whole window?

No. It means the retained tiers reach that far back at their own cadence. Daily points, weekly fulls, monthly fulls, and yearly archives are separate layers with different granularity.

Why are raw and net storage so different?

The difference is created by the entered dedupe percentage. A large gap means the plan depends heavily on repository-wide block reuse beyond simple compression.

Do my backup sizes leave the browser?

No. This bundle contains no package-specific backend for the calculation path: the figures are computed in the page, and the export buttons save the same modeled state that is already visible on screen.

Why do the totals stay empty?

The summary only appears when there is at least one retained copy and the resulting storage estimate is greater than zero. If all retention counts are zero or the sizes are missing, there is nothing to model.

Glossary

Incremental backup
A backup that stores only the data changed since the previous recovery point.
Full backup
A complete protected copy of the dataset at a point in time.
GFS
Grandfather-Father-Son retention, a layered pattern that keeps short-, medium-, and long-term copies.
Dedupe
Repository savings created by storing repeated blocks only once across multiple copies.
Coverage horizon
The farthest modeled rollback point reached by the longest retained tier.
