| Tier | Frequency | Copies | Per backup | Storage (raw) | Storage (net) | Coverage | Copy |
|---|---|---|---|---|---|---|---|
| {{ row.tier }} | {{ row.frequency }} | {{ row.copies }} | {{ row.sizePerBackup }} | {{ row.rawStorage }} | {{ row.netStorage }} | {{ row.coverage }} | |
| Total | | {{ totalBackups }} | | {{ totalStorageRawReadable }} | {{ totalStorageReadable }} | {{ coverageHorizon }} | |
Use these observations to prioritise storage remediation or policy tweaks.
Backup rotation is the practice of keeping recent restore points alongside a smaller set of older copies so you can recover a fresh mistake without storing every version forever. The tradeoff is simple to describe but hard to size: longer history improves resilience, yet each added week, month, or year increases storage consumption and pushes more policy weight onto the backup repository.
This planner turns that tradeoff into a concrete estimate. It models daily incrementals plus weekly, monthly, and yearly full copies, then reports total backup count, raw capacity, dedupe-adjusted footprint, coverage horizon, a per-tier capacity table, planner notes, a tier chart, and JSON export.
The preset list reflects common retention conversations rather than one vendor's exact implementation language. Two Grandfather-Father-Son (GFS) starting points sit beside a longer compliance-oriented recipe and a shorter cloud snapshot recipe, which makes the page useful when you want to compare a short rollback window against a long-archive policy without rebuilding the plan from scratch.
A modest file server shows why this matters. Two weeks of daily incrementals may look manageable on their own, but once monthly and yearly full copies are layered in, the older tiers often become the largest share of storage even before growth is considered. The result is a better answer to "how far back can we recover, and what will that really cost?" than a raw copy count alone.
The output is still a planning model, not proof that restores will succeed. Compression, dedupe, and growth are entered as assumptions, while real backup products can behave differently by workload, encryption, repository design, and chain handling. Use the numbers to compare policy choices, then confirm the winning plan with product-specific restore tests and operational runbooks.
Start with the preset that feels closest to your environment, then replace the default daily change volume and full-backup size with your own measurements. Because the totals update as the inputs change, the page is strongest as a comparison workspace: keep one assumption fixed, change one policy variable, and watch which tier starts to dominate the footprint.
The Planner Notes tab works well as a triage view after you settle on a draft policy. It calls out which tier holds the largest share of net storage, how much capacity the dedupe assumption removes from the raw total, and whether short-term restore points or long-retention copies are doing most of the work in the current model.
This package calculates the plan in the page itself. The entered size values are converted to bytes with binary units, so MB, GB, and TB are treated as powers of 1024 rather than decimal storage marketing units.
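That conversion can be pictured with a short sketch. The function name and unit table below are illustrative assumptions, not the package's actual code; the point is only that each step up a unit multiplies by 1024, not 1000:

```python
# Illustrative sketch of binary unit conversion (not the package's actual code).
# MB, GB, and TB are treated as powers of 1024, so "1 GB" is 1,073,741,824 bytes.
UNIT_FACTORS = {"MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def to_bytes(value: float, unit: str) -> int:
    """Convert a size entered as e.g. (500, "GB") into bytes."""
    return int(value * UNIT_FACTORS[unit])
```

Under this rule, `to_bytes(1, "TB")` is 1024 times `to_bytes(1, "GB")`, which is why the planner's totals can look slightly larger than the same figures quoted in decimal marketing units.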
The model has two base sizes: one compressed incremental copy and one compressed full copy. Daily retention uses the incremental base. Weekly, monthly, and yearly tiers all use the full-backup base. Each retained tier then receives a simple growth factor based on how long that tier reaches back in time, which means older copies can become larger even when the entered full size stays constant.
| Derived value | Package rule | Why it matters |
|---|---|---|
| Effective incremental size | incremental bytes / incremental compression | Sets the starting size for daily restore points. |
| Effective full size | full bytes / full compression | Sets the starting size for weekly, monthly, and yearly tiers. |
| Coverage days | Daily = n, weekly = 7n, monthly = 30n, yearly = 365n | Defines both the displayed horizon and the age used by the growth model. |
| Growth factor | 1 + (monthly change rate / 100) x (coverage days / 30) / 2 | Approximates the idea that older retained copies reflect a larger dataset on average. |
| Raw tier storage | size per backup x copies | Shows the footprint before any cross-copy savings are applied. |
| Net tier storage | raw tier storage x (1 - dedupe percent / 100) | Produces the headline footprint shown in the summary box. |
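These rules compose in one short path: a tier's size per backup is its effective base size times its growth factor. A minimal sketch of that path, with illustrative names that are assumptions rather than the package's actual implementation:

```python
# Sketch of the per-tier derivation described above; names are illustrative,
# not the package's actual implementation.
DAYS_PER_COPY = {"daily": 1, "weekly": 7, "monthly": 30, "yearly": 365}

def tier_storage(tier, copies, base_bytes, compression, monthly_change_pct, dedupe_pct):
    """Return (size_per_backup, raw, net) in bytes for one retention tier."""
    effective = base_bytes / max(compression, 1.0)        # ratios begin at 1
    coverage_days = copies * DAYS_PER_COPY[tier]
    growth = 1 + (monthly_change_pct / 100) * (coverage_days / 30) / 2
    size_per_backup = effective * growth
    raw = size_per_backup * copies
    net = raw * (1 - min(max(dedupe_pct, 0), 80) / 100)   # dedupe clamped to 0..80
    return size_per_backup, raw, net
```

For example, a monthly tier of 12 fulls at 500 GB with 1.5x compression, 5 percent monthly growth, and 30 percent dedupe works out to about 5,200 GB raw for that tier alone, which previews why the older tiers dominate the worked examples below.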
The bounds are deliberately light but not unlimited. Negative values are floored to zero by the calculations, the dedupe percentage is clamped to a 0 to 80 range, and the change-rate slider runs from 0 to 50 percent per month. Compression ratios begin at 1, so the page never models expansion as a valid compression outcome.
| Input | What the package accepts | Operational meaning |
|---|---|---|
| Preset | Custom, GFS 7-4-12-2, GFS 14-8-12-4, Compliance 30-12-7-5, Cloud snapshot 14-6-6-0 | Loads a full retention recipe and advanced assumptions. |
| Daily incremental size | Numeric value with MB, GB, or TB | Represents the usual changed data captured between restore points. |
| Full backup size | Numeric value with MB, GB, or TB | Represents one complete protected copy before compression. |
| Retention counts | Integer copy counts for daily, weekly, monthly, and yearly tiers | Define how many restore points are kept in each layer. |
| Monthly change rate | 0 to 50 percent | Expands older retained copies as the protected dataset grows. |
| Compression ratios | Full and incremental ratios, minimum 1 | Reduce the pre-storage size of each copy type independently. |
| Global dedupe savings | 0 to 80 percent | Applies a single repository-wide savings assumption after compression. |
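A preset bundles all of these inputs into one recipe. One way to picture its shape, using a hypothetical structure whose field names are assumptions rather than the package's internal format:

```python
# Hypothetical preset shape; field names are illustrative assumptions,
# not the package's internal format.
EXAMPLE_PRESET = {
    "daily_incremental": (60, "GB"),   # usual changed data between restore points
    "full_backup": (500, "GB"),        # one complete protected copy, pre-compression
    "retention": {"daily": 7, "weekly": 4, "monthly": 12, "yearly": 2},
    "monthly_change_pct": 5,           # slider range 0..50
    "compression": {"full": 1.5, "incremental": 2.0},  # minimum 1
    "dedupe_pct": 30,                  # clamped to 0..80
}
```

The Custom preset is simply the same recipe with every field left open for editing, which is why switching presets replaces both the retention counts and the advanced assumptions at once.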
The result area is split into four practical views. Capacity Breakdown lists each tier with copy count, per-backup size, raw storage, net storage, and coverage. Planner Notes highlights dominant tiers and savings. Capacity Trends charts net storage by tier with raw values and coverage in the tooltip. JSON exports the modeled totals and tier rows in machine-readable form. The table supports CSV and DOCX export, while the chart can be saved as PNG, WebP, JPEG, or CSV.
A few interpretation limits matter. The displayed Coverage horizon is the longest retained tier, not a promise of uninterrupted day-by-day restore points across the whole window. Month and year conversions use fixed 30-day and 365-day approximations. Most importantly, the model applies one effective size to each tier rather than simulating every backup chain event individually, so it is best viewed as policy sizing rather than repository accounting.
The large Backup Footprint number is the net estimate after the dedupe assumption has been applied. The Raw badge beside it is just as important because it shows how much of the apparent efficiency depends on the dedupe percentage you entered.
The planner notes are especially helpful when the totals look reasonable but the mix does not. A policy can show an acceptable overall footprint while still putting too much weight on a single tier, which is a signal to revisit retention counts, storage class, or full-backup strategy.
Start from the package's built-in GFS-style preset with 14 daily copies, 4 weekly fulls, 12 monthly fulls, and 2 yearly archives. A 60 GB daily change volume, a 500 GB full backup, 5 percent monthly growth, 1.5x full compression, 2.0x incremental compression, and 30 percent dedupe then produce about 7.9 TB raw and 5.5 TB net with a 2.0 year horizon.
The interesting part is not the headline total. It is that the monthly and yearly full tiers carry most of the weight. The plan is telling you that long-history coverage, not daily restore points, is the real storage cost driver.
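The headline totals can be reproduced directly from the derived-value rules earlier on the page. The sketch below is illustrative, not the package's code, but it applies the same per-tier formula to the four tiers of this example:

```python
# Reproduces the GFS worked example from the model rules described earlier
# (illustrative sketch, not the package's actual implementation).
GB, TB = 1024**3, 1024**4

def plan_total(tiers, monthly_change_pct, dedupe_pct):
    """tiers: list of (copies, base_bytes, compression_ratio, days_per_copy)."""
    raw = 0.0
    for copies, base, compression, days in tiers:
        effective = base / compression
        coverage_days = copies * days
        growth = 1 + (monthly_change_pct / 100) * (coverage_days / 30) / 2
        raw += effective * growth * copies
    net = raw * (1 - dedupe_pct / 100)
    return raw, net

tiers = [
    (14, 60 * GB, 2.0, 1),    # daily incrementals
    (4, 500 * GB, 1.5, 7),    # weekly fulls
    (12, 500 * GB, 1.5, 30),  # monthly fulls
    (2, 500 * GB, 1.5, 365),  # yearly archives
]
raw, net = plan_total(tiers, monthly_change_pct=5, dedupe_pct=30)
print(round(raw / TB, 1), round(net / TB, 1))  # roughly 7.9 raw and 5.5 net
```

Running the same loop per tier also confirms the mix: the monthly tier alone contributes about 5,200 GB raw, more than the daily and weekly tiers combined.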
The cloud snapshot preset trims history and leans harder on efficiency: 30 GB of daily change, a 150 GB full copy, 14 daily copies, 6 weekly fulls, 6 monthly fulls, no yearly archive, 4 percent monthly growth, 1.7x full compression, 2.5x incremental compression, and 50 percent dedupe.
That lands at about 1.3 TB raw and 653 GB net with a 6.0 month horizon. The shorter history and stronger efficiency assumptions dramatically reduce the footprint, but the tradeoff is obvious: there is no multi-year archive layer to fall back on.
The compliance preset shows how quickly archive-heavy policies can grow. With 100 GB of daily change, a 1.5 TB full copy, 30 daily copies, 12 weekly fulls, 7 monthly fulls, 5 yearly archives, 10 percent monthly growth, 1.4x full compression, 1.8x incremental compression, and 20 percent dedupe, the package estimates about 48.1 TB raw and 38.5 TB net.
The horizon stretches to 5.0 years, but the yearly archive tier becomes a major capacity commitment. That is the kind of result that usually pushes a real storage discussion toward archive media, separate repositories, or a policy review with compliance stakeholders.
No. The package models copy counts, effective sizes, and coverage windows. It does not schedule actual backup dates or simulate a product-specific retention engine.
No. It means the retained tiers reach that far back at their own cadence. Daily points, weekly fulls, monthly fulls, and yearly archives are separate layers with different granularity.
The gap between the raw and net figures comes entirely from the entered dedupe percentage. A large gap means the plan depends heavily on repository-wide block reuse beyond simple compression.
This bundle contains no package-specific backend for the calculation path. The figures are computed in the page, and the export buttons save the same modeled state that is already visible on screen.
The summary only appears when there is at least one retained copy and the resulting storage estimate is greater than zero. If all retention counts are zero or the sizes are missing, there is nothing to model.