Introduction

RAID capacity planning starts with a number that looks simple and almost never survives the real layout. Raw disk size is only the starting pool. Once you choose striping, mirroring, parity, hot spares, filesystem overhead, operating headroom, and an extra safety reserve, the number you can promise to users is usually much smaller.

This calculator covers RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50, and RAID 60. You can plan with one shared disk size or enter every disk manually, switch between decimal units such as TB and binary units such as TiB, and compare layouts against the same disk set without re-entering the rest of the assumptions.

That makes it useful for NAS builds, server refreshes, virtualization hosts, backup targets, and lab arrays where the real question is not just how much raw space is installed, but how much protected capacity is still left after you leave room to operate safely. The page answers that with a summary banner, a Capacity Ledger, a Failure Envelope, a RAID Matchup table and chart, a Capacity Mix chart, and a structured JSON export.

The calculation stays in your browser. Disk sizes, spare counts, reserve settings, tables, charts, CSV files, DOCX exports, chart downloads, and JSON output are generated from the current page state rather than sent to a remote service.

It is still a planning model, not a controller firmware manual. Different RAID cards, filesystems, and vendors handle spare eligibility, metadata reservation, rebuild priorities, and wide-group limits differently. The strongest use of this page is to make the tradeoffs visible early, then confirm the final layout against the platform you will actually deploy.

Technical Details

Every input is converted to bytes first. That matters because storage marketing often uses decimal prefixes such as GB and TB, while operating systems and some administrators think in GiB and TiB. The calculator lets you choose either style, so the same physical disks can be discussed in the unit system your procurement sheet or operating system actually uses.
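
The gap between the two prefix systems is easy to underestimate at terabyte scale. As a minimal sketch (the function and dictionary names here are illustrative, not the page's actual code), converting everything to bytes first makes the roughly 9% difference explicit:

```python
# Decimal (TB) and binary (TiB) prefixes disagree by roughly 9% at the
# terabyte scale, so every size is normalized to bytes before RAID math.
DECIMAL = {"GB": 10**9, "TB": 10**12}
BINARY = {"GiB": 2**30, "TiB": 2**40}

def to_bytes(size: float, unit: str) -> float:
    """Convert a disk size in the given unit to bytes."""
    factors = {**DECIMAL, **BINARY}
    return size * factors[unit]

# A "10 TB" disk holds only about 9.09 TiB:
tib_equivalent = to_bytes(10, "TB") / BINARY["TiB"]
```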

With one shared disk size, the installed inventory is simply slot count multiplied by per-disk size. Manual mode changes the logic. Each disk entry is converted separately, blank manual slots are ignored, the disks are sorted from largest to smallest, and the requested number of hot spares is pulled from that sorted list before the active set is evaluated. In a mixed array, that means the spare pool is reserved first and the remaining active members define the layout.
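
The sort-then-reserve order can be sketched in a few lines. This is an assumption-level illustration of the rule described above (the helper name is hypothetical), not the page's implementation:

```python
def split_spares(disks, spare_count):
    """Sort installed disks largest-first, reserve spares from the top,
    and return (spares, active). Blank manual slots are assumed to be
    filtered out before this step."""
    ordered = sorted(disks, reverse=True)
    return ordered[:spare_count], ordered[spare_count:]

# A mixed shelf of 18 TB and 12 TB disks with one hot spare:
spares, active = split_spares([12, 18, 12, 18, 18, 12, 18, 12], 1)
# One 18 TB disk is reserved first; seven disks remain active.
```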

For RAID 0, the tool stripes every active disk and provides no redundancy. RAID 1 uses complete mirror groups of the chosen mirror width. RAID 5 and RAID 6 use all active disks once the minimum counts are met. RAID 10 requires at least two complete mirror groups, each using the chosen mirror width. RAID 50 and RAID 60 require a group count and disks-per-group value; only complete groups are engaged, so extra active disks can sit idle instead of being partially counted.
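
The complete-group rules above can be condensed into one engagement function. This is a simplified sketch under the rules as described, not the calculator's own code, and real platforms may validate group widths differently:

```python
def engaged_count(layout, active, mirror_width=2, groups=2, per_group=4):
    """How many active disks the chosen layout can engage; the rest idle.
    A simplified sketch of the complete-group rule described above."""
    if layout == "RAID0":
        return active
    if layout in ("RAID1", "RAID10"):
        full_mirrors = active // mirror_width
        if layout == "RAID10" and full_mirrors < 2:
            return 0  # RAID 10 needs at least two complete mirror groups
        return full_mirrors * mirror_width
    if layout in ("RAID5", "RAID6"):
        minimum = 3 if layout == "RAID5" else 4
        return active if active >= minimum else 0
    if layout in ("RAID50", "RAID60"):
        needed = groups * per_group  # only complete groups are engaged
        return needed if active >= needed else 0
    raise ValueError(layout)

# Fourteen active disks in RAID 60 as three groups of four: two sit idle.
idle_example = engaged_count("RAID60", 14, groups=3, per_group=4)
```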

After the engaged members are chosen, the active pool is normalized to the smallest engaged disk. That is the key mixed-size rule. Any capacity above that smallest engaged member becomes stranded capacity inside the current layout. The calculator can then subtract a per-disk system reserve, apply a filesystem-overhead profile or a custom overhead fraction, apply RAID efficiency, reduce the result to the target fill you actually want to run at, and finally carve out an explicit safety reserve. That last number is the planned usable figure shown in the summary.

Recovery metrics are handled separately from capacity. Rebuild time is estimated from the smallest engaged disk and the rebuild throughput you enter. The Failure Envelope then combines the annual disk failure rate, the rebuild duration, the number of disks still exposed during recovery, and the additional failures needed to break guaranteed protection into a conservative probability estimate. That makes the risk field useful for comparing scenarios, but it is not a vendor-certified failure forecast.
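
One way to build that kind of conservative estimate is a binomial model over the exposed disks. The sketch below mirrors the *kind* of combination described above, not the page's exact formula, and it assumes independent failures at a constant annual rate:

```python
import math

def rebuild_failure_estimate(afr, rebuild_hours, exposed_disks, extra_needed):
    """Rough probability that enough additional disks fail during the
    rebuild window to break guaranteed protection. Assumes independent
    failures at a constant annual failure rate (afr); illustrative only."""
    p = afr * rebuild_hours / 8766.0  # chance one disk fails during rebuild
    n, k = exposed_disks, extra_needed
    # Probability that at least k of the n exposed disks fail.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Eleven surviving disks, 2% AFR, ~15.4 h rebuild; RAID 6 still tolerates
# one more loss, so two extra failures are needed to break protection.
risk = rebuild_failure_estimate(0.02, 15.4, 11, 2)
```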

Installed disks: shared size or manual list
Spare removal: reserved disks leave the active pool
Engaged layout: complete mirrors or parity groups only
Planning deductions: system reserve, FS overhead, fill target, safety reserve
Outputs: capacity, tolerance, rebuild, charts, exports

Raw disks -> spare removal and group selection -> equalized engaged pool -> planning deductions -> usable and recovery views

The page does not jump from raw capacity straight to one answer. It shows the intermediate losses so you can see whether parity, mixed sizes, spare choice, or operating headroom is doing most of the damage.
Capacity flow used here

Installed raw = sum of all installed disks

Equalized engaged raw = smallest engaged disk x engaged disks

Post-system raw = equalized engaged raw - (system reserve per disk x engaged disks)

Protected usable = post-system raw x (1 - filesystem overhead) x RAID efficiency

Usable before reserve = protected usable x target fill

Planned usable after reserve = usable before reserve x (1 - safety reserve)

Rebuild hours = smallest engaged disk / rebuild throughput
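
The whole flow above can be collapsed into one function. This is a direct transcription of the formulas listed here, with illustrative names, working in decimal TB for readability:

```python
def planned_usable(disk_sizes_tb, engaged, efficiency,
                   system_reserve_tb=0.0, fs_overhead=0.0,
                   target_fill=1.0, safety_reserve=0.05,
                   rebuild_mb_s=180):
    """Apply the capacity-flow formulas above to an engaged disk set.
    Sizes in decimal TB; returns (planned_usable_tb, rebuild_hours)."""
    smallest = min(disk_sizes_tb)
    equalized = smallest * engaged
    post_system = equalized - system_reserve_tb * engaged
    protected = post_system * (1 - fs_overhead) * efficiency
    before_reserve = protected * target_fill
    after_reserve = before_reserve * (1 - safety_reserve)
    rebuild_hours = smallest * 1e12 / (rebuild_mb_s * 1e6) / 3600
    return after_reserve, rebuild_hours

# Eight 4 TB disks in RAID 10 (mirror width 2, efficiency 1/2),
# run at an 80% target fill with the default 5% safety reserve:
usable, hours = planned_usable([4.0] * 8, 8, 0.5, target_fill=0.8)
```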

RAID layout rules used by the calculator
Layout | Minimum active disks in this calculator | Capacity share used for data | Guaranteed tolerance | What the page does with the disk set
RAID 0 | 1 | 100% | 0 disks | Uses every active disk and keeps no redundancy.
RAID 1 | mirror width | 1 / mirror width | mirror width - 1 per mirror group | Counts only complete mirror groups of the chosen width.
RAID 5 | 3 | (N - 1) / N | 1 disk | Uses all active disks once the minimum is met.
RAID 6 | 4 | (N - 2) / N | 2 disks | Uses all active disks once the minimum is met.
RAID 10 | 2 x mirror width | 1 / mirror width | mirror width - 1 per mirror group | Requires at least two complete mirror groups before striping them together.
RAID 50 | groups x disks per group | (per group - 1) / per group | 1 disk guaranteed, one per group in the best case | Uses only the complete RAID 5 groups you specify.
RAID 60 | groups x disks per group | (per group - 2) / per group | 2 disks guaranteed, two per group in the best case | Uses only the complete RAID 6 groups you specify.

Everyday Use & Decision Guide

Start with the hardware you truly have, not the capacity target you wish you had. Count installed disks, choose the unit system that matches your environment, and decide whether every drive is effectively identical. If even one drive is smaller, manual mode usually gives a truer picture than pretending the array is uniform.

Hot spares are easiest to justify on larger protected sets where replacement may not be immediate. The tool treats spares as reserved disks outside usable capacity, which is the right planning mindset. Real platforms also care about media type, interface, security mode, and spare size eligibility, so confirm those rules on the final system before you assume that any spare can protect any member.

Use the filesystem controls to avoid fooling yourself with raw RAID math. The ext4/XFS, btrfs, and ZFS presets are quick planning shortcuts, not promises. If your actual stack uses snapshots heavily, thin provisioning, reserved blocks, or extra metadata partitions, switch to custom overhead and enter the assumption you want to defend.

Target fill and safety reserve answer different questions. Target fill is operational headroom inside the usable pool. Safety reserve is an extra slice you deliberately refuse to allocate even after the target-fill reduction. If you want the number that application owners should plan against, planned usable after reserve is usually the safest figure to share.
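
Because the two deductions stack multiplicatively, they cut deeper together than either looks alone. A quick hypothetical check with a 100 TB protected pool:

```python
# Target fill and safety reserve stack multiplicatively, not additively.
protected = 100.0                      # hypothetical protected usable, TB
before_reserve = protected * 0.80      # 80% target fill -> 80.0 TB
planned = before_reserve * (1 - 0.05)  # 5% safety reserve -> 76.0 TB
```

A 20% fill deduction plus a 5% reserve leaves 76%, not 75%, of the protected pool, because the reserve is taken from the already-reduced figure.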

The RAID Matchup tab is where layout selection becomes easier. It keeps spare count, overhead, fill target, and reserve assumptions fixed, then compares the same disk set across nearby RAID choices. That makes the bar chart and table more useful than a generic RAID cheat sheet because the only moving part is the layout.

In practice, the best result is rarely the one with the highest raw capacity. A RAID 5 plan may top the capacity table while a RAID 6 or RAID 10 plan gives shorter rebuilds or safer recovery behavior for the same disks. Use the comparison view to decide which tradeoff you are actually buying.

Step-by-Step Guide

  1. Enter the number of disk slots you want to plan and the base disk size, or switch to manual mode if the array mixes capacities.
  2. Choose the unit system you want to work in, especially if your procurement notes use TB while your operating system reports TiB.
  3. Pick the RAID layout, then set mirror width or group count and disks per group when the selected layout needs those details.
  4. Reserve hot spares if you want them excluded from usable capacity from the start.
  5. Open the advanced section to choose a filesystem-overhead preset or custom overhead, a target fill, a safety reserve, an optional per-disk system reserve, a rebuild throughput, and an annual failure rate.
  6. Read the summary banner and the Capacity Ledger first, then move to Failure Envelope if recovery behavior matters more than headline capacity.
  7. Use RAID Matchup, Capacity Mix, CSV, DOCX, chart downloads, or JSON export when you need to compare scenarios, share the result, or store the assumptions.

Interpreting Results

The first distinction to keep clear is raw engaged capacity versus usable capacity versus planned usable capacity. Raw engaged is the capacity of the disks currently participating in the layout before equalization losses, filesystem deductions, or planning reserves. Usable before reserve is what remains after redundancy, overhead, and target fill are applied. Planned usable after reserve is the more conservative figure after the explicit safety reserve has also been carved out.

Mixed-size loss and idle active capacity explain two common surprises. Mixed-size loss appears when a smaller engaged member drags the equalized pool down. Idle active capacity appears when the selected mirror or parity-group structure leaves some active disks outside the final layout. Those two fields are different: one is trapped inside engaged groups, and the other is sitting outside them entirely.

The Failure Envelope needs more care than the headline capacity number. Guaranteed tolerance is the safe number to plan around. Best-case tolerance can be higher for mirrored or grouped layouts, but only if failures are distributed favorably across mirrors or parity groups. The status badge then turns those facts into a quick posture label: No redundancy, Long rebuild window, Balanced, or Recovery-focused.

The charts answer two different questions. RAID Matchup Map compares planned usable capacity across valid layouts for the same disk set. Capacity Mix shows where the current array budget goes, separating planned usable space from redundancy, target-fill headroom, safety reserve, platform and filesystem overhead, stranded capacity, and hot spares. The JSON tab and file exports are there when you need to preserve the assumptions instead of only reading them once on the page.

How to read the main result surfaces in the RAID capacity calculator
Result surface | What it answers | Common mistake
Summary badges | The current layout, planned usable capacity, engaged disks, spare count, tolerance, fill target, reserve, and rebuild time. | Treating the first big capacity number as raw storage rather than a planned figure after deductions.
Capacity Ledger | Where raw capacity was lost or reserved on the way to the final planning number. | Ignoring mixed-size loss or idle active disks and blaming parity alone for the drop.
Failure Envelope | How much failure protection remains and how exposed the array is during rebuild. | Reading best-case tolerance as a guarantee for any random failure pattern.
RAID Matchup | Which valid layout gives the best capacity and protection balance for the same installed disks. | Comparing RAID levels while changing spare count or reserve assumptions at the same time.
Capacity Mix | How the current array budget is split between usable space and every major loss bucket. | Treating all non-usable space as parity overhead when some of it is reserve, spares, or stranded capacity.
JSON and exports | A reusable record of inputs, current plan, and cross-layout comparison data. | Sharing a screenshot without preserving the assumptions that produced it.

Worked Examples

Twelve 10 TB disks in RAID 6

Start with 12 identical 10 TB disks, no hot spare, no filesystem overhead, a 100% target fill, and the default 5% safety reserve. RAID 6 uses all 12 active disks and gives two-disk guaranteed tolerance. The protected usable pool is 100 TB because two disk equivalents are consumed by parity. After the 5% safety reserve, the planned usable figure is 95 TB. At the default rebuild throughput of 180 MB/s, the rebuild estimate for the smallest engaged disk is about 15.4 hours. This is a good baseline example because the ledger is clean: almost every loss is clearly parity or reserve rather than mixed-size waste.
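
The arithmetic in this example can be checked in a few lines (decimal TB throughout):

```python
# Checking the 12 x 10 TB RAID 6 example.
raw = 12 * 10.0
protected = raw * (12 - 2) / 12        # two disk-equivalents go to parity
planned = protected * (1 - 0.05)       # 5% safety reserve
rebuild_h = 10e12 / (180e6 * 3600)     # smallest disk / rebuild throughput
# protected = 100.0, planned = 95.0, rebuild_h rounds to 15.4
```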

A mixed array with one large spare

Now switch to manual mode and enter 18, 18, 18, 18, 12, 12, 12, and 12 TB with one hot spare and RAID 5. The spare is pulled from the largest end of the sorted list, so one 18 TB disk leaves the active pool first. That leaves 102 TB of raw active capacity, but the engaged set is equalized to the smallest active disk at 12 TB. Across seven active members, only 84 TB counts as equalized engaged raw, so 18 TB is stranded before parity is even applied. RAID 5 then turns that equalized pool into 72 TB of usable capacity before any fill target or reserve. The example is useful because it shows why a mixed shelf can disappoint twice: once from spare reservation and again from size normalization.
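
The double penalty is easy to reproduce numerically (decimal TB throughout):

```python
# Checking the mixed-array RAID 5 example.
disks = sorted([18, 18, 18, 18, 12, 12, 12, 12], reverse=True)
spares, active = disks[:1], disks[1:]      # the largest disk becomes the spare
raw_active = sum(active)                    # 102 TB left in the active pool
equalized = min(active) * len(active)       # 7 x 12 = 84 TB after equalization
stranded = raw_active - equalized           # 18 TB trapped above the smallest disk
usable = equalized * (len(active) - 1) / len(active)  # RAID 5: (N - 1) / N
# raw_active = 102, equalized = 84, stranded = 18, usable = 72.0
```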

Fourteen 8 TB disks in RAID 60 with different group choices

Suppose you have 14 identical 8 TB disks and want RAID 60. If you set the layout to three groups of four, the calculator can only engage 12 disks and leaves two active disks idle because the selected pattern requires complete groups. That produces 48 TB of protected usable capacity before any fill or reserve deductions. If you instead use two groups of seven, all 14 disks are engaged and the protected usable pool rises to 80 TB before fill and reserve. This is exactly the kind of scenario where RAID Matchup and the idle-disk field earn their keep: the layout rule, not the disks themselves, is what changed the outcome.
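
The group-choice gap can be verified directly; the helper name below is illustrative, and both results are before fill and reserve deductions:

```python
# Checking the 14 x 8 TB RAID 60 comparison (decimal TB).
def raid60_usable(groups, per_group, disk_tb):
    engaged = groups * per_group
    data_disks = groups * (per_group - 2)  # two parity disks per group
    return engaged, data_disks * disk_tb

three_by_four = raid60_usable(3, 4, 8)  # (12, 48): two active disks idle
two_by_seven = raid60_usable(2, 7, 8)   # (14, 80): every disk engaged
```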

FAQ

Does the calculator model a specific RAID controller or NAS vendor?

No. It models RAID layout math and planning deductions that are broadly useful across storage platforms. Controller-specific spare rules, metadata layouts, rebuild behavior, and supported group widths still need to be checked against the system you will deploy.

Do blank manual disk slots count as zero-size drives?

No. Blank manual entries are ignored. That lets you plan a partially filled enclosure without pretending the empty bays are installed disks.

Why can planned usable capacity be far below installed raw capacity?

Several deductions can stack at once: hot spares leave the active pool, mixed-size disks are equalized to the smallest engaged member, parity or mirrors consume capacity, filesystem overhead reduces the protected pool, target fill keeps operating headroom, and safety reserve removes one more slice after that.

Why are some active disks marked idle?

The selected mirror width or parity-group layout may only accept complete groups. If the active disk count does not divide cleanly into those groups, the remaining active disks stay outside the final layout and are reported as idle.

Does best-case tolerance mean any disks can fail in any pattern?

No. Best-case tolerance assumes failures are spread across different mirror or parity groups in a favorable way. Guaranteed tolerance is the safer planning number because it does not rely on where the failures land.

Is the rebuild risk figure a URE model or a warranty statement?

No. The page uses your annual failure rate input, the rebuild estimate, the disks exposed during recovery, and the number of extra failures that would break guaranteed protection. It is a conservative comparison estimate, not a manufacturer forecast.

Does the page send my disk list or RAID plan elsewhere?

No. The calculator works locally in the browser and generates its tables, charts, and exports from the data already on the page.

Glossary

Active disks
Installed disks left after the reserved hot spares are removed.
Engaged disks
The subset of active disks that the selected layout can actually use once mirror or group rules are enforced.
Equalized engaged raw
The engaged pool after every participating disk is capped to the size of the smallest engaged member.
Hot spare
A reserved disk kept out of usable capacity so it can stand by for rebuild work after a failure.
Target fill
The share of protected usable capacity you plan to run at in normal operation instead of filling the array to the edge.
Safety reserve
An extra planning deduction applied after target fill so you can keep deliberate growth or emergency headroom.
Guaranteed tolerance
The number of disk failures the current layout can absorb without relying on a favorable failure pattern.
Best-case tolerance
The higher failure count that may be survivable only when failures are distributed across groups in the most favorable way.