Provisioned Usable RAID Capacity
{{ result.primaryDisplay }}
{{ result.secondaryText }}
{{ result.statusText }} {{ result.layoutLabel }} {{ result.faultDisplay }} Fill {{ formatPct(params.target_fill_pct, 0) }}
RAID sizing inputs
Accepted values include RAID 5, RAID 6, RAID 10, and RAID-Z layouts.
Example: 10 drives in each RAID 6 group.
drives
Example: 3 identical vdevs or RAID groups.
groups
Example: 18 TB per drive; select decimal or binary units.
Use 0 if all installed drives are active members.
drives
Common planning range: 70% to 85%.
%
Enter 0 to hide target-fit guidance.
Example: 24 bays in one enclosure.
bays
Use 0 if every available bay may be populated.
bays
Examples: 12, 24, or 60 bays per shelf.
bays
Enter 0 if overhead is already included elsewhere.
%
Example: 10 GB per drive for appliance partitions.
GB
Enter 70 for a 70% read, 30% write workload.
%
Use 0 to skip IOPS estimation.
IOPS
Use 0 to skip rebuild-time estimates.
MB/s
{{ advanced.rebuild_contention_pct }}%
Accepted range: 0% to 90%.
Example: 0.8 for 0.8% annualized failure rate.
%
Choose Skip URE model when no media error estimate is needed.
Accepted range: 0 to 6 decimal places.
Metric Value Planning Context Copy
{{ row.label }} {{ row.value }} {{ row.note }}
Layout Usable (TiB) Efficiency Fault Tolerance Write Penalty Profile Copy
{{ row.label }} {{ row.usableTib }} {{ row.efficiency }} {{ row.faultTolerance }} {{ row.writePenalty }} {{ row.profile }}
Path Configuration Provisioned usable Target delta Bay delta Sizing note Copy
{{ row.label }} {{ row.configuration }} {{ row.usable }} {{ row.targetDelta }} {{ row.bayDelta }} {{ row.note }}
Drive size Provisioned usable Delta vs current Sizing signal Rebuild window Market note Copy
{{ row.driveSize }} {{ row.usable }} {{ row.deltaVsCurrent }} {{ row.sizingSignal }} {{ row.rebuild }} {{ row.note }}
Field Value Copy
{{ row.label }} {{ row.value }}

        

Introduction

RAID sizing is a capacity and risk exercise, not just a matter of multiplying drive count by drive size. The layout decides how many drives store data, how many are consumed by mirrors or parity, and how many failures can be survived before data is at risk.

Planning numbers also need operational deductions. Hot spares, metadata overhead, platform partitions, and a steady-state fill target all reduce the capacity that should be promised to applications. A plan that looks large on paper can become too tight once rebuild windows, chassis bays, and future expansion steps are included.

The useful answer is therefore a provisioning envelope: raw capacity, protected capacity, planned usable capacity, fault tolerance, growth path, and the assumptions behind them.

Technical Details

Classic RAID capacity is governed by the data-drive count inside each group. RAID 5 and RAID-Z1 spend one member on parity. RAID 6 and RAID-Z2 spend two. RAID-Z3 spends three. RAID 10 and mirror-style layouts trade roughly half the active members for mirrored copies, while RAID 0 spends nothing on redundancy and therefore has no fault tolerance.
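
To make that rule concrete, here is a minimal Python sketch of the per-group data-member count. The layout keys and the data_members helper are illustrative assumptions, not the calculator's internals, and RAID 10 is modeled as simple two-way mirrors.

    # Data members per group under the simplified layout rules above.
    # RAID 10 is modeled as two-way mirrors: half the members hold data.
    PARITY_COST = {"raid0": 0, "raid5": 1, "raidz1": 1,
                   "raid6": 2, "raidz2": 2, "raidz3": 3}

    def data_members(layout: str, drives_per_group: int) -> int:
        if layout == "raid10":
            return drives_per_group // 2
        return drives_per_group - PARITY_COST[layout]

    print(data_members("raid6", 10))   # 8 of 10 members carry data
    print(data_members("raid10", 8))   # 4 data members across 4 mirror pairs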

After layout math, the calculator subtracts hot spares and optional overhead, then applies the planning fill target. It can also estimate mixed random IOPS from read ratio, per-drive IOPS, and write-penalty assumptions. Rebuild-time and simple annual-loss estimates appear only when the needed advanced assumptions are present.
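
The IOPS estimate follows the conventional write-penalty model. A minimal sketch, assuming typical planning penalties (about 2 backend I/Os per host write for RAID 10, 4 for RAID 5, 6 for RAID 6); the host_iops name and the example figures are illustrative, not the tool's exact formula.

    # Conventional write-penalty model for mixed random IOPS: a planning
    # approximation, not a benchmark result.
    def host_iops(total_drives: int, iops_per_drive: float,
                  read_pct: float, write_penalty: float) -> float:
        read = read_pct / 100.0
        write = 1.0 - read
        backend = total_drives * iops_per_drive
        return backend / (read + write * write_penalty)

    # 24 drives at 150 IOPS each, 70% read workload on RAID 6 (penalty 6):
    print(round(host_iops(24, 150, 70, 6)))  # ~1440 host IOPS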

Technical rule summary
Raw installed capacity: drives per group × group count × drive size
Protected capacity: data members per group × group count × drive size
After spares: protected capacity minus dedicated hot-spare capacity
Planned usable: capacity remaining after overhead deductions and the target fill percentage
RAID 5 / RAID-Z1 caution: single-parity groups become riskier as width and drive size grow
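
Chained together, those rules form a short pipeline. A minimal Python sketch, assuming decimal TB throughout and hypothetical parameter names rather than the tool's actual schema:

    # Capacity ledger following the rule summary above (decimal TB).
    def capacity_ledger(drives_per_group: int, groups: int, drive_tb: float,
                        data_per_group: int, hot_spares: int,
                        overhead_pct: float, fill_pct: float) -> dict:
        raw = drives_per_group * groups * drive_tb
        protected = data_per_group * groups * drive_tb
        after_spares = protected - hot_spares * drive_tb
        after_overhead = after_spares * (1 - overhead_pct / 100)
        return {"raw": raw, "protected": protected, "after_spares": after_spares,
                "planned_usable": after_overhead * fill_pct / 100}

    # One 8-drive RAID 6 group (6 data members), one hot spare,
    # 2% overhead, 80% fill target:
    print(capacity_ledger(8, 1, 12.0, 6, 1, 2, 80))
    # raw 96.0, protected 72.0, after spares 60.0, planned usable ~47.0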

The tool also builds a layout ladder from the same disk count, a drive-market comparison using common decimal drive sizes, and a growth path against an optional target usable capacity. Chassis slots, reserved empty bays, and expansion shelf size convert pure capacity shortfall into bay-level procurement guidance.

Everyday Use & Decision Guide

Use the main fields for the shape of the array: RAID layout, drives per group, group count, drive size, and hot spares. A first pass for shared storage is often dual parity or mirrors with a fill target around 80 percent, because that leaves space for snapshots, rebuild work, and ordinary growth.

  • Check RAID Sizing Ledger before quoting capacity. It shows where raw capacity goes.
  • Use RAID Layout Ladder to compare the same disk inventory across supported layouts.
  • Use RAID Growth Path after setting a target usable capacity.
  • Use RAID Drive Market when choosing between more bays and larger drives.
  • Review warning rows before relying on single parity, wide parity groups, odd mirror counts, or no hot spare on a large plan.

The result does not replace vendor sizing for a production appliance. It gives a transparent planning model so questionable assumptions are visible before the quote or change request hardens.

Step-by-Step Guide

  1. Select the RAID or RAID-Z style layout.
  2. Enter active drives per group, number of identical groups, per-drive size, and hot spares.
  3. Open Advanced for fill target, target usable capacity, chassis slots, overhead, IOPS, rebuild speed, AFR, or URE assumptions.
  4. Read the summary badges for resilience, capacity bias, or review status.
  5. Export the ledger, ladder, pathway, market table, or JSON when documenting the sizing decision.

Interpreting Results

Provisioned Usable RAID Capacity is the planned number after reserves, not the raw installed total. Treat it as the capacity to plan around for routine use.

Fault Tolerance reports the layout's failure cushion per group. RAID 6 and RAID-Z2 can survive two member failures in a group, but rebuild duration, URE exposure, and controller behavior still matter.

Capacity-biased profile is not a failure state. It means the plan favors usable space or high fill over resilience. That is acceptable for replaceable scratch data and much less acceptable for backup, archival, or virtual-machine storage.

Worked Examples

Eight 12 TB drives in RAID 6. With one group of eight and one hot spare, raw installed capacity is 96 TB. Dual parity leaves six data-drive equivalents (6 × 12 TB = 72 TB of protected capacity) before spares and overhead, so the ledger explains why planned usable capacity is much lower than the box label.

Mirror pairs for virtualization. A RAID 10 style plan with eight drives in four mirror pairs usually gives less usable capacity than parity, but the ladder can show why it may be preferable for random I/O and rebuild flexibility.
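
The random-I/O advantage is easy to see with the same write-penalty model used above; a minimal sketch, assuming 150 IOPS per drive and a write-heavy 30/70 read/write mix, both figures illustrative:

    # Same eight drives, different write penalties: mirrors cost about
    # 2 backend I/Os per host write, dual parity about 6.
    def host_iops(drives: int, per_drive: float, read_pct: float,
                  penalty: float) -> float:
        r = read_pct / 100.0
        return drives * per_drive / (r + (1.0 - r) * penalty)

    for name, penalty in (("RAID 10", 2), ("RAID 6", 6)):
        print(name, round(host_iops(8, 150, 30, penalty)))
    # RAID 10 ~706 vs RAID 6 ~267 host IOPS at a 30% read mix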

Target shortfall. If a 200 TiB target is set and the current plan misses it, the growth path translates the gap into additional groups, larger disks, or shelf steps when bay assumptions are provided.
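
A minimal sketch of that translation, assuming every added group yields the same planned usable capacity; the function name, the 23 TiB per-group figure, and the shelf capacity are illustrative:

    import math

    # Convert a usable-capacity shortfall into whole RAID groups and
    # expansion shelves (simplified; ignores reserved-bay policies).
    def growth_steps(target_tib: float, current_tib: float,
                     usable_per_group_tib: float, groups_per_shelf: int) -> dict:
        gap = max(0.0, target_tib - current_tib)
        groups = math.ceil(gap / usable_per_group_tib)
        shelves = math.ceil(groups / groups_per_shelf)
        return {"gap_tib": gap, "groups": groups, "shelves": shelves}

    # 200 TiB target, 140 TiB planned, ~23 TiB usable per added group,
    # three new groups fit on one expansion shelf:
    print(growth_steps(200, 140, 23, 3))  # gap 60 TiB -> 3 groups, 1 shelf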

FAQ

Why is usable capacity lower than drive count times drive size?

Mirrors, parity, spares, metadata overhead, platform reserve, and fill target all reduce the number that should be assigned to workloads.

Does RAID replace backups?

No. RAID protects against some drive failures. It does not protect against deletion, corruption, theft, controller faults, malware, or site loss.

Why does the tool warn about wide RAID 5 or RAID-Z1?

Single parity gives only one-drive fault tolerance. Large, wide groups can spend a long time rebuilding after failure, which raises exposure to another fault.
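
That exposure can be sized with a back-of-envelope rebuild window; a minimal sketch, assuming the whole failed drive is re-streamed at a derated rate (real controllers and RAID-Z resilvers behave differently):

    # Rough rebuild window: one drive's capacity at the assumed rebuild
    # rate, derated for production contention.
    def rebuild_hours(drive_tb: float, rebuild_mb_s: float,
                      contention_pct: float) -> float:
        effective_mb_s = rebuild_mb_s * (1 - contention_pct / 100)
        return drive_tb * 1e6 / effective_mb_s / 3600  # 1 TB = 1e6 MB

    # An 18 TB drive at 100 MB/s with 50% contention:
    print(round(rebuild_hours(18, 100, 50)))  # ~100 hours of reduced protection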

What if my appliance reserves capacity differently?

Use the metadata overhead and system reserve per drive fields to approximate the appliance policy, then document that assumption in the exported context.
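
For example, a fixed per-drive partition reserve can be folded into the effective drive size before the layout math; a minimal sketch with illustrative names:

    # Fold a per-drive system reserve (decimal GB) into the effective
    # drive size before running the layout math.
    def effective_drive_tb(drive_tb: float, reserve_gb: float) -> float:
        return drive_tb - reserve_gb / 1000.0

    print(effective_drive_tb(12.0, 10))  # 11.99 TB per nominal 12 TB drive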

Glossary

Parity
Redundant information that can reconstruct data after a tolerated drive failure.
Hot spare
A reserved drive that is not counted as planned usable capacity.
URE
Unrecoverable read error, a media read failure that matters most during rebuilds, when large amounts of surviving data must be read.
Fill target
The planned occupancy ceiling used to avoid sizing to the theoretical maximum.
