Projected Usable Capacity
Introduction:

Redundant Array of Independent Disks (RAID) combines multiple drives to improve availability and performance while trading some raw capacity for redundancy. Capacity planning must consider mirroring or parity overheads, unequal disk sizes, and practical limits like recommended occupancy and filesystem metadata. This tool focuses on usable capacity, helping you quantify how much space remains for data after protections and planning reserves.

You enter either a uniform disk size and count or a list of heterogeneous capacities, then choose a protection scheme and optional spares. The calculator equalizes mixed disks to the smallest active unit, deducts configurable filesystem overhead, and applies a target fill percentage, yielding an efficiency factor and a projected usable figure with a clear breakdown.

Use it when scoping new arrays, expanding shelves, or comparing layouts before procurement. Caution: estimates are planning aids that omit controller peculiarities, rebuild penalties, vendor-specific metadata, and workload-dependent effects.

Technical Details:

Concept overview. Usable capacity reflects three stages: equalization, non-data overheads, and protection efficiency. Equalization constrains mixed arrays to the smallest active disk across the disks actually used by the chosen layout. Non-data overheads model filesystem and metadata. Efficiency captures the data fraction after mirroring or parity. Variables include active disk count, smallest active capacity, overhead rate, target fill, mirror width, and parity-per-group assumptions.

Core equations and definitions

Overall sequence:

C_usable = e × f × (1 − o) × C_eq
  • C_eq – equalized raw capacity. For mixed disks, c_min times the number of disks used by the geometry.
  • o – filesystem/metadata overhead rate (0–0.15 typical).
  • f – target fill (recommended maximum occupancy, 0.50–0.99).
  • e – protection efficiency:
    • RAID0: 1
    • RAID1 / RAID10 (mirror width w): 1÷w
    • RAID5 (n active): (n−1)÷n
    • RAID6 (n active): (n−2)÷n
    • RAID50/60 (group size per): (per−parity)÷per, where parity is 1 for RAID50 and 2 for RAID60
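
To make the protection factors concrete, here is a minimal TypeScript sketch of the same pipeline. The names (Layout, protectionEfficiency, usableCapacity) and the default values are illustrative assumptions, not the calculator's internal API.

```typescript
// Minimal sketch of the capacity pipeline above. Names and defaults are
// illustrative, not the calculator's internal API.
type RaidLevel = 0 | 1 | 5 | 6 | 10 | 50 | 60;

interface Layout {
  level: RaidLevel;
  activeDisks: number;   // n – disks used by the geometry (spares excluded)
  mirrorWidth?: number;  // w – RAID1 / RAID10
  perGroup?: number;     // per – RAID50 / RAID60 group size
}

// e – data fraction implied by the protection scheme
function protectionEfficiency(l: Layout): number {
  switch (l.level) {
    case 0:  return 1;
    case 1:
    case 10: return 1 / (l.mirrorWidth ?? 2);
    case 5:  return (l.activeDisks - 1) / l.activeDisks;
    case 6:  return (l.activeDisks - 2) / l.activeDisks;
    case 50: return ((l.perGroup ?? 3) - 1) / (l.perGroup ?? 3);
    case 60: return ((l.perGroup ?? 4) - 2) / (l.perGroup ?? 4);
    default: throw new Error("unsupported RAID level");
  }
}

// C_usable = e × f × (1 − o) × C_eq
function usableCapacity(cEq: number, o: number, f: number, l: Layout): number {
  return protectionEfficiency(l) * f * (1 - o) * cEq;
}

// Worked example below: 8 × 10 TB, RAID6, o = 0.03, f = 0.85 → ≈ 49.47 TB
console.log(usableCapacity(80, 0.03, 0.85, { level: 6, activeDisks: 8 }));
```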

Interpretation and protections

RAID level | Minimum active disks | Fault tolerance | Efficiency (symbolic) | Notes
0 | 1 | None | 1 | No redundancy; for testing or scratch workloads.
1 | w | w−1 per set | 1/w | Mirroring; group width w ≥ 2.
5 | 3 | 1 disk | (n−1)/n | Single parity across the stripe.
6 | 4 | 2 disks | (n−2)/n | Dual parity across the stripe.
10 | w×2 | w−1 per set | 1/w | Striped mirrors; active disk count a multiple of w.
50 | ≥ 2 groups × 3 | 1 per group | (per−1)/per | Striped RAID5 groups.
60 | ≥ 2 groups × 4 | 2 per group | (per−2)/per | Striped RAID6 groups.

Fault tolerance is per mirror set or per parity group, not across the entire pool. Grouping may leave some disks unused when counts do not divide evenly.
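
As a rough illustration of that remainder effect, the sketch below (a hypothetical helper, not the tool's code) computes how many active disks a group-based level can actually use.

```typescript
// Hypothetical helper: disks actually consumed by a group-based layout.
// Any remainder beyond a whole number of groups stays idle.
function disksUsedByGroups(activeDisks: number, perGroup: number): number {
  return Math.floor(activeDisks / perGroup) * perGroup;
}

// 14 active disks in RAID50 groups of 4 → 12 used, 2 idle
console.log(disksUsedByGroups(14, 4)); // 12
```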

Variables & parameters

Parameter | Meaning | Unit/Datatype | Typical Range | Notes
Capacity per disk | Uniform capacity for all drives | MB/GB/TB/PB (SI) or MiB/GiB/TiB/PiB (IEC) | 0.5–40 TB | Ignored when a list of disks is provided.
Total disks | Drive count including actives and spares | Integer | 1–96 | Active set excludes hot spares.
Disk list | Comma/space separated capacities | Array of numbers | 2–96 entries | Mixed sizes are equalized to the smallest active.
Data protection | Chosen RAID level and geometry | Enum | 0/1/5/6/10/50/60 | Mirror width or group size may apply.
Mirror width | Members per mirror set | Integer | 2–4 | Used by RAID1 and RAID10.
Groups × per-group | Number of groups and members | Integers | 2–12 groups; 3–16 per group | Used by RAID50/60.
Hot spares | Global reserve drives | Integer | 0–N | Not part of active capacity.
Filesystem overhead | Non-data space for metadata | Proportion | 0–0.15 | Adjust per platform guidance.
Target fill | Planning occupancy threshold | Proportion | 0.50–0.99 | Lower for fragmentation-sensitive workloads.

Worked example. Eight 10 TB disks, RAID6, no spares, 3% overhead, 85% target fill.

Equalization:

C_eq=c_min×n=10×8=80 TB

Overhead:

C_eff=(1−0.03)×80=77.6 TB

Target fill:

C_used=0.85×77.6=65.96 TB

Protection efficiency (RAID6, n=8):

e=(8−2)÷8=0.75

Usable capacity:

C_usable=0.75×65.96=49.47 TB
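
The same arithmetic can be reproduced in a few lines of TypeScript; the variable names below are illustrative and simply repeat the steps above.

```typescript
// Reproducing the worked example; variable names are illustrative.
const cMin = 10;                  // smallest active disk, TB
const n = 8;                      // active disks
const cEq = cMin * n;             // 80 TB equalized raw
const cEff = (1 - 0.03) * cEq;    // 77.6 TB after 3% filesystem overhead
const cUsed = 0.85 * cEff;        // 65.96 TB at 85% target fill
const e = (n - 2) / n;            // 0.75 – RAID6 dual parity
const cUsable = e * cUsed;        // ≈ 49.47 TB projected usable
console.log(cUsable.toFixed(2));  // "49.47"
```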

Assumptions & limitations

  • Equalization limits mixed arrays to the smallest active disk.
  • Parity and mirroring costs are modeled as ideal fractions without controller overheads.
  • Hot spares are excluded from active capacity calculations.
  • Filesystem overhead is a single rate, not a platform-specific curve.

Edge cases & error sources

  • Insufficient disks for a chosen level invalidate the layout.
  • Group-based levels may leave disks unused when counts do not divide evenly.
  • Extreme target fill or overhead values can dominate results.
  • Heterogeneous disks amplify equalization loss relative to nominal raw totals.
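
A rough validity check for the first two points might look like the following sketch; the minimum counts come from the level table above, and the function is an assumption for illustration rather than the tool's actual validation logic.

```typescript
// Sketch of a minimum-disk check per level; thresholds follow the table above.
// This is illustrative, not the tool's actual validation logic.
function meetsMinimum(
  level: 0 | 1 | 5 | 6 | 10 | 50 | 60,
  active: number,
  mirrorWidth = 2,
  groups = 2,
  perGroup = 3,
): boolean {
  switch (level) {
    case 0:  return active >= 1;
    case 1:  return active >= mirrorWidth;        // w members per mirror set
    case 5:  return active >= 3;
    case 6:  return active >= 4;
    case 10: return active >= mirrorWidth * 2;    // at least two mirror sets
    case 50: return active >= groups * Math.max(perGroup, 3);
    case 60: return active >= groups * Math.max(perGroup, 4);
    default: return false;
  }
}

console.log(meetsMinimum(6, 3)); // false – RAID6 needs at least 4 active disks
```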

Scientific validity & references

Concepts align with the RAID taxonomy (Patterson, Gibson, Katz), standard storage engineering texts, and the SNIA Dictionary’s definitions of mirroring, parity, and usable capacity.

Privacy & compliance

Inputs describe device capacities rather than personal data; typical privacy regulations such as GDPR or HIPAA do not apply to these calculations.

Step-by-Step Guide:

Follow this sequence to estimate protected, practical storage for your layout.

  1. Choose capacity unit and enter capacity per disk with total disks; or paste a disk list to model heterogeneous sizes.
  2. Select a data protection scheme. For mirrors, set the mirror width. For striped parity levels (RAID50/60), define the number of groups and the per-group size.
  3. Specify hot spares to exclude reserve drives from active capacity.
  4. Adjust filesystem overhead and target fill to match platform guidance and operational headroom.
  5. Review the projected usable capacity, efficiency fraction, and a breakdown of redundancy and reserved space; export results as CSV or JSON when needed.

Caution: Equalization can significantly reduce capacity when disk sizes vary widely.

Troubleshooting:

  • Result empty – verify disk count meets the minimum for the selected level.
  • Unexpectedly low capacity – check for mixed disk sizes causing equalization loss.
  • Disks unused – for group-based levels, ensure counts are multiples of the per-group setting.
  • Efficiency seems off – confirm mirror width, parity assumptions, and active disk count.
  • Exports look inconsistent – confirm unit selection and comma vs. space separators in lists.

Advanced Tips:

  • Tip: Favor identical capacities within parity groups to minimize equalization loss.
  • Tip: Choose a lower target fill for fragmentation-prone filesystems or mixed workloads.
  • Tip: Reserve at least one hot spare per chassis to reduce rebuild start times.
  • Tip: Increase per-group size for parity levels to improve efficiency, balanced against rebuild risk.
  • Tip: Use occupancy targets aligned with your snapshot and replication policies to avoid surprise shortfalls.

FAQ:

Does it handle mixed disks?

Yes. Mixed arrays are equalized to the smallest active disk times the number of disks used by the geometry, which can reduce capacity noticeably.

How are hot spares treated?

Hot spares are excluded from the active set and do not contribute to equalization or efficiency. They exist solely to accelerate recovery readiness.

What is “target fill” for?

Target fill models a practical occupancy ceiling to preserve performance headroom and maintenance space. Lower values are prudent when fragmentation is expected.

Is my data stored?

No. Calculations run locally in your browser using a reactive engine; inputs are not transmitted to a server.

Why does efficiency differ by level?

Mirrors duplicate data across members, while parity levels reserve space for parity blocks. Efficiency expresses the data-to-raw ratio implied by each scheme.

Glossary:

Equalization
Capacity constrained by the smallest active disk across used members.
Parity
Redundancy computed from data blocks to tolerate disk failures.
Mirror width
Number of copies within a mirrored set.
Target fill
Planned maximum occupancy to preserve performance headroom.
Filesystem overhead
Space consumed by metadata and structure, not user data.
Hot spare
Standby drive reserved for automatic rebuilds.
Fault tolerance
Number of simultaneous disk failures a layout can endure.