Introduction:

Redundant disk arrays are pooled storage systems that spread data across multiple drives to balance capacity and protection. Enter a few basics and see how mirroring and parity shape the space you can actually use.

Choose the unit that matches your planning notes, set the number of drives and their size, then select a layout such as mirroring, single parity, or dual parity. You can also reserve drives as spares, adjust filesystem overhead, and pick a target fill so the plan keeps headroom.

For a quick sense check, twelve drives of 10 terabytes with dual parity and no spares yield 100 terabytes of usable space. The remainder sits in parity, which carries the recovery information that makes failures tolerable.

Mixed sizes are equalized to the smallest active drive, so plan groups and spares with that effect in mind. Keeping some headroom avoids slowdowns when the array is near capacity.

Technical Details:

Storage capacity here means the application‑usable space after accounting for redundancy, filesystem overhead, and any deliberate headroom. The underlying scheme is a Redundant Array of Independent Disks (RAID), and efficiency depends on the chosen layout and the number of engaged drives.

Inputs describe physical or nominal sizes and policy choices. Quantities are converted to bytes using either decimal prefixes (MB, GB, TB, PB) or binary prefixes (MiB, GiB, TiB, PiB). Results present usable space, redundancy overhead, and reserved space in the selected unit.

Computation proceeds by equalizing active drives, applying protection efficiency, removing filesystem overhead, and applying the target fill. The equalization step uses the smallest active drive when sizes differ; spares are excluded from active counts.

Q=N×C
Q'=Q×(1−F)
U=Q'×E×T
R=Q'×(1−E)×T
Z=Q'×(1−T)
Symbols and units
Symbol | Meaning | Unit/Datatype | Source
N | Engaged active drives | integer | Derived
C | Equalized per‑drive capacity | bytes | Derived
Q | Equalized raw capacity before overhead | bytes | Derived
Q' | Raw post‑overhead | bytes | Derived
U | Usable capacity at target fill | bytes | Derived
R | Redundancy overhead at target fill | bytes | Derived
Z | Reserved or free space | bytes | Derived
E | Protection efficiency | ratio | Derived
F | Filesystem overhead fraction | 0–1 | Input
T | Target fill fraction | 0–1 | Input
Worked example. Twelve drives of 10 TB, dual parity, no spares, zero filesystem overhead, target fill at 100%.
E=(n−2)⁄n=10⁄12≈0.8333
Q=N×C=12×10=120 TB
U=Q×E×T=120×0.8333×1=100 TB

Interpretation: ~100 TB usable, ~20 TB parity, ~0 TB reserved.
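
The same arithmetic can be written as a short routine. The sketch below is a minimal TypeScript rendering of the formulas above, not the tool's actual source; the function name and return shape are illustrative. It reproduces the worked example.

```ts
// Minimal sketch of the capacity pipeline (hypothetical names, not the tool's code).
// Inputs follow the symbols above: N engaged drives, C equalized per-drive capacity,
// E protection efficiency, F filesystem overhead fraction, T target fill fraction.
function computeCapacity(N: number, C: number, E: number, F: number, T: number) {
  const Q = N * C;                      // equalized raw capacity before overhead
  const Qp = Q * (1 - F);               // Q': raw post-overhead
  const usable = Qp * E * T;            // U: usable capacity at target fill
  const redundancy = Qp * (1 - E) * T;  // R: redundancy overhead at target fill
  const reserved = Qp * (1 - T);        // Z: reserved / free space
  return { Q, Qp, usable, redundancy, reserved };
}

// Worked example: 12 × 10 TB, dual parity (E = 10/12), zero overhead, full fill.
const r = computeCapacity(12, 10, 10 / 12, 0, 1);
console.log(r.usable.toFixed(2));     // "100.00" (TB)
console.log(r.redundancy.toFixed(2)); // "20.00"  (TB)
console.log(r.reserved.toFixed(2));   // "0.00"   (TB)
```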

Layout efficiency formulas
Layout | Efficiency E | Minimum engaged | Notes
Stripe | E = 1 | 1 | No protection.
Mirror | E = 1⁄w | 2 | Mirror width w copies.
Dual mirror stripe | E = 1⁄w | 2 | RAID 10 with mirror width w.
Single parity | E = (n−1)⁄n | 3 | Requires three or more engaged.
Dual parity | E = (n−2)⁄n | 4 | Requires four or more engaged.
Single parity groups | E = (p−1)⁄p | 2 groups × 3 | Up to g groups of width p.
Dual parity groups | E = (p−2)⁄p | 2 groups × 4 | Up to g groups of width p.
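
The table maps directly onto a small selector. This is a sketch under the formulas above; the layout identifiers, parameter defaults, and function name are assumptions for illustration, not the tool's API.

```ts
// Protection efficiency E per layout, following the table above.
// n = engaged drives, w = mirror width, g = groups, p = per-group width.
// Returns 0 when the minimum engaged-drive requirement is not met.
type Layout = "stripe" | "mirror" | "mirror-stripe" | "raid5" | "raid6" | "raid50" | "raid60";

function efficiency(layout: Layout, n: number, w = 2, g = 2, p = 3): number {
  switch (layout) {
    case "stripe":        return n >= 1 ? 1 : 0;            // no protection
    case "mirror":
    case "mirror-stripe": return n >= w ? 1 / w : 0;        // w copies of each block
    case "raid5":         return n >= 3 ? (n - 1) / n : 0;  // single parity
    case "raid6":         return n >= 4 ? (n - 2) / n : 0;  // dual parity
    case "raid50":        return p >= 3 && n >= g * p ? (p - 1) / p : 0; // grouped single parity
    case "raid60":        return p >= 4 && n >= g * p ? (p - 2) / p : 0; // grouped dual parity
    default:              return 0;
  }
}

console.log(efficiency("raid6", 12)); // ≈ 0.8333
```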

Comparability notes. Decimal units use 10‑based multiples and binary units use 2‑based multiples; decide once and stick with it. Equalization uses the smallest active drive when sizes vary; extra capacity on larger drives does not improve totals.
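
Spare selection and equalization can be shown in a few lines. The sketch below assumes the behavior described here and in the assumptions further down (largest drives become spares first, then active drives equalize to the smallest); the function name is illustrative.

```ts
// Split drives into spares and active set, then equalize the active drives.
// Largest drives are taken as spares first; every active drive counts only
// as much as the smallest active drive.
function equalize(sizes: number[], spareCount: number) {
  const sorted = [...sizes].sort((a, b) => b - a);          // largest first
  const spares = sorted.slice(0, spareCount);
  const active = sorted.slice(spareCount);
  const perDisk = active.length ? Math.min(...active) : 0;  // C: equalized per-drive capacity
  return { active, spares, perDisk, equalizedRaw: perDisk * active.length }; // Q = N × C
}

// Mixed sizes: 4 × 10 TB and 2 × 4 TB with one spare. The spare takes a 10 TB
// drive; the remaining five equalize to 4 TB, giving 20 TB equalized raw.
console.log(equalize([10, 10, 10, 10, 4, 4], 1));
```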

Validation & bounds extracted from the UI

Inputs, ranges, and constraints
Field | Type / Choices | Min | Max | Step/Pattern | Notes
Unit | enum: MB, GB, TB, PB, MiB, GiB, TiB, PiB | – | – | – | Applies to inputs and outputs.
Disk count | number | 0 | – | 1 | Includes spares.
Per‑disk capacity | number | 0 | – | 0.01 | Used when manual entries are off.
RAID level | enum: 0, 1, 5, 6, 10, 50, 60 | – | – | – | Stripe has no protection.
Mirror width | number | 2 | – | 1 | Visible for level 1.
RAID10 mirror width | number | 2 | – | 1 | Visible for level 10.
Groups | number | 2 | – | 1 | Visible for 50 and 60.
Per‑group width | number | 3/4 | – | 1 | Min 3 for single parity; 4 for dual.
Hot spares | number | 0 | ≤ count | 1 | Largest drives are taken as spares first.
Filesystem overhead | range | 0 | 1 | 0.01 | Neutral default 0; typical 3–10%.
Target fill | range | 0 | 1 | 0.01 | Leave headroom to avoid performance cliffs.
Manual per‑disk capacities | number per drive | 0 | – | 0.01 | Equalized to the smallest active.

Units, precision & rounding policy

  • Decimal: MB=10⁶, GB=10⁹, TB=10¹², PB=10¹⁵ bytes.
  • Binary: MiB=1024², GiB=1024³, TiB=1024⁴, PiB=1024⁵ bytes.
  • Displayed values round to two decimals; decimal separator is a dot.
  • Fractions F and T are clamped to 0–1.
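
These conventions reduce to a small lookup and a clamp. A sketch under the stated policy; names such as UNIT_BYTES and clamp01 are illustrative, not the tool's identifiers.

```ts
// Bytes per unit: decimal prefixes use powers of 10, binary prefixes powers of 1024.
const UNIT_BYTES: Record<string, number> = {
  MB: 1e6, GB: 1e9, TB: 1e12, PB: 1e15,
  MiB: 1024 ** 2, GiB: 1024 ** 3, TiB: 1024 ** 4, PiB: 1024 ** 5,
};

// Fractions such as filesystem overhead F and target fill T are clamped to 0–1.
const clamp01 = (x: number) => Math.min(1, Math.max(0, x));

// Convert a value in the selected unit to bytes, and format bytes back with two decimals.
const toBytes = (value: number, unit: string) => value * UNIT_BYTES[unit];
const display = (bytes: number, unit: string) => (bytes / UNIT_BYTES[unit]).toFixed(2);

console.log(display(toBytes(10, "TB"), "TiB")); // 10 TB ≈ "9.09" TiB
console.log(clamp01(1.2));                      // 1
```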

Networking & storage behavior

Processing is client‑only; no data is transmitted or stored server‑side. One charting library is fetched from a public CDN to render the breakdown dial.

Assumptions & limitations

  • Efficiency uses ideal parity math; metadata beyond declared overhead is not modeled.
  • Extra drives beyond filled groups in grouped layouts are ignored.
  • Largest drives are selected as spares before equalization.
  • Mixed sizes equalize to the smallest active drive.
  • Stripe provides no fault tolerance.
  • Single parity needs at least three engaged; dual parity needs at least four.
  • Mirror layouts drop leftover drives that do not complete a full mirror set.
  • No modeling of rebuild time, failure rates, or performance.
  • Headroom reflects your target fill, not automatic tuning.
  • Unit choice affects displayed magnitudes but not underlying bytes.

Edge cases & error sources

  • Blank or non‑numeric entries are treated as zero.
  • Spare count beyond total disks is clamped to the maximum.
  • Groups or per‑group widths that cannot be fully filled yield zero efficiency for grouped layouts.
  • Rounding to two decimals can mask small differences.
  • Decimal vs binary units can cause apparent discrepancies.
  • Manual entries with one very small drive can severely reduce totals.
  • Very large integers may exceed safe precision in some environments.
  • Locale decimal commas are not parsed; use a dot.
  • Target fill below 1 allocates some space to reserved by design.
  • Filesystem overhead at 1 removes all usable capacity.
  • Zero disks or insufficient engaged disks produce zero usable capacity.
  • Changing units after input can change rounding of displayed results.
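
Several of these cases come down to defensive input handling. The following sketch shows one way the described normalization could look; the helper names are hypothetical and not taken from the tool's source.

```ts
// Treat blank or non-numeric entries as zero; locale decimal commas are not supported.
function parseCapacity(raw: string): number {
  const n = Number.parseFloat(raw);
  return Number.isFinite(n) && n >= 0 ? n : 0;
}

// Clamp a spare count that exceeds the total number of disks.
const clampSpares = (spares: number, diskCount: number) =>
  Math.max(0, Math.min(spares, diskCount));

console.log(parseCapacity(""));    // 0
console.log(parseCapacity("8,5")); // 8 (the comma and everything after it is dropped)
console.log(clampSpares(20, 12));  // 12
```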

Privacy & compliance

No data is transmitted or stored server‑side. Results are planning aids and do not guarantee performance or availability outcomes.

Step‑by‑Step Guide

Disk array capacity planning with redundancy and headroom.

  1. Select the Unit.
  2. Enter Disk count and Per‑disk capacity.
  3. Choose the Layout and its options.
  4. Set Hot spares, Filesystem overhead, and Target fill.
  5. Read usable, redundancy, and reserved figures in the chosen unit.

Example. 12 × 10 TB with dual parity, no spares, overhead 0, target fill 100% → ~100 TB usable.

Pro tip: keep unit and target fill consistent across scenarios for clean comparisons.

FAQ

Is my data stored?

No. Calculations run in your browser and values are not sent to a server.

Processing is client‑only.

How accurate is the estimate?

It reflects ideal parity or mirroring math plus your overhead and fill choices. Real systems add metadata, alignment, and performance trade‑offs.

Use vendor guidance for final sizing.

Which units can I use?

Both decimal (MB, GB, TB, PB) and binary (MiB, GiB, TiB, PiB) prefixes are supported.

Choose one and keep it consistent.

Can I plan with mixed drive sizes?

Yes. When sizes differ, active drives equalize to the smallest, so larger ones do not add capacity beyond that floor.

Spares are excluded from equalization.

What does the target fill do?

It reserves some space as free to keep headroom. Lower values trade usable space for performance safety.

Common headroom is 10% or more.

How do grouped layouts work?

You set the number of groups and per‑group width. Only fully filled groups count toward capacity and efficiency.

Unused drives beyond full groups are ignored.

Does stripe provide protection?

No. A pure stripe maximizes capacity with zero redundancy.

Use only for noncritical data.

Glossary

Engaged drives
Active disks participating in the layout after spares.
Mirror width
Number of copies kept for each block.
Parity
Redundant information used to reconstruct data.
Group
A set of drives forming one parity set.
Per‑group width
Drives per group for grouped layouts.
Filesystem overhead
Space reserved for metadata and structure.
Target fill
Desired fullness of usable space to keep headroom.
Efficiency
Fraction of post‑overhead raw space that becomes usable.