Metric | Value | Copy |
---|---|---|
Disks (active + spares) | {{ activeDisks.length }} + {{ spareCount }} | |
RAID | {{ raidLabel }} | |
Raw storage (all) ({{ unit }}) | {{ formatCap(rawAll) }} | |
Spares capacity ({{ unit }}) | {{ formatCap(sparesRaw) }} | |
Raw active ({{ unit }}) | {{ formatCap(rawActive) }} | |
Equalized raw ({{ unit }}) | {{ formatCap(equalizedRaw) }} | |
Filesystem overhead ({{ unit }}) | {{ formatCap(equalizedRaw - rawEffective) }} | |
Raw post-overhead ({{ unit }}) | {{ formatCap(rawEffective) }} | |
Target fill | {{ Math.round(targetFill*100) }} % | |
Protected efficiency | {{ (efficiency*100).toFixed(2) }} % | |
Usable capacity ({{ unit }}) | {{ formatCap(usable) }} | |
Redundancy overhead ({{ unit }}) | {{ formatCap(redundancyRaw) }} | |
Reserved / free ({{ unit }}) | {{ formatCap(reservedRaw) }} | |
Redundant disk arrays are pooled storage systems that spread data across multiple drives to balance capacity and protection. Enter a few basics and see how mirroring and parity shape the space you can actually use.
Choose the unit that matches your planning notes, set the number of drives and their size, then select a layout such as mirroring, single parity, or dual parity. You can also reserve drives as spares, adjust filesystem overhead, and pick a target fill so the plan keeps headroom.
For a quick sense check, twelve drives of 10 terabytes with dual parity and no spares yield 100 terabytes of usable space. The remainder sits in parity, which carries the recovery information that makes failures tolerable.
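As a quick check of that arithmetic (a sketch only; the calculator may round or display differently):

```ts
// Worked check of the example: 12 × 10 TB with dual parity and no spares.
const drives = 12;
const perDriveTB = 10;
const raw = drives * perDriveTB;          // 120 TB raw
const efficiency = (drives - 2) / drives; // dual parity: (n - 2) / n = 10/12
const usable = raw * efficiency;          // 100 TB usable
const parity = raw - usable;              // 20 TB held as parity
```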
Mixed sizes are equalized to the smallest active drive, so plan groups and spares with that effect in mind. Keeping some headroom avoids slowdowns when the array is near capacity.
Storage capacity here means the application‑usable space after accounting for redundancy, filesystem overhead, and any deliberate headroom. The underlying scheme is a Redundant Array of Independent Disks (RAID), and efficiency depends on the chosen layout and the number of engaged drives.
Inputs describe physical or nominal sizes and policy choices. Quantities are converted to bytes using either decimal prefixes (MB, GB, TB, PB) or binary prefixes (MiB, GiB, TiB, PiB). Results present usable space, redundancy overhead, and reserved space in the selected unit.
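One way to implement that conversion is a simple factor table. The sketch below uses the standard definitions of these prefixes; the helper names are assumptions, not the app's actual API.

```ts
// Bytes per supported unit: decimal prefixes are powers of 1000, binary prefixes powers of 1024.
const UNIT_BYTES: Record<string, number> = {
  MB: 1e6, GB: 1e9, TB: 1e12, PB: 1e15,
  MiB: 2 ** 20, GiB: 2 ** 30, TiB: 2 ** 40, PiB: 2 ** 50,
};

const toBytes = (value: number, unit: string) => value * UNIT_BYTES[unit];
const fromBytes = (bytes: number, unit: string) => bytes / UNIT_BYTES[unit];
```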
Computation proceeds by equalizing active drives, applying protection efficiency, removing filesystem overhead, and applying the target fill. The equalization step uses the smallest active drive when sizes differ; spares are excluded from active counts.
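Read as code, that order of operations might look like the sketch below. This is one plausible implementation consistent with the description, not the calculator's actual source; the variable names are assumptions chosen to line up with the symbol table that follows.

```ts
// Illustrative pipeline: equalize → protection efficiency → filesystem overhead → target fill.
function planCapacity(
  activeSizesBytes: number[], // spares already excluded
  efficiency: number,         // E, from the chosen layout
  fsOverhead: number,         // F, fraction 0–1
  targetFill: number,         // T, fraction 0–1
) {
  const n = activeSizesBytes.length;                               // N
  const perDrive = Math.min(...activeSizesBytes);                  // C: smallest active drive
  const equalizedRaw = n * perDrive;                               // Q
  const rawEffective = equalizedRaw * (1 - fsOverhead);            // Q′: raw post-overhead
  const usable = rawEffective * efficiency * targetFill;           // U
  const redundancy = rawEffective * (1 - efficiency) * targetFill; // R
  const reserved = rawEffective - usable - redundancy;             // Z: headroom kept free
  return { equalizedRaw, rawEffective, usable, redundancy, reserved };
}
```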
Symbol | Meaning | Unit/Datatype | Source |
---|---|---|---|
N | Engaged active drives | integer | Derived |
C | Equalized per‑drive capacity | bytes | Derived |
Q | Equalized raw capacity before overhead | bytes | Derived |
Q' | Raw post‑overhead | bytes | Derived |
U | Usable capacity at target fill | bytes | Derived |
R | Redundancy overhead at target fill | bytes | Derived |
Z | Reserved or free space | bytes | Derived |
E | Protection efficiency | ratio | Derived |
F | Filesystem overhead fraction | 0–1 | Input |
T | Target fill fraction | 0–1 | Input |
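Reading the derivations in the order they are applied, one consistent set of relations is Q = N × C, Q′ = Q × (1 − F), U = Q′ × E × T, R = Q′ × (1 − E) × T, and Z = Q′ − U − R. The placement of the target fill factor T is an assumption chosen to match the pipeline sketch above.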
For the worked example (12 × 10 TB, dual parity, no spares, overhead 0, full fill), these quantities come out to ~100 TB usable, ~20 TB parity, and ~0 TB reserved.
Layout | Efficiency E | Minimum engaged | Notes |
---|---|---|---|
Stripe | E = 1 | 1 | No protection. |
Mirror | E = 1⁄w | 2 | Mirror width w copies. |
Dual mirror stripe | E = 1⁄w | 2 | RAID 10 with mirror width w. |
Single parity | E = (n−1)⁄n | 3 | Requires three or more engaged. |
Dual parity | E = (n−2)⁄n | 4 | Requires four or more engaged. |
Single parity groups | E = (p−1)⁄p | 2 groups × 3 | Up to g groups of width p. |
Dual parity groups | E = (p−2)⁄p | 2 groups × 4 | Up to g groups of width p. |
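The same formulas can be expressed as a small lookup. The layout and field names below are assumptions chosen to mirror the table; group layouts use only the per‑group width because each filled group has the same efficiency.

```ts
// Protection efficiency E for each layout in the table above.
type Layout =
  | { kind: "stripe" }
  | { kind: "mirror"; width: number }                   // w ≥ 2; also covers RAID 10
  | { kind: "singleParity"; engaged: number }           // n ≥ 3
  | { kind: "dualParity"; engaged: number }             // n ≥ 4
  | { kind: "singleParityGroups"; groupWidth: number }  // p ≥ 3
  | { kind: "dualParityGroups"; groupWidth: number };   // p ≥ 4

function protectionEfficiency(layout: Layout): number {
  switch (layout.kind) {
    case "stripe":             return 1;
    case "mirror":             return 1 / layout.width;
    case "singleParity":       return (layout.engaged - 1) / layout.engaged;
    case "dualParity":         return (layout.engaged - 2) / layout.engaged;
    case "singleParityGroups": return (layout.groupWidth - 1) / layout.groupWidth;
    case "dualParityGroups":   return (layout.groupWidth - 2) / layout.groupWidth;
  }
}
```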
Comparability notes. Decimal units use 10‑based multiples and binary units use 2‑based multiples; decide once and stick with it. Equalization uses the smallest active drive when sizes vary; extra capacity on larger drives does not improve totals.
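A minimal sketch of that equalization step, assuming the spare policy described in the inputs table below (largest drives become spares first):

```ts
// Equalize mixed sizes: set aside spares from the largest drives, then floor every
// remaining active drive to the smallest active size.
function equalize(diskSizesBytes: number[], spareCount: number) {
  const sorted = [...diskSizesBytes].sort((a, b) => b - a); // largest first
  const spares = sorted.slice(0, spareCount);
  const active = sorted.slice(spareCount);
  const perDrive = active.length > 0 ? Math.min(...active) : 0;
  return { spares, active, equalizedRaw: active.length * perDrive };
}
```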
Field | Type / Choices | Min | Max | Step/Pattern | Notes |
---|---|---|---|---|---|
Unit | MB, GB, TB, PB, MiB, GiB, TiB, PiB | — | — | enum | Applies to inputs and outputs. |
Disk count | number | 0 | — | 1 | Includes spares. |
Per‑disk capacity | number | 0 | — | 0.01 | Used when manual entries are off. |
RAID level | 0, 1, 5, 6, 10, 50, 60 | — | — | enum | Stripe has no protection. |
Mirror width | number | 2 | — | 1 | Visible for level 1. |
RAID10 mirror width | number | 2 | — | 1 | Visible for level 10. |
Groups | number | 2 | — | 1 | Visible for 50 and 60. |
Per‑group width | number | 3/4 | — | 1 | Min 3 for single parity; 4 for dual. |
Hot spares | number | 0 | ≤ count | 1 | Largest drives are taken as spares first. |
Filesystem overhead | range | 0 | 1 | 0.01 | Neutral default 0; typical 3–10%. |
Target fill | range | 0 | 1 | 0.01 | Leave headroom to avoid performance cliffs. |
Manual per‑disk capacities | number per drive | 0 | — | 0.01 | Equalized to the smallest active. |
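Gathered into one structure, the inputs above might be modeled as follows; this is a sketch, and the field names are illustrative rather than the calculator's actual data model.

```ts
// Illustrative shape of the calculator's inputs, mirroring the fields table above.
interface CalculatorInputs {
  unit: "MB" | "GB" | "TB" | "PB" | "MiB" | "GiB" | "TiB" | "PiB";
  diskCount: number;            // includes spares
  perDiskCapacity: number;      // in `unit`; used when manual entries are off
  raidLevel: 0 | 1 | 5 | 6 | 10 | 50 | 60;
  mirrorWidth?: number;         // ≥ 2; levels 1 and 10
  groups?: number;              // ≥ 2; levels 50 and 60
  perGroupWidth?: number;       // ≥ 3 for single parity, ≥ 4 for dual parity
  hotSpares: number;            // 0 … disk count; largest drives first
  filesystemOverhead: number;   // fraction 0–1
  targetFill: number;           // fraction 0–1
  manualCapacities?: number[];  // per drive, in `unit`
}
```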
Processing is client‑only; no data is transmitted or stored server‑side. One charting library is fetched from a public CDN to render the breakdown dial.
Results are planning aids and do not guarantee performance or availability outcomes.
Disk array capacity planning with redundancy and headroom.
Example. 12 × 10 TB with dual parity, no spares, overhead 0, target fill 100% → ~100 TB usable.
Pro tip: keep unit and target fill consistent across scenarios for clean comparisons.
Calculations run in your browser and values are not sent to a server; processing is client‑only.

The estimate reflects ideal parity or mirroring math plus your overhead and fill choices. Real systems add metadata, alignment, and performance trade‑offs, so use vendor guidance for final sizing.

Both decimal (MB, GB, TB, PB) and binary (MiB, GiB, TiB, PiB) prefixes are supported. Choose one and keep it consistent.

Mixed drive sizes are supported. When sizes differ, active drives equalize to the smallest, so larger ones do not add capacity beyond that floor. Spares are excluded from equalization.

The target fill reserves some space as free to keep headroom. Lower values trade usable space for performance safety; common headroom is 10% or more.

For grouped layouts you set the number of groups and the per‑group width. Only fully filled groups count toward capacity and efficiency; unused drives beyond full groups are ignored.

A pure stripe offers no protection: it maximizes capacity with zero redundancy, so use it only for noncritical data.