{{ result.summaryTitle }}
{{ result.primaryDisplay }}
{{ result.secondaryText }}
{{ result.statusText }} {{ badge.text }}
RAID performance inputs
Choose the RAID layout used by the active groups.
Enter the active member width before hot spares are removed.
drives
Use multiple groups for striped mirrors, RAID 50, RAID 60, or repeated vdev-style groups.
groups
Global spares are subtracted from the widest groups before RAID compatibility is checked.
drives
Use measured single-drive random read and write values for the media you plan to deploy.
Enter sustained single-drive read and write MB/s before RAID penalties or caps.
{{ mixLabel }}
Healthy-state workload mix used for blended RAID read/write math.
%
Average IO size links IOPS to the final MB/s ceiling.
KiB
Per-drive raw capacity powers usable capacity and rebuild ETA estimates.
TB
Sustained MB/s per rebuilding drive before reserve and RAID complexity adjustments.
MB/s
Quick-fill a realistic RAID layout, then edit fields as needed.
Seeds read share, block size, target IOPS, target MB/s, and full-stripe share.
Use a common media baseline or keep Custom for measured values.
Optional target for fit checks and margin calculations.
IOPS
Optional MB/s target checked alongside IOPS.
MB/s
Controls how the comparison matrix chooses the recommended RAID level.
{{ advanced.read_cache_boost_pct }}%
Optional controller/read-cache uplift for random reads.
{{ advanced.write_cache_boost_pct }}%
Optional uplift for cached writes before backend media pressure dominates.
Lower values model parity pipeline overhead below nominal RAID math.
%
{{ advanced.queue_depth_efficiency_pct }}%
Models host/controller queue behavior against ideal per-drive benchmarks.
{{ advanced.stripe_alignment_penalty_pct }}%
Write-performance drag for unaligned/random write patterns that miss full-stripe writes.
{{ advanced.rebuild_reserve_pct }}%
IOPS/throughput held back for degraded periods and background resilver operations.
{{ advanced.degraded_write_penalty_pct }}%
Additional write-loss multiplier applied while the array is operating in degraded mode.
{{ advanced.full_stripe_write_pct }}%
Share of writes that land as streaming full-stripe work instead of small parity updates.
Set a host, HBA, or controller ceiling, or leave 0 to disable.
IOPS
Set a controller or fabric throughput ceiling, or leave 0 to disable.
MB/s
Leave 0 to auto-estimate from per-drive read IOPS.
ms
Leave 0 to auto-estimate from per-drive write IOPS.
ms
Adds host, HBA, cache, or fabric latency to media service time.
ms
Required headroom above target values before a plan is comfortably sized.
%
Decimal precision for summary text, tables, and exported payloads.
Metric Value Copy
{{ row.label }} {{ row.value }}
Scenario Safe IOPS Safe MB/s Retention Target fit Copy
{{ row.scenario }} {{ row.safeIopsDisplay }} {{ row.safeMbsDisplay }} {{ row.retentionDisplay }} {{ row.targetFitDisplay }}
Constraint Current effect Severity Recommendation Copy
{{ row.constraint }} {{ row.effect }} {{ row.severity }} {{ row.recommendation }}
RAID Penalty Safe IOPS Safe MB/s Usable TB Efficiency Fault tolerance Rebuild ETA Target fit Copy
{{ row.levelLabel }} Recommended {{ row.writePenaltyDisplay }} {{ row.safeIopsDisplay }} {{ row.safeMbsDisplay }} {{ row.usableDisplay }} {{ row.efficiencyDisplay }} {{ row.faultLabel }} {{ row.rebuildDisplay }} {{ row.targetFitDisplay }}
Mix Safe IOPS Safe MB/s Target delta Status Copy
{{ row.mixLabel }} {{ row.safeIopsDisplay }} {{ row.safeMbsDisplay }} {{ row.targetGapDisplay }} {{ row.statusText }}
Write locality Penalty Safe IOPS Safe MB/s Degraded retention Target fit Interpretation Copy
{{ row.shareLabel }} Selected {{ row.effectivePenaltyDisplay }} {{ row.safeIopsDisplay }} {{ row.safeMbsDisplay }} {{ row.retentionDisplay }} {{ row.targetFitDisplay }} {{ row.interpretation }}
Field Value Copy
{{ row.label }} {{ row.value }}

        

RAID performance planning connects three questions that are easy to confuse: how many operations the array can serve, how much sequential bandwidth it can move, and how much usable capacity remains after mirrors, parity, and spare drives. A layout that looks fast for reads can still be weak for small random writes because parity RAID has to update data and parity for each logical write.

The useful number is not the raw drive total. It is the workload-aware ceiling after write penalty, read/write mix, block size, controller caps, reserve, and rebuild assumptions are applied. That matters when a virtualization datastore, database pool, backup target, or analytics volume must meet a target without running at its theoretical edge.

RAID estimates are planning models, not drive-bench substitutes. Cache, firmware, queue depth, filesystem behavior, deduplication, compression, host multipathing, and the actual failure location can change measured results. Treat the output as a sizing conversation starter and then confirm candidate designs with storage-vendor guidance and workload testing.

RAID performance flow from workload mix through write penalty to safe IOPS, throughput, capacity, and rebuild outputs

Technical Details:

Mirrored RAID levels usually pay a lower random-write cost than parity RAID, while parity layouts keep more usable capacity. RAID 5 and RAID 50 model a four-write penalty for small random writes, and RAID 6 and RAID 60 model a six-write penalty. Full-stripe writes can reduce that penalty because the parity update no longer behaves like a small read-modify-write cycle.
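
The exact blend is not spelled out in the interface, but a minimal sketch of one plausible weighting, in TypeScript with illustrative names, looks like this: random small writes keep the nominal penalty, while the full-stripe share is treated as roughly penalty-free.

  // Hypothetical helper: blends the nominal small-write penalty with the share
  // of writes that land as full-stripe work. Names and weighting are illustrative.
  function effectiveWritePenalty(nominalPenalty: number, fullStripeSharePct: number): number {
    const fullStripeShare = Math.min(Math.max(fullStripeSharePct, 0), 100) / 100;
    // Full-stripe writes are modeled here as ~1x (no read-modify-write cycle);
    // the remaining random writes keep the nominal penalty.
    return fullStripeShare * 1 + (1 - fullStripeShare) * nominalPenalty;
  }

  // Example: RAID 6 (6x nominal) with 40% full-stripe writes -> 0.4 * 1 + 0.6 * 6 = 4.0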

The calculator models RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50, and RAID 60. It allocates hot spares out of the active drive groups first, checks minimum drive counts, applies even-drive requirements for mirrors, and then estimates healthy and degraded ceilings. RAID 0 returns no rebuild protection because a single failed member is treated as array loss.
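
A simplified TypeScript sketch of that validation order, with per-level minimums taken from the behavior table below; the spare-allocation detail is an assumption, not the calculator's exact code.

  interface RaidRule { minPerGroup: number; evenWidth: boolean; minGroups: number }

  const RAID_RULES: Record<string, RaidRule> = {
    'RAID 0':  { minPerGroup: 1, evenWidth: false, minGroups: 1 },
    'RAID 1':  { minPerGroup: 2, evenWidth: true,  minGroups: 1 },
    'RAID 5':  { minPerGroup: 3, evenWidth: false, minGroups: 1 },
    'RAID 6':  { minPerGroup: 4, evenWidth: false, minGroups: 1 },
    'RAID 10': { minPerGroup: 4, evenWidth: true,  minGroups: 1 },
    'RAID 50': { minPerGroup: 3, evenWidth: false, minGroups: 2 },
    'RAID 60': { minPerGroup: 4, evenWidth: false, minGroups: 2 },
  };

  // Returns an input-conflict message, or null when the layout is compatible.
  function layoutConflict(level: string, drivesPerGroup: number, groups: number, spares: number): string | null {
    const rule = RAID_RULES[level];
    if (!rule) return 'Unknown RAID level';
    // Assumption: global spares are pulled from the widest groups, spread as evenly as possible.
    const activeWidth = drivesPerGroup - Math.ceil(spares / groups);
    if (groups < rule.minGroups) return 'Not enough groups for this RAID level';
    if (activeWidth < rule.minPerGroup) return 'Not enough active drives per group after hot spares';
    if (rule.evenWidth && activeWidth % 2 !== 0) return 'Mirror-based levels need an even active width';
    return null;
  }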

The main random I/O estimate blends read and write capacity after the selected write penalty and efficiency assumptions.

SafeIOPS = (ReadIOPS × ReadShare + (WriteIOPS × WriteShare) ÷ EffectiveWritePenalty) × QueueFactor × ReserveFactor
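
The same estimate as a TypeScript sketch, with illustrative parameter names rather than the calculator's internals:

  // Blended safe-IOPS estimate from the formula above. readShare is a fraction
  // (e.g. 0.7 for a 70% read mix); queueFactor and reserveFactor are 0..1.
  function safeIops(
    readIops: number,              // aggregate random read IOPS of the active drives
    writeIops: number,             // aggregate random write IOPS of the active drives
    readShare: number,
    effectiveWritePenalty: number, // blended small-write penalty, e.g. 4 for RAID 5
    queueFactor: number,           // queue-depth efficiency, e.g. 0.9
    reserveFactor: number          // 1 minus the rebuild reserve, e.g. 0.85
  ): number {
    const writeShare = 1 - readShare;
    const blended = readIops * readShare + (writeIops * writeShare) / effectiveWritePenalty;
    return blended * queueFactor * reserveFactor;
  }
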
RAID behavior modeled by the calculator
RAID level | Minimum active drives | Small-write penalty | Modeled fault note
RAID 0 | 1 | 1x | No member failure tolerance
RAID 1 | 2, even | 2x | Up to one failure per mirror pair
RAID 5 | 3 | 4x | One data drive per group
RAID 6 | 4 | 6x | Two data drives per group
RAID 10 | 4, even | 2x | Mirror-pair based tolerance
RAID 50 | 3 per group, at least 2 groups | 4x | One drive per RAID 5 group
RAID 60 | 4 per group, at least 2 groups | 6x | Two drives per RAID 6 group

Throughput is constrained twice. Sequential media throughput is calculated from per-drive MB/s, but the final MB/s cannot exceed the IOPS-based ceiling implied by the average block size. For example, 320,000 IOPS at 16 KiB is about 5,000 MB/s before other caps, while the same IOPS at 4 KiB is about 1,250 MB/s.
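
That coupling is plain arithmetic; a short sketch using the same numbers as the example above (it treats MB/s and MiB/s interchangeably, as the example does):

  // MB/s implied by an IOPS ceiling at a given average block size, capped by the
  // sequential media estimate. 320,000 IOPS at 16 KiB works out to about 5,000 MB/s.
  function throughputCeilingMbs(iops: number, blockKib: number, mediaMbs: number): number {
    const iopsImpliedMbs = (iops * blockKib) / 1024; // KiB/s -> MiB/s
    return Math.min(mediaMbs, iopsImpliedMbs);
  }

  // throughputCeilingMbs(320_000, 16, 8_000) -> 5_000
  // throughputCeilingMbs(320_000, 4, 8_000)  -> 1_250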

Rebuild time uses drive size, rebuild MB/s, the rebuild reserve percentage, and a RAID-complexity multiplier. The estimate is intentionally a sustained-bandwidth approximation. It does not predict unrecoverable read errors, controller throttling, or how much user workload continues during the rebuild.
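
One way to express that approximation in TypeScript; the direction of the reserve adjustment and the complexity multiplier values are assumptions, not documented internals.

  // Rough rebuild ETA in hours: failed-drive capacity divided by the rebuild
  // bandwidth left after the reserve, stretched by a RAID-complexity multiplier.
  function rebuildEtaHours(
    driveSizeTb: number,          // raw size of the failed member, in TB
    rebuildMbs: number,           // sustained MB/s per rebuilding drive
    reservePct: number,           // rebuild reserve percentage, e.g. 20
    complexityMultiplier: number  // assumed higher for parity levels than mirrors
  ): number {
    const effectiveMbs = rebuildMbs * (1 - reservePct / 100);
    const driveMb = driveSizeTb * 1_000_000; // TB -> MB, decimal units
    return (driveMb / effectiveMbs) * complexityMultiplier / 3600;
  }

  // Example: 16 TB drive, 150 MB/s baseline, 20% reserve, 1.5x complexity ≈ 55.6 hours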

Everyday Use & Decision Guide:

Start with a configuration preset only when it resembles the job you are planning. The balanced NAS, virtualization, archive, and low-latency presets fill in drive media, workload mix, target values, and advanced assumptions, but any manual edit switches the run back to a custom plan.

For a first pass, enter the actual drive read/write IOPS and sequential MB/s from the drive family you expect to buy, then set Read share and Block size from the workload. Small database writes, VM mixed I/O, and backup streams should not share the same block-size assumption.

  • Use Safe blended IOPS and Safe blended throughput for sizing, not the raw read total.
  • Check Throughput limiter before blaming the RAID level; the active block size may make IOPS the limiter.
  • Use IOPS target delta and MB/s target delta only when the target inputs are set.
  • Read Worst single-group failure IOPS before treating a healthy-state result as resilient.

A high healthy ceiling does not mean the layout stays useful during a failure. If the degraded retention row is low or the target status changes to Target not met, the design may need more groups, a mirror-based level, lower write pressure, or a larger safety margin.

Step-by-Step Guide:

  1. Select the RAID level and enter Drives per group, Group count, and Hot spares. If an alert reports an input conflict, fix the active drive count before reading the tables.
  2. Enter Per-drive random IOPS, Per-drive sequential throughput, Drive size, and Rebuild baseline from the media you actually plan to use.
  3. Set Read share and Block size. Confirm the mix label matches the workload before comparing RAID levels.
  4. Open Advanced when you need cache boosts, parity efficiency, controller caps, full-stripe share, degraded write penalty, or safety margin.
  5. Read the snapshot rows first, then compare the RAID Comparison Matrix, Degraded Failure Budget, and Write Locality Sweep for alternatives.

Interpreting Results:

The status badge is the first stop. Target with safety margin met means the selected level clears the requested IOPS and MB/s after the configured margin. Target met, margin short means the raw target clears but the reserve does not. Target not met means at least one requested ceiling is below the target.

Do not treat the Recommended RAID level as a universal answer. The recommendation score changes with Optimization priority and weighs performance, resilience, degraded retention, and capacity efficiency. If procurement cost or vendor support policy matters more, factor that external constraint into the final decision.

Worked Examples:

A 16-drive SSD virtualization pool with RAID 6, one spare, 70% reads, and 16 KiB blocks should be checked against Safe blended IOPS, Safe blended throughput, and Worst single-group failure throughput. If the IOPS target clears but degraded target latency is high, the healthy result is not enough for maintenance windows.

A backup repository with 256 KiB blocks and a high full-stripe write share can make RAID 6 look much stronger than it would under a small-write OLTP profile. In that case, the Write-locality sweep is the useful proof because it shows how 0% and 100% full-stripe assumptions change the effective write penalty.

If RAID 10 reports an input conflict, the common cause is an odd active drive count after hot spares are removed. Reduce hot spares, change drives per group, or use an even group width before trusting the comparison matrix.

FAQ:

Why does the same drive count produce different MB/s at different block sizes?

MB/s is derived from IOPS times average block size, then compared with the sequential media ceiling. Smaller blocks can leave MB/s low even when IOPS looks high.

Does the rebuild estimate prove the array is safe for that many hours?

No. Estimated rebuild ETA is a bandwidth model. It does not include unreadable sectors, throttling, firmware behavior, or the performance cost of live workload during rebuild.

Why can a target pass in healthy state but fail the safety margin?

The safety margin increases the required IOPS and MB/s by the configured percentage. A design can clear the exact target while still being too close to the chosen planning reserve.
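
In code terms the margin check is a one-liner; a sketch, assuming the margin inflates the target multiplicatively:

  // A target "passes with margin" only when the safe ceiling also clears the
  // target inflated by the safety-margin percentage.
  function fitsWithMargin(safeValue: number, target: number, marginPct: number): boolean {
    return safeValue >= target * (1 + marginPct / 100);
  }

  // Example: 110,000 safe IOPS against a 100,000 IOPS target clears the raw
  // target but misses a 15% margin (required: 115,000).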

What should I do with Input conflict?

Read the alert text. It usually means the selected RAID level does not have enough active drives per group, an even-drive mirror requirement is not met, or hot spares have removed too many active members.

Glossary:

Effective write penalty
The modeled write cost after random and full-stripe write behavior are blended.
Safe blended IOPS
The workload-aware IOPS ceiling after penalties, efficiency, and reserve assumptions.
Degraded retention
The percentage of healthy performance retained in the modeled single-group failure case.