| Metric | Value | Copy |
|---|---|---|
| Nodes | {{ nodes.length }} | |
| Raw storage ({{ unit }}) | {{ formatCap(rawTB) }} | |
| Overhead ({{ unit }}) | {{ formatCap(rawTB - rawEffectiveTB) }} | |
| Raw post-overhead ({{ unit }}) | {{ formatCap(rawEffectiveTB) }} | |
| OSD nearfull (used) | {{ (effectiveNearfull*100).toFixed(0) }} % | |
| Recommended nearfull | {{ (recommendedNearfull*100).toFixed(0) }} % | |
| Protection | {{ protectionLabel }} | |
| Protected efficiency | {{ (100*totalEfficiency).toFixed(2) }} % | |
| Efficiency @ min_size | {{ (100*minEfficiency).toFixed(2) }} % | |
| Usable capacity ({{ unit }}) | {{ formatCap(usableTB) }} | |
| Redundancy overhead ({{ unit }}) | {{ formatCap(redundantRawTB) }} | |
| Reserved / free raw ({{ unit }}) | {{ formatCap(reservedRawTB) }} | |
Ceph stores objects with redundancy so a cluster can lose devices or nodes and keep data available. Capacity planning balances raw storage with protection costs and a safe fill level that leaves headroom for recovery work.
Choose replication or erasure coding, set how many copies or data and parity chunks you need, and enter node capacities. Pick a nearfull threshold that leaves space to reshape data when failures happen. The calculator shows usable capacity, redundancy overhead, and reserved space.
Use more headroom when you need to tolerate multiple node losses in the same failure domain. Adding nodes increases raw storage while also improving balance, which can increase effective headroom at the same threshold.
The model simplifies placement and assumes balanced data across devices. Use it to compare protection modes and to set a safe nearfull level for your environment.
Raw capacity is the sum of node capacities after subtracting per‑OSD or filesystem overhead and any skew factor. Usable capacity depends on the protection scheme and the portion of raw space you allow the cluster to use before marking OSDs nearfull.
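As a sketch of the relationships described above (function and variable names here are illustrative, not the calculator's actual identifiers; overhead and skew are assumed to apply multiplicatively):

```python
def effective_raw(node_caps_tb, overhead=0.0, skew=0.0):
    """Raw capacity after multiplicative overhead and skew deductions."""
    return sum(node_caps_tb) * (1 - overhead) * (1 - skew)

def usable(raw_tb, nearfull, data_fraction):
    """Usable capacity: fill raw space up to nearfull, then pay the
    protection cost. data_fraction is 1/size for replication and
    k/(k+m) for erasure coding."""
    return raw_tb * nearfull * data_fraction
```

For six 10 TB nodes with no overhead or skew, `effective_raw([10.0] * 6)` is 60 TB, and `usable(60, 0.85, 1/3)` yields 17 TB under 3-way replication.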
| Symbol | Meaning | Unit/Datatype | Source |
|---|---|---|---|
| C_i | Capacity of node i | TB | Input |
| o | Overhead fraction | 0–0.95 | Input |
| s | Skew fraction | 0–0.95 | Input |
| f | Nearfull fraction | 0–1 | Input/derived |
| n | Replica size | integer | Input |
| k, m | EC data, parity chunks | integers | Input |
Six nodes at 10 TB each, replication size 3, nearfull 0.85, zero overhead and skew.
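The example above can be worked through numerically (a sketch; the variable names are illustrative):

```python
raw = 6 * 10.0            # 60 TB raw, no overhead or skew
nearfull = 0.85
size = 3                  # replication factor

protected = raw * nearfull        # 51 TB of raw space the cluster may fill
usable = protected / size         # 17 TB of user data
redundancy = protected - usable   # 34 TB consumed by extra replicas
reserved = raw - protected        # 9 TB kept free as recovery headroom
```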
Lowering the nearfull threshold leaves more free raw space for recovery but reduces immediately usable capacity. Erasure coding can improve efficiency at the cost of longer rebuilds.
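The efficiency difference between the two protection modes can be sketched as follows (an assumed helper, not the calculator's code):

```python
def efficiency(mode, size=None, k=None, m=None):
    """Protected efficiency: fraction of consumed raw space that
    holds user data. 1/size for replication, k/(k+m) for EC."""
    if mode == "rep":
        return 1 / size
    return k / (k + m)
```

For example, 3-way replication yields about 33.33 % efficiency, while a 4+2 erasure-coded pool yields about 66.67 % from the same raw space.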
| Field | Type | Accepted values | Notes | Placeholder |
|---|---|---|---|---|
| Nodes | integer | ≥ 0 | Manual per‑node caps optional | 6 |
| Unit | enum | TB | Display only | TB |
| Mode | enum | rep \| ec | Replication or EC | rep |
| replicas | integer | ≥ 1 | Size for replication | 3 |
| ec k+m | integers | k ≥ 1, m ≥ 0 | Chunks for EC | 4+2 |
| nearfull | number | 0–1 | Effective ≤ recommended | 0.85 |
| Overhead, skew | number | 0–0.95 | Applied multiplicatively | 0 |
| Failure domain | enum | host \| osd \| custom | Headroom estimate | host |
| Input | Accepted Families | Output | Encoding/Precision |
|---|---|---|---|
| Node capacities, protection, thresholds | Numbers and enums | Raw, usable, redundancy, reserved | 2 decimals |
| — | — | CSV/JSON exports | Plain CSV; UTF‑8 JSON |
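One way to produce exports matching the table above (a sketch with illustrative field names; the calculator's actual export schema may differ):

```python
import csv
import io
import json

# Hypothetical result set; values follow the worked example
results = {"raw_tb": 60.0, "usable_tb": 17.0,
           "redundancy_tb": 34.0, "reserved_tb": 9.0}

# UTF-8 JSON export
json_blob = json.dumps(results)

# Plain CSV export with 2-decimal precision
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(results.keys())
writer.writerow(f"{v:.2f}" for v in results.values())
csv_blob = buf.getvalue()
```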
All inputs are processed locally. No cluster information is transmitted.