Metric | Value |
---|---|
Nodes | {{ nodes.length }} |
Raw storage ({{ unit }}) | {{ formatCap(rawTB) }} |
Overhead ({{ unit }}) | {{ formatCap(rawTB - rawEffectiveTB) }} |
Raw post-overhead ({{ unit }}) | {{ formatCap(rawEffectiveTB) }} |
OSD nearfull (used) | {{ (effectiveNearfull*100).toFixed(0) }} % |
Recommended nearfull | {{ (recommendedNearfull*100).toFixed(0) }} % |
Protection | {{ protectionLabel }} |
Protected efficiency | {{ (100*totalEfficiency).toFixed(2) }} % |
Efficiency @ min_size | {{ (100*minEfficiency).toFixed(2) }} % |
Usable capacity ({{ unit }}) | {{ formatCap(usableTB) }} |
Redundancy overhead ({{ unit }}) | {{ formatCap(redundantRawTB) }} |
Reserved / free raw ({{ unit }}) | {{ formatCap(reservedRawTB) }} |
Ceph storage capacity is the amount of application data a cluster can hold after accounting for redundancy and operational headroom. A Ceph capacity planning calculator helps you translate node sizes and protection choices into a realistic estimate you can act on. Results reflect the balance between usable data space, redundancy, and reserved room for recovery so you can plan growth with fewer surprises.
Describe your nodes and choose your preferred unit, then select replication or erasure coding. Set the nearfull ratio you are willing to approach so that daily writes can continue and recovery remains possible. If hardware varies, enter per-node values and adjust for filesystem and metadata overhead or for placement imbalance in mixed or uneven clusters.
When resilience matters, allocate headroom for the largest failure domains that match your placement rule and specify how many simultaneous losses you want to tolerate. The calculator compares your chosen nearfull level to the minimum needed to rebuild after those losses so you can trade capacity for safety with eyes open.
A practical example is six nodes with 10 TB each using triple replication. With a nearfull level of 85 percent, the outcome is about 17 TB of usable space, while roughly 34 TB is consumed by redundancy and about 9 TB remains free for recovery. Raising protection or reserving more headroom reduces usable space, which is expected in a durable design.
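As a sanity check, the arithmetic behind that example can be sketched in a few lines; the variable names below are illustrative rather than the calculator's internals, and filesystem overhead and CRUSH skew are assumed to be zero.

```ts
// Worked example: 6 nodes × 10 TB, replication 3, nearfull 0.85.
// Assumes zero OSD/metadata overhead and zero CRUSH skew.
const rawTB = 6 * 10;                     // 60 TB raw
const nearfull = 0.85;
const replicas = 3;

const usedRawTB = rawTB * nearfull;       // 51 TB of raw may actually be filled
const usableTB = usedRawTB / replicas;    // 17 TB of application data
const redundantTB = usedRawTB - usableTB; // 34 TB consumed by extra copies
const reservedTB = rawTB - usedRawTB;     // 9 TB kept free for recovery

console.log({ usableTB, redundantTB, reservedTB });
// { usableTB: 17, redundantTB: 34, reservedTB: 9 }
```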
Capacity estimates guide planning and do not certify pool settings or operational status. Use realistic inputs and revisit the calculation when hardware or policies change.
The calculation models protected storage capacity for a Ceph cluster by combining raw node capacity, efficiency from the protection scheme, and a fullness limit that preserves recovery room. It reports the split between data, redundancy, and reserved raw space in the unit you select.
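A minimal sketch of that split, assuming the nearfull cap and protection efficiency are applied multiplicatively to the post-overhead raw capacity; the function and names are hypothetical, not the tool's code.

```ts
// Hedged sketch of the data / redundancy / reserved split described above.
function capacitySplit(
  rawEffective: number, // raw capacity after OSD/metadata overhead and skew
  nearfull: number,     // effective nearfull ratio, 0..1
  efficiency: number    // protected efficiency, 0..1
) {
  const usedRaw = rawEffective * nearfull; // raw the cluster is allowed to fill
  const usable = usedRaw * efficiency;     // application data
  const redundant = usedRaw - usable;      // raw spent on replicas or parity
  const reserved = rawEffective - usedRaw; // headroom kept free for recovery
  return { usable, redundant, reserved };
}

// capacitySplit(60, 0.85, 1 / 3) → { usable: 17, redundant: 34, reserved: 9 }
```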
Protection efficiency summarizes how much of the stored raw becomes user data. Replication uses a factor of the inverse of replica count. Erasure coding uses the data shard share of the stripe. A second metric, minimum efficiency, reflects behavior at min_size when the system is degraded but still accepts writes.
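The two ratios can be written down directly from that description; the helper below is illustrative, and it assumes the min_size figure is obtained by substituting min_size for the full replica count or stripe width.

```ts
// Protection efficiency per the description above. Hypothetical helper.
type Protection =
  | { mode: "rep"; size: number; minSize: number }
  | { mode: "ec"; k: number; m: number; minSize: number };

function efficiencies(p: Protection) {
  if (p.mode === "rep") {
    return {
      totalEfficiency: 1 / p.size,  // e.g. size 3 → 33.33 %
      minEfficiency: 1 / p.minSize, // e.g. min_size 2 → 50 %
    };
  }
  const minSize = Math.max(p.minSize, p.k); // min_size is clamped to ≥ k
  return {
    totalEfficiency: p.k / (p.k + p.m),     // data shards over the full stripe
    minEfficiency: p.k / minSize,           // degraded but still writable
  };
}
```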
Recovery headroom is handled through a nearfull ratio that caps how much post‑overhead raw can be used. A recommended nearfull is derived from the capacity required to rebuild after losing one or more failure domains. If you choose to accept degraded placement groups temporarily, the effective nearfull can exceed that recommendation at increased risk.
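One plausible derivation of that recommendation, assuming the goal is that data written up to the nearfull cap still fits on the capacity that survives the tolerated losses; the formula and names are an assumption, not the calculator's published method.

```ts
// Illustrative recommended-nearfull derivation: the surviving raw capacity
// after losing the tolerated failure domains must still hold all the data.
function recommendedNearfull(
  rawEffective: number,     // post-overhead raw capacity
  domainCapacity: number,   // raw capacity of one failure domain (e.g. one host)
  domainsToTolerate: number // simultaneous losses to absorb
): number {
  const surviving = rawEffective - domainsToTolerate * domainCapacity;
  if (surviving <= 0) return 0; // rebuild is impossible
  return Math.min(surviving / rawEffective, 1);
}

// Six equal hosts, tolerate one host: recommendedNearfull(60, 10, 1) ≈ 0.83
```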
Comparisons assume a single protection policy, consistent units across inputs, and a cluster‑wide nearfull target. Per‑pool placement rules, device classes, and time‑varying behavior are outside scope; treat results as planning guidance rather than operational guarantees.
Symbol | Meaning | Unit/Datatype | Source |
---|---|---|---|
— | Capacity of node i | MB, GB, TB, PB / MiB, GiB, TiB, PiB | Input |
— | Raw capacity across nodes | same as input unit | Derived |
— | Post‑overhead raw | same as input unit | Derived |
— | OSD/metadata overhead fraction | 0 to 0.15 in UI | Input |
— | CRUSH imbalance skew fraction | 0 to 0.15 in UI | Input |
— | Chosen OSD nearfull ratio | 0.50 to 0.95 in UI | Input |
— | Recommended nearfull ratio | 0 to 1 | Derived |
— | Effective nearfull ratio | 0 to 1 | Derived |
— | Replication size | integer ≥ 1 | Input |
— | Erasure coding data and parity shards | k ≥ 1, m ≥ 0 | Input |
— | Protected efficiency | 0 to 1 | Derived |
— | Projected usable capacity | same as input unit | Derived |
With replication size 3 and min_size 2, for example, protected efficiency is 33.33 percent and the minimum efficiency at min_size is 50 percent.
Field | Type | Min | Max | Step/Pattern | Notes |
---|---|---|---|---|---|
Total nodes | number | 1 | — | 1 | Counts storage hosts. |
Capacity per node | number | 0 | — | 0.01 | Used when not defining nodes manually. |
Per‑node capacity | number | 0 | — | 0.01 | Empty inputs count as 0. |
Unit | select | — | — | MB, GB, TB, PB, MiB, GiB, TiB, PiB | Applies globally. |
Mode | select | — | — | rep, ec | Replication or erasure coding.
Replication size | number | 1 | — | 1 | Protected efficiency is 1/size. |
Replication min_size | number | 1 | — | 1 | Minimum active copies to accept writes. |
EC k | number | 1 | — | 1 | Data shards. |
EC m | number | 0 | — | 1 | Parity shards. |
EC min_size | number | k | — | 1 | Clamped to ≥ k. |
OSD nearfull | range | 0.50 | 0.95 | 0.01 | Often 80 to 90 %; 95 % is an upper safety bound.
Failure domain | select | — | — | host, osd, custom | Used for recommended nearfull.
OSDs per node | number | 1 | — | 1 | Average OSD size inferred. |
Custom domain capacity | number | 0 | — | 0.01 | Capacity per domain. |
Domains to tolerate | number | 0 | — | 1 | Number of failures to absorb. |
OSD/metadata overhead | range | 0 | 0.15 | 0.005 | Typically 2 to 5 %.
CRUSH skew | range | 0 | 0.15 | 0.005 | Typically 0 to 10 %.
Accept degraded PGs | checkbox | — | — | — | Allows writes above recommendation. |
Input | Accepted Families | Output | Encoding/Precision | Rounding |
---|---|---|---|---|
Numeric entries and toggles | Integers and decimals | Table metrics and breakdown | Capacities in chosen unit | Two decimals for capacities |
— | — | CSV and JSON for export | JSON keys: inputs, metrics | CSV mixes numbers and labels |
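The exact export schema is not documented beyond the two top-level keys, so the shape below is an assumption used only to illustrate what a consumer might expect.

```ts
// Assumed shape of the JSON export. Only the top-level keys "inputs" and
// "metrics" are stated above; the nested field names are illustrative.
interface CapacityExport {
  inputs: Record<string, number | string | boolean>; // nodes, unit, mode, ratios, …
  metrics: Record<string, number | string>;          // usable, redundancy, reserved, efficiencies, …
}

const example: CapacityExport = {
  inputs: { nodes: 6, capacityPerNode: 10, unit: "TB", mode: "rep", size: 3, nearfull: 0.85 },
  metrics: { usable: 17, redundancy: 34, reserved: 9, protectedEfficiency: 0.3333 },
};
```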
No data is transmitted or stored server‑side. CSV and JSON are produced locally for your use.
Ceph usable capacity planning, from node inventory to a clear breakdown.
Example. Six nodes × 10 TB, replication 3, nearfull 0.85 → about 17 TB usable, 34 TB redundancy, 9 TB reserved.
No. Inputs are used locally to compute results, and copies you make stay on your device.
It captures protection efficiency, overhead, imbalance, and recovery headroom. It does not model per‑pool rules or device classes, so treat it as planning guidance.
Decimal MB/GB/TB/PB or binary MiB/GiB/TiB/PiB. Calculations and displays follow the unit you choose.
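For reference, the standard SI and IEC byte factors behind those unit families are shown below; how the calculator converts internally is not specified, so this is only a hedged illustration.

```ts
// Standard decimal (SI) and binary (IEC) unit factors in bytes.
const UNIT_BYTES: Record<string, number> = {
  MB: 1e6, GB: 1e9, TB: 1e12, PB: 1e15,
  MiB: 2 ** 20, GiB: 2 ** 30, TiB: 2 ** 40, PiB: 2 ** 50,
};

// Example: 10 TB expressed in TiB ≈ 9.09 TiB.
const tbAsTib = (10 * UNIT_BYTES.TB) / UNIT_BYTES.TiB;
```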
Yes. The computation runs in your browser. Copy and download actions also work without a network connection.
Select erasure coding, set k to 4 and m to 2, and choose a nearfull that leaves enough headroom to rebuild two shard losses.
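For a quick check of that profile, applying the k/(k+m) ratio described earlier gives the following; the numbers assume that formula and nothing cluster-specific.

```ts
// k=4, m=2 erasure coding: 4 of every 6 shards carry data.
const k = 4, m = 2;
const ecEfficiency = k / (k + m); // ≈ 0.6667 → 66.67 % protected efficiency
const rawPerUsable = (k + m) / k; // 1.5 units of raw per unit of data
```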
When your chosen nearfull meets the recommendation, recovery is just possible. Pushing beyond this raises the risk of running out of room during rebuild.
The package does not declare pricing or licensing. Treat it as an informational calculator unless stated otherwise in your environment.
Use the copy controls for CSV or JSON. You can also download files to share with teammates.
Raw capacity, efficiency, and nearfull must all be positive. Check that nodes and units are set, and that the nearfull slider is above zero.
Typical nearfull choices are 80 to 90 percent, leaving breathing room for recovery.