Introduction:

Virtualization host capacity is a placement question as much as a totals question. A cluster can have enough aggregate vCPU and memory on paper while still failing a high-availability target, leaving no maintenance reserve, or being unable to place an oversized virtual machine cleanly on one host or NUMA node.

Capacity planning therefore needs both density and survivability. CPU overcommit, memory overcommit, host reserves, VM overhead, scheduler penalties, ballooning buffer, and growth reserve all reduce the safe count of virtual machines the cluster should plan to carry.

[Diagram: host inventory reduced by reserves, policy caps, failure targets, and runway]

The final number should be treated as a planning guardrail. Real clusters still need monitoring for CPU ready, memory pressure, storage latency, network bottlenecks, and workload seasonality.

Technical Details:

The model separates CPU and memory because each resource fails in a different way. CPU overcommit can work well for bursty guests, but high demand shows up as scheduling delay. Memory overcommit depends on platform behavior, guest activity, ballooning, sharing, and swap policy; when it goes wrong, performance can collapse quickly or workloads may fail.

Each failure scenario starts with total hosts, subtracts failed hosts, then subtracts maintenance reserve hosts. The remaining active hosts contribute physical cores and memory. Host CPU and memory reserves are removed, overcommit ratios are applied, utilization caps reduce the policy budget, and scheduler or NUMA penalty lowers both CPU and memory planning capacity.

Formula Core:

The safe VM ceiling is the smaller of the CPU-derived and memory-derived VM counts after policy and growth reserve.

activeHosts = hostCount − failedHosts − maintenanceReserve
cpuVMs = (activeHosts × guestCores × overcommit × cpuCap × efficiency) / vCPUperVM
memoryVMs = (policyMemory × balloonBuffer) / (ramPerVM + vmOverhead)
safeVMs = min(cpuVMs, memoryVMs) × growthReserve
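The formula core can be sketched as a small Python function. The parameter names mirror the inputs above, but the signature and the way policy memory is composed (host reserve subtracted, then the utilization cap applied) are illustrative choices, not the tool's internal API; here `balloon_buffer` and `growth_reserve` are the retained fractions, so a 10% buffer is passed as 0.90.

```python
def safe_vm_ceiling(host_count, failed_hosts, maintenance_reserve,
                    cores_per_host, vcpu_per_vm,
                    mem_gib_per_host, host_mem_reserve_gib, mem_cap,
                    ram_per_vm_gib, vm_overhead_gib,
                    overcommit, cpu_cap, efficiency,
                    balloon_buffer, growth_reserve):
    """Safe VM ceiling for one failure scenario; returns (vms, limiter)."""
    # Hosts surviving the scenario, after failures and maintenance reserve.
    active_hosts = max(host_count - failed_hosts - maintenance_reserve, 0)

    # CPU-derived count: cores scaled by overcommit, utilization cap, and
    # scheduler/NUMA efficiency, divided by the average VM's vCPUs.
    cpu_vms = (active_hosts * cores_per_host * overcommit
               * cpu_cap * efficiency) / vcpu_per_vm

    # Memory-derived count: policy memory (after host reserve and cap),
    # shrunk by the ballooning buffer, divided by the per-VM footprint.
    policy_memory = active_hosts * (mem_gib_per_host - host_mem_reserve_gib) * mem_cap
    mem_vms = (policy_memory * balloon_buffer) / (ram_per_vm_gib + vm_overhead_gib)

    limiter = "CPU" if cpu_vms <= mem_vms else "Memory"
    return int(min(cpu_vms, mem_vms) * growth_reserve), limiter
```

With the worked-example cluster later in this article (10 hosts, 32 cores and 512 GiB each, N+1 plus one maintenance host, 2 vCPU / 8 GiB VMs at 4:1 overcommit) and assumed caps of 70% CPU and 85% memory, this returns a ceiling in the low 300s with CPU as the limiter.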
Virtualization capacity controls and effects
| Input | What it changes | Common misread |
| --- | --- | --- |
| Failure tolerance target | Hosts removed before capacity is judged. | N+1 capacity is not the same as normal-day density. |
| Maintenance reserve hosts | Hosts kept out for patching or evacuation. | Reserve hosts do not carry planned VM load. |
| Largest VM fields | Single-host and NUMA placement checks. | Aggregate capacity can hide a guest that is too large for one host. |
| Growth reserve | Final safety margin after hard CPU and memory limits. | Lower reserve raises the count but leaves less room for bursts. |

The risk label is based on aggressive inputs such as no failure tolerance, high overcommit, high utilization caps, low growth reserve, and negative VM runway. It is a planning flag, not a hypervisor telemetry result.

Everyday Use & Decision Guide:

Use a preset only to start the model. General server, bursty dev/test, steady VDI, and memory-heavy profiles set a VM shape and policy baseline, but the important work is replacing those values with the cluster you actually run.

  • Check Capacity Brief first for safe VM ceiling, limiter, runway, risk, and failure policy.
  • Use Policy Metrics to see whether CPU or memory is setting the ceiling.
  • Open Placement Mix when a small number of large guests must fit alongside average VMs.
  • Use Host Ladder and Failure Envelope before approving growth or maintenance plans.

Do not treat a positive runway as approval to deploy. Compare the result with monitoring from the same workload class, especially CPU ready, memory pressure, NUMA locality, storage latency, and backup windows.

Step-by-Step Guide:

Follow this practical sequence:

  1. Enter Hosts in cluster, Failure tolerance target, cores per host, and memory per host.
  2. Set average VM vCPU and memory, then choose the closest workload preset or leave manual values in place.
  3. Open Advanced and enter current VM count, host reserves, per-VM overhead, utilization caps, scheduler penalty, ballooning buffer, and growth reserve.
  4. Add largest-VM fields and NUMA nodes per host if outsized guests matter.
  5. Read Projected VM target, Growth Path, and Host Ladder before deciding whether to add hosts.

If the tool reduces a value, such as failure tolerance or maintenance reserve, review the normalized inputs: it means the requested scenario left no active hosts, so the tool relaxed the inputs until at least one host remained.
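That normalization can be sketched as a simple clamp. The exact rules the tool applies are not specified here, so treat this as one plausible policy: shrink the maintenance reserve first, then failure tolerance, until at least one host remains active, and report what changed.

```python
def normalize_inputs(host_count, failed_hosts, maintenance_reserve):
    """Clamp reserves so the scenario keeps at least one active host.

    Illustrative policy only: reduce maintenance reserve before failure
    tolerance, and record each change so the user can review it.
    """
    notes = []
    while host_count - failed_hosts - maintenance_reserve < 1:
        if maintenance_reserve > 0:
            maintenance_reserve -= 1
            notes.append("reduced maintenance reserve")
        elif failed_hosts > 0:
            failed_hosts -= 1
            notes.append("reduced failure tolerance")
        else:
            break  # nothing left to relax; cluster has no hosts at all
    return failed_hosts, maintenance_reserve, notes
```

For example, a 3-host cluster asked to tolerate 2 failures while holding 2 maintenance hosts would have its maintenance reserve reduced to zero before the failure target is touched.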

Interpreting Results:

The most important output is the safe VM ceiling under the selected failure policy. If Limiter is CPU, memory changes will not fix the modeled ceiling. If it is Memory, more cores or a higher vCPU ratio will not fix the modeled ceiling.

A Fits host only with cross-NUMA placement warning means the large guest may fit the host but not one NUMA node. That can still run on some platforms, but it deserves extra review because locality can affect latency-sensitive workloads.
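That warning can be reproduced with a simple per-node check. The sketch below assumes NUMA nodes split host cores and memory evenly, which real hardware may not, and it is a planning view only, not a scheduler decision.

```python
def placement_check(host_cores, host_mem_gib, numa_nodes,
                    vm_vcpu, vm_mem_gib, overcommit=1.0):
    """Classify a large guest against one host and one NUMA node."""
    node_cores = host_cores / numa_nodes
    node_mem = host_mem_gib / numa_nodes
    fits_host = (vm_vcpu <= host_cores * overcommit
                 and vm_mem_gib <= host_mem_gib)
    fits_node = (vm_vcpu <= node_cores * overcommit
                 and vm_mem_gib <= node_mem)
    if fits_node:
        return "Fits within one NUMA node"
    if fits_host:
        return "Fits host only with cross-NUMA placement"
    return "Does not fit a single host"
```

A 24 vCPU / 200 GiB guest on a 32-core, 512 GiB, two-node host fits the host but not one node, which is exactly the case the warning flags.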

Worked Examples:

A 10-host cluster with 32 cores and 512 GiB per host, N+1 failure tolerance, one maintenance reserve, 2 vCPU and 8 GiB average VMs, and a 4:1 CPU ratio may show memory or CPU as the limiter depending on utilization caps and reserve settings. Runway vs current tells whether the existing 180 VMs still fit after the selected failure event.
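With illustrative policy values filled in (70% CPU cap, 85% memory cap, 16 GiB host memory reserve, 0.5 GiB per-VM overhead, 5% scheduler penalty, 10% ballooning buffer, 10% growth reserve — all assumptions, since the example does not fix them), the arithmetic runs:

```python
active = 10 - 1 - 1  # N+1 failure plus one maintenance-reserve host

# CPU side: cores x overcommit x cap x efficiency, per 2-vCPU VM.
cpu_vms = active * 32 * 4.0 * 0.70 * 0.95 / 2

# Memory side: policy memory after reserve, cap, and balloon buffer,
# per 8 GiB VM plus 0.5 GiB overhead.
mem_vms = active * (512 - 16) * 0.85 * 0.90 / (8 + 0.5)

safe = int(min(cpu_vms, mem_vms) * 0.90)  # growth reserve keeps 90%
runway = safe - 180                       # versus the current VM count
```

Under these assumed caps, CPU sets the ceiling at roughly 340 VMs before the growth reserve, and the current 180 VMs still fit with room to spare; tighter caps or a larger reserve would shift both the limiter and the runway.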

A memory-heavy estate with 4 vCPU and 16 GiB average VMs, low memory overcommit, and a 15% growth reserve can look comfortable by CPU but tight by memory. In that case, raising CPU overcommit only raises a number the model is not using as the limit.

If 12 large VMs require 16 vCPU and 96 GiB each, Placement Mix can report that the declared cohort fits or does not fit per host. That output is more useful than aggregate totals when the risk is packing, not total capacity.
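A per-host packing check for such a cohort might look like the following. The host shape, overcommit, and memory cap are assumptions carried over from the earlier example, and the packing rule is a deliberately naive uniform fit, not the tool's actual placement engine.

```python
def hosts_needed(large_vms, vcpu, mem_gib,
                 host_cores, host_mem_gib, overcommit, mem_cap):
    """Hosts needed for a cohort of identical large VMs, or None if a
    single guest cannot fit one host under the stated policy."""
    per_host_cpu = int(host_cores * overcommit // vcpu)
    per_host_mem = int(host_mem_gib * mem_cap // mem_gib)
    per_host = min(per_host_cpu, per_host_mem)
    if per_host == 0:
        return None  # one guest exceeds a single host's budget
    return -(-large_vms // per_host)  # ceiling division
```

For twelve 16 vCPU / 96 GiB guests on 32-core, 512 GiB hosts at 4:1 CPU overcommit and an 85% memory cap, memory limits each host to 4 such guests even though CPU would allow 8, so packing, not aggregate capacity, drives the host count.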

FAQ:

Is a higher overcommit ratio always better?

No. It raises planned density only when that resource is the limiter, and it can create scheduling or memory pressure if workloads are active at the same time.

Why does the VM count drop after adding failure tolerance?

The scenario removes failed hosts before calculating capacity, then applies the same reserves and policy caps to the surviving hosts.

Can this replace platform admission control?

No. It is a planning calculator. Hypervisor admission control, placement engines, and live telemetry remain authoritative for production decisions.

Glossary:

vCPU:pCPU ratio
The planned virtual CPU allocation per physical core.
NUMA
A hardware locality boundary where memory attached to one CPU node is faster for that node.
Runway
The difference between safe VM ceiling and the current or projected VM count.