Introduction:

VM migration time is the expected duration for moving a virtual machine workload from one host or storage path to another. The estimate matters during maintenance windows, cluster drains, hardware replacements, storage moves, and evacuation runs, because a move that looks small by VM count can still miss the window when memory, disk payload, write churn, and usable migration bandwidth are not aligned.

Live migration is usually a race between transfer speed and change speed. Memory can be copied while the VM continues running, but pages that change during the copy have to be sent again. Disk movement has a similar catch-up problem when storage writes continue during the migration. A quiet web VM and a busy database VM can have the same configured memory and disk sizes yet produce very different cutover risk.

Figure: VM migration timeline with setup, disk copy, memory pre-copy rounds, stopped copy, and a migration window target.

A migration estimate should separate elapsed migration time from service downtime. Most of the copy can happen while the guest is still running. The visible pause happens near the end, when the remaining dirty memory and cutover overhead are handled. A plan can fit a one-hour change window and still be unacceptable if the stopped phase exceeds the application downtime target.

The result is a planning estimate, not a compatibility check. CPU feature mismatch, host policy, storage reachability, network isolation, encryption cost, device passthrough, snapshots, and hypervisor limits can still block or reshape a migration. Use the estimate to size the window, compare choices, and decide what needs a real migration test before the change is approved.

Technical Details:

Live VM migration commonly uses iterative pre-copy for memory. The source host sends memory while the VM keeps running, tracks pages that were modified during that round, then sends the changed pages again. The process can finish with a short stopped copy when the remaining dirty set is small enough for the downtime budget. If the guest changes memory almost as fast as the migration link sends it, pre-copy rounds shrink slowly or stop shrinking.
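The round-by-round behavior described here can be sketched as a small recurrence. This is a minimal model, assuming a constant dirty rate, a constant link rate, and even compression across rounds; `simulate_precopy` and its parameters are illustrative names, not any hypervisor's API:

```python
def simulate_precopy(memory_mb, link_mbps, dirty_mbps, compression=1.0,
                     stop_mb=128, max_rounds=30):
    """Simulate iterative pre-copy rounds.

    Each round sends the current dirty set at the effective logical link
    rate (link * compression) while the guest keeps dirtying pages at
    dirty_mbps. Returns the dirty-set size (MB) before each round.
    """
    link_mb_s = link_mbps / 8 * compression   # effective logical MB/s
    dirty_mb_s = dirty_mbps / 8               # guest write rate, MB/s
    dirty = memory_mb                         # round 1 sends all memory
    history = [dirty]
    for _ in range(max_rounds):
        round_time = dirty / link_mb_s        # seconds to send this dirty set
        dirty = min(memory_mb, dirty_mb_s * round_time)  # pages re-dirtied
        history.append(dirty)
        if dirty <= stop_mb:                  # small enough for stop-copy
            break
    return history

# A 64 GiB guest on a 5,000 Mbps link with a 400 Mbps dirty rate:
# the dirty set shrinks by the ratio r = dirty/link each round.
rounds = simulate_precopy(64 * 1024, 5000, 400)
```

With a dirty rate well below the link rate, the dirty set collapses geometrically in a few rounds; as the dirty rate approaches the link rate, each round removes almost nothing and the loop runs to `max_rounds`.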

Shared-storage live migration mainly models memory movement because the VM disk is already visible to the destination. Shared-nothing live migration adds disk payload transfer, so elapsed time can be dominated by storage bytes even when memory convergence is healthy. A storage-only move removes live memory pre-copy from the model and focuses on whether disk data can catch up while writes continue.

Bandwidth should be the sustained migration throughput available to this VM, not the physical interface speed. Parallel migrations divide the entered bandwidth into a per-VM share. Compression reduces the transferred byte count by the selected logical-to-transferred ratio, while setup time, cutover overhead, and schedule buffer add planning time outside the raw copy math.

Formula Core:

The model converts memory and disk into bytes, applies compression, divides bandwidth across parallel moves, and then adds the phases that consume the migration window.

Bvm = Bentered / P
Tdisk = (Draw / C) / (Bvm - Wdisk / C)
r = Wmemory / (C × Bvm)
Tround = Mdirty / (C × Bvm)
Mnext = min(Mvm, Wmemory × Tround)
Ttotal = Tsetup + Tdisk + Tprecopy + Tdowntime + Tbuffer

Variables used in the VM migration time equations:

Symbol | Meaning | Unit or rule
Bvm | Usable migration bandwidth per VM | Entered Mbps divided by Parallel migrations sharing link, converted to bytes per second.
C | Compression ratio | Minimum 1.0; a value of 1.2 sends one compressed byte for about 1.2 logical bytes.
Draw | Raw disk payload | Included for shared-nothing and storage-only migration modes.
Wdisk | Disk churn during copy | If churn consumes the per-VM link share, disk copy is reported as not converging.
Wmemory | Memory dirty rate | Average memory write rate during pre-copy, entered in Mbps.
r | Dirty-rate-to-link ratio | Values near 1.0 mean memory changes nearly match migration throughput.
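The Tdisk expression above can be written out directly, with the GiB-to-decimal-megabyte conversion made explicit. This is a sketch; the function name and unit choices are assumptions of the illustration:

```python
def disk_copy_seconds(disk_gib, link_mbps, churn_mbps=0.0,
                      compression=1.0, parallel=1):
    """Tdisk = (Draw / C) / (Bvm - Wdisk / C), in seconds.

    Returns None when compressed churn consumes the per-VM link
    share, i.e. the disk copy does not converge.
    """
    b_vm = link_mbps / parallel / 8               # per-VM share, decimal MB/s
    payload_mb = disk_gib * 1073.741824 / compression   # GiB -> compressed MB
    churn_mb_s = churn_mbps / 8 / compression     # compressed churn, MB/s
    drain = b_vm - churn_mb_s                     # net drain rate
    if drain <= 0:
        return None                               # disk copy cannot catch up
    return payload_mb / drain
```

For example, 250 GiB at a 5,000 Mbps per-VM share and 1.2 compression drains in roughly six minutes, while 500 GiB at 800 Mbps against 900 Mbps of churn never converges.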

Stop Conditions and Result Rules:

Stop conditions and result rules for VM migration planning:

Rule | How it is evaluated | Meaning for the result
Disk catch-up | Usable bandwidth per VM > disk churn after compression. | If false, Disk copy and the total migration time become not converging.
Pre-copy stop budget | Maximum of the Dirty-set stop threshold and the dirty bytes that fit the downtime copy budget. | A round can stop when the next dirty set is small enough for either budget.
Round limit | Pre-copy round limit is clamped from 1 to 30. | If the dirty set has not met the stop budget by the final round, the status becomes forced stop-copy.
Downtime fit | Stopped copy <= Downtime target, unless the target is 0. | A migration can fit the full window while still showing downtime risk.
Window fit | Total modeled time <= Migration window, unless the window is 0. | The headline status moves to window risk when elapsed time misses the window.
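The pre-copy stop budget is the larger of two byte budgets: the configured threshold and the bytes the per-VM link can send within the downtime target. A sketch, assuming the same decimal-Mbps units as the rest of the model (names are illustrative):

```python
def stop_budget_mb(threshold_mib, downtime_s, link_mbps,
                   compression=1.0, parallel=1):
    """Dirty-set size (decimal MB) at which pre-copy may stop.

    Meeting either budget is enough to enter stop-copy: the
    configured threshold, or the logical bytes that fit inside
    the downtime target at the effective per-VM link rate.
    """
    b_vm = link_mbps / parallel / 8 * compression  # logical MB/s
    downtime_mb = b_vm * downtime_s                # bytes that fit the pause
    return max(threshold_mib * 1.048576, downtime_mb)  # MiB -> decimal MB
```

With a 256 MiB threshold, a 0.5 s downtime target, and a 5,000 Mbps link at 1.2 compression, the downtime budget (375 MB) dominates; with a 0 s target, the threshold alone applies.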

Mode Scope:

Migration modes and which phases they include:

Migration type | Memory modeled | Disk modeled | Typical use
Shared-nothing live migration | Yes | Yes | Host and storage move where disk payload must cross the migration link.
Live migration with shared storage | Yes | No | Compute move where both hosts already see the VM disk.
Storage-only VM move | No | Yes | Storage relocation or disk-copy planning without live memory pre-copy.

Everyday Use & Decision Guide:

Start with the migration path. Choose Shared-nothing live migration when disk data has to move with the VM, Live migration with shared storage when storage is already shared, and Storage-only VM move when the question is only the disk transfer. That first choice prevents disk payload from being counted in a memory-only move, or from being ignored when local storage really must be copied.

Use measured values whenever possible. Enter configured or active VM memory, the disk payload or changed-block delta in Disk data to move, and a sustained Usable migration bandwidth after reservations, policy limits, encryption, storage bottlenecks, and other VM moves are considered. Link speed is often too optimistic for a change approval.

  • Use Memory dirty rate from hypervisor counters, migration logs, or a conservative workload estimate. Busy databases, caches, and write-heavy jobs usually need more headroom.
  • Set Compression ratio to 1.0 when compression is disabled or already included in the measured throughput.
  • Add Setup and validation time for pre-checks, placement, lock acquisition, and post-migration checks that consume the window.
  • Use Cutover overhead for pause-time actions such as device state, resume, route or ARP updates, and operator confirmation.
  • Open Advanced when disk writes continue during copy, several migrations share the same link, host policy limits pre-copy rounds, or the final dirty set must be capped.

The summary badge group is the fastest first read. ready to plan means inputs are valid and the model does not see an immediate window or convergence problem. forced stop-copy, window risk, disk not converging, or needs input should slow the plan down before the result is copied into a runbook.

Use Window Guidance to decide the next experiment. More bandwidth helps both disk transfer and memory pre-copy. Lower dirty rate helps downtime. Pre-seeding storage or reducing the disk delta helps shared-nothing moves. Staggering parallel migrations raises the per-VM share when one workload is on the critical path.

Step-by-Step Guide:

Build the estimate from the migration type, copy sizes, throughput, and targets, then check the phase ledger before using the headline number.

  1. Choose Migration type. The result should include disk phases for shared-nothing and storage-only moves, and show not in mode for disk copy when shared storage is selected.
  2. Enter VM memory and Disk data to move with the right units. Validation errors appear if a selected mode needs memory or disk data and the matching value is 0.
  3. Set Usable migration bandwidth, Memory dirty rate, and Compression ratio. A bandwidth value of 0 or a compression ratio below 1.0 triggers needs input.
  4. Add Setup and validation time, Cutover overhead, Migration window, and Downtime target. The summary updates with total modeled time, window status, and downtime status.
  5. Open Advanced for Disk churn during copy, Parallel migrations sharing link, Pre-copy round limit, Dirty-set stop threshold, and Schedule buffer when those conditions exist in the runbook.
  6. Review Migration Phases first. Check each phase's Duration, Elapsed, Status, and Basis before trusting the headline total.
  7. Open Convergence Rounds for live-memory moves. The Dirty set after round, Projected downtime, and Decision columns show why pre-copy stopped or why the round limit was reached.
  8. Use Window Guidance, Phase Duration Chart, Dirty Set Convergence Curve, and JSON after the assumptions match the migration plan you intend to communicate.

Interpreting Results:

Migration window is the headline elapsed time, but it should be read with the badges and the Migration Phases table. A short total with downtime risk can still violate the application pause target. A long total with downtime fits may be acceptable when the change window is large enough and the service can tolerate the background copy.

The dirty/link badge is the main convergence cue for live-memory moves. Low values leave headroom for pre-copy rounds to shrink the dirty set. Values near 100% mean the workload is changing memory almost as fast as the migration can send it, so a forced stopped copy or post-copy policy may become part of the operational decision.

How to interpret VM migration time outputs:

Output cue | What it means | Useful follow-up
inside window | The total modeled time is within the entered Migration window. | Still check Downtime fit and the longest phase before approving the plan.
downtime risk | The stopped copy plus cutover overhead exceeds the entered Downtime target. | Lower dirty rate, increase bandwidth, allow more downtime, or schedule during quieter activity.
forced stop-copy | Pre-copy reached the round limit before the dirty set met the stop budget. | Inspect Convergence Rounds and decide whether more rounds, workload throttling, or a different migration policy is safer.
disk not converging | Disk churn is at or above the compressed per-VM migration link share. | Quiesce writes, pre-seed the disk, reduce the disk delta, or reserve more bandwidth.
needs input | At least one required value is invalid for the selected migration type. | Use the error message above the results to fix bandwidth, compression ratio, memory, disk size, or parallel count.

Do not treat a green plan as proof that the hypervisor will complete the move. The estimate assumes the hosts, VM devices, destination resources, storage access, and migration network are already valid. Keep recent test evidence or migration logs with the change record when downtime targets are tight.

Worked Examples:

Shared-nothing move inside a one-hour window

A VM with 64 GiB memory, 250 GiB disk data, 5,000 Mbps usable bandwidth, 400 Mbps dirty rate, and 1.2 compression returns 15.9 min for Migration window. The phase table shows Disk copy at about 6.0 min, Memory pre-copy at about 1.5 min, and Stopped copy at 26.1 sec. The result is ready to plan, inside window, and downtime fits for a 45 sec downtime target.
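The disk and first-round memory figures here can be reproduced from the per-VM rates, treating GiB as 2^30 bytes and Mbps as decimal:

```python
# Per-VM link share in decimal megabytes per second.
link_mb_s = 5000 / 8                                # 625.0 MB/s

# Full disk pass: 250 GiB compressed at ratio 1.2.
disk_s = (250 * 1073.741824 / 1.2) / link_mb_s      # seconds

# First memory pre-copy round: 64 GiB compressed at ratio 1.2.
mem_s = (64 * 1073.741824 / 1.2) / link_mb_s        # seconds
```

Both land near the quoted phase durations: about 6.0 min for the disk pass and about 1.5 min for the first memory round.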

Shared-storage live move with a small pause budget

With shared storage selected, the same 64 GiB memory VM and no disk copy at 10,000 Mbps, 250 Mbps dirty rate, and 1.3 compression returns about 7.0 min. The Disk copy phase is not in mode, and Stopped copy is about 15.8 sec against a 30 sec target. This is the kind of result where the runbook should focus on cutover validation rather than storage transfer time.

High dirty rate reaches the round limit

A 64 GiB shared-storage live move at 5,000 Mbps with a 4,500 Mbps dirty rate and only 3 pre-copy rounds returns forced stop-copy. The dirty/link badge shows 90.0%, and Stopped copy rises to about 1.7 min, missing a 45 sec downtime target even though the full 8.6 min elapsed time fits a 30 min migration window. The corrective path is to lower write activity, increase per-VM bandwidth, permit more rounds only if downtime allows it, or use a migration mode designed for non-converging workloads.

Disk churn blocks a storage-only move

A storage-only move with 500 GiB of disk data, 800 Mbps usable bandwidth, and 900 Mbps disk churn reports not converging. The Window Guidance row for Disk catch-up shows disk churn exceeds link, while live memory fields are not applicable. That result points to pre-seeding, quiescing writes, increasing the migration reservation, or splitting the move before cutover.

FAQ:

Why can the total fit the window but still show downtime risk?

The total includes setup, disk copy, memory pre-copy, stopped copy, and schedule buffer. Downtime target checks only the stopped memory copy plus Cutover overhead, so a migration can finish inside the change window while pausing the VM longer than the service target allows.

What dirty rate should I enter?

Use a measured memory dirty rate from hypervisor counters or migration logs when you can. If you only have an estimate, choose a conservative Mbps value for the workload period when migration will run, because the dirty/link ratio drives pre-copy convergence and downtime risk.

Should I enter line rate or measured throughput?

Enter sustained usable migration throughput in Usable migration bandwidth. The model divides that value by Parallel migrations sharing link, so a 20,000 Mbps reservation shared by 4 moves becomes 5,000 Mbps per VM.
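The split is a plain even division, clamped so a parallel count below 1 does not divide by zero (a sketch; the function name is illustrative):

```python
def per_vm_share(total_mbps: float, parallel: int) -> float:
    """Even split of the entered migration reservation across moves."""
    return total_mbps / max(1, parallel)
```

For example, `per_vm_share(20000, 4)` gives 5,000 Mbps, matching the answer above.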

What does not converging mean?

not converging means the entered write churn can consume the available transfer rate. For disk copy, disk churn is at or above the compressed per-VM link share. For memory pre-copy, a very high dirty-rate-to-link ratio can keep the dirty set from shrinking before the round limit.

Why do I get a needs input status?

needs input appears when the selected migration mode is missing required values or has invalid numeric settings. Fix the visible error, such as bandwidth at 0 Mbps, compression below 1.0, parallel migrations below 1, or zero memory or disk size for a mode that requires it.

Glossary:

Live migration
Moving a running VM while most memory copy happens before the final pause.
Pre-copy
Iterative memory copying while the VM keeps running and modified pages are tracked for later rounds.
Dirty rate
The rate at which the running workload changes memory pages during pre-copy.
Stopped copy
The paused phase that transfers the remaining dirty memory and adds cutover overhead.
Shared-nothing migration
A migration path where disk payload must move to the destination as part of the job.
Migration window
The elapsed change-window budget used to judge whether the modeled phases finish in time.
