{{ summary.title }}
{{ summary.primary }}
{{ summary.line }}
{{ badge.label }}
LACP bundle capacity inputs
Count only same-speed member links that can forward traffic when healthy.
links
Use the physical line rate before hash efficiency and reserve adjustments.
Gbps
Model the current outage or the failure case you want the bundle to survive.
links
Enter the peak aggregate demand you expect the bundle to carry.
Gbps
Lower this when traffic has few flows or poor source/destination entropy.
%
Use this to keep capacity planning below raw line-rate math.
%
A single flow usually cannot consume more than one member link, even when the aggregate has spare capacity.
Gbps
Metric Value Details Copy
{{ row.label }} {{ row.value }} {{ row.details }}
Failed links Active links Usable capacity Headroom State Copy
{{ row.failedDisplay }} {{ row.activeDisplay }} {{ row.capacityDisplay }} {{ row.headroomDisplay }} {{ row.state }}
Check Finding Evidence Action Copy
{{ row.check }} {{ row.finding }} {{ row.evidence }} {{ row.action }}

          

Introduction:

Link Aggregation Control Protocol (LACP) lets several same-speed Ethernet links act as one logical link aggregation group, usually called a LAG. The bundle can carry more total traffic than one member link and can keep forwarding after a member fails, but the usable result is not simply the raw port speed multiplied by the number of cables.

Capacity planning for a LAG depends on active member count, member line rate, planned reserve, and traffic distribution. A four-member 10 Gbps bundle may have 40 Gbps of raw line rate, yet a production planning number can be much lower after one member is down, a reserve is held back, and hashing is discounted for uneven flow spread.

Diagram of LACP traffic hashing across active member links with one failed member removed

LACP hashing normally assigns each conversation or flow to one member link so frames stay in order. That protects traffic from packet reordering, but it also means one large flow does not automatically use every member at once. Many smaller flows can fill the bundle well, while one backup stream, replication session, or storage flow can hit a single member ceiling.

A LAG estimate is most useful before a change window, port-speed upgrade, failure test, or capacity review. It helps answer whether the modeled demand fits after a failed link, how much headroom remains, and whether the largest expected flow is too large for one member even when aggregate capacity still looks healthy.

Technical Details:

IEEE link aggregation treats parallel full-duplex point-to-point links as one logical link for the MAC client. The individual links in a planning model should use the same line rate, because mixed-speed member behavior is platform-specific and can make aggregate math misleading. Failed or removed members are no longer part of the forwarding set.

Traffic distribution is hash-based rather than byte-perfect striping. Switches and routers commonly hash on source and destination MAC addresses, IP addresses, transport ports, VLANs, or platform-specific combinations. A diverse set of flows usually spreads better than a small number of repeated source-destination pairs. The hash efficiency percentage in this model is a planning discount for that imperfect spread.
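The effect of flow diversity on spread can be illustrated with a small sketch. Real switch ASICs use proprietary hash functions over configurable field sets; CRC32 here is only a stand-in for any deterministic header hash, and the addresses and ports are made up for the example.

```python
import zlib

def member_for_flow(src, dst, sport, dport, n_active):
    """Map a flow's header fields onto one active member, the way a
    hash-based distribution keeps each conversation on a single link."""
    key = f"{src}|{dst}|{sport}|{dport}".encode()
    return zlib.crc32(key) % n_active

# Many client ports between one host pair: keys differ, so flows
# usually land on several different members.
diverse = {member_for_flow("10.0.0.1", "10.0.1.1", p, 443, 4)
           for p in range(40000, 40064)}

# One repeated source-destination pair: every packet hashes the same
# way and stays pinned to a single member.
repeated = {member_for_flow("10.0.0.1", "10.0.1.1", 40000, 443, 4)
            for _ in range(64)}

print("members used by diverse flows:", len(diverse))
print("members used by one repeated flow:", len(repeated))
```

The second set always has exactly one element, which is the behavior the hash efficiency discount is meant to absorb.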

Formula Core:

The calculation first removes failed members, then applies reserve and hash factors to the active raw line rate. Headroom compares that usable planning capacity with planned demand.

N_active = max(0, N_total - N_failed)
payload factor = 1 - (reserve percent / 100)
hash factor = hash efficiency percent / 100
usable capacity = N_active x member speed x payload factor x hash factor
demand headroom = usable capacity - planned demand
LACP capacity variables and meanings
Quantity | Meaning | Unit or range
Member links | Configured same-speed physical links in the bundle. | 1 to 64 links
Member speed | Line rate of each healthy member before reserve and hash discount. | 0.1 to 800 Gbps
Failed members | Members removed from the modeled forwarding set. | 0 to member count
Hash efficiency | Planning factor for uneven flow distribution across active members. | 1% to 100%
Overhead and reserve | Capacity held back before the hash discount. | 0% to 80%
Planned demand | Traffic load compared with usable planning capacity. | Gbps
Largest single flow | Expected largest conversation that may be pinned to one member. | Gbps
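The capacity formula above can be sketched directly in Python. This is a minimal illustration of the page's math, not the tool's actual implementation; the function names are the author's own.

```python
def usable_capacity(members, speed_gbps, failed, reserve_pct, hash_pct):
    """Usable planning capacity in Gbps: active members times member
    speed, discounted by the payload (reserve) and hash factors."""
    active = max(0, members - failed)
    payload_factor = 1 - reserve_pct / 100
    hash_factor = hash_pct / 100
    return active * speed_gbps * payload_factor * hash_factor

def demand_headroom(members, speed_gbps, failed, reserve_pct, hash_pct,
                    demand_gbps):
    """Usable planning capacity minus planned demand."""
    return usable_capacity(members, speed_gbps, failed,
                           reserve_pct, hash_pct) - demand_gbps

# 4 x 10 Gbps with one failed member, 3% reserve, 78% hash efficiency:
# 3 x 10 x 0.97 x 0.78 = 22.698 Gbps usable.
print(round(usable_capacity(4, 10, 1, 3, 78), 2))
```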

Single-flow checking uses a separate ceiling because aggregate headroom cannot rescue a flow that hashes to one member. The per-member payload ceiling equals member speed multiplied by the payload factor. If the largest single flow is above that ceiling, the single-flow check reports a pinned-flow risk even when total demand still fits.
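The single-flow check described above reduces to a comparison against one member's payload ceiling. A minimal sketch, using illustrative function names:

```python
def per_member_ceiling(speed_gbps, reserve_pct):
    """Payload ceiling of a single member link after the reserve cut."""
    return speed_gbps * (1 - reserve_pct / 100)

def pinned_flow_risk(largest_flow_gbps, speed_gbps, reserve_pct):
    """True when the largest single flow cannot fit on one member,
    regardless of how much aggregate headroom remains."""
    return largest_flow_gbps > per_member_ceiling(speed_gbps, reserve_pct)

# 25 Gbps members with a 5% reserve give a 23.75 Gbps ceiling, so a
# 30 Gbps flow is flagged even when the bundle total still fits.
print(per_member_ceiling(25, 5))
print(pinned_flow_risk(30, 25, 5))
```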

LACP bundle status rules
Status or row state | Rule | Meaning
No active members | Active members are 0 and planned demand is above 0. | The bundle has no modeled forwarding capacity for the demand.
Capacity short or Short | Demand headroom is below 0 Gbps. | The modeled demand exceeds usable capacity.
Near ceiling | Demand utilization is at least 90% and headroom is non-negative. | The demand fits, but the remaining margin is thin.
Watch demand or Watch | Demand utilization is at least 70% and below 90%. | The demand fits with visible pressure on the bundle.
Capacity ok or Fits | Demand utilization is below 70% with non-negative headroom. | The modeled demand fits with broader headroom.
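The status rules can be expressed as a small classifier. This is a sketch of the rules as stated above, not the tool's own source:

```python
def bundle_state(active_members, usable_gbps, demand_gbps):
    """Map headroom and utilization onto the page's status labels."""
    if active_members == 0 and demand_gbps > 0:
        return "No active members"
    if usable_gbps - demand_gbps < 0:
        return "Capacity short"
    utilization = (demand_gbps / usable_gbps * 100) if usable_gbps else 0.0
    if utilization >= 90:
        return "Near ceiling"
    if utilization >= 70:
        return "Watch demand"
    return "Capacity ok"

print(bundle_state(4, 80.75, 80))   # ~99.1% utilization, still fits
print(bundle_state(6, 39.9, 42))    # demand exceeds usable capacity
```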

The member failure ladder repeats the same capacity formula for each possible failed-member count from zero through the full bundle. This shows the last failure point where demand still fits, then reports how many additional failed members are tolerated from the current scenario. A result of zero additional failures means the current state is already the last modeled safe state for the entered demand.
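The ladder walk above amounts to repeating the capacity formula per failed-member count. A minimal sketch under the same assumptions, with illustrative names:

```python
def last_safe_failures(members, speed_gbps, reserve_pct, hash_pct,
                       demand_gbps):
    """Walk the failure ladder and return the highest failed-member
    count at which demand still fits, or -1 if it never fits."""
    payload_factor = 1 - reserve_pct / 100
    hash_factor = hash_pct / 100
    last_fit = -1
    for failed in range(members + 1):
        usable = (members - failed) * speed_gbps * payload_factor * hash_factor
        if usable >= demand_gbps:
            last_fit = failed
        else:
            break  # capacity only shrinks as more members fail
    return last_fit

# 4 x 10 Gbps, 18 Gbps demand, 3% reserve, 78% hash: demand fits
# through one failed member. From a one-failure scenario that is
# zero additional tolerated failures, i.e. N+0 from current.
last = last_safe_failures(4, 10, 3, 78, 18)
print("last safe failed-member count:", last)
print("additional tolerated from one failure:", last - 1)
```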

Everyday Use & Decision Guide:

Start with the physical bundle you actually intend to operate. Enter the number of Member links, the same Member speed for those links, and the Failed members scenario you want to survive. For a design review, model both the normal state and the failure case that operations expects, such as one failed member in a four-link bundle.

Use Planned demand for the peak aggregate traffic you need the LAG to carry, not the average day rate. Keep Hash efficiency conservative when traffic has a small number of elephant flows, repeated source-destination pairs, or a hash policy that does not include enough fields. Use Overhead and reserve for operational space, protocol overhead, monitoring uncertainty, or a deliberate safety margin.

  • Check the summary first. It names Capacity ok, Watch demand, Near ceiling, Capacity short, or No active members.
  • Open Bundle Snapshot for Usable planning capacity, Planned demand, Failure tolerance, and Single-flow pinning.
  • Use Member Failure Ladder to see the first failed-link count that turns the state from Fits or Watch into Short.
  • Use Failure Capacity Curve when a visual comparison of usable capacity against demand is easier to discuss in a design review.
  • Read Hashing Action Brief before changing hardware. It separates demand fit, failure margin, single-flow risk, hash entropy, and reserve assumptions.

A healthy aggregate result does not mean every application flow is safe. If Single-flow pinning says the largest flow is beyond one member, adding more same-speed members may improve total capacity while leaving that one flow limited. In that case, increasing member speed, splitting the flow, changing the application path, or reducing that flow size is more relevant than adding another identical member.

Calculations run in the page from the values you enter. Copy or download tables and JSON only after checking that the units, failed-member scenario, and hash efficiency match the capacity question you plan to document.

Step-by-Step Guide:

Build the model from the physical bundle first, then read the failure and flow checks before using the result as planning evidence.

  1. Enter Member links for the configured LAG size. If the warning list says the value must be from 1 to 64, correct the count before reading the summary.
  2. Enter Member speed in Gbps. Use the physical line rate of each member, not the aggregate rate.
  3. Set Failed members to the outage or maintenance case being tested. If failed members exceed configured members, the validation panel will ask you to fix the count.
  4. Enter Planned demand for the peak aggregate load to compare against capacity.
  5. Set Hash efficiency and Overhead and reserve. Lower hash efficiency when traffic diversity is weak, and raise reserve when you want more operational headroom.
  6. Enter Largest single flow if one conversation may dominate the bundle. Compare it with Single-flow pinning in the snapshot.
  7. Read the summary line for active members, usable capacity, and headroom, then open Member Failure Ladder to find the last failed-member count that still fits demand.
  8. Use Hashing Action Brief to decide whether the next action is adding capacity, improving traffic spread, increasing member speed, or documenting the current failure margin.

Finish with the output surface that matches the decision: Bundle Snapshot for a short capacity note, Member Failure Ladder for N+ planning, Failure Capacity Curve for a review graphic, or JSON for a structured record.

Interpreting Results:

Read Usable planning capacity and Planned demand together. Positive headroom means the aggregate demand fits under the entered reserve and hash assumptions. Negative headroom means the bundle is short by that many Gbps in the current failed-member scenario.

  • Configured members shows raw aggregate line rate before failures, reserve, and hashing are applied.
  • Active members shows how many links remain after the failed-member scenario.
  • Failure tolerance shows the total failed-member count where demand still fits and how many more failures are safe from the current state.
  • Single-flow pinning compares the largest expected flow with one member's payload ceiling.
  • Hashing Action Brief turns the same numbers into practical follow-up actions.

A good status label does not prove the LAG will balance evenly in production. It depends on the traffic mix, platform hash fields, active member health, and actual counters during peak load. After a modeled pass, check real member utilization and confirm that no single conversation is saturating one member while other members sit below their share.

Worked Examples:

A four-member bundle with 10 Gbps members, one failed member, 18 Gbps planned demand, 78% hash efficiency, and 3% reserve has three active members. The usable planning capacity is 22.70 Gbps, so Planned demand has about +4.70 Gbps headroom at roughly 79.3% utilization. Failure tolerance stays at one total failed member, so another failure would push demand beyond the modeled capacity.

A 4 x 25 Gbps bundle with no failed members, 80 Gbps demand, 85% hash efficiency, 5% reserve, and a 30 Gbps largest flow produces about 80.75 Gbps usable capacity. The summary should read Near ceiling because utilization is about 99.1%. The per-member payload ceiling is only 23.75 Gbps, so Single-flow pinning reports that the 30 Gbps flow is too large for one member even though aggregate demand barely fits.

An eight-member 10 Gbps bundle with two failed members, 42 Gbps demand, 70% hash efficiency, and 5% reserve has six active members and about 39.90 Gbps usable capacity. Capacity short is the expected result, with about -2.10 Gbps headroom. The failure ladder shows demand would still fit with one failed member, so the current two-failure condition is already past the modeled tolerance.
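The three worked examples above can be reproduced with a few lines of arithmetic, which is a useful sanity check before quoting the numbers in a review:

```python
def usable(members, speed_gbps, failed, reserve_pct, hash_pct):
    """Usable planning capacity in Gbps, per the Formula Core section."""
    return (max(0, members - failed) * speed_gbps
            * (1 - reserve_pct / 100) * (hash_pct / 100))

checks = [
    (usable(4, 10, 1, 3, 78), 22.70),   # first example
    (usable(4, 25, 0, 5, 85), 80.75),   # second example
    (usable(8, 10, 2, 5, 70), 39.90),   # third example
]
for got, expected in checks:
    assert abs(got - expected) < 0.005, (got, expected)
print("worked examples reproduce")
```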

A quick validation failure can happen when Member links is 4 and Failed members is entered as 5. The warning panel reports that failed members cannot exceed configured member links, and the summary changes to Check inputs. Fix the failed-member count before using the snapshot, ladder, chart, or JSON.

FAQ:

Why is usable capacity lower than raw bundle speed?

Usable planning capacity removes failed members, then applies Overhead and reserve and Hash efficiency. Raw aggregate line rate appears in Configured members, but it is not the final planning number.

Can one flow use the whole LACP bundle?

Usually no. The model treats the largest single flow as pinned to one member, so Single-flow pinning compares that flow with the per-member payload ceiling rather than the aggregate bundle capacity.

What hash efficiency should I enter?

Use a higher value when many flows spread well across members. Lower it when traffic has only a few large conversations, weak source or destination variety, or a platform hash policy that does not match the traffic mix.

What does N+0 from current mean?

The current failed-member scenario is already the last modeled state where demand fits. In Member Failure Ladder, the next additional failed member should move the row state toward Short or No forwarding.

Why do I see Check inputs?

At least one field is outside the accepted range. Common causes are fewer than one member link, more failed members than configured links, hash efficiency outside 1% to 100%, or reserve outside 0% to 80%.

Does the chart prove the bundle is balanced?

No. Failure Capacity Curve shows the modeled capacity at each failed-member count. Use interface counters and flow telemetry during peak load to confirm real distribution across members.

Glossary:

LACP
Link Aggregation Control Protocol, the negotiation protocol used to form and monitor a link aggregation group.
LAG
Link aggregation group, a logical link made from multiple member links.
Member link
One physical link that participates in the bundle.
Hash efficiency
Planning percentage used to discount usable capacity for uneven flow distribution.
Overhead and reserve
Percentage of active raw capacity held back before hash efficiency is applied.
Headroom
Usable planning capacity minus planned demand.
Single-flow pinning
The risk that one large conversation is limited by one member link instead of the full bundle.
Failure ladder
Capacity comparison across each possible failed-member count.
