Introduction:

Bandwidth-delay product, or BDP, is the amount of data that can be in flight on a network path before acknowledgments return. It connects two numbers that are often considered separately: the path rate and the round-trip time. A fast link with noticeable latency needs a larger TCP window than the same link on a nearby LAN, because more bytes must be outstanding while the sender waits for acknowledgments.

The measure matters most when a WAN, VPN, private link, satellite path, or inter-region transfer has enough bandwidth to look healthy on paper but one TCP flow cannot fill it. A 1 Gbps path with an 80 ms round trip needs several mebibytes of in-flight data before it can approach line rate. A small receive window or socket buffer can make that same path behave like a much slower connection even when there is no obvious interface error.
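
To make that arithmetic concrete, here is a minimal Python sketch of the raw sizing math (the helper name and unit handling are illustrative, not taken from this page):

    def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
        """Raw bandwidth-delay product: bits in flight during one RTT, as bytes."""
        bits_in_flight = bandwidth_mbps * 1_000_000 * (rtt_ms / 1000)
        return bits_in_flight / 8

    # A 1 Gbps path with an 80 ms round trip needs about 10,000,000 bytes
    # (~9.54 MiB) in flight before it can approach line rate.
    print(bdp_bytes(1000, 80) / 2**20, "MiB")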

[Diagram: a sender and receiver connected by a path, with bandwidth, round-trip time, and in-flight bytes forming the bandwidth-delay product.]

BDP is a sizing estimate, not a promise that an application will run at that speed. Congestion control, packet loss, host buffers, application read and write behavior, disk speed, encryption overhead, and middleboxes can all reduce real throughput. The value is still useful because it tells you whether the window is obviously too small before you chase less likely causes.

Parallel streams change the practical target. If one flow needs a large receive window, several flows can divide the required in-flight bytes across multiple TCP connections. That can rescue a transfer when the tool supports parallel streams, but it should not be confused with fixing the underlying path. The same RTT and bandwidth still define the aggregate amount of data that must be kept moving.

Technical Details:

BDP combines rate and time. Bandwidth is entered as megabits per second, RTT is entered in milliseconds, and the result is reported as bytes with binary units such as KiB, MiB, and GiB. The receive window comparison uses the per-stream target, because each TCP flow has its own effective window.

The calculation first reduces the raw path rate by the target utilization and the optional protocol overhead reserve. That payload-rate target is then multiplied by RTT and divided by eight to convert bits into bytes. Parallel streams do not reduce the aggregate BDP; they divide it across flows, which lowers the per-stream window target.
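
The same steps in a short Python sketch (the function name and defaults are mine; the numbers mirror the default example below):

    def bdp_target_bytes(bandwidth_mbps: float, rtt_ms: float,
                         utilization: float = 0.90, overhead: float = 0.0) -> float:
        """Aggregate BDP target: payload-rate target multiplied by RTT, in bytes."""
        payload_mbps = bandwidth_mbps * utilization * (1 - overhead)
        return payload_mbps * 1_000_000 * (rtt_ms / 1000) / 8

    aggregate = bdp_target_bytes(1000, 80)    # 9,000,000 bytes, ~8.58 MiB
    per_stream = aggregate / 4                # four streams: ~2.15 MiB each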

Formula Core

The primary equation computes the aggregate bytes that need to be in flight for the selected target:

BDP = ( B * U * ( 1 - O ) * R ) / 8
Bandwidth-delay product variables and units:
  • B: path bandwidth, in bits per second. Higher bandwidth raises the aggregate BDP in direct proportion.
  • U: target utilization, a fraction from 0.01 to 1.00. A 90% target sizes the window for 90% of the path, not for a perfect fill.
  • O: protocol overhead reserve, a fraction from 0.00 to 0.90. Overhead reduces the payload-rate target before BDP is calculated.
  • R: round-trip latency, in seconds. Higher RTT raises the amount of unacknowledged data needed in flight.
  • BDP / streams: per-stream window target, in bytes per TCP flow. More parallel streams lower the target window for each flow.

With the default values, the payload target is 900 Mbps because the path is 1,000 Mbps and the target utilization is 90%. An 80 ms RTT gives 9,000,000 aggregate bytes in flight, which is about 8.58 MiB. With four streams, the per-stream target is about 2.15 MiB.
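
Written out, the defaults substitute into the formula above as:

    BDP = ( 1,000,000,000 * 0.90 * ( 1 - 0 ) * 0.080 ) / 8 = 9,000,000 bytes ≈ 8.58 MiB
    9,000,000 / 4 streams = 2,250,000 bytes ≈ 2.15 MiB per stream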

Core outputs generated by the bandwidth-delay product calculation:
  • Aggregate BDP target: total bytes needed in flight across all selected streams. Use it to understand the path-wide buffer demand created by rate and RTT.
  • Per-stream window target: the aggregate BDP divided by the selected flow count. Compare this value with the current receive window or socket buffer for one flow.
  • Current aggregate ceiling: the maximum modeled throughput from the current per-stream window and flow count, capped at path bandwidth. Use it to see whether the existing window can plausibly reach the selected target.
  • Packets in flight: the aggregate BDP divided by the chosen maximum segment size. Use it as packet-scale guidance for testing and discussion, not as a packet-capture count guarantee.
  • Needed streams with current window: the smallest integer stream count that can carry the aggregate BDP using the current window. Use it when you cannot raise the receive window but can adjust parallelism.
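
A sketch of how those derived outputs relate to one another, reconstructed from the descriptions above (the formulas are my reading of the table, not the page's published source):

    import math

    def derived_outputs(aggregate_bdp: float, streams: int, window: float,
                        rtt_ms: float, bandwidth_mbps: float, mss: int = 1460):
        """Derived outputs, as described above. Byte arguments are raw bytes."""
        per_stream_target = aggregate_bdp / streams
        # Each flow can move at most one window of data per RTT; the total
        # is capped at the path bandwidth.
        ceiling_mbps = min(window * 8 * streams / (rtt_ms / 1000) / 1_000_000,
                           bandwidth_mbps)
        packets_in_flight = aggregate_bdp / mss
        needed_streams = math.ceil(aggregate_bdp / window)
        status = "window fits" if window >= per_stream_target else "window short"
        return per_stream_target, ceiling_mbps, packets_in_flight, needed_streams, status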

Several guardrails keep the model within useful ranges. Invalid values are clamped rather than stopping the calculation, and the input audit records that adjustment. Target utilization above 95% receives a warning because a perfect fill leaves little room for jitter, loss recovery, or shared-link bursts. A per-stream target above 64 KiB receives a TCP window scaling warning because the original unscaled TCP receive window cannot advertise that much space.

Validation and warning rules used by the calculator:
  • Path bandwidth: must be greater than 0 Mbps. Zero or negative values are clamped above zero and flagged in Input audit.
  • Round-trip latency: must be greater than 0 ms. Zero or negative values are clamped above zero and flagged in Input audit.
  • Target utilization: 1% to 100%. Values outside the range are clamped; values above 95% also receive a caution.
  • Parallel streams: at least 1 flow. Fractional or low values are converted to a whole stream count of at least one.
  • Protocol overhead: 0% to 90%. The reserve lowers the payload-rate target before window sizing.
  • Per-stream window target above 64 KiB: warning threshold. Window scaling must be available for that per-flow target.
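
A minimal sketch of that clamping behavior (the exact fallback floors are not documented, so the values below are placeholders):

    def clamp_inputs(bandwidth_mbps, rtt_ms, utilization_pct, streams, overhead_pct):
        """Clamp out-of-range inputs and record each adjustment for the audit."""
        audit = []
        if bandwidth_mbps <= 0:
            bandwidth_mbps = 0.001                 # placeholder floor
            audit.append("bandwidth clamped above zero")
        if rtt_ms <= 0:
            rtt_ms = 0.001                         # placeholder floor
            audit.append("RTT clamped above zero")
        if not 1 <= utilization_pct <= 100:
            utilization_pct = min(max(utilization_pct, 1), 100)
            audit.append("utilization clamped to 1-100%")
        if streams < 1 or streams != int(streams):
            streams = max(1, int(streams))
            audit.append("streams converted to a whole count of at least one")
        if not 0 <= overhead_pct <= 90:
            overhead_pct = min(max(overhead_pct, 0), 90)
            audit.append("overhead clamped to 0-90%")
        return bandwidth_mbps, rtt_ms, utilization_pct, streams, overhead_pct, audit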

Everyday Use & Decision Guide:

Begin with measured RTT and the usable path rate, not the provider headline speed. For a shaped WAN, VPN, cloud private link, or storage replication path, Path bandwidth should reflect the rate that one transfer is allowed to fill. Set Target utilization to 80% to 95% for a tuning pass unless you are intentionally modeling an ideal ceiling.

Use Parallel streams to match the transfer tool you actually plan to run. A single-stream backup, SCP transfer, or database connection should use 1. A transfer tool that opens four data streams should use 4, because the page divides the aggregate BDP across those flows. If the application cannot open parallel streams and the current receive window is short, adding streams to the model only describes a workaround the application may not have.

  • Use Current receive window as the effective per-flow window or socket buffer you want to compare against the target.
  • Leave Protocol overhead at 0 for raw BDP math; add a reserve when tunnels, encryption, framing, or shared-link planning should reduce the payload target.
  • Use MSS when packet-scale guidance matters. It changes Packets in flight, not the byte target itself.
  • Read Input audit before trusting a surprising result. An adjusted input means the number is based on a supported fallback, not exactly on what you typed.

The most useful first answer is usually the comparison between Per-stream window target and Current receive window. If the status says window fits, the receive window is not the obvious limiter for the selected target. If it says window short, the guidance table shows whether raising the window or increasing stream count is the simpler next move.

Do not treat a fitted window as proof that the path is healthy. BDP sizing answers one narrow question: can enough bytes be outstanding during one RTT? If measured throughput is still low after the window fits, check packet loss, retransmits, congestion control, host CPU, disk, encryption, and application pacing before blaming the WAN link alone.

Step-by-Step Guide:

Run the sizing pass from the path you actually need to tune, then compare the per-flow target with the current receive window.

  1. Enter the sustained rate in Path bandwidth. Use the shaped WAN or VPN rate when it is lower than the physical interface speed.
  2. Enter measured idle round-trip time in Round-trip latency. After this value changes, Aggregate BDP target and Per-stream window target should move in the same direction.
  3. Set Target utilization. A practical pass usually uses 80% to 95%; a value above 95% will add a warning because little margin remains.
  4. Set Parallel streams to the number of TCP flows the transfer will use. Watch Per-stream window target; it should fall as the stream count rises while Aggregate BDP target stays tied to the path.
  5. Enter the current per-flow buffer in Current receive window. The summary status will show window fits or window short.
  6. Open Advanced only when the defaults do not match the path. Adjust MSS for tunnel or jumbo-frame assumptions and Protocol overhead for payload reserve.
  7. Check Window Metrics first, then Tuning Guidance. If Input audit says values were adjusted, fix the out-of-range field before using the result in a change plan.
  8. Use RTT Capacity Map to see how the window grows as latency changes, and use Stream Scaling Curve to judge how many flows the current window can support.

A clean finish is a result where the inputs are in range, the per-stream target matches your application model, and the guidance row for window size gives an action you can actually apply.

Interpreting Results:

Start with the status badge, then read the metric rows that explain it. window fits means the current per-stream receive window is at least as large as the calculated per-stream target. window short means the current window cannot keep enough bytes in flight for the selected rate, RTT, target utilization, overhead reserve, and stream count.

How to interpret the calculator status and follow-up cues:
  • window fits: the current receive window meets or exceeds Per-stream window target. Look for loss, host limits, or application pacing if throughput is still low.
  • window short: the current receive window is below Per-stream window target. Raise the per-flow window, or use the Needed streams with current window count if the application supports it.
  • check inputs: at least one value was clamped to a supported range. Review Input audit before using the result.
  • TCP window scaling warning: the target exceeds the original 64 KiB TCP window range. Confirm window scaling or autotuning is available on both endpoints.

Current aggregate ceiling is a modeled cap from the current window, stream count, and RTT. It is useful for explaining why a transfer stalls near a repeatable rate, but it does not account for packet loss, congestion-control backoff, disk speed, TLS overhead, or application throttling. Use it as a window-size check, then validate against a real transfer or a controlled throughput test.

The charts are comparison aids. RTT Capacity Map shows why a route change or remote region can raise the needed window even when bandwidth is unchanged. Stream Scaling Curve shows when extra flows stop helping because the current window already fills the path or because the path bandwidth cap has been reached.

Worked Examples:

Four-stream transfer over a 1 Gbps path

A storage copy uses a 1,000 Mbps private link with 80 ms RTT, 90% target utilization, four streams, and a 16 MiB current receive window. The Aggregate BDP target is about 8.58 MiB, and Per-stream window target is about 2.15 MiB. Because 16 MiB per stream is above the target, the status is window fits. If throughput is still poor, the next check should be loss, endpoint limits, or the storage application rather than receive-window size.

Single stream across a high-latency WAN

A single-flow backup tries to use 2,500 Mbps across 120 ms RTT at 90% target utilization with a 16 MiB current receive window. The Aggregate BDP target and Per-stream window target both land near 32.19 MiB because there is only one stream. The guidance row reports a shortfall of about 16.19 MiB and suggests 3 streams with the current window. If the application cannot open those streams, the practical fix is to raise the receive window or accept a lower target.
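
The numbers in this example can be reproduced with the same arithmetic shown earlier (a quick check, not the page's own code):

    import math

    aggregate = 2500 * 0.90 * 1_000_000 * 0.120 / 8   # 33,750,000 bytes, ~32.19 MiB
    current = 16 * 2**20                              # 16 MiB current receive window
    print((aggregate - current) / 2**20)              # shortfall: ~16.19 MiB
    print(math.ceil(aggregate / current))             # needed streams: 3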

Tunnel reserve on a fast inter-region link

A 10,000 Mbps path with 70 ms RTT, 85% target utilization, four streams, a 16 MiB receive window, and 5% protocol overhead produces an Aggregate BDP target near 67.38 MiB. The Per-stream window target is near 16.85 MiB, so the current window is just short. That small difference is worth noticing because the same path without the overhead reserve would be sized for a different payload target.

Input audit after a bad paste

A pasted setup accidentally leaves Target utilization at 125% and Parallel streams at 0. The page still calculates, but Input audit changes to Adjusted and the status becomes check inputs. Fix the fields back to 1% to 100% utilization and at least one stream before copying the Window Metrics or trusting the JSON values.

FAQ:

Why does latency make a fast link need a larger window?

TCP can only send so much unacknowledged data before it waits for more receive-window space. Higher RTT means acknowledgments take longer to come back, so the sender needs more bytes in flight to keep transmitting at the selected path rate.
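
The ceiling this creates is easy to estimate: a single flow can move at most one window of data per round trip (a sketch; the helper name is illustrative):

    def max_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
        """One window per round trip is the most a single TCP flow can move."""
        return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

    # An unscaled 64 KiB window over an 80 ms path caps near 6.6 Mbps,
    # no matter how fast the link itself is.
    print(max_throughput_mbps(64 * 1024, 80))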

Should I use Mbps or MiB for bandwidth?

Enter path rate in Path bandwidth as Mbps. The result converts that rate into bytes and displays window sizes with binary units such as KiB and MiB, which are more useful for socket buffers and receive windows.

Why does the page warn about TCP window scaling?

The warning appears when Per-stream window target is above 64 KiB. That target is larger than the classic unscaled TCP receive-window range, so the endpoints need TCP window scaling or autotuning to advertise enough receive space.

Does adding streams always improve throughput?

No. Extra streams reduce the per-flow window target in this model, but they only help when the application can really open those flows and when the path is not already limited by loss, congestion control, endpoint resources, or a configured rate cap.

What should I do when Input audit says values were adjusted?

Review the field ranges first. Keep bandwidth and RTT above zero, keep target utilization from 1% to 100%, use at least one stream, keep MSS at 200 bytes or higher, and keep protocol overhead from 0% to 90%.

Is the BDP result enough to prove a network fault?

No. A short receive window can explain a repeatable throughput ceiling, but BDP does not diagnose packet loss, queueing, CPU pressure, disk limits, encryption overhead, firewall inspection, or application pacing. Use the result as one check in a wider performance investigation.

Glossary:

Bandwidth-delay product
The amount of data that must be in flight to fill a path at a given rate and round-trip time.
Round-trip latency
The time for data to reach the other endpoint and for an acknowledgment or response to return.
Receive window
The amount of data a TCP receiver can advertise as available space for incoming bytes.
Maximum segment size
The TCP payload size used to estimate how many packets make up the aggregate in-flight data.
Protocol overhead
A reserve for bytes consumed by framing, encryption, tunneling, or other non-payload work.
Window scaling
A TCP option that allows receive windows larger than the original 16-bit window field.
