Network throughput is the usable payload rate a path can sustain after latency, window size, loss, protocol overhead, and sharing have taken their share of the headline link speed. A 1 Gbps circuit can move far less than 1 Gbps of application data when a single TCP stream cannot keep enough bytes in flight or when packet loss forces congestion control to back off.
That distinction matters when planning backup windows, branch transfers, data-center replication, and wide-area file moves. The useful question is not only how fast the port is, but which ceiling is active for the transfer shape in front of you. Link share, receive window, loss, packet rate, and retransmission penalty can each become the limit.
Throughput estimates are planning numbers, not a replacement for a measured transfer. They are most useful when the inputs come from the same path: observed round-trip time, realistic packet loss, the real TCP window or stream count, and a payload size in the correct unit family.
Payload throughput begins with raw line rate and then removes the effects that do not carry application bytes. Protocol overhead reduces payload rate, link share reserves only part of the path for the job, and retransmission penalty models extra work caused by retries or duplicate traffic. TCP window and round-trip time add a bandwidth-delay product limit: a sender cannot fill the path unless enough unacknowledged bytes can stay in flight.
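In symbols, with receive window $W$ per stream, round-trip time $RTT$, and payload capacity $C$, the window-limited rate and the bandwidth-delay product are:

$$
R_{\text{window}} = \frac{W}{RTT}, \qquad \mathrm{BDP} = C \times RTT
$$

A 1 MiB window at 40 ms, for example, caps a single stream near 210 Mbit/s no matter how fast the link is.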
The loss ceiling uses a Mathis-style relationship. The modeled rate rises with maximum segment size and falls as round-trip time and packet loss rise. That is why a small loss percentage can dominate a long-distance path even when the port speed looks generous.
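The commonly cited form of that relationship, with maximum segment size $MSS$, round-trip time $RTT$, and loss probability $p$ (the calculator's loss-model profile may use a different constant than the usual $\sqrt{3/2}$):

$$
R_{\text{loss}} \approx \frac{MSS}{RTT} \cdot \frac{\sqrt{3/2}}{\sqrt{p}}
$$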
The calculator compares active ceilings and uses the smallest usable rate for transfer timing.
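A minimal sketch of that comparison, assuming the $\sqrt{3/2}$ Mathis constant and freely chosen function names; the calculator's actual internals and loss profiles may differ:

```python
import math

def throughput_ceilings(link_bps, link_share, overhead_frac,
                        window_bytes, rtt_s, loss_frac,
                        mss_bytes=1460, streams=1):
    """Return each modeled ceiling in bits per second."""
    # Link payload ceiling: line rate after sharing and protocol overhead.
    link_payload = link_bps * link_share * (1.0 - overhead_frac)
    # Window ceiling: bytes in flight per stream, divided by the RTT.
    window_limit = streams * window_bytes * 8.0 / rtt_s
    # Mathis-style loss ceiling, scaled naively across streams.
    loss_limit = math.inf
    if loss_frac > 0:
        loss_limit = (streams * (mss_bytes * 8.0 / rtt_s)
                      * math.sqrt(1.5) / math.sqrt(loss_frac))
    return {"link payload": link_payload,
            "TCP window": window_limit,
            "packet loss": loss_limit}

def active_limiter(ceilings):
    # The smallest ceiling sets the usable rate for transfer timing.
    name = min(ceilings, key=ceilings.get)
    return name, ceilings[name]

# Example: 1 Gbit/s, full share, 5% overhead, 1 MiB window, 40 ms RTT, 0.02% loss.
name, rate = active_limiter(
    throughput_ceilings(1e9, 1.0, 0.05, 1 << 20, 0.040, 0.0002))
# -> ("packet loss", ~25 Mbit/s): loss bites long before the 950 Mbit/s payload rate.
```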
| Limiter | What it means | Useful response |
|---|---|---|
| Link payload | Overhead and sharing set the usable share of the line. | Reserve more link share or reduce encapsulation cost. |
| TCP window | RTT is too high for the current per-stream receive window. | Raise window size or use more parallel streams where safe. |
| Packet loss | Loss and RTT cut the TCP rate before the link fills. | Check path quality before buying more bandwidth. |
| Packet rate | A firewall, overlay, or CPU path cannot process enough packets. | Use larger payloads, reduce per-packet work, or raise the PPS ceiling. |
The model also reports transfer time for the selected payload size, bandwidth-delay product, window scale hints, field-check commands, observed-rate comparison, and tuning targets for 60%, 75%, 85%, 95%, or 100% of payload link rate.
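The tuning targets follow from the same window arithmetic. A hedged sketch, reading the window-scale hint as the RFC 7323 shift count (one plausible interpretation; the tool's rounding and labels may differ):

```python
import math

def tuning_target(target_bps, rtt_s, current_window_bytes):
    """Window, stream count, and window-scale shift for a target fill rate."""
    needed_window = target_bps * rtt_s / 8.0   # bytes that must stay in flight
    streams = math.ceil(needed_window / current_window_bytes)
    # RFC 7323 window scaling: the raw 16-bit field tops out at 65535 bytes,
    # so the shift count must cover needed_window / 65535.
    scale = max(0, math.ceil(math.log2(needed_window / 65535)))
    return needed_window, streams, scale

# 60% of a 950 Mbit/s payload rate at 40 ms with a 1 MiB per-stream window:
w, n, s = tuning_target(0.60 * 950e6, 0.040, 1 << 20)
# -> ~2.85 MB in flight, 3 streams at the current window, window scale >= 6
```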
Start with the path preset, but treat it only as a rough shape. Same rack, metro, regional WAN, cross-country, transoceanic, and satellite presets load RTT and packet-loss assumptions, but measured values from the real path are better. Choose the framing preset that matches the transfer envelope, then adjust protocol overhead and MSS if you know the actual encapsulation.
Beyond the presets, a few inputs and readouts deserve care:

- **Observed throughput** when you have a real copy or iperf result; the Field Check tab then helps compare measured rate, modeled rate, and loaded RTT.
- **Parallel streams**, carefully: more streams can overcome a single-stream window cap, but they can also hide congestion from other users.
- **Packet-rate ceiling** when small packets, tunnels, firewalls, or CPU-bound gateways are part of the path.
- **Bottleneck** before changing hardware: a loss or window limit usually needs a different fix than a link-share limit.

A low utilization badge does not prove the circuit is underused. It means the model found a stronger ceiling before payload bytes could fill the configured share of the link.
Use the calculator in this order for a first pass:
1. Set Link capacity, RTT, TCP receive window, Packet loss, stream count, and Transfer size.
2. Pick the Framing preset, Protocol overhead, Link share, and Loss model profile.
3. Enter Observed throughput and Loaded RTT if a field test exists.
4. Read Throughput Metrics for effective throughput, utilization, BDP, and estimated transfer duration.
5. Use Tuning Budget and Field Check to size the next test or copy the suggested command.

If a value is rejected or forced into range, check the normalized inputs before relying on the result.
Read the active limiter first. If it says RTT / TCP window, a larger receive window or more streams can matter more than raw bandwidth. If it says Packet loss, path quality is the warning sign. If it says Packet-rate ceiling, inspect firewalls, tunnels, packet size, or CPU paths.
Compare Observed vs model only when the observed test matches the same direction, file size class, and path. A loaded RTT increase points to queue growth, while a clean RTT with poor throughput may point to window, loss, application, or storage limits.
A 250 GB transfer over a 1 Gbps cross-country path with 40 ms RTT, 1 MiB receive window, one stream, 5% overhead, and 0.02% loss may show a throughput ceiling below the payload link rate. The useful output is Bottleneck, not just the duration, because it tells you whether stream count, window size, or path quality is worth testing next.
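A rough check of those numbers, assuming a 1460-byte MSS and the $\sqrt{3/2}$ constant:

$$
R_{\text{window}} = \frac{2^{20} \times 8\ \text{bit}}{0.04\ \text{s}} \approx 210\ \text{Mbit/s}, \qquad
R_{\text{loss}} \approx \frac{1460 \times 8\ \text{bit}}{0.04\ \text{s}} \cdot \frac{\sqrt{1.5}}{\sqrt{0.0002}} \approx 25\ \text{Mbit/s}
$$

The loss ceiling, not the 950 Mbit/s payload rate, dominates, and a single-stream 250 GB copy would need on the order of 22 hours.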
A branch VPN copy with 100 Mbps capacity, 70% link share, 12% tunnel overhead, and a packet-rate ceiling can report a packet limiter even when TCP window is adequate. In that case, increasing TCP buffers will not remove the modeled packet processing cap.
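To see how the packet limiter wins, assume a hypothetical 4,000 packets-per-second processing cap and roughly 1,400 payload bytes per packet (both values are illustrative, not from the scenario above):

$$
R_{\text{link}} = 100 \times 0.70 \times (1 - 0.12) = 61.6\ \text{Mbit/s}, \qquad
R_{\text{pps}} = 4000 \times 1400 \times 8 = 44.8\ \text{Mbit/s}
$$

The packet-rate ceiling lands below the payload share, so it stays the reported limiter no matter how large the TCP buffers grow.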
If an iperf result is 320 Mbps but the model predicts 600 Mbps, the Field Check comparison should trigger a second measurement with loaded RTT recorded. The mismatch may come from queueing, storage speed, application throttling, or a loss estimate that is too optimistic.
Network rates are usually bits per second. File-copy tools often show bytes per second. Divide bits per second by eight before comparing the two.
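For example:

$$
1\ \text{Gbit/s} = \frac{10^9\ \text{bit/s}}{8} = 125\ \text{MB/s}
$$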
The estimate is not a guarantee. It models likely ceilings from your inputs, so use the generated field-check command and real measurements before treating the result as evidence.
Each stream gets its own receive-window pipeline. Multiple streams can fill a high-RTT path when one stream cannot keep enough data in flight.
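In the window-limited regime the aggregate scales roughly with stream count $n$, until the link's payload share or the loss ceiling takes over:

$$
R_{\text{aggregate}} \approx n \times \frac{W}{RTT}
$$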