| Aspect | Value | Details | Copy |
|---|---|---|---|
| {{ row.label }} | {{ row.value }} | {{ row.details }} | |
| Metric | Value | Copy |
|---|---|---|
| Total bytes | {{ format(total_bytes) }} | |
| Effective bytes | {{ format(effective_bytes) }} | |
| Raw rate (B/s) | {{ format(raw_bps) }} | |
| After share (B/s) | {{ format(available_raw_bps) }} | |
| Post-overhead (B/s) | {{ format(post_overhead_bps) }} | |
| BDP cap (B/s) | {{ bdp_limit_bps ? format(bdp_limit_bps) : 'N/A' }} | |
| Loss cap (B/s) | {{ loss_limit_bps ? format(loss_limit_bps) : 'N/A' }} | |
| Effective (B/s) | {{ format(eff_bps) }} | |
| Handshake time (s) | {{ format(handshake_time_seconds) }} | |
| Time (s) | {{ format(transfer_time_seconds) }} | |
| Finish (local) | {{ finish_time_local }} | |
Transfer time is the stretch between starting a copy and having every payload byte arrive. That matters when backup windows, maintenance cutovers, and large uploads have fixed deadlines. This calculator turns file size, path speed, and transport assumptions into a schedule estimate before the transfer begins.
The model starts with the bytes you plan to send, applies optional compression, converts the advertised bandwidth into bytes per second, and then asks which ceiling really controls the job. Sometimes the line rate wins. On long or noisy paths, latency, receive window size, or packet loss can become the actual bottleneck instead.
A 50 GB export over a fast local link and the same export over a 120 ms WAN are not the same task, even if both links are labeled 1 Gbps. One may finish comfortably inside a change window, while the other may be limited by bandwidth-delay product or loss and spill into business hours.
The page returns a readable duration, a local finish timestamp, a table of intermediate rates, and two charts that make the bottleneck easier to spot. If you only know size and headline bandwidth, the basic fields are enough. If you know RTT, receive window, loss, or handshake cost, the advanced inputs let you model those assumptions explicitly.
It is still a planning model, not a packet capture or a live throughput test. A short estimate does not mean the path will stay stable for the whole run, and a long estimate does not prove the application itself is tuned poorly. The practical value comes from seeing which ceiling the tool says is active and then checking whether that assumption matches the path you actually have.
Start with File size, Size standard, and Bandwidth. That first pass tells you whether the transfer is obviously small enough for the available window or whether you need a more careful model. The most common early mistake is unit confusion: ISP numbers are often in Mbps, while storage and copy tools often display MB/s, which changes the estimate by a factor of eight.
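That factor-of-eight trap is easy to demonstrate. A minimal sketch of the conversion (the helper names here are illustrative, not part of the tool):

```python
# Convert a link rate quoted in megabits per second (ISP-style) to bytes per second.
def mbps_to_bytes_per_second(mbps: float) -> float:
    return mbps * 1_000_000 / 8  # 8 bits per byte

# Convert a rate quoted in decimal megabytes per second (copy-tool style) to B/s.
def mbs_to_bytes_per_second(mb_per_s: float) -> float:
    return mb_per_s * 1_000_000

# A "100 Mbps" line moves 12.5 MB of payload per second at best,
# while "100 MB/s" from a storage tool is eight times faster.
print(mbps_to_bytes_per_second(100))  # 12500000.0
print(mbs_to_bytes_per_second(100))   # 100000000
```

Reading the wrong unit family therefore turns a quarter-hour estimate into a two-hour one, or vice versa, before any path modeling happens.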
- Size standard aligned with the number you were given. A backup job quoted in binary units will not match a decimal storage spec exactly.
- Protocol preset as a shortcut, not as a law. It only fills typical Overhead (%) and Handshake RTTs values for TCP+TLS, SFTP, SMB, or NFS.
- Latency (RTT), TCP window, and Connections together when you are modeling a WAN. RTT alone does not change the steady-state ceiling unless the window or loss model is also active.
- Limited by: post-overhead switching to BDP cap or Loss cap. That usually means the path assumptions matter more than the advertised link speed.

Compression is another place where people overread the output. The slider reduces Effective bytes, which is useful for text payloads or protocol streams that really do shrink, but it should stay near zero for archives, video, or already-compressed backups. If the result only looks reasonable after a large compression setting, the schedule is probably resting on an optimistic assumption rather than on the network itself.
Before trusting the final number, compare Effective (B/s) with Raw rate (B/s) in Transfer Metrics. If they are close, the simple model is probably enough. If they are far apart, open Transfer Timeline next; the flat opening segment is handshake time, and the slope after that shows the payload pace the tool is actually using.
The calculator measures total transfer time as handshake delay plus payload bytes divided by the effective rate. Payload bytes come from File size and Size standard, then shrink if Compression (%) is set. Bandwidth accepts both bits-per-second and bytes-per-second families, and bit units are converted by dividing by eight before the comparison starts.
From there the tool constructs three candidate ceilings. Post-overhead (B/s) is the line-rate ceiling after Bandwidth share (%) and Overhead (%). BDP cap (B/s) appears only when both Latency (RTT) and TCP window are positive, using decimal megabytes for the window and multiplying by Connections. Loss cap (B/s) appears only when both RTT and Packet loss are positive, using the simplified Mathis-style relation with MSS and Mathis C.
The smallest non-zero ceiling becomes Effective (B/s) and sets the Limited by badge. Handshake cost is added separately as RTT in seconds multiplied by Handshake RTTs, so protocol startup changes the schedule even when steady-state rate does not. The Rate Trend and Transfer Timeline tabs are visual summaries of the same deterministic calculation rather than measured traffic traces.
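The ceiling selection described above can be condensed into a short deterministic sketch. This is a reading of the prose, not the package's source: the function names are illustrative, the MSS (1460 bytes) and Mathis C (1.22) defaults are inferred from the worked examples later on this page, and whether Connections also scales the loss ceiling is not stated here, so it is left out.

```python
import math

def effective_rate_bps(raw_bps, share_pct=100.0, overhead_pct=0.0,
                       rtt_s=0.0, window_mb=0.0, connections=1,
                       loss_fraction=0.0, mss_bytes=1460, mathis_c=1.22):
    """Return (effective B/s, limiter label) following the documented model."""
    post_overhead = raw_bps * (share_pct / 100.0) * (1 - overhead_pct / 100.0)
    candidates = {"post-overhead": post_overhead}
    if rtt_s > 0 and window_mb > 0:
        # BDP ceiling: decimal megabytes of window, scaled by stream count.
        candidates["BDP cap"] = window_mb * 1_000_000 * connections / rtt_s
    if rtt_s > 0 and loss_fraction > 0:
        # Simplified Mathis-style ceiling: MSS * C / (RTT * sqrt(p)).
        candidates["Loss cap"] = mss_bytes * mathis_c / (rtt_s * math.sqrt(loss_fraction))
    limiter = min(candidates, key=candidates.get)
    return candidates[limiter], limiter

def transfer_time_s(payload_bytes, eff_bps, rtt_s=0.0, handshake_rtts=0):
    # Handshake cost is additive; steady-state rate covers the payload.
    return handshake_rtts * rtt_s + payload_bytes / eff_bps

# Worked example from later on this page: 50 GB (IEC) over 1 Gbps,
# 3% overhead, 120 ms RTT, 4 MB window, 1 connection, 2 handshake RTTs.
rate, limiter = effective_rate_bps(125_000_000, overhead_pct=3,
                                   rtt_s=0.120, window_mb=4)
t = transfer_time_s(50 * 1024**3, rate, rtt_s=0.120, handshake_rtts=2)
print(limiter, round(rate, 2), round(t, 2))  # BDP cap 33333333.33 1610.85
```

The sketch reproduces the page's own BDP example, which is a reasonable smoke test for whether your reading of the model matches the tool's.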
The model combines payload size, a usable-throughput ceiling, and optional startup delay. In the equations below, BDP and loss ceilings are included only when their required inputs are greater than zero.
| Symbol | Meaning | Unit |
|---|---|---|
| B_total | Total bytes after unit conversion | bytes |
| B_eff | Bytes left after compression | bytes |
| R_post | Rate after link share and protocol overhead | B/s |
| W | Per-connection receive window from TCP window | MB |
| n | Parallel transfer count from Connections | count |
| p | Packet loss as a fraction of 1 | unitless |
| h | Startup RTT count from Handshake RTTs | RTTs |
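Written out in one place, the model those symbols describe looks as follows. This is a reconstruction from the prose above: the loss term uses the simplified Mathis-style relation named earlier, and whether n also scales the loss ceiling is not stated on this page, so it is omitted.

```latex
R_{\mathrm{bdp}} = \frac{W \cdot 10^{6} \cdot n}{\mathrm{RTT}}, \qquad
R_{\mathrm{loss}} = \frac{\mathrm{MSS} \cdot C}{\mathrm{RTT}\,\sqrt{p}}, \qquad
R_{\mathrm{eff}} = \min\!\left(R_{\mathrm{post}},\, R_{\mathrm{bdp}},\, R_{\mathrm{loss}}\right)
```

```latex
t_{\mathrm{total}} = h \cdot \mathrm{RTT} + \frac{B_{\mathrm{eff}}}{R_{\mathrm{eff}}}
```

The minimum is taken only over the ceilings whose required inputs are positive, which is why the badge can never name a cap the inputs did not activate.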
| Limiter | When it appears | What it usually means | What to inspect next |
|---|---|---|---|
| post-overhead | No lower BDP or loss ceiling is active | The line-rate assumption is still the main bound | Check units, share, overhead, and compression realism |
| BDP cap | Latency (RTT) > 0 and TCP window > 0, with the BDP value below Post-overhead (B/s) | The path is long enough that receive window and stream count matter | Compare window size, stream count, and RTT against the real path |
| Loss cap | Packet loss > 0 and RTT > 0, with the loss ceiling below the other candidates | Small loss on a long path is dominating the sustained rate estimate | Sanity-check the loss assumption before using the schedule operationally |
All calculations, chart exports, CSV copies, DOCX export, and JSON export are performed in the browser. The package does not ship a helper endpoint for this tool, so the estimate is not sent to a tool-specific backend as part of normal use.
A good first pass usually takes less than a minute if you already know the size and the headline link rate.
1. Enter File size, choose the correct size unit, and set Size standard to match the number you were given. If the source says 700 MB from a storage tool, binary IEC is often the safer assumption.
2. Enter Bandwidth and pick the right unit family. If the summary box does not appear after this step, check that both size and bandwidth are greater than zero.
3. Open Advanced only when the path needs more realism. Use Protocol preset for a quick fill of Overhead (%) and Handshake RTTs, then adjust those values if your environment differs.
4. Set Latency (RTT), TCP window, Connections, and Packet loss. Leave MSS and Mathis C at their defaults unless you have a specific reason to change the loss model.
5. Read the summary: Estimated Transfer Time, Finishes by, and the Limited by badge tell you whether the schedule is governed by line rate, BDP, or loss.
6. Open Transfer Metrics for the exact numbers, Rate Trend for the ceiling comparison, Transfer Timeline for the cumulative schedule shape, and JSON if you need to move the modeled values into another report.

Estimated Transfer Time and Time (s) express the same modeled duration in different forms. The readable duration is for scheduling, while Time (s) is the precise numeric result. Finish (local) is simply the current device clock plus the modeled seconds, so it is only as trustworthy as the clock on the machine you are using.
- Effective (B/s) is the output that matters most, because every other summary value flows from it.
- BDP cap (B/s) and Loss cap (B/s) are modeled ceilings, not measured counters. They tell you what assumption is dominating the schedule.
- Transfer Timeline is linear by design. It is a visual explanation of the estimate, not a prediction of bursty application behavior.
- A favorable result does not mean the sender, receiver, or storage stack will actually sustain that pace for the whole run. When the window is tight, verify one real transfer sample and compare it with Effective (B/s) rather than trusting the headline bandwidth alone.
Set File size to 700, leave the unit at MB, keep Size standard on IEC, and enter Bandwidth as 10 MB/s. With all advanced controls left at their defaults, the summary returns an Estimated Transfer Time of about 1 m 13 s and Time (s) of 73.40. The badge stays on Limited by: post-overhead because neither BDP nor loss is active.
The interpretation is straightforward: this is essentially a size divided by usable rate estimate. In Transfer Metrics, Effective (B/s) and Raw rate (B/s) stay identical, so the model is telling you that the basic fields are enough for this path.
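That division is easy to verify by hand; a one-line check under the same IEC assumption:

```python
payload = 700 * 1024**2          # 700 MB under the IEC (binary) standard
rate = 10 * 1_000_000            # 10 MB/s, decimal bytes per second
print(round(payload / rate, 2))  # 73.4, matching Time (s) in the summary
```

If you rerun the same numbers with SI sizing (700 × 1000²), the result drops to exactly 70 seconds, which is a quick way to see the Size standard setting at work.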
Model a 50 GB transfer over 1 Gbps, then set Overhead (%) to 3, Latency (RTT) to 120 ms, TCP window to 4 MB, Connections to 1, and Handshake RTTs to 2. The table shows Post-overhead (B/s) at 121,250,000.00, but BDP cap (B/s) falls to 33,333,333.33. The summary switches to Limited by: BDP cap, and Time (s) lands at about 1,610.85.
That is the classic case where port speed overstates what one flow can sustain across a high-latency path. The short shaded start in Transfer Timeline represents the 0.24-second handshake cost, but the real story is the lower sustained slope after startup.
A 4 GB transfer at 20 MB/s looks harmless until you add Overhead (%) 5, Latency (RTT) 150 ms, Packet loss 1.0, and Handshake RTTs 2. Under those settings, Loss cap (B/s) drops to about 118,746.67, the badge changes to Limited by: Loss cap, and Time (s) stretches to roughly 36,169.46.
This is the troubleshooting pattern to watch for. When small changes in Packet loss move the finish time by hours, the fragile part of the estimate is the loss assumption itself. Re-check the path, try a lower loss value, and compare the new ceiling with a measured sample before scheduling production work around the longer number.
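The sensitivity is easy to quantify. Under the page's apparent defaults (MSS 1460 bytes, Mathis C 1.22, both inferred from the numbers above rather than confirmed in the source), the ceiling scales with the inverse square root of loss:

```python
import math

mss, c = 1460, 1.22       # apparent defaults, inferred from the worked example
rtt, p = 0.150, 0.01      # 150 ms RTT, 1.0% loss expressed as a fraction
loss_cap = mss * c / (rtt * math.sqrt(p))
print(round(loss_cap, 2))  # 118746.67, matching Loss cap (B/s) above

# Halving the assumed loss raises the ceiling only by sqrt(2) ~ 1.414x,
# yet on a multi-gigabyte payload that still moves the finish time by hours.
ratio = math.sqrt(p) / math.sqrt(p / 2)
print(round(ratio, 3))     # 1.414
```

The square-root relationship is why the text recommends sanity-checking the loss input before anything else: no other single field moves the estimate this far this fast.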
Why does the Size standard choice change the estimate?

Because the byte count changes. In this tool, IEC treats KB, MB, GB, and TB as powers of 1024, while SI treats them as powers of 1000. A large file can shift by many millions of bytes depending on that choice.
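As a concrete check of that shift (plain arithmetic, no tool-specific API):

```python
# Difference between 50 GB read as binary (IEC) and decimal (SI) units.
iec = 50 * 1024**3  # 53,687,091,200 bytes
si = 50 * 1000**3   # 50,000,000,000 bytes
print(iec - si)     # 3687091200 bytes, roughly 3.7 GB of disagreement
```

At 10 MB/s that disagreement alone is over six minutes of transfer time, which is why the examples on this page always state which standard they assume.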
Why is BDP cap (B/s) lower than my line rate?

Because a fast link still needs enough in-flight data to stay full. If Latency (RTT) is high and TCP window or Connections are too small, the path cannot keep enough bytes outstanding to reach the nominal port speed.
Why does a small Packet loss value change the result so much?

The loss ceiling is inversely related to the square root of loss in the package model. On long paths, even a fraction of a percent can push Loss cap (B/s) far below Post-overhead (B/s), which is why that assumption deserves a sanity check before you trust the schedule.
Does the tool upload the file it is modeling?

No file upload is part of the package. The tool computes the estimate in the browser and creates CSV, image, DOCX, and JSON exports locally. Its job is to model a transfer, not to inspect the file itself.