TLS handshake load inputs
Input Unit Guidance
HTTPS request rate RPS Use measured edge, proxy, or load-balancer RPS for this TLS termination point.
New TLS connection share % Lower values mean better keep-alive or connection pooling; 100% models one request per connection.
Full handshake share % Use certificate/key-exchange metrics when available; high values point to weak session resumption.
Handshake profile - Choose a starting profile, then replace the CPU values with benchmark or production measurements.
Full handshake CPU ms Measure with production telemetry or OpenSSL/proxy benchmarks when possible.
Resumed handshake CPU ms Resumed handshakes are usually much cheaper than full handshakes; keep this separate from the full path.
TLS worker cores cores Use the effective cores assigned to TLS work, not the total machine size if other tasks share the host.
Planning CPU target % Use 60-75% for edge capacity planning; go higher only with measured latency margins.
Endpoint label - Names do not affect the calculation.
Peak multiplier x Leave at 1 for steady state; use 1.5-3 for reconnect surge checks.
CPU reserve % Default 0 keeps the baseline neutral; add reserve only when other host work shares the same cores.
Display precision digits Use 1-3 decimal places for capacity reviews.

Introduction:

TLS handshake load is the CPU work created when clients establish secure HTTPS connections. The expensive part is not usually every request. It is the subset of traffic that opens a new TLS connection, especially when that connection needs a full handshake instead of a resumed session. A busy edge, ingress proxy, API gateway, or load balancer can serve many requests comfortably and still struggle during reconnect bursts if the handshake mix changes.

Connection reuse is the first practical control. HTTP/1.1 persistent connections, HTTP/2 multiplexing, HTTP/3 connection behavior, client pools, and proxy keep-alive settings can all reduce how often a request turns into a new TLS handshake. Session resumption is the second control. A full handshake performs certificate and key-exchange work, while a resumed handshake can skip part of that path by using previously established session material or a pre-shared key.

Capacity planning needs both traffic shape and cryptographic cost. A 10,000 request-per-second service with strong keep-alive may create far less TLS CPU demand than a smaller service whose clients reconnect for every request. Certificate type, TLS version, mutual TLS, hardware generation, crypto library, and proxy build also change the per-handshake CPU cost, so measured values matter more than generic guesses.

Diagram showing HTTPS request rate reduced to new TLS connections, split into full and resumed handshakes, then compared with a TLS core budget.

The estimate is a planning model for the TLS termination tier. It does not measure live latency, packet loss, queueing, kernel scheduling, WAF cost, application CPU, or every possible edge node behind a service. Use it to compare handshake assumptions, size TLS worker cores, and decide which production metrics need a closer look.

Technical Details:

A TLS handshake establishes shared keys, negotiates protocol parameters, and authenticates the server. In TLS 1.3, the full handshake includes key exchange and certificate authentication work. Resumption uses pre-shared key material from an earlier connection, so it can avoid some certificate and signature work even though a new handshake still occurs. Mutual TLS adds client certificate authentication, which can raise the full-path cost.

Request rate becomes handshake load only after connection behavior is applied. If 5% of requests open a fresh TLS connection, then 95% are assumed to reuse an existing secure connection and do not create new handshake CPU in this model. The new handshakes are then split between full and resumed paths. The CPU demand is the weighted sum of those two rates multiplied by the measured CPU milliseconds per handshake.

The model compares CPU demand with an effective target budget, not with the raw host core count. TLS worker cores are multiplied by the planning CPU target, and the optional CPU reserve is removed after that. This mirrors a capacity review where operators want sustained headroom for interrupts, logging, proxy work, and reconnect storms rather than running the TLS tier at 100% of assigned cores.

Formula Core:

The calculation converts request traffic into weighted TLS CPU cores, then divides that demand by the target budget.

R_peak = R_request × M
H_total = R_peak × N
H_full = H_total × F
H_resumed = H_total - H_full
C_demand = (H_full × T_full + H_resumed × T_resumed) / 1000
C_target = C × P × (1 - S)
Utilization = C_demand / C_target × 100

TLS handshake load formula variables
Symbol Meaning Unit
R_request Entered HTTPS request rate before the peak multiplier. requests/sec
M Peak multiplier used for deploy reconnects, failover, cache misses, or burst checks. ratio
N New TLS connection share, entered as a fraction from 0 to 1. ratio
F Full handshake share among new TLS connections. ratio
T_full and T_resumed CPU milliseconds per full and resumed server-side handshake. ms
C, P, S TLS worker cores, planning CPU target, and optional CPU reserve. cores / ratio
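
The formula chain can be sketched as a small function. This is an illustrative Python model, not the page's own implementation; the function and argument names are assumptions.

```python
def tls_handshake_load(r_request, m, n, f, t_full_ms, t_resumed_ms,
                       cores, target, reserve=0.0):
    """Convert request traffic into TLS handshake CPU demand and
    compare it with the effective target budget."""
    r_peak = r_request * m                       # surge-adjusted request rate
    h_total = r_peak * n                         # new TLS handshakes per second
    h_full = h_total * f                         # full handshake path
    h_resumed = h_total - h_full                 # resumed handshake path
    c_demand = (h_full * t_full_ms + h_resumed * t_resumed_ms) / 1000.0
    c_target = cores * target * (1.0 - reserve)  # effective core budget
    utilization = (c_demand / c_target * 100.0) if c_target > 0 else float("inf")
    return c_demand, c_target, utilization
```

Fractions (N, F, P, S) are entered as 0-1 values here, matching the variable table rather than the percentage input fields.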

Status Rules:

TLS handshake CPU status boundaries
Status Rule How to read it
no CPU budget Effective target budget is 0 cores. TLS worker cores or target settings need correction before capacity can be judged.
over target Target utilization >= 110%. The modeled handshake demand exceeds the planning budget by a large margin.
near target Target utilization >= 85% and below 110%. The tier still fits the model but has little room for measurement error or reconnect bursts.
capacity ok Target utilization < 85%. The entered handshake mix stays comfortably below the target budget.
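
The boundaries above can be expressed as a simple classifier. This sketch assumes only the thresholds in the table; the function name is illustrative.

```python
def tls_status(utilization_pct, c_target):
    """Map target utilization to the status labels in the table above."""
    if c_target <= 0:
        return "no CPU budget"   # cores or target settings need correction
    if utilization_pct >= 110:
        return "over target"
    if utilization_pct >= 85:
        return "near target"
    return "capacity ok"
```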

Input Bounds:

TLS handshake load input bounds and warnings
Input Accepted model range Warning or effect
HTTPS request rate, Full handshake CPU, and Resumed handshake CPU Non-negative numbers. Negative values are clamped at zero. Resumed CPU above full CPU triggers a measurement warning.
New TLS connection share and Full handshake share 0% through 100%. Values outside the range are clamped, changing the handshake rate or mix.
TLS worker cores Non-negative core count. Zero cores produce a no CPU budget result and a review warning.
Planning CPU target and CPU reserve Target is 1% through 100%; reserve is 0% through 90%. Targets above 85% warn because they leave little latency headroom.
Peak multiplier and Display precision Multiplier is non-negative; precision is rounded to 0 through 4 decimals. A multiplier of 2x or higher warns that the surge assumption should be checked against test or incident data.
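
The bounds and warnings amount to a clamp-and-flag pass over the raw inputs. This is an illustrative sketch of that behavior, not the page's code; all names are assumptions.

```python
def clamp_inputs(rate, new_share_pct, full_share_pct, cores,
                 target_pct, reserve_pct, multiplier, precision):
    """Apply the model bounds from the table above and collect warnings."""
    warnings = []
    rate = max(rate, 0.0)                                   # negative rates clamp to zero
    new_share = min(max(new_share_pct, 0.0), 100.0) / 100.0
    full_share = min(max(full_share_pct, 0.0), 100.0) / 100.0
    cores = max(cores, 0.0)
    target = min(max(target_pct, 1.0), 100.0) / 100.0       # 1% through 100%
    reserve = min(max(reserve_pct, 0.0), 90.0) / 100.0      # 0% through 90%
    multiplier = max(multiplier, 0.0)
    precision = min(max(round(precision), 0), 4)            # 0 through 4 decimals
    if cores == 0:
        warnings.append("no CPU budget")
    if target_pct > 85:
        warnings.append("target leaves little latency headroom")
    if multiplier >= 2:
        warnings.append("check surge assumption against test or incident data")
    return (rate, new_share, full_share, cores, target,
            reserve, multiplier, precision, warnings)
```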

Everyday Use & Decision Guide:

Start with measured edge, proxy, or load-balancer request rate for the exact TLS termination point. Daily average traffic is usually too soft for this job. Use the busy minute, planned launch peak, deploy reconnect window, or regional failover estimate that the TLS tier must survive.

Choose the handshake profile only as a starting point. The presets load plausible CPU millisecond pairs for TLS 1.3 ECDSA edge, TLS 1.3 RSA edge, TLS 1.2 ECDHE-RSA, and mTLS client certificate paths, but the fields are editable because real results depend on hardware, certificate algorithm, crypto build, and proxy configuration. Replace the preset values with production telemetry or benchmark results when you have them.

  • Use New TLS connection share to represent keep-alive and pooling quality. Lower values mean more requests reuse existing secure connections.
  • Use Full handshake share for the part of new connections that cannot use session resumption.
  • Set TLS worker cores to cores available for TLS termination, not total host size when the same machine also runs routing, WAF, logging, or application work.
  • Keep Planning CPU target around 60% to 75% for conservative edge planning unless measured latency margins justify a higher target.
  • Add CPU reserve when interrupts, log volume, background proxy tasks, or co-located work share the same cores.

Read the large target-utilization percentage first, then the TLS Capacity Snapshot. If the status is near target or over target, do not reflexively add cores. Check whether the problem is too many new connections, too many full handshakes, an expensive certificate path, an aggressive surge multiplier, or a target budget that leaves little reserve.

A green result does not prove TLS latency is healthy. It says the entered handshake CPU model fits the planning budget. Confirm the model against proxy metrics such as new TLS sessions, session reuse, handshake latency, worker CPU, reconnect spikes, and load-test behavior before using it as a production gate.

Step-by-Step Guide:

Build the model from traffic shape first, then replace CPU assumptions with measurements when they are available.

  1. Enter HTTPS request rate for the TLS termination point. The summary will later convert this to handshakes after connection share and peak multiplier are applied.
  2. Set New TLS connection share. Watch the handshake badge because this field controls how many requests become new TLS handshakes.
  3. Set Full handshake share. The Handshake CPU Budget tab should then split the rate into full and resumed handshakes.
  4. Choose Handshake profile. If measured values differ, edit Full handshake CPU and Resumed handshake CPU; the profile switches to the custom path when you change those fields.
  5. Enter TLS worker cores and Planning CPU target. If the summary says no CPU budget, fix the core count before trusting any headroom result.
  6. Open Advanced when you need Endpoint label, Peak multiplier, CPU reserve, or Display precision. A peak multiplier of 2x or above will add a review warning.
  7. Read TLS Capacity Snapshot, Handshake CPU Budget, and Capacity Guidance. Clear any Review TLS model inputs warnings before treating the result as a capacity record.
  8. Use TLS CPU Budget to see full, resumed, and headroom cores, then use Connection Reuse Sensitivity to see how target utilization changes as new connection share rises.

Interpreting Results:

Target utilization is the headline because it compares modeled TLS CPU demand with the target budget after CPU target and reserve are applied. Raw core utilization is useful as a host-size sanity check, but it can hide risk when the planning target is intentionally below full core capacity.

How to interpret TLS handshake load outputs
Output cue What it means Follow-up check
capacity ok Handshake demand is below 85% of the target budget. Confirm that the connection share and full handshake share came from the same traffic window.
near target The model is using at least 85% of the target budget. Check handshake latency and reconnect behavior before traffic or certificate cost grows.
over target The model is at least 110% of the target budget. Reduce full handshakes, improve connection reuse, add TLS cores, or lower the surge assumption.
reuse risk New connection share is at least 25% or full handshake share is at least 60%. Review idle timeouts, client pools, session tickets, cache behavior, and deploy reconnect patterns.

Safe request ceiling assumes the current new-connection share, handshake mix, CPU costs, peak multiplier, and target budget stay unchanged. If any of those assumptions change, the ceiling changes too. New connection share at target is often the more useful stress point because it shows how much connection churn the current budget can absorb before reaching the target.
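
Because CPU demand scales linearly with the new-connection share when the handshake mix is held fixed, the share at which demand exactly meets the target budget can be solved directly. An illustrative sketch, with hypothetical names:

```python
def new_share_at_target(r_request, m, f, t_full_ms, t_resumed_ms,
                        cores, target, reserve=0.0):
    """New-connection share (0-1) at which TLS CPU demand meets the
    target budget, holding the full/resumed mix and CPU costs fixed."""
    weighted_ms = f * t_full_ms + (1.0 - f) * t_resumed_ms  # avg CPU per handshake
    c_target = cores * target * (1.0 - reserve)             # effective core budget
    r_peak = r_request * m                                  # surge-adjusted rate
    return c_target * 1000.0 / (r_peak * weighted_ms)
```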

Do not read a low CPU demand as proof that TLS is not part of an incident. A node can still have certificate-chain problems, bad session ticket rotation, uneven load balancing, handshake latency spikes, or application bottlenecks. Use the model to narrow the CPU question, then validate against telemetry from the same listener and time window.

Worked Examples:

API edge with ordinary connection reuse:

With 4,500 RPS, 12% new TLS connections, 35% full handshakes, the TLS 1.3 RSA edge profile, 4 TLS worker cores, and a 70% planning target, the model produces about 540 new TLS handshakes per second. The full path contributes about 189/sec, the resumed path about 351/sec, and total TLS CPU demand is about 0.54 cores. Against a 2.80 core target budget, Target utilization is about 19.3%, so the status reads capacity ok.

Reconnect surge with weak resumption:

A release window is modeled at 12,000 RPS with a 2x peak multiplier, 45% new TLS connections, 80% full handshakes, 3.20 ms full CPU, 0.35 ms resumed CPU, 6 TLS cores, a 70% target, and a 10% reserve. The model reaches about 10,800 handshakes per second and 28.40 cores of TLS demand. The target budget is only 3.78 cores, so Target utilization is about 751.4% and the status is over target. The sensitivity result shows target CPU would be reached near 6.0% new connection share at the same handshake mix.
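
The surge numbers above can be reproduced step by step from the formula core; this sketch uses the example's own inputs.

```python
# Reconnect-surge example: 12,000 RPS, 2x peak, 45% new connections,
# 80% full handshakes, 3.20/0.35 ms CPU, 6 cores, 70% target, 10% reserve.
r_peak = 12_000 * 2                       # 24,000 requests/sec at peak
h_total = r_peak * 0.45                   # ~10,800 new handshakes/sec
h_full = h_total * 0.80                   # ~8,640 full handshakes/sec
h_resumed = h_total - h_full              # ~2,160 resumed handshakes/sec
c_demand = (h_full * 3.20 + h_resumed * 0.35) / 1000   # ~28.40 cores of demand
c_target = 6 * 0.70 * (1 - 0.10)          # 3.78 core target budget
utilization = c_demand / c_target * 100   # ~751.4%, over target
```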

ECDSA edge with strong pooling:

An edge using the TLS 1.3 ECDSA edge profile is checked at 8,000 RPS, a 1.5x surge, 4% new TLS connections, 15% full handshakes, 2 TLS cores, and a 65% target. The model reports about 480 handshakes per second, weighted average handshake CPU near 0.20 ms, and total demand of about 0.10 cores. Target utilization is about 7.5%, which leaves room for growth if the measured CPU values are representative.

Input warning before a review:

If TLS worker cores is set to 0, the summary changes to TLS CPU budget unavailable with the no CPU budget badge. If Resumed handshake CPU is higher than Full handshake CPU, the page adds a warning to verify the measured values. Both warnings should be cleared before exporting the capacity snapshot for a change review.

FAQ:

What should I use for new TLS connection share?

Use the share of requests that arrive on fresh TLS connections at the modeled listener. Proxy metrics for new SSL sessions, keep-alive reuse, HTTP/2 or HTTP/3 connection counts, and client pool behavior are better than guessing from request totals alone.

Why separate full and resumed handshakes?

The model gives each path its own CPU millisecond value. A full handshake can be much more expensive than a resumed handshake, so the same handshake rate can produce different CPU demand when the resumption mix changes.

Can I use the presets as final benchmark numbers?

Treat presets as starting assumptions. Edit Full handshake CPU and Resumed handshake CPU when you have measurements from the actual proxy build, certificate type, CPU generation, and TLS configuration.

Why does a higher peak multiplier add a warning?

A multiplier of 2x or higher can dominate the result because it multiplies the modeled request rate before the handshake share is applied. Use load-test, deployment, failover, or incident data to justify that surge.

Does the result include application CPU or WAF rules?

No. The core demand is only the weighted TLS handshake CPU from full and resumed paths. Use CPU reserve to leave room for interrupts, logging, WAF work, and other proxy tasks, then validate with host metrics.

Where does the calculation run?

The arithmetic runs in the browser from the values you enter. The page can copy or download CSV, DOCX, chart, and JSON outputs when you choose those actions, so treat exported data as operational capacity evidence.

Glossary:

TLS handshake
The setup exchange that negotiates security parameters, establishes keys, and authenticates the server before application data is protected.
Full handshake
A new handshake path that performs the full certificate and key-exchange work represented by the full CPU millisecond input.
Resumed handshake
A handshake that uses prior session material or a pre-shared key path and is modeled with a separate lower CPU cost.
New TLS connection share
The percentage of requests that open a fresh secure connection instead of reusing an existing one.
Planning CPU target
The sustained CPU percentage used as the capacity budget before the model reports target pressure.
CPU reserve
The optional share removed from the target budget for non-handshake work and measurement uncertainty.
Connection reuse
Using an existing secure connection for more requests, reducing the number of new TLS handshakes.
