HTTP Access Latency Log Analyzer
Analyze online HTTP access logs for endpoint P95 latency, 5xx rate, slow requests, parser gaps, and SLA findings so service teams can triage bottlenecks.
Introduction
HTTP access latency logs connect user-visible requests with the timing and status evidence left by web servers, reverse proxies, and load balancers. A single row can show the request method, path, response status, and one or more timing fields. Those details make a latency review more useful than a headline average because slow behavior usually clusters around a few endpoints, status classes, or upstream paths.
Percentiles are central to this kind of review. Median latency can look calm while the slowest five percent of requests are already hurting checkout, search, login, or health-check behavior. A P95 readout asks how slow the request near the 95th percentile is, which is often a better operational cue than a simple mean when traffic has retries, cache misses, cold starts, or backend saturation.
Access-log latency still needs context. A slow endpoint row does not prove the backend code is the only cause, and a low 5xx rate does not prove users had a good experience. Edge time, target time, retries, status mix, sample size, and parse loss all affect how confident the readout should be.
A useful triage pass keeps the original parser evidence close to the summary. If the tool cannot parse a row, or if the chosen timing basis does not match the log format, the endpoint ranking can point in the wrong direction even when the remaining rows look precise.
Technical Details:
Access-log latency analysis begins by reducing each row to a common request shape: method, path, HTTP status, latency in milliseconds, and the timing field used as the basis. The same service can expose several clocks. NGINX can log total request time and upstream response time, Apache can log elapsed request time in seconds, milliseconds, or microseconds, and an Application Load Balancer separates request, target, and response processing segments.
The latency basis changes the meaning of every percentile. Backend or target time focuses on the origin or upstream service. Edge or total request time includes more of the proxy or load-balancer path. Auto mode prefers backend timing when it is available and falls back to edge timing when it is not. Comparisons across runs should keep that basis fixed, or the difference may be measurement scope rather than a real service change.
Endpoint grouping also affects the result. Query strings are removed from request targets, and the optional grouping switch replaces numeric path segments with :id and long hexadecimal segments with :hash. That keeps paths such as /api/orders/42 and /api/orders/43 together, while leaving literal grouping available when each unique path needs its own row.
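The grouping behavior can be sketched in a few lines. The segment rules below are an illustration of the described pattern, not the tool's exact implementation; in particular, the 16-character minimum for hexadecimal segments is an assumption.

```python
import re

def group_path(path: str) -> str:
    """Collapse volatile path segments so /api/orders/42 and
    /api/orders/43 share one endpoint row (illustrative sketch)."""
    path = path.split("?", 1)[0]          # query strings are removed first
    parts = []
    for seg in path.split("/"):
        if seg.isdigit():
            parts.append(":id")           # numeric IDs collapse to :id
        elif len(seg) >= 16 and re.fullmatch(r"[0-9a-fA-F]+", seg):
            parts.append(":hash")         # long hex tokens collapse to :hash
        else:
            parts.append(seg)
    return "/".join(parts)

print(group_path("/api/orders/42?expand=items"))  # /api/orders/:id
print(group_path("/api/orders/43"))               # /api/orders/:id
```

With the switch off, each literal path keeps its own row instead.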
Rule Core
The parser rules are deliberately narrow enough to keep failed rows visible. Timing values are converted to milliseconds before percentiles, slow counts, and SLA findings are calculated.
| Log format | Timing fields used | Status and request handling | Important boundary |
|---|---|---|---|
| NGINX key-value timing | urt or upstream_response_time for backend timing, uht as a backend fallback, and rt or request_time for edge timing. | Reads request, quoted request lines, method/path fields, status, or status_code. | Timing is read as seconds and converted to milliseconds. Multiple timing values use the largest usable value. |
| Apache combined plus latency | Reads keyed duration fields such as duration, latency, request_time, rt, or time_taken, otherwise the last numeric timing token. | Reads the quoted request line and status when present, with token inference as a fallback. | Apache latency unit can force seconds, milliseconds, or microseconds. Auto mode infers from suffixes, decimals, and large numeric values. |
| AWS ALB access log | Uses target_processing_time as backend timing and the sum of request, target, and response processing time as edge timing. | Reads the logged request line and prefers target status, then load-balancer status. | Negative processing values are treated as unusable timing, so malformed or undispatched requests may be ignored for latency math. |
Percentile Core
Percentiles are calculated from sorted latency values with linear interpolation. That means a small sample can produce a P95 value between two observed requests rather than simply choosing the slowest row.
For six backend latencies sorted as 18, 142, 205, 388, 711, and 1590 ms, the P95 rank is 4.75. The result sits three quarters of the way between 711 and 1590, so the displayed P95 is about 1,370 ms.
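The rank arithmetic above can be written directly. The function below is a minimal sketch of the linear-interpolation rule (0-based rank = (n - 1) × p), not the page's internal code.

```python
def percentile(sorted_ms: list[float], p: float) -> float:
    """Linear-interpolation percentile over an already sorted sample."""
    if not sorted_ms:
        raise ValueError("empty sample")
    rank = (len(sorted_ms) - 1) * p       # 0-based fractional rank
    lo = int(rank)
    hi = min(lo + 1, len(sorted_ms) - 1)
    frac = rank - lo
    return sorted_ms[lo] + (sorted_ms[hi] - sorted_ms[lo]) * frac

sample = [18, 142, 205, 388, 711, 1590]
print(percentile(sample, 0.95))  # 1370.25: rank 4.75, three quarters between 711 and 1590
```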
| Signal | Boundary used | How to read it |
|---|---|---|
| Endpoint review state | Endpoint P95 is greater than P95 target, or endpoint 5xx rate is greater than 5xx target. | The endpoint deserves review before lower-latency or passing rows. |
| Overall P95 latency finding | Overall P95 is greater than P95 target. | The sample misses the chosen tail-latency target. |
| Tail latency finding | Overall P99 is greater than the effective slow threshold. | The slowest part of the sample is beyond the slow-request gate. |
| Slow request count | Any parsed request latency is at or above the effective slow threshold. | The count shows how many individual rows crossed the slow gate. |
| 5xx rate finding | Overall 5xx rate is greater than 5xx target. | Server-side or load-balancer failure responses are above the chosen limit. |
| Parse-loss finding | One or more source lines cannot produce a usable latency row. | The selected parser, timing fields, or Apache unit may need correction. |
The effective slow threshold is the larger of P95 target and Slow request cutoff. If the slow cutoff is lower than the P95 target, slow counts still use the P95 target so the page does not flag rows as slow while treating the same threshold as acceptable for P95.
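That rule reduces to a single max, sketched here for clarity:

```python
def effective_slow_threshold(p95_target_ms: float, slow_cutoff_ms: float) -> float:
    """The slow-request gate never drops below the P95 target."""
    return max(p95_target_ms, slow_cutoff_ms)

print(effective_slow_threshold(300, 1000))  # 1000: the higher cutoff wins
print(effective_slow_threshold(300, 100))   # 300: the P95 target becomes the floor
```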
Everyday Use & Decision Guide:
Start with one service and one log family. Put the service, load balancer, or virtual host label in Service name, choose Log format, and leave Latency basis on Backend / target time when the question is whether the upstream service is slow. Use Edge / request total when the full proxy or load-balancer path is the measurement that matters.
Keep P95 target tied to the service-level target you would actually use in a review, such as 300 ms for an API endpoint. Leave Group IDs in paths on for incident triage because it groups order IDs, user IDs, and hash-like path segments. Turn it off when a literal object path needs to remain separate.
- Endpoint Latency is the first table for route ownership. It shows request count, P50, P95, P99, 5xx rate, slow count, and pass or review state.
- Status Class Mix shows whether 5xx, 4xx, redirects, successful responses, or unknown statuses dominate the sample.
- SLA Findings collects the reasons behind the summary state, including P95 breach, high 5xx rate, slow requests, parse loss, and hottest endpoint.
- Parse Ledger is the fastest place to catch the wrong log format, missing timing fields, or a suspicious timing basis.
- Endpoint Percentile Profile compares the top endpoint percentiles against the P95 target line.
Use Advanced when the default gates do not match the service. Slow request cutoff controls the slow-count gate, 5xx target controls error-rate findings, Apache latency unit prevents unit mistakes, and Parse ledger limit controls visible row-level evidence without changing aggregate counts.
A calm result should still be checked against the parse evidence. Confirm that the parsed request count matches the source slice, the basis badge matches the timing field you meant to review, and ignored lines are either expected noise or fixed before the numbers leave the triage room.
Step-by-Step Guide:
Work from parser choice to endpoint evidence, then use the findings table only after parse loss is understood.
- Enter Service name. The value appears in the summary and JSON output, so use the same label you would use in an incident note.
- Choose Log format: NGINX key-value timing, Apache combined plus latency, or AWS ALB access log. If the summary says Check input, switch back here before changing thresholds.
- Set Latency basis. Use backend or target time for upstream service triage, edge or request total for end-to-end proxy timing, and auto only when mixed rows may lack one timing field.
- Set P95 target and keep Group IDs in paths enabled unless literal path values matter. The endpoint table should then group paths such as /api/orders/42 and /api/orders/43 together.
- Paste rows into Access log lines, choose Browse LOG/TXT, drop a LOG or TXT file, or click Load sample. Files larger than 2 MiB are rejected for browser-side analysis.
- Use Normalize after copying from a terminal or ticket. It trims line endings and blank gaps while keeping the log content in the textarea.
- Open Advanced if slow-count, 5xx, Apache unit, or parse-ledger settings need to match the service. If Apache rows show impossible latency, set Apache latency unit explicitly.
- Read Latency SLA Readout, then open Endpoint Latency. Start with rows marked review, especially the highest P95 endpoint and any row with a high 5xx rate.
- Open Parse Ledger when Check source data appears or the parsed count is lower than expected. Fix the parser, timing field, or source excerpt before trusting the percentile profile.
- Use SLA Findings, Status Class Mix, and Endpoint Percentile Profile after the ledger looks credible. Those views are strongest when the parser evidence and target settings are already correct.
Interpreting Results:
The main number is the summary P95, but the most actionable row is often the highest endpoint P95 in Endpoint Latency. Overall P95 tells you whether the sample missed the target. Endpoint rows tell you where to start looking.
Do not read pass as proof that the service is healthy. It means the parsed sample stayed inside the chosen P95 and 5xx gates for that row. A narrow sample, wrong timing basis, missing 5xx status, or ignored rows can still hide the problem.
| Visible cue | Best first reading | What to verify next |
|---|---|---|
| P95 summary exceeds target | The sample missed the selected tail-latency target. | Open Endpoint Latency and start with the highest P95 row. |
| 5xx rate exceeds target | Server-side or load-balancer failures are above the configured limit. | Check Status Class Mix and compare 5xx timing against endpoint rows. |
| Slow count is nonzero | At least one parsed request crossed the effective slow threshold. | Use SLA Findings and the percentile chart to see whether this is isolated tail noise or a route pattern. |
| Ignored log lines appears | Some source rows did not produce usable latency values. | Open Parse Ledger and check parser selection, missing timing fields, and Apache units. |
| basis edge or basis backend badge | The percentile is tied to that timing scope. | Keep the same basis before comparing with another run. |
Threshold edges are strict for review state: P95 and 5xx rate must be greater than their targets to flag a breach. Slow requests use an at-or-above check, so a request exactly at the effective slow threshold counts as slow.
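The two boundary rules can be made explicit in a short sketch:

```python
def breaches_p95(p95_ms: float, target_ms: float) -> bool:
    """Review state is a strict greater-than check: a P95 exactly
    at target passes."""
    return p95_ms > target_ms

def is_slow(latency_ms: float, slow_threshold_ms: float) -> bool:
    """Slow counting is at-or-above: a request exactly at the
    threshold counts as slow."""
    return latency_ms >= slow_threshold_ms

print(breaches_p95(300.0, 300.0))  # False: the boundary value passes the P95 gate
print(is_slow(300.0, 300.0))       # True: the boundary value counts as slow
```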
Worked Examples:
Default NGINX backend sample
The sample rows use NGINX-style rt and urt timing, Latency basis set to backend, P95 target set to 300 ms, Slow request cutoff set to 1000 ms, and 5xx target set to 1%. The six parsed requests produce a summary P95 of about 1,370 ms and a 5xx rate of 16.67%. Endpoint Latency marks POST /api/payments for review at about 1,530 ms P95, while SLA Findings points to P95, P99, 5xx rate, slow request count, and hottest endpoint evidence.
ALB target timing versus full edge timing
An Application Load Balancer excerpt with target times near 0.110, 0.125, and 1.480 seconds answers a backend question when Latency basis is backend. Switching the same rows to edge timing adds request and response processing segments, so Endpoint Latency can show a higher P95 even though the target response time did not change. That difference is useful when deciding whether to inspect the target service, client upload path, WAF path, or load-balancer side of the request.
Threshold boundary before a release check
A team checks a short deployment canary with P95 target set to 300 ms and an endpoint P95 that lands exactly at 300 ms. The endpoint does not breach the P95 gate because the review rule is greater than target, not greater than or equal. If one request is exactly at the effective slow threshold, though, the Slow column counts it because slow rows use an at-or-above boundary.
Apache unit mismatch in a copied excerpt
Apache rows that end with a microsecond %D value can look like huge millisecond latency if the unit is guessed wrong. When Check source data appears, or Parse Ledger shows odd timing notes, set Log format to Apache and choose Apache latency unit as microseconds. After that, check Parsed request(s), Endpoint Latency, and SLA Findings again before using the table in a handoff.
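The factor-of-1000 stakes are easy to see in a conversion sketch, assuming the three unit choices described above:

```python
def to_ms(value: float, unit: str) -> float:
    """Convert an Apache duration token to milliseconds. %D logs
    microseconds and %T logs seconds, so a wrong unit choice skews
    latency by a factor of 1000 (illustrative sketch)."""
    if unit == "s":
        return value * 1000.0
    if unit == "ms":
        return value
    if unit == "us":
        return value / 1000.0
    raise ValueError(f"unknown unit: {unit}")

print(to_ms(1530000, "us"))  # 1530.0: a %D microsecond value read correctly
print(to_ms(1530000, "ms"))  # 1530000.0: the same token misread as milliseconds
```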
Responsible Use Note:
Access logs can contain internal hostnames, endpoint paths, account identifiers, query-derived business context, client addresses, and operational timing details. Paste the smallest useful slice, redact unrelated secrets or identifiers when possible, and keep copied JSON, CSV, images, or documents inside the same handling rules used for production logs.
The log text is parsed in the browser session for the analysis. The page does not call the service being reviewed and does not send pasted log lines to a server for latency calculation, but the resulting tables and exported files can still contain sensitive operational evidence.
FAQ:
Which log format should I choose first?
Choose the format that produced the rows. Use NGINX key-value timing for lines with rt, urt, request_time, or upstream_response_time; use Apache for combined rows with a duration token; use AWS ALB access log for load-balancer rows with processing-time fields.
Should I use backend or edge latency?
Use backend or target time when the upstream service is being reviewed. Use edge or request total when the full proxy, load balancer, client upload, and response path are part of the question. Keep the same basis when comparing runs.
Why were some rows ignored?
Rows are ignored when the selected parser cannot find a usable latency value and endpoint. Open Parse Ledger, then check the selected Log format, timing field names, and Apache latency unit.
Does a 5xx finding identify the root cause?
No. A 5xx finding means the parsed sample crossed the configured 5xx percentage target. Use Status Class Mix, endpoint rows, deploy timing, target health, upstream reset evidence, and service logs to find the cause.
Why does grouping change the endpoint list?
Group IDs in paths replaces numeric path segments with :id and long hexadecimal segments with :hash. That makes repeated resource paths easier to compare, but literal path review may need the switch turned off.
Are pasted logs uploaded for analysis?
No. The pasted text or selected LOG/TXT file is analyzed in the browser session. The page can still load site assets and chart code, so treat this as local log processing for the calculation rather than a promise that the browser makes no network requests at all.
Glossary:
- Access log
- A request record written by a web server, proxy, or load balancer, often including method, path, status, bytes, and timing fields.
- Backend latency
- The upstream or target processing time used when the service behind the proxy is the focus.
- Edge latency
- The wider request path timing used when proxy, load-balancer, request, target, and response segments matter together.
- P95
- The latency value at the 95th percentile of the sorted request sample.
- 5xx rate
- The share of parsed requests whose status indicates a server-side or load-balancer failure response.
- Parse Ledger
- The row-level evidence table showing which parser was used, what endpoint and latency were found, and why a row was ignored or accepted.
- SLA finding
- A review row that flags a target breach, tail-latency issue, high 5xx rate, slow request count, parse loss, or hottest endpoint.
References:
- Configuring Logging, NGINX Documentation.
- Apache Module mod_log_config, Apache HTTP Server Version 2.4.
- Access logs for your Application Load Balancer, Amazon Web Services.
- HTTP response status codes, MDN Web Docs.