{{ summaryHeading }}
{{ summaryPrimary }}
{{ summaryLine }}
{{ badge.label }}
MySQL slow query log inputs
Paste a slow-log slice or drop a LOG/TXT file; parsing stays in the browser.
{{ sourceMetaLabel }}
{{ fileStatus || 'Drop LOG or TXT onto the textarea.' }}
Match `long_query_time` or the review gate used by your tuning queue.
sec
Use a lower value for OLTP endpoints and a higher value for analytical queries where longer runtimes are expected.
rows
Name the schema, service, or ticket context represented by this slow-log sample.
Keep exported tables focused on the heaviest query entries and fingerprints.
rows
Customize
Advanced
:

Introduction:

MySQL slow query logs capture SQL statements that crossed the server's configured logging thresholds. They are most useful after a slow endpoint, a period of high database CPU, a queue backlog, or a deployment regression has already narrowed the time window. The log does not explain every wait in the database, but it gives concrete evidence about statement time, rows scanned, rows returned, lock waits, and sometimes extra per-statement counters.

A single slow entry can be misleading. One expensive report query may be expected, while a repeated application query that scans hundreds of thousands of rows can explain a user-facing slowdown. Slow-log review becomes more useful when similar statements are grouped, because the repeated shape of the SQL often matters more than the literal order ID, customer ID, date, or status value in one sample.

Flow diagram showing a MySQL slow-log entry reduced into parsed timing fields, grouped as a SQL fingerprint, then queued for review by latency and scan signals.

Slow logs are not a full performance diagnosis by themselves. They usually omit buffer-pool state, current execution plans, competing workload, and application timing around the query. They do, however, answer a practical first question: which SQL shapes consumed enough time or scanned enough rows to deserve a closer plan, index, or transaction review.

The safest reading keeps timing evidence and SQL evidence together. Query time points to latency, rows examined points to access-path cost, lock time points to waiting, and repeated fingerprints point to workload concentration. A review that keeps those facts connected is less likely to chase the longest single query while missing the pattern that hurts users most often.

Technical Details:

MySQL FILE-style slow-log entries record evidence about each completed statement. The core fields are Query_time, Lock_time, Rows_sent, and Rows_examined. With extra slow-log counters enabled, the same entry may also include handler reads, sort counters, temporary-table counters, byte counts, thread identifiers, error status, and explicit start or end timestamps.

The important distinction is that slow-log rows are samples of completed statements, not live traces. MySQL writes the entry after execution and lock release, so file order can differ from execution order. The SET timestamp line records when the slow statement began, while Query_time remains the elapsed statement time in seconds.
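The shape of one FILE-style entry, and the core fields a parser pulls from it, can be sketched as follows. The sample entry and the regular expressions are illustrative only, not the tool's exact parser:

```python
import re

# A representative FILE-style slow-log entry (values are invented for illustration).
SAMPLE = """\
# Time: 2024-05-07T09:15:02.118Z
# User@Host: app[app] @ web-1 []  Id:  4211
# Query_time: 2.431  Lock_time: 0.004 Rows_sent: 12  Rows_examined: 182000
SET timestamp=1715073302;
SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC;
"""

def parse_entry(text):
    """Pull the core timing fields and the SQL text from one slow-log entry."""
    # Metadata appears as 'Name: value' pairs on the '#' comment lines.
    fields = dict(re.findall(r"(\w+):\s+([\d.]+)", text))
    # Everything that is not metadata or the SET timestamp line is the statement.
    sql = "\n".join(line for line in text.splitlines()
                    if not line.startswith("#")
                    and not line.startswith("SET timestamp"))
    return {
        "query_time": float(fields["Query_time"]),
        "lock_time": float(fields["Lock_time"]),
        "rows_sent": int(fields["Rows_sent"]),
        "rows_examined": int(fields["Rows_examined"]),
        "sql": sql.strip(),
    }
```

Note that the entry is written after execution, so a real parser must treat each `# Query_time` block plus its following SQL text as one unit, as the review steps below assume.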

SQL fingerprinting makes repeated shapes comparable. Literal strings, numbers, UUIDs, hexadecimal values, comments, and long IN (...) lists can hide the fact that the same query shape is repeating. Collapsing those values produces a normalized statement shape; repeated samples can then be ranked by count, total query time, P95 query time, and maximum rows examined.
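A minimal normalization pass can be sketched like this. The replacement rules are an illustrative subset, not the tool's exact fingerprinting logic:

```python
import re

def fingerprint(sql):
    """Collapse changing literal values so repeated query shapes compare equal."""
    s = sql.strip().rstrip(";")
    s = re.sub(r"--[^\n]*|/\*.*?\*/", "", s, flags=re.S)   # strip comments
    s = re.sub(r"'(?:[^'\\]|\\.)*'", "?", s)               # string literals -> ?
    s = re.sub(r"\b0x[0-9a-fA-F]+\b", "?", s)              # hex literals -> ?
    s = re.sub(r"\b\d+(?:\.\d+)?\b", "?", s)               # numbers -> ?
    # Collapse IN lists of placeholders so IN (1) and IN (1, 2, 3) group together.
    s = re.sub(r"IN\s*\(\s*\?(?:\s*,\s*\?)*\s*\)", "IN (?+)", s, flags=re.I)
    s = re.sub(r"\s+", " ", s)                             # normalize whitespace
    return s.lower()
```

With these rules, two statements that differ only in their bind values produce the same fingerprint string and land in the same group.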

Rule Core

The review rules use inclusive gates for the two user-set thresholds. An entry at or above Slow threshold receives a latency signal. An entry at or above Rows examined warning receives a scan signal. Other signals come from fixed counters or SQL text patterns.

Slow query log fields and how they affect review
Field or counter Meaning Review consequence
Query_time Statement execution time in seconds. Compared with Slow threshold and used for totals, averages, P95, and maximum timing.
Lock_time Time spent acquiring locks, in seconds. Values at or above 0.1 seconds raise a lock-wait signal.
Rows_sent Rows returned to the client. Compared with rows examined to spot low-yield scans.
Rows_examined Rows examined by the MySQL server layer. Compared with Rows examined warning and used to rank scan-heavy statements.
Created_tmp_disk_tables Disk temporary tables created by the statement when extra counters are present. Any value above 0 makes the related row or fingerprint high severity.
Sort_merge_passes and Sort_rows Sort work reported by extra counters. A merge pass or a sort-row count at or above the row warning raises sort pressure.
Signals used in MySQL slow query log analysis
Signal Condition What to check next
Latency gate Query_time is greater than or equal to Slow threshold. Compare the fingerprint against the endpoint, workload window, and expected response budget.
Scan gate Rows_examined is greater than or equal to Rows examined warning. Check predicates, join order, covering indexes, and whether statistics are stale.
Low row yield At least 1000 rows examined and returned rows are under 1% of examined rows. Look for weak selectivity, missing composite indexes, or filters applied after a large scan.
Lock wait Lock_time is at least 0.1 seconds. Correlate with transaction length, hot rows, gap locks, and DDL windows.
Disk temp table or sort pressure Extra counters show disk temporary tables, sort merge passes, or high sort-row counts. Review GROUP BY, ORDER BY, memory limits, and composite index order.
SQL pattern The statement includes patterns such as SELECT *, JSON predicates, leading-wildcard LIKE, unbounded ORDER BY, or OR predicates. Confirm whether the pattern is intentional and whether MySQL can still use a selective access path.
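Assuming entries parsed into the fields above, the numeric gates can be sketched as one function. The fixed 0.1-second lock gate and the 1000-row low-yield floor follow the table; the default threshold values are placeholders for the two user-set gates:

```python
def signals(entry, slow_threshold=1.0, rows_warning=10000):
    """Evaluate the numeric review gates (all comparisons are inclusive)."""
    out = []
    if entry["query_time"] >= slow_threshold:          # latency gate
        out.append("latency")
    if entry["rows_examined"] >= rows_warning:         # scan gate
        out.append("scan")
    # Low row yield: a large scan that returned under 1% of examined rows.
    if entry["rows_examined"] >= 1000 and entry["rows_sent"] < 0.01 * entry["rows_examined"]:
        out.append("low-row-yield")
    if entry["lock_time"] >= 0.1:                      # fixed lock-wait gate
        out.append("lock-wait")
    return out
```

For example, an entry with Query_time 0.82, Rows_sent 4, and Rows_examined 65000 stays under a 1-second latency gate but still raises both the scan and low-row-yield signals.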

Severity is a queueing signal. A row or fingerprint is High when query time reaches four times the slow threshold, rows examined reaches twenty times the row warning, or a disk temporary table is present. It is Medium when any signal is present, or when a grouped fingerprint's total query time reaches at least twice the slow threshold. Otherwise it is Low.
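Those severity rules read as a small decision function. This is a sketch of the stated conditions, with placeholder defaults for the two user-set thresholds:

```python
def severity(sigs, query_time, rows_examined, disk_tmp_tables=0,
             group_total_time=0.0, slow_threshold=1.0, rows_warning=10000):
    """Map the review gates to High / Medium / Low queue priority."""
    # High: strong conditions on time, scan volume, or disk temp tables.
    if (query_time >= 4 * slow_threshold
            or rows_examined >= 20 * rows_warning
            or disk_tmp_tables > 0):
        return "High"
    # Medium: any signal, or a group whose total time is twice the threshold.
    if sigs or group_total_time >= 2 * slow_threshold:
        return "Medium"
    return "Low"
```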

Fingerprint grouping and output measures
Measure How it is built How to read it
Fingerprint SQL command plus normalized SQL text with literals and repeated value lists collapsed. Useful for finding repeated query shapes, not proof that every execution used the same plan.
Samples Count of parsed slow-log entries in the fingerprint group. A high count can matter even when no one sample is the slowest entry.
Total Query_time Sum of Query_time for the grouped samples. Good first ranking for workload concentration.
P95 Near-worst query time within the grouped samples. Useful for repeated fingerprints where the maximum alone may be a one-off spike.
Max rows examined Largest Rows_examined value in the group. Flags access-path risk even when the query returned only a few rows.
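The grouped measures can be computed per fingerprint as follows. The nearest-rank P95 shown here is one common choice; the tool's exact percentile method is not specified:

```python
import math

def group_measures(query_times, rows_examined):
    """Aggregate one fingerprint group's samples into the output measures."""
    times = sorted(query_times)
    # Nearest-rank P95: the smallest sample at or above the 95th percentile rank.
    rank = max(math.ceil(0.95 * len(times)) - 1, 0)
    return {
        "samples": len(times),
        "total_query_time": round(sum(times), 3),
        "p95_query_time": times[rank],
        "max_rows_examined": max(rows_examined),
    }
```

With only two samples the nearest-rank P95 equals the maximum; the measure becomes more informative as a fingerprint accumulates more samples.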

A slow-log parser can rank evidence, but it cannot confirm the actual plan used for the next run. MySQL EXPLAIN, EXPLAIN ANALYZE, table statistics, indexes, production bind values, and concurrent workload still decide whether an index change, query rewrite, or transaction change is the right repair.

Everyday Use & Decision Guide:

Start with a focused log excerpt around one slow endpoint, deploy window, report job, or database alert. Paste the text into Slow query log, drop a LOG or TXT file, or choose Browse LOG/TXT. The file reader is capped at 2 MB, so large rotated logs should be filtered before review.

Set Slow threshold to match the value your team cares about for this review. A production OLTP incident might use 1 second or lower, while an expected reporting query may need a higher gate. Set Rows examined warning lower for latency-sensitive request paths and higher for analytical work where larger scans are expected.

  • Use Slow Log Snapshot first to confirm parsed entries, fingerprint count, P95 query time, total Query_time, maximum rows examined, and review queue size.
  • Use Fingerprint Hotspots to choose the first query shape to inspect. The table ranks grouped SQL by total time, P95, rows examined, signals, and state.
  • Use Query Ledger when one exact entry matters. It preserves the time, query time, lock time, rows examined, rows sent, schema, fingerprint, and signal list for each visible row.
  • Use Tuning Findings for the handoff note. It turns the parsed evidence into next checks such as EXPLAIN ANALYZE, index review, lock review, sort review, or parse cleanup.
  • Use Fingerprint Time Ladder when a chart is clearer than a table for showing which grouped statement consumed the most total query time.

The Database or service label field is only a label for the result, JSON, and exports. It is still worth filling in when several services share a database server, because copied rows are easier to attach to the right incident or ticket.

Do not treat a hot fingerprint as an automatic instruction to add an index. A high row count can come from an intentional report, a stale statistic, a missing composite index, a JSON predicate that needs a generated-column index, or a query that should be rewritten. Take the top fingerprint and its suggested next check into plan review before changing production schema.

Step-by-Step Guide:

Use one coherent slow-log slice per run so the summary describes one workload question rather than a mixed week of unrelated jobs.

  1. Paste FILE-style slow-log entries into Slow query log, drag a LOG or TXT file onto the textarea, choose Browse LOG/TXT, or press Load sample. The source badge should change from No source loaded to a character and entry count.
  2. Press Normalize if copied text has trailing spaces or large blank gaps. If the warning says no complete entries were parsed, keep each # Query_time line together with its SQL statement.
  3. Set Slow threshold in seconds. Entries at or above that value become latency review candidates and appear in summary badges as gated entries.
  4. Set Rows examined warning in rows. Entries at or above that value receive a scan signal and can make fingerprints rise in Fingerprint Hotspots.
  5. Open Advanced when report labeling or display length matters. Fill Database or service label and adjust Visible row limit between 5 and 100.
  6. Read the summary box and Slow Log Snapshot. Confirm parsed entries, skipped blocks, total Query_time, P95 query time, and maximum rows examined before using the deeper tabs.
  7. Open Fingerprint Hotspots and start with the highest total Query_time fingerprint. Use Query Ledger to inspect exact samples and Tuning Findings to see the suggested next check.
  8. Use Fingerprint Time Ladder when the total-time distribution needs to be shared visually. Use JSON when another workflow needs thresholds, summary, fingerprints, queries, findings, and skipped blocks together.
  9. Before copying results, resolve warnings about missing source, missing Query_time, missing SQL text, or skipped blocks so the review queue is not based on a partial parse.

Interpreting Results:

The first result to trust is parse coverage. If Parsed entries is low or skipped blocks are reported, fix the source text before ranking fingerprints. A clean parse should show entries, fingerprints, total timing, and the row gate you intended to apply.

How to interpret MySQL slow query log result cues
Visible cue Best first reading What to verify next
Fingerprint Review Queue At least one fingerprint has non-low severity under the current gates. Inspect the top fingerprint with real bind values and a current plan.
Slow Log Below Gates Parsed entries did not cross the configured latency or scan review gates. Check whether the thresholds are too high for the user-facing issue, or paste a larger focused sample.
Rows examined pressure One or more entries scanned at least the configured row-warning count. Check predicate selectivity, join order, and index coverage before blaming the database server generally.
Lock wait signal A statement spent at least 0.1 seconds acquiring locks. Correlate with transaction logs, hot rows, and concurrent writes.
Unparsed source fragments Some copied blocks lacked either timing fields or SQL text. Copy the surrounding slow-log lines again before sharing totals or counts.

High, Medium, and Low are review priorities, not proof of root cause. A high entry may come from an expected maintenance job, while a medium repeated fingerprint may explain more user impact because it appears often. Always compare severity with sample count, total Query_time, and the service or endpoint that produced the log slice.

The chart ranks visible fingerprints by total query time. It is a good way to explain concentration, but it should not replace the ledger. Keep one exact query sample, the fingerprint, the timing evidence, and the suggested next check together when opening a ticket or reviewing an index change.

Worked Examples:

Checkout queries scanning many order rows

A checkout excerpt contains two similar SELECT statements against orders and customers. One has Query_time: 2.431 and Rows_examined: 182000; the other has Query_time: 2.118 and Rows_examined: 176200. With Slow threshold at 1 second and Rows examined warning at 10000, Fingerprint Hotspots groups the two statements into one fingerprint, shows two samples, and flags latency, scan, and low row-yield signals. The next check is an EXPLAIN ANALYZE and index review for the filter and sort path.

A fast enough query that still scans too much

A query with Query_time: 0.820, Rows_sent: 4, and Rows_examined: 65000 stays below a 1 second latency gate, but it crosses a 10000 row gate. Tuning Findings should still call out rows examined pressure, because a query can be acceptable in one small sample and become expensive when traffic rises or cache conditions change.

Lock wait on a single-row update

An update with Rows_examined: 1 can look harmless until Lock_time reaches 0.128 seconds. That entry receives a lock-wait signal even though it does not cross the row warning. Query Ledger keeps the exact entry visible, and Tuning Findings points the next check toward transaction length, hot rows, and concurrent writes rather than a broad scan or missing-index problem.

Copied source missing the SQL statement

If a pasted block contains # Query_time but not the following SQL text, the parser skips that block and reports it in skipped fragments. The summary and JSON still keep the skipped count, but the review should pause. Copy the slow-log entry again with its SQL statement before comparing totals, because a missing heavy query can change the top fingerprint and the review queue.

FAQ:

Can I paste a whole production slow log?

Use a focused excerpt instead. The file loader stops at 2 MB, and very broad logs can mix reports, batch jobs, incidents, and normal traffic in a way that makes the top fingerprint less useful.

Why are different SQL statements grouped together?

The fingerprint replaces changing literal values with placeholders, so statements with the same shape can be reviewed together. That is useful for repeated application queries, but it does not prove every grouped sample used the same execution plan.

Does a High row mean an index should be added?

No. High means the row or fingerprint crossed strong review conditions, such as very high time, very high rows examined, or a disk temporary table. Confirm with EXPLAIN, EXPLAIN ANALYZE, current statistics, and production parameter values before changing indexes.

What format does the parser expect?

It expects MySQL FILE-style slow-log entries with # Query_time metadata and SQL text. It also reads common surrounding lines such as # Time, # User@Host, SET timestamp=..., schema hints, and extra slow-log counters when present.

Is my pasted log uploaded for analysis?

No server-side parser is used for this tool. Pasted text and selected LOG or TXT files are read in the browser for the current analysis, then the visible tables, chart, and JSON are built from that local source.

Glossary:

Query_time
Statement execution time in seconds as recorded in the slow query log.
Rows_examined
The number of rows the MySQL server layer examined while processing the statement.
Fingerprint
A normalized SQL shape that collapses changing literal values so repeated statements can be grouped.
P95
A near-worst timing value within a group, useful when one maximum might be an unusual spike.
Lock wait
Time spent acquiring locks before the statement could continue.
log_slow_extra
A MySQL setting that adds per-statement counters such as sort and temporary-table values to FILE slow-log output.
