PostgreSQL EXPLAIN Plan Analyzer
Analyze PostgreSQL EXPLAIN plans online for cost, scans, row-estimate drift, buffers, spills, and next checks before tuning queries with review-ready evidence.
{{ summaryHeading }}
- {{ error }}
| Aspect | Value | Review detail | Copy |
|---|---|---|---|
| {{ row.aspect }} | {{ row.value }} | {{ row.detail }} |
| Level | Node | Relation | Cost | Rows | Actual | Signals | Copy |
|---|---|---|---|---|---|---|---|
| {{ row.level }} | {{ row.node }} | {{ row.relation }} | {{ row.cost }} | {{ row.rows }} | {{ row.actual }} | {{ row.signals }} |
| Severity | Signal | Evidence | Next action | Copy |
|---|---|---|---|---|
| {{ row.severity }} | {{ row.signal }} | {{ row.evidence }} | {{ row.action }} |
Introduction:
PostgreSQL EXPLAIN plans show how the planner expects a query to run. A plan turns SQL into a tree of scan, join, sort, aggregate, and limit nodes, with cost and row estimates attached to each node. For a slow endpoint, a risky migration, or a before-and-after index review, the plan helps identify where the database expects to spend work before anyone changes production schema.
The first reading should separate estimates from runtime evidence. Plain EXPLAIN reports planner estimates only. EXPLAIN ANALYZE runs the statement and adds actual timing, actual rows, loops, and optional buffer details. That distinction matters because a plan can look cheap in estimated cost while the real run shows poor row estimates, disk-backed sorting, repeated nested-loop work, or temporary I/O.
Cost is not elapsed time. PostgreSQL cost units are planner units based on cost parameters, and the top node includes child-node cost. A high-cost node is a useful review target, but it is not proof that the query ran slowly. The stronger evidence is a plan where estimates, actual rows, loops, buffers, and spill details point in the same direction.
Plan review should lead to a specific next check, not a reflexive index change. A sequential scan may be correct for a small table or a broad report. A nested loop may be ideal for a tiny outer relation. A sort may be unavoidable unless an index can provide the needed order. The useful question is whether the plan matches the data shape and response budget of the query being reviewed.
Technical Details:
A PostgreSQL plan is a tree. Leaf nodes usually read rows from a table, index, function, common table expression, or subquery. Parent nodes combine or reshape those rows through joins, hashes, sorts, aggregates, limits, and other operations. Indentation in text output, or nested Plans arrays in JSON output, shows which nodes feed other nodes.
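The tree shape can be made concrete with a small sketch. This is not the analyzer's own code, but the `Node Type` and `Plans` key names do match PostgreSQL's `FORMAT JSON` output:

```python
# Sketch: flatten a PostgreSQL JSON plan tree (EXPLAIN FORMAT JSON) into a
# depth-annotated node list. "Node Type" and "Plans" are PostgreSQL's actual keys.
def flatten_plan(node, depth=0):
    rows = [(depth, node.get("Node Type"))]
    for child in node.get("Plans", []):  # nested children feed this node
        rows.extend(flatten_plan(child, depth + 1))
    return rows

plan = {
    "Node Type": "Hash Join",
    "Plans": [
        {"Node Type": "Seq Scan"},
        {"Node Type": "Hash", "Plans": [{"Node Type": "Seq Scan"}]},
    ],
}
# flatten_plan(plan) lists each node with its depth, mirroring text-plan indentation.
```

The depth value corresponds to the indentation level a text plan would show for the same node.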
Each plan node has estimated startup cost, estimated total cost, estimated rows, and estimated row width. When ANALYZE is included, PostgreSQL also reports actual startup time, actual total time, actual rows, and loops. The actual time shown for a node is reported per loop, so repeated inner-node work needs the loop count before it can be compared fairly with a once-run node.
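A minimal sketch of the per-loop adjustment, with made-up timings for illustration:

```python
# Per-node "actual time" is reported per loop, so total work for a node is
# per-loop time multiplied by its loop count.
def total_node_time_ms(actual_total_time_ms, loops):
    return actual_total_time_ms * loops

# A fast-looking inner index scan (0.5 ms per loop) run 1,000 times by a
# nested loop does far more real work than a once-run 120 ms sort.
inner = total_node_time_ms(0.5, 1000)  # 500.0 ms total
sort = total_node_time_ms(120.0, 1)    # 120.0 ms total
```

This is why a node with a tiny per-loop time can still be the dominant cost of the query.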
Row estimates depend on planner statistics. Large gaps between estimated rows and actual rows often point to stale statistics, correlated predicates, skewed data, partial indexes, or parameters that made the planner choose a plan that did not fit the real input. Buffer and spill evidence adds another check: shared reads, temporary blocks, disk-backed sorts, and hash batches can reveal work that cost alone does not explain.
Rule Core:
The analyzer turns parsed plan nodes into review signals. User thresholds provide inclusive gates, so a node equal to the configured value is flagged. The default gates are 10000 cost units for Cost warning, 10x for Row drift warning, and 10000 rows for Seq scan row warning.
| Signal | Condition | Review consequence |
|---|---|---|
| High estimated total cost | The highest node total cost is greater than or equal to Cost warning. | Marked Warning, or Critical when the max cost reaches at least 5x the gate. |
| Large sequential scan | A Seq Scan node has estimated or actual rows greater than or equal to Seq scan row warning. | Points the review toward predicate selectivity, expected table size, and whether an index would actually help. |
| Row estimate drift | The drift multiple is greater than or equal to Row drift warning. | Marked Warning, or Critical when it reaches at least 10x the row-drift gate. |
| Disk-backed sort | A sort node reports disk space usage. | Suggests checking ORDER BY, LIMIT placement, index order, and work_mem. |
| Hash batching or temporary I/O | A hash operation reports more than one batch, or temp block reads/writes are present. | Suggests comparing hash input size, join order, statistics, memory pressure, and spill causes. |
| Rows removed by filter | A node reports filtered rows greater than or equal to Seq scan row warning. | Raises an Info finding for earlier filtering, partial-index, or predicate-rewrite review. |
| Estimate-only plan | No actual rows or actual timings are present. | Raises an Info finding because runtime evidence is unavailable. |
Row-estimate drift compares planned rows with actual rows when the plan includes runtime data. A value of 1x means the estimate matched exactly. Larger values show how far the estimate missed, with the direction reported as underestimated or overestimated.
The drift multiple is D = max(A, E) / min(A, E), where A is actual rows clamped to at least 1 and E is estimated rows clamped to at least 1. If actual rows are greater than estimated rows, the row count was underestimated; if estimated rows are greater, it was overestimated.
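The drift rule described above can be sketched directly (illustrative code, not the tool's implementation):

```python
# Drift multiple: clamp both counts to at least 1, then take the larger over
# the smaller, so 1x means the estimate matched exactly.
def row_drift(estimated_rows, actual_rows):
    e = max(estimated_rows, 1)
    a = max(actual_rows, 1)
    multiple = max(a, e) / min(a, e)
    if a > e:
        direction = "underestimated"
    elif e > a:
        direction = "overestimated"
    else:
        direction = "exact"
    return multiple, direction

# Planner expected 100 rows but the run produced 5,000: a 50x underestimate,
# which clears the default 10x Row drift warning gate.
```

With the default 10x gate, `row_drift(100, 5000)` would be flagged, while `row_drift(100, 120)` would not.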
| Review status | Weighted risk score | How to read it |
|---|---|---|
| Critical review | >= 7 | Multiple strong signals or at least one critical signal deserve careful plan and data review. |
| Tuning review | >= 4 | The plan has enough warnings to justify a focused query, statistics, index, or memory check. |
| Watch plan | >= 1 | At least one lower-weight finding is present, often estimate-only input or an informational signal. |
| Low-signal plan | 0 | No major heuristic flags were found under the configured gates. |
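The status ladder above maps a weighted score to a label with inclusive boundaries, roughly like this sketch:

```python
# Sketch of the status ladder: the same inclusive score boundaries as the
# table above. Illustrative only; the tool's internal weights are not shown here.
def review_status(score):
    if score >= 7:
        return "Critical review"
    if score >= 4:
        return "Tuning review"
    if score >= 1:
        return "Watch plan"
    return "Low-signal plan"
```

A score of exactly 7 lands in Critical review, and exactly 4 in Tuning review, because each gate is inclusive.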
Text input is parsed from ordinary PostgreSQL plan lines that contain `cost=...`. JSON input must contain a top-level `Plan` object, such as output from `EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)`. The parser reads plan evidence and produces review signals; it does not execute SQL, query the database, inspect indexes, or confirm whether a proposed schema change is safe.
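A minimal sketch of the kind of text-node parsing involved, assuming only the standard `(cost=A..B rows=N width=W)` suffix PostgreSQL prints on costed plan lines:

```python
import re

# Pull cost, row, and width figures out of one text-plan node line.
# The pattern matches PostgreSQL's "(cost=A..B rows=N width=W)" suffix.
NODE = re.compile(
    r"cost=(?P<start>[\d.]+)\.\.(?P<total>[\d.]+)\s+"
    r"rows=(?P<rows>\d+)\s+width=(?P<width>\d+)"
)

line = "Seq Scan on orders  (cost=0.00..445.00 rows=10000 width=244)"
m = NODE.search(line)
parsed = {
    "total_cost": float(m.group("total")),
    "rows": int(m.group("rows")),
    "width": int(m.group("width")),
}
```

Lines without a `cost=` group, such as `Filter:` or `Buffers:` detail lines, would simply not match and would be treated as node annotations rather than nodes.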
Everyday Use & Decision Guide:
Start with one query shape or one incident question. Paste raw text output into `EXPLAIN plan`, drop a TXT, LOG, PLAN, or JSON file onto the textarea, or choose `Browse`. Keep `Input format` on `Auto detect` for normal text or JSON output, and force a mode only when copied wrapper text makes detection ambiguous.
Use Plan label for the endpoint, query name, migration ticket, or benchmark case. The label does not change the analysis, but it keeps the result tied to the right review when several plans are being compared.
- Use `Cost warning` as a local triage gate, not a universal performance standard. Cost units depend on PostgreSQL cost settings and data statistics.
- Use `Row drift warning` when `EXPLAIN ANALYZE` is available. The default `10x` gate catches estimates that are far enough off to affect join choice or scan choice.
- Set `Seq scan row warning` above expected small-table scans. A deliberate scan of a tiny lookup table should not drown out a large table scan.
- Open `Plan Review Brief` first to confirm status, node count, max cost, runtime, drift, buffers, spill evidence, and the primary next action.
- Use `Plan Node Ledger` when you need node-by-node evidence, including relation, cost range, row estimate, actual timing, and signal text.
- Use `Plan Finding Ledger` for the handoff note. It keeps severity, signal, evidence, and next action in one place.
Plan Hotspot Ladder ranks nodes by actual time when runtime evidence exists, otherwise by estimated cost. Row Estimate Drift only appears meaningfully when actual rows exist, so a plain estimate-only plan can still be reviewed for scan and cost shape, but it cannot answer whether planner row estimates matched the real run.
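The hotspot ordering can be sketched as follows (illustrative code with hypothetical node dictionaries, not the tool's internal representation):

```python
# Rank nodes by total actual time (per-loop time * loops) when any runtime
# evidence exists, otherwise fall back to estimated total cost.
def hotspot_order(nodes):
    has_runtime = any("actual_time" in n for n in nodes)
    if has_runtime:
        key = lambda n: n.get("actual_time", 0.0) * n.get("loops", 1)
    else:
        key = lambda n: n.get("total_cost", 0.0)
    return sorted(nodes, key=key, reverse=True)

nodes = [
    {"node": "Seq Scan", "total_cost": 445.0, "actual_time": 12.0, "loops": 1},
    {"node": "Index Scan", "total_cost": 8.4, "actual_time": 0.05, "loops": 800},
]
# The looped index scan (0.05 ms * 800 = 40 ms) outranks the 12 ms scan,
# even though its estimated cost is far lower.
top = hotspot_order(nodes)[0]["node"]
```

Note how the ranking flips once loops are accounted for: the cheapest-looking node by cost is the real hotspot by time.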
A warning does not automatically mean the query is broken. Treat the top finding as the first verification target. Check whether statistics are fresh, whether the predicates match available indexes, whether the row counts fit production bind values, and whether a repeated result changes after the proposed query or index change.
Step-by-Step Guide:
Use one plan per review so the summary status describes a single query and a single set of thresholds.
- Fill `Plan label` with a short query, endpoint, or ticket name. The label should appear in `Plan Review Brief`.
- Paste the plan into `EXPLAIN plan`, drag a supported text or JSON file onto the textarea, choose `Browse`, or press `Load sample`. The source helper should show character count or a loaded-file message.
- Leave `Input format` on `Auto detect` unless the parse warning says JSON failed or no costed text nodes were found. If needed, switch to `Text plan` or `JSON plan` and retry with raw PostgreSQL output.
- Set `Cost warning`, `Row drift warning`, and `Seq scan row warning` for this review. Threshold matches are inclusive, so a `10000` row sequential scan reaches a `10000` row gate.
- Read the summary box. `Critical review`, `Tuning review`, `Watch plan`, or `Low-signal plan` should appear with finding count, node count, timing availability, and parsed format.
- Open `Plan Finding Ledger` and start with the highest-severity finding. Use the `Evidence` and `Next action` cells to decide whether the next check is statistics, index shape, join order, memory, or copied-source cleanup.
- Open `Plan Node Ledger` when a finding points to a specific node. Check `Relation`, `Cost`, `Rows`, `Actual`, and `Signals` before changing SQL or indexes.
- Use `JSON` when another workflow needs the parsed nodes, thresholds, findings, and chart data together. If the error panel reports missing input, invalid JSON, or no costed node lines, fix the source before sharing results.
Interpreting Results:
Trust parse coverage before trusting severity. If the error panel says the plan could not be fully parsed, or the summary says Check input, fix the pasted source first. A partial parse can hide the node that matters most.
| Visible cue | Best first reading | What to verify next |
|---|---|---|
| Critical review | The weighted finding score reached 7 or more. | Confirm the top finding with current statistics, production-like parameters, and a repeat run when possible. |
| High estimated total cost | At least one node crossed the configured cost gate. | Check the highest-cost node against indexes, join order, filters, and whether cost settings are comparable to prior runs. |
| Row estimate drift | Actual rows and estimated rows differ by at least the configured multiple. | Run ANALYZE when statistics may be stale, and inspect skewed columns or correlated predicates. |
| Disk-backed sort or Temporary buffer I/O | The plan reports spill evidence from sort, hash, or temp-block activity. | Review ORDER BY, LIMIT, join input size, and query memory settings before assuming an index alone fixes it. |
| Estimate only | No actual rows or actual timings were present. | Use EXPLAIN (ANALYZE, BUFFERS) in a safe environment when runtime evidence is needed. |
A low-signal result does not prove the query is healthy. It only means the parsed plan stayed below the current gates. If users still see latency, compare with a focused slow-log entry, real bind values, a fresh EXPLAIN ANALYZE, and the application path that called the query.
Worked Examples:
Checkout order lookup with spill evidence
A pasted EXPLAIN ANALYZE plan for a checkout query includes a Sort, a Hash Join, and two Seq Scan nodes. With Cost warning at 10000, Row drift warning at 10x, and Seq scan row warning at 10000, Plan Review Brief should move toward Critical review. The ledgers show high estimated cost, large sequential scans, disk-backed sort evidence, hash batching, temporary buffer I/O, and rows removed by filter. The next check is not simply "add an index"; it is to inspect predicates, join order, sort order, fresh statistics, and memory pressure for that query shape.
Estimate-only index plan
A plain plan such as Index Scan using orders_pkey on orders (cost=0.42..8.44 rows=1 width=96) parses cleanly but has no actual rows or timing. Plan Review Brief can still show node count and estimated cost, while Plan Finding Ledger reports Estimate-only plan. That is useful for code review, but it should not be used as proof that the production run is fast.
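Distinguishing an estimate-only line from an analyzed one comes down to the presence of the `(actual time=... rows=... loops=...)` group, as this sketch shows:

```python
import re

# A plain EXPLAIN line carries "(cost=...)" but no "(actual time=...)" group;
# an EXPLAIN ANALYZE line carries both.
def is_estimate_only(line):
    return "cost=" in line and not re.search(r"actual\s+time=", line)

plain = "Index Scan using orders_pkey on orders  (cost=0.42..8.44 rows=1 width=96)"
analyzed = plain + " (actual time=0.031..0.033 rows=1 loops=1)"
```

Here `plain` is estimate-only while `analyzed` carries runtime evidence, so only the latter can feed drift and hotspot checks.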
Borderline sequential scan gate
A Seq Scan on events estimated at 10000 rows reaches a Seq scan row warning of 10000 because the gate is inclusive. Plan Node Ledger should show a large-scan signal for that node. If the table is a tiny daily staging table, raising the gate may make sense. If it is a user-facing lookup table with selective predicates, the scan deserves a predicate and index review.
JSON copied without the top-level plan object
A JSON paste that starts with a nested node object but does not include the top-level Plan wrapper produces an error saying no top-level plan object was found. The summary changes to Check input, and Plan Finding Ledger reports an input parse error. Copy the full EXPLAIN (FORMAT JSON) output array, or switch to raw text output, before using the findings.
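The wrapper check from this example can be sketched as a small validation (illustrative code, assuming only PostgreSQL's documented `FORMAT JSON` shape: an array whose first element holds a `Plan` object):

```python
import json

# Accept full EXPLAIN (FORMAT JSON) output: a top-level array whose first
# element contains a "Plan" object (a bare dict with "Plan" also passes).
def has_top_level_plan(text):
    try:
        doc = json.loads(text)
    except ValueError:
        return False
    if isinstance(doc, list) and doc and isinstance(doc[0], dict):
        doc = doc[0]
    return isinstance(doc, dict) and isinstance(doc.get("Plan"), dict)

good = '[{"Plan": {"Node Type": "Seq Scan"}}]'
bare = '{"Node Type": "Seq Scan"}'  # a nested node pasted without the wrapper
```

The `bare` paste fails the check, which is exactly the "no top-level plan object" error described above.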
FAQ:
Can I paste plain EXPLAIN output?
Yes. Text output is accepted when node lines include PostgreSQL cost fields such as cost=0.00..445.00 rows=10000 width=244. The parser also accepts JSON output with a top-level Plan object.
Why does the same plan show estimated cost and actual time?
Plain EXPLAIN shows estimates only. EXPLAIN ANALYZE executes the statement and adds actual timing, actual rows, and loops. Cost and time use different units, so compare them as separate evidence.
Does a large sequential scan always mean a missing index?
No. A sequential scan can be correct for a broad query, a small table, or a case where most rows are needed. Use Plan Node Ledger and the finding's Next action to check selectivity, expected table size, and index usefulness.
Why is Row Estimate Drift empty?
Row Estimate Drift needs actual rows from EXPLAIN ANALYZE. A plain estimate-only plan can still show cost, scan shape, and findings, but it cannot compare estimated rows with real rows.
What should I do when parsing fails?
Use raw PostgreSQL text output or full FORMAT JSON output. If the pasted source has copied table borders, logs around the plan, or a partial JSON object, press Normalize, force Text plan or JSON plan, and paste the complete plan again.
Is my pasted plan uploaded for analysis?
No server-side parser is used for the plan text. Pasted content and selected TXT, LOG, PLAN, or JSON files are read in the browser for the current analysis, then the visible tables, charts, and JSON are built from that local source.
Glossary:
- Plan node
- One operation in a PostgreSQL plan tree, such as a scan, join, sort, aggregate, hash, or limit.
- Startup cost
- The estimated work before a node can start returning rows.
- Total cost
- The estimated work for a node if it runs to completion, including child-node cost.
- Actual rows
- The row count reported by `EXPLAIN ANALYZE` for a plan node.
- Loops
- The number of times a plan node was executed, often important for nested-loop plans.
- Buffer evidence
- Shared, local, or temporary block counters reported when the plan includes buffer details.
- Spill evidence
- Disk sort, hash batching, or temp block activity that suggests work did not stay fully in memory.
References:
- Using EXPLAIN, PostgreSQL Documentation.
- EXPLAIN, PostgreSQL Documentation.
- ANALYZE, PostgreSQL Documentation.