Introduction:

Database migration risk review looks for the changes that can turn a normal deploy into a stalled release, blocked write path, or difficult rollback. Syntax can be valid while the operational shape is risky. Adding a required column, building an index, validating a foreign key, touching a hot table, or running a broad backfill can all behave differently once production traffic and table size are involved.

The practical goal is early release judgment. A schema change should be read for lock behavior, table rewrites, destructive operations, validation scans, and data movement before it is bundled into a deploy. That matters most when the migration touches checkout, billing, authentication, event ingest, job queues, or any table where stalled writes become a customer-facing incident.

Figure: Flow diagram showing SQL text reduced into migration risk signals, score gates, and a release runbook.

A static review cannot prove that a migration will run safely. It cannot see live locks, table statistics, replica state, disk headroom, application compatibility, or trigger behavior. It is best used as a preflight gate that says which statements need rehearsal, staging evidence, separate approval, or a different rollout order before production execution.

The strongest review combines SQL shape with production context. The same ALTER TABLE can be uneventful on a tiny lookup table and risky on a 120 million row events table. A useful risk note therefore names the statement, the table, the traffic level, the suspected failure mode, and the next verification step.

Technical Details:

Migration risk comes from how a database engine must protect table structure and data consistency while schema or bulk data changes are running. Locks protect metadata and rows. Table rewrites copy or rebuild storage. Constraint validation checks existing rows. Broad updates and deletes can create undo, write-ahead log, replication lag, and long lock waits even when each statement is syntactically ordinary.

Engine choice changes the safest form of the same intent. PostgreSQL can build indexes with CONCURRENTLY, but that form cannot run inside a normal transaction block. MySQL and MariaDB deployments often rely on explicit ALGORITHM and LOCK clauses so unsupported concurrency requests fail rather than silently taking a heavier path. SQL Server index work can use ONLINE = ON to reduce blocking and, where supported, RESUMABLE = ON to make long builds easier to pause and control.
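The engine-specific safe forms can be put side by side. These statement shapes follow each engine's documented low-blocking index option; the table and index names are hypothetical, and availability depends on version and edition (SQL Server online and resumable builds, for example, are not supported everywhere):

```python
# Illustrative low-blocking index-build forms per engine.
# Table and index names are hypothetical; check version/edition support
# before relying on any of these in production.
SAFE_INDEX_FORMS = {
    # Cannot run inside a normal transaction block.
    "postgresql": "CREATE INDEX CONCURRENTLY idx_orders_state ON orders (state);",
    # Explicit clauses make unsupported concurrency fail instead of degrading.
    "mysql": "ALTER TABLE orders ADD INDEX idx_orders_state (state), "
             "ALGORITHM=INPLACE, LOCK=NONE;",
    # ONLINE and RESUMABLE depend on SQL Server edition and version.
    "sqlserver": "CREATE INDEX idx_orders_state ON orders (state) "
                 "WITH (ONLINE = ON, RESUMABLE = ON);",
}
```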

Large or hot table context raises the cost of being wrong. The review treats row-count hints such as orders:28000000, events:120m, and words such as hot, critical, write, or tier-0 as table inventory. A table is also treated as hot when its parsed count is at least 10,000,000 rows. That context does not make a statement unsafe by itself, but it raises severity when the statement touches that table.
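The inventory parsing just described can be sketched in a few lines. This is a simplified approximation, not the tool's exact parser; the 10,000,000-row threshold and the hot-word list come from the text above:

```python
import re

HOT_WORDS = {"hot", "critical", "write", "large", "busy", "tier-0"}
HOT_ROW_THRESHOLD = 10_000_000          # parsed-count hot threshold
SUFFIX = {"k": 1_000, "m": 1_000_000, "b": 1_000_000_000}

def parse_inventory_entry(entry: str):
    """Parse one 'table[:count] [flags]' entry into (name, rows, is_hot).

    Accepts shorthand counts such as 800k, 28m, 1.2b. A sketch of the
    behaviour described above, not the tool's actual implementation.
    """
    tokens = entry.strip().split()
    name_part, _, count_part = tokens[0].partition(":")
    rows = 0
    if count_part:
        m = re.fullmatch(r"(\d+(?:\.\d+)?)([kmb]?)", count_part.lower())
        if m:
            rows = int(float(m.group(1)) * SUFFIX.get(m.group(2), 1))
    flagged = any(t.lower() in HOT_WORDS for t in tokens[1:])
    return name_part, rows, flagged or rows >= HOT_ROW_THRESHOLD
```

For example, `parse_inventory_entry("public.events:120m")` yields a 120,000,000-row hot entry, while `invoices hot` is hot by keyword alone.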

Rule Core

The static scan separates SQL into semicolon-delimited statements after removing comments and respecting quoted strings, quoted identifiers, brackets, and PostgreSQL dollar-quoted blocks. It then extracts table names from common DDL and DML patterns such as ALTER TABLE, CREATE INDEX ... ON, DROP TABLE, TRUNCATE, UPDATE, DELETE FROM, INSERT INTO, MERGE INTO, and LOCK TABLE.
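The quote-aware splitting can be approximated as a small state machine. This is a minimal sketch under the rules named above (comments, single quotes, quoted identifiers, bracket identifiers, dollar-quoted blocks), not the tool's actual parser:

```python
import re

def split_statements(sql: str) -> list[str]:
    """Split SQL on semicolons while respecting comments, quoted strings,
    quoted identifiers, bracket identifiers, and PostgreSQL dollar-quoted
    blocks. A simplified sketch of the splitting described above."""
    statements, buf = [], []
    i, n = 0, len(sql)
    while i < n:
        two, ch = sql[i:i + 2], sql[i]
        if two == "--":                          # line comment: skip to newline
            j = sql.find("\n", i)
            i = n if j == -1 else j + 1
        elif two == "/*":                        # block comment
            j = sql.find("*/", i + 2)
            i = n if j == -1 else j + 2
        elif ch in ("'", '"'):                   # string or quoted identifier
            j = i + 1
            while j < n:
                if sql[j] == ch and sql[j:j + 2] != ch * 2:
                    break                        # closing quote found
                j += 2 if sql[j] == ch else 1    # skip doubled-quote escapes
            buf.append(sql[i:j + 1]); i = j + 1
        elif ch == "[":                          # SQL Server bracket identifier
            j = sql.find("]", i)
            j = n - 1 if j == -1 else j
            buf.append(sql[i:j + 1]); i = j + 1
        elif ch == "$":                          # dollar-quoted block: $tag$...$tag$
            m = re.match(r"\$[A-Za-z_]*\$", sql[i:])
            if m:
                tag = m.group(0)
                j = sql.find(tag, i + len(tag))
                end = n if j == -1 else j + len(tag)
                buf.append(sql[i:end]); i = end
            else:
                buf.append(ch); i += 1
        elif ch == ";":                          # statement boundary
            stmt = "".join(buf).strip()
            if stmt:
                statements.append(stmt)
            buf = []; i += 1
        else:
            buf.append(ch); i += 1
    tail = "".join(buf).strip()
    if tail:
        statements.append(tail)
    return statements
```

A semicolon inside a quoted string or a dollar-quoted body does not end a statement, which is why naive `sql.split(";")` undercounts or overcounts real migrations.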

Database migration static risk signal rules
Signal family | Representative match | Severity pattern | Primary review consequence
Destructive change | DROP TABLE, TRUNCATE, DROP COLUMN, CASCADE, unbounded DELETE | Critical for table removal or truncate, usually High for drop-column or cascade paths | Require restore proof, dependency review, app compatibility, and explicit approval before release.
Lock-sensitive DDL | PostgreSQL index build without CONCURRENTLY, SQL Server index build without a reduced-blocking build option, MySQL DDL without safe LOCK or ALGORITHM clauses | High under hot or critical traffic, often Medium otherwise | Pin the concurrency option where the engine supports it, or schedule a write pause with a rollback point.
Rewrite or storage rebuild | VACUUM FULL, CLUSTER, MySQL ALGORITHM=COPY, character set conversion, engine change, broad column type change | Critical for full rewrite operations, usually High or Medium for type and storage changes | Model runtime, lock wait, disk growth, replica delay, and rollback cost on production-size data.
Validation-heavy change | Required column without a default, SET NOT NULL, unique index, foreign key, check, unique, or primary-key constraint validation | High when a known large or hot table is touched, lower when validation is deferred with NOT VALID | Precheck bad rows, split validation from deployment, and make the validation step observable.
Backfill or bulk DML | UPDATE without WHERE, large-table UPDATE or DELETE, and large-table INSERT INTO ... SELECT | High for unbounded updates and deletes, with traffic and table inventory increasing concern | Move data movement into checkpointed batches with throttling, progress markers, and lag monitoring.
Release shape | More than 12 parsed statements in one migration | Medium | Split unrelated or high-risk statements into separately observable deploy steps.
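The checkpointed-batch mitigation for backfill findings can be sketched concretely. This example uses sqlite3 so it runs anywhere, and a hypothetical events.processed column; the pattern (keyed ranges, per-batch commit, progress marker, throttle) transfers to any engine:

```python
import sqlite3
import time

def batched_backfill(conn, batch_size=5000, pause_s=0.0):
    """Backfill in keyed batches instead of one unbounded UPDATE.

    Hypothetical table and column (events.processed); a sketch of the
    checkpointed-batch pattern, not a production runner.
    """
    last_id = 0
    while True:
        # Find the upper bound of the next batch by primary key.
        upper = conn.execute(
            "SELECT max(id) FROM (SELECT id FROM events "
            "WHERE id > ? ORDER BY id LIMIT ?)",
            (last_id, batch_size),
        ).fetchone()[0]
        if upper is None:
            break                      # no rows left: backfill complete
        conn.execute(
            "UPDATE events SET processed = 1 WHERE id > ? AND id <= ?",
            (last_id, upper),
        )
        conn.commit()                  # checkpoint: each batch is durable
        last_id = upper                # progress marker for resume/monitoring
        time.sleep(pause_s)            # throttle to limit lock and lag pressure
```

Because `last_id` advances monotonically and each batch commits, the job can be stopped and resumed, and replica lag can be watched between batches.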

Score Core

Each scored finding contributes a severity weight. Info findings explain static-scan limits but do not add score. The raw score is multiplied by the selected traffic profile, rounded, and capped at 100.

S = min(100, round(M × Σ_{i=1}^{n} w_i))

S is the displayed risk score, M is the production-traffic multiplier, and w is each finding weight. The weights are 34 for Critical, 24 for High, 12 for Medium, 5 for Low, and 0 for Info. Traffic multipliers are 0.85 for low traffic or a maintenance window, 1.00 for steady production traffic, 1.18 for a hot table write path, and 1.35 for a critical write path.
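With the weights and multipliers above, the score computation is a one-liner. A minimal sketch of the formula, using shorthand traffic-profile keys of my own choosing:

```python
# Severity weights and traffic multipliers as stated in the text.
WEIGHTS = {"critical": 34, "high": 24, "medium": 12, "low": 5, "info": 0}
MULTIPLIERS = {
    "low_traffic": 0.85,      # low traffic or maintenance window
    "steady": 1.00,           # steady production traffic
    "hot_table": 1.18,        # hot table write path
    "critical_path": 1.35,    # critical write path
}

def risk_score(severities, traffic="steady"):
    """S = min(100, round(M * sum(w_i))) over all scored findings."""
    raw = sum(WEIGHTS[s] for s in severities)
    return min(100, round(MULTIPLIERS[traffic] * raw))
```

For example, one Critical, one High, and one Medium finding on a hot table write path give round(1.18 × 70) = 83, already past the Release hold gate.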

Database migration risk score status gates
Release stance | Score condition | How to read it
Release hold | >= 80 or any Critical finding | Do not run as a blind deploy step. Require a rehearsed runbook, approval owner, and rollback boundary.
High-risk review | >= 55 and no Critical finding | The migration needs focused review before release, usually because several high-value signals are present.
Needs rollout plan | >= 30 and < 55 | Attach timing, lock, verification, and rollback notes before production execution.
Low static signal | < 30 | No major static pattern was found, but staging evidence and engine lock checks still matter.
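The gate logic in the table reduces to a short ordered check. A sketch of the stance mapping as described, with the Critical-finding override applied first:

```python
def release_stance(score: int, has_critical: bool) -> str:
    """Map a risk score and Critical-finding presence to a release stance.

    Mirrors the status gates described above: any Critical finding forces
    a hold regardless of the numeric score.
    """
    if has_critical or score >= 80:
        return "Release hold"
    if score >= 55:
        return "High-risk review"
    if score >= 30:
        return "Needs rollout plan"
    return "Low static signal"
```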

The method is deterministic and local to the SQL text and selected context. It does not connect to a database, execute SQL, inspect catalogs, measure table statistics, prove whether an index exists, or confirm that an application version can tolerate a schema change. Treat the result as a review aid, not a database safety guarantee.

Everyday Use & Decision Guide:

Start with the production engine and the table context. Set Database engine to PostgreSQL, MySQL / MariaDB, SQL Server, or Generic SQL before judging a migration, because index and lock guidance changes by engine. Then set Production traffic to the closest real path. Choose Critical write path for checkout, auth, billing, ingest, and queue tables where blocked writes quickly become incidents.

Use Large or hot tables when you know table names, row counts, or hot write paths. Entries such as orders:28000000, invoices hot, and public.events:120m make the scan more useful because large-table matches raise the severity of backfill, lock, validation, and rewrite signals. Rough counts are better than leaving the inventory blank when the table is clearly large.

  • Use Migration label for the ticket, migration class, or deploy step name that should appear in handoff notes.
  • Paste SQL into Migration SQL, drop a SQL, TXT, or LOG file on the textarea, or use Browse when the migration is already saved as a file.
  • Use Normalize after pasting from a pull request or terminal output. It trims trailing whitespace without changing the SQL meaning.
  • Open Migration Review Brief first. Check Risk score, Release stance, Risk findings, Large table coverage, Locking signals, and Rewrite/backfill signals.
  • Use SQL Finding Ledger for the statement-level handoff. It names Severity, Signal, Evidence, Statement, and Next action.
  • Use Mitigation Checklist to turn findings into release work such as rehearsal, deploy order, lock control, data movement, validation, and rollback notes.
  • Use Statement Risk Ladder when several statements are bundled together and the riskiest statement needs to be pulled out first.

A low score does not approve the migration. It only means the visible SQL did not match the built-in high-risk patterns under the selected context. Before trusting it, confirm that Database engine and Production traffic match production, that the large-table list includes the tables actually touched, and that the summary does not say No inventory for a migration known to touch big tables.

Use the result to decide the next release action. Release hold should lead to a runbook and approval discussion. High-risk review should lead to staging evidence and statement splitting. Needs rollout plan should lead to lock-timeout, verification, and rollback notes. Low static signal should still be paired with a staging rehearsal before production.

Step-by-Step Guide:

Review one migration bundle at a time so the score, findings, and mitigation rows describe a single release decision.

  1. Fill Migration label with the ticket, migration class, or deploy step name. The same label should appear in the JSON output and exported review notes.
  2. Choose Database engine. PostgreSQL, MySQL / MariaDB, and SQL Server apply engine-specific checks; Generic SQL keeps portable destructive, locking, and broad DML checks.
  3. Set Production traffic to the expected production path. Moving from Steady production traffic to Hot table write path or Critical write path raises the score for the same findings.
  4. Enter known table inventory in Large or hot tables. Use one table per line or comma-separated entries; row-count shorthand such as 120m is accepted.
  5. Paste the migration into Migration SQL, drag a SQL, TXT, or LOG file onto the textarea, choose Browse, or press Load sample. The helper text should show a character count or a loaded-file message.
  6. Press Normalize if the pasted SQL has noisy spacing. If the red error panel says "Paste SQL migration text before reviewing risk.", add SQL before using the result tabs.
  7. Read the summary box and Migration Review Brief. Confirm Risk score, Release stance, SQL statements, Risk findings, and Large table coverage.
  8. Open SQL Finding Ledger and start with the highest Severity. Use Evidence and Next action to decide whether the next check is lock mode, rewrite cost, validation path, destructive dependency review, or batched data movement.
  9. Open Mitigation Checklist and Statement Risk Ladder before handoff. If a statement carries most of the score, split it into its own release step and rehearse that step separately.

Interpreting Results:

Read Release stance before reading individual rows. A Release hold means at least one critical static signal was found or the weighted score reached 80. A lower stance can still require action when the highest row in SQL Finding Ledger points to a known large table, a non-concurrent index build, or validation of existing rows.

How to interpret database migration risk result cues
Visible cue | Best first reading | What to verify next
Release hold | The score reached 80 or more, or a critical destructive, lock, or rewrite signal was found. | Require a runbook, backup or restore proof where relevant, and a clear stop point before production execution.
High-risk review | The score reached 55 without a critical finding. | Check the highest rows in SQL Finding Ledger and split high-risk statements out of the deploy if possible.
Known large table touched | A statement matched a supplied large or hot table name, or a table with at least 10,000,000 rows. | Verify row count, write traffic, lock timeout, replica lag, and whether the operation can run in batches.
No major static signal | The SQL did not match destructive, rewrite, unbounded DML, or obvious index-lock patterns. | Still rehearse on production-size data; the scan cannot inspect triggers, catalog state, existing indexes, or app compatibility.
No inventory | No large-table context was supplied, so table-size weighting is missing. | Add known hot tables before using the score for release review.

Do not treat a clean static scan as production approval. The corrective check is to compare SQL Finding Ledger and Mitigation Checklist with actual staging evidence: observed lock mode, duration, write impact, validation time, disk growth, replica lag, and rollback behavior.

Worked Examples:

Invoice state rollout on hot tables

A PostgreSQL migration labeled invoice state rollout adds state text NOT NULL to invoices, creates idx_invoices_state without CONCURRENTLY, updates every invoice row, and adds a foreign key on orders. With Production traffic set to Hot table write path and inventory entries orders:28000000, invoices hot, and public.events:120m, Risk score reaches 100/100 and Release stance becomes Release hold. The highest rows in SQL Finding Ledger point to required-column validation, a non-concurrent index build, unbounded UPDATE, foreign-key validation, and known large table matches. The right next move is to split the schema change, backfill, index, and validation steps into separate rehearsed releases.

Concurrent index wrapped by a transaction

A PostgreSQL migration uses BEGIN;, CREATE INDEX CONCURRENTLY idx_events_account_id ON events(account_id);, and COMMIT; while events:120m is listed in Large or hot tables. The index keyword lowers write-blocking concern, but SQL Finding Ledger still reports Concurrent index inside transaction wrapper because PostgreSQL does not allow that index form inside an ordinary transaction block. Statement Risk Ladder should rank the index statement above the wrapper statements. The corrective path is to split the index into its own non-transactional migration step and rehearse lock wait behavior separately.
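The transaction-wrapper check behind that finding can be sketched simply. This is a deliberately coarse approximation (text-level BEGIN/COMMIT plus the concurrent-index keyword), not the tool's actual rule:

```python
import re

def concurrent_index_in_txn(sql: str) -> bool:
    """Flag CREATE INDEX CONCURRENTLY appearing alongside a BEGIN/COMMIT
    wrapper, which PostgreSQL rejects at runtime. A simplified sketch."""
    upper = sql.upper()
    has_wrapper = bool(re.search(r"\bBEGIN\b", upper)) and \
        bool(re.search(r"\bCOMMIT\b", upper))
    has_concurrent = bool(
        re.search(r"CREATE\s+(UNIQUE\s+)?INDEX\s+CONCURRENTLY", upper))
    return has_wrapper and has_concurrent
```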

Small metadata change with incomplete context

A one-statement migration such as CREATE TABLE feature_flags (name text primary key, enabled boolean not null); under Steady production traffic may show Low static signal with an informational row saying no major static signal was found. If Large table coverage also says No inventory, the result is not a blanket approval. It means the visible SQL did not match the built-in risk patterns. The next check is ordinary release hygiene: confirm the statement count, run it in staging, and make sure app versions can tolerate the new object.

Empty paste or copied wrapper text

If Migration SQL is empty, the error panel reports "Paste SQL migration text before reviewing risk." and the summary box shows Waiting for SQL. Add the migration body, use Normalize if copied text has extra blank lines, and then confirm SQL statements in Migration Review Brief. If the statement count is lower than expected, check whether the copied source omitted semicolons or wrapped SQL in non-SQL text that should be removed before review.

FAQ:

Does a low score mean the migration is safe?

No. Low static signal means the visible SQL did not match the built-in high-risk patterns under the selected engine, traffic, and table inventory. It does not prove safe locks, short runtime, replica health, app compatibility, or rollback success.

Why does production traffic change the score?

The same finding is more serious on a hot write path. Low traffic or maintenance window uses a 0.85 multiplier, Steady production traffic uses 1.00, Hot table write path uses 1.18, and Critical write path uses 1.35.

What table formats work in Large or hot tables?

Use one table per line or comma-separated entries. Names may include schema prefixes, and counts can use shorthand such as 800k, 28m, or 1.2b. Words such as hot, critical, write, large, busy, and tier-0 mark a table as hot.

Why did a PostgreSQL concurrent index still get flagged?

The scan flags CREATE INDEX CONCURRENTLY when the migration text also contains a transaction wrapper such as BEGIN and COMMIT. PostgreSQL requires that concurrent index build form to run outside an ordinary transaction block.

What should I do when the error panel appears?

Start with the exact message. If it says "Paste SQL migration text before reviewing risk.", add SQL to Migration SQL. If statement coverage looks wrong, normalize copied text and confirm semicolons so the parser can separate statements.

Is the SQL executed or uploaded?

No. The migration body is parsed locally in the browser for static signals. The review does not connect to a database, execute statements, or upload the SQL for the risk scan.

Glossary:

DDL
Data definition language, the SQL used to change schema objects such as tables, columns, indexes, and constraints.
DML
Data manipulation language, the SQL used to change rows, such as UPDATE, DELETE, INSERT, and MERGE.
Table rewrite
A storage rebuild where the database creates or reorganizes table data rather than only changing metadata.
Large or hot table
A table with a high row count, heavy write traffic, or a critical production role that raises the impact of locks and backfills.
Concurrent index build
An index build form intended to reduce write blocking, with engine-specific restrictions and tradeoffs.
Expand-contract release
A safer rollout pattern that adds compatible schema first, moves application reads and writes, and removes old objects later.
