JSON Schema Breaking Changes Checker
Check JSON Schema releases for required fields, removed properties, type and enum shifts, tighter constraints, closure risks, and severity counts before release review.
Introduction
JSON Schema changes can break existing producers, consumers, validators, and generated clients even when the new schema looks cleaner. Adding a required field, removing a property, replacing a type, narrowing an enum, closing object extensions, or tightening a numeric bound can reject data that was valid under the released contract.
Compatibility review compares the schema that clients already rely on with the schema proposed for the next release. The useful question is not only whether both documents are valid JSON, but whether old payloads still have a reasonable path through the proposed validation rules.
A breaking-change checker is a review aid, not a replacement for testing real examples. JSON Schema keywords can interact through references, composition, validator configuration, and business rules outside the schema. The safest release review combines schema comparison with representative old payloads and client behavior tests.
The practical value is a readable change ledger. It should show the changed JSON Pointer, the released value, the proposed value, the severity, and a release action that tells reviewers whether to relax the schema, stage a migration, or publish the change as a breaking contract version.
Technical Details:
JSON Schema uses assertion keywords to decide whether an instance is valid and annotation keywords to attach descriptive information. Compatibility risk is highest when a proposed assertion accepts a smaller set of instances than the released assertion accepted. Removing an allowed type, adding a required property, narrowing a value set, or introducing a stricter bound all move in that direction.
Object and array shape changes need careful reading because they can affect old payloads without changing a field name directly. additionalProperties: false can reject unknown object fields, contains can require a matching array item, and tuple keywords can change which array positions are checked. Local $ref values also matter because the visible property may point to a reusable fragment elsewhere in the schema.
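As a minimal sketch of the closure point above, the snippet below checks only `properties` and `additionalProperties` by hand (it is not the tool's implementation, and a real validator covers far more keywords). It shows how closing an object rejects an old payload without any field name changing.

```python
def accepts(schema: dict, obj: dict) -> bool:
    """Minimal check of 'properties' + 'additionalProperties' only."""
    props = schema.get("properties", {})
    if schema.get("additionalProperties", True) is False:
        # Closed object: every instance key must be a declared property.
        return all(key in props for key in obj)
    return True

released = {"type": "object", "properties": {"id": {"type": "string"}}}
proposed = {**released, "additionalProperties": False}

payload = {"id": "a1", "trace": "debug"}  # harmless unknown field "trace"
print(accepts(released, payload))  # True: the open object ignores it
print(accepts(proposed, payload))  # False: the closed object rejects it
```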
Rule Core
The comparison is an ordered schema walk. At each matching node, the old and new fragments are checked for type, enum, object, array, constraint, closure, composition, and optional annotation changes. Findings are deduplicated, assigned severity, and sorted with the strongest signals first.
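The walk can be sketched as a recursive diff over matching nodes. The severity labels, finding tuples, and the small rule subset below are illustrative assumptions, not the tool's actual output format; only type narrowing, added required names, and removed properties are modeled.

```python
def type_set(schema: dict) -> set:
    """Normalize 'type' (string or list) to a set of type names."""
    t = schema.get("type")
    return {t} if isinstance(t, str) else set(t or [])

def walk(old: dict, new: dict, pointer: str = "") -> list:
    """Ordered walk over matching schema nodes; returns finding tuples."""
    findings = []
    # A type accepted before but not after narrows validation: breaking.
    if "type" in old and "type" in new:
        for t in sorted(type_set(old) - type_set(new)):
            findings.append(("critical", "type removed", pointer or "/", t))
    # Names required only in the proposed schema reject old payloads.
    for name in sorted(set(new.get("required", [])) - set(old.get("required", []))):
        findings.append(("critical", "required added", pointer + "/required", name))
    # Recurse into properties; a property missing on the new side is flagged.
    for name, old_sub in old.get("properties", {}).items():
        child = f"{pointer}/properties/{name}"
        new_sub = new.get("properties", {}).get(name)
        if new_sub is None:
            findings.append(("high", "property removed", child, name))
        else:
            findings.extend(walk(old_sub, new_sub, child))
    return findings

released = {"type": "object", "required": ["id"],
            "properties": {"id": {"type": "string"},
                           "note": {"type": "string"}}}
proposed = {"type": "object", "required": ["id", "currency"],
            "properties": {"id": {"type": ["string", "integer"]}}}
findings = walk(released, proposed)
```

Note that widening `id` from `string` to `["string", "integer"]` produces no row, while the added required name and the removed property each do.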
| Rule Area | Signal Detected | Typical Release Meaning |
|---|---|---|
| Types | Allowed types are removed, replaced, introduced, or widened. | Removing an old accepted type is treated as breaking; adding a type is informational when no old type is removed. |
| Required fields and properties | Required names are added or relaxed, properties are removed, and new properties are added. | New required properties and removed required properties receive the strongest compatibility warnings. |
| Enums and const values | Allowed values are removed, added, introduced, or removed as a restriction. | Removed enum values and new fixed value sets can reject old payloads or surprise strict client switches. |
| Constraints | Minimums rise, maximums fall, patterns appear or change, formats change, uniqueness appears, and multipleOf changes. | Tighter bounds need old sample validation before release because existing values may no longer pass. |
| Closure and dependencies | Object or array extension points close, schema-valued extension rules appear, or dependent requirements are introduced. | Unknown fields or extra array items that were accepted before may be rejected after the change. |
| Composition | allOf, anyOf, oneOf, not, conditionals, or dependent schemas change. | Composition changes are marked for manual review because local comparison cannot prove every instance path. |
Coverage Profiles
Review coverage changes which rule groups are active. The broadest profile checks structural, value, constraint, closure, and composition signals. Narrower profiles are useful for quick triage, but they intentionally skip some rule groups and should not be used as the last compatibility claim for a contract release.
| Profile | Checked Rule Groups | Best Fit |
|---|---|---|
| Contract-strict release review | Types, enums, constraints, closure, composition, required fields, properties, arrays, and references. | Release gates and migration planning. |
| API surface review | Types, enums, composition, required fields, properties, arrays, and references. Constraint and closure groups are skipped. | Fast API contract triage before deeper testing. |
| Structure-only scan | Types, required fields, properties, arrays, and references. Enum, constraint, closure, and composition groups are skipped. | Early shape checks while a schema is still being drafted. |
Reference and Scope Limits
Local JSON Pointer references such as #/$defs/User can be expanded before common property schemas are compared. Remote references are not fetched, unresolved local references are reported as warnings, and circular local references are not expanded indefinitely. A nested depth stop prevents very deep schemas from turning one comparison into an unbounded walk.
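The reference behavior described above can be sketched with a small resolver (an assumption-laden illustration, not the tool's resolver): local `#/...` pointers are expanded, remote references are left opaque, and a depth stop keeps circular references from being followed indefinitely.

```python
def resolve_local_ref(fragment: dict, root: dict,
                      depth: int = 0, max_depth: int = 8) -> dict:
    """Expand a local '#/...' $ref before comparison; a sketch only."""
    ref = fragment.get("$ref")
    if ref is None or not ref.startswith("#/") or depth >= max_depth:
        # No ref, a remote ref kept opaque, or the nested depth stop hit.
        return fragment
    node = root
    for token in ref[2:].split("/"):
        # JSON Pointer unescaping order matters: "~1" -> "/", then "~0" -> "~".
        node = node[token.replace("~1", "/").replace("~0", "~")]
    # The target may itself be a reference, so resolve again with depth + 1.
    return resolve_local_ref(node, root, depth + 1, max_depth)

root = {"$defs": {"User": {"type": "object"}},
        "properties": {"owner": {"$ref": "#/$defs/User"}}}
resolved = resolve_local_ref(root["properties"]["owner"], root)
print(resolved)  # {'type': 'object'}
```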
The checker reads schema text in the browser and produces deterministic ledger rows from the pasted or loaded JSON. It does not validate actual instance documents, fetch remote schemas, interpret every validator-specific option, or guarantee that generated code will remain source-compatible.
Everyday Use & Decision Guide:
Start with Contract-strict release review and Resolve local $ref pointers when the schema is close to release. Paste the published schema into Released schema and the candidate schema into Proposed schema. Use Schema label for the API body, event contract, or configuration object being reviewed so exports are easy to identify later.
Use the lighter profiles only when you are deliberately narrowing the scan. API surface review is useful when type and enum shape matter most, while Structure-only scan can catch removed properties and newly required fields during early design. Re-run the strict profile before treating the result as a release gate.
- Read Compatibility Ledger first for severity, change type, JSON Pointer, released value, proposed value, and release action.
- Open Rule Summary to confirm which profile ran, how many properties and required fields were parsed, and how references were handled.
- Use Schema Risk Mix to see whether the comparison is dominated by critical, high, medium, low, or informational rows.
- Enable Include annotation-only notes when documentation, defaults, examples, deprecation flags, or generated client comments are part of the review.
- Raise Finding render limit when a large schema has more rows than the visible ledger shows; the JSON report keeps the full finding list.
Stop on parse errors before interpreting compatibility. A failed released schema or proposed schema means the comparison did not reach the meaningful rule walk. Reference warnings also deserve attention because opaque, unresolved, or remote references can hide changes inside reused fragments.
A result with no breaking signals is not a full compatibility proof. Validate representative old payloads against the proposed schema, especially for business-critical contracts, remote references, composition-heavy schemas, and clients generated from schema definitions.
Step-by-Step Guide:
- Enter a clear Schema label, such as an event name, endpoint body, or stored configuration object.
- Choose Review coverage. Use the strict profile for a release decision unless you are doing an early, narrower scan.
- Leave Local $ref handling on Resolve local $ref pointers when the schema uses $defs, definitions, or internal fragments.
- Paste or load the released schema and proposed schema. Use Format if either document needs quick JSON cleanup.
- Fix any Input review warnings for parse failures, unresolved references, remote references, circular references, or visible-row limits.
- Review Compatibility Ledger from strongest severity downward. Check each JSON Pointer against the proposed release notes or migration plan.
- Use Rule Summary, Schema Risk Mix, and JSON to share the evidence with release reviewers or issue trackers.
- Validate old example payloads with the proposed schema before shipping a business-critical contract.
Interpreting Results:
Critical, High, and Medium rows count as breaking signals in the summary. They point to changes that can reject old data or require manual release review. Low rows usually mark coverage limits, while Info rows describe relaxations or optional context that still may matter to documentation or generated clients.
| Result Cue | What It Means | What to Verify |
|---|---|---|
| Schema parse failed | At least one JSON input could not be parsed, so compatibility was not compared. | Fix the JSON syntax before trusting any ledger row. |
| Breaking signals | The comparison found critical, high, or medium rows under the selected profile. | Decide whether to relax the proposed schema, stage a migration, or version the contract. |
| No breaking signals | No active rule group found a critical, high, or medium row. | Check skipped profile groups, reference warnings, and real old payload validation. |
| Refs kept opaque | References were compared as reference strings rather than expanded shapes. | Resolve local references or vendor remote schemas when fragment changes matter. |
| Annotation notes | Title, descriptive, default, example, deprecation, read-only, or write-only changes are listed as informational rows when enabled. | Review generated documentation and client code if those annotations feed downstream output. |
Treat the release action column as a prompt for follow-up, not as automatic approval or rejection. Some findings can be intentional breaking changes, but they should be named clearly in versioning, migration notes, or compatibility exceptions.
Worked Examples:
New required property:
If the released schema required id and the proposed schema requires both id and currency, old payloads without currency can fail. The ledger marks this as a critical required-field signal and recommends keeping the field optional until producers are migrated.
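The reasoning behind this ledger row reduces to a set difference over required names (a sketch of the check, using illustrative payload values, not the tool's code):

```python
released_required = {"id"}
proposed_required = {"id", "currency"}
old_payload = {"id": "ord-1"}  # valid under the released contract

# Names required only by the proposed schema, then checked against the payload.
newly_required = proposed_required - released_required
missing = newly_required - old_payload.keys()
print(sorted(missing))  # ['currency']: the old payload no longer validates
```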
Enum value removed:
A status enum that changes from draft, paid, and cancelled to only paid and cancelled removes an old accepted value. The ledger treats the removed value as critical because older clients may still send or expect it.
Constraint tightened:
Changing a quantity field from minimum: 1 to minimum: 2 tightens the numeric lower bound. Existing instances with quantity 1 need migration, grandfathering, or a contract version change before the proposed schema can be treated as backward-compatible.
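The same bound change can be tested directly against old sample values before release, along the lines of this sketch (the sample values are illustrative assumptions):

```python
released = {"type": "integer", "minimum": 1}
proposed = {"type": "integer", "minimum": 2}

# A rising minimum narrows the accepted range: a tightening signal.
tightened = proposed["minimum"] > released["minimum"]

# Old sample values that fall below the proposed bound need migration.
old_samples = [1, 2, 5]
failing = [v for v in old_samples if v < proposed["minimum"]]
print(tightened, failing)  # True [1]: quantity 1 needs migration
```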
Object closure introduced:
Moving from open object properties to additionalProperties: false can reject older payloads that carried harmless unknown fields. The closure row is a signal to deprecate unknown fields first or confirm that real clients never send them.
FAQ:
Does a clean result prove the schema is backward-compatible?
No. It means the selected rule groups did not find breaking signals in the schema comparison. Real payload validation and client tests are still needed for important releases.
Why can an enum addition appear as informational?
Adding an accepted value usually widens validation, but strict client code may still need updates. The row is informational so reviewers can document the new value without treating it like a direct validation break.
Should local references be resolved?
Yes when local fragments define shared property shapes. Resolving local $ref pointers lets the comparison inspect the referenced fragment rather than only comparing the pointer text.
Why are composition changes marked for manual review?
Composition keywords can create complex validation paths. A local comparison can detect that the composition changed, but representative instances should be validated to understand the exact release impact.
Does pasted schema data leave the browser?
The schema text is parsed and compared in the browser. The description shows no server submission path for the released or proposed schema inputs.
Glossary:
- Released schema
- The JSON Schema version already published to clients, producers, validators, or stored data.
- Proposed schema
- The candidate JSON Schema version being reviewed as a replacement.
- JSON Pointer
- A path syntax that identifies a location inside a JSON document, such as
#/properties/status. - Breaking signal
- A critical, high, or medium finding that may reject data accepted by the released schema or require release review.
- Annotation keyword
- A JSON Schema keyword that carries information such as title, examples, defaults, or deprecation state rather than directly asserting validity.
- Closure
- A rule that limits extra object properties or array items that were previously accepted.
References:
- JSON Schema Draft 2020-12, JSON Schema.
- JSON Schema Validation: A Vocabulary for Structural Validation of JSON, JSON Schema, 2022-06-16.
- Understanding JSON Schema Reference, JSON Schema.