JSON is convenient for machines and often inconvenient for people. A payload can be perfectly valid while still being hard to inspect, difficult to flatten for a spreadsheet, or awkward to reuse in another text format. This converter is built for that translation step, when you need the structure to stay truthful but the presentation to become more workable.
That is a common operational problem. API responses arrive wrapped in metadata, one useful record list may be buried several levels down, and nested arrays or objects can explode into unreadable exports if they are flattened too aggressively. The page parses the payload, lets you aim at the branch that actually matters, and then reshapes the result into tabular or text-based outputs that are easier to inspect and hand off.
The package is practical because it handles several adjacent tasks in one place. It can auto-detect ordinary JSON versus newline-delimited JSON, resolve a working branch with dot notation or JSON Pointer, flatten nested structures into headers, keep whole subtrees as JSON strings when flattening would be too noisy, and then export the transformed records in several formats without forcing you to repeat the same parsing step elsewhere.
It is especially useful when the original payload is structurally correct but operationally inconvenient. A nested array hidden under a wrapper object can become a record grid. A set of line-delimited events can become CSV. A quick staging table can be generated as SQL inserts. The path atlas and warning badges help you see whether the transformation still reflects the data shape you intended.
The main limit is representational compression. Flattened headers are excellent for review and downstream tooling, but they are not the same thing as the original nested model. The converter is therefore best treated as a structure translator and inspection aid, not as a substitute for a formal schema system or a semantic validation pipeline.
The safest starting point is to leave source detection on automatic unless you already know the content must be handled strictly as one JSON text or strictly as one JSON value per line. That keeps routine paste-and-transform work fast, especially when the source came from logs, an API console, or a saved event stream and you do not want to diagnose the format before you diagnose the data.
The next important choice is the working path. If the root payload is already the record set you care about, leave the path blank. If the records live deeper in the document, point the converter there so the record grid and exports reflect the meaningful branch rather than every outer wrapper field. This is often the difference between a useful export and a noisy one.
Flattening choices should follow the downstream audience. Turning nested objects and arrays into headers is ideal for spreadsheets and quick visual comparisons. Leaving them unflattened is often better when the nested content is irregular, when column explosion would make the output hard to read, or when you need to preserve subdocuments as intact JSON strings inside a tabular result.
The remaining settings are mostly compatibility controls. Header case helps adapt columns to downstream naming conventions. Null-as-blank makes tables cleaner but can hide the distinction between an explicit null and an empty-looking cell. Preview row limits affect the interactive table only, not the full exports, which is useful when the payload is large and you still want complete output files.
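The null-as-blank tradeoff can be made concrete with a small sketch. This is a hypothetical rendering helper, not the tool's actual API, but it shows why the option makes tables cleaner at the cost of one distinction:

```javascript
// Hypothetical cell renderer illustrating the null-as-blank tradeoff.
// With the option on, an explicit null and an empty string render
// identically, so the distinction is no longer visible in the table.
function renderCell(value, nullAsBlank) {
  if (value === null) return nullAsBlank ? "" : "null";
  return String(value);
}
```

With `nullAsBlank` enabled, `renderCell(null, true)` and `renderCell("", true)` both produce an empty cell; with it disabled, the null stays visibly distinct.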
| Need | Best place to start | Why it helps |
|---|---|---|
| Confirm the shape of the chosen branch | Conversion Snapshot | Shows root type, target type, record count, field count, depth, byte size, and warning badges. |
| Understand where the nested keys live | Path Atlas | Lists discovered paths so you can confirm the branch and flattening choices before exporting. |
| Hand the data to a spreadsheet user | Record Grid plus CSV, DOCX, or XLSX export | Turns the selected branch into rows and headers that survive common business-tool workflows. |
| Reuse the same transform in another text format | Output deck | Generates JSONL, XML, HTML table, Markdown table, or SQL insert text from the same transformed records. |
The parser begins by normalizing line endings. In strict JSON mode it parses one JSON text. In strict JSONL mode it parses each non-empty line independently and reports the exact line number if one line fails. In automatic mode it tries ordinary JSON first and then falls back to JSONL, returning a combined error when neither interpretation succeeds. That makes the tool tolerant of mixed real-world inputs without hiding parse failures.
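The automatic mode described above can be sketched as a try-then-fall-back parser. This is an illustrative helper under the stated behavior, not the tool's source code:

```javascript
// Sketch of auto-detection: normalize line endings, try one JSON text,
// then fall back to one JSON value per line, reporting the failing line.
function parseAuto(text) {
  const normalized = text.replace(/\r\n?/g, "\n");
  try {
    return { mode: "json", records: [JSON.parse(normalized)] };
  } catch (jsonErr) {
    try {
      const records = [];
      normalized.split("\n").forEach((line, i) => {
        if (line.trim() === "") return; // skip blank lines
        try {
          records.push(JSON.parse(line));
        } catch (e) {
          throw new Error(`line ${i + 1}: ${e.message}`);
        }
      });
      if (records.length === 0) throw new Error("no JSON values found");
      return { mode: "jsonl", records };
    } catch (jsonlErr) {
      // Combined error when neither interpretation succeeds.
      throw new Error(
        `not JSON (${jsonErr.message}) and not JSONL (${jsonlErr.message})`
      );
    }
  }
}
```

Note that a single-line JSONL input is indistinguishable from ordinary JSON, which is why strict modes exist for cases where the format is known in advance.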
After parsing, the package optionally canonicalizes object-key order and then resolves the target path. Dot or bracket notation and JSON Pointer are both supported, with auto mode choosing the pointer route when the path starts with a slash. If the requested path does not resolve, the converter does not silently return nothing. It warns and falls back to the root payload so the user can still see what the parser actually received.
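The path-resolution rule above can be sketched as follows. The helper name and return shape are hypothetical; the point is the dispatch on a leading slash and the warn-and-fall-back behavior:

```javascript
// A leading "/" selects JSON Pointer (with ~1 and ~0 unescaping);
// anything else is treated as dot/bracket notation. An unresolved
// path warns and returns the root instead of returning nothing.
function resolvePath(root, path) {
  if (!path) return { value: root, warnings: [] };
  const keys = path.startsWith("/")
    ? path
        .slice(1)
        .split("/")
        .map((k) => k.replace(/~1/g, "/").replace(/~0/g, "~"))
    : path.replace(/\[(\d+)\]/g, ".$1").split(".").filter(Boolean);
  let value = root;
  for (const key of keys) {
    if (value !== null && typeof value === "object" && key in value) {
      value = value[key];
    } else {
      return { value: root, warnings: [`path not found: ${path}`] };
    }
  }
  return { value, warnings: [] };
}
```

So `data.items[1]` and `/data/items/1` reach the same element, and a typo in either form still leaves you looking at the root payload with a visible warning.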
The target value is then normalized into an array of records. Arrays stay arrays. Single values become one-row arrays. Each record is flattened recursively into key-value pairs when flattening is enabled. Arrays become bracketed tokens and objects become dotted tokens. When flattening is disabled for a branch type, the subtree is preserved as serialized JSON text rather than exploded into more columns.
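The flattening convention above (dotted tokens for objects, bracketed tokens for arrays) can be shown in a minimal recursive sketch. This is illustrative only; the real transform also supports the per-branch-type opt-outs described above:

```javascript
// Recursively flatten one record into key/value pairs:
// objects contribute dotted segments, arrays contribute [i] segments,
// and scalar leaves become the final cell values.
function flattenRecord(value, prefix = "", out = {}) {
  if (Array.isArray(value)) {
    value.forEach((item, i) => flattenRecord(item, `${prefix}[${i}]`, out));
  } else if (value !== null && typeof value === "object") {
    for (const [key, child] of Object.entries(value)) {
      flattenRecord(child, prefix ? `${prefix}.${key}` : key, out);
    }
  } else {
    out[prefix] = value; // scalar leaf becomes a header/value pair
  }
  return out;
}
```

For example, `{ user: { name: "a", tags: ["x", "y"] } }` flattens to the headers `user.name`, `user.tags[0]`, and `user.tags[1]`.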
The package also computes a structural risk profile from the parsed content. It measures byte size, nesting depth, record count, distinct header count, unsafe integer count, and long integer string count. Those metrics power the badges and warnings because a payload can be syntactically valid yet still be operationally awkward if it is too deep, too wide, or numerically risky for ordinary JavaScript handling.
| Metric or warning | What the package measures | Why it matters |
|---|---|---|
| Root type and target type | The original parsed value type and the type reached after path resolution. | Confirms whether the selected branch is really the record set you intended to export. |
| Record count and field count | How many rows the selected branch becomes and how many distinct headers the flattening step generates. | Exposes unexpectedly wide or unexpectedly small results before export. |
| Unsafe integer count | The number of parsed integer values outside JavaScript's safe exact range. | Warns that numeric precision may not survive naive downstream numeric handling. |
| Long integer string warning | The number of string values that look like very large integers. | Encourages keeping identifier-like values as strings rather than converting them accidentally. |
| Path not found | A warning raised when the requested branch cannot be resolved. | Makes the fallback-to-root behavior explicit instead of hiding it. |
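Two of the metrics in the table, nesting depth and unsafe-integer count, can be sketched with a single walk over the parsed value. The function name and return shape are hypothetical:

```javascript
// Walk the parsed value once, tracking maximum nesting depth and counting
// integers that fall outside JavaScript's exactly-representable range.
function riskProfile(value) {
  let depth = 0;
  let unsafeIntegers = 0;
  (function walk(node, d) {
    depth = Math.max(depth, d);
    if (
      typeof node === "number" &&
      Number.isInteger(node) &&
      !Number.isSafeInteger(node)
    ) {
      unsafeIntegers += 1;
    } else if (node !== null && typeof node === "object") {
      for (const child of Object.values(node)) walk(child, d + 1);
    }
  })(value, 0);
  return { depth, unsafeIntegers };
}
```

A payload like `{ a: { b: [1, 9007199254740993] } }` reports depth 3 and one unsafe integer, because the large value has already been rounded by the number parser before the check runs.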
| Output family | What it preserves | Where it is strongest |
|---|---|---|
| Record Grid exports | Flattened headers and row values. | Spreadsheet handoffs through CSV, DOCX, and XLSX. |
| Path Atlas exports | Discovered paths and their structural profile. | Auditing or documenting how nested data was flattened. |
| JSONL, XML, HTML, Markdown | The transformed record set rendered in another text format. | Quick reuse in logs, markup, or human-readable documents. |
| SQL insert output | Flattened rows with dialect-aware identifier and literal handling. | Rapid staging into MySQL, PostgreSQL, or SQLite scratch tables. |
| Pretty JSON | The selected branch itself, before flattening. | Truth-checking the chosen working value against the exported rows. |
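The dialect-aware handling mentioned for SQL output can be sketched as follows. This is a simplified, hypothetical generator covering only identifier quoting and basic literals, not the tool's full SQL emitter:

```javascript
// Minimal dialect-aware INSERT generator: MySQL quotes identifiers with
// backticks, PostgreSQL and SQLite with double quotes; string literals
// escape embedded single quotes by doubling them.
function toInsert(table, row, dialect = "postgresql") {
  const q = dialect === "mysql" ? (id) => `\`${id}\`` : (id) => `"${id}"`;
  const lit = (v) =>
    v === null || v === undefined
      ? "NULL"
      : typeof v === "number"
        ? String(v)
        : `'${String(v).replace(/'/g, "''")}'`;
  const cols = Object.keys(row).map(q).join(", ");
  const vals = Object.values(row).map(lit).join(", ");
  return `INSERT INTO ${q(table)} (${cols}) VALUES (${vals});`;
}
```

A row like `{ id: 1, name: "O'Brien" }` then becomes `INSERT INTO "t" ("id", "name") VALUES (1, 'O''Brien');` under the PostgreSQL rules.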
If the record grid becomes much wider than expected, the cause is usually flattening rather than bad input. Nested arrays and objects multiply headers quickly. In that case, the best correction is often to target a narrower branch or preserve one branch type as JSON strings instead of flattening everything.
Unsafe integer warnings deserve careful attention because they are about numeric exactness, not syntax. The payload may still be valid, but exact identifiers or counters can be damaged if downstream tools treat them as ordinary numbers without preserving precision.
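The precision hazard is easy to demonstrate directly, since it comes from JSON numbers being parsed into double-precision floats:

```javascript
// An integer one above 2^53 cannot be represented exactly as a double,
// so it is silently rounded at parse time. The same digits kept as a
// string survive intact, which is why identifier-like values are safer
// as strings.
const asNumber = JSON.parse('{"id": 9007199254740993}').id;
const asString = JSON.parse('{"id": "9007199254740993"}').id;
```

After parsing, `String(asNumber)` no longer matches the original digits, while `asString` still does.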
A path warning should be read as a clue, not as a fatal failure. The converter falls back to the root payload so you can still inspect something meaningful, but the fallback means the resulting grid may be much broader or conceptually different from the intended branch. The snapshot badges and target type help catch that quickly.
The output deck should be interpreted as a rendering of the transformed data, not of the original source in untouched form. If you change path selection, flattening, or header normalization, the meaning of every CSV, XML, Markdown, SQL, DOCX, or XLSX export changes with it.
An API returns metadata at the root and the actual records several levels down. Setting the path to that branch turns the export into a clean row set instead of repeating wrapper fields in every record.
A payload contains nested label lists and uneven configuration arrays. Leaving array flattening off preserves those values as JSON strings inside the grid, which is often more readable than exploding them into a large set of sparse columns.
After flattening and normalizing the headers, the SQL export can generate insert statements for a temporary MySQL, PostgreSQL, or SQLite table. That is useful when the goal is inspection and comparison rather than building a permanent schema.
**Does the converter handle both ordinary JSON and JSONL?** Yes. It supports both, and automatic mode tries JSON first and JSONL second.

**What happens when the working path cannot be resolved?** The package raises a warning and falls back to the root payload instead of returning an empty result silently.

**Why are very large integers flagged?** Because they exceed JavaScript's safe exact integer range and may not survive downstream numeric handling without loss of precision.

**Does flattening change the data?** It preserves the data values, but it changes how the structure is represented for inspection and export.

**Is the spreadsheet library loaded up front?** No. The page loads the spreadsheet library when XLSX export is requested.