
Introduction:

JavaScript Object Notation (JSON) data is structured text that represents nested records, lists, and simple values, and it often needs to be checked or reshaped before you share or import it. The converter turns that structure into clear rows and columns so you can see what is inside without reading raw braces and brackets.

You paste or drop a file of structured text into the box, choose which branch of the data should be treated as individual records, and then view an instant table with column names and readable values. The summary badges highlight the main type at the top, the number of records and fields, the nesting depth, and an approximate size in bytes so you can spot unusual shapes early.

A typical use is taking a feed of analytics events where each item includes user details and nested project lists, then turning that collection into a spreadsheet ready table for review with a team. You can flatten nested objects and arrays into dotted paths so that each project, score, and flag becomes a cell that can be filtered or sorted by other systems.

Results describe the structural layout of your data and do not replace checks of business rules or permissions. For consistent comparisons, keep the same path and flattening options between runs, and avoid pasting secrets or production credentials into shared screens, especially when projecting work during meetings. If a preview looks unexpectedly shallow or wide, it often means the chosen branch or flattening switches need adjustment, not that the source itself is broken.

Technical Details:

JavaScript Object Notation (JSON) input is parsed using the host environment's JSON parser into standard arrays, objects, strings, numbers, booleans, and null values. The component then distinguishes between the overall root value and an optional target branch selected through the data path so you can focus tabular exports on a specific array or object.

From that parsed structure the converter computes a root type label, a target type label, the recordCount, the fieldCount, the maximum nesting depth, and the sizeBytes of the original text measured in bytes using UTF-8 encoding. These metrics are shown as badges so you can judge whether you are inspecting a list of records, a single object, or a mixture, and whether the depth or byte size matches expectations for the dataset.

When a target branch is chosen the working value is normalised into an array so that a single object becomes a one element list, which keeps the downstream flattening logic consistent. Each entry is flattened into a record by recursively walking nested objects and arrays, combining property names and array indexes into dotted paths such as projects[0].code, and optionally stringifying whole subtrees when flattening of objects or arrays is disabled.
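The recursive walk described above can be sketched as a small helper. The function name and option flags here are illustrative, mirroring the documented toggles rather than the component's actual API:

```javascript
// Sketch of the recursive flattening walk: property names and array
// indexes are joined into dotted paths such as projects[0].code.
// flattenObjects / flattenArrays mirror the documented toggles; the
// names are assumptions, not the component's actual API.
function flattenRecord(value, options = { flattenObjects: true, flattenArrays: true }, path = "", out = {}) {
  const isPlainObject = value !== null && typeof value === "object" && !Array.isArray(value);
  if (Array.isArray(value) && options.flattenArrays) {
    if (value.length === 0) out[path || "value"] = null; // empty arrays flatten to a null cell
    value.forEach((item, i) => flattenRecord(item, options, `${path}[${i}]`, out));
  } else if (isPlainObject && options.flattenObjects) {
    const keys = Object.keys(value);
    if (keys.length === 0) out[path || "value"] = null; // empty objects flatten to a null cell
    for (const key of keys) {
      flattenRecord(value[key], options, path ? `${path}.${key}` : key, out);
    }
  } else if (value !== null && typeof value === "object") {
    out[path || "value"] = JSON.stringify(value); // unflattened subtree kept as JSON text
  } else {
    out[path || "value"] = value;
  }
  return out;
}

const record = { name: "Ada", projects: [{ code: "X1" }] };
console.log(flattenRecord(record)); // { name: 'Ada', 'projects[0].code': 'X1' }
```

Turning either switch off stops recursion at that structure type, so the subtree survives as a single JSON-string cell.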

Flattened records are collected into a union of header keys in first seen order, then each cell is formatted so that numbers and booleans stay as simple tokens, null becomes the literal string null or an empty cell depending on the null as blank toggle, and remaining structures are serialised as JSON text. The same flattened headers and cell values are reused to build the preview table, comma separated and tab separated text, HTML and Markdown tables, JSON Lines, an XML tree, and an SQL INSERT script.
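The per-cell formatting rules can be made concrete with a short helper; `nullAsBlank` stands in for the documented toggle, and the function name is an assumption:

```javascript
// Illustrative cell formatter following the rules described above:
// numbers and booleans stay as simple tokens, null becomes "null" or
// an empty cell, and remaining structures become JSON text. The name
// and nullAsBlank flag are assumptions, not the component's API.
function formatCell(value, nullAsBlank = false) {
  if (value === null || value === undefined) return nullAsBlank ? "" : "null";
  if (typeof value === "number") {
    if (!Number.isFinite(value)) return nullAsBlank ? "" : "null"; // NaN/Infinity are non-finite
    return String(value);
  }
  if (typeof value === "boolean") return String(value);
  if (typeof value === "string") return value;
  return JSON.stringify(value); // remaining structures serialised as JSON text
}

console.log(formatCell(42));         // "42"
console.log(formatCell(null));       // "null"
console.log(formatCell(null, true)); // ""
console.log(formatCell({ a: 1 }));   // '{"a":1}'
```

Reusing one formatter across every output format is what keeps cells consistent between the preview table and the downloads.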

  1. Normalise newline characters in the input text and attempt to parse the JSON, reporting a detailed error message on failure.
  2. Optionally sort object keys recursively in ascending or descending order, or preserve their original order when no sorting is chosen.
  3. Resolve the data path as dot notation or JSON Pointer syntax, and use the root value when resolution fails.
  4. Convert the resolved value into an array of records so that single objects become a one row collection.
  5. Flatten each record by joining property names and array indexes into paths, or stringify nested structures when flattening switches are turned off.
  6. Construct a header list from all flattened records, build preview rows up to the configured limit, and flag when extra rows are excluded.
  7. Generate pretty JSON, JSON Lines, delimited text, HTML and Markdown tables, XML, SQL INSERT statements, and optional DOCX and XLSX exports.
Summary of computed metrics and their meaning
Symbol Meaning Unit/Datatype Source
rootType Type of the original parsed value. String Derived from parsed JSON.
targetType Type of the value reached by the configured data path. String Derived from resolved branch.
recordCount Number of records produced after normalising the target value into an array. Integer Derived from working array.
fieldCount Number of distinct flattened field keys across all records. Integer Derived from flattened records.
depth Maximum nesting depth encountered while traversing the parsed JSON structure. Integer Derived by recursive walk.
sizeBytes Length of the original text measured as UTF-8 encoded bytes. Integer Derived from raw input text.
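Two of the metrics above, depth and sizeBytes, can be reproduced with short helpers; the function names are illustrative and the component's internals may differ:

```javascript
// Illustrative computation of the depth and sizeBytes metrics above.
// Names are assumptions; the component's internals may differ.
function maxDepth(value) {
  if (value === null || typeof value !== "object") return 0;
  const children = Array.isArray(value) ? value : Object.values(value);
  return 1 + children.reduce((deepest, child) => Math.max(deepest, maxDepth(child)), 0);
}

function sizeBytes(rawText) {
  return new TextEncoder().encode(rawText).length; // UTF-8 byte length, not character count
}

console.log(maxDepth({ a: { b: [1, 2] } })); // 3
console.log(sizeBytes('{"é":1}'));           // 8 (é is two UTF-8 bytes)
```

The byte count can exceed the character count whenever the input contains non-ASCII text, which is why the badge is described as approximate size rather than length.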

Validation and bounds

Input controls, bounds, and validation behaviour
Field Type Min Max Step/Pattern Error Text Placeholder
input_text Text area Valid JSON Browser limit Strict JSON syntax without comments or trailing commas. Unable to parse JSON: <message>. Sample array of member records.
array_path String 0 characters Unbounded Dot notation or JSON Pointer segments. Path not found, using root data. items.details[0] or /items/0/details.
indent_size Number 2 8 Step 1; out-of-range values are clamped. Invalid values fall back to default width. Defaults to 2 spaces.
sort_keys Enum n/a n/a Values none, asc, desc. Invalid values treated as none. Preserve order.
flatten_objects Boolean switch n/a n/a When false, nested objects may be kept as JSON strings. No explicit error; affects fieldCount and cells. On by default.
flatten_arrays Boolean switch n/a n/a When false, arrays are kept as JSON strings. No explicit error; affects fieldCount and cells. On by default.
null_as_blank Boolean switch n/a n/a When true, null and non-finite numbers render as empty cells. No explicit error; may hide missing values. Off by default.
max_preview_rows Number 10 1000 Step 10; values outside range are clamped. No explicit error; previewTrimmed flag indicates truncation. Defaults to 200 rows.
xml_root String 0 characters 32 characters Invalid characters replaced; non-letter starts prefixed automatically. No explicit error; tag names are sanitised. dataset
xml_item String 0 characters 32 characters Invalid characters replaced; non-letter starts prefixed automatically. No explicit error; tag names are sanitised. item
sql_table String 0 characters 64 characters Non-alphanumeric characters removed or replaced; leading digits prefixed with c_. Empty names fall back to dataset. dataset
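The sanitisation described for sql_table can be approximated like this; the exact replacement rules may differ from this sketch:

```javascript
// Approximate sketch of the SQL identifier sanitisation described
// above: non-alphanumeric characters replaced, leading digits
// prefixed with c_, empty names falling back to "dataset". The
// component's exact rules may differ.
function sanitizeSqlName(name, fallback = "dataset") {
  let clean = String(name).replace(/[^A-Za-z0-9_]/g, "_").replace(/^_+|_+$/g, "");
  if (/^\d/.test(clean)) clean = "c_" + clean; // leading digit prefixed with c_
  return clean || fallback;
}

console.log(sanitizeSqlName("projects[0].code")); // "projects_0__code"
console.log(sanitizeSqlName("2024 sales"));       // "c_2024_sales"
console.log(sanitizeSqlName("!!!"));              // "dataset"
```

A similar cleaning pass applies to xml_root and xml_item, with invalid tag characters replaced and non-letter starts prefixed, so exported names may differ from original field names.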

I/O formats and rounding

Input sources, output formats, and precision notes
Input Accepted Families Output Encoding/Precision Rounding
Text area or dropped file. JSON documents as UTF-8 text. Pretty JSON, structural metrics, and XML. Uses host JSON parser; strings kept as Unicode. No extra rounding beyond environment number formatting.
Array or object at data path. Array of objects, single object wrapped into array. Preview table plus CSV, TSV, HTML, Markdown, JSON Lines, and SQL. Cells formatted as strings, numbers, booleans, null, or JSON text. Numeric values preserved as parsed; non-finite values mapped to null or blanks.
Flattened records. Records built from nested branches. XLSX sheet and DOCX summary exports. Export helpers construct tables from current headers and data rows. No additional numeric transformation during document generation.

Assumptions and limitations

  • Input must be valid JSON; comments, unquoted keys, and trailing commas will cause parsing to fail.
  • Record detection is structural only and does not understand domain concepts such as users, orders, or events.
  • Flattening treats each array element independently and does not cross join nested arrays into multiple rows.
  • When objects or arrays are left unflattened they are serialised as JSON strings, which may be harder to query downstream.
  • Preview is limited to the configured row count, while downloads always include the full set of records.
  • Large or deeply nested payloads may take longer to process and render in older devices or constrained environments.
  • Heads-up: Mixed record shapes produce wide tables with many sparse columns, and missing keys simply appear as blank or null cells.
  • Heads-up: Using the null as blank option hides the distinction between explicit null values and fields that were never present.
  • SQL output follows a simple INSERT pattern and may require adjustment to match specific database dialects or schemas.
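The "simple INSERT pattern" mentioned in the last point might look roughly like this; quoting rules and statement layout are assumptions, not the tool's exact output:

```javascript
// Rough sketch of a simple INSERT generator like the one described;
// quoting and layout are assumptions, not the tool's exact output.
function buildInsert(table, headers, row) {
  const columns = headers.join(", ");
  const values = headers
    .map(h => {
      const v = row[h];
      if (v === null || v === undefined) return "NULL";
      if (typeof v === "number" || typeof v === "boolean") return String(v);
      return "'" + String(v).replace(/'/g, "''") + "'"; // double single quotes for SQL strings
    })
    .join(", ");
  return `INSERT INTO ${table} (${columns}) VALUES (${values});`;
}

console.log(buildInsert("dataset", ["name", "score"], { name: "O'Brien", score: 9 }));
// INSERT INTO dataset (name, score) VALUES ('O''Brien', 9);
```

Dialect-specific details such as identifier quoting, batch inserts, or upsert clauses would still need to be added by hand.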

Edge cases and error sources

  • Non-JSON text or truncated files trigger a parse error and suppress all conversions until the syntax is corrected.
  • Heads-up: Very large integers may lose exact precision when parsed as JSON numbers by the host environment.
  • Numeric values such as NaN or Infinity are treated as non-finite and exported as null or blank depending on settings.
  • Array indexes that fall outside available elements in a path expression cause a path not found warning and fall back to the root.
  • Empty objects or empty arrays flatten to null cells at their path, which can make them hard to distinguish from missing data.
  • Arrays left unflattened are wrapped as JSON strings, so embedded newlines or commas may surprise some CSV consumers.
  • JSON Lines output serialises each selected record independently; if serialisation fails, the literal word null is emitted instead.
  • Heads-up: Columns whose names sanitise to an empty string receive generic column_1 style names in SQL output.
  • Heads-up: XML tag names containing invalid characters are cleaned and may be prefixed, so they can differ from original field names.
  • If the spreadsheet helper fails to load, a warning is added and XLSX export is skipped while other formats remain available.
  • Dragging non-text files into the input may yield unreadable content depending on how the host environment decodes them.
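The CSV quoting concern in the list above is worth making concrete. This escaping helper follows the common RFC 4180 convention and is a sketch, not the component's exact writer:

```javascript
// RFC 4180-style CSV escaping for the edge case above: fields that
// contain commas, quotes, or newlines must be quoted, with inner
// quotes doubled. Illustrative, not the tool's exact writer.
function escapeCsvField(text) {
  if (/[",\r\n]/.test(text)) {
    return '"' + text.replace(/"/g, '""') + '"';
  }
  return text;
}

console.log(escapeCsvField('["a","b"]')); // "[""a"",""b""]"
console.log(escapeCsvField("plain"));     // plain
```

An unflattened array cell such as `["a","b"]` therefore arrives quoted and with doubled inner quotes, which strict CSV parsers handle but ad hoc split-on-comma consumers do not.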

Parsing, flattening, and export generation are executed by scripts within the page; this component does not post your data to remote endpoints, though an auxiliary script for spreadsheet export may be fetched. Even so, avoid pasting credentials or personal identifiers on shared or untrusted devices.

Step-by-Step Guide:

Converting structured JSON data into tables and export formats follows a short sequence that starts with clean input and ends with files or snippets ready to share.

  1. Paste your JSON input or drop a structured file into the main text area.
  2. If records live under a nested branch, set the Data path field to the appropriate dot or pointer expression.
  3. Decide whether to keep nested structures as strings or expand them and toggle Flatten objects and Flatten arrays accordingly.
  4. Choose an indentation width and key sorting mode so the pretty JSON view matches your preferred style for reviewing structure.
  5. Adjust Preview rows if you expect many records, balancing responsiveness against how many samples you want to see.
  6. Switch between Data, JSON, JSON Lines, CSV, TSV, XML, HTML, Markdown, and SQL tabs to inspect, copy, or download each format.
  7. If you see a path not found warning (a common issue), correct the path or clear it to reuse the root.
  • Use smaller preview limits when exploring unfamiliar feeds and increase them only after the table structure looks stable.
  • Turn flattening off temporarily when you want to inspect raw nested JSON fragments before deciding which branches to promote to columns.
  • Keep a small library of tried and tested data paths for recurring feeds so colleagues can reuse them without re-discovery.

Pro tip: test new data paths on small samples first, then apply them to full exports once the preview metrics look plausible.

Once you are comfortable with the structure and exports, you can treat the converter as a staging step between raw feeds and downstream analysis tools.

Key Features:

  • Automatic parsing, validation, and pretty-printing of JSON input with live metrics for type, depth, size, and record counts.
  • Configurable data path and flattening behaviour to target specific arrays or objects while keeping nested structures under control.
  • Multiple export formats including JSON Lines, CSV, TSV, HTML, Markdown, XML, SQL INSERT statements, DOCX summaries, and XLSX spreadsheets.
  • Clipboard and download helpers that turn the current view into reusable snippets or files for downstream tools and documentation.

FAQ:

Is my data stored anywhere?

The converter runs entirely within the page using client-side scripting, and parsed structures stay in memory only while the page is open. Clipboard and download actions hand the generated text to your device without adding background storage within this component.

Close the tab when finished to clear state.
What formats can I export to?

You can work with pretty JSON, JSON Lines, CSV, TSV, HTML tables, Markdown tables, XML documents, and SQL INSERT statements, plus DOCX and XLSX exports built from the same flattened records. Each format uses the same header set so columns stay consistent between downloads.

Some formats are disabled until records and headers are available.
How accurate is the conversion?

Conversion is exact with respect to JSON parsing and flattening, and numbers, booleans, and null values maintain their semantics. Extremely large integers follow the host environment's numeric limits, and complex nested structures may be serialised as JSON strings when flattening is disabled.

Use domain checks elsewhere for business rules.
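The numeric-limits caveat is easy to demonstrate: JSON numbers are parsed as IEEE 754 doubles, so integers beyond Number.MAX_SAFE_INTEGER lose exactness.

```javascript
// JSON numbers become IEEE 754 doubles, so integers above
// Number.MAX_SAFE_INTEGER (2^53 - 1) can silently lose precision.
const parsed = JSON.parse('{"id": 9007199254740993}');
console.log(parsed.id);               // 9007199254740992 (off by one)
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// A common workaround is to transmit such identifiers as strings:
const safe = JSON.parse('{"id": "9007199254740993"}');
console.log(safe.id);                 // "9007199254740993"
```

If a feed carries 64-bit identifiers as bare numbers, compare the parsed values against the raw text before trusting exported IDs.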
Can I filter which records are used?

You choose the structural slice via the data path, which can point to the root, a nested array, or a single object. Preview row limits affect only the on-screen table; downloads always include every record exposed by the resolved branch.

Filtering by value must be done in downstream tools.
Does it work without a network connection?

Once the page and its helper scripts are loaded, parsing, flattening, and most export formats continue to work without further requests. Spreadsheet export may need a one-time helper download; if that fails you will see a warning while other formats remain available.

Refreshing the page while offline may disable features that depend on external scripts.
Do I need a licence for outputs?

The JSON, CSV, TSV, XML, HTML, Markdown, and SQL text generated here is plain structured data. Any licensing, redistribution rules, or retention policies depend on your source systems and organisational guidelines rather than on the converter itself.

Check data governance rules before sharing exports externally.
How do I convert only a nested array?

Set the data path to the branch that contains that array, using dot notation such as items[0].details or JSON Pointer syntax such as /items/0/details. When the path resolves successfully, the array elements become the records shown in the preview and exports.

If a warning appears, adjust the path and recompute.
What does a very deep depth value mean?

Depth counts how many layers of arrays and objects are nested inside the root, so very large values usually indicate multiple wrappers or embedded lists. Depth by itself is not an error, but sudden changes between runs can signal that a feed has gained or lost structure.

Combine depth with record and field counts for a fuller picture.

Troubleshooting:

  • If an "Unable to parse JSON" message appears, check for missing quotes, extra commas, or other syntax issues in the input text.
  • If a path not found warning shows up, confirm the spelling, style, and nesting of your data path or temporarily clear it.
  • When the preview table shows no rows, ensure the resolved branch is an array or object and that it actually contains values.
  • If CSV or TSV exports look empty, confirm that flattening is enabled and that at least one record exposes fields after flattening.
  • When SQL output misses column names, inspect how nested paths were sanitised and consider simplifying paths or flatten options.
  • If XLSX export fails or reports a helper error, try again with a stable connection or rely on CSV and TSV formats instead.
  • When copying output fails silently, verify that your browser allows clipboard access for this page and use download buttons as a fallback.

Advanced Tips:

  • Tip Use ascending key sorting when preparing reference JSON samples so diffs focus on value changes rather than key order.
  • Tip Keep both flatten switches enabled during exploration, then selectively turn them off when you want to preserve complex subtrees as single cells.
  • Tip Treat the metrics at the top as a regression check; unexpected changes in recordCount, fieldCount, or depth may reveal upstream feed issues.
  • Tip Experiment with lower preview limits for very large datasets, then gradually increase them once you are confident in the chosen data path.
  • Tip Align sanitised SQL column names with explicit table definitions so generated INSERT statements integrate cleanly with existing database scripts.
  • Tip Use DOCX exports to capture snapshots of columns and sample rows for documentation, leaving JSON and CSV outputs for pipelines and automation.
  • Tip Prefer JSON Lines output when feeding log or stream processors that expect one JSON object per line for ingestion.

Glossary:

JSON
JavaScript Object Notation, the structured text format parsed and pretty-printed here.
CSV
Comma separated values text, one row per line with fields separated by commas.
TSV
Tab separated values text, similar to CSV but using tab characters between fields.
JSON Lines
Format where each line is one JSON object, convenient for log pipelines and streaming imports.
Flattening
Process of turning nested objects and arrays into single-level key paths such as projects[0].code.
Data path
Expression selecting a branch within the JSON using dot notation or JSON Pointer syntax.
Record
One logical row of data formed from a single object or array element.
Field
Single named value inside a record, corresponding to one column in the exported table.