JSON to CSV Converter
Convert JSON arrays and JSON Lines into CSV or TSV with flattened path headers, custom delimiters, formula guarding, previews, and audit tables for spreadsheet imports.
Introduction
JSON is good at carrying nested records, optional fields, arrays, booleans, numbers, and timestamps. CSV is good at moving rectangular rows into spreadsheets, import screens, data-cleaning scripts, and review tables. Converting between them is useful when an API response, log export, or saved configuration needs to become a table that people can inspect column by column.
The hard part is not changing the file extension. JSON can place a value several levels deep, store a list inside one property, or omit a property from some records. CSV needs a stable set of headers and one cell for each row and column. A good conversion must choose which JSON value becomes the record set, how nested paths become headers, how blank or missing cells are represented, and how text is quoted so spreadsheet software reads the file as intended.
Flattening makes the result easier to filter, sort, and hand to a spreadsheet importer, but it also makes choices visible. Array indexes can become separate path headers, primitive arrays can be joined into one cell, and complex values can be kept as JSON text. Those choices should match the receiving system and stay consistent when you compare repeated exports.
CSV output can also carry spreadsheet risk when untrusted text starts with formula-triggering characters. A formula guard can make many dangerous-looking cells open as text, but it is still worth reviewing exports before sending them to someone who will open the file in a spreadsheet application.
Technical Details:
JSON and CSV have different data shapes. JSON represents values as objects, arrays, strings, numbers, booleans, and nulls. CSV represents a sequence of records, where each record has fields separated by a delimiter. A conversion from JSON to CSV therefore has two technical jobs: identify the records, then reduce each record into the same ordered header set.
Record detection begins with the parsed JSON value. A root array already has record shape. A top-level object may contain an array under common API keys such as data, items, results, records, or rows. When a specific nested array is needed, a dot-and-bracket path points to it. If the source is newline-delimited JSON, each nonblank line is parsed as its own JSON value and then treated as part of the record set.
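The record-detection order above can be sketched as a small Python function. This is an illustrative approximation, not the tool's actual code: the function name, the key order, and the JSON Lines fallback behavior are assumptions based on the description.

```python
import json

# Assumed auto-detect key order, per the description above.
COMMON_KEYS = ("data", "items", "results", "records", "rows")

def select_records(source: str):
    """Parse source text and pick the record set: full JSON first,
    then a JSON Lines fallback when the whole document fails to parse."""
    try:
        value = json.loads(source)
    except json.JSONDecodeError:
        # JSON Lines fallback: each nonblank line must parse on its own.
        lines = [ln for ln in source.splitlines() if ln.strip()]
        return [json.loads(ln) for ln in lines]
    if isinstance(value, list):
        return value                      # a root array already has record shape
    if isinstance(value, dict):
        for key in COMMON_KEYS:
            if isinstance(value.get(key), list):
                return value[key]         # auto-detected top-level array
    return [value]                        # otherwise treat the root value as one record

records = select_records('{"data": [{"id": 1}, {"id": 2}]}')
# records == [{"id": 1}, {"id": 2}]
```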
Once records are selected, each object or array is flattened into header paths. Object properties use dot notation, array positions use bracket notation, and scalar root values fall back to a value column. Header order follows the first appearance of flattened fields across the selected records, so later records can add columns when they contain fields that earlier records did not.
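A rough Python sketch of this flattening scheme, including the first-appearance header order (function names are illustrative; the tool's real implementation may differ):

```python
def flatten(value, prefix=""):
    """Yield (path, scalar) pairs using dot and bracket notation."""
    if isinstance(value, dict):
        for key, child in value.items():
            yield from flatten(child, f"{prefix}.{key}" if prefix else key)
    elif isinstance(value, list):
        for i, child in enumerate(value):
            yield from flatten(child, f"{prefix}[{i}]")
    else:
        yield (prefix or "value", value)  # scalar roots fall back to a "value" column

def headers_for(records):
    """Order headers by first appearance across the selected records."""
    headers = []
    for record in records:
        for path, _ in flatten(record):
            if path not in headers:
                headers.append(path)      # later records can append new columns
    return headers

record = {"id": 101, "team": {"name": "Data"}, "skills": ["etl", "sql"]}
dict(flatten(record))
# {"id": 101, "team.name": "Data", "skills[0]": "etl", "skills[1]": "sql"}
```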
Transformation Core
The conversion path is deterministic. The same source text and settings produce the same rows, headers, quoted fields, warnings, and audit values.
| Stage | Rule | Review Point |
|---|---|---|
| Parse source | Try complete JSON first. If that fails and multiple nonblank lines exist, parse each line as JSON Lines. | Parse errors stop conversion until the source text or line is fixed. |
| Select records | Use the root array, an auto-detected top-level array, the root value, or a custom path. | An empty selected array is rejected because there are no rows to emit. |
| Flatten values | Turn nested object keys and array indexes into path headers unless a stringified or joined-array mode is selected. | Path headers such as team.name and tags[0] show where each cell came from. |
| Build cells | Convert strings, numbers, booleans, bigint values, nulls, missing values, and nested JSON text into cell text. | Null and missing values use the configured blank cell token. |
| Serialize fields | Apply the selected delimiter, header setting, quote mode, double-quote escaping, and formula guard. | Quoted output is required when cell text contains a delimiter, line break, quote, or edge whitespace. |
Nested Value Policies
| Nested Values Setting | What Happens | Best Fit |
|---|---|---|
| Flatten objects and arrays into path columns | Objects and arrays keep expanding until scalar cells are reached. | Spreadsheet filtering, column review, and schema discovery. |
| Flatten objects, join primitive arrays | Objects still become path columns, while arrays of simple values join with the array join token. | Tags, roles, labels, or other short lists that belong in one cell. |
| Stringify nested objects and arrays | Nested complex values are written as JSON text inside one cell. | Preserving compact nested detail when column count matters more than filtering. |
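The three policies can be contrasted in one hedged Python sketch. The policy names, the default join token, and the recursion details here are assumptions for illustration only:

```python
import json

def flatten_with_policy(value, policy, join_token="; ", prefix=""):
    """Sketch of the three nested-value policies: 'flatten', 'join', 'stringify'."""
    if policy == "stringify" and prefix and isinstance(value, (dict, list)):
        # Nested complex values become compact JSON text in one cell.
        yield (prefix, json.dumps(value, separators=(",", ":")))
    elif isinstance(value, dict):
        for key, child in value.items():
            path = f"{prefix}.{key}" if prefix else key
            yield from flatten_with_policy(child, policy, join_token, path)
    elif isinstance(value, list):
        if policy == "join" and all(not isinstance(v, (dict, list)) for v in value):
            # Arrays of simple values join into a single cell.
            yield (prefix or "value", join_token.join(str(v) for v in value))
        else:
            for i, child in enumerate(value):
                yield from flatten_with_policy(child, policy, join_token, f"{prefix}[{i}]")
    else:
        yield (prefix or "value", value)

rec = {"team": {"name": "Data"}, "skills": ["etl", "sql"]}
dict(flatten_with_policy(rec, "join"))
# {"team.name": "Data", "skills": "etl; sql"}
```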
CSV Field Rules
CSV quoting protects table structure. Minimal quoting leaves ordinary cells bare and quotes only cells that need it. Quote every cell makes each field explicit, which can help strict importers or reviewers who want a uniform output style.
| Condition | Output Behavior | Why It Matters |
|---|---|---|
| Cell contains the selected delimiter | The cell is wrapped in double quotes. | The delimiter stays part of the value instead of starting a new column. |
| Cell contains a line break or double quote | The cell is quoted, and embedded double quotes are doubled. | Spreadsheet and parser imports can keep multiline text and quote characters intact. |
| Cell starts or ends with whitespace | The cell is quoted. | Leading and trailing spaces remain visible data rather than accidental trim candidates. |
| Formula guard is on and text begins with =, +, or @ after optional whitespace | An apostrophe is prefixed before serialization. | Many spreadsheet apps then open the value as text instead of evaluating it as a formula. |
The formula guard is a useful default for spreadsheet-bound exports, but it is not a complete security control. Some spreadsheet applications and locales also treat other prefixes or separator tricks as formula entry points, so untrusted data should still be reviewed before a CSV file is shared.
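The field rules above can be condensed into one Python sketch. It follows the table's conditions under stated assumptions: the guard covers only the =, +, and @ prefixes named here, and the function name is illustrative, not the tool's code:

```python
def serialize_field(text: str, delimiter: str = ",", guard: bool = True) -> str:
    """Apply the formula guard, then the quoting rules from the table above."""
    if guard and text.lstrip()[:1] in ("=", "+", "@"):
        text = "'" + text                 # many spreadsheet apps then treat it as text
    needs_quotes = (
        delimiter in text
        or "\n" in text or "\r" in text   # line breaks must stay inside the cell
        or '"' in text                    # embedded quotes are doubled below
        or text != text.strip()           # edge whitespace stays visible data
    )
    if needs_quotes:
        text = '"' + text.replace('"', '""') + '"'
    return text

serialize_field("=SUM(A1:A2)")
# "'=SUM(A1:A2)"
```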
Everyday Use & Decision Guide:
Start with Auto detect records for common API responses. It handles a root array directly and can find top-level arrays named data, items, results, records, or rows. Switch to Use custom path when the record array lives deeper, such as response.results or payload.items[0].children.
Use comma output when the receiver expects standard CSV. Choose tab for TSV-style imports, semicolon for regional spreadsheet workflows, pipe for logs or text systems where commas are common in values, and Custom only when the receiving system has already specified a delimiter. Keep the custom delimiter short and make sure it does not appear inside the data values you care about.
- Keep Header row on unless the importer already has a fixed column order.
- Leave Spreadsheet formula guard on when any cell may be opened in a spreadsheet by another person.
- Use Flatten objects and arrays into path columns when filtering or sorting by nested fields matters.
- Use Join primitive arrays for compact tag lists, then choose an array join token that is not the same as the output delimiter.
- Use Stringify nested objects and arrays when preserving nested detail in one cell is more useful than creating many columns.
- Raise Preview rows for inspection of large inputs, but remember that copy and download actions use the full conversion.
The most common wrong assumption is that a successful parse means the selected rows are the rows you wanted. Check the summary line for the source shape and source path, then open Column Ledger to see whether the expected headers appeared. If Auto selected the wrong array, choose Custom path and point directly to the record list.
Use the audit table before handing output to another system. It records the delimiter, header choice, nested-value policy, quote mode, formula guard state, missing cells, blank cells, and warnings. Those fields are the quickest way to explain why a spreadsheet import has more columns, blank cells, or quoted values than expected.
Step-by-Step Guide:
Use the summary badge, result tabs, and warnings as checkpoints while moving from JSON source to delimited output.
- Paste text into JSON source, use Browse JSON, or drop a .json, .jsonl, .ndjson, or .txt file onto the source area. The file badge should show the chosen file name and size, or the sample badge should remain when the sample is loaded.
- Set Record source. Use Auto detect records first, Use root value for a single object or primitive root, and Use custom path when the table records live inside a nested property.
- If Use custom path is selected, enter a Record path such as data.items or response.results. If the path is empty or not found, the error message tells you to enter a path or fix the missing path before conversion can continue.
- Choose Nested values based on the receiving workflow. Path columns create headers such as team.region or projects[0].code; joined arrays keep primitive lists in one cell; stringified values keep complex nested values compact.
- Choose Output delimiter and, if needed, enter a Custom delimiter. The summary badge changes to the selected delimiter label after conversion is ready.
- Keep Header row and Spreadsheet formula guard in the expected state before trusting the output. Advanced settings can also change Quote mode, Blank cell token, Array join token, and Preview rows.
- Open CSV Output for the final text, Record Preview for visible rows, Column Ledger for present and blank counts, and Conversion Audit for settings evidence. Fix any warning that changes the intended import.
- If parsing fails, check the exact error. A single invalid JSON document reports a JSON parse error, while newline-delimited input reports the failing JSON Lines line number.
After the output matches the expected row count, column count, source path, and formula-guard setting, copy or download the generated text for the target import.
Interpreting Results:
The summary gives the first sanity check: converted row count, column count, source shape, source path, delimiter, header state, and warning count. If the row count or source path is not what you expected, fix record selection before reviewing individual cells.
CSV Output is the text to use for import. Record Preview is only a visible sample when the preview limit is lower than the row count, so a warning appears when the preview is trimmed. Copy and download actions still use all converted rows.
Column Ledger shows each CSV header, JSON path, present count, blank count, type mix, and sample value. Conversion Audit explains the selected delimiter, record source, nested-value setting, quote mode, formula guard, missing cells, blank cells, and warning notes. The JSON view packages the summary, settings, column ledger, and audit rows for handoff review.
A clean conversion does not prove the table is ready for every importer. Missing cells can be normal when records have optional fields, quoted cells can be normal when values contain delimiters or line breaks, and a formula-guard apostrophe can be intentional. Check those cues against the target spreadsheet, database import, or script before treating the CSV as final.
If a chart tab is present, use the data tables as the source of evidence for the current conversion. Row-specific checks belong in the preview, ledger, audit, and JSON result views.
Worked Examples:
API response with a nested data array
A response such as {"data":[{"id":101,"team":{"name":"Data"},"skills":["etl","sql"]},{"id":102,"team":{"name":"Platform"},"skills":["ops"]}]} works with Record source set to Auto. The summary should report JSON at $.data, CSV Output should include headers such as id, team.name, skills[0], and skills[1], and Column Ledger should show skills[1] as present for only one of the two rows.
JSON Lines log sample
Two lines such as {"email":"ada@example.test","score":8} and {"email":"grace@example.test","score":null} are parsed as JSON Lines after full JSON parsing fails. The warning list should say that two JSON Lines records were parsed. With a blank cell token left empty, CSV Output should leave the second score cell empty, while Conversion Audit records the blank cell count.
Custom path correction
A source shaped like {"response":{"items":[{"sku":"A-1","qty":3}]}} will fail if Record path is set to response.rows. The error reports that the path was not found. Changing the path to response.items should rebuild the conversion with one row, source path $.response.items, and headers sku and qty.
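The path correction in this example can be mimicked with a small dot-and-bracket resolver. This is a hedged sketch: the function name and the None-for-missing convention are assumptions, standing in for the tool's path-not-found error:

```python
import re

def resolve_path(value, path: str):
    """Resolve a dot-and-bracket record path such as 'response.items[0].sku'.
    Returns None when any segment is missing."""
    for key, index in re.findall(r"([^.\[\]]+)|\[(\d+)\]", path):
        if key:
            if not isinstance(value, dict) or key not in value:
                return None               # path not found
            value = value[key]
        else:
            i = int(index)
            if not isinstance(value, list) or i >= len(value):
                return None               # index out of range
            value = value[i]
    return value

source = {"response": {"items": [{"sku": "A-1", "qty": 3}]}}
resolve_path(source, "response.rows")     # None: path not found
resolve_path(source, "response.items")    # [{"sku": "A-1", "qty": 3}]
```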
Spreadsheet formula text
A record containing {"name":"Quarter total","note":"=SUM(A1:A2)"} should keep Spreadsheet formula guard on when the result will be opened in a spreadsheet. CSV Output prefixes the formula-looking cell with an apostrophe before quoting rules are applied, and Conversion Audit records the formula guard as On. Review untrusted exports manually because the guard covers the prefixes implemented by the converter, not every spreadsheet formula trigger.
FAQ:
Does the converter upload my JSON?
The conversion flow reads pasted text or selected files in the browser tab and builds the delimited output there. No backend conversion request is part of the source-processing path shown by the tool.
Why did Auto choose the wrong rows?
Auto prefers a root array, then top-level arrays named data, items, results, records, or rows, then the first top-level array it finds. Use Record path when the real records are somewhere else.
Why do some headers contain brackets?
Bracketed headers come from array indexes in flattened path mode. For example, the first item in a skills array becomes skills[0]. Use joined arrays or stringified nested values if that creates too many columns.
What should I do with a JSON Lines parse error?
Use the reported line number to find the invalid line, then fix that line as a complete JSON value. Blank lines are ignored, but each nonblank line must parse on its own.
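Line-numbered JSON Lines parsing can be sketched as follows. The error wording here is illustrative, not the tool's exact message; the point is that blank lines are skipped while the 1-based line number of the first bad line is reported:

```python
import json

def parse_json_lines(source: str):
    """Parse JSON Lines, reporting the line number of the first invalid line."""
    records = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if not line.strip():
            continue                      # blank lines are ignored
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError as exc:
            raise ValueError(f"JSON Lines parse error on line {lineno}: {exc}") from None
    return records
```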
Does the formula guard make every CSV safe?
No. It prefixes cells that begin with =, +, or @ after optional whitespace. That reduces common spreadsheet formula surprises, but untrusted CSV data still needs review before another person opens it.
Glossary:
- CSV
- A delimited text table made of records and fields, commonly opened by spreadsheets and import tools.
- JSON Lines
- A text format where each nonblank line is parsed as one complete JSON value.
- Record path
- A dot-and-bracket path that points to the JSON value that should become CSV rows.
- Flattened path
- A column header built from nested object keys and array indexes, such as team.name or projects[0].code.
- Delimiter
- The character or short token that separates fields in the generated output.
- Formula guard
- The optional apostrophe prefix added to formula-looking spreadsheet cells before CSV serialization.
References:
- The JavaScript Object Notation (JSON) Data Interchange Format, RFC Editor, December 2017.
- Common Format and MIME Type for Comma-Separated Values (CSV) Files, RFC Editor, October 2005.
- JSON Lines, jsonlines.org.
- Testing for CSV Injection, OWASP Foundation.