Comma separated values files are plain text tables where each line represents a row and commas separate the individual cells. They make it easy to move lists or reports between tools without changing every field by hand.
In many projects you need a quick way to inspect and reshape comma separated values text before loading it into dashboards or applications. A flexible helper that converts comma separated values into structured data saves you from writing brittle one off scripts for every import.
Here you paste or drop tabular text and immediately see a clean table with row counts, column counts, sample values, and simple warnings. From there you can turn the same dataset into configuration friendly formats such as structured objects, markup, or database statements.
For example you might paste a list of customer records from a spreadsheet, confirm that names, dates, and amounts look sensible, then export inserts for a staging database. It still helps to review edge cases such as empty values or irregular rows because a structurally valid file can hide subtle mistakes.
Running the same dataset with consistent delimiter settings, header choices, and trimming rules keeps results comparable over time. When you change those options, watch how the reported column types and row counts shift since that often reveals hidden whitespace, stray separators, or extra header lines.
Comma Separated Values (CSV) text encodes tabular data as rows and columns separated by newline characters and a chosen delimiter symbol. The converter reads that text into a structured grid, infers an optional header row, and then regenerates equivalent records as JavaScript Object Notation (JSON) objects or arrays, Structured Query Language (SQL) insert statements, tab separated values (TSV), eXtensible Markup Language (XML), HTML tables, Markdown tables, and spreadsheet workbooks.
As the file is parsed, the engine measures the number of data rows, the widest column count, and whether the first row behaves like column labels. It also builds a compact profile for each column, counting non empty cells, null markers, empty strings, unique values, and a representative sample value while classifying columns as numeric, boolean, date like, textual, object shaped, purely null, or empty.
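To make the profiling step concrete, the sketch below counts the same statistics for a single column in TypeScript. The type name ColumnProfile and the helper profileColumn are illustrative only; the converter's internal code may be organised differently.

```typescript
// Minimal sketch of a per column profile, assuming cells arrive as already parsed values.
// The property names mirror the profile described above; the real engine may differ.
type ColumnProfile = {
  nonEmpty: number;
  nullCount: number;
  emptyCount: number;
  unique: number;
  sample: string | null;
};

function profileColumn(cells: (string | number | boolean | null)[]): ColumnProfile {
  const seen = new Set<string>();
  let nonEmpty = 0;
  let nullCount = 0;
  let emptyCount = 0;
  let sample: string | null = null;

  for (const cell of cells) {
    if (cell === null) { nullCount++; continue; }           // explicit null markers
    if (typeof cell === "string" && cell.trim() === "") {   // blank strings after trimming
      emptyCount++;
      continue;
    }
    nonEmpty++;
    const text = String(cell);
    seen.add(text);
    if (sample === null) sample = text;                     // keep the first non empty value as the sample
  }

  return { nonEmpty, nullCount, emptyCount, unique: seen.size, sample };
}
```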
Header detection combines several signals rather than relying on a single rule. It scores the first row by checking for common header keywords, the share of distinct labels, how closely the values resemble proper names or compact codes, and how different the type distribution in the first row is from the type distribution of later rows, then compares that score with a width dependent threshold before deciding whether to treat the first row as labels or data.
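The following sketch shows the general shape of that scoring approach; the keyword list, weights, and threshold here are placeholders rather than the converter's actual values.

```typescript
// Rough sketch of combining header signals into a single score and comparing it
// with a width dependent threshold; all constants below are illustrative only.
function looksLikeHeader(firstRow: string[], laterRows: string[][]): boolean {
  if (firstRow.length === 0) return false;
  const HEADER_WORDS = ["id", "name", "date", "type", "status"]; // placeholder keyword list

  const keywordShare =
    firstRow.filter(v => HEADER_WORDS.includes(v.trim().toLowerCase())).length / firstRow.length;
  const distinctShare =
    new Set(firstRow.map(v => v.trim().toLowerCase())).size / firstRow.length;
  const labelShare =
    firstRow.filter(v => /^[A-Za-z][\w .-]*$/.test(v.trim())).length / firstRow.length;

  // Compare how often the first row parses as a number with how often later rows do.
  const numeric = (v: string) => v.trim() !== "" && !Number.isNaN(Number(v));
  const firstNumericShare = firstRow.filter(numeric).length / firstRow.length;
  const laterCells = laterRows.flat();
  const laterNumericShare =
    laterCells.length === 0 ? 0 : laterCells.filter(numeric).length / laterCells.length;
  const typeDifference = Math.abs(firstNumericShare - laterNumericShare);

  const score = keywordShare + distinctShare + labelShare + typeDifference;

  // Wider tables get a slightly lower bar; again, purely illustrative numbers.
  const threshold = 2.0 - Math.min(0.5, firstRow.length * 0.02);
  return score > threshold;
}
```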
When type detection is enabled, each field is first trimmed and then interpreted in stages: explicit null markers become null values, true or false like tokens become booleans, plain decimal or scientific notation strings become numbers, and optionally ISO style timestamps are recognised and preserved in canonical form. Everything that does not match these patterns remains a string so that the original content is still accessible in exports.
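A minimal sketch of those stages, assuming a small illustrative set of null markers, might look like this; the converter's exact token lists and date handling may differ.

```typescript
// Staged interpretation of a single field: null markers, then booleans, then
// numbers, then optional ISO style dates, and finally plain strings.
function coerceField(raw: string, detectDates: boolean): string | number | boolean | null {
  const value = raw.trim();

  // Illustrative null marker list; the real set of recognised tokens may differ.
  if (["null", "nil", "na", "n/a"].includes(value.toLowerCase())) return null;

  if (/^(true|false)$/i.test(value)) return value.toLowerCase() === "true";        // boolean tokens
  if (/^[+-]?(\d+\.?\d*|\.\d+)([eE][+-]?\d+)?$/.test(value)) return Number(value); // decimal or scientific notation

  if (detectDates && /^\d{4}-\d{2}-\d{2}(T\d{2}:\d{2}(:\d{2}(\.\d+)?)?([Zz]|[+-]\d{2}:?\d{2})?)?$/.test(value)) {
    const parsed = new Date(value);
    if (!Number.isNaN(parsed.getTime())) return parsed.toISOString();              // preserve in canonical form
  }

  return value; // everything else stays a string
}
```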
| Symbol | Meaning | Unit/Datatype | Source |
|---|---|---|---|
| rowCount | Number of data rows after optional header and empty line handling. | integer | Derived from parsed rows |
| columnCount | Maximum number of fields across all rows after padding. | integer | Derived from parsed rows |
| header | Indicates whether a header row is currently in use. | boolean | Auto detection or user choice |
| delimiterLabel | Human readable description of the delimiter that was applied. | string | Resolved from settings and text scan |
| previewTrimmed | Signals that the on screen preview shows only the first block of rows. | boolean | Comparison of total rows with preview limit |
| typeSummary | Top column type counts, such as numeric, text, date, boolean, or object. | array of strings | Aggregated from column profiles |
id,name,score
101,Ada,98.5
102,Grace,91.2
With automatic settings the engine chooses a comma delimiter, treats the first row as headers, and finds three columns.
The profile reports one numeric column for id, one text column for name, and one numeric column for score, producing a type summary such as “2 numerics, 1 text”.
JSON object exports then use these header labels as keys, SQL inserts reuse them as column names, and XML tags default to cleaned versions of the same identifiers.
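Assuming automatic settings, the JSON records, JSON Lines, and SQL exports for this sample would look roughly like the snippet below; key order, quoting, and whitespace in the real output may differ.

```typescript
// Approximate shape of the exports for the sample above, assuming header and
// type detection are both enabled.
const records = [
  { id: 101, name: "Ada", score: 98.5 },
  { id: 102, name: "Grace", score: 91.2 },
];

// JSON Lines: one object per line.
const jsonLines = records.map(r => JSON.stringify(r)).join("\n");

// SQL insert using the default table name and backtick escaped identifiers;
// the generated statement's exact formatting may differ.
const sqlInsert = [
  "INSERT INTO `dataset` (`id`, `name`, `score`) VALUES",
  "(101, 'Ada', 98.5),",
  "(102, 'Grace', 91.2);",
].join("\n");
```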
| Field | Type | Min | Max | Step / Pattern | Error text or behaviour | Placeholder |
|---|---|---|---|---|---|---|
| CSV text area | multiline text | n/a | n/a | any pasted or loaded text | Shows “No rows detected after parsing” when nothing remains after trimming. | Sample customer table |
| Preview rows | integer | 10 | 1000 | step 10, clamped to bounds | Extra rows stay exportable while the preview badge notes trimming. | 200 |
| Custom delimiter | short text | 0 | 4 | up to four characters | Empty custom input falls back to a comma delimiter. | Vertical bar |
| XML root tag | text | 0 | 32 | invalid characters removed, names starting with a digit prefixed with “n” | Missing or empty name falls back to rows. | rows |
| XML row tag | text | 0 | 32 | same sanitiser as the root tag | Missing or empty name falls back to row. | row |
| SQL table name | text | 0 | 64 | non alphanumeric characters replaced, leading digits prefixed | Invalid or empty input falls back to a generic dataset table identifier (see the sanitiser sketch after this table). | dataset |
| Type detection switches | booleans | n/a | n/a | detect types, detect dates, empty as null | Changes only affect future parses, not already exported strings. | various defaults |
| Parsing errors | status | n/a | n/a | state machine detects quoting problems | Reports messages such as “Unable to resolve delimiter” or “Reached end of file while inside quotes”. | none |
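The tag and identifier clean up described in the table can be pictured with the following sketch; the exact replacement rules and prefixes used by the converter may be stricter than shown here.

```typescript
// Illustrative sanitisers for XML tag names and the SQL table name, following the
// behaviour summarised in the table above; details are assumptions, not the real code.
function sanitiseXmlTag(name: string, fallback: string): string {
  let tag = name.replace(/[^A-Za-z0-9_.-]/g, "");   // drop invalid characters
  if (/^[0-9]/.test(tag)) tag = "n" + tag;          // names starting with a digit get an "n" prefix
  return tag === "" ? fallback : tag;               // empty input falls back to "rows" or "row"
}

function sanitiseSqlIdentifier(name: string): string {
  let id = name.replace(/[^A-Za-z0-9_]/g, "_");     // replace non alphanumeric characters
  if (/^[0-9]/.test(id)) id = "t_" + id;            // leading digits get a prefix
  if (id.replace(/_/g, "") === "") return "dataset"; // invalid or empty input falls back to the generic name
  return id;
}
```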
| Conversion | Accepted input | Output | Encoding / precision | Rounding policy |
|---|---|---|---|---|
| Pasted text | CSV like lines with comma, semicolon, tab, pipe, space, or custom delimiter | Preview table, CSV copy, TSV text | UTF-8 text, quoted where needed | No extra rounding beyond JavaScript number representation |
| Dropped file | .csv and .txt files read as text | Same exports as pasted text | File contents treated as a single text buffer | Values preserved exactly except for configured trimming |
| JSON arrays | Rows as arrays of typed values | Indented JSON string and downloadable file | Standard JSON encoding with two space indentation | Numbers serialised using built in JSON stringify |
| JSON records | Objects mapped from header keys | Indented JSON and JSON Lines | One object per row, keys derived from headers | Same as arrays, plus string escaping for keys |
| XML | Sanitised root and row tag names | Well formed XML document with optional row index attributes | UTF-8 text with character entity escaping | Values converted to strings without numeric rounding |
| SQL | Header based columns and typed row values | Single INSERT statement with multiple value tuples | Identifiers escaped with backticks, values quoted and escaped | Numbers and booleans preserved as literals, null as NULL |
| Spreadsheet exports | Header and rows as an array of arrays | XLSX workbook with one sheet, DOCX document | External workbook library plus document export helper | Cell values mirror current preview and header configuration |
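The SQL row in the table above summarises how values are rendered; a minimal sketch of those rules, using one common escaping convention, is shown below.

```typescript
// Sketch of the SQL value rules listed above: numbers and booleans stay literal,
// null becomes NULL, and strings are quoted with single quotes doubled.
function formatSqlValue(value: string | number | boolean | null): string {
  if (value === null) return "NULL";
  if (typeof value === "number" || typeof value === "boolean") return String(value);
  return "'" + value.replace(/'/g, "''") + "'";
}

// Identifiers are wrapped in backticks; doubling an embedded backtick is one
// common convention and is assumed here rather than confirmed.
function quoteIdentifier(name: string): string {
  return "`" + name.replace(/`/g, "``") + "`";
}
```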
Fields that contain the literal tokens null or undefined in the source become nulls when detection is enabled, which may surprise downstream tools.
All parsing, profiling, and export work is performed in a browser based interface, and the code does not transmit your CSV contents to a server. To support spreadsheet export an external library script is downloaded, but only the code is fetched rather than your data. Files are processed locally; nothing is uploaded, so you should still avoid pasting credentials, secrets, or highly sensitive personal information.
Working with comma separated values data here is about turning a raw text table into clear, reusable structures for analysis or import.
For example you can load the bundled sample dataset, confirm that the header row is detected and scores are typed as numbers, then copy JSON Lines for an ingestion pipeline while also exporting SQL inserts for a staging database.
The outcome is a single source of truth for your tabular data that is easy to reuse across scripts, notebooks, or database tools.
Pro tip: once you find delimiter, header, and typing settings that suit a common data source, reuse them for every export from that system so JSON, XML, and SQL outputs remain consistent over time.
You can paste any delimited text that resembles a table or drop a plain text file with a consistent delimiter. Comma, tab, semicolon, pipe, space, and short custom delimiters are supported, and irregular rows are still accepted with warnings rather than being discarded. Structured binary spreadsheets must be exported to text first.
Header detection combines keyword checks, uniqueness, proper name patterns, and type distribution differences between the first row and later rows. In many everyday files this reliably identifies column labels, but you can always force “Use first row” or “Treat all as data” when the preview suggests that the automatic choice is wrong. Trust the preview and profile when in doubt.
Parsing, profiling, and conversions run inside your browser session, and the script does not send your CSV contents to a server. A public workbook library is fetched for spreadsheet export, but the request carries only code, not your dataset. Files are processed locally and are not persisted by this interface. Avoid pasting highly sensitive information.
The converter emits JSON arrays, JSON records, JSON Lines, XML, HTML tables, Markdown tables, TSV, CSV, SQL insert statements, XLSX workbooks, and DOCX documents. All text output is UTF-8, numbers and booleans follow standard JSON and SQL representations, and XML entities are escaped as needed. Binary spreadsheets are provided via the downloaded workbook library.
Once the page and its libraries have loaded, core parsing, profiling, and text based exports continue to work without further connections. Spreadsheet export depends on the external library being available in the browser cache, so XLSX downloads may fail if that script is not yet cached and the network is unavailable. Refresh after reconnecting if exports fail.
The interface exposes no sign in, billing, or license fields and simply converts the data you provide. Any broader usage terms come from the platform hosting this page, so you should review those terms separately if you plan to integrate exports into production workflows or commercial tools. Check your organisation’s policies for data handling.
When a column mixes numbers and text, the profiler chooses the type with the most non empty samples and reports the count. That means an identifier column with a few missing values may still be treated as text, while a mostly numeric column with one stray label can appear numeric but still show that label as a sample value. Use both the type and sample when judging columns.
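As a rough picture of that majority rule, the sketch below tallies one type vote per non empty cell and reports the winner; the converter's tie breaking and type labels may differ.

```typescript
// Each non empty cell votes for a type; the column takes the type with the most votes.
function dominantType(cellTypes: string[]): { type: string; count: number } {
  const votes = new Map<string, number>();
  for (const t of cellTypes) votes.set(t, (votes.get(t) ?? 0) + 1);

  let best = { type: "empty", count: 0 };
  for (const [type, count] of votes) {
    if (count > best.count) best = { type, count };
  }
  return best;
}

// Example: a mostly numeric column with one stray label is still reported as numeric.
dominantType(["number", "number", "number", "text"]); // { type: "number", count: 3 }
```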
Ensure the header row matches your target table column names, adjust JSON keys and display labels if needed, then navigate to the SQL tab. Copy or download the generated insert statement and run it in a safe staging environment before applying it to live data, confirming that types and null handling match your schema expectations. Always test on a small subset first.
Blocking error: “Reached end of file while inside quotes”.
This means a quoted field was never closed. Fix it by checking lines with long text for missing closing quotes or stray delimiters inside quote pairs, then paste the corrected text again so parsing can complete.
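For illustration, here is a hypothetical snippet where the second line opens a quote that is never closed, together with a corrected version.

```typescript
// The quote opened on the second data row is never closed, so parsing reaches
// the end of the text while still inside quotes and reports the blocking error.
const broken = 'id,comment\n1,"multi word note\n2,another note';

// Adding the missing closing quote lets the parser finish normally.
const fixed = 'id,comment\n1,"multi word note"\n2,another note';
```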
Use these ideas when you are tuning repeated imports or building pipelines around the converter.