Tab-separated values (TSV) files are plain-text tables in which each line represents a row and each tab character marks the next column. They are useful when you need a simple converter for moving data between analytics tools or for quick inspections. This page focuses on making that conversion predictable while showing where the table structure might be messy.
You paste or drop your text, and the converter inspects the first rows to pick a likely delimiter and header row. It reports how many rows and columns it sees and highlights common data types. That summary helps you catch issues early, before you send the output to another system.
Once the layout looks right, you can explore different views of the same dataset. One tab shows a scrollable table that mirrors the parsed text, and another shows a profile of each column with counts of empty and null values. Other views focus on ready-to-copy representations such as structured text, arrays, markup, or database INSERT statements.
Because the parser is fully automatic, it can still make mistakes when data is inconsistent or when multiple values share the same pattern. For best results, keep similar rows together, use consistent delimiters, and scan the warnings panel whenever something looks unexpected. Be cautious with sensitive information and avoid pasting anything that should not stay on your device.
Tab-Separated Values (TSV) and related delimited text formats represent rectangular data as rows separated by newlines and fields separated by a chosen delimiter. The core quantities the converter works with are row count, column count, header presence, and basic data types inferred per column.
After parsing, the data is transformed into two parallel structures — raw strings and coerced values. Coerced values are classified as numbers, booleans, dates, strings, objects, nulls, or empty cells. These derived structures drive the profile view, type summaries, and structured exports such as JSON, XML, SQL, CSV, HTML, Markdown, DOCX, and XLSX.
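The per-cell coercion described above can be sketched as a small classifier. This is a minimal illustration, not the converter's actual implementation; the set of null markers and the exact number and date patterns are assumptions.

```python
import re
from datetime import datetime

def coerce(raw: str, empty_as_null: bool = False):
    """Classify one trimmed cell into a (type, value) pair using the coarse
    types from the text: number, boolean, date, string, null, or empty."""
    if raw == "":
        return ("null", None) if empty_as_null else ("empty", "")
    low = raw.lower()
    if low in ("null", "nil", "n/a"):  # assumed set of explicit null markers
        return ("null", None)
    if low in ("true", "false"):
        return ("boolean", low == "true")
    if re.fullmatch(r"-?\d+(\.\d+)?([eE][+-]?\d+)?", raw):
        return ("number", float(raw) if any(c in raw for c in ".eE") else int(raw))
    try:  # ISO-like timestamps only, per the "detect dates" option
        datetime.fromisoformat(raw.replace("Z", "+00:00"))
        return ("date", raw)
    except ValueError:
        return ("string", raw)
```

Running the raw strings through such a function yields the parallel coerced structure that the profile view and structured exports consume.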
Interpretation focuses on how many fields are non-empty, how often values repeat, and which coarse type dominates each column. A profile row lists non-empty, null, empty, and unique counts alongside a sample value, making it easier to spot identifier columns, sparse metrics, and mismatched types before exporting.
Comparability across runs depends on keeping delimiter settings, header mode, trimming, and type detection choices stable. Header detection can be fixed to “use first row” or “treat all as data,” or left in automatic mode where a scoring heuristic compares the first line against several sample rows for keyword matches and type distribution differences.
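The automatic header mode's scoring heuristic might look roughly like the following sketch. The keyword list, sample size, and score threshold are assumptions for illustration; the real heuristic may weigh additional signals.

```python
# Assumed keyword list; the converter's real list may differ.
HEADER_KEYWORDS = {"id", "name", "date", "type", "key", "value", "count", "label"}

def looks_numeric(cell: str) -> bool:
    try:
        float(cell)
        return True
    except ValueError:
        return False

def detect_header(rows: list[list[str]], samples: int = 5) -> bool:
    """Score the first line against a few sample rows: keyword matches and
    type-distribution differences both count as evidence of a header."""
    if len(rows) < 2:
        return False
    first, body = rows[0], rows[1:1 + samples]
    score = 0
    for i, cell in enumerate(first):
        if cell.strip().lower() in HEADER_KEYWORDS:
            score += 2  # keyword match is strong evidence
        col = [r[i] for r in body if i < len(r)]
        if not looks_numeric(cell) and col and all(looks_numeric(c) for c in col):
            score += 1  # first-line cell is text while the column below is numeric
    return score >= 2
```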
| Parameter | Meaning | Unit/Datatype | Typical range | Sensitivity | Notes |
|---|---|---|---|---|---|
| Delimiter mode | Selects field separator or enables automatic detection. | Enumeration | Auto, comma, semicolon, tab, pipe, space, custom | High | Controls how lines are split into columns. |
| Custom delimiter | Explicit delimiter when presets do not match. | String | 1–4 characters | High | Defaults to comma if left blank in custom mode. |
| Quote character | Marks fields that may contain delimiters or newlines. | Enumeration | Double quote, single quote, or none | High | Defaults to double quote; unclosed quotes trigger a parsing error. |
| Escape style | Chooses quoting escape convention inside fields. | Enumeration | Repeat quote or backslash | Medium | Backslash interprets \n and \t inside quoted values. |
| Header mode | Controls whether the first row becomes column labels. | Enumeration | Auto, header, none | High | Affects JSON objects, JSON Lines, SQL, and XML tags. |
| Preview rows | Limits how many rows appear in the on-page table. | Integer | 10–1000, default 200 | Low | Exports always include all parsed rows. |
| Trim fields | Removes leading and trailing whitespace around values. | Boolean | On or off | Medium | Influences emptiness tests and type classification. |
| Skip empty rows | Drops rows where all fields are blank after trimming. | Boolean | On or off | Medium | Reduces noise in previews and exports. |
| Detect data types | Converts obvious numbers, booleans, and dates from text. | Boolean | On or off | High | Also drives column type labels in the profile view. |
| Detect dates | Recognizes ISO-like timestamps for date classification. | Boolean | On or off | Medium | Applies only when type detection is enabled. |
| Empty as null | Treats empty strings as explicit null values. | Boolean | On or off | Medium | Affects JSON, SQL, XML, and profiling null counts. |
| XML root and row tags | Names the outer and row elements in XML exports. | Strings | Up to 32 characters each | Low | Invalid characters are stripped and leading digits prefixed. |
| Row index attribute | Optionally adds a 1-based index attribute to each XML row. | Boolean | On or off | Low | Helps trace exports back to original row positions. |
| SQL table name | Base name for generated INSERT statements. | String | Up to 64 characters | High | Sanitized to lowercase, alphanumeric, and underscore with a safe prefix if needed. |
| Column naming overrides | Custom display labels and JSON keys per column. | Arrays of strings | One entry per column | High | Keys are deduplicated and sanitized before use in objects and XML tags. |
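Automatic delimiter detection, the first parameter above, typically scores each candidate by how consistently it splits the sample lines. A minimal sketch of that idea, assuming a ten-line sample and a simple consistency bonus (the converter's actual scoring may differ):

```python
def detect_delimiter(text: str, candidates: str = ",;\t| ") -> str:
    """Pick the candidate separator that appears on every sampled line,
    preferring consistent per-line counts and wider splits."""
    lines = [ln for ln in text.splitlines() if ln.strip()][:10]
    best, best_score = ",", 0.0
    for d in candidates:
        counts = [ln.count(d) for ln in lines]
        if not counts or min(counts) == 0:
            continue  # a real delimiter must occur on every line
        consistency = 1.0 if len(set(counts)) == 1 else 0.5
        score = consistency * (sum(counts) / len(counts))
        if score > best_score:
            best, best_score = d, score
    return best
```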
| Field | Type | Min | Max | Step / pattern | Error text | Placeholder / default |
|---|---|---|---|---|---|---|
| Input text | Multiline string | 1 character | Device memory | Arbitrary Unicode | “Unable to resolve delimiter.” “Failed to parse TSV.” “No rows detected after parsing.” “Reached end of file while inside quotes.” | Sample table of identifiers, names, and scores |
| Preview rows | Number | 10 | 1000 | Step 10 | Values outside the range are clamped. | 200 |
| Custom delimiter | Text | 0 characters | 4 characters | Literal sequence | Falls back to comma when blank in custom mode. | “\|” |
| XML root tag | Text | 1 character | 32 characters | Letters, digits, underscore, dash, colon, dot | Invalid characters are dropped, leading digits prefixed. | rows |
| XML row tag | Text | 1 character | 32 characters | Letters, digits, underscore, dash, colon, dot | Invalid characters are dropped, leading digits prefixed. | row |
| SQL table name | Text | 1 character | 64 characters | Alphanumeric and underscores after sanitization | Empty or unsafe names are replaced with a safe fallback. | dataset |
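The sanitization rules for XML tags and SQL table names in the table above can be sketched as two small functions. The fallback values and the `t_` prefix are assumptions; the converter's exact replacement rules may differ.

```python
import re

def sanitize_xml_tag(name: str, fallback: str = "row") -> str:
    """Keep letters, digits, underscore, dash, colon, and dot (max 32 chars);
    prefix a leading digit, since XML names cannot start with one."""
    cleaned = re.sub(r"[^A-Za-z0-9_\-:.]", "", name)[:32]
    if not cleaned:
        return fallback
    if cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned

def sanitize_sql_table(name: str, fallback: str = "dataset") -> str:
    """Lowercase, alphanumeric and underscore only (max 64 chars);
    add an assumed safe prefix when the result starts with a digit."""
    cleaned = re.sub(r"[^a-z0-9_]", "", name.lower())[:64]
    if not cleaned:
        return fallback
    if cleaned[0].isdigit():
        cleaned = "t_" + cleaned
    return cleaned
```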
| Input | Accepted families | Output | Encoding / precision | Rounding |
|---|---|---|---|---|
| Text area or dropped file | TSV, CSV, delimited plain text | Parsed rows and columns | UTF-8 strings in memory | No additional rounding, numeric text preserved when not coerced. |
| Parsed table | Detected header and typed cells | JSON arrays and JSON records | Pretty-printed with two-space indentation | Numbers emitted exactly as stored when finite. |
| Parsed table | Header row and coerced values | JSON Lines (NDJSON) | One compact JSON object per line | No line-level aggregation or rounding. |
| Parsed table | Header row and typed cells | SQL INSERT statements | Quoted strings with doubled single quotes, TRUE/FALSE, NULL for missing values | Numeric values written as-is when finite. |
| Parsed table | Any delimited input | XML document | Escapes &, <, >, quotes, and apostrophes | No numeric formatting beyond string conversion. |
| Parsed table | Header and data rows | CSV, HTML, Markdown, DOCX, XLSX | Standard text encodings, spreadsheet tabular cells | Values exported as text unless previously coerced. |
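The SQL export row in the table above (doubled single quotes, TRUE/FALSE, NULL for missing values, finite numbers written as-is) can be illustrated with a short generator. This is a sketch of the described output format, not the converter's own code; the function name is an assumption.

```python
def to_sql_inserts(table: str, headers: list[str], rows: list[list[object]]) -> str:
    """Emit one INSERT statement per row using the quoting rules described:
    strings quoted with doubled single quotes, booleans as TRUE/FALSE,
    None as NULL, and numbers written as-is."""
    cols = ", ".join(headers)

    def lit(v: object) -> str:
        if v is None:
            return "NULL"
        if isinstance(v, bool):  # must precede the int check (bool is an int)
            return "TRUE" if v else "FALSE"
        if isinstance(v, (int, float)):
            return str(v)
        return "'" + str(v).replace("'", "''") + "'"

    return "\n".join(
        f"INSERT INTO {table} ({cols}) VALUES ({', '.join(lit(v) for v in r)});"
        for r in rows
    )
```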
Performance is effectively linear in the length of the input text because each character is visited once during parsing. Preview rendering focuses on at most 1000 rows, while exports iterate over the full parsed dataset. For very large files, perceived performance is determined mainly by browser memory and rendering overhead.
Parsing and exporting are deterministic for a fixed configuration: given the same input text, delimiter settings, header mode, trimming, and type detection choices, the resulting metrics and exports are identical. Column warnings and type summaries are recomputed whenever parameters change.
Files are processed locally; nothing is uploaded. Structured exports are generated in the page and downloaded directly. Sensitive or regulated data should still be handled according to your organization’s privacy and compliance policies.
This converter is designed to turn pasted or dropped delimited text into consistent, reusable outputs without manual reformatting.
Example run: paste a small team roster, let automatic header detection label “id”, “name”, and “team”, then export JSON Lines for ingestion into a logging or analytics pipeline.
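For the roster example, the JSON Lines output (one compact object per line, keyed by the detected header) would be produced by something like this sketch:

```python
import json

def to_json_lines(headers: list[str], rows: list[list[object]]) -> str:
    """One compact JSON object per row, keyed by the header labels."""
    return "\n".join(
        json.dumps(dict(zip(headers, r)), separators=(",", ":")) for r in rows
    )
```

Feeding a two-row roster through it yields one ingestion-ready record per line, which most logging and analytics pipelines accept directly.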
The more consistently you configure delimiter and type settings between runs, the easier it becomes to compare exports and automate their use in scripts or data flows.
You can export the parsed table as JSON arrays, JSON records, JSON Lines, CSV, XML, HTML, Markdown, DOCX, and XLSX. All exports are generated from the same underlying parsed data so they stay consistent.
Header detection uses keywords, uniqueness, and type patterns from several sample rows. It works well on typical tables, but ambiguous first rows can still be misclassified, so you can always override the mode manually.
When type detection is enabled, the converter looks for plain numerals, common boolean words, explicit “null” markers, and ISO-like date strings. Everything else is treated as text, and mixed columns are classified by whichever type appears most often.
Parsing and export happen in the page, and files are processed locally; nothing is uploaded. A spreadsheet helper script may be fetched for XLSX export, but your table contents are not included in that request.
Yes. Set header mode to “none” to treat every row as data. JSON arrays, XML, and CSV still export correctly, but formats that need field names such as JSON objects, JSON Lines, and SQL require a header or explicit column keys.
The parser computes the maximum column count and pads shorter rows with empty cells. It records warnings for any row whose length differs from the maximum so you can review and clean up irregular lines before exporting.
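That padding-and-warning step can be sketched in a few lines; the warning wording here is illustrative, not the converter's exact message.

```python
def pad_rows(rows: list[list[str]]) -> tuple[list[list[str]], list[str]]:
    """Pad shorter rows with empty cells up to the widest row, and record a
    warning for every row whose length differs from that maximum."""
    width = max(len(r) for r in rows)
    warnings, padded = [], []
    for i, r in enumerate(rows, start=1):
        if len(r) != width:
            warnings.append(f"Row {i}: expected {width} fields, found {len(r)}.")
        padded.append(r + [""] * (width - len(r)))
    return padded, warnings
```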
There is no hard coded maximum, but everything runs in your device’s memory. Preview tables are capped at 1000 rows for usability, while exports use the full parsed dataset. Very large files may feel slow or unresponsive on older hardware.
A “text” label means strings dominate that column after type detection. This often indicates identifiers, mixed formats, or values containing symbols. If you expected pure numbers or dates, check for stray characters and adjust trimming, delimiter, or type detection settings.