Introduction:

Tab-separated values (TSV) files are plain-text tables where each line represents a row and each tab character marks the next column. They are useful when you want a simple tab-separated values converter for moving data between analytics tools or for quick inspections. This page focuses on making that conversion predictable while showing where the table structure might be messy.

You paste or drop your text, and the converter inspects the first rows to pick a likely delimiter and header row. It reports how many rows and columns it sees and highlights common data types. That summary helps you catch issues early, before you send the output to another system.

Once the layout looks right, you can explore different views of the same dataset. One tab shows a scrollable table that mirrors the parsed text, and another shows a profile of each column with counts of empty and null values. Other views focus on ready-to-copy representations such as structured text, arrays, markup, or database insert statements.

Because the parser is fully automatic, it can still make mistakes when data is inconsistent or when multiple value types share the same pattern. For best results, keep similar rows together, use consistent delimiters, and scan the warnings panel whenever something looks unexpected. Be cautious with sensitive information and avoid pasting anything that should not be handled on your device.

Technical Details:

Tab-Separated Values (TSV) and related delimited text formats represent rectangular data as rows separated by newlines and fields separated by a chosen delimiter. The core quantities the converter works with are row count, column count, header presence, and basic data types inferred per column.

After parsing, the data is transformed into two parallel structures — raw strings and coerced values. Coerced values are classified as numbers, booleans, dates, strings, objects, nulls, or empty cells. These derived structures drive the profile view, type summaries, and structured exports such as JSON, XML, SQL, CSV, HTML, Markdown, DOCX, and XLSX.
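
To make the coercion step concrete, the following TypeScript sketch shows one way a cell could be classified. The function name, option names, and exact patterns are illustrative assumptions rather than the converter's actual source, and the real rules (for example, object detection) are richer.

```ts
// Illustrative sketch only; the converter's actual coercion rules may differ.
type Coerced = number | boolean | Date | string | null;

interface CoerceOptions {
  detectTypes: boolean;
  detectDates: boolean;
  emptyAsNull: boolean;
}

function coerceCell(raw: string, opts: CoerceOptions): Coerced {
  const value = raw.trim();
  // Blank after trimming: either an empty cell or an explicit null.
  if (value === "") return opts.emptyAsNull ? null : "";
  if (!opts.detectTypes) return raw;
  if (/^null$/i.test(value)) return null;                       // explicit null marker
  if (/^(true|false)$/i.test(value)) return value.toLowerCase() === "true";
  // Plain numerals only: grouped forms like "1,234.50" stay strings,
  // while zero-padded identifiers like "00123" do match (a documented pitfall).
  if (/^-?\d+(\.\d+)?([eE][+-]?\d+)?$/.test(value)) return Number(value);
  if (opts.detectDates && /^\d{4}-\d{2}-\d{2}([T ]\d{2}:\d{2}(:\d{2})?)?/.test(value)) {
    const parsed = new Date(value);
    if (!Number.isNaN(parsed.getTime())) return parsed;         // ISO-like date
  }
  return value;                                                 // fall back to string
}
```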

Interpretation focuses on how many fields are non-empty, how often values repeat, and which coarse type dominates each column. A profile row lists non-empty, null, empty, and unique counts alongside a sample value, making it easier to spot identifier columns, sparse metrics, and mismatched types before exporting.
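
A profile of this kind can be gathered in a single pass per column. The sketch below is a hypothetical illustration of the counts described above; the names and types are assumed, not taken from the tool's source.

```ts
// Hypothetical single-pass profile over one column of coerced cells.
interface ColumnProfile {
  nonEmpty: number;
  nullCount: number;
  emptyCount: number;
  unique: number;
  sample: string | null;
}

function profileColumn(cells: (number | boolean | Date | string | null)[]): ColumnProfile {
  const seen = new Set<string>();
  let nonEmpty = 0, nullCount = 0, emptyCount = 0;
  let sample: string | null = null;
  for (const cell of cells) {
    if (cell === null) { nullCount++; continue; }               // explicit null markers
    const text = cell instanceof Date ? cell.toISOString() : String(cell);
    if (text === "") { emptyCount++; continue; }                // blank after trimming
    nonEmpty++;
    seen.add(text);
    if (sample === null) sample = text;                         // first non-empty value
  }
  return { nonEmpty, nullCount, emptyCount, unique: seen.size, sample };
}
```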

Comparability across runs depends on keeping delimiter settings, header mode, trimming, and type detection choices stable. Header detection can be fixed to “use first row” or “treat all as data,” or left in automatic mode where a scoring heuristic compares the first line against several sample rows for keyword matches and type distribution differences.
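
In automatic mode, the scoring heuristic might resemble the following sketch. The keyword list, scoring weights, and threshold are assumptions chosen for illustration, not the tool's actual values.

```ts
// Illustrative header heuristic: compare the first line against sample rows.
const HEADER_KEYWORDS = ["id", "name", "date", "count", "type", "key"]; // assumed list

function looksLikeHeader(first: string[], samples: string[][]): boolean {
  let score = 0;
  // Keyword matches in the candidate header row raise the score.
  score += first.filter(f => HEADER_KEYWORDS.includes(f.toLowerCase())).length;
  // Headers are usually non-numeric while data rows often contain numbers.
  const numeric = (s: string) => /^-?\d+(\.\d+)?$/.test(s.trim());
  const headerNumeric = first.filter(numeric).length;
  const dataNumericPerRow = samples.flat().filter(numeric).length / Math.max(1, samples.length);
  if (headerNumeric === 0 && dataNumericPerRow > 0) score += 2;
  return score >= 2; // assumed threshold
}
```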

  1. Normalize line endings so all rows use a single newline style.
  2. Resolve the delimiter from user choice or automatic detection over common candidates.
  3. Parse each line with the active quote and escape rules, honoring quoted newlines and escaped delimiters.
  4. Skip or keep empty rows, then pad all rows so they share the same column count.
  5. Decide whether the first row is a header based on header mode and heuristic scoring.
  6. Coerce each cell to number, boolean, date, null, or string according to type detection settings.
  7. Build preview rows, full export rows, column profiles, and multi-format outputs from the resulting arrays and objects.
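
Step 3 carries most of the complexity. A minimal scanner for the repeat-quote convention could look like the sketch below; the function name is assumed, the delimiter is treated as a single character, and backslash mode and empty-row skipping (step 4) are left out for brevity.

```ts
// Minimal scanner with "" as the quote escape; handles quoted delimiters and newlines.
function parseDelimited(text: string, delimiter: string, quote = '"'): string[][] {
  const rows: string[][] = [[]];
  let field = "", inQuotes = false;
  for (let i = 0; i < text.length; i++) {
    const ch = text[i];
    if (inQuotes) {
      if (ch === quote && text[i + 1] === quote) { field += quote; i++; } // escaped quote
      else if (ch === quote) inQuotes = false;
      else field += ch;                 // quoted delimiters and newlines kept literally
    } else if (ch === quote) {
      inQuotes = true;
    } else if (ch === delimiter) {
      rows[rows.length - 1].push(field); field = "";
    } else if (ch === "\n") {
      rows[rows.length - 1].push(field); field = ""; rows.push([]);
    } else if (ch !== "\r") {
      field += ch;                      // line endings normalized by skipping \r
    }
  }
  rows[rows.length - 1].push(field);
  if (inQuotes) throw new Error("Reached end of file while inside quotes.");
  return rows;                          // later steps pad rows and drop empty ones
}
```
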
Parameter overview for TSV parsing and export

| Parameter | Meaning | Unit/Datatype | Typical range | Sensitivity | Notes |
| --- | --- | --- | --- | --- | --- |
| Delimiter mode | Selects the field separator or enables automatic detection. | Enumeration | Auto, comma, semicolon, tab, pipe, space, custom | High | Controls how lines are split into columns. |
| Custom delimiter | Explicit delimiter when presets do not match. | String | 1–4 characters | High | Defaults to comma if left blank in custom mode. |
| Quote character | Marks fields that may contain delimiters or newlines. | Double quote, single quote, or none | Double quote | High | Unclosed quotes trigger a parsing error. |
| Escape style | Chooses the quoting escape convention inside fields. | Enumeration | Repeat quote or backslash | Medium | Backslash mode interprets \n and \t inside quoted values. |
| Header mode | Controls whether the first row becomes column labels. | Enumeration | Auto, header, none | High | Affects JSON objects, JSON Lines, SQL, and XML tags. |
| Preview rows | Limits how many rows appear in the on-page table. | Integer | 10–1000, default 200 | Low | Exports always include all parsed rows. |
| Trim fields | Removes leading and trailing whitespace around values. | Boolean | On or off | Medium | Influences emptiness tests and type classification. |
| Skip empty rows | Drops rows where all fields are blank after trimming. | Boolean | On or off | Medium | Reduces noise in previews and exports. |
| Detect data types | Converts obvious numbers, booleans, and dates from text. | Boolean | On or off | High | Also drives column type labels in the profile view. |
| Detect dates | Recognizes ISO-like timestamps for date classification. | Boolean | On or off | Medium | Applies only when type detection is enabled. |
| Empty as null | Treats empty strings as explicit null values. | Boolean | On or off | Medium | Affects JSON, SQL, XML, and profiling null counts. |
| XML root and row tags | Names the outer and row elements in XML exports. | Strings | Up to 32 characters each | Low | Invalid characters are stripped and leading digits prefixed. |
| Row index attribute | Optionally adds a 1-based index attribute to each XML row. | Boolean | On or off | Low | Helps trace exports back to original row positions. |
| SQL table name | Base name for generated INSERT statements. | String | Up to 64 characters | High | Sanitized to lowercase alphanumerics and underscores, with a safe prefix if needed. |
| Column naming overrides | Custom display labels and JSON keys per column. | Arrays of strings | One entry per column | High | Keys are deduplicated and sanitized before use in objects and XML tags. |

Validation and bounds for key inputs

| Field | Type | Min | Max | Step / pattern | Error text / behavior | Placeholder / default |
| --- | --- | --- | --- | --- | --- | --- |
| Input text | Multiline string | 1 character | Device memory | Arbitrary Unicode | “Unable to resolve delimiter.”, “Failed to parse TSV.”, “No rows detected after parsing.”, “Reached end of file while inside quotes.” | Sample table of identifiers, names, and scores |
| Preview rows | Number | 10 | 1000 | Step 10 | Values outside the range are clamped. | 200 |
| Custom delimiter | Text | 0 characters | 4 characters | Literal sequence | Falls back to comma when blank in custom mode. | “\|” |
| XML root tag | Text | 1 character | 32 characters | Letters, digits, underscore, dash, colon, dot | Invalid characters are dropped, leading digits prefixed. | rows |
| XML row tag | Text | 1 character | 32 characters | Letters, digits, underscore, dash, colon, dot | Invalid characters are dropped, leading digits prefixed. | row |
| SQL table name | Text | 1 character | 64 characters | Alphanumerics and underscores after sanitization | Empty or unsafe names are replaced with a safe fallback. | dataset |
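
The sanitization rules summarized above could be approximated as follows. Treat this as a sketch under assumed rules; the converter's exact behavior may differ in edge cases.

```ts
// Approximation of the documented XML tag and SQL name sanitization.
function sanitizeXmlTag(name: string, fallback: string): string {
  const cleaned = name.replace(/[^A-Za-z0-9_\-:.]/g, "").slice(0, 32); // drop invalid chars
  if (cleaned === "") return fallback;                                  // e.g. "rows" or "row"
  return /^\d/.test(cleaned) ? `_${cleaned}` : cleaned;                 // prefix leading digits
}

function sanitizeSqlName(name: string): string {
  const cleaned = name.toLowerCase().replace(/[^a-z0-9_]/g, "").slice(0, 64);
  if (cleaned === "") return "dataset";                                 // safe fallback
  return /^\d/.test(cleaned) ? `t_${cleaned}` : cleaned;                // safe prefix if needed
}
```
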
Input and output format families

| Input | Accepted families | Output | Encoding / precision | Rounding |
| --- | --- | --- | --- | --- |
| Text area or dropped file | TSV, CSV, delimited plain text | Parsed rows and columns | UTF-8 strings in memory | No additional rounding; numeric text preserved when not coerced. |
| Parsed table | Detected header and typed cells | JSON arrays and JSON records | Pretty-printed with two-space indentation | Numbers emitted exactly as stored when finite. |
| Parsed table | Header row and coerced values | JSON Lines (NDJSON) | One compact JSON object per line | No line-level aggregation or rounding. |
| Parsed table | Header row and typed cells | SQL INSERT statements | Quoted strings with doubled single quotes; TRUE/FALSE; NULL for missing values | Numeric values written as-is when finite. |
| Parsed table | Any delimited input | XML document | Escapes &, <, >, quotes, and apostrophes | No numeric formatting beyond string conversion. |
| Parsed table | Header and data rows | CSV, HTML, Markdown, DOCX, XLSX | Standard text encodings, spreadsheet tabular cells | Values exported as text unless previously coerced. |
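
As a concrete instance of the SQL export rules in this table, generating one INSERT statement might look like the sketch below. The function names are assumed, and column names are expected to be sanitized beforehand.

```ts
// Assumed sketch of one exported row following the documented SQL rules.
function toSqlLiteral(value: number | boolean | Date | string | null): string {
  if (value === null) return "NULL";                              // missing values
  if (typeof value === "boolean") return value ? "TRUE" : "FALSE";
  if (typeof value === "number" && Number.isFinite(value)) return String(value);
  const text = value instanceof Date ? value.toISOString() : String(value);
  return `'${text.replace(/'/g, "''")}'`;                          // doubled single quotes
}

function insertStatement(
  table: string,
  columns: string[],
  row: (number | boolean | Date | string | null)[],
): string {
  return `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${row.map(toSqlLiteral).join(", ")});`;
}
```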

Performance is effectively linear in the length of the input text because each character is visited once during parsing. Preview rendering focuses on at most 1000 rows, while exports iterate over the full parsed dataset. For very large files, perceived performance is determined mainly by browser memory and rendering overhead.

Parsing and exporting are deterministic for a fixed configuration: given the same input text, delimiter settings, header mode, trimming, and type detection choices, the resulting metrics and exports are identical. Column warnings and type summaries are recomputed whenever parameters change.

Assumptions & limitations

  • Input is treated as UTF-8 text; binary or mixed encodings may display incorrectly.
  • Automatic delimiter detection assumes a single dominant delimiter per file.
  • Header detection relies on keywords and type patterns and may misclassify highly irregular data.
  • Type detection expects plain numerals; grouped formats such as “1,234.50” are treated as strings.
  • Loose date recognition focuses on ISO-like patterns and a few common variations.
  • JSON, XML, and SQL exports do not enforce external schema constraints or database constraints.
  • Spreadsheet exports depend on a helper script that is loaded only when requested.
  • Heads-up: extremely wide tables or very long text fields can make preview scrolling slower on older devices.

Edge cases & error sources

  • Unbalanced quotes cause a hard parse error that stops processing until the text is fixed.
  • Lines with fewer or more fields than expected are padded or flagged as irregular in the warnings list.
  • Delimiter characters inside unquoted fields can shift column boundaries unexpectedly.
  • Backslash escapes are honored only in backslash mode and only inside quoted fields.
  • Empty rows are skipped only when all fields are blank after trimming.
  • Columns that mix numbers and text may be classified as text even when many cells are numeric.
  • Type detection can misinterpret identifiers such as “00123” as numbers rather than strings.
  • Date detection depends on recognizable patterns and does not validate calendar correctness.
  • Changing header mode after exporting may produce different JSON key sets or SQL column names.
  • Very large paste operations or drag-and-drop files may be constrained by available browser memory.

Privacy & compliance

Files are processed locally; nothing is uploaded. Structured exports are generated in the page and downloaded directly. Sensitive or regulated data should still be handled according to your organization’s privacy and compliance policies.

Step-by-Step Guide:

The converter is designed to turn pasted or dropped delimited files into consistent, reusable outputs without manual reformatting.

  1. Paste or drop your tabular text into the main input box marked TSV.
  2. Open the advanced panel and confirm the delimiter, quote style, and header mode match your source file.
  3. Decide whether to trim spaces, skip empty rows, and enable type and date detection.
  4. Review the summary badges for row and column counts, header status, delimiter label, and type mix.
  5. Use the Data and Profile tabs to confirm that important columns look correct and rename columns where helpful.
  6. Select an export tab such as JSON, SQL, XML, CSV, HTML, Markdown, DOCX, or XLSX and use the copy or download buttons.
  7. Check warnings whenever you see irregular row messages before using the exported data downstream.

Example run: paste a small team roster, let automatic header detection label “id”, “name”, and “team”, then export JSON Lines for ingestion into a logging or analytics pipeline.
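
For a roster like that, the JSON Lines output contains one compact object per row, along these lines (the values shown are illustrative):

```
{"id":1,"name":"Avery","team":"Platform"}
{"id":2,"name":"Kai","team":"Data"}
```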

The more consistently you configure delimiter and type settings between runs, the easier it becomes to compare exports and automate their use in scripts or data flows.

  • Pro tip: use the sample dataset button to see how header detection, type detection, and column profiling behave on a known input before pasting your own data.

Features:

  • Automatic delimiter detection across common separators, with a custom delimiter option for unusual layouts.
  • Heuristic header detection with support for forcing or disabling header usage as needed for a dataset.
  • Column profiling that reports type, non-empty, null, empty, and unique counts plus a sample value per column.
  • Multi-format exports including JSON arrays, JSON records, JSON Lines, XML, CSV, HTML, Markdown, DOCX, and XLSX.
  • Column renaming controls that adjust display labels, JSON keys, XML tags, and SQL identifiers while preserving uniqueness.
  • Copy and download helpers for each output so you can move structured data directly into editors, databases, or notebooks.

FAQ:

What formats can I export to?

You can export the parsed table as JSON arrays, JSON records, JSON Lines, CSV, XML, HTML, Markdown, DOCX, and XLSX. All exports are generated from the same underlying parsed data so they stay consistent.

How accurate is header detection?

Header detection uses keywords, uniqueness, and type patterns from several sample rows. It works well on typical tables, but ambiguous first rows can still be misclassified, so you can always override the mode manually.

How are data types detected?

When type detection is enabled, the converter looks for plain numerals, common boolean words, explicit “null” markers, and ISO-like date strings. Everything else is treated as text, and mixed columns are classified by whichever type appears most often.

Is my data stored or sent anywhere?

Parsing and export happen in the page, and files are processed locally; nothing is uploaded. A spreadsheet helper script may be fetched for XLSX export, but your table contents are not included in that request.

Can I use it without a header row?

Yes. Set header mode to “none” to treat every row as data. JSON arrays, XML, and CSV still export correctly, but formats that need field names such as JSON objects, JSON Lines, and SQL require a header or explicit column keys.
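
To illustrate the difference, the same two-row table might export in both shapes as follows (sample values assumed, shown compactly rather than pretty-printed):

```
Without a header (JSON arrays):  [["1", "Avery"], ["2", "Kai"]]
With a header (JSON records):    [{"id": "1", "name": "Avery"}, {"id": "2", "name": "Kai"}]
```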

What happens when rows have different lengths?

The parser computes the maximum column count and pads shorter rows with empty cells. It records warnings for any row whose length differs from the maximum so you can review and clean up irregular lines before exporting.

How big a file can I work with?

There is no hard-coded maximum, but everything runs in your device’s memory. Preview tables are capped at 1000 rows for usability, while exports use the full parsed dataset. Very large files may feel slow or unresponsive on older hardware.

What does a “text” column type imply?

A “text” label means strings dominate that column after type detection. This often indicates identifiers, mixed formats, or values containing symbols. If you expected pure numbers or dates, check for stray characters and adjust trimming, delimiter, or type detection settings.

Troubleshooting:

  • The summary shows zero rows: confirm your input has at least one non-empty line and the chosen delimiter actually appears in the text.
  • Columns are shifted: check for delimiter characters inside unquoted fields and adjust quote character or escape style.
  • Numbers appear as text: ensure type detection is enabled and that values do not contain thousands separators or extra symbols.
  • Dates are not recognized: enable date detection and verify the format matches ISO-like patterns or the supported loose formats.
  • JSON exports fail validation elsewhere: confirm that header labels are valid identifiers and that problematic columns are renamed.
  • SQL insert statements do not run: check that the table name matches your database schema and that column names do not collide with reserved words.
  • XML consumers reject the file: verify that root and row tags are valid for your system and that nested JSON-like values are escaped correctly.
  • Spreadsheet exports do nothing: ensure your connection allowed the helper script to load and try again after reloading the page.
  • Copy buttons appear to fail: some clipboard blockers interfere, so try again after granting permission or use the download option instead.

Advanced Tips:

  • Tip: Use automatic delimiter detection first, then lock the chosen delimiter for repeated runs of similar files to keep exports comparable.
  • Tip: When debugging messy imports, temporarily disable type detection so you can see the exact raw strings the parser receives.
  • Tip: Override JSON keys to match an existing API or schema, then use JSON Lines export as a quick way to seed logs or test endpoints.
  • Tip: Use column profiles to spot candidate primary keys by looking for columns with high uniqueness and very few null or empty values.
  • Tip: Add an XML row index attribute when you need a stable reference back to original line numbers after downstream processing or filtering.
  • Tip: Keep a saved snippet of a “known good” TSV sample and occasionally rerun it to confirm that settings and headers still behave as expected.

Glossary:

Tab-Separated Values (TSV)
Text format where fields in each row are separated by tab characters.
Delimiter
Character or token that marks where one field ends and the next begins.
Header row
First row that supplies human-readable column labels rather than data.
JSON Lines
Format with one JSON object per line, often used for logs and streaming.
Null value
Explicit marker that a field is missing or intentionally left blank.
Column profile
Summary of type, emptiness, null count, uniqueness, and a sample for a column.
Preview rows
Subset of rows shown in the on-page table to keep scrolling manageable.
SQL identifier
Sanitized name for tables or columns used in generated SQL statements.