Introduction:

Tab-separated values (TSV) files store a table as plain text, with each newline representing a row and each tab marking the next field. That simplicity makes TSV easy to exchange, but it also means one structural mistake can shift every downstream column. This converter parses the text first and shows how the table will be understood before you send it into another format.

The ambiguous cases are where most import problems begin. A first row might be labels or it might be real data, quoted cells can hide tabs or line breaks, and uneven rows can conceal missing fields until a database or spreadsheet reads them differently than you expected. Catching that structure early is more valuable than producing a fast but misleading export.

This tool is built for the moment when the raw text is still fixable: checking a pasted report, cleaning a vendor extract, or reshaping a quick export for JSON, SQL, XML, Markdown, HTML, CSV, DOCX, or XLSX. Instead of jumping straight to the final format, you see row count, column count, delimiter choice, header detection, dominant type badges, and warnings while the source is still easy to reason about.

The page also separates presentation from schema. You can inspect the parsed table, profile each column, and override display labels or machine-facing keys without rewriting the raw input. That is especially helpful when the source contains duplicate headers, blank headers, or human-readable labels that are awkward to reuse as keys.

A clean preview still deserves caution. Type detection can turn account codes into numbers, date detection only recognizes ISO-like timestamps when enabled, and spreadsheet-oriented exports preserve formula-like cells rather than neutralizing them for you. The goal is clarity about structure, not a guarantee that every downstream tool will interpret the content the way you intend.

Everyday Use & Decision Guide:

A good first pass is to paste the source text and leave Delimiter on tab and Header row on auto. The summary badges tell you quickly whether the parser saw the file the way you did. If the delimiter badge is wrong or the page claims Header detected when row 1 is really data, fix that before reading any export tab.

  • Keep the first row as a header only when it truly contains labels. Header-aware outputs such as JSON, JSON Lines, and SQL depend on that decision.
  • Turn Detect data types off for ZIP codes, SKUs, account numbers, or IDs with leading zeros. Otherwise those values can be coerced into numbers and lose their original shape.
  • Turn Detect dates on only when the source uses ISO-style dates or timestamps and you want typed date strings instead of plain text.
  • Review warning badges before copying any output. Irregular row lengths and duplicate headers are often more important than a successful conversion preview.
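The leading-zero hazard above is easy to reproduce. Here is a minimal sketch of what naive numeric detection does to an identifier (hypothetical logic for illustration, not this tool's actual code):

```python
def typed_view(cell: str, detect_types: bool):
    """Show what a naive numeric detector does to an identifier."""
    if detect_types and cell.isdigit():
        return int(cell)  # "00123" becomes 123: leading zeros are gone
    return cell

print(typed_view("00123", detect_types=True))   # 123
print(typed_view("00123", detect_types=False))  # 00123
```

With detection off, the identifier survives as text; with detection on, the zeros are unrecoverable in any typed export.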

The most reliable trust check is to compare the Data tab with the Profile tab. If the row preview looks right and the profile for a sensitive column shows the expected type, sample value, null count, and uniqueness pattern, the export tabs are much more likely to be meaningful.

This is a strong fit for cleaning structured text that is mostly right but not fully trustworthy. It is a weak fit for semantic validation. A column that parses as numeric is not automatically a valid account number, and a column that looks like a date is not automatically in the business timezone you expected.

Technical Details:

The parser starts by normalizing line endings and resolving the active delimiter from your chosen mode or from an automatic scan of common separators. It then reads the text character by character using the selected quote character and escape style, which allows quoted delimiters and quoted newlines to survive inside a field. If a quoted field never closes, the parser stops with a specific error instead of guessing where the row should end.
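The quoted-parsing behavior described here can be sketched in a few lines. This simplified parser (an illustration, not the tool's implementation) honors repeated-quote escapes, lets quoted delimiters and newlines survive inside a field, and fails loudly on an unclosed quote; backslash escaping and line-ending normalization are omitted:

```python
def parse_rows(text: str, delim: str = "\t", quote: str = '"'):
    """Split text into rows of fields, honoring quoted delimiters and newlines."""
    rows, row, field = [], [], []
    in_quotes = False
    i, n = 0, len(text)
    while i < n:
        ch = text[i]
        if in_quotes:
            if ch == quote:
                if i + 1 < n and text[i + 1] == quote:
                    field.append(quote)  # doubled quote = literal quote char
                    i += 1
                else:
                    in_quotes = False    # closing quote
            else:
                field.append(ch)         # delimiters/newlines kept as content
        elif ch == quote:
            in_quotes = True
        elif ch == delim:
            row.append("".join(field)); field = []
        elif ch == "\n":
            row.append("".join(field)); field = []
            rows.append(row); row = []
        else:
            field.append(ch)
        i += 1
    if in_quotes:
        raise ValueError("Reached end of file while inside quotes.")
    if field or row:                     # flush a final row with no trailing newline
        row.append("".join(field))
        rows.append(row)
    return rows
```

Feeding it `a\t"b\tc"` yields two fields, with the quoted tab preserved inside the second; feeding it an unterminated quote raises the error instead of guessing a row boundary.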

Once parsing succeeds, the tool computes the widest row and pads shorter rows with blanks so every export is rectangular. That rectangularization is visible in the warnings list. If one row has fewer fields than the rest, the output remains usable, but the warning makes it clear that missing cells were synthesized rather than present in the source.
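The pad-and-warn step is simple to illustrate. This sketch mirrors the behavior described above, with the warning wording invented for the example:

```python
def rectangularize(rows):
    """Pad short rows with blanks so every row has the widest column count."""
    width = max((len(r) for r in rows), default=0)
    padded, warnings = [], []
    for i, row in enumerate(rows, start=1):
        if len(row) < width:
            warnings.append(f"Row {i} had {len(row)} of {width} fields; blanks were added.")
        padded.append(row + [""] * (width - len(row)))
    return padded, warnings
```

The synthesized cells make exports rectangular, but the warning list is what tells you they were never in the source.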

Header detection in auto mode is a heuristic score, not a single yes-or-no rule. The first row is compared with sample rows below it for common header keywords, type-distribution changes, repeated values, proper-name patterns, and data-like token shapes. Missing labels fall back to Column 1, Column 2, and so on, while duplicates are deduplicated so later exports have stable names.
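A deliberately tiny stand-in for that scoring, using just one of the signals (a type-distribution change between the first row and the rows below it):

```python
def looks_numeric(cell: str) -> bool:
    try:
        float(cell)
        return True
    except ValueError:
        return False

def guess_header(rows) -> bool:
    """Guess a header when row 1 is all text but the body contains numbers.

    One signal only; the real detector combines several, as described above.
    """
    if len(rows) < 2:
        return False
    first, sample = rows[0], rows[1:6]
    first_has_numbers = any(looks_numeric(c) for c in first if c)
    body_has_numbers = any(looks_numeric(c) for r in sample for c in r if c)
    return (not first_has_numbers) and body_has_numbers
```

Even this one signal shows why auto mode can be wrong: a data file whose first row happens to contain no numbers scores like a header, which is exactly the case the summary badges ask you to verify.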

The tool keeps two parallel views of the data. Raw cell text drives the table preview and text-oriented exports, while a typed layer is built when detection is enabled. In that typed layer, booleans such as true and yes, numeric tokens, explicit null markers, and optionally ISO-like dates are coerced into structured values. That decision directly affects column profiles, JSON output, SQL literals, XML values, and null counts.
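The typed layer can be sketched per cell. The marker sets and date pattern below are assumptions for illustration, not the tool's exact rules:

```python
import re

ISO_LIKE = re.compile(r"^\d{4}-\d{2}-\d{2}(?:[T ]\d{2}:\d{2}(?::\d{2})?)?$")
NULL_MARKERS = {"null", "nil", "n/a", "na"}
TRUE_TOKENS, FALSE_TOKENS = {"true", "yes"}, {"false", "no"}

def coerce_cell(raw: str, detect_dates: bool = False, empty_as_null: bool = False):
    """Build the typed view of one cell: booleans, numbers, nulls, ISO-like dates."""
    token = raw.strip().lower()
    if token in TRUE_TOKENS:
        return True
    if token in FALSE_TOKENS:
        return False
    if token in NULL_MARKERS or (token == "" and empty_as_null):
        return None
    if token == "":
        return ""
    try:
        num = float(raw)
        return int(num) if num.is_integer() else num  # "00123" -> 123
    except ValueError:
        pass
    if detect_dates and ISO_LIKE.match(raw.strip()):
        return raw.strip()  # kept as a typed ISO-like date string
    return raw
```

Every branch here corresponds to a downstream effect the paragraph lists: the null branch changes null counts, the numeric branch changes JSON and SQL literals, and the date branch only fires when date detection is on.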

Header overrides let you separate human-facing labels from machine-facing keys. The display label is what you see in the table, while the key override becomes the raw key for object-style JSON and is also sanitized and deduplicated for machine-safe uses such as XML tag names and SQL identifiers. The preview limit only trims the on-page table, not the full export set, and XLSX is generated from the same export rows after the page loads the workbook library on demand.
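The sanitize-and-deduplicate step for machine-facing keys might look like this sketch (the exact replacement and numbering rules are assumptions):

```python
import re

def sanitize_keys(labels):
    """Turn display labels into machine-safe, deduplicated keys."""
    keys, counts = [], {}
    for i, label in enumerate(labels):
        key = re.sub(r"[^A-Za-z0-9_]+", "_", label.strip()).strip("_").lower()
        if not key:
            key = f"column_{i + 1}"   # blank label fallback
        if key[0].isdigit():
            key = f"_{key}"           # identifiers must not start with a digit
        seen = counts.get(key, 0)
        counts[key] = seen + 1
        keys.append(key if seen == 0 else f"{key}_{seen + 1}")
    return keys
```

Duplicate labels become `name`, `name_2`, and so on, which is why XML tags and SQL identifiers stay unique even when the source headers were not.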

Transformation Core:

The converter follows a fixed parse, normalize, and export pipeline.

TSV conversion stages
Stage | Rule | What you see
Delimiter resolution | Use the selected delimiter or auto-detect among common candidates | Delimiter badge in the summary
Quoted parsing | Honor the chosen quote character and either repeated-quote or backslash escapes | Parsed rows or a quote-related error
Row shaping | Pad short rows to the widest column count and warn when widths differ | Rectangular tables plus irregular-row warnings
Header resolution | Use explicit mode or score the first row against sample data | Header detected badge and object-style exports
Type coercion | Convert eligible strings to booleans, numbers, nulls, and optional ISO-like dates | Type summary badges, column profiles, typed JSON and SQL values
Schema cleanup | Apply display-label and key overrides, then sanitize machine-facing keys | Updated table headers, profile keys, XML tags, and SQL identifiers

Boundary Conditions:

TSV interpretation boundaries
Condition | Behavior
No header row | JSON, JSON Lines, and SQL do not emit header-based records.
Unclosed quote | The parser stops with "Reached end of file while inside quotes."
Irregular row lengths | Missing cells are filled with blanks and a warning is added.
Detect data types off | All values stay as text and the profile reflects string-heavy columns.
Detect dates on | Only ISO-like date patterns are converted to ISO timestamp strings.
Preview rows exceeded | The summary shows "Preview trimmed", but exports still use the full parsed dataset.

Step-by-Step Guide:

Use this sequence when you need a reliable conversion instead of a quick but fragile export.

  1. Paste the source into the TSV field or drop a text file onto it. The TSV Summary block should populate automatically.
  2. Check the badges for row count, column count, delimiter, and header status. If the delimiter or header interpretation looks wrong, stop there and correct it before moving on.
  3. Open Advanced and adjust Delimiter, Quote character, and Escape style until the Data tab matches the source rows. If you hit the "Reached end of file while inside quotes" error, fix the broken quote or choose the quote rule that matches the file.
  4. Set Header row to auto, Use first row, or Treat all as data, then use the column naming table to correct any duplicate, blank, or awkward labels.
  5. Choose typing behavior with Detect data types, Detect dates, and Empty as null, and confirm the result in the Profile tab before trusting object-style exports.
  6. If you need machine-facing output, set SQL table name or the XML naming controls first, then open JSON, JSON Lines, SQL, or XML and verify one sample row against the preview.
  7. Only after the preview and profile agree should you copy or download the final format. For spreadsheet-style output, review formula-like cells before choosing Download XLSX or a CSV handoff.

The safest conversion is the one where the summary badges, row preview, column profile, and final export all tell the same structural story.

Interpreting Results:

The top summary is the fastest health check. Delimiter and Header detected are the two readings that shape almost every downstream export, so treat them as primary signals rather than decorative badges.

  • Header confidence cue: if Header detected is on but row 1 is really data, every header-based export will be shifted by one row.
  • Rectangularization cue: an irregular-row warning means missing cells were filled with blanks to keep outputs rectangular.
  • Preview cue: Preview trimmed affects only the on-page table. The exported dataset still includes all parsed rows.
  • Typing cue: a dominant type badge such as numeric or date is a heuristic summary, not a schema guarantee. Verify sensitive columns in Profile.
  • Header-based export cue: empty or disabled object-style tabs usually mean the file is being treated as data-only and needs a header decision.

A successful parse can still create false confidence. Codes with leading zeros may have been coerced, duplicate labels may have been renamed, and spreadsheet-style formulas remain untouched in the final cells. Before handing the output to another system, compare one row in Data with the corresponding JSON, SQL, XML, or spreadsheet-oriented export you plan to use.

Worked Examples:

Example 1: Clean report import. A file with a true header row, consistent tabs, and ISO timestamps usually parses cleanly on the first pass. The summary shows a header, the profile identifies numeric and text columns, and header-based exports such as JSON records and SQL become usable immediately.

Example 2: Leading-zero identifiers. A customer-code column such as 00123 looks numeric to the type detector and can become 123 in typed exports. The corrective path is to turn Detect data types off, rerun the parse, and confirm in Profile that the column now remains text.

Example 3: Troubleshooting malformed input. If a file arrives with semicolons instead of tabs or with one unclosed quote, the row count, delimiter badge, or parser error will look wrong immediately. Adjust the delimiter or quote settings first, then recheck the table before trusting any export.

FAQ:

Does this upload my table anywhere?

Parsing, profiling, and the text-oriented exports run in the page. If you choose Download XLSX, the page loads the workbook library referenced by the package at that moment and then builds the spreadsheet from the same parsed rows.

Why are the JSON, JSON Lines, or SQL outputs unavailable?

Those formats depend on column names. If the file is being treated as data-only, switch Header row to Use first row or provide a source that really has labels.

Why did 00123 turn into 123?

The type detector recognized the cell as numeric. Disable Detect data types for identifiers that must keep leading zeros.

What causes the quote-related parse error?

The active quote and escape rules never found a closing quote. Fix the source text or change Quote character and Escape style so they match the file.

Why were some headers renamed?

Blank headers are replaced with fallback labels and duplicates are adjusted so later exports have unique keys and identifiers.

Should I worry about formula-like cells in CSV or XLSX output?

Yes. This converter preserves the cell text it parsed. Review values that start with spreadsheet formula triggers such as =, +, -, or @ before opening the result in spreadsheet software.
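That review can be partly automated. Here is a small scan for the trigger characters listed above (a reader-side check you could run on the exported rows, not a feature of this converter):

```python
FORMULA_TRIGGERS = ("=", "+", "-", "@")

def flag_formula_cells(rows):
    """List (row, column, value) for cells that start with a formula trigger."""
    flagged = []
    for r, row in enumerate(rows, start=1):
        for c, cell in enumerate(row, start=1):
            if cell.startswith(FORMULA_TRIGGERS):
                flagged.append((r, c, cell))
    return flagged
```

Note that a negative number such as -5 is flagged too; the scan surfaces candidates for review rather than deciding which cells are actually dangerous.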

Glossary:

Delimiter
The character or character sequence used to separate one field from the next.
Header row
The first row when it is interpreted as column labels rather than ordinary data.
Quoted field
A cell wrapped in a quote character so delimiters or line breaks inside the value are treated as content.
Column profile
The per-column summary showing dominant type, null count, empty count, uniqueness, and a sample value.
JSON Lines
A newline-delimited stream of one JSON object per row.
Null
An explicit missing value, distinct from an empty string, when type detection and null handling convert it that way.
