Extensible Markup Language documents are structured text files that arrange information as nested tags and values so you can represent records, configuration, feeds, and logs in a single tree. Understanding how that structure behaves across a whole file helps you decide how to extract rows, check feeds from another team, or prepare imports for other tools.
Here you paste or drop an Extensible Markup Language document and the converter inspects every element to show counts, depth, common paths, and a quick preview of candidate records. It is especially useful when you receive unfamiliar data from another system and want a gentle schema overview before you write any code.
You provide the document text or a plain text file, then choose or adjust the record path so repeating items turn into tidy rows that resemble a small spreadsheet. From there you can review nested structure in a tree view, inspect a path-based schema summary, and switch between full-document output and record-level formats.
For example, you might paste a catalog of products and immediately see that one tag repeats hundreds of times, making it a natural source of rows for reporting. Remember that this converter focuses on structure and simple values rather than business meaning, so always review a sample and avoid pasting confidential data when a safer sample will do.
Extensible Markup Language (XML) is treated as a tree of elements, attributes, and text nodes, and the converter walks this tree to profile structure before extracting records. It counts every element node, tallies attributes, tracks unique tag names, and records the maximum depth so you can quickly gauge document size and complexity.
JavaScript Object Notation (JSON) outputs represent the full document as a nested object whose top level key is the root element name. A parallel record view selects one repeating path, converts each matching element into an object respecting attributes and child elements, and then flattens nested keys into dot paths for tabular exports.
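The dot-path flattening step can be sketched as follows. This is an illustrative helper, not the converter's actual code, and it assumes each record arrives as a plain dictionary (arrays and other shapes are out of scope for the sketch):

```python
def flatten(record, prefix=""):
    """Flatten a nested record into dot-path keys, e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
    flat = {}
    for key, value in record.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))  # recurse into nested objects
        else:
            flat[path] = value  # leaf value keeps its dot path as the column name
    return flat
```

For example, `flatten({"id": "1", "score": {"math": 90, "art": 80}})` yields `{"id": "1", "score.math": 90, "score.art": 80}`, which maps directly onto the tabular exports.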
Record detection relies on a schema profile where each unique element path stores how often it appears, how many child tags and attributes it carries, and whether it contains text. Paths that repeat and have richer child structure score higher, so auto detection typically selects the element that behaves most like a row in a table.
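A minimal sketch of this profiling pass, using Python's standard `xml.etree` module. The `>` path separator and the scoring heuristic are assumptions chosen for illustration, not the page's exact formula:

```python
from collections import defaultdict
from xml.etree import ElementTree as ET

def profile(xml_text):
    """Build a per-path schema profile: count, attribute total, child tags, text flag."""
    root = ET.fromstring(xml_text)
    stats = defaultdict(lambda: {"count": 0, "attrs": 0, "children": set(), "has_text": False})

    def walk(element, path):
        entry = stats[path]
        entry["count"] += 1
        entry["attrs"] += len(element.attrib)
        if element.text and element.text.strip():
            entry["has_text"] = True
        for child in element:
            entry["children"].add(child.tag)
            walk(child, f"{path}>{child.tag}")

    walk(root, root.tag)
    return dict(stats)

def best_record_path(stats):
    """Pick the repeating path with the richest child structure (hypothetical score)."""
    def score(path):
        entry = stats[path]
        repeats = entry["count"] if entry["count"] > 1 else 0  # singletons score zero
        return repeats * (1 + len(entry["children"]))
    return max(stats, key=score)
```

On a small catalog, the repeating item path outscores both the root and its leaf children, matching the behavior described above.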
| Symbol | Meaning | Unit/Datatype | Source |
|---|---|---|---|
| | Set of all element nodes in the parsed document | Set | Derived from XML tree |
| | Single element node within the document | Node | Derived from XML tree |
| P | Chosen record path made of element names | String | Auto detection or user input |
| R | Subset of elements whose path equals P | Set | Filtered XML tree |
| \|R\| | Number of records produced from path P | Integer | Derived from R |
With the bundled sample, the root element named `<dataset>` contains two top level `<item>` children, so the repeating path `dataset>item` becomes the default record source.
Each record turns into a JSON object with fields such as identifiers, scores, and nested project entries, while the summary badges report counts and depth for the entire document.
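Record extraction under those defaults can be approximated like this. The `@` attribute prefix and `#text` key mirror the options described on this page, but the functions themselves are an illustrative sketch, not the converter's implementation:

```python
from xml.etree import ElementTree as ET

def records_at(xml_text, record_path, attr_prefix="@", text_key="#text"):
    """Collect elements whose '>'-joined path matches record_path, as dicts."""
    root = ET.fromstring(xml_text)
    matches = []

    def walk(element, path):
        if path == record_path:
            matches.append(element)
        for child in element:
            walk(child, f"{path}>{child.tag}")

    walk(root, root.tag)
    return [to_obj(m, attr_prefix, text_key) for m in matches]

def to_obj(element, attr_prefix, text_key):
    """Map attributes to prefixed keys, children to nested values, leaf text to strings."""
    obj = {attr_prefix + name: value for name, value in element.attrib.items()}
    for child in element:
        if len(child) or child.attrib:
            obj[child.tag] = to_obj(child, attr_prefix, text_key)  # nested structure
        else:
            obj[child.tag] = (child.text or "").strip()  # simple leaf value
    if not len(element) and element.text and element.text.strip():
        obj[text_key] = element.text.strip()  # leaf element with attributes and text
    return obj
```

Running this against a two-item `dataset` sample produces one dictionary per `<item>`, ready for flattening and tabular export.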
| Input | Accepted families | Output | Encoding / precision | Rounding |
|---|---|---|---|---|
| XML source | Pasted text or dropped .xml/.txt files containing well formed XML | Parsed document tree and structural metrics | Native text strings in the page | Not applicable |
| Record view | Elements matching the chosen record path | Full document JSON and record JSON array | Pretty printed JSON with configurable indentation | No numeric rounding applied |
| Tabular exports | Flattened record rows with headers | CSV, TSV, HTML table, Markdown table | Text values separated by comma or tab characters | Numbers rendered as plain strings |
| Database exports | Same flattened rows | SQL INSERT statements with quoted literals | Booleans as TRUE or FALSE, nulls as NULL | Numeric text preserved as given |
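The tabular and database export rows above can be sketched with two small helpers. Header ordering, quoting, and the literal mapping follow the table's description, but the details are assumptions rather than the page's exact implementation:

```python
import csv
import io

def rows_to_csv(rows, delimiter=","):
    """Union the headers across all rows, then emit delimited text (CSV or TSV)."""
    headers = []
    for row in rows:
        for key in row:
            if key not in headers:
                headers.append(key)  # preserve first-seen column order
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=headers, delimiter=delimiter, lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)  # missing keys become empty cells
    return buf.getvalue()

def sql_literal(value):
    """Render a value as a SQL literal: NULL, TRUE/FALSE, or a quoted string."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):
        return "TRUE" if value else "FALSE"
    return "'" + str(value).replace("'", "''") + "'"  # escape embedded quotes

def insert_statement(table, row):
    """Build one INSERT statement from a flattened record row."""
    columns = ", ".join(row)
    values = ", ".join(sql_literal(v) for v in row.values())
    return f"INSERT INTO {table} ({columns}) VALUES ({values});"
```

Passing `delimiter="\t"` to `rows_to_csv` gives the TSV variant; the SQL helper renders booleans as TRUE or FALSE and nulls as NULL, matching the table above.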
| Field | Type | Min | Max | Step / pattern | Placeholder or notes |
|---|---|---|---|---|---|
| Indent size | Number | 2 | 8 | Whole numbers only | Controls JSON and XML pretty printing |
| Record preview rows | Number | 25 | 1000 | Increments of 25 | Preview trims beyond this limit |
| Tree depth | Number | 3 | 12 | Whole numbers only | Controls depth of tree preview |
| Attribute prefix | Text | 1 | 5 | Arbitrary characters | Prepended to attribute names in records |
| Text key | Text | 1 | 12 | Arbitrary characters | Key used for text node values |
| SQL table name | Text | 1 | 64 | Sanitized to lower case identifier | Defaults to dataset if left blank |
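The SQL table name row says the input is sanitized to a lower case identifier and defaults to `dataset` when blank; one plausible sketch of that rule, with the exact character set and fallback behavior assumed:

```python
import re

def sanitize_table_name(raw, default="dataset", max_length=64):
    """Lower-case, replace disallowed characters with underscores, fall back to default."""
    cleaned = re.sub(r"[^a-z0-9_]+", "_", raw.strip().lower()).strip("_")
    return cleaned[:max_length] if cleaned else default
```

For example, `sanitize_table_name("Staff Directory!")` returns `staff_directory`, and an empty input falls back to `dataset`.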
Files are processed locally; nothing is uploaded. No data is transmitted or stored server side beyond the downloads you explicitly save from the page.
Spreadsheet exports rely on a small helper script loaded on demand, but all conversions and metrics are computed deterministically in the page from the XML you provide.
XML structure inspection and conversion from raw document to JSON, tables, and SQL inserts follows a short, repeatable sequence.
For a staff directory export, choose the person level node as the record path, then copy CSV or SQL exports directly into your spreadsheet or database environment.
Pro tip: once you find a good record path, reuse it across similar feeds to keep downstream schemas consistent.
Your XML is parsed directly in the page and used only to build the previews and exports you see. Files are processed locally; nothing is uploaded, and no server side storage or logging of document content occurs.
Close the page or clear the editor to remove the content from view.

Element counts, attribute counts, and depth are derived directly from the parsed document tree, and exports reflect that same tree. Trimming, whitespace collapse, and type coercion options can change how text appears, so review a few records before relying on the output.
If the source XML is not well formed, results may be incomplete or unavailable.

You can paste XML text or drop simple .xml or .txt files into the editor. Namespaces and attributes are supported, but the document must be well formed, with a single root element, for parsing to succeed.
Once the page is open, parsing, schema profiling, and most exports run entirely in the page. Generating spreadsheet files may require a helper script; if it cannot load, you will see a warning and can still use CSV and other text formats.
Keep a local copy of important exports rather than relying on repeated sessions.

Choose a record path that matches the element you consider a row, then confirm that the Data tab shows sensible headers and values. From there, open the CSV or TSV tab to copy or download a table ready for spreadsheets or databases.
If the Data tab stays empty, adjust the record path in the Advanced panel.

When automatic detection chooses a path that yields very few records or missing fields, it usually indicates that the document mixes several structures. In those situations treat the summary as a hint, experiment with nearby paths, and confirm that key fields appear consistently before adopting the result.
Structural validity does not guarantee that the business data itself is correct.

- Tip: Use the schema tab to spot deeply nested paths with high counts and promote them to record paths when appropriate.
- Tip: Keep the attribute prefix short, such as a single character, to avoid excessively long column names after flattening.
- Tip: When preparing data for databases, align the SQL table name and column prefixes with naming conventions used in your schema migrations.
- Tip: Use lower preview row counts while experimenting with paths on large documents to keep the interface responsive and readable.
- Tip: Turn off whitespace collapse when text nodes carry meaningful line breaks, then reenable it once structural fields are confirmed.
- Tip: For repeat use cases, note the chosen record path and option settings in project documentation to ensure reproducible conversions.