YAML is a plain-text way to describe nested settings, lists, and records, and small indentation changes can alter the shape of the data in ways that are hard to catch by eye. That matters when the file controls deployments, service defaults, firewall rules, or feature flags. This converter parses YAML and turns it into structure summaries and alternate text outputs so you can review what the document means before you pass it to another system.
The tool is useful when you need more than a pretty print. It shows how many documents were found, what root types they use, how deeply values are nested, and which keys and scalar values appear first. That makes it easier to compare a vendor sample with your own file, inspect a change request, or explain a configuration shape to someone who does not want to read raw indentation.
Multi-document YAML deserves special attention because one pasted block can hold several logical documents separated by markers. The converter keeps those documents separate while it computes summaries and flattened paths, and it adds document labels to scalar exports when there is more than one document in the input. That prevents a quick export from blurring together values that came from different documents.
A common use case is receiving a deployment snippet from another team and needing fast answers to two questions: is the text valid enough to parse, and what would the leaf values look like as JSON, environment-style lines, or INI entries? Another is normalizing a long manifest before a code review so that indentation, quoting, and key order are consistent across the file. In both cases the value comes from seeing structure and derived output side by side rather than copying fragments into ad hoc scripts.
Structural clarity is not the same as semantic correctness. A document can parse cleanly and still fail the application that consumes it, and configuration text often contains credentials, hostnames, or internal paths that deserve careful handling. Treat the converter as a review and transformation aid, not as a schema validator or security control.
Paste YAML directly into the editor or drop a file onto it, and the parser recalculates the result panes from the current text. If the input is actually standalone JSON, the tool stops early and asks for YAML instead. That choice keeps the workflow focused on YAML review rather than becoming a general-purpose JSON converter with ambiguous behavior around formatting and document markers.
The first stop is usually the Overview tab. It reports per-document key counts, scalar counts, maximum depth, sample keys, and sample values, while the summary badges above the editor roll up total documents, total keys, scalar leaves, the deepest nesting observed, and the most common root types. Those numbers are practical signals: an unexpected extra document, a sudden jump in depth, or a root type of Array instead of Object usually tells you to slow down before exporting anything.
From there the Paths and Schema tabs answer different review questions. Paths lists the flattened leaf paths exactly as the converter walks them, including document number, type, and a short preview. Schema groups those paths and shows which types were observed at each one, how many documents contain it, and one example value. If you are asking "What values exist?" use Paths. If you are asking "What shape does this configuration usually have?" use Schema.
| Surface | What it gives you | Best used for |
|---|---|---|
| YAML | Normalized YAML text with the current indentation, wrapping, sorting, quoting, and anchor settings. | Preparing a cleaner version of the source for review or editing. |
| JSON | A JSON serialization of one parsed document or an array of documents. | Passing structured data into tools that already expect JSON. |
| Env | Scalar leaf values rewritten as key-equals-value lines. | Checking which settings can be mapped into environment variables. |
| INI | Scalar leaf values grouped into sections derived from the first path segment. | Bridging newer YAML sources into older configuration conventions. |
| Properties | Flattened path keys paired with scalar values as text lines. | Keeping the original path naming visible in a simple export format. |
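The INI surface in the table above can be sketched as a small grouping step: scalar leaves are bucketed by their first path segment, which becomes the section name. This is a minimal illustration, not the tool's code; the exact key normalization (here, dots to underscores and lowercasing) is an assumption based on the description.

```python
def to_ini(leaves):
    """Sketch: group (path, value) scalar leaves into INI sections.

    The section name is the first path segment; the remaining path is
    normalized into a lowercase key. The normalization rule here is an
    assumed simplification of what the converter actually does.
    """
    sections = {}
    for path, value in leaves:
        first, _, rest = path.partition(".")
        section = first.lower()
        # Fall back to the segment itself when the path has no remainder.
        key = rest.replace(".", "_").lower() or first.lower()
        sections.setdefault(section, []).append(f"{key} = {value}")
    lines = []
    for name, entries in sections.items():
        lines.append(f"[{name}]")
        lines.extend(entries)
        lines.append("")  # blank line between sections
    return "\n".join(lines)
```

For example, two leaves under `environment` would land in one `[environment]` section with `region` and `timezone` keys.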
The processing pipeline begins by normalizing line endings so the parser sees a consistent newline style. The tool then checks whether the entire input is standalone JSON by looking for a leading object or array marker and attempting a JSON parse. If that succeeds, it deliberately refuses the text instead of silently treating JSON as YAML. This is a scope decision, not a parser limitation: the package is built to explain YAML documents and their derived views, not to blur the difference between neighboring syntaxes.
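The early standalone-JSON check might look like the following sketch. The function name is hypothetical; the detection rule (a leading object or array marker plus a successful JSON parse, after line-ending normalization) follows the description above.

```python
import json

def looks_like_standalone_json(text):
    """Sketch: refuse input as YAML only when it both starts like JSON
    and fully parses as JSON. Illustrative, not the tool's actual code."""
    normalized = text.replace("\r\n", "\n").lstrip()
    if not normalized.startswith(("{", "[")):
        return False
    try:
        json.loads(normalized)
        return True
    except json.JSONDecodeError:
        # Starts like JSON but does not parse as JSON, so let the
        # YAML parser have it.
        return False
```

Note that text beginning with `{` that fails a JSON parse still flows through to the YAML parser, since YAML flow mappings can legitimately start that way.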
Actual parsing uses YAML document loading rather than a line-by-line heuristic. Each defined document becomes a native object, array, or scalar value. From there the converter walks the structure depth first, counting nested object keys, scalar leaves, and maximum depth while collecting a few representative sample values. Arrays contribute depth and leaf values but not object-key counts, which keeps the summary badges aligned with how people usually think about configuration shape.
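The depth-first walk described above can be sketched on an already-parsed document. Arrays contribute depth and leaves but no key counts, matching the text; the function shape is an assumption for illustration.

```python
def summarize(node, depth=1):
    """Sketch: return (object_key_count, scalar_leaf_count, max_depth)
    for one parsed document. Arrays add depth and leaves only."""
    keys = scalars = 0
    max_depth = depth
    if isinstance(node, dict):
        for value in node.values():
            keys += 1  # each object key counts once
            k, s, d = summarize(value, depth + 1)
            keys += k
            scalars += s
            max_depth = max(max_depth, d)
    elif isinstance(node, list):
        for item in node:  # array positions are not object keys
            k, s, d = summarize(item, depth + 1)
            keys += k
            scalars += s
            max_depth = max(max_depth, d)
    else:
        scalars = 1  # scalar leaf
    return keys, scalars, max_depth
```

On `{"a": {"b": 1}, "c": [1, 2]}` this counts three keys, three scalar leaves, and a depth of three.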
Normalized YAML is generated from the parsed structure rather than from raw text replacement. That means indentation, line wrapping, key sorting, quote forcing, and anchor expansion are applied to the data model, not to whatever spacing happened to be in the original paste. Indentation is sanitized into a safe range from one to eight spaces. Line width is capped, and a value of zero is treated as no wrapping. When anchor reuse is disabled, repeated structures are written out in full instead of being emitted as aliases.
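The option guards mentioned here can be sketched as simple sanitizers. The fallback indent of two spaces is an assumption, not something the source states.

```python
def sanitize_indent(value):
    """Sketch: clamp indentation into the safe 1..8 range.
    The fallback of 2 for unusable input is an assumed default."""
    try:
        n = int(value)
    except (TypeError, ValueError):
        return 2
    return max(1, min(8, n))

def effective_line_width(value):
    """Sketch: a width of zero means 'no wrapping'."""
    return None if value == 0 else value
```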
JSON output is derived from the same parsed representation. A single YAML document becomes one JSON value, while multiple documents are wrapped into a JSON array in the order they were read. Pretty printing uses the chosen JSON spacing, and minified mode removes added whitespace. The JSON serializer also includes cycle protection, so if the runtime encounters an unexpected circular structure while preparing output, it uses a placeholder rather than failing with an unhelpful exception.
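The single-document-versus-array rule and the cycle placeholder can be sketched together. The placeholder string and the pre-pass approach are assumptions; the source only says a placeholder is used instead of an exception.

```python
import json

def to_json(docs, spacing=2, minify=False):
    """Sketch: one document serializes directly, several wrap into an
    array; circular references become a placeholder string."""
    def strip_cycles(node, seen):
        if isinstance(node, dict):
            if id(node) in seen:
                return "[circular]"  # assumed placeholder text
            seen = seen | {id(node)}
            return {k: strip_cycles(v, seen) for k, v in node.items()}
        if isinstance(node, list):
            if id(node) in seen:
                return "[circular]"
            seen = seen | {id(node)}
            return [strip_cycles(v, seen) for v in node]
        return node

    value = docs[0] if len(docs) == 1 else docs
    safe = strip_cycles(value, frozenset())
    if minify:
        return json.dumps(safe, separators=(",", ":"))
    return json.dumps(safe, indent=spacing)
```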
Flattening is where the review surfaces become especially useful. In dot mode, simple object keys stay as dotted segments and array positions appear as brackets such as servers[0].limits.cpu_percent. Keys that would break dotted notation are escaped into bracketed string form. In slash mode, each segment is separated by a slash and literal slashes inside keys are escaped. Every visited path is stored with document number, observed type, and a short preview, then rolled into the Schema table by path so mixed types across documents stay visible.
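Dot-mode flattening can be sketched as follows (slash mode is omitted here). The regex defining a "dot-safe" key is an assumed rule chosen to illustrate the bracketed-string fallback; the sketch also assumes string keys.

```python
import re

# Assumed rule: keys that look like identifiers stay dotted.
SAFE_KEY = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def flatten(node, prefix=""):
    """Sketch of dot-mode leaf flattening: object keys become dotted
    segments, array positions become brackets, and keys that would
    break dotted notation fall back to bracketed string form."""
    rows = []
    if isinstance(node, dict):
        for key, value in node.items():
            if SAFE_KEY.match(key):
                path = f"{prefix}.{key}" if prefix else key
            else:
                path = f"{prefix}['{key}']"
            rows += flatten(value, path)
    elif isinstance(node, list):
        for i, value in enumerate(node):
            rows += flatten(value, f"{prefix}[{i}]")
    else:
        rows.append((prefix, node))  # scalar leaf
    return rows
```

Running this on a nested server block reproduces paths like `servers[0].limits.cpu_percent` from the text.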
Scalar exports are intentionally conservative. Only scalar leaf values become env lines, INI entries, or properties pairs. Environment keys are built by collapsing separators into underscores, removing unsafe characters, optionally uppercasing the result, and adding a prefix when you provide one. Multi-document input adds a DOCn_ tag so values from separate documents do not collide. INI output uses the first path segment as a section name and lowercases the normalized tokens, while properties output keeps the original flattened path and prefixes docN. only when more than one document is present.
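The env-key building steps listed above can be sketched as a pipeline of substitutions. The exact regexes are assumptions standing in for "collapsing separators" and "removing unsafe characters".

```python
import re

def env_key(path, prefix="", uppercase=True, doc=None):
    """Sketch: build an environment-variable name from a flattened path.
    Separators collapse to underscores, leftover unsafe characters are
    dropped, and a DOCn_ tag marks multi-document input."""
    key = re.sub(r"[.\[\]/]+", "_", path)    # collapse path separators
    key = re.sub(r"[^A-Za-z0-9_]", "", key)  # drop unsafe characters
    key = re.sub(r"_+", "_", key).strip("_") # tidy repeated underscores
    if uppercase:
        key = key.upper()
    if doc is not None:
        key = f"DOC{doc}_{key}"
    return f"{prefix}{key}" if prefix else key
```

With uppercasing on, `environment.region` becomes `ENVIRONMENT_REGION`, and a second document's `db.host` becomes `DOC2_DB_HOST` so the two documents cannot collide.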
Preview size is controlled separately from export completeness. The Paths pane shows a configurable slice of the flattened rows so very large files stay readable in the browser, but CSV exports still include every discovered row. The same browser-local design applies to the whole tool: parsing, flattening, copying, and download generation happen in the current session without a helper endpoint receiving the configuration text.
If the parser reports an error, fix that before trusting any derived output. YAML mistakes are often small, such as one shifted indent level or a misplaced separator, but they can change the entire tree.
The summary badges are meant to tell you where to look next. A document count above one means the input contains multiple logical documents, so you should expect document tags in scalar exports. A high depth value often means nested lists or repeated subtrees, which makes flattened review more important than scanning the raw text. The dominant root-type badges help you spot whether most documents begin as objects, arrays, or scalar values.
Sample keys and sample values are hints, not exhaustive inventories. They tell you what appears early in traversal, which is helpful for orientation, but they do not guarantee full coverage. For complete coverage use Paths, where each row reflects one visited path and preview, or Schema, which shows whether the same path was seen with different types across documents.
A trimmed-preview warning is easy to misread. It does not mean data was lost from the conversion. It only means the visible Paths pane stopped after the configured row limit to stay manageable. Copy and download actions still use the full discovered path set, so a large export can contain more rows than the preview table shows.
The most important boundary is semantic meaning. If a leaf appears in env or INI output, that only proves it was a scalar value at a reachable path. It does not prove the destination system accepts that name, value, or type without additional validation.
Suppose you paste a service configuration with environment.region, environment.timezone, and servers[0].limits.cpu_percent. The Overview tab quickly confirms one document with an object root, several nested keys, and a moderate depth. Paths then exposes each leaf path, while Env turns those leaves into names such as ENVIRONMENT_REGION=us-west-2 when uppercasing is enabled.
In a second case, imagine a pasted bundle containing two YAML documents separated by markers. The converter reports two documents, keeps their summaries separate, and prefixes exported scalar keys with DOC1_ and DOC2_ in env output. That small difference matters when you are comparing staging and production values from one paste and do not want the exports to overwrite each other.
Another common case is review-ready cleanup. You can sort keys, enforce consistent indentation, expand anchors, and then apply the normalized YAML back to the editor before copying it into a ticket or merge request. The structure stays the same, but the layout becomes easier for another person to read.
The converter refuses standalone JSON because the package is intentionally scoped to YAML review. JSON is close enough to YAML to create confusion, so the converter stops early rather than pretending every JSON paste is YAML input that should flow through the same description and export language.
Key sorting changes only the order in which object keys are written in the normalized YAML output. That is often harmless for configuration review, but any downstream system that depends on source order should be checked separately.
Objects and arrays never appear directly in env or INI output because scalar exports are limited to scalar leaf values. Container values are part of the structure review, but only the leaves inside them are emitted as entries.
Environment and INI outputs normalize separators and remove characters that do not fit those conventions cleanly. Properties output keeps the original flattened path more closely, which is why it is useful when you want readability over destination-specific naming.
No helper endpoint is part of this bundle. Parsing, flattening, copying, and file generation all happen in the browser session that already has the text.