Encoded links are often harder to judge than they first appear. Percent escapes can hide ordinary characters, repeated encoding can turn a simple path into noise, and long query strings can bury the difference between a useful parameter and a tracking tag. That makes routine inspection harder for developers, marketers, analysts, and anyone trying to understand what a copied link really contains.
This decoder handles both web addresses and plain text. It can repeatedly decode escaped bytes, optionally treat plus signs as spaces, then decide whether the result is a URL or just decoded text. If it is a URL, the tool can normalize the host, path, query string, and fragment. If it is not a URL, the same decoded output still gets character-level inspection and payload checks.
The practical value is that the tool does more than replace %20 with a space. You can remove trackers before sharing a link, prune duplicate or empty parameters, inspect whether a suspicious blob is JSON or Base64 text, and see which characters remain reserved or unreserved after decoding.
What this does not mean is that a cleaned link is automatically trustworthy. Readability and safety are not the same thing, so the safest workflow is still to inspect the normalized host, scheme, and path before opening anything unfamiliar.
This tool is helpful whenever a link or text sample needs to be understood before it is reused. A support engineer can paste a redirect URL from a ticket and immediately see the decoded host and parameters. A campaign owner can remove tracking tags before sharing a public-facing link. A developer can inspect whether a callback value is single-encoded, double-encoded, or carrying structured payload text inside the string.
The cleanup controls are built for judgment rather than blind rewriting. Allow-list and block-list fields decide which parameters survive. Duplicate handling keeps the first or last occurrence when repeated keys appear. Fragment removal, slash collapsing, default-port removal, and host normalization make it easier to compare two links that should be equivalent but are formatted differently.
The character analysis is useful when a decoded result still looks wrong. Instead of guessing whether a glyph is a delimiter, a non-ASCII symbol, or a hidden control, you can inspect the code point and UTF-8 bytes directly. All of this, including QR generation and exports, stays in the browser.
URI and URL decoding is really two jobs. One is turning percent-encoded octets back into characters where that is safe to do. The other is deciding which characters are structural delimiters and which are ordinary data inside the component you are looking at. This tool starts with the decoded string, then decides whether the result can be treated as a URL or should remain plain text for inspection.
The decode stage is iterative. A single pass is often enough for ordinary links, but copied redirect targets can contain multiple layers of escaping. The depth control repeats decoding until the configured limit or until the string stops changing. If you enable plus-as-space, plus signs are converted before each pass, which is useful for form-style query strings but optional because plus is not a universal URI rule.
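The iterative decode loop can be sketched in Python (the tool itself runs in the browser, so this is an analogue rather than its actual code; the function name and return shape are illustrative):

```python
from urllib.parse import unquote, unquote_plus

def decode_iteratively(text: str, max_depth: int = 3, plus_as_space: bool = False) -> tuple[str, int]:
    """Repeatedly percent-decode until the string stops changing or the depth limit is hit.

    Returns the decoded string and the number of passes that actually changed it.
    """
    passes = 0
    for _ in range(max_depth):
        # Optionally convert '+' to space before each pass (form-style query strings).
        decoded = unquote_plus(text) if plus_as_space else unquote(text)
        if decoded == text:
            break  # stable: further passes cannot change anything
        text = decoded
        passes += 1
    return text, passes
```

A double-encoded input such as `%2520` needs two passes: the first yields `%20`, the second yields a space, and a third pass would change nothing.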
Once a URL is recognized, normalization is applied in a predictable order. Scheme forcing, embedded-credential stripping, host cleanup, path cleanup, and fragment removal all happen before the query string is rebuilt. Query cleanup then applies allow-list filtering, block-list filtering, tracker removal, empty-value removal, duplicate pruning, and sorting.
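A minimal sketch of that ordering, using Python's `urllib.parse` as a stand-in for the browser implementation (the function and option names are illustrative, not the tool's own):

```python
import re
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str, drop_fragment: bool = True) -> str:
    """Illustrative normalization pass: credentials, host, port, path, fragment.

    Runs before any query-string rebuild, mirroring the order described above.
    """
    parts = urlsplit(url)
    host = (parts.hostname or "").lower()        # lower-case the host
    if host.startswith("www."):
        host = host[4:]                          # remove leading www.
    default = {"http": 80, "https": 443}.get(parts.scheme)
    # Rebuilding netloc from host/port also strips embedded user:pass credentials.
    netloc = host if parts.port in (None, default) else f"{host}:{parts.port}"
    path = re.sub(r"/{2,}", "/", parts.path)     # collapse repeated slashes
    if len(path) > 1 and path.endswith("/"):
        path = path[:-1]                         # trim a trailing slash
    fragment = "" if drop_fragment else parts.fragment
    return urlunsplit((parts.scheme, netloc, path, parts.query, fragment))
```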
| Stage | What the tool does | Why it matters |
|---|---|---|
| Decode passes | Runs one or more percent-decoding passes, with optional plus-to-space conversion before each pass. | Exposes text that was encoded multiple times or copied from form-style query strings. |
| Unicode normalization | Optionally applies NFC or NFKC to the fully decoded string. | Useful when visually similar characters should be compared in a normalized form. |
| URL recognition | Parses full URLs directly and also accepts scheme-less host/path inputs by assuming http:// for parsing only. | Lets ordinary copied inputs such as example.com/path be inspected as URLs. |
| URL normalization | Can force scheme, strip embedded credentials, lower-case the host, remove leading www., drop default ports, collapse repeated slashes, trim a trailing slash, and remove the fragment. | Makes semantically similar links easier to compare. |
| Query rebuild | Filters parameters, prunes duplicates, optionally sorts them, and serializes the final list back into a stable query string. | Separates meaningful parameters from noise and leaves a reproducible output string. |
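The query rebuild stage can be sketched as a single filtering pass over the parsed pairs. This is a simplified analogue: the tracker list here is a tiny illustrative sample, and the real tool's block list and option names may differ.

```python
from urllib.parse import parse_qsl, urlencode

# Illustrative tracker keys and prefixes; the tool's actual list is larger.
TRACKER_KEYS = {"fbclid", "gclid", "msclkid"}
TRACKER_PREFIXES = ("utm_",)

def rebuild_query(query: str, keep: str = "first", sort_keys: bool = True) -> str:
    """Filter trackers and empty values, prune duplicate keys, then re-serialize."""
    pairs = parse_qsl(query, keep_blank_values=True)
    seen: dict[str, str] = {}
    for key, value in pairs:
        if key in TRACKER_KEYS or key.startswith(TRACKER_PREFIXES):
            continue                      # tracker removal
        if value == "":
            continue                      # empty-value removal
        if keep == "first" and key in seen:
            continue                      # duplicate pruning: keep first occurrence
        seen[key] = value                 # keep == "last" lets later values overwrite
    items = sorted(seen.items()) if sort_keys else list(seen.items())
    return urlencode(items)
```

For example, `utm_source=x&id=1&id=2&empty=&fbclid=abc&b=2` collapses to `b=2&id=1` with keep-first, or `b=2&id=2` with keep-last.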
Internationalized domain names can be shown in Unicode for display, but that display choice affects only the rendered hostname. Character analysis is built from the final decoded output and reports the code point, UTF-8 bytes, ASCII membership, and reserved or unreserved classification for each character.
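The per-character report can be reproduced with a few lines of Python; the unreserved and reserved sets below follow RFC 3986 (the function name and dictionary keys are illustrative):

```python
def analyze_char(ch: str) -> dict:
    """Report code point, UTF-8 bytes, and RFC 3986 classification for one character."""
    unreserved = set(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~"
    )
    reserved = set(":/?#[]@!$&'()*+,;=")  # gen-delims + sub-delims
    return {
        "char": ch,
        "code_point": f"U+{ord(ch):04X}",
        "utf8": " ".join(f"{b:02X}" for b in ch.encode("utf-8")),
        "is_ascii": ord(ch) < 128,
        "is_unreserved": ch in unreserved,
        "is_reserved": ch in reserved,
    }
```

A character like `é` reports as `U+00E9` with UTF-8 bytes `C3 A9` and is neither ASCII, unreserved, nor reserved, which is exactly the kind of detail that distinguishes two visually similar links.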
The payload view checks the fully decoded string for JSON first, then for standard or URL-safe Base64 text. Malformed percent sequences are handled defensively: if the browser's full decoder fails, the tool falls back to partial byte replacement where safe rather than discarding the string.
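The JSON-first, Base64-second ordering can be sketched as follows. This is an assumption-laden analogue of the browser logic: the alphabet check, padding rule, and return labels are illustrative choices, not the tool's exact behavior.

```python
import base64
import binascii
import json
import re

def classify_payload(text: str):
    """Try JSON first, then standard or URL-safe Base64; return a (kind, value) pair."""
    try:
        return "json", json.loads(text)
    except ValueError:
        pass
    # Base64 check: restrict the alphabet and length first so decoding is meaningful.
    compact = text.strip()
    if re.fullmatch(r"[A-Za-z0-9+/_\-]+={0,2}", compact) and len(compact) % 4 == 0:
        try:
            # Map the standard alphabet onto the URL-safe one so both are accepted.
            raw = base64.urlsafe_b64decode(compact.replace("+", "-").replace("/", "_"))
            decoded = raw.decode("utf-8")
        except (binascii.Error, UnicodeDecodeError):
            return "text", text
        try:
            return "base64-json", json.loads(decoded)
        except ValueError:
            return "base64-text", decoded
    return "text", text
```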
A classic sign of double encoding is %2520 becoming %20 on the first pass and a space on the next. If the Open button becomes available, treat that as a transport check, not a safety check. It only means the final string has an http or https scheme that the browser can open.
The summary header tells you what kind of output you are looking at. Normalized URL means the decoded string parsed successfully as a URL after the selected transformations. Decoded Text means the input remained plain text after decoding, so the character table and payload view matter more than URL-specific fields.
| Output | How to read it | Verification cue |
|---|---|---|
| Parameters | A count of 0 can mean there was never a query string or that filters removed every surviving parameter. | Compare the cleaned output with the original query and check the removal badges. |
| Trackers removed, duplicates removed, empty removed, filtered | These badges are literal counts of what the cleanup pipeline discarded. | If a count looks too high, review allow-list and block-list rules first because they run before tracker removal. |
| Depth | The badge appears only when more than one decode pass was needed. | If the output stops changing after one pass, higher depth does not improve the result. |
| Payload summary | Shows whether the final decoded string looks like JSON, Base64 to JSON, or Base64 to text. | Use the payload preview to confirm structure before assuming the text is safe to forward. |
When the output still looks odd after decoding, the character table is usually the next place to look. A remaining delimiter may explain why the parser treated the string differently from what you expected, and a non-ASCII character may explain why two visually similar links are not actually identical.
The false-confidence trap is to assume that a tidy link is a trustworthy one. Before opening or sharing the result, read the host, scheme, and path in the normalized output. The tool improves visibility; it does not authenticate destination, content, or intent.
Example 1: Cleaning a marketing link. A copied campaign URL contains utm_ parameters, fbclid, and a fragment. Turn on tracker removal and fragment removal, then sort the remaining parameters. The output becomes easier to share publicly while the summary badges prove exactly how much was stripped away.
Example 2: Decoding a repeated redirect target. A help-desk ticket includes a link where the path still contains visible percent escapes after one pass, such as a value that needs two rounds of decoding to become readable text. Raise decode depth to two or three and compare the character table after each change. If the string stabilizes early, extra passes are unnecessary.
Example 3: Handling duplicate and empty parameters. A callback URL arrives with repeated id values, blank parameters, and a mix of useful and noisy keys. Use keep-first or keep-last depending on which occurrence should win, then remove empty parameters and optionally apply an allow-list. The final query becomes much easier to compare with application logs.
Plus is converted to a space only when you enable that option. Many query strings use plus to encode spaces, but literal plus characters also exist, so the conversion is not applied unconditionally.
Why does example.com/path become a URL even without https://? The parser accepts scheme-less host/path input by assuming http:// for recognition only. That makes copied hostnames and paths easier to inspect without claiming the original string carried an explicit scheme.
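The same fallback is easy to demonstrate with Python's `urllib.parse` (a stand-in for the browser's parsing; the helper name is illustrative). Without a scheme, the standard parser puts everything into the path, so the tool's trick is to prepend `http://` before parsing:

```python
from urllib.parse import urlsplit

def recognize(text: str):
    """Parse as a URL, assuming http:// when no scheme is present (recognition only)."""
    parts = urlsplit(text)
    if not parts.scheme:
        # Scheme-less input: retry with an assumed scheme so host and path separate.
        parts = urlsplit("http://" + text)
    return parts.netloc, parts.path
```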
The button is enabled only when the final output parses as an http or https URL. Plain text, other schemes, or malformed results stay viewable but are not opened.
Either the input never had a query string, the decoded result was not recognized as a URL, or your filters removed every parameter. Check the normalized output and the removal badges.
Only the hostname display. It makes internationalized domain names easier to read, but it does not change query cleanup rules or validate the domain.
Treat the result as a troubleshooting clue rather than a clean decode. Partial replacement can reveal the problem without proving the original string was valid.
The fragment is the part of a URL after #, often used for in-page navigation or client-side state.