{{ decodedIsUrl ? 'Decoded URL' : 'Decoded Text' }}
{{ summaryOutput }}
{{ decodedIsUrl ? 'Percent escapes resolved and URL parts inspected.' : 'Percent escapes resolved as plain text.' }}
{{ decodedIsUrl ? 'URL mode' : 'Text mode' }} {{ charCount }} chars {{ paramCount }} params Cleaned {{ cleanupCount }} {{ payloadSummary }}
Paste a full URL or encoded component, for example https%3A%2F%2Fexample.com%2Fsearch%3Fq%3Dcoffee%2520beans.
Range: 1-12 passes; raise one step when %25 or %2F remains visible.
Turn on for form-encoded query values; leave off when + signs are literal data.
{{ decode_plus_is_space ? 'On' : 'Off' }}
Only affects URL host display; raw decoded text is still available in exports.
{{ idn_to_unicode ? 'On' : 'Off' }}
Targets keys such as utm_*, gclid, fbclid, msclkid, and similar campaign IDs.
{{ strip_trackers ? 'On' : 'Off' }}
Use No change by default; NFC/NFKC can make visually similar decoded text compare consistently.
Choose No change, http, or https; plain text and URL components are left as decoded text.
Keep all preserves order; keep first or last only when duplicates are accidental.
Comma-separated query keys to keep, for example id,lang,q.
Comma-separated query keys to remove, for example sid,session,token.
Removes the fragment from rebuilt URLs; leave off when # carries app routing.
{{ remove_fragment ? 'On' : 'Off' }}
Trims only a trailing path slash, not the slash after the host.
{{ trim_slash ? 'On' : 'Off' }}
Lowercases only the hostname; path and query casing stay as decoded.
{{ lower_host ? 'On' : 'Off' }}
Removes only a leading www. label from the hostname.
{{ remove_www ? 'On' : 'Off' }}
Sorts by lowercase key, then value, after filtering and de-duplication.
{{ sort_params ? 'On' : 'Off' }}
Clears username:password@ from rebuilt URLs when present.
{{ strip_auth ? 'On' : 'Off' }}
Removes only default ports: 80 for http and 443 for https.
{{ remove_default_port ? 'On' : 'Off' }}
Turns repeated path slashes into one slash; does not alter the // after a scheme.
{{ collapse_slashes ? 'On' : 'Off' }}
Removes query pairs whose value is empty, such as ref= or flag=.
{{ remove_empty_params ? 'On' : 'Off' }}
{{ mainOutput }}
Query parameters extracted from the decoded URL
Key Value Copy
{{ p.key }} {{ p.value }}
No query parameters
Character analysis of the decoded output
# Char Code point UTF-8 ASCII Unreserved Reserved Copy
{{ r.idx }} {{ r.charDisplay }} {{ r.u }} {{ r.utf8 }} {{ r.isASCII ? 'Yes' : 'No' }} {{ r.isUnreserved ? 'Yes' : 'No' }} {{ r.isReserved ? 'Yes' : 'No' }}

                

                
Customize
Advanced

Introduction:

Percent-encoded URLs hide meaning behind triplets such as %2F, %3F, and %23, and those triplets do not all behave the same way when they are decoded. Some reveal ordinary text. Others reveal separators that change where a path ends, where a query starts, or whether part of the string is only a fragment.

Repeated encoding makes copied links even harder to read. A redirect target can arrive as one encoded round wrapped inside another, so a first decode still leaves visible percent triplets behind. Query strings add another wrinkle because form-style data often uses + for spaces, while a plus sign in other URL contexts can be literal data.

Encoded input moves through one or more decode passes, then branches into URL normalization or plain-text inspection.
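That decode-pass loop can be sketched with Python's standard library. The function name percent_decode is illustrative, and the 12-pass cap mirrors the range stated above rather than any fixed rule:

```python
from urllib.parse import unquote

def percent_decode(text: str, max_passes: int = 12) -> tuple[str, int]:
    """Repeatedly resolve %HH triplets until the string stops changing."""
    passes = 0
    while passes < max_passes:
        decoded = unquote(text)
        if decoded == text:          # stable: no encoded round left
            break
        text = decoded
        passes += 1
    return text, passes

# A doubly encoded path segment needs two passes: %252F -> %2F -> /
print(percent_decode("a%252Fb"))     # ('a/b', 2)
```

Stopping when a pass changes nothing is what keeps already-plain text safe at any depth setting.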

Internationalized domain names add another complication because the same hostname can appear in an ASCII A-label form that begins with xn-- or a Unicode U-label that is easier for people to read. Unicode normalization can also make two visually similar strings compare more consistently when their code points differ.

URL decoding matters when you need to inspect, compare, or clean a copied link before you open or share it. Better readability helps, but it does not authenticate the destination. The host, path, and remaining parameters still need a human check.

Technical Details:

RFC 3986 defines percent-encoding as a percent sign followed by two hexadecimal digits that represent one octet. Decoding is easy when that octet maps to ordinary data, but it changes meaning when it expands to a reserved separator such as /, ?, #, &, or =. Those characters are not just text. They decide URL structure.

The unreserved set is different. Letters, digits, hyphen, period, underscore, and tilde are data characters, so their percent-encoded and plain forms are equivalent for comparison. That is why cleanup work usually focuses on canonical host case, default ports, fragments, and query ordering instead of treating every visible percent triplet as equally important.

Form-style query parsing follows another rule set. The application/x-www-form-urlencoded format treats + as a space during parsing and emits + for spaces during serialization. That convention is common in query strings and form bodies, but it is not a generic rule for every URL component, so plus-to-space decoding is safest as an explicit choice.
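In Python's standard library the same split shows up as two separate functions, which makes the distinction easy to check by hand:

```python
from urllib.parse import unquote, unquote_plus

# Generic percent-decoding leaves + alone; it only becomes a space
# under application/x-www-form-urlencoded rules.
print(unquote("Lee%2BAnn+Smith"))       # Lee+Ann+Smith
print(unquote_plus("Lee%2BAnn+Smith"))  # Lee+Ann Smith
```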

Hostnames with non-ASCII characters also have two valid faces. RFC 5890 describes the ASCII-compatible A-label form and the Unicode U-label form. Unicode normalization can help when text looks the same but is built from different code points, yet compatibility normalization can also fold distinctions that were present in the original string. That makes normalization useful for comparison, not for proving equivalence on its own.
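A quick way to see both faces of a hostname is Python's built-in idna codec. Note the caveat: the stdlib codec implements the older IDNA 2003 rules, while the third-party idna package tracks RFC 5890/5891, so edge cases can differ:

```python
# The stdlib "idna" codec converts between A-label and U-label forms.
alabel = "xn--bcher-kva.example"
ulabel = alabel.encode("ascii").decode("idna")
print(ulabel)                    # bücher.example
print(ulabel.encode("idna"))     # b'xn--bcher-kva.example'
```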

Transformation Core:

Main decode and cleanup stages for a percent-encoded URL or text string
Stage Rule Why it matters
Percent-decoding Each %HH triplet decodes to one octet, and extra passes matter only when one pass reveals another encoded round. Explains why %252F becomes %2F before it becomes /.
URL recognition A decoded string behaves like a URL only when host and separator structure survive parsing. Otherwise it remains plain text. Stops tokens, notes, and malformed strings from being overread as links.
Host and path cleanup Case normalization, leading www. removal, default-port removal, slash cleanup, trailing-slash trimming, and fragment removal reduce formatting noise. Makes two near-equivalent links easier to compare.
Query rebuild Allow-list filtering, block-list filtering, tracker stripping, empty-value removal, duplicate handling, and optional sorting decide which pairs survive. Turns a noisy query string into a stable comparison target.
Post-decode inspection The final decoded string can still be JSON, Base64, base64url, or ordinary text. Useful when the result carries structured data rather than a destination URL.
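The host and path cleanup stage can be sketched with urllib.parse. This is a minimal illustration, not the tool's implementation; the normalize name and the subset of rules shown (case, www. removal, default port, fragment) are assumptions:

```python
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize(url: str) -> str:
    """Sketch of the host/path stage: case, www., default port, fragment."""
    parts = urlsplit(url)
    host = (parts.hostname or "").lower()
    if host.startswith("www."):
        host = host[len("www."):]
    if parts.port and parts.port != DEFAULT_PORTS.get(parts.scheme):
        host = f"{host}:{parts.port}"
    # Rebuilding netloc from the hostname alone also clears user:password@.
    return urlunsplit((parts.scheme, host, parts.path, parts.query, ""))

print(normalize("HTTPS://WWW.Example.COM:443/a/b#frag"))
# https://example.com/a/b
```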

Triplets That Often Change Structure:

Percent-encoded triplets that commonly affect URL structure after decoding
Triplet Decoded form Typical effect
%2F / Splits path segments instead of staying as literal data.
%3F ? Starts the query string.
%23 # Starts the fragment, which browsers keep client-side.
%26 and %3D & and = Split query pairs into parameter names and values.
%2B + Usually stays a plus sign, but form-style parsing can treat it as a space.
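The structural effect is easy to demonstrate: a parser treats an encoded %2F as opaque path data, while decoding before parsing changes where the structure falls. A short check with urllib.parse:

```python
from urllib.parse import urlsplit, unquote

encoded = "https://example.com/files/a%2Fb?next=%2Fhome%23top"
parts = urlsplit(encoded)
print(parts.path)            # /files/a%2Fb -> one opaque segment
print(unquote(parts.path))   # /files/a/b   -> now two segments

# Decoding before parsing moves structure: the encoded %23 in the
# query becomes a real fragment delimiter.
print(urlsplit(unquote(encoded)).fragment)   # top
```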

Worked Transformation Path:

Example of a multi-pass decode and cleanup path
Step Example state Effect
Copied string example.com/%257Eme?utm_source=news&id=7&id=9&lang=en#top The host is readable, but the path still contains another encoded round.
After pass 1 example.com/%7Eme?utm_source=news&id=7&id=9&lang=en#top One encoded round is gone, but %7E is still visible.
After pass 2 http://example.com/~me?utm_source=news&id=7&id=9&lang=en#top The string now parses cleanly as a URL and the path is readable.
After cleanup http://example.com/~me?id=9&lang=en Tracker stripping removes utm_source, keep-last duplicate handling keeps the later id, the fragment is dropped, and sorting makes the query stable.
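The query rebuild in the last step can be sketched as follows; the rebuild_query name and the tracker list are illustrative, not the tool's exact rule set:

```python
from urllib.parse import parse_qsl, urlencode

TRACKER_PREFIXES = ("utm_",)
TRACKER_KEYS = {"gclid", "fbclid", "msclkid"}

def rebuild_query(query: str) -> str:
    """Tracker strip + keep-last de-duplication + sort, as in the table."""
    kept = {}
    for key, value in parse_qsl(query, keep_blank_values=True):
        if key in TRACKER_KEYS or key.startswith(TRACKER_PREFIXES):
            continue
        kept[key] = value            # dict assignment keeps the last value
    return urlencode(sorted(kept.items()))

print(rebuild_query("utm_source=news&id=7&id=9&lang=en"))  # id=9&lang=en
```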

RFC 4648 also matters once a decoded string stops looking like a URL and starts looking like a token. Standard Base64 uses + and /, while base64url swaps them for - and _. A decoder that accepts both forms can surface JSON or text hidden inside tokens, but that is still inspection, not proof that the payload is trustworthy.
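Because base64url only swaps two alphabet characters, a tolerant decoder can fold both forms into one before decoding. This sketch also repairs missing = padding, which tokens often drop:

```python
import base64
import json

def decode_token(text: str) -> bytes:
    """Decode standard or base64url input, tolerating missing '=' padding."""
    text = text.replace("-", "+").replace("_", "/")   # fold both alphabets
    return base64.b64decode(text + "=" * (-len(text) % 4))

payload = decode_token("eyJyb2xlIjoiYWRtaW4iLCJleHAiOjE3MTIzNDU2Nzh9")
print(json.loads(payload))   # {'role': 'admin', 'exp': 1712345678}
```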

Everyday Use & Decision Guide:

Start with the raw string and the lightest settings. Paste into Text or URL, leave Decode depth at 1, and read the big result line first. If you still see visible triplets such as %25, %2F, or %3D, raise Decode depth one step at a time. If the text turns into nonsense, back off before adding more cleanup.

  • For a link you plan to share, Strip common trackers, Remove fragment (#...), and Sort query parameters usually give the fastest cleanup.
  • For side-by-side comparison, add Lower-case host, Remove default port (80/443), Collapse multiple slashes, and Trim trailing slash so formatting noise stops hiding real differences.
  • When repeated keys matter, choose De-duplicate params deliberately. Keep first preserves the earliest value. Keep last matches last-wins patterns seen in many web apps.
  • If Filtered is higher than expected, check Allow-list params and Block-list params before assuming tracker stripping removed too much.

Use the tab that matches the result. Parameters is the fastest check for a parsed URL. Characters is better when one separator or non-ASCII symbol looks wrong. Payload is the right stop when the decoded string is really JSON or Base64 text. The Open button only means the result is http or https. It does not vouch for the destination. Settle the final string before you use QR or JSON.

Step-by-Step Guide:

A short pass through the main controls usually tells you whether you are cleaning a real URL or decoding a text payload.

  1. Paste the value into Text or URL. If the field is blank, the result panels stay empty.
  2. Read the summary header. If it shows Decoded Text but the output still contains percent triplets, raise Decode depth from 1 to 2 or 3. If it never becomes Normalized URL, stop forcing URL cleanup and inspect the text instead.
  3. Enable Treat + as space only when the source came from a form or query string that uses plus signs for spaces. If a hostname begins with xn--, turn on Convert IDN (punycode -> Unicode) only for easier reading.
  4. Choose cleanup controls that match your goal. Strip common trackers and Remove fragment (#...) suit sharing. Allow-list params, Block-list params, and De-duplicate params suit comparison and troubleshooting.
  5. Inspect the right tab. Use Parameters for surviving keys and values, Characters when one glyph looks suspicious, and Payload when the output appears to be JSON or Base64 text. Open JSON if you need a structured snapshot of the current result.
  6. Verify the final output before reuse. Check the host, path, and counts beneath Normalized URL, or confirm that Decoded Text is really the form you intended. Only then copy, open, generate a QR, or download the structured output.

The safest finish is to reuse the result only after the summary line, surviving parameters, and removal counts match what you meant to keep.

Interpreting Results:

Start with the big result line, then the counts beneath it. Normalized URL means the final string parsed as a URL after the selected changes. Decoded Text means no URL structure survived parsing, so the text itself, the character table, and any payload preview matter more than the absence of parameters.

How to read the main URL-decode result signals
Signal What it means What to verify next
Parameters: 0 Either there was never a query string or every surviving pair was removed. Compare the final output with the original input and check Filtered, Trackers removed, and Empty removed.
Trackers removed, Duplicates removed, Empty removed, Filtered These are literal counts of what the cleanup path discarded. If Filtered jumps unexpectedly, review allow-list and block-list rules before anything else.
Payload The decoded string also looks like JSON, Base64 to JSON, or Base64 to text. Read the preview before you forward or trust the token. Decodable data is not the same as safe data.
Open The final result uses an http or https scheme. Check the host and path before clicking. A tidy link can still point somewhere you do not trust.

A cleaner link is not a safer link. Lowercasing the host, removing trackers, or collapsing slashes does not authenticate the destination. When confidence matters, compare the final host and path with a known-good source before you open or share the result.

Worked Examples:

Cleaning a share link. Suppose you paste https://shop.example.com/product?id=42&utm_source=newsletter&fbclid=abc123#top. Turn on Strip common trackers and Remove fragment (#...). The summary becomes Normalized URL, Parameters drops to 1, and Trackers removed shows 2. That is the cue that the destination stayed the same while the marketing noise disappeared.
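Assuming tracker stripping is simple key and prefix matching, the same cleanup can be reproduced with stdlib calls; the clean_share_link name and TRACKERS tuple are illustrative:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKERS = ("utm_", "gclid", "fbclid", "msclkid")

def clean_share_link(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.startswith(TRACKERS)]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))   # "" drops the fragment

url = "https://shop.example.com/product?id=42&utm_source=newsletter&fbclid=abc123#top"
print(clean_share_link(url))
# https://shop.example.com/product?id=42
```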

Decoding a repeated redirect target. Paste https%253A%252F%252Fexample.com%252Fwelcome%253Fname%253DLee%252BAnn with the default settings. After one pass, the big result still contains visible percent triplets, so raise Decode depth to 2. Now the summary becomes Normalized URL. If you also enable Treat + as space, the Parameters tab shows name as Lee Ann instead of Lee+Ann.
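The two-pass decode can be reproduced by hand with urllib.parse, where parse_qsl supplies the plus-to-space behavior that Treat + as space enables:

```python
from urllib.parse import unquote, urlsplit, parse_qsl

raw = "https%253A%252F%252Fexample.com%252Fwelcome%253Fname%253DLee%252BAnn"
once = unquote(raw)    # still contains %3A, %2F, ...: one more round left
twice = unquote(once)
print(twice)           # https://example.com/welcome?name=Lee+Ann

# Form-style parsing then turns + into a space in the value:
print(parse_qsl(urlsplit(twice).query))  # [('name', 'Lee Ann')]
```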

Finding a payload instead of a link. Paste eyJyb2xlIjoiYWRtaW4iLCJleHAiOjE3MTIzNDU2Nzh9. The result stays Decoded Text, Open remains disabled, and Parameters never appears. That combination is the clue to stop looking for URL structure. Open Payload and you will get Base64 -> JSON, which is the useful result for inspection.

FAQ:

Why did a bare hostname gain an explicit scheme?

Scheme-less host and path text can still be parsed as a URL, so the result is shown with an explicit scheme for comparison and opening. That helps when you paste hostnames copied from logs, messages, or browser text fields.

Why did the plus sign stay as a plus sign?

Because + only turns into a space when you enable Treat + as space. That behavior fits form-style query data, but a plus sign can also be literal content.

Why are there no parameters after decoding?

Either the result never parsed as a URL, the URL had no query string, or your cleanup rules removed every pair. Check whether the summary says Normalized URL, then compare the final string with Filtered, Trackers removed, and Empty removed.

Why is Open disabled?

The button only enables when the final output is an http or https URL. Plain text, malformed strings, and non-web schemes remain visible but are not opened.

Does the decoder keep data local?

Decoding, normalization, character inspection, and payload checks all happen in your browser, and there is no tool-specific upload step. The catch is that your input and settings are mirrored into the page URL, so do not paste secrets you would not want in your address bar, browser history, or a copied page link.

What changed when I enabled IDN to Unicode?

Only the displayed hostname. It swaps the ASCII-compatible A-label form for a Unicode U-label when that conversion is available. It does not validate the domain or change query cleanup rules.

Why did a Payload tab appear for text that is not a link?

Because the final decoded string also looks like JSON or Base64 text. That is common with tokens, callback values, and data blobs that travel inside URLs or logs.

What if malformed percent escapes still look broken?

Treat the result as a clue, not as a clean decode. Invalid escape sequences can only be recovered partially, so compare the output with the original string before you trust the decoded form.
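Python's urllib.parse.unquote illustrates this partial-recovery behavior: invalid triplets pass through untouched, and with its default errors="replace" setting, octets that do not form valid UTF-8 become replacement characters:

```python
from urllib.parse import unquote

print(unquote("100%25 done, 50%"))   # 100% done, 50%  (lone % survives)
print(unquote("%ZZ"))                # %ZZ  (not a valid triplet)
print(unquote("%FF"))                # U+FFFD (lone 0xFF is not valid UTF-8)
```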

Glossary:

Percent-encoding
A URL syntax rule that represents one octet as a percent sign followed by two hexadecimal digits.
Reserved character
A character such as /, ?, or & that can act as a URL separator instead of ordinary data.
Query parameter
A name-value pair carried after the question mark in a URL.
Fragment
The part after #, usually interpreted by the browser after the main request is made.
A-label
The ASCII-compatible form of an internationalized domain label, typically beginning with xn--.
U-label
The Unicode form of an internationalized domain label that is easier for people to read.
Base64url
A URL-safe Base64 alphabet that uses - and _ instead of + and /.