Nifty Tools

URL Decoder

Use a free URL decoder in your browser. Decode percent-encoded URLs, query strings, and OAuth redirects locally. No upload, no signup.

Processing mode: Browser-local

Paste a URL or percent-encoded value to decode.

How to use it

  1. Paste a URL or a percent-encoded value into the input panel. Component-aware mode is on by default; toggle it off if you want a single `decodeURIComponent` pass over the whole input as a string (useful when the input is an isolated value, not a full URL).
  2. Click Decode. With component-aware mode on, the page parses the URL with `new URL(input)`, decodes scheme, host, path, query parameters, and fragment separately (a sketch of this pass follows the steps), and lays the query parameters out as a key/value table so you can see the readable structure at a glance. With it off, the page runs `decodeURIComponent` once over the input.
  3. To go the other direction, toggle Encode. Type or paste the readable value, optionally turn on form-encoding for `application/x-www-form-urlencoded` bodies, and click Encode. Copy or download the result. Nothing leaves your browser.
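
A minimal sketch of what the component-aware pass in step 2 can look like, using only the browser's native `URL` parser. This is an illustration of the behaviour described above, not the page's actual source; the sample URL is made up.

```js
const input = "https://app.example/cb%20path?next=%2Fhome&q=a+b#sec%201";
const u = new URL(input); // throws TypeError if the input is not a parseable URL

const components = {
  scheme: u.protocol.replace(/:$/, ""),       // "https"
  host: u.hostname,                           // "app.example"
  // Decode path segments individually and keep the array form, because a
  // decoded %2F inside a segment would be ambiguous in a re-joined string.
  pathSegments: u.pathname.split("/").map(decodeURIComponent), // ["", "cb path"]
  // URLSearchParams percent-decodes values and applies the form rule (+ = space).
  query: [...u.searchParams],                 // [["next", "/home"], ["q", "a b"]]
  fragment: decodeURIComponent(u.hash.slice(1)), // "sec 1"
};
```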

Common use cases

People reach for a URL decoder when a URL arrived percent-encoded and the readable value is the only thing that helps debug what the URL actually points at. The classic situations:

  - An OAuth redirect URL pulled out of a network capture: the `state` parameter is a Base64 blob, the `code` is short-lived, and the `redirect_uri` is itself percent-encoded twice because the OAuth client URL-encoded the redirect target before the provider URL-encoded the whole callback.
  - A tracking link with `%2F` and `%3A` everywhere because somebody URL-encoded a full URL into a query parameter.
  - An S3 or CloudFront signed URL with `?X-Amz-Signature=` and a long `X-Amz-Credential` value, where the readable part is the credential scope.
  - A webhook callback URL captured before and after a flag flip, where the question is "did this parameter change?"
  - A UTM-laden share link that you want to clean up before pasting into a doc.
  - An email click-tracker where the real destination is wrapped in a `?u=` parameter.
  - An API request URL inside a Postman or DevTools network log, where the readable query string answers "did this client hit the right endpoint?"
  - The slightly different sibling case of an `application/x-www-form-urlencoded` POST body, where `+` means space and `&` separates name/value pairs.

The same pipeline gets called URL decoding, percent-decoding, or "URL decode" depending on which tool wrote the docs you read; they all describe the same operation: take a percent-encoded value, run it through `decodeURIComponent`, and surface the readable string. This page does that operation locally with the browser's native `decodeURIComponent` plus the WHATWG URL parser, so the decoded values match what the consuming HTTP client would see after percent-decoding. The per-component query table is the authoritative structure; the reassembled URL string in the output panel is a readable rendering, not a byte-for-byte URL, because once the values are decoded, a literal `&` or `=` inside a value can no longer be distinguished from the URL's structural separators in flat-string form. Doing it browser-local matters because URLs in front of you routinely contain access tokens, customer email addresses, partner identifiers, signed-URL bodies, and unreleased feature flags that have no business going through an arbitrary online service.
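
A two-line illustration of that last caveat, showing why the key/value table, not the flat string, is the authoritative form:

```js
// After decoding, a literal "&" inside a value looks like a separator.
const u = new URL("https://ex.example/search?name=a%26b&page=1");
u.searchParams.get("name"); // "a&b"  (the table keeps the structure)
// A naively reassembled flat string, "?name=a&b&page=1", re-parses as
// three parameters: name=a, b, page=1.
```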

Processing mode

Browser-local

Your input is processed by your browser. It never reaches our servers.

Questions

When do I use `encodeURI` vs `encodeURIComponent`, and which one does this page use?

This page uses `encodeURIComponent` (and its inverse `decodeURIComponent`), with a small wrapper for strict RFC 3986 conformance. The two native functions are not interchangeable. `encodeURI` is intended for whole URLs and deliberately does NOT escape the URL-syntax characters `:` `/` `?` `#` `[` `]` `@` `!` `$` `&` `'` `(` `)` `*` `+` `,` `;` `=` — because escaping them would break the URL into a different shape. `encodeURIComponent` is intended for individual URL components (a single query parameter value, a single path segment) and DOES escape most URL-syntax characters — but for backward compatibility with the older RFC 2396 spec, it leaves the five sub-delim characters `!` `*` `'` `(` `)` unescaped. RFC 3986 §2.2 lists those five as reserved, so a strict component encoder should escape them too. This page wraps `encodeURIComponent` and additionally percent-escapes those five — `!` → `%21`, `*` → `%2A`, `'` → `%27`, `(` → `%28`, `)` → `%29` — so every reserved character is encoded and the output round-trips safely through every URI parser, including ones that treat sub-delims as structural. If you have a whole URL string and want to lightly escape stray spaces or unicode, `encodeURI` is the right native call — but that case is rare in practice and we deliberately do not expose it because the boundary between "this is a whole URL" and "this is a URL component" is exactly where percent-encoding bugs come from.
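
A sketch of the kind of wrapper described above; the function name is ours, but the escape set matches the RFC 3986 reserved list:

```js
// encodeURIComponent leaves ! * ' ( ) unescaped for RFC 2396 compatibility,
// so a strict RFC 3986 component encoder escapes those five by hand.
function encodeRFC3986Component(str) {
  return encodeURIComponent(str).replace(
    /[!'()*]/g,
    (c) => "%" + c.charCodeAt(0).toString(16).toUpperCase()
  );
}

encodeRFC3986Component("it's (really) cool!"); // "it%27s%20%28really%29%20cool%21"
```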

Why does `+` decode to a space sometimes but stay literal other times?

Because two different specs are at play. RFC 3986 (URLs) says `+` is a literal `+` and reserves no special meaning for it; the HTML spec for `application/x-www-form-urlencoded` (form bodies and the `?key=value&key=value` query string when produced by a browser form submission) says space encodes to `+` and `+` decodes to space. So the same byte means different things in different parts of the same URL: a `+` in the path is a literal plus sign, but a `+` in a form-shaped query string is a space. Component-aware mode follows that rule: `+` in the parsed query parameters decodes to space, `+` in the parsed path stays literal. With component-aware mode off, the whole input is treated as a single component-shaped value, and the form-encoding toggle decides whether `+` means space.
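
The two behaviours, one native call each:

```js
// RFC 3986: "+" carries no special meaning and stays literal.
decodeURIComponent("a+b%20c");             // "a+b c"
// Form rule: "+" is an encoded space (URLSearchParams applies this rule).
new URLSearchParams("q=a+b%20c").get("q"); // "a b c"
```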

What does this tool do with malformed escapes like `%ZZ`, a lone `%`, or a truncated `%2`?

It surfaces a precise error in the status row — `Malformed percent-escape at position N: "%2"` — instead of silently substituting U+FFFD, returning the input unchanged, or dropping the bad sequence on the floor. The native `decodeURIComponent` throws a `URIError` for any of these cases; the page catches the throw, finds the offending position by walking the input, and reports it. This is the bit that distinguishes "honest decode" from "best-effort decode": a percent-decoder that silently swallowed `%ZZ` would also silently swallow a real bug — a missing character, an over-eager URL writer that double-escaped, a tracking link that landed in a database column with the trailing characters truncated. The page wants to be the thing that tells you "this URL is malformed and here is where," not the thing that papers over the malformation.
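
A hypothetical sketch of that catch-and-locate step; the real page's implementation may differ:

```js
function decodeWithPosition(input) {
  try {
    return { ok: true, value: decodeURIComponent(input) };
  } catch {
    // A "%" not followed by two hex digits catches %ZZ, a lone %, and a
    // truncated %2. Valid hex that decodes to invalid UTF-8 (e.g. a bare
    // %C3) also throws but has no match here, so report it separately.
    const bad = /%(?![0-9A-Fa-f]{2})/.exec(input);
    const error = bad
      ? `Malformed percent-escape at position ${bad.index}: "${input.slice(bad.index, bad.index + 3)}"`
      : "Percent-escapes decode to an invalid UTF-8 byte sequence";
    return { ok: false, error };
  }
}

decodeWithPosition("100%2");
// { ok: false, error: 'Malformed percent-escape at position 3: "%2"' }
```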

Why is my decoded URL still showing `%2F` or `%3A` after decode?

Because the source was double-encoded. A common pattern is an OAuth client that already URL-encodes its `redirect_uri` parameter, then the OAuth provider URL-encodes the whole callback URL when redirecting the browser — so by the time the URL lands in your address bar, the `redirect_uri` value has been encoded twice. The first `decodeURIComponent` pass turns `%252F` into `%2F`; a second pass on the same value turns `%2F` into `/`. Paste the result back into the input and decode again. The page does not auto-double-decode (an aggressive normaliser would silently corrupt URLs whose values legitimately contain percent-escaped percent signs, e.g. a search URL pointing at a literal "%2F" in a code snippet); we surface the result of one pass and let you decide whether a second pass is what you actually want.
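
The two layers, one pass each:

```js
// %25 is the escaped "%", so the first pass only peels one layer.
const once = decodeURIComponent("https%253A%252F%252Fapp.example%252Fcb");
// "https%3A%2F%2Fapp.example%2Fcb"   (one layer left)
const twice = decodeURIComponent(once);
// "https://app.example/cb"
```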

Can I decode a JWT or Base64 payload from inside a URL?

Yes — but it is a two-step process, on purpose. URL-decoding the URL first surfaces the parameter values; if one of those values is a JWT, copy the middle segment into the [Base64 decoder](/base64-decoder/) (JWTs use the URL-safe Base64 alphabet, which the Base64 decoder handles), and the JSON claims come back. If one of those values is itself a Base64 blob (a `state` parameter for an OAuth flow, a serialised form for a wizard), the same Base64 decoder unwraps it. Keeping the two operations separate lets each tool stay narrow and honest about what it does — a URL decoder that silently chained into a Base64 decoder would either over-decode strings that incidentally look Base64-shaped, or under-decode Base64 values that did not match the chain's heuristic.
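
A minimal sketch of the second step, assuming the token already came out of a decoded query parameter; `decodeJwtPayload` is our name for illustration, not a feature of either page:

```js
// base64url → base64, pad to a multiple of 4, then decode and parse.
function decodeJwtPayload(jwt) {
  const seg = jwt.split(".")[1]; // JWTs are header.payload.signature
  const b64 = seg.replace(/-/g, "+").replace(/_/g, "/");
  const padded = b64 + "=".repeat((4 - (b64.length % 4)) % 4);
  // atob returns Latin-1; claims with non-ASCII characters need a
  // TextDecoder pass over the raw bytes instead.
  return JSON.parse(atob(padded));
}
```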

Is there a difference between RFC 3986 percent-encoding and `application/x-www-form-urlencoded`?

Yes. RFC 3986 (URLs) percent-encodes the reserved characters and leaves space as `%20`. HTML's `application/x-www-form-urlencoded` (form bodies and browser-submitted query strings) percent-encodes the reserved characters and additionally encodes space as `+` (and any literal `+` as `%2B`). The two formats overlap on most characters but disagree on the space byte and on the meaning of `+`. The encode mode toggle here exposes the difference explicitly: the default is RFC 3986, and the form-encoding switch flips to the HTML form rule. Browsers use the form rule internally when assembling `<form method="get">` requests, and the `URLSearchParams` API (including a `URL` object's `searchParams` property) uses it too, which is why a query string that came out of a form looks like `?q=hello+world` rather than `?q=hello%20world`.
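
The same value under both rules:

```js
encodeURIComponent("hello world+now");
// "hello%20world%2Bnow"              space → %20  (RFC 3986 style)
new URLSearchParams({ q: "hello world+now" }).toString();
// "q=hello+world%2Bnow"              space → +    (form rule)
```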

Does this tool send my URL to a server?

No. `decodeURIComponent`, `encodeURIComponent`, and the WHATWG `URL` constructor are all native browser APIs and run inside this tab. The URL, the decoded components, the encoded output, and any download all stay on your device. There is no signup, no watermark, no analytics on the URL itself. Safe for OAuth redirect URLs (which contain `code` and `state` values that grant access to the authorising account), signed S3 URLs (which contain credential scopes), and tracking links (which routinely carry partner identifiers and customer email addresses).

Is there a length cap?

Browsers and servers cap URLs at very different limits: Chromium accepts URLs up to roughly 2 MB in the address bar, server frameworks default to anywhere from 8 KB to 1 MB, and IIS / classic load balancers may cap below 8 KB. The page itself works on whatever fits in a textarea, with the transform call capped at 1 MB to keep the parser snappy on lower-RAM devices. If you are decoding a multi-megabyte URL, the right tool is almost certainly a script in Node (`new URL()` on the value, then iterate `searchParams`) rather than a browser textarea.
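
For the multi-megabyte case, a Node sketch along those lines; the file name is hypothetical:

```js
// Node: decode a huge URL from a file instead of a textarea.
import { readFileSync } from "node:fs";

const url = new URL(readFileSync("huge-url.txt", "utf8").trim());
for (const [key, value] of url.searchParams) {
  // Values arrive percent-decoded, with "+" already treated as space.
  console.log(`${key} = ${value}`);
}
```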

Will this tool stay free?

The basic workflow is designed to stay free. Paid upgrades later will focus on bigger limits, batch work, OCR, saved presets, and ad-free use.
