What's the difference between standard Base64 and URL-safe Base64?
Standard Base64 (RFC 4648 §4) uses the alphabet `A–Z a–z 0–9 + /` plus `=` padding. URL-safe Base64 (RFC 4648 §5) replaces `+` with `-` and `/` with `_` so the encoded value is safe inside URL query strings, URL fragments, and filenames — `+` is interpreted as a space in `application/x-www-form-urlencoded` payloads, and `/` is a path separator. JWTs, JWS signatures, and most signed-URL APIs use the URL-safe alphabet, usually without `=` padding (the `=` would need to be percent-encoded inside a URL anyway). This page handles both: on decode, `-` and `_` are mapped back to `+` and `/` before `atob`; on encode, the URL-safe checkbox swaps the output alphabet and the strip-padding checkbox drops trailing `=`.
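The decode-side mapping can be sketched in a few lines. This is an illustrative helper, not the page's actual code: it swaps the URL-safe characters back to the standard alphabet and restores padding so `atob` accepts unpadded JWT-style input.

```javascript
// Normalize URL-safe Base64 (RFC 4648 §5) back to the standard alphabet,
// then re-pad so atob() accepts it even when the '=' was stripped.
function decodeUrlSafeBase64(input) {
  let s = input.replace(/-/g, "+").replace(/_/g, "/");
  while (s.length % 4 !== 0) s += "="; // restore stripped padding
  return atob(s); // binary string: one character per decoded byte
}
```

For example, the unpadded URL-safe string `SGVsbG8_IQ` normalizes to `SGVsbG8/IQ==` and decodes to `Hello?!`.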
Why did my decode succeed but the output is unreadable garbage?
Because the original was binary, not text. Base64 is content-agnostic — it's just a way to pack 3 bytes into 4 ASCII characters; nothing about the encoding tells you whether the bytes inside are text or a PNG. This page runs a `TextDecoder("utf-8", { fatal: true })` pass over the decoded bytes; if any byte sequence is not valid UTF-8, the page tells you honestly that the payload is binary and offers Save as file. If you saw "garbage" in the output, that's the older single-button behavior — the dual-mode page here splits the result so you don't have to guess. PNG/JPEG/PDF/ZIP/MP3 all decode fine as bytes; you just need to download them, not stare at them.
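The check described above can be sketched like this. It is a simplified version of the idea, not the page's implementation: `fatal: true` makes `TextDecoder` throw on any invalid UTF-8 sequence instead of silently emitting U+FFFD replacement characters.

```javascript
// Decode, then attempt a strict UTF-8 pass to decide text vs. binary.
function classifyDecoded(base64) {
  const binary = atob(base64);
  const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
  try {
    const text = new TextDecoder("utf-8", { fatal: true }).decode(bytes);
    return { kind: "text", text };
  } catch {
    return { kind: "binary", bytes }; // show "Save as file" instead of garbage
  }
}
```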
Why does my Base64 throw an error about `length % 4 === 1`?
Because that length is structurally impossible for valid Base64 under any padding convention. The Base64 algorithm packs 3 bytes into 4 ASCII characters; the only valid remainder lengths are 0 (full triplet, no padding), 2 (one byte left, two `=` of padding), or 3 (two bytes left, one `=` of padding). A length of 1 modulo 4 cannot represent any complete byte and is never a legitimate Base64 input — it's almost always a copy-paste truncation, an off-by-one slice, or a stray character that drifted in from the surrounding text. The page surfaces that honestly rather than padding the input out to a length that decodes to a guaranteed-wrong value. Re-copy the source, or check whether you trimmed one character too many.
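A pre-flight version of that check, assuming whitespace has already been stripped, might look like this (hypothetical helper):

```javascript
// After removing trailing '=' padding, a Base64 body whose length is
// 1 (mod 4) cannot encode any whole number of bytes.
function checkBase64Length(s) {
  const body = s.replace(/=+$/, "");
  const rem = body.length % 4;
  if (rem === 1) {
    throw new Error("length % 4 === 1: truncated or corrupted Base64");
  }
  return rem; // 0, 2, or 3: all decodable
}
```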
Can I decode a `data:` URI directly without stripping the prefix?
Yes. Paste the whole `data:image/png;base64,iVBOR...` value into the input panel and click Decode. The page parses the URI shape — the mediatype (`image/png`, `application/pdf`, `text/plain;charset=utf-8`), any `;charset=` parameter, and the `;base64` flag — and uses the payload after the comma as the actual Base64. The captured mediatype shows up as a chip next to the result, which is what disambiguates "decode to text" from "decode to file" when the bytes themselves don't make the answer obvious. `data:` URIs without the `;base64` flag are recognised as "this is percent-encoded, not Base64" and surface that distinction in the status row instead of attempting an `atob` that would throw on the percent escapes.
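The URI shape described above can be parsed with one regular expression. This is a hypothetical helper sketching the idea, not the page's parser; the empty-mediatype default comes from RFC 2397:

```javascript
// Split a data: URI into mediatype and payload, and distinguish the
// ;base64 case from the percent-encoded case.
function parseDataUri(uri) {
  const m = /^data:([^,]*),(.*)$/s.exec(uri);
  if (!m) return null;
  const [, meta, payload] = m;
  const isBase64 = /;base64$/i.test(meta);
  const mediatype =
    meta.replace(/;base64$/i, "") || "text/plain;charset=US-ASCII";
  return isBase64
    ? { mediatype, bytesAsBinaryString: atob(payload) }
    : { mediatype, text: decodeURIComponent(payload) }; // percent-encoded
}
```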
Can I decode a Base64 string out of a PEM-armored key, cert, or signed payload?
Yes. PEM armor (`-----BEGIN PUBLIC KEY-----`, `-----BEGIN CERTIFICATE-----`, `-----BEGIN PGP MESSAGE-----`, etc., and the matching `-----END ... -----` line) is stripped before decoding, and the line wraps that PEM uses (typically every 64 characters) are absorbed by the whitespace strip. So pasting a whole PEM block decodes the inner Base64 to the underlying bytes, which is what you want when you're holding a private key, an X.509 cert, or a signed-payload envelope and want to read what's inside. The decoder does not parse the bytes further (it doesn't ASN.1-decode a public key or verify a cert chain) — that's a separate problem; here you get the raw bytes for your downstream tool to consume.
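The armor strip amounts to two substitutions before the decode. A minimal sketch of the idea (not the page's exact code):

```javascript
// Drop the -----BEGIN/END ...----- armor lines, collapse the 64-column
// line wrapping and surrounding whitespace, then decode what remains.
function decodePemBody(pem) {
  const base64 = pem
    .replace(/-----(BEGIN|END)[^-]*-----/g, "") // armor lines
    .replace(/\s+/g, ""); // line wraps and stray whitespace
  const binary = atob(base64);
  return Uint8Array.from(binary, (c) => c.charCodeAt(0)); // raw inner bytes
}
```

The returned bytes are whatever the armor wrapped (DER for keys and certs, OpenPGP packets for PGP blocks); interpreting them is the downstream tool's job.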
Does this tool send my Base64 to a server?
No. `atob`, `btoa`, `TextDecoder`, and `TextEncoder` are all native browser APIs and run inside this tab. The Base64, the decoded bytes, the encoded output, any file you load, and the resulting download all stay on your device. There is no signup, no watermark, no analytics on the payload itself. Safe for JWT claims (which routinely contain email addresses and account identifiers), signed-URL bodies, API tokens, and config exports that have no business going through an arbitrary online decoder.
Is there a file size limit?
The page works on whatever fits in a browser textarea or in memory; the loaded-file path is capped at roughly 10 MB on the transform call itself, because `atob` allocates the whole decoded string before yielding it back to the page and the textarea round-trip allocates again on top of that. For larger payloads the right tool is a streaming Base64 decoder — `base64-stream` (Node), `openssl base64 -d` (command line), or `iconv` for charset conversion alongside the decode. Encoding a multi-megabyte file works, but expect a noticeable freeze while `btoa` runs; the encode is single-pass with no streaming.
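The core trick a streaming decoder uses can be sketched without any stream plumbing. This generator is an illustration of the idea, assuming the input arrives as string chunks; it decodes only whole 4-character groups per chunk and carries the remainder forward, so memory stays proportional to chunk size rather than payload size:

```javascript
// Decode Base64 incrementally: only complete 4-char groups are decoded
// per chunk; the leftover 0-3 chars are prepended to the next chunk.
function* decodeBase64Chunks(chunks) {
  let carry = "";
  for (const chunk of chunks) {
    const s = carry + chunk.replace(/\s+/g, "");
    const usable = s.length - (s.length % 4);
    carry = s.slice(usable);
    if (usable) yield atob(s.slice(0, usable));
  }
  if (carry) yield atob(carry); // throws if the input was truncated
}
```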
What about Base32, hex, or "Base64 with a custom alphabet"?
Out of scope on purpose. Base32 (RFC 4648 §6) is a different alphabet with different padding rules; hex is a different encoding entirely. "Base64 with a custom alphabet" — bcrypt's Base64, the URL-safe-but-rotated alphabets some legacy systems use — needs a parametrised decoder that this page deliberately does not provide, because picking the wrong alphabet silently yields wrong bytes that look right enough to pass a casual eyeball test. For Base32 use `base32` (Node) or `base32-decode` in the browser; for hex use `Buffer.from(s, "hex")` (Node) or a tiny in-page conversion; for custom-alphabet variants, use the library that ships with the system that produced the value.
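For the hex case, the "tiny in-page conversion" mentioned above might look like this in the browser (Node users can just use `Buffer.from(s, "hex")`):

```javascript
// Strict hex-to-bytes conversion: rejects odd lengths and non-hex
// characters instead of silently producing wrong bytes.
function hexToBytes(hex) {
  const clean = hex.replace(/\s+/g, "");
  if (clean.length % 2 !== 0 || /[^0-9a-fA-F]/.test(clean)) {
    throw new Error("not a valid hex string");
  }
  const out = new Uint8Array(clean.length / 2);
  for (let i = 0; i < out.length; i++) {
    out[i] = parseInt(clean.slice(i * 2, i * 2 + 2), 16);
  }
  return out;
}
```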
Will this tool stay free?
The basic workflow is designed to stay free. Paid upgrades later will focus on bigger limits, batch work, OCR, saved presets, and ad-free use.