Nifty Tools

XML to JSON

Convert XML to JSON in your browser. Native DOMParser handles attributes, repeated siblings, namespaces, CDATA, and comments. No upload.

Processing mode: Browser-local


How to use it

XML to JSON Converter — Free, In Your Browser

  1. Paste XML into the editor or load a `.xml` / `.xsd` / `.rss` / `.svg` file (up to ~10 MB per run). Pick whether single-occurrence siblings should be wrapped in an array, whether comments should be kept, and the indent.
  2. Click Convert. The browser's `DOMParser` parses the XML, the walker materialises every element with attributes, text, and children, and the result is serialised to pretty JSON.
  3. Copy the JSON to the clipboard or download it as a `.json` file. Nothing leaves your browser.
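The walk in step 2 can be sketched roughly like this. This is a simplified illustration, not the tool's actual code: it applies the mapping rules documented in the FAQ on this page to any DOM-shaped node (anything exposing `nodeType`, `nodeName`, `attributes`, and `childNodes`), so the same logic works on a real `DOMParser` result in the browser.

```javascript
// Simplified sketch of an XML-element-to-JSON walker (not the tool's exact code).
// nodeType 1 = element, 3 = text, 4 = CDATA section.
function elementToJson(el) {
  const out = {};

  // Attributes collect under "@attributes" as name -> string value.
  if (el.attributes && el.attributes.length) {
    out["@attributes"] = {};
    for (const attr of el.attributes) out["@attributes"][attr.name] = attr.value;
  }

  let text = "";
  for (const child of el.childNodes) {
    if (child.nodeType === 3 || child.nodeType === 4) {
      // Text and CDATA both join the element's text content.
      text += child.nodeValue;
    } else if (child.nodeType === 1) {
      const value = elementToJson(child);
      if (child.nodeName in out) {
        // Repeated siblings with the same name collapse into a JSON array.
        if (!Array.isArray(out[child.nodeName])) out[child.nodeName] = [out[child.nodeName]];
        out[child.nodeName].push(value);
      } else {
        out[child.nodeName] = value;
      }
    }
  }

  text = text.trim();
  if (Object.keys(out).length === 0) {
    // Leaf element: scalar string, or null when empty.
    return text === "" ? null : text;
  }
  if (text !== "") out["#text"] = text; // text alongside attributes/children
  return out;
}
```

In the browser you would feed it `new DOMParser().parseFromString(src, "application/xml").documentElement`; the stub-node shape above exists only so the sketch is testable outside a browser.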

Good for

Common use cases

People convert XML to JSON when the source is markup and the destination is a modern JavaScript or Python consumer that already speaks JSON natively. XML is what SOAP envelopes, RSS and Atom feeds, sitemaps, SVG fragments, OPML imports, Spring and .NET configuration files, Excel `.xlsx` shared-strings tables, ONIX book metadata, financial messaging (FIX, ISO 20022 XML form, OFX), and most pre-2010 enterprise APIs still emit. JSON is what every modern fetch handler, JavaScript test fixture, log shipper, and dashboard inspector expects. The conversion is awkward to do in a terminal because the obvious shell tools (`xmllint --xpath`, a one-liner in Python with `xmltodict`, a quick `node -e` with `fast-xml-parser`) all force a context switch and force a decision about how attributes, repeated children, and namespaces map into JSON before any output appears.

The honest version of this conversion has to make those decisions explicitly — and surface them on the page — because XML and JSON do not have a one-to-one mapping. XML has attributes, JSON does not. XML elements can repeat siblings with the same name; JSON object keys cannot. XML preserves element order across mixed-name children; JSON object key order is well-defined in modern engines but is not part of the data model the way a JSON array is. XML carries comments and CDATA sections; JSON has neither.

The tool here uses the browser's native `DOMParser` (the same parser the platform already uses for `XMLHttpRequest` responseXML and the SVG renderer), walks the resulting DOM tree against a documented mapping, and lets you toggle the two decisions that bite consumers most often — single-occurrence siblings collapsing to a scalar versus always being an array, and whether comments survive into the JSON output.

Processing mode

Browser-local

Files are processed by your browser. They never reach our servers.

Questions

XML to JSON Converter — Free, In Your Browser FAQ

How do attributes, text, and child elements map into JSON?

Attributes on an element collect under `@attributes` as an object mapping attribute names to string values. Plain text content on a leaf element with no attributes and no element children becomes a scalar string (so `<city>London</city>` becomes `"London"`, not `{"#text": "London"}`). Empty leaf elements become `null`. When an element has both text and attributes or children, the text joins under `#text` so neither is lost. Child elements appear as keys with the element name, with repeated siblings of the same name collapsed into a JSON array. Namespace prefixes on element and attribute names are preserved verbatim — `soap:Envelope` stays `soap:Envelope`. The XML declaration (`<?xml ... ?>`) and other processing instructions are dropped.
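Concretely, those rules produce shapes like the following (an illustrative input and its expected output under default settings, not taken from the tool's test suite):

```javascript
// Input XML (illustrative):
//   <user id="7">
//     <city>London</city>
//     <tag>a</tag>
//     <tag>b</tag>
//     <note/>
//   </user>
//
// Expected JSON under the mapping described above:
const expected = {
  user: {
    "@attributes": { id: "7" }, // attributes collect under @attributes
    city: "London",             // text-only leaf becomes a scalar string
    tag: ["a", "b"],            // repeated siblings collapse into an array
    note: null                  // empty leaf becomes null
  }
};
```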

Why not always wrap repeated siblings in an array?

Because most readable JSON does not. `<book><title>One</title></book>` reading back as `{ book: { title: "One" } }` is closer to what a human reviewer expects than `{ book: { title: ["One"] } }`. The trade-off is that downstream code that needs to iterate over `title` has to check `Array.isArray` first, because two-or-more occurrences DO produce an array. Toggle `Force array for repeated-name siblings` on when you are generating a fixture for code that always wants the array shape and never wants to branch on cardinality. Default-off matches the most common convention used by `xml2js` (with `explicitArray: false`), `xmltodict`, and most ad-hoc converters.
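Downstream code that must iterate regardless of cardinality usually normalises with a one-line helper rather than branching everywhere. This is a common consumer-side pattern, not part of the tool itself:

```javascript
// Normalise a maybe-missing, maybe-scalar, maybe-array value to an array.
const asArray = (v) => v == null ? [] : Array.isArray(v) ? v : [v];

// With the default collapse-to-scalar setting, both shapes iterate the same way:
asArray({ book: { title: "One" } }.book.title);          // ["One"]
asArray({ book: { title: ["One", "Two"] } }.book.title); // ["One", "Two"]
asArray(undefined);                                      // []
```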

Are namespaces preserved?

Yes — element and attribute names keep their prefix verbatim (`xml:lang`, `xlink:href`, `soap:Envelope`, `xsi:type`). The prefix-to-URI bindings declared by `xmlns:*` attributes are preserved as ordinary attributes on whichever element declared them, so a downstream consumer can rebuild the binding if it needs to. We do NOT materialise a separate namespaces object — the prefix on the qualified name carries the discriminating information any downstream XPath-style consumer would need, and adding a parallel namespaces map is the kind of "lossless by adding noise" choice that bloats the output for every consumer to satisfy a small minority.

Is XML to JSON lossless?

No, not in the strict round-trip sense. JSON has no comment node, so XML comments are dropped by default (with an opt-in to keep them under `#comment` as strings, useful for debugging but not part of the data). JSON has no equivalent for processing instructions or the XML declaration, so those are dropped. JSON object keys cannot repeat, so two siblings with the same name become an array — meaning the XML's strict element ordering across DIFFERENT names is preserved by JavaScript object key order (well-defined in modern engines for string keys) but is not formally guaranteed by the JSON spec. Mixed content where text and elements are interleaved (`<p>Hello <em>world</em>!</p>`) loses the interleaving — the text concatenates into `#text` and the elements live alongside it as keys. Treat the JSON output as a structural projection of the XML that suits modern code, not as a byte-for-byte alternate encoding.

How are CDATA sections converted?

A `<![CDATA[ ... ]]>` block is treated as ordinary text — its content joins the element's text node and ends up under `#text` (or as a scalar string if the element has no other content). The CDATA wrapper itself is not preserved, because JSON has no CDATA construct and the only thing CDATA does in XML is escape characters that would otherwise need entity encoding. Round-tripping back to XML through the `json-to-xml` tool will produce the same characters under regular text encoding.

What happens when the XML is malformed?

`DOMParser` does not throw on malformed XML — it returns a Document whose root (or, in Firefox, body) is a `<parsererror>` element. The wrapper detects this in two reliable shapes — the Mozilla parser-error namespace and a root-level `parsererror` element — and throws with the parser's own message, which in current Chromium and Firefox includes line and column information. We deliberately do NOT do an unconditional descendant search for `<parsererror>` because valid XML can legitimately contain a `<parsererror>` element in its own data (an audit log, a CI status feed, a payload that documents another parser's output) and we will not mis-flag it. Mismatched tags, an unclosed element, an undefined entity, an invalid character in a name, and most other well-formedness errors produce a single status message rather than partial JSON. The page does not promise that every XML file in the wild parses — it promises that when the parser stops, the message tells you where to look.
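The two-shape check can be sketched like this (a simplified illustration; the wrapper's actual details may differ). Because it inspects only the document root, a legitimate `<parsererror>` element deeper in the data is left alone:

```javascript
// Namespace Firefox uses for its XML parser-error markup.
const MOZ_ERROR_NS = "http://www.mozilla.org/newlayout/xml/parsererror.xml";

// Returns the parser's own error message, or null when the document parsed cleanly.
// `doc` is the Document returned by DOMParser.parseFromString(src, "application/xml").
function parseErrorMessage(doc) {
  const root = doc.documentElement;
  if (!root) return null;
  // Shape 1: a root-level <parsererror> element (Chromium/WebKit style).
  if (root.nodeName === "parsererror") return root.textContent;
  // Shape 2: the root lives in Mozilla's parser-error namespace (Firefox style).
  if (root.namespaceURI === MOZ_ERROR_NS) return root.textContent;
  return null;
}
```

In the browser the message it returns includes the line and column information mentioned above; the stub objects in the test exist only to exercise the shape checks outside a browser.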

Is there a file size limit for XML to JSON?

Each run stays under roughly 10 MB. The parser materialises the full document tree in memory before serialising to JSON, so very large feeds (a several-thousand-item RSS dump, a sitemap with tens of thousands of URLs, a multi-megabyte XLIFF) can stall on lower-RAM devices. If your XML is larger, split it on top-level element boundaries, convert each chunk, and concatenate the resulting JSON arrays. For multi-megabyte production payloads the right tool is usually `xq` (the XML cousin of `jq`) or a streaming SAX parser in Node — this tool is built for the everyday "paste a SOAP response or RSS feed into a debug payload" case.

Will this tool stay free?

The basic workflow is designed to stay free. Paid upgrades later will focus on bigger limits, batch work, OCR, saved presets, and ad-free use.