URL Parser

Break URLs into parts — scheme, host, port, path, query, fragment, IDN, and PSL.

How to use URL Parser

  1. Paste a URL into the top input.
  2. Open Add base URL if the URL is relative or protocol-relative.
  3. Read the Parts card for protocol, host, pathname, credentials, and origin.
  4. Scan the Domain breakdown row for subdomain / SLD / TLD.
  5. Edit the Query parameters table to tweak keys, values, or ordering.
  6. Copy the Cleaned URL or Share a link that restores the session.
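The resolution in step 2 follows standard WHATWG URL behaviour, which you can sketch in plain JavaScript (the URLs below are hypothetical examples):

```javascript
// Resolve a protocol-relative input against an optional base,
// then read the same fields the Parts card displays.
const base = "https://app.example.com/dashboard";
const input = "//cdn.example.com/assets/logo.svg?v=2";

const u = new URL(input, base); // base is only consulted when input is relative
console.log(u.protocol); // "https:" (inherited from the base)
console.log(u.host);     // "cdn.example.com"
console.log(u.pathname); // "/assets/logo.svg"
console.log(u.search);   // "?v=2"
```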

URL Parser

A client-side URL inspector. Paste any URL — absolute, relative, with credentials, with an IPv6 literal host, with a punycode IDN — and see every component split out with per-field copy buttons, an editable query-parameter table, and a live Cleaned URL at the top. Runs in your browser, never touches a server, uses only the WHATWG URL and URLSearchParams globals.

Why a dedicated URL Parser

Every time you debug an OAuth redirect, a tracking-link misconfiguration, or a CDN rewrite rule, you end up writing the same one-liner: const u = new URL(x); u.searchParams.getAll("utm_source"); u.pathname.split("/"). It works, but it's also where silent bugs creep in. Is localhost a domain or a special case? Is the port 0 or an empty string? Did that trailing slash become an empty last segment or get stripped? Is the hostname xn--something a punycode label, and if so, what's the Unicode form? The built-in URL parser answers the first question but forces you to answer the rest by hand, every time.
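A quick console session shows why those questions need asking; the answers come straight from WHATWG URL behaviour:

```javascript
const u = new URL("https://sub.example.co.uk:443/a/b/");

// Default ports are dropped, so port is an empty string, not 0.
console.log(u.port); // ""

// A trailing slash survives as an empty last segment.
console.log(u.pathname.split("/")); // [ "", "a", "b", "" ]

// localhost is parsed as an ordinary domain, not a special case.
console.log(new URL("http://localhost/").hostname); // "localhost"
```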

The URL Parser tool runs the full pipeline in one go. It classifies the host as domain, ipv4, or ipv6. It splits domains against a built-in mini-PSL so sub.example.co.uk correctly shows co.uk as the TLD and example as the SLD. It percent-decodes every path segment and compares against the re-encoded form, flagging segments that the browser would re-normalise. It groups duplicate query keys so ?a=1&a=2&a=3 shows a single row with an ×3 badge, and it lets you edit those rows live — including reordering — with the Cleaned URL staying in sync on every keystroke.
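The duplicate-key grouping takes only a few lines of plain JavaScript; this is a minimal sketch of the idea, not the tool's actual implementation:

```javascript
// Collapse ?a=1&a=2&a=3 into one row per key, the way the xN badge does.
const params = new URLSearchParams("a=1&a=2&a=3&b=x");
const grouped = new Map();
for (const [key, value] of params) {
  if (!grouped.has(key)) grouped.set(key, []);
  grouped.get(key).push(value);
}
console.log(grouped.get("a")); // [ "1", "2", "3" ]  -> one row, x3 badge
console.log(grouped.get("b")); // [ "x" ]
```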

What it does

  • Component breakdown. Protocol, origin, host (with port), hostname, port, pathname, hash — every WHATWG URL field, each with a one-click copy button.
  • Credential handling. Percent-decodes username and password into a redacted display; a Reveal toggle shows the plaintext only when you explicitly ask.
  • Host classification. Marks IPv4 (127.0.0.1), IPv6 ([::1]:80), and domain hosts with distinct badges. Empty hosts (data URLs) are flagged too.
  • IDN / punycode. Shows both the Unicode and punycode forms for any host containing xn-- labels, using a built-in RFC 3492 decoder — zero deps, zero network.
  • PSL split. Extracts subdomain / SLD / TLD using a curated list of ~80 common two-label suffixes. Falls back cleanly to single-label TLDs.
  • Path segments. Lists each path segment with the raw, decoded, and re-encoded forms. Segments where the re-encoding differs from the raw form get a warning badge — a common source of CDN-routing surprises.
  • Editable query table. Key, value, reorder, delete, add — every edit round-trips through URLSearchParams so duplicate keys and special characters are preserved exactly.
  • Relative resolution. Provide an optional base URL and relative or protocol-relative inputs resolve against it.
  • Cleaned URL. The canonical output after new URL(...) normalises ports, case, and encoding. Copy it with a single click.
  • Shareable state. The input URL (and optional base) round-trip through the hash fragment, so links you share restore the whole session without ever hitting a server.
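What the Cleaned URL bullet means in practice: new URL() lower-cases the scheme and host, drops default ports, and collapses dot segments. For example:

```javascript
// One messy input, one canonical output.
const cleaned = new URL("HTTPS://Example.COM:443/a%2Fb/../c?q=1#frag").href;
console.log(cleaned); // "https://example.com/c?q=1#frag"
```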

What's not supported (yet)

  • Full Mozilla Public Suffix List. The mini-PSL handles common country-code TLDs; exotic ones may fall back to single-label. Use a Node script with tldts if you need the full list.
  • data: URL inner parsing. Data URLs are recognised and marked opaque, but the embedded MIME type and body are not decoded here — see the Base64 and URL Encode tools.
  • mailto:/javascript: opaque schemes. They parse but don't get the structured breakdown — there's no host or path to split.

Related tools

  • URL Encode / Decode — percent-encode and decode individual URL values.
  • JWT Decode — inspect JSON Web Tokens that often travel in query parameters.
  • JSON Formatter — pretty-print JSON values pulled out of query strings.

Frequently asked questions

How is this different from just calling new URL() in the console?

The WHATWG `URL` constructor gives you the raw fields — protocol, host, pathname, search, hash — but stops there. It doesn't split the path into segments, doesn't enumerate duplicate query keys as arrays, doesn't tell you whether the hostname is IPv4, IPv6, or a domain, doesn't run the public-suffix-list split into subdomain/SLD/TLD, and doesn't surface the Unicode display form next to the punycode one. The URL Parser tool runs all of that on top of `new URL()` so you can see every component at a glance, round-trip edits in place, and copy any single field without selecting text out of the browser console output.
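Those extra layers are easy to sketch on top of `new URL()`. The classifier and segment decoder below are simplified assumptions about the approach, not the tool's exact code:

```javascript
const u = new URL("https://[2001:db8::1]/docs/a%20b/");

// WHATWG hostname keeps the brackets for IPv6 literals.
const kind = u.hostname.startsWith("[") ? "ipv6"
  : /^\d{1,3}(\.\d{1,3}){3}$/.test(u.hostname) ? "ipv4"
  : "domain";
console.log(kind); // "ipv6"

// Per-segment raw and decoded forms, like the Path segments card.
const segments = u.pathname.split("/").filter(Boolean)
  .map((raw) => ({ raw, decoded: decodeURIComponent(raw) }));
console.log(segments);
// [ { raw: "docs", decoded: "docs" }, { raw: "a%20b", decoded: "a b" } ]
```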

Does it support internationalised domain names (IDN)?

Yes. When you paste a Unicode hostname like `https://例え.jp`, the WHATWG `URL` parser converts it to punycode (`xn--r8jz45g.jp`) — that's how it gets sent on the wire and that's what the tool stores in the cleaned URL. The tool also ships a built-in RFC 3492 punycode decoder, so when a hostname contains `xn--` labels the Unicode display form is shown next to the ASCII form with an `idn` badge. No external library required, no server round-trip.
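You can watch the conversion happen in any WHATWG-compliant runtime:

```javascript
// IDNA ToASCII runs inside the URL parser itself.
const u = new URL("https://例え.jp/path");
console.log(u.hostname); // "xn--r8jz45g.jp"
console.log(u.href);     // "https://xn--r8jz45g.jp/path"
```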

Why don't you use the full Mozilla Public Suffix List?

The full PSL is a ~250 KB text file with thousands of entries and its own licensing considerations. For a privacy-first browser tool, shipping it would blow the bundle budget for a feature most users don't need. The URL Parser instead carries a curated table of about 80 common two-label TLDs — `co.uk`, `com.au`, `co.jp`, `com.br`, and friends — which correctly classifies the vast majority of real-world URLs. Anything else falls back to a one-label TLD, which is the right answer for `example.com`, `acme.io`, and other single-label public suffixes. If you need the full PSL, the Node ecosystem has `tldts` and `psl`; we deliberately don't bundle them.
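The mini-PSL split boils down to one lookup. This sketch uses a tiny hypothetical suffix table (the tool's real list has ~80 entries), so treat it as an illustration of the idea, not the shipped code:

```javascript
// Hypothetical subset of the mini-PSL two-label suffix table.
const TWO_LABEL_SUFFIXES = new Set(["co.uk", "com.au", "co.jp", "com.br"]);

function splitDomain(hostname) {
  const labels = hostname.split(".");
  const lastTwo = labels.slice(-2).join(".");
  // Two labels of TLD if the suffix is in the table, otherwise one.
  const tldLabels = TWO_LABEL_SUFFIXES.has(lastTwo) ? 2 : 1;
  return {
    tld: labels.slice(-tldLabels).join("."),
    sld: labels[labels.length - tldLabels - 1] ?? "",
    subdomain: labels.slice(0, -(tldLabels + 1)).join("."),
  };
}

console.log(splitDomain("sub.example.co.uk"));
// { tld: "co.uk", sld: "example", subdomain: "sub" }
console.log(splitDomain("acme.io"));
// { tld: "io", sld: "acme", subdomain: "" }
```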

Can I edit the query parameters in place?

Yes. The Query parameters table lets you edit any key or value, delete rows, reorder them with the up/down arrows, and add new rows at the bottom. Every edit re-assembles the URL live using `URLSearchParams` — which means duplicate keys are preserved, special characters are re-encoded correctly, and the Cleaned URL at the top stays in sync. When you're done, hit Copy cleaned URL to grab the result, or Share to produce a link that restores the whole state.
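The round-trip itself is standard URLSearchParams plumbing; a minimal sketch of an "add row" edit (the URL and rows here are hypothetical):

```javascript
const u = new URL("https://example.com/search?q=a%26b&tag=x&tag=y");

// Rebuild from (key, value) rows, as the table does after an edit.
const rows = [...u.searchParams]; // [["q","a&b"], ["tag","x"], ["tag","y"]]
rows.push(["page", "2"]);         // "add row" at the bottom
u.search = new URLSearchParams(rows).toString();

// Duplicate keys survive and "&" is re-encoded as %26.
console.log(u.href);
// "https://example.com/search?q=a%26b&tag=x&tag=y&page=2"
```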