Introduction

DNS records tell recursive resolvers which answers to return for a name, but changes do not appear everywhere at once. Cached data expires on different schedules, providers answer from different anycast locations, and some records are intentionally tailored by network or region, so propagation is always a moving snapshot rather than a single universal moment.

This checker gives you that snapshot by querying selected public resolvers for one record type and summarizing how many of them currently agree on the same answer. It also shows TTL, authenticated-data state, and response time, which makes it easier to judge whether a cutover is settling or still fragmented.

The current package supports A, AAAA, CNAME, MX, NS, TXT, SOA, SRV, and CAA lookups. You can run a simple check with a domain and record type, or open the advanced controls to choose resolvers, set DNSSEC request flags, add an EDNS client subnet hint, and adjust timeout and retry behavior.

A common use case is a website move from one IP address to another. Instead of checking one resolver at a time, you can compare a group of public resolvers, see which answer is now the majority, and decide whether it is sensible to switch traffic, wait longer, or investigate why a subset still returns the old value.

The propagation percentage is a resolver-agreement score, not a guarantee about every cache on the internet. It is also not an authoritative-nameserver trace, so mixed answers can mean real lag, but they can also reflect geosteering, DNSSEC behavior, or different resolver policies by design.

Everyday Use & Decision Guide

For most checks, the fast path is enough: enter a fully qualified domain name, choose the record type, and run the default resolver set. The tool then builds a resolver table, an answer-distribution chart, a latency chart, and a JSON export from the same result set.

The domain input is slightly more forgiving than the placeholder suggests. The package extracts the hostname from a pasted HTTP URL if needed, removes a trailing dot before querying, and still validates the result as a proper domain name. It also allows underscores in labels, which matters for service and policy names such as _dmarc.example.com or selector-based TXT records.
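A minimal sketch of that normalization, assuming the behavior described above; the function name is illustrative, not the package's actual API:

```javascript
// Sketch of the input handling described above (illustrative, not the
// package's real code): extract a hostname from a pasted URL, drop a
// trailing dot, then validate with underscores allowed in labels.
function normalizeDomainInput(raw) {
  let host = raw.trim();
  // Pull the hostname out of a pasted HTTP(S) URL.
  if (/^https?:\/\//i.test(host)) {
    try { host = new URL(host).hostname; } catch { /* fall through to validation */ }
  }
  // Remove a single trailing dot before querying.
  host = host.replace(/\.$/, "");
  // Validate: <=253 chars, at least two labels, labels 1..63 chars,
  // no leading or trailing dash, underscores permitted.
  if (host.length === 0 || host.length > 253) return null;
  const labels = host.split(".");
  if (labels.length < 2) return null;
  const labelRe = /^(?!-)[A-Za-z0-9_-]{1,63}(?<!-)$/;
  return labels.every((l) => labelRe.test(l)) ? host : null;
}

console.log(normalizeDomainInput("https://www.example.com/path")); // "www.example.com"
console.log(normalizeDomainInput("_dmarc.example.com."));          // "_dmarc.example.com"
```

Single-label inputs such as `localhost` fail the two-label rule, which matches the tool's focus on public names.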

Common controls and what they affect
Control What It Changes Why You Would Use It
Record type Switches the question between A, AAAA, CNAME, MX, NS, TXT, SOA, SRV, and CAA. Use the exact record family you are changing rather than inferring from a different type.
Resolvers Chooses which public recursive resolvers are queried. Useful when you want a broader or narrower public snapshot.
DNSSEC DO bit Requests DNSSEC-related data from supporting resolvers. Helpful when you want propagation plus DNSSEC context in the same pass.
DNSSEC CD bit Requests checking-disabled semantics from supporting resolvers. Useful for comparative troubleshooting when validated and non-validated views may differ.
EDNS client subnet Adds a client-subnet hint to the query. Helpful when CDN or geo answers may change by network.
Timeout and retries Changes how long the tool waits and how many attempts it makes per resolver. Useful when you need a stricter or more patient snapshot.

The default resolver set is Cloudflare, Google, Quad9, AdGuard, and DNS.SB, while a full selection can add AliDNS and OpenDNS. The table is usually the first place to look when a result is ambiguous because you can sort by input order, answer, latency, or resolver name and quickly separate the majority result from outliers and failures.

The two charts serve different jobs. Answer Distribution shows whether the selected resolvers converge on one answer or split across several. Latency Trend highlights which resolvers were slow or failed.

This tool sends the queried domain to public DNS-over-HTTPS services, and the current inputs are also mirrored into the page address as query parameters. That is convenient for repeat checks, but it means you should avoid using sensitive internal names or sharing a link that exposes more than you intend.

Technical Details

The checker works entirely in the browser and queries each selected resolver concurrently. Most providers are queried through JSON-based DNS-over-HTTPS endpoints. Quad9 and OpenDNS are handled differently in this package: the app builds a DNS wire-format request, Base64URL-encodes it into a dns= query parameter, and then parses the binary response after it returns.
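The wire-format path can be sketched as follows; the function names and the endpoint URL are assumptions for illustration, not the package's API:

```javascript
// Illustrative sketch of the GET-style wire-format path described above:
// build a DNS question, Base64URL-encode it, and put it in a dns= parameter.
function buildDnsQuery(name, qtype) {
  // 12-byte header: ID=0, flags=0x0100 (RD set), QDCOUNT=1
  const header = [0, 0, 0x01, 0x00, 0, 1, 0, 0, 0, 0, 0, 0];
  const qname = [];
  for (const label of name.split(".")) {
    qname.push(label.length);
    for (const ch of label) qname.push(ch.charCodeAt(0));
  }
  qname.push(0); // root label terminates QNAME
  const tail = [(qtype >> 8) & 0xff, qtype & 0xff, 0, 1]; // QTYPE, QCLASS=IN
  return Uint8Array.from([...header, ...qname, ...tail]);
}

function base64Url(bytes) {
  // RFC 4648 base64url with padding stripped, as DoH GET requests expect.
  return Buffer.from(bytes).toString("base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// With ID 0 this reproduces the worked example in RFC 8484:
const dnsParam = base64Url(buildDnsQuery("example.com", 1)); // qtype 1 = A
// dnsParam === "AAABAAABAAAAAAAAB2V4YW1wbGUDY29tAAABAAE"
const url = `https://dns.quad9.net/dns-query?dns=${dnsParam}`;
```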

Some providers can be called directly from the browser, while others in this package use a browser-compatible proxy path because of cross-origin restrictions. The output still becomes the same normalized per-resolver row containing answer text, minimum TTL, AD status, latency in milliseconds, and a normalized comparison key.

The response parser is intentionally conservative. For JSON answers, it ignores OPT pseudo-records, pulls the minimum TTL across answer records, strips surrounding quotes from TXT data, removes trailing dots from hostname-style answers, sorts unique values, and joins them into one display string. The majority comparison is therefore based on normalized content rather than on formatting differences between resolvers.
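The normalization steps above can be sketched as one small function; this is an illustrative reconstruction, not the package's exact code:

```javascript
// Sketch of the conservative normalization described above: skip OPT
// pseudo-records (type 41), strip surrounding TXT quotes, drop trailing
// dots, then sort unique values into a single comparison key.
function normalizeAnswers(records) {
  const values = records
    .filter((r) => r.type !== 41)                   // ignore OPT pseudo-records
    .map((r) => r.data.replace(/^"(.*)"$/s, "$1"))  // strip surrounding TXT quotes
    .map((v) => v.replace(/\.$/, ""));              // drop trailing dot
  return [...new Set(values)].sort().join(", ");
}

console.log(normalizeAnswers([
  { type: 1, data: "203.0.113.7" },
  { type: 1, data: "203.0.113.7" }, // duplicate collapses to one value
]));
```

Two resolvers that format the same CNAME as `cdn.example.net.` and `cdn.example.net` produce identical keys, so formatting alone cannot split the majority count.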

The propagation percentage is computed as:

M = count of rows whose normalized answer equals the majority key
N = total selected resolvers, including failures
P = round((M / N) × 100)
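The same calculation as code, assuming each row carries a normalized comparison key and failed rows carry an empty one:

```javascript
// Illustrative propagation-score calculation: failures keep an empty
// comparison key but stay in the denominator.
function propagationScore(rows) {
  const counts = new Map();
  for (const row of rows) {
    if (row.key) counts.set(row.key, (counts.get(row.key) || 0) + 1);
  }
  const majority = Math.max(0, ...counts.values());
  return Math.round((majority / rows.length) * 100);
}

// 5 of 7 resolvers on the new answer, 1 on the old, 1 failed:
propagationScore([
  ...Array(5).fill({ key: "198.51.100.10" }),
  { key: "203.0.113.7" },
  { key: "" }, // failed resolver
]); // 71
```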
Validation and transport behavior
Field or Rule Current Package Behavior
Domain validation Maximum 253 characters, at least two labels, labels up to 63 characters, no leading or trailing dash, underscores allowed.
Timeout User input is clamped to a practical floor of 500 ms per attempt during execution.
Retries The tool performs retries + 1 total attempts per resolver.
Success selection When a resolver returns more than one successful attempt, the fastest successful attempt wins.
Failure rows No answer produces an empty answer field, null TTL, null or absent AD, and still counts in the denominator for propagation percentage.
ECS parsing The package accepts IPv4 or IPv6 CIDR input, normalizes the prefix length, masks partial bytes correctly, and encodes the result into EDNS option 8 when present.
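The partial-byte masking in the ECS row can be sketched for the IPv4 case; the function name is illustrative, and the package additionally handles IPv6:

```javascript
// Illustrative ECS encoding for IPv4: EDNS option 8 carries FAMILY,
// SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH, then the address masked
// and truncated to ceil(prefix / 8) bytes.
function ecsOptionData(cidr) {
  const [addr, prefixStr] = cidr.split("/");
  const prefix = Math.min(32, Number(prefixStr));
  const octets = addr.split(".").map(Number);
  const nBytes = Math.ceil(prefix / 8);
  const masked = octets.slice(0, nBytes).map((o, i) => {
    const bitsKept = Math.min(8, prefix - i * 8);  // bits retained in this byte
    return o & (0xff << (8 - bitsKept)) & 0xff;    // zero out the partial remainder
  });
  // FAMILY=1 (IPv4), SOURCE=prefix, SCOPE=0, then the masked address bytes
  return Uint8Array.from([0, 1, prefix, 0, ...masked]);
}

// 203.0.113.77/20 keeps 20 bits: the third octet 113 (0b01110001)
// masked with 0xF0 becomes 112, and the fourth octet is dropped.
ecsOptionData("203.0.113.77/20"); // Uint8Array [0, 1, 20, 0, 203, 0, 112]
```

Masking the partial byte matters because some resolvers reject ECS options whose address bits extend past the stated prefix length.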

The result table, charts, and JSON export all draw from the same normalized dataset. The table can be copied as CSV, downloaded as CSV, or exported as DOCX. Each chart can be downloaded as PNG, WebP, JPEG, or CSV. The JSON tab contains the original inputs, aggregate statistics, and the raw per-resolver result rows.

The answer chart is a donut whose categories are the unique answer strings plus a Fail bucket for missing answers. The latency chart is a bar chart of response time per resolver, with missing timings kept aligned to the resolver list.

TTL is shown as the minimum TTL across the returned answer set, not as an average. That is a useful conservative choice because it highlights the shortest caching horizon visible in the resolver's current response, though it still represents what that resolver returned at query time rather than the original zone value at publication time.

Because this is a public-resolver snapshot, the tool does not contact authoritative nameservers directly, and it does not prove a parent referral or a full delegation chain. It is best treated as a recursive-view comparison tool whose strength is showing public agreement, disagreement, and lag across a selected resolver set.

Interpreting Results

A high propagation percentage means that most selected resolvers currently normalize to the same answer. It does not necessarily mean the answer is correct, reachable, or visible to every client. A stale 100 percent can happen if all selected resolvers still have the same old record, and a mixed result can happen even when the change is working as intended if answers vary by geography or network.

The majority answer badge is a practical shortcut, not a verdict. When one answer dominates and the remaining rows are a small minority, you are probably looking at normal propagation lag. When two answers split more evenly, you are either in the middle of a real change window or dealing with a record family that is intentionally diverse across resolvers.

The TTL column helps explain why caches might linger. A high TTL suggests the resolver may continue serving that answer for longer. A low TTL means it is closer to requerying, though it still does not guarantee when every downstream client or intermediate cache will refresh.

The AD column is only the resolver's reported authenticated-data state for that response. It is useful DNSSEC context, but it is not end-to-end proof that every client path validated the same data. A blank answer may mean timeout, refusal, no matching records, or a temporary transport issue, and failures still lower the propagation percentage because they remain in the denominator.

Latency tells you about response speed, not about correctness. A slow resolver that returns the majority answer can still be operationally important, and a fast resolver that returns a minority answer can still be the real signal that propagation is incomplete for some users.

Step-by-Step Guide

  1. Enter the domain you want to inspect. If you paste a URL, the package extracts the hostname before validation.
  2. Choose the exact record type you changed or expect clients to read, such as A for host routing, MX for mail delivery, or TXT for policy records.
  3. Run the default resolver set first unless you already know you need a wider or narrower comparison.
  4. Open the advanced panel only when the situation calls for it. Use DO or CD for DNSSEC-oriented checks, ECS for geo-sensitive answers, and longer timeouts or retries when networks are unstable.
  5. Read the resolver table before the charts. It tells you which resolvers agree, which differ, which failed, and how the TTL and AD values line up.
  6. Use the Answer Distribution and Latency Trend charts to summarize the pattern, then export CSV, DOCX, chart files, or JSON if you need a record of the snapshot.
Worked propagation example. If 5 of 7 selected resolvers now return the new A record while 2 still return the old one, the tool reports a 71 percent propagation score after rounding. That is usually a sign that the change is visible in much of the selected public resolver set but not yet uniform.

Worked Examples

Website cutover. You move www.example.com to a new IPv4 address and query the A record. The majority answer changes quickly, but two resolvers still return the old IP with a sizable TTL. That tells you the public view is trending in the right direction while a minority still needs time to age out.

Mail migration. You change an MX record and rerun the checker several times over an hour. If the majority answer stabilizes and failures stay low, you have a cleaner signal that the recursive public view is settling, though you should still test actual mail flow separately because this tool does not verify delivery.

Policy TXT record. You query _dmarc.example.com as TXT. Because the validator allows underscore labels, the package accepts the service-style name directly. The answer parser strips surrounding TXT quotes before majority comparison so equivalent strings are less likely to look different just because resolvers formatted them differently.

FAQ

Does this query authoritative nameservers?

No. The package compares public recursive resolvers. That is useful for visibility checks, but it is different from tracing the authoritative chain directly.

Why can two resolvers disagree when nothing is broken?

They may have different cache ages, different anycast vantage points, different DNSSEC behavior, or intentionally different answers because of traffic steering and ECS logic.

What does a blank answer row mean?

It means the tool did not produce a usable answer for that resolver under the current settings. Timeouts, empty responses, status errors, or transport issues can all lead to that outcome.

Can I paste a full URL instead of only a hostname?

Yes. The package extracts the hostname when the input begins with an HTTP or HTTPS scheme, then validates it before querying.

Does the propagation percentage ignore failed resolvers?

No. The denominator is the total selected resolver count, so failures lower the percentage even when the successful answers all agree.

Does this prove that the target service is reachable?

No. The checker compares DNS answers and response timings from recursive resolvers. It does not test reachability, delivery, TLS configuration, or application health.

Glossary

Recursive resolver
A DNS service that looks up answers on behalf of clients and may cache them for later use.
TTL
Time to live, the cache lifetime associated with a returned record set.
DNS-over-HTTPS
A way to send DNS queries over HTTPS rather than traditional UDP or TCP port 53 transport.
DO bit
A DNSSEC-related request flag that asks supporting resolvers to include DNSSEC records.
CD bit
A checking-disabled request flag used when you want to compare responses without normal resolver validation behavior.
AD flag
An authenticated-data response signal returned by supporting resolvers when they report validated data.
ECS
EDNS Client Subnet, a hint that can influence geo-sensitive DNS answers by supplying a client network prefix.
