DNS changes do not appear everywhere at the same moment. Recursive resolvers keep cached answers until each record's time to live expires, and different providers may refresh at different times or from different network locations.
This checker compares that public recursive view for one record type at a time. It asks selected DNS-over-HTTPS resolvers for A, AAAA, CNAME, MX, NS, TXT, SOA, SRV, or CAA data, then lines up the returned answer text, TTL, authenticated-data state, and response time so you can see whether the selected resolver set is converging.
You can use it as a broad snapshot or as a cutover check. Leave the expected answer blank when you want to see which answer currently leads, or paste the value you want and score each resolver against it with exact text, contains-text, or regular-expression matching.
The percentage on screen is a resolver agreement score, not a guarantee about every client cache. A mixed result can reflect ordinary cache lag, but it can also come from location-aware answers, DNSSEC policy differences, or resolver-specific handling of EDNS Client Subnet.
Most of the work stays in the browser. The comparisons, charts, copy actions, and file exports are generated locally from the returned resolver data. The lookup itself does not stay local: the queried domain, record type, DNSSEC flags, EDNS Client Subnet value, and related request settings are sent to the selected public resolvers, and some resolver paths are relayed through a compatibility proxy so the browser can reach them. The current inputs are also reflected in the page address, which is convenient for reruns but not suitable for sensitive internal names.
The resolver catalog mixes several public DNS-over-HTTPS styles. Some providers are queried through JSON responses, while others are queried as binary DNS messages carried over HTTPS. When you enable the DNSSEC flags or enter an EDNS Client Subnet, the checker adds those request options to the outgoing query where the selected resolver path supports them.
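As a concrete sketch, the request options above can be attached as query parameters on a JSON-style DoH endpoint. The parameter names below follow Google's public JSON resolve API; other JSON resolvers accept similar but not identical options, and wire-format resolvers carry the same flags inside the binary DNS message instead:

```python
from urllib.parse import urlencode

# Record types the checker supports, with their standard DNS type numbers.
RECORD_TYPES = {"A": 1, "AAAA": 28, "CNAME": 5, "MX": 15, "NS": 2,
                "TXT": 16, "SOA": 6, "SRV": 33, "CAA": 257}

def build_doh_json_url(base, name, rtype, do=False, cd=False, ecs=None):
    """Build a DNS-over-HTTPS JSON query URL (Google-style parameters)."""
    if rtype not in RECORD_TYPES:
        raise ValueError(f"unsupported record type: {rtype}")
    params = {"name": name, "type": rtype}
    if do:
        params["do"] = "1"                  # request DNSSEC records (DO bit)
    if cd:
        params["cd"] = "1"                  # disable resolver validation (CD bit)
    if ecs:
        params["edns_client_subnet"] = ecs  # e.g. "198.51.100.0/24"
    return f"{base}?{urlencode(params)}"
```

For example, `build_doh_json_url("https://dns.google/resolve", "example.com", "TXT", do=True)` yields a query for the TXT record with the DO flag set.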
Each selected resolver is queried concurrently. Retries add extra attempts after the first request, and if more than one attempt succeeds the tool keeps the fastest successful answer for that resolver row. Returned values are then normalized before comparison: trailing dots are removed from hostname-style answers, surrounding TXT quotes are stripped, duplicate values are removed, and multi-value answers are sorted into a stable display string.
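The retry rule for a single resolver row can be sketched as follows. This is a simplified model assuming sequential attempts; `send` is a hypothetical callable standing in for one HTTP lookup that returns an answer or raises on failure:

```python
import time

def query_with_retries(send, attempts=2):
    """Run up to `attempts` lookups and keep the fastest successful one.

    Returns (elapsed_ms, answer) for the fastest success, or raises if
    every attempt failed -- mirroring a "fail" row in the results table.
    """
    best = None  # (elapsed_ms, answer) of the fastest success so far
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            answer = send()
        except Exception:
            continue  # this attempt failed; later attempts may still succeed
        elapsed_ms = (time.perf_counter() - start) * 1000
        if best is None or elapsed_ms < best[0]:
            best = (elapsed_ms, answer)
    if best is None:
        raise RuntimeError("all attempts failed")
    return best
```

Keeping the fastest success (rather than the first or the last) means a transient slow attempt does not inflate the Time (ms) column for that resolver.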
TTL is shown as the minimum TTL found in the returned answer set, not an average and not a fresh read from the authoritative zone. That conservative choice helps when a multi-record answer includes entries with different remaining cache lifetimes.
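The normalization and TTL rules above can be sketched as one pure function. This is a simplified model; the real tool's record-type-specific handling may differ:

```python
def normalize_answers(records):
    """Normalize one resolver's answer set for comparison.

    `records` is a list of (value, ttl) pairs. Surrounding TXT quotes
    and trailing dots are stripped, duplicates are removed, values are
    sorted into a stable display string, and TTL is the minimum seen.
    """
    values = set()
    min_ttl = None
    for value, ttl in records:
        v = value.strip()
        if len(v) >= 2 and v[0] == '"' and v[-1] == '"':
            v = v[1:-1]        # strip surrounding TXT quotes
        v = v.rstrip(".")      # strip trailing dot on hostname-style answers
        values.add(v)
        if min_ttl is None or ttl < min_ttl:
            min_ttl = ttl
    return ", ".join(sorted(values)), min_ttl
```

Because the display string is sorted and de-duplicated, two resolvers that return the same round-robin set in different orders still land in the same answer group.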
| Output | How it is derived | Why it matters |
|---|---|---|
| Comparison score | Uses either expected-answer matches or the largest normalized answer group, then divides by the total resolver count. | Keeps failures and holdouts visible instead of hiding them outside the percentage. |
| Status badge | Marks each row as expected or lead when it serves the target or leading answer, different or other when it diverges, and fail when no usable answer came back. | Shows immediately which resolvers support your cutover target and which do not. |
| TTL | Takes the lowest TTL seen in the resolver's answer set. | Gives a practical upper bound for how long stale cache entries may still linger there. |
| AD | Reads the authenticated-data signal reported in the resolver response. | Helps separate cache lag from validation-policy differences when DNSSEC matters. |
| Time (ms) | Measures elapsed request time for the winning resolver response. | Highlights slow providers and timeouts without confusing speed with correctness. |
| Exports | Builds CSV, DOCX, chart images, chart CSV, and JSON from the same normalized result set. | Makes it easier to share evidence from the exact snapshot you reviewed. |
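The score derivation in the first row of the table can be sketched as follows, assuming each resolver row reduces to a normalized answer string, or None for a failed lookup:

```python
def comparison_score(rows, matches_expected=None):
    """Score resolver agreement as a whole-number percentage.

    `rows` maps resolver name -> normalized answer string, or None on
    failure. With `matches_expected` (a predicate on the answer), the
    score counts matching rows; otherwise it counts the largest
    identical-answer group. Failures stay in the denominator either way.
    """
    total = len(rows)
    if total == 0:
        return 0
    answers = [a for a in rows.values() if a is not None]
    if matches_expected is not None:
        hits = sum(1 for a in answers if matches_expected(a))
    else:
        groups = {}
        for a in answers:
            groups[a] = groups.get(a, 0) + 1
        hits = max(groups.values(), default=0)
    return round(100 * hits / total)
```

Note that dividing by the total resolver count, not by the count of successful lookups, is what keeps failures visible in the percentage.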
Start simple unless the situation is clearly unusual. Enter the hostname, pick the exact record type you changed, and run the default resolver set first. The default group is Cloudflare, Google, Quad9, AdGuard, and DNS.SB, while AliDNS and OpenDNS are available when you need a wider comparison.
The expected-answer field is the main decision fork. If you leave it empty, the tool answers a descriptive question: which value is leading right now? If you fill it in, the tool answers an operational question: how many selected resolvers are already serving the value I intend to publish or depend on?
Use exact matching when you know the full answer text and want strict cutover evidence. Use contains-text when you only need one token inside a multi-value answer, such as one IP in a round-robin A set or one hostname inside an MX or NS answer. Keep regular expressions for edge cases. They are powerful, but they are also easier to misuse than plain text matching.
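The three comparison rules can be sketched as a single predicate. The case-insensitive handling here is an assumption for illustration; the tool's exact rules may differ:

```python
import re

def answer_matches(answer, expected, mode="exact"):
    """Apply one of the three comparison rules to a normalized answer."""
    if mode == "exact":
        return answer.strip().lower() == expected.strip().lower()
    if mode == "contains":
        return expected.strip().lower() in answer.lower()
    if mode == "regex":
        return re.search(expected, answer) is not None
    raise ValueError(f"unknown comparison mode: {mode}")
```

The contains-text rule is what lets one IP match inside a multi-value display string such as "203.0.113.20, 203.0.113.30" without demanding the whole set.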
The advanced settings are most useful when you already know what might distort the snapshot. DNSSEC flags can explain why answers differ in validation state. EDNS Client Subnet can help when a CDN or other location-aware system changes answers by source network. Timeout and retries matter when you are seeing transport noise rather than a clear DNS pattern.
| Setting | Best time to use it | What not to assume |
|---|---|---|
| Expected answer | When a rollout depends on one specific answer becoming visible. | A leading answer is not automatically the correct answer unless you compare it to the intended value. |
| Comparison rule | Use exact for full answers, contains-text for one token inside a longer answer, and regex only for uncommon match patterns. | Broader matching can hide important differences if you make the pattern too loose. |
| Resolver set | Start with defaults, then widen the set if your audience depends on more providers or regions. | Adding more resolvers does not make the result authoritative. It only broadens the recursive snapshot. |
| DO and CD | Use them when signed-zone validation is part of the incident or deployment review. | AD differences can reflect resolver policy and validation state, not just propagation lag. |
| EDNS Client Subnet | Use it when answers may vary by user network, especially with traffic-steered services. | Some resolvers ignore or restrict ECS behavior, so a flat result does not prove the subnet hint had no effect. |
| Timeout and retries | Raise them when transient failures are overwhelming the snapshot. | More patience improves signal quality, but it does not change the underlying DNS state. |
If you ran the checker without an expected answer, the score tells you how strong the current lead answer is among the selected resolvers. A high number means the recursive snapshot is mostly aligned around one normalized answer. It does not mean that answer is necessarily the one you published or that every client has reached the same state.
If you did provide an expected answer, the score becomes stricter. It now measures how many selected resolvers returned the specific target value under your chosen comparison rule. That is the better mode for cutovers, rollbacks, DNSSEC recovery work, or any change where the difference between dominant and correct actually matters.
TTL is often the most useful tie-breaker when answers are split. A resolver that still shows the old answer with a large remaining TTL can be behaving exactly as expected. A resolver that still shows the old answer with a near-zero TTL is a stronger hint that something else is wrong, such as an upstream cache refresh issue or a record set you did not fully update.
AD, latency, and failures each need restraint. AD tells you about the resolver's authenticated-data view, not what every downstream client validated. Latency measures speed, not correctness. A fail row means the checker did not get a usable answer for that resolver under the current settings, and failed rows remain in the denominator so the score does not look cleaner than the evidence really is.
| Signal | Practical reading | Common overread |
|---|---|---|
| 100% score | All selected resolvers agree, or all selected resolvers match the target value. | It does not prove every client, ISP cache, or private resolver sees the same thing. |
| Lead or Expected rows | These are the resolvers currently supporting the dominant or target answer. | They do not prove the answer is operationally correct unless it matches the value you intended. |
| Other or Different rows | These are holdouts or alternate answers that still need explanation. | They are not always stale caches. Geo-steered or subnet-sensitive answers can create them by design. |
| TTL wait cue | Useful for pacing your next rerun when old answers are still present. | It is not a countdown to universal completion. It only describes the returned resolver snapshot. |
| AD mismatch | Worth checking when signed zones matter or CD mode was involved. | It does not automatically mean the record change failed. Validation policy can differ across resolvers. |
| Needs rerun | The on-screen snapshot no longer matches the current inputs. | Do not keep interpreting the old score after changing the domain, record type, resolver set, or advanced flags. |
You move www.example.com from 203.0.113.10 to 203.0.113.20 and paste the new address as the expected answer. Five resolvers match the new value, one still returns the old address with a TTL of 900 seconds, and one times out. That is a strong sign that the change is underway, but not a clean moment to switch high-risk traffic if the users behind the lagging and failing resolvers matter to you.
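Because failures stay in the denominator, this snapshot scores lower than the five matches alone might suggest:

```python
# Snapshot from the cutover example above.
matching = 5   # resolvers already serving 203.0.113.20
stale = 1      # still returning 203.0.113.10 (TTL 900s)
failed = 1     # timed out; stays in the denominator

score = round(100 * matching / (matching + stale + failed))  # 5 of 7
```

The stale resolver's 900-second TTL also suggests a sensible pacing for the next rerun: within about fifteen minutes its cached entry must expire or be refreshed.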
You replace an older mail host with the MX record 10 mail2.example.net. Exact matching is appropriate because the preference and hostname both matter. If the lead answer already matches the new MX but a minority of resolvers still return the old host with non-trivial TTLs, the checker tells you that public recursive visibility is improving while delivery paths may still be split for some senders.
You update _dmarc.example.com or a sender-policy TXT record and only care whether one required token is present. Contains-text matching lets you test for that token without demanding the entire displayed TXT string in exactly one order. That is useful when the answer is long, but you should still read the full resolver matrix before declaring the policy settled.
**Does this checker query the authoritative name servers directly?** No. It compares selected public recursive resolvers. That is the right view for public visibility checks, but it is different from tracing the authoritative chain itself.
**Why do resolvers return different answers for the same record?** They may have cached answers with different remaining TTLs, they may reach different location-aware backends, or they may apply DNSSEC and ECS behavior differently.
**What does a fail row mean?** It means the checker did not obtain a usable answer from that resolver under the current settings. Timeouts, empty answers, status errors, or transport limits can all produce that result.
**Does a 100% score mean propagation is complete everywhere?** No. It means the selected resolver sample is aligned. Other public resolvers, private caches, enterprise forwarders, and client devices may still be on a different timetable.
**Does everything stay in my browser?** The analysis and exports stay in the browser, but the DNS lookups are still sent to the selected public resolvers, and some resolver requests may pass through a compatibility relay so the browser can reach them.