Advertising on the web often depends on several layers that are hard to see from the page alone. One partner may provide an ad server tag, another may supply header bidding, and a third may appear only in the publisher's seller declarations. This detector turns that scattered surface into a single audit view for one public URL.
The practical value is speed with context. Instead of reading raw markup and seller files by hand, you can submit one site address and get a report that shows which known ad networks were matched, how strong the evidence was, whether redirects changed the scan path, and whether ads.txt sellers align with the detected stack.
That makes the tool useful for several nearby jobs. A publisher can sanity-check a live page before a tag rollout. An agency or revops team can compare the declared supply path against the partners they expect to see. A technical auditor can use one run as a baseline before opening deeper browser or request-level diagnostics.
The scan is deliberately narrow. It fetches one public page and, when enabled, tries the related seller file for that host. It does not crawl the whole domain, log into private pages, or execute the page's client-side JavaScript. In other words, the result is a disciplined first-pass inventory, not a complete rendering trace.
That boundary matters because a quiet result is not the same thing as a clean monetization stack. If the page loads partners only after consent, deferred scripts, or other browser events, the tool can miss them even when live ads eventually appear in a user session.
A strong first use case is a public page where you want a fast answer to a simple question: what ad-tech signals are visible from a straightforward fetch? The detector is best on article pages, category pages, or landing pages that expose a representative markup path. It is weak on authenticated flows, localhost targets, private IP ranges, and anything that only reveals demand partners after browser execution.
Start with one canonical URL and leave the default scan behavior in place unless you have a reason to narrow it. The optional ads.txt scan is important when you care about seller declarations and supply-path transparency, while redirect following helps when the public entry point lands somewhere other than the typed address. If a run comes back with truncation or warnings, widen the scan envelope before you turn the result into a decision.
The summary is easiest to read in three layers. First, check whether the helper reached the right place by comparing the target and final route, the HTTP code, and the fetch time. Second, look at Networks Detected, High Confidence Networks, and the confidence mix to decide whether the result is mostly direct evidence or only weak hints. Third, compare the page findings with the seller-file fields to see whether the declared supply chain supports the same picture.
The most common mistake is over-reading a small number of clues. A high-confidence match is a strong signal that a known partner left recognizable evidence in the fetched page or seller file, but it does not tell you whether that partner is contractually expected, privacy-compliant, or correctly configured. The opposite mistake is treating zero findings as proof that no monetization exists. That can also mean the route was not representative, the body was capped, or the partner logic lived entirely in deferred browser execution.
Use the result as a ranking tool, not a verdict. When the scan looks stable, the recommendation list helps you decide what to verify next. When the scan looks thin or noisy, the better move is usually another controlled rerun rather than a stronger conclusion.
This package has both browser and server-side parts. The page itself collects the URL and scan options, then posts them to a helper that performs the fetch. That helper normalizes the address, assumes HTTPS when the scheme is missing, rejects unsupported schemes, and refuses any target outside ports 80 and 443. It also blocks localhost, loopback, private ranges, and link-local addresses before any request is attempted.
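The guardrails above can be sketched as a single pre-flight validation step. This is an illustrative TypeScript sketch, not the package's actual code: the function name `validateTarget` and the private-host regex are assumptions, and the regex is deliberately simplified rather than an exhaustive private-range check.

```typescript
// Illustrative, not an exhaustive private-range check.
const PRIVATE_HOST =
  /^(localhost|127\.|10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.|169\.254\.|\[::1\])/i;

// Hypothetical sketch of the pre-flight guardrails described above.
function validateTarget(raw: string): URL {
  // Assume HTTPS when the scheme is missing.
  const withScheme = /^[a-z][a-z0-9+.-]*:\/\//i.test(raw) ? raw : `https://${raw}`;
  const url = new URL(withScheme);

  // Reject unsupported schemes outright.
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    throw new Error(`unsupported scheme: ${url.protocol}`);
  }

  // Refuse any target outside ports 80 and 443.
  const port = url.port || (url.protocol === "https:" ? "443" : "80");
  if (port !== "80" && port !== "443") {
    throw new Error(`blocked port: ${port}`);
  }

  // Block localhost, loopback, private, and link-local destinations.
  if (PRIVATE_HOST.test(url.hostname)) {
    throw new Error(`blocked private host: ${url.hostname}`);
  }
  return url;
}
```

Running the checks in this order means a hostile target is rejected before any network request is attempted, which is what keeps the helper from acting as an internal network fetcher.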
Once a target passes those guardrails, the helper fetches the page with manual redirect handling and a bounded body read. Redirect following can continue for up to six hops on the main page fetch. The body reader stops at the selected byte cap, which is why the result can explicitly mark the page as truncated. That matters because truncated markup can lower evidence density and hide late sections of a page that would otherwise expose partner signals.
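The bounded body read is the part that produces the explicit truncation flag. A minimal sketch, written over an iterable of byte chunks for simplicity; `readBounded` is an illustrative name, not the package's API.

```typescript
// Read chunks until the byte cap, keeping a truncation flag so the
// report can mark the page as truncated rather than silently short.
function readBounded(
  chunks: Iterable<Uint8Array>,
  maxBytes: number,
): { text: string; truncated: boolean } {
  const kept: Uint8Array[] = [];
  let total = 0;
  let truncated = false;

  for (const chunk of chunks) {
    if (total + chunk.length > maxBytes) {
      kept.push(chunk.subarray(0, maxBytes - total)); // keep only up to the cap
      total = maxBytes;
      truncated = true; // surfaced later as a truncation warning
      break;
    }
    kept.push(chunk);
    total += chunk.length;
  }

  // Merge the kept chunks and decode once at the end.
  const merged = new Uint8Array(total);
  let offset = 0;
  for (const part of kept) {
    merged.set(part, offset);
    offset += part.length;
  }
  return { text: new TextDecoder().decode(merged), truncated };
}
```

The important design point is that the flag travels with the text: downstream detection can then report "no findings, but truncated" instead of a bare zero.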
The actual detection step is rule based. The helper extracts script sources, iframe sources, image sources, stylesheet links, inline URLs found inside script content, and a limited set of inline script bodies. It then compares those clues against a known rule table for networks such as Google Ad Manager, Amazon Publisher Services, Prebid.js, Criteo, Taboola, Outbrain, PubMatic, OpenX, Magnite, Index Exchange, Xandr, and others. Each rule carries URL patterns, keyword patterns, and, where relevant, seller domains for ads.txt matching.
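The rule-based step can be pictured as a table scan over extracted clues. The rule shape and the two sample rules below are simplified illustrations; the real table covers many more networks and pattern kinds.

```typescript
interface NetworkRule {
  network: string;
  urlPatterns: RegExp[];     // matched against script/iframe/img/link URLs
  keywordPatterns: RegExp[]; // matched against inline script bodies
}

// Two simplified sample rules; the real rule table is much larger.
const RULES: NetworkRule[] = [
  {
    network: "Google Ad Manager",
    urlPatterns: [/securepubads\.g\.doubleclick\.net/i],
    keywordPatterns: [/googletag\.cmd/i],
  },
  {
    network: "Prebid.js",
    urlPatterns: [/prebid(\.[\w-]+)?\.js/i],
    keywordPatterns: [/\bpbjs\.que\b/i],
  },
];

// Count raw clue hits per network; aggregation happens later.
function matchClues(urls: string[], inlineScripts: string[]): Map<string, number> {
  const hits = new Map<string, number>();
  const bump = (n: string) => hits.set(n, (hits.get(n) ?? 0) + 1);
  for (const rule of RULES) {
    for (const u of urls)
      if (rule.urlPatterns.some((p) => p.test(u))) bump(rule.network);
    for (const s of inlineScripts)
      if (rule.keywordPatterns.some((p) => p.test(s))) bump(rule.network);
  }
  return hits;
}
```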
If seller-file scanning is enabled, the helper also tries the host's /ads.txt path and a simple www or bare-host variant. That second fetch path has its own redirect limit and byte cap, and the payload trims the returned entries when the file is very large. Parsed seller rows are then compared against the network rule set so the tool can report seller matches separately from page-body signatures.
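Parsing the seller file itself is mechanical: each data line carries a seller domain, an account ID, a relationship, and an optional certification authority ID, with comment and variable lines skipped. A hedged sketch of that parse step, with field names chosen to mirror the report columns rather than the package's actual types:

```typescript
interface SellerRow {
  seller_domain: string;
  account_id: string;
  relationship: string; // DIRECT or RESELLER
  cert_authority_id?: string;
}

function parseAdsTxt(body: string): SellerRow[] {
  const rows: SellerRow[] = [];
  for (const raw of body.split(/\r?\n/)) {
    const line = raw.split("#")[0].trim(); // strip comments
    if (!line || line.includes("=")) continue; // skip blanks and variables (contact=, subdomain=)
    const parts = line.split(",").map((p) => p.trim());
    if (parts.length < 3) continue; // need domain, account id, relationship
    rows.push({
      seller_domain: parts[0].toLowerCase(),
      account_id: parts[1],
      relationship: parts[2].toUpperCase(),
      cert_authority_id: parts[3] || undefined,
    });
  }
  return rows;
}
```

The parsed `seller_domain` values are what get compared against the rule set's seller domains, which is how seller matches can be reported separately from page-body signatures.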
The final result is an aggregation layer rather than a raw evidence dump. Multiple clues for the same network are merged into one finding row that keeps the category, matched source classes, a sample evidence string, the matched patterns, the evidence count, a score, and a confidence label. The recommendation table is then prioritized from that normalized result, the fetch state, warning count, the seller-file outcome, and the selected remediation focus.
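The merge step can be sketched as folding raw clues into one finding per network. The `Clue` and `Finding` shapes below are assumptions that mirror the report columns, not the package's actual types:

```typescript
interface Clue { network: string; source: string; pattern: string; value: string }

interface Finding {
  network: string;
  sources: string[];     // distinct source classes that contributed
  sample: string;        // one representative evidence string
  patterns: string[];    // distinct matched patterns
  evidenceCount: number; // total raw clues merged in
}

function aggregate(clues: Clue[]): Finding[] {
  const byNetwork = new Map<string, Finding>();
  for (const c of clues) {
    let f = byNetwork.get(c.network);
    if (!f) {
      f = { network: c.network, sources: [], sample: c.value, patterns: [], evidenceCount: 0 };
      byNetwork.set(c.network, f);
    }
    if (!f.sources.includes(c.source)) f.sources.push(c.source);
    if (!f.patterns.includes(c.pattern)) f.patterns.push(c.pattern);
    f.evidenceCount += 1;
  }
  return [...byNetwork.values()];
}
```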
The score uses the strongest weight seen for each source class, then adds a small diversity bonus and a small evidence-density bonus. In this implementation, direct page references such as script, iframe, or inline URL matches carry more weight than image, stylesheet, or generic HTML hints, while seller-file matches sit in the middle. Confidence is then assigned from the score: high at 8 or above, medium from 4 to just under 8, and low below 4.
| Evidence source | What it means here | Weight |
|---|---|---|
| Direct page reference | script-src, iframe-src, or inline URL that points at a known partner host | 4 |
| Seller-file match | A parsed ads.txt seller domain that maps to a known network rule | 3 |
| Inline script keyword | Partner-specific loader or keyword pattern inside inline script content | 2 |
| Supporting markup clue | Image, stylesheet, or generic HTML keyword evidence | 1 |
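Putting the weights and thresholds together, the scoring rule can be sketched as follows. The weights and the 8/4 confidence cut-offs come from the description above; the exact sizes of the diversity and density bonuses are assumptions for illustration.

```typescript
// Weights per source class, matching the evidence table above.
const WEIGHTS: Record<string, number> = {
  direct: 4,   // script/iframe/inline-URL reference
  sellers: 3,  // ads.txt seller match
  keyword: 2,  // inline script keyword
  markup: 1,   // image/stylesheet/HTML hint
};

function scoreFinding(sourceClasses: string[], evidenceCount: number) {
  const distinct = [...new Set(sourceClasses)];
  // Base score: strongest weight seen across source classes.
  const base = Math.max(0, ...distinct.map((s) => WEIGHTS[s] ?? 0));
  // Small bonus for each extra distinct source class (assumed cap of 2).
  const diversityBonus = Math.max(0, Math.min(2, distinct.length - 1));
  // Small bonus for evidence density (assumed: +1 per 3 clues, cap 2).
  const densityBonus = Math.min(2, Math.floor(evidenceCount / 3));
  const score = base + diversityBonus + densityBonus;
  // Confidence bands from the text: high >= 8, medium 4 to <8, low < 4.
  const confidence = score >= 8 ? "high" : score >= 4 ? "medium" : "low";
  return { score, confidence };
}
```

Under this sketch, a lone markup hint stays low-confidence, a single direct reference lands at medium, and high confidence requires both a strong source class and corroborating diversity or density.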
| Field or guardrail | Package behavior | Why it matters |
|---|---|---|
| Follow redirects | Allows up to six main-page redirect hops before scoring the final route | Prevents false confidence when the typed URL lands somewhere else |
| Max page bytes | Caps how much response body text is inspected | Large pages can otherwise hide findings behind truncation |
| Include ads.txt scan | Fetches candidate seller files, parses seller rows, and reports matched sellers | Adds declared supply-path context that page markup alone cannot provide |
| Private-host block | Rejects localhost, loopback, private, and link-local destinations | Keeps the helper from being used as an internal network fetcher |
| Warnings | Highlights truncation, fetch issues, non-HTML content, and seller-file problems | Warns when the absence of findings may be low quality rather than low activity |
The privacy boundary is important. The scan is not local-only: the browser sends the target URL and selected options to a remote helper, and that helper performs the network requests. The helper response sets cache-control: no-store, but scanned targets are still data you are deliberately submitting for remote evaluation.
Start by separating fetch quality from partner evidence. A result built on a reachable page with a stable final URL and no major warnings is much more trustworthy than one built on truncation, unusual content type, or seller-file errors. In other words, poor scan conditions weaken the meaning of every downstream table.
Then read the finding rows by confidence and source mix. High confidence means the tool saw enough strong evidence to treat the network as a credible match for that page. Medium and low confidence are still useful, but they are better treated as leads for follow-up than as definitive partner inventory. If the evidence source is only a weak markup clue, the safer interpretation is provisional.
The seller-file section answers a different question from the page findings. It helps you see whether the publisher's declared authorized sellers line up with the networks detected from the fetched page. A mismatch does not automatically mean fraud or misconfiguration, but it is a good reason to review seller declarations, route choice, and partner inventory.
The tool is most reliable as a repeatable baseline. If you scan the same route before and after a tag change, a stable difference in findings is usually more useful than the absolute result from a single run.
A publisher checks a public article page after a header-bidding deployment. The scan shows one high-confidence ad server match, one medium-confidence exchange match, and several seller rows in ads.txt. That does not finish the audit, but it gives the revops team a quick list of partners that deserve confirmation in request logs and tag-manager history.
An analyst gets zero findings but also sees that the page body was truncated and the fetch took several seconds. In that situation the safest conclusion is not “no ad stack.” It is “the scan surface was weak.” Raising the byte cap or trying a simpler public route is the better next step.
A site shows several parsed seller rows but only one matched seller against the detected findings. That pattern can mean the seller file is broader than the current page, the page is not representative, or the active markup no longer matches declared partners. The recommendation list then becomes a cleanup queue rather than a final judgment.
Does the scan render the page like a real browser? No. It fetches text and markup through a helper, extracts signatures from what was returned, and does not simulate a full browser rendering session.

Why can a page with live ads still come back with zero findings? Because the visible evidence may live behind a consent flow, deferred scripts, a different route, or a body section that was not captured before truncation.

What does the ads.txt scan add? It adds publisher-declared authorized seller rows and tries to map those seller domains to the known network rule set, which is useful supply-path context but not a substitute for live traffic validation.

Can it scan internal or local targets? No. The helper blocks localhost, private and link-local targets, and ports other than 80 and 443.

What is ads.txt? A publisher's ads.txt file is its declaration of authorized digital sellers.