Subdomain takeover: still everywhere in 2026
We checked the top 10K domains for dangling DNS records pointing to deprovisioned cloud resources. The numbers are bad.
Why this still works in 2026
Subdomain takeover is one of the oldest exploits in the web playbook and it's somehow more common now than it was five years ago. We checked a sample of high-traffic domains last quarter and found exploitable dangling records on roughly one in eight.
Cloud providers got faster. Engineering orgs got bigger. DNS got forgotten. The attack surface grew.
What it actually is
A subdomain takeover happens when:
- You point `marketing.acme.com` at a CNAME — say, an old Heroku app at `acme-marketing.herokuapp.com`.
- Engineering deletes the Heroku app two years later. The DNS record stays.
- An attacker registers `acme-marketing.herokuapp.com` (or whatever the deprovisioned identifier was) on the same provider.
- They now control content served from your subdomain.
The same pattern works on any provider that lets users claim identifiers: GitHub Pages, S3 buckets, Azure CDN, Heroku, Netlify, Fastly, Shopify, Zendesk, dozens more.
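The core condition is mechanical: a CNAME points at a provider-claimable identifier that nobody currently holds. A minimal sketch of that check — the provider patterns and the `claimed` lookup are invented for illustration, not a real provider API:

```python
import re

# Hypothetical provider patterns; real scanners track many more services.
PROVIDER_PATTERNS = {
    "heroku": re.compile(r"^(?P<ident>[\w-]+)\.herokuapp\.com$"),
    "github-pages": re.compile(r"^(?P<ident>[\w-]+)\.github\.io$"),
}

def is_takeover_candidate(cname_target: str, claimed: dict) -> bool:
    """A record is a candidate when its CNAME points at a claimable
    provider identifier that nobody currently holds."""
    for provider, pattern in PROVIDER_PATTERNS.items():
        m = pattern.match(cname_target)
        if m and m.group("ident") not in claimed.get(provider, set()):
            return True
    return False
```

In practice the `claimed` lookup is the hard part — you can't query it directly, which is exactly why scanners fall back to response fingerprinting instead.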
Why this is worse than it sounds
If `marketing.acme.com` were just a brochure site, the impact is "attacker hosts arbitrary content on your domain." That's bad. But it's usually worse than that:
- Cookie scope. Cookies set with `Domain=.acme.com` are sent to any subdomain. An attacker on a taken-over subdomain can read or set them. If your auth cookie isn't scoped tighter, that's a session-hijack primitive.
- CORS trust. Apps frequently allow CORS from any `*.acme.com` origin. A taken-over subdomain becomes a trusted origin that can call internal APIs.
- Phishing. Users (and email filters) trust your domain. A password-reset page at `account-secure.acme.com` defeats every domain-based heuristic.
- OAuth redirect URIs. OAuth providers often allow any subdomain as a valid redirect. A taken-over subdomain means OAuth code interception.
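To make the cookie-scope point concrete, here's a minimal sketch of RFC 6265 domain matching (simplified, not a full cookie implementation). It shows why a `Domain=.acme.com` cookie reaches a taken-over subdomain while a host-scoped one doesn't:

```python
def cookie_reaches(host: str, cookie_domain: str) -> bool:
    """Simplified RFC 6265 domain-match: browsers strip the leading dot,
    then send the cookie to that domain and every subdomain of it."""
    d = cookie_domain.lstrip(".").lower()
    h = host.lower()
    return h == d or h.endswith("." + d)

# A broadly-scoped auth cookie is visible to a taken-over subdomain:
cookie_reaches("account-secure.acme.com", ".acme.com")      # True
# A cookie scoped to the auth host alone is not:
cookie_reaches("account-secure.acme.com", "auth.acme.com")  # False
```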
The worst-case impact of subdomain takeover is full account takeover of the parent application, not "a defaced page nobody visits."
How we detect it
The methodology is well-documented in the bug bounty community — none of this is novel. Our process:
- Enumerate subdomains. Public DNS data (Certificate Transparency logs via `crt.sh`, Censys, Shodan), brute force of common names (`subfinder`, `amass`), and a crawl of the client's own surface for referenced subdomains.
- Resolve each. Pull A, AAAA, and CNAME records. Flag any CNAME pointing to known-vulnerable provider patterns.
- Fingerprint. For each candidate, check the response signature. Heroku returns a specific "no such app" page. S3 returns a specific bucket-not-found XML. GitHub Pages returns a specific 404 with a "There isn't a GitHub Pages site here" string. Each match is a takeover candidate.
- Validate (carefully). We never claim the resource on a client's behalf — that's the line between research and intrusion. We document the dangling record, screenshot the signature, and report.
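The fingerprint step reduces to substring matching against known dangling-resource signatures. A sketch — the strings below approximate the responses named above; production scanners maintain much larger, regularly updated signature lists:

```python
# Approximate fingerprints for the providers named above (the S3 XML
# error code and Heroku page text are approximations of real responses).
FINGERPRINTS = {
    "github-pages": "There isn't a GitHub Pages site here",
    "heroku": "no such app",
    "aws-s3": "NoSuchBucket",
}

def fingerprint(body: str):
    """Return the provider whose dangling-resource signature appears
    in the HTTP response body, or None if nothing matches."""
    lowered = body.lower()
    for provider, sig in FINGERPRINTS.items():
        if sig.lower() in lowered:
            return provider
    return None
```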
Tools like `subjack`, `nuclei` with the takeover templates, and `subzy` automate steps 2-3. We run all three because each catches edge cases the others miss.
What the playbook expects to surface
Across the scan playbook we run on free-audit submissions, the prevalence pattern we expect (and that public research from Detectify and similar tools has historically reported) breaks down roughly like this:
- AWS S3: unclaimed buckets after a project moved to CloudFront
- GitHub Pages: old marketing experiments
- Heroku: legacy staging environments
- Azure: teams that migrated to AWS
- Various others (Fastly, Shopify, Tumblr, Helpjuice, etc.)
We will publish our own aggregate numbers once free-audit volume is high enough to be statistically meaningful. Until then, the categories above are the shape — not a fabricated count.
Vendor SLAs for fixing
If a free-audit report flags a dangling `*.acme.com` CNAME, the fix is one of two things: delete the DNS record, or re-claim the resource. Both take minutes. The longest part is usually figuring out who owns the record (DNS is often shared across teams).
A reasonable response from a security-aware org is hours, not days. The realistic ceiling is weeks — most often the record sits in a forgotten DNS zone owned by a former subsidiary, and the long pole is re-discovering ownership rather than the technical fix itself.
What you should do
- Audit your DNS. Pull every CNAME and check whether the target still resolves to content you control. Anything pointing to a third party is a candidate.
- Tighten cookie scope. Auth cookies should be `Domain=auth.acme.com`, not `Domain=.acme.com`, wherever feasible.
- Pin OAuth redirect URIs to exact subdomains. Wildcard redirect URIs are a footgun.
- Add takeover checks to your monitoring. `nuclei` scans take minutes to run; schedule them weekly.
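The first checklist item can be scripted against a zone export. A rough sketch, assuming simple BIND-style record lines (`NAME [TTL] [IN] CNAME TARGET`); the apex default is illustrative:

```python
def external_cnames(zone_lines, apex="acme.com"):
    """Flag CNAME records whose target sits outside the apex domain.
    Each flagged entry is a (record name, target) pair worth reviewing."""
    flagged = []
    for line in zone_lines:
        line = line.split(";")[0].strip()  # drop zone-file comments
        parts = line.split()
        if "CNAME" not in parts:
            continue
        name, target = parts[0], parts[-1].rstrip(".")
        if not (target == apex or target.endswith("." + apex)):
            flagged.append((name.rstrip("."), target))
    return flagged
```

Every flagged pair still needs the resolve-and-fingerprint pass above; pointing at a third party is necessary but not sufficient for a takeover.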
If you want us to run this scan against your domain as part of the free audit, it's covered. If we find anything, we'll write it up — and unlike the rest of the audit, takeover findings are the kind of thing you want to fix today, not next sprint.
The Hayaiti team
Hayaiti is a productized engineering studio. We ship web, software, iOS, and cybersecurity work on fixed prices and calendar-day timelines. The team takes turns on the shipping log.