Websites resistant to agent automation: every guide debates the same three Playwright stealth flavors. There is a fourth path nobody is writing about.

Matthew Diakonov

If you opened the first six results for this topic, every one of them ran the same playbook: list the anti-bot vendors (Cloudflare, DataDome, PerimeterX, Akamai, Imperva, Kasada), enumerate the leaks (navigator.webdriver, JA3, CDP side effects, behavioral models), recommend a stealth fork (undetected-chromedriver, Puppeteer Stealth, Camoufox, SeleniumBase UC, Nodriver), recommend residential proxies, recommend a CAPTCHA solver service. One of them was honest enough to conclude that stealth Playwright is already losing the arms race and to propose a custom Chromium fork as the answer. None mentioned the path that is sitting in plain sight, and that screen readers have been using since the 1990s.

Direct answer (verified 2026-05-04)

When a site detects your agent, the detection is happening inside the browser tab: a JavaScript bundle is reading navigator.webdriver, inspecting the Permissions API, timing CDP side effects, and comparing your TLS fingerprint to a known automation set. Every stealth tool is a patch against one of those signals; the next anti-bot release re-detects them. The path that does not need patching is to drive a real, hand-launched Chrome through the OS accessibility layer, Windows UI Automation or macOS AX, the same APIs screen readers use. There is no automation socket attached, so there is nothing to fingerprint. Terminator's examples/recaptcha-resolver/resolver.ts is 188 lines of TypeScript that handles checkbox reCAPTCHA, image reCAPTCHA, and Cloudflare's "Verify you are human" challenge against a regular Chrome window with no stealth fork and no proxy.

The thing every guide is fingerprinting

The reason stealth Playwright keeps losing is not that the patches are bad. The patches are excellent. The reason is that the surface they patch is the wrong surface. Every leak below is a property of the automation socket, not the human:

What anti-bot scripts read on a Playwright page

  • navigator.webdriver flag (set when CDP is attached)
  • Permissions API behavior (subtly different under CDP)
  • CDP timing side effects (Runtime.evaluate latency, debug events)
  • Stealth plugin signatures (the patches themselves are detectable)
  • TLS / JA3 fingerprint (Node TLS != real Chrome)
  • Behavioral model (mouse curve, scroll velocity, dwell time)

You can fix navigator.webdriver. The next release of the anti-bot script reads a different CDP-only side effect. You can spoof the TLS stack with curl_cffi. The vendor builds a model on the JA3 plus the IP plus the click cadence. You can simulate human mouse curves with Bezier paths. They notice that the curves are too consistent across sessions. The arms race is real and they have more engineers on it than you do.
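To make the additive-scoring point concrete, here is a hypothetical TypeScript sketch of how a bundle might fold these signals into one score. Every field name, weight, and threshold is invented for illustration; no vendor's actual model is shown.

```typescript
// Hypothetical signal set an anti-bot bundle might collect in-page.
// All names and weights here are invented for illustration.
interface Signals {
  webdriverFlag: boolean;       // navigator.webdriver
  permissionsAnomaly: boolean;  // CDP-specific Permissions.query behavior
  cdpTimingCliff: boolean;      // Runtime.evaluate latency artifacts
  stealthPatchDetected: boolean;
  ja3KnownAutomation: boolean;  // TLS fingerprint in a known-automation set
  mouseCurveTooClean: boolean;  // behavioral model output
}

// Additive scoring: no single signal is decisive, but they compound.
function botScore(s: Signals): number {
  let score = 0;
  if (s.webdriverFlag) score += 40;
  if (s.permissionsAnomaly) score += 15;
  if (s.cdpTimingCliff) score += 15;
  if (s.stealthPatchDetected) score += 25; // the patch itself is a signal
  if (s.ja3KnownAutomation) score += 30;
  if (s.mouseCurveTooClean) score += 20;
  return score;
}

const CHALLENGE_THRESHOLD = 50;

// A stealth fork that fixes webdriver but leaves TLS and behavior
// intact still crosses the threshold:
const patchedPlaywright: Signals = {
  webdriverFlag: false,
  permissionsAnomaly: false,
  cdpTimingCliff: true,
  stealthPatchDetected: true,
  ja3KnownAutomation: true,
  mouseCurveTooClean: true,
};
console.log(botScore(patchedPlaywright)); // 90, well past the threshold
```

This is the arms-race dynamic in miniature: patching one field lowers the score, but the remaining signals still sum past the line, and the vendor controls both the weights and the signal list.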

The honest read on what is happening: the anti-bot industry has accepted that they cannot tell humans from bots from the network alone, so they make the browser itself the witness. The browser leaks its own automation status because the W3C WebDriver spec said it should. CDP leaks because Chrome DevTools needs to. Stealth tools patch what the page can read, but the page can read more next month.

The surface they cannot see

A JavaScript bundle running in a Chrome tab can read what is exposed to it: window properties, navigator, the DOM, network timing, GPU fingerprints, the Permissions API. It cannot read what the operating system is doing above the browser. It cannot tell whether the click that just landed on a button came from a mouse, from a Voice Control command, from a Magnifier accessibility action, or from a UIA synthesized event. They are the same input stream by the time they reach the page.

Two different paths from agent intent to a click on the page

  CDP path: agent code → driver → Chrome → page JS
    click(button.save) → CDP: Input.dispatchMouseEvent → click event + navigator.webdriver=true → anti-bot script flags session

  OS path: agent code → OS accessibility → Chrome → page JS
    click(role:Button, name:Save) → UIA: Invoke pattern, OS event injection → click event, no automation flag → anti-bot script sees ordinary user

The CDP path is what Playwright (or any CDP-based driver) looks like from inside the page: the page sees a navigator.webdriver flag and a known CDP-attached state. The accessibility path is what an OS-driven click looks like: the page sees the click event and nothing else. The OS-side injection is invisible to the JS runtime because there is no API exposing it.

The 188-line proof: Cloudflare in one accessibility selector

Anything in this argument that cannot be reduced to a code path is handwaving, so here is the code path. The anchor is examples/recaptcha-resolver/resolver.ts in the Terminator repo. 188 lines of TypeScript, importing one thing from terminator.js. The whole flow:

const desktop = new Desktop();
const chrome_app = desktop.application("chrome").window()!;
chrome_app.focus();

const browser_webview = chrome_app.locator(
  "classname:BrowserRootView >> nativeid:RootWebArea"
);
await browser_webview.wait(5000);

// Cloudflare path: one accessibility click.
const cloudflareCheckbox = await browser_webview
  .locator("role:checkbox|name:Verify you are human")
  .first();
cloudflareCheckbox.click();

The Cloudflare branch is at line 160 of the file. It runs when no reCAPTCHA anchor is present on the page. The selector is the same shape an accessibility tester would use: a role and a name. The click is dispatched through Windows UI Automation's Invoke pattern (or, on macOS, through AXUIElementPerformAction with kAXPressAction), the same pattern Voice Control uses to fire buttons.

The reCAPTCHA branch is the same shape, with two extra moves. It captures the image-grid challenge as a screenshot through the accessibility tree (recaptcha_webview.capture()), sends the screenshot plus the cell HTML to Gemini 2.5 Flash with a prompt asking which cell IDs match the target object, parses the ID array, and dispatches clicks back through the same accessibility locator. The image solver needs an LLM because the challenge is visual; the click dispatch path is identical to the Cloudflare case.
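The resolver's exact parsing code is not reproduced here, but the "parses the ID array" step can be sketched as a small helper. `parseCellIds` and its tolerance for prose-wrapped replies are assumptions for illustration, not the repo's implementation:

```typescript
// Hypothetical helper (not from resolver.ts): pull a JSON array of
// cell IDs out of a vision-model reply that may wrap the array in
// prose or a code fence.
function parseCellIds(reply: string): string[] {
  const match = reply.match(/\[[^\]]*\]/); // first bracketed span
  if (!match) return [];
  try {
    const parsed = JSON.parse(match[0]);
    return Array.isArray(parsed) ? parsed.map(String) : [];
  } catch {
    return []; // treat malformed replies as "no cells matched"
  }
}

console.log(parseCellIds('The matching cells are ["tile-3", "tile-7"].'));
// -> ["tile-3", "tile-7"]
```

Returning an empty array on malformed output matters in practice: a challenge with zero clicked cells just re-renders, while a crash mid-challenge leaves the page in a half-solved state.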

What is not in the file: a stealth plugin import, a webdriver flag patch, a TLS rewrite, a residential proxy config, a CAPTCHA solver API key (other than the Gemini key for the image grid), a custom Chromium build. The Chrome window the script controls is the user's regular Chrome, with the user's extensions and the user's cookies, focused with chrome_app.focus() and then talked to through the same accessibility tree a screen reader would walk.

What the page sees

Page reads navigator.webdriver, finds true. Page reads Permissions.query for notifications, finds the CDP-specific stub. Page checks Runtime evaluation timing, finds the millisecond cliff. Page hashes the TLS handshake, finds a signature that is not real Chrome's. The score adds up; you get a challenge page or a silent ban.

  • navigator.webdriver = true (or patched, but the patch is detectable)
  • JA3 fingerprint matches known automation
  • Mouse path is too straight
  • Stealth plugin signatures present in the runtime

What is and is not visible from the page

Below is the same detection stack from the first section, run against a Terminator-driven Chrome session. Six items the anti-bot script could read on a Playwright session, and what each of them returns when the driver is not in the browser.

What anti-bot scripts read on a Terminator-driven Chrome session

  • navigator.webdriver: false (no CDP attached)
  • Permissions API: identical to a normal user (no instrumentation)
  • TLS: real Chrome's BoringSSL stack on the user's machine
  • Browser binary: hand-launched Chrome with the user's profile
  • Click dispatch: OS-level event injection, same path Voice Control uses
  • Surface visible to anti-bot JS: indistinguishable from a screen reader user

Where the OS layer wins, where it does not

The honest comparison. The OS accessibility approach is not universally better than CDP automation, and the cases where it loses are worth knowing before you commit to it.

Feature | CDP / WebDriver (Playwright stealth) | OS accessibility (Terminator)
navigator.webdriver flag | true under CDP, patches detectable | false (no CDP attached)
TLS / JA3 fingerprint | Node TLS, distinct JA3 | real Chrome BoringSSL stack
Browser profile | fresh, empty, statistically suspicious | user's actual profile, cookies, extensions
Headless support | headless mode, but leaks separately | needs a real display (Xvfb on Linux)
Speed on pages with no bot protection | fastest path, sub-100ms | human-perceptible click latency
Survives Cloudflare 'Verify you are human' | needs solver service or stealth fork | single accessibility click, no solver service
Survives reCAPTCHA v2 checkbox + image grid | needs 2Captcha or anti-captcha API | click + Gemini Flash for image grid

The CDP path is faster on cooperative pages, supports headless mode out of the box, and is what the entire web testing ecosystem is built around. Use it for the 95% of pages that do not push back. The OS path costs you a real display, costs you a few hundred MB per session, and runs at human-perceptible click latency. In exchange you get a click that anti-bot scripts cannot tell apart from a Voice Control user, and a code surface that does not need a stealth fork maintained against the next anti-bot release. The most reliable production setup is to run both: CDP first, fall back to OS accessibility on pages that detect you.
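That fallback decision can be sketched in a few lines. The challenge-page markers and both function names are assumptions for illustration; real challenge pages vary, and the handlers stand in for your actual CDP and Terminator code paths:

```typescript
// Sketch of the CDP-first, OS-fallback decision. The marker strings
// are assumptions based on what Cloudflare and reCAPTCHA challenge
// pages commonly contain; tune them for the sites you actually hit.
const CHALLENGE_MARKERS = [
  "verify you are human",
  "cf-turnstile",
  "g-recaptcha",
  "challenge-platform",
];

function looksLikeChallengePage(html: string): boolean {
  const lower = html.toLowerCase();
  return CHALLENGE_MARKERS.some((m) => lower.includes(m));
}

// Orchestration shape: run the fast CDP path, hand off to the OS
// accessibility path only when a challenge is detected.
async function fetchWithFallback(
  cdpFetch: () => Promise<string>,
  osFallback: () => Promise<string>
): Promise<string> {
  const html = await cdpFetch();
  return looksLikeChallengePage(html) ? osFallback() : html;
}
```

The detection is deliberately cheap: a substring scan on the served HTML keeps the happy path at CDP speed and only pays the OS-layer latency on the pages that actually push back.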

The one objection that holds up

The OS accessibility layer is not a magical bypass for everything. The anti-bot industry will eventually start fingerprinting accessibility automation directly: AT-SPI hooks on Linux, UIA provider attachment on Windows, AXObserver on macOS. These are weaker signals than the CDP attach because real users with accessibility software running look the same way. But they are not zero. If you are scraping a high-value endpoint where the site has dedicated detection engineers, expect a future round of the arms race to land here too.

The two things that will keep working past that round: behavioral fidelity (replaying recorded human timing instead of synthesizing clicks), and TLS / IP fidelity (using the user's real Chrome on the user's real network, not a datacenter proxy). The OS layer makes both of those cheap to combine because the browser really is the user's Chrome. A stealth Playwright running on a Hetzner box is fighting a different fight.
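The behavioral-fidelity point reduces to a small transform: replay the recording's own inter-event gaps instead of a fixed tick. `delaysFromTimestamps` is a hypothetical helper; terminator-workflow-recorder's real format is not shown in this article:

```typescript
// Hypothetical replay helper: convert recorded absolute event
// timestamps into the inter-event delays a replayer would sleep
// between synthesized inputs, preserving the human's timing curve
// instead of a constant machine-regular interval.
function delaysFromTimestamps(timestampsMs: number[]): number[] {
  const delays: number[] = [];
  for (let i = 1; i < timestampsMs.length; i++) {
    // clamp negative gaps from clock skew in the recording
    delays.push(Math.max(0, timestampsMs[i] - timestampsMs[i - 1]));
  }
  return delays;
}

// A recorded click-click-type run at human cadence:
console.log(delaysFromTimestamps([0, 412, 1187, 1245]));
// -> [412, 775, 58]
```

The uneven gaps are the point: a behavioral model scoring inter-event jitter sees the variance of the original human session rather than a synthesized rhythm.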

The other objection that comes up: this is heavier than HTTP scraping. Yes. If the site is happy to serve you JSON without a challenge, do not use a browser at all. The OS accessibility approach is for the case where the site has explicitly decided it does not want automated traffic, and where you have a legitimate reason to be there anyway (your own account, your customer's account, an accessibility audit, a regulatory disclosure, an internal tool that drives a vendor portal that has no API).

Bringing this to a site that keeps catching your agent?

Walk through your detection trace with the team. Bring the page that is flagging you and the script that is getting flagged.

Frequently asked questions

Which websites are actually resistant to agent automation in 2026?

Sites in front of Cloudflare Bot Management, DataDome, HUMAN (formerly PerimeterX), Imperva, Kasada, and Akamai Bot Manager, the load-bearing handful of vendors. Real-world examples that show up most often in scraper forums: Indeed, Glassdoor, ticketing sites (StubHub, SeatGeek, AXS), most large e-commerce (Nike, Walmart, Best Buy), banking and trading platforms, every airline checkout, the major social platforms, and an increasing share of news sites. The pattern is not 'big site equals protected.' It is 'the site has revenue tied to scraping, account-takeover, or scalping risk.' If a competitor or a bot can monetize the data, the site is probably behind one of the vendors above.

Why does Playwright get caught even with stealth plugins?

Three layers of leaks. First, the Chrome DevTools Protocol leaves observable side effects when attached: navigator.webdriver returns true, the Permissions API behaves slightly differently, certain timing primitives change. Stealth plugins patch the obvious flags but new ones surface. Second, the network fingerprint (TLS JA3, HTTP/2 frame ordering, IP reputation if you are on a datacenter range) is not something a JavaScript patch can change; it lives at a lower layer than the browser. Third, behavior. Anti-bot vendors model mouse curves, scroll velocity, dwell time, and inter-event jitter. Most automation drives clicks too cleanly. You can fix one or two of these. Fixing all three with a Playwright stealth fork is a moving target.

How does an OS accessibility layer change that?

It changes what the site is fingerprinting. Anti-bot scripts can only see what runs inside the browser tab: navigator properties, observable side effects of CDP, behavior in the JS event stream. They cannot see the operating system above the browser. When Terminator drives Chrome by talking to Windows UI Automation or macOS Accessibility (the same APIs screen readers use), the click is synthesized at the OS level and dispatched into the Chrome window the way a Voice Control or Magnifier user would dispatch it. Inside the page there is no automation flag, no webdriver attribute, no debug port, because none of those exist. The browser is the user's Chrome, launched by hand or by the OS shell, with the user's profile and the user's TLS stack.

Where is this in the Terminator source?

examples/recaptcha-resolver/resolver.ts in github.com/mediar-ai/terminator. 188 lines of TypeScript using terminator.js. The file resolves three things in one pass: Google's checkbox reCAPTCHA (line 28 onward), Google's image-grid reCAPTCHA via a Gemini 2.5 Flash vision call against the captured tile (line 48 onward), and Cloudflare's 'Verify you are human' checkbox via a single accessibility selector at line 160. The Cloudflare path is one click: browser_webview.locator('role:checkbox|name:Verify you are human').first().click(). The browser_webview itself is rooted in desktop.application('chrome').window(), which is a Windows UIA or macOS AX handle on the running Chrome window, not a CDP session.

Does this make every site fall over?

No. Three categories still resist this approach. (1) Sites that gate on TLS fingerprint or IP reputation still score the session; a residential consumer Chrome on a clean IP draws far less suspicion than a datacenter scraper, but the OS-layer fix removes one detection vector, not all of them. (2) Sites with strong behavioral models will still notice that you click in straight lines. Real users do not. The fix here is to record a real workflow with terminator-workflow-recorder and replay it, which carries the human's actual timing curve. (3) Sites that explicitly test for accessibility automation (rare in 2026, more common at high-value endpoints) can detect that AT-SPI / UIA hooks are attached. A normal Chrome user may have accessibility services running too, so this signal is weaker than CDP detection, but it is not zero.

What about LinkedIn, Reddit, Twitter, the platforms that aggressively block automation?

We do not run scripts against LinkedIn from this product and we do not recommend it; LinkedIn has flagged scripted browser activity in the past and routinely terminates accounts. For Reddit and X, the OS-layer approach gets you further than CDP-driven Playwright because the site cannot see the automation socket, but you are still bound by their terms of service and their rate limits. The honest answer is that the framework removes a technical detection layer; it does not remove the policy layer. If your goal is mass scraping of a platform that does not want you, no automation framework, including this one, will keep you out of trouble forever.

Is the OS accessibility tree slow compared to CDP?

It is faster than vision (which is the alternative when a site explicitly blocks automation), and it is comparable to or faster than CDP for typical click and type operations because the accessibility tree is already structured. There is no 'wait for selector' loop polling the DOM at the JS layer; the accessibility tree update is what fires the wait. On Windows the hot path is UIAutomationCore.dll (in-process for native UIA-aware apps, out-of-process for the browser via the WinEvents bridge); on macOS it is AXUIElementCopyAttributeValue. Both are designed to drive a screen reader at human-perceptible latency, which is the latency budget you want anyway.

What about headless mode?

There is no headless mode. The point is the browser is a real user-visible window. If you need to run this on a server, you give the server a graphical session: Xvfb plus a real Chrome on Linux, or a Windows VM with the desktop session live. The cost is one display server and a few hundred MB of RAM per concurrent worker, in exchange for not getting flagged. Most production scrapers that need to clear bot protection are paying this cost anyway, either through residential proxy networks or through hosted browser farms; Terminator just moves the trade-off into your own infra.

Does this work on the same machine as Playwright?

Yes, and the most reliable hybrid is exactly this: use Playwright (or vanilla CDP) for the 95% of pages that do not push back, and fall back to a Terminator session driving the same Chrome profile for the pages that do. The accessibility selector engine in Terminator can target Chrome regardless of which extension or profile is loaded, so you can keep your existing scraper and only invoke the OS layer when you hit a challenge page. The recaptcha-resolver example is shaped exactly like this kind of fallback handler.

Where do I start if I want to try this?

Install the Rust crate terminator-rs or the npm bridge terminator.js, install the Terminator browser bridge extension in your real Chrome, and run examples/recaptcha-resolver/resolver.ts against a page with reCAPTCHA. The whole loop is 188 lines and the only external dependency is a Gemini API key for the image-grid solver. The Cloudflare checkbox path needs no API key at all because it is one accessibility click. From there, the same selector engine (role:Button, role:TextBox, name:..., nativeid:...) works against any other site you point it at.
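As an illustration of that selector grammar (not Terminator's actual engine), a minimal parser for the shapes used in this article, `role:X|name:Y` pairs chained with `>>`, might look like:

```typescript
// Illustration only: parse the selector shapes used in this article.
// Terminator's real selector engine is more capable; this just shows
// the grammar of `role:checkbox|name:...` and `a >> b` chains.
type SelectorPart = Record<string, string>;

function parseSelector(selector: string): SelectorPart[] {
  return selector.split(">>").map((step) => {
    const part: SelectorPart = {};
    for (const pair of step.trim().split("|")) {
      const idx = pair.indexOf(":");
      if (idx === -1) continue; // skip malformed fragments
      part[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
    }
    return part;
  });
}

console.log(parseSelector("role:checkbox|name:Verify you are human"));
// -> [{ role: "checkbox", name: "Verify you are human" }]
```

Splitting on the first `:` only is what lets the name value carry spaces and punctuation, which is exactly what accessible names like 'Verify you are human' need.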

terminator: Desktop automation SDK
© 2026 terminator. All rights reserved.