Your Site Is Costing AI Agents Money

MARCH 30, 2026

Here's a number that should bother you. A single AI agent visiting an unoptimized website burns 277,000 tokens just to figure out what the site does, how to authenticate, and how to get data out of it. The same interaction on an optimized site? 24,000 tokens. That's a 91% reduction.

At Claude Sonnet rates, the difference is $0.76 per visit. Multiply by a thousand agent visits a day and you're looking at $22,800 a month in pure waste. Not your waste — the agent's. But it's your problem, because agents that burn tokens on your site will stop coming back. They'll pick the competitor that costs less to talk to.
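The math is simple enough to sanity-check yourself. A minimal sketch, assuming Claude Sonnet input pricing of $3 per million tokens (the per-visit and monthly figures are derived from the token counts above, with small rounding differences):

```python
# Sanity check of the agent-tax math above.
# Assumes Claude Sonnet input pricing of $3 per million tokens.
UNOPTIMIZED_TOKENS = 277_000
OPTIMIZED_TOKENS = 24_000
PRICE_PER_TOKEN = 3 / 1_000_000  # USD per input token

saved_tokens = UNOPTIMIZED_TOKENS - OPTIMIZED_TOKENS  # 253,000 tokens wasted per visit
reduction = saved_tokens / UNOPTIMIZED_TOKENS         # ~91% reduction
cost_per_visit = saved_tokens * PRICE_PER_TOKEN       # ~$0.76 per visit
monthly_waste = cost_per_visit * 1_000 * 30           # 1,000 visits/day for 30 days

print(f"{reduction:.0%} reduction, ${cost_per_visit:.2f}/visit, ${monthly_waste:,.0f}/month")
```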

The agent tax is real

I've been building for AI agents for a while now. I rebuilt this entire site so agents could discover me, evaluate me, and work with me without rendering a webpage. Then I built BotVisibility to audit any site's agent readiness. And in the process of scanning hundreds of sites — including Stripe, GitHub, Shopify, Salesforce, the New York Times — I found a pattern that's hard to ignore.

Almost nobody is optimized for agents. And the cost of that neglect is quantifiable.

BotVisibility agent readiness scan

I wrote a full research breakdown called The Agent Tax that walks through all 30 optimizations and the token math behind each one. Here's the short version of why it matters.

Where the tokens go

A typical HTML page is 15,000–30,000 tokens. The actual useful content? Maybe 2,000–3,000. That's a 10–20% signal-to-noise ratio. Every AI agent that visits your site is paying full price for your navigation bar, your JavaScript bundles, your cookie banners, and your footer links. They don't need any of it.

An llms.txt file replaces all of that with 200–1,500 tokens of clean, structured information. That single file — which takes about ten minutes to write — saves 90–99% of discovery tokens. Over 600 sites have adopted it, including Anthropic, Stripe, Cloudflare, and Vercel. If you don't have one, you're already behind.
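The format itself is just markdown: an H1 title, a blockquote summary, and sections of annotated links. A minimal sketch of what one might look like (the company, paths, and descriptions here are illustrative, not from a real site):

```markdown
# Example Co

> Example Co sells managed widgets. Agents can read docs, check
> pricing, and place orders through the REST API linked below.

## API
- [OpenAPI spec](https://example.com/openapi.yaml): full REST reference
- [Authentication](https://example.com/docs/auth.md): API keys, no OAuth required

## Docs
- [Quickstart](https://example.com/docs/quickstart.md): first order in five calls
```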

An OpenAPI spec eliminates trial-and-error API discovery. Without one, agents guess at parameters, parse error responses, and iterate. For a medium-complexity API, that's 200,000–500,000 tokens of fumbling. With a spec, it's 10,000–30,000. Still expensive. But 60–90% cheaper than the alternative.
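A spec doesn't need to be elaborate to kill the guessing. A hedged sketch of a single endpoint (the service name, path, and fields are hypothetical; the point is that every valid parameter and value is declared up front):

```yaml
openapi: 3.1.0
info:
  title: Example Co API   # hypothetical service
  version: "1.0"
paths:
  /orders:
    get:
      summary: List orders
      parameters:
        - name: status
          in: query
          schema:
            type: string
            enum: [pending, shipped, cancelled]  # agent never guesses valid values
        - name: cursor
          in: query
          schema:
            type: string
      responses:
        "200":
          description: A page of orders
```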

Structured error responses — JSON with error codes instead of HTML error pages — reduce error handling from 12,000 tokens to 80. AWS research found that clear terminal states cut tool calls from 14 to 2. That's an 86% reduction from one change.
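In practice that means returning something like this instead of an HTML error page. The shape is illustrative (field names vary by API), and the ~80-token figure assumes a compact body like it:

```json
{
  "error": {
    "code": "rate_limited",
    "message": "Too many requests. Retry after 30 seconds.",
    "retry_after": 30,
    "retryable": true
  }
}
```

An agent can branch on the code and the retryable flag in a single tool call, rather than parsing markup to infer what went wrong.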

The three layers

Discovery is where 70% of the savings live. Can agents find you? llms.txt, agent cards, OpenAPI specs, MCP servers, skill files, AI meta tags. Without these, agents crawl HTML like it's 2005. This layer alone can save $15,000–$25,000 a month at scale.

Usability is operational friction. Can agents authenticate without a five-step OAuth dance? Can they write data without filling out forms? Can they retry safely with idempotency keys? Every missing piece adds a token tax. API key auth costs 20 tokens. OAuth costs 5,000. Same result. Wildly different price.
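Here's what the low-friction version looks like on the wire, as a sketch. The header names follow common API conventions (bearer keys, an Idempotency-Key header); your API's exact names may differ:

```http
POST /orders HTTP/1.1
Host: api.example.com
Authorization: Bearer YOUR_API_KEY
Idempotency-Key: 7f3a9c1e-order-42
Content-Type: application/json

{"sku": "widget-1", "qty": 2}
```

One header authenticates, one header makes the retry safe. If the agent times out and resends, the idempotency key guarantees it doesn't create a duplicate order.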

Optimization is per-request efficiency. Sparse fields cut API responses by 80–90%. Cursor pagination is 17x faster than offset at depth. Server-side filtering prevents the worst-case scenario — an agent downloading 100,000 tokens of records to find the 10 that match.
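All three of those per-request optimizations can live in one request. A sketch, with hypothetical parameter names (fields, status, and cursor are common conventions, not any specific API):

```http
GET /orders?status=pending&fields=id,total&cursor=eyJpZCI6OTh9 HTTP/1.1
Host: api.example.com

HTTP/1.1 200 OK
Content-Type: application/json

{"data": [{"id": 99, "total": 12.50}], "next_cursor": "eyJpZCI6OTl9"}
```

The server does the filtering, returns only the two fields the agent asked for, and hands back an opaque cursor instead of an offset, so page 500 costs the same as page 1.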

This is a revenue problem

Gartner predicts that by 2027, over 40% of agentic AI projects will be canceled before production. The primary reason: cost overruns. A lot of those overruns come from agents burning tokens on sites that weren't built for them.

Flip it around. If your site is cheap for agents to use, they'll use it more. If your API is structured, predictable, and efficient, agents will integrate with you first. Agent readiness isn't a nice-to-have. It's a competitive moat.

The sites that figure this out early will get disproportionate agent traffic. The ones that don't will wonder why their API usage is flat while their competitor's is growing 10x.

Start with the math

Run your site through BotVisibility. It's free. You'll get a score across 37 checks and an Agent Tax number that tells you exactly how much overhead agents are paying to interact with you. Then read the full Agent Tax research for the optimization-by-optimization breakdown with token math and dollar impact.

Or run it from your terminal: npx botvisibility yoursite.com.

The agents are already deciding which sites are worth their tokens. Yours should be one of them.