Build for Agents, Not Just Humans — The Operator Playbook Google Didn’t Publish
Google’s web.dev guide validated the direction. It also stops where the real work begins. Here are the seven layers Google left out — and the 14-day sprint to get there before I/O.
The Quiet Announcement Most Operators Missed
On May 1, Google’s web.dev published a guide titled “Build agent-friendly websites.” No keynote. No I/O moment. A single blog post on a developer documentation site.
The message inside is the most consequential thing Google has said about the open web in 2026: websites must now be designed for two audiences — humans and AI agents. Not as a thought experiment. As practical engineering guidance, on Google’s own developer site, telling builders to stop assuming a human visitor.
I’ve been writing about this shift since 2023, when I published an 88-slide deck titled AI’s Imminent Impact on SEO — before GEO and AEO existed as categories. The agentic shift is no longer a thesis. It’s now Google’s developer documentation.
But the guide stops at the floor. The interesting work is everything on top of it.
Google’s web.dev guide is HTML hygiene for agents. It is necessary. It is nowhere near sufficient.
What Google Actually Said
The guide identifies three ways AI agents perceive a website:
- Screenshots — vision models identify elements visually, the same way a human sees a page
- Raw HTML — DOM structure, hierarchy, attributes
- Accessibility tree — described as a “high-fidelity map” of interactive elements, stripped of visual noise
Google’s recommendations for builders:
- Use semantic elements (<button>, <a>) instead of styled <div> wrappers
- Maintain stable layouts across navigation
- Link <label> tags to inputs using the for attribute
- Set cursor: pointer on clickable elements so vision models recognize them
- Sign up for the WebMCP early preview — a proposed standard where websites register tools with defined input/output schemas, and agents discover and call them as functions
The central principle Google states: “Everything we suggest to make a site agent-ready also makes sites better for humans.”
This is accessibility-first thinking, ported into the agent era. The recommendations are correct. They’re also the floor — not the ceiling.
The Gap — What Google Didn’t Say
Google’s guide answers one question: how do agents see my site?
It does not answer the questions that determine whether you actually get cited:
- How do agents decide which site to cite when six options are equally readable?
- How do they extract a chunk worth quoting?
- How do they distinguish my brand from a competitor’s noise?
- How do they verify my claims when sources disagree?
- Why do they cite some sites 53% of the time and identical-looking competitors 0%?
Those questions are answered in content architecture, schema integrity, brand citability, and citation graph density — none of which are in Google’s guide.
This is where the operator playbook starts. Google handed builders HTML hygiene. The work that actually changes citation rate is layered on top of that hygiene, not contained within it.
In the 14 months I’ve spent rebuilding indexableai.com from the ground up as an agent-native site, and in the audits we’ve run on $1B+ brand portfolios, the pattern is consistent: brands that nail Google’s web.dev floor still lose AI search if they skip the seven layers above it.
The 7-Layer Operator Playbook
Floor (Google’s web.dev guide): semantic HTML, stable layouts, accessibility tree, WebMCP preview.
Seven operator layers above the floor:
- JSON-LD schema as agent contract
- Stable URLs as stable identity
- Zero JavaScript on critical paths
- Codename-led content — facts, not adjectives
- Citation density and chunk shape
- Citation graph as authority moat
- WebMCP isn’t the destination — it’s table stakes
Each layer is independent of Google’s recommendations. Each layer compounds the others. None is optional for brands that intend to be cited in AI search through 2026 and beyond.
Layer-by-Layer Walkthrough
Layer 1 — JSON-LD Schema Is Your Agent Contract
Google’s web.dev guide skips schema entirely. That is the most important omission. JSON-LD is the machine-readable contract between your brand and a language model. It tells an agent who you are, what you sell, who works there, what claims you’ve made, where you operate.
When an agent must choose between citing a page with rich Organization + Product + Article + FAQPage schema versus a page with no schema at all, the structured page wins by default. We inject seven JSON-LD types site-wide on indexableai.com: Organization, WebSite, Service, Article, BreadcrumbList, Person, and FAQPage. It is not a nice-to-have. It is the meta-text agents read first.
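As a sketch of what "schema as contract" looks like in practice, the snippet below builds a minimal Organization object and wraps it in the script tag crawlers and agents read. The company name, URL, and logo are placeholder assumptions, not real values; a production site would emit all seven types, not just one.

```python
import json

# Hypothetical organization data for illustration only; swap in your real values.
organization_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
}

def as_jsonld_script(schema: dict) -> str:
    """Serialize a schema.org object into the <script> tag that agents
    and crawlers parse before any visible page content."""
    return (
        '<script type="application/ld+json">\n'
        + json.dumps(schema, indent=2)
        + "\n</script>"
    )

print(as_jsonld_script(organization_schema))
```

The same helper serializes every other type (Article, FAQPage, and so on); the point is that the contract is plain JSON in the page head, with no rendering step an agent could skip.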
Layer 2 — Stable URLs Are Stable Identity
Agents do not follow redirect chains the way humans do. A URL that 301s through three hops is a URL the agent treats as unstable — and unstable URLs get downranked in citation. Canonicalize aggressively. Pick the URL you want cited in AI search and make every alias collapse into it. Treat URLs as identity, not as routing.
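A minimal sketch of "every alias collapses into one URL", using only the standard library. The normalization rules here (force https, strip www, trailing slash, and tracking params) are illustrative assumptions; a real site also needs host and path allowlists.

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Collapse common aliases (http vs https, www, trailing slash,
    tracking params) into the single URL you want agents to cite."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    # Normalize the path; keep a bare "/" for the homepage.
    path = parts.path.rstrip("/") or "/"
    # Drop query string and fragment entirely.
    return urlunsplit(("https", host, path, "", ""))

print(canonicalize("http://www.example.com/pricing/?utm_source=x"))
# https://example.com/pricing
```

Run this over your sitemap and server logs: every URL variant that does not map to itself is an alias that should 301, in one hop, to its canonical.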
Layer 3 — Zero JavaScript on Critical Paths
This is the hardest layer for most teams to swallow. Agents can execute JavaScript — but most don’t reliably, and the ones that do penalize JS-rendered content for latency and inconsistency. The open agent-readiness scanners on the market today (Cloudflare’s public scanner is the cleanest baseline) score JS-heavy pages in the 20s out of 100. A pure HTML page scores in the 90s.
We removed all JavaScript from /pricing, /how-it-works, /about, and /why. The score went from a starting baseline to 100/100. Vanilla HTML is an unfair advantage in 2026.
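To see why JS-heavy pages score badly, it helps to look at a page the way a non-executing crawler does. The heuristic below (an illustrative sketch, not any real scanner's algorithm) counts script tags and the static text available without running JavaScript; a JS shell has scripts but no readable content.

```python
from html.parser import HTMLParser

class CriticalPathAudit(HTMLParser):
    """Rough proxy for what a non-JS agent sees: script count vs. the
    amount of text present in the raw HTML."""
    def __init__(self):
        super().__init__()
        self.scripts = 0
        self.text_chars = 0
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.scripts += 1
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        # Only count text outside <script> blocks -- that is what an
        # agent can read without executing anything.
        if not self._in_script:
            self.text_chars += len(data.strip())

# A typical single-page-app shell: one root div, one bundle, zero static text.
page = "<html><body><div id='root'></div><script src='/app.js'></script></body></html>"
audit = CriticalPathAudit()
audit.feed(page)
print(audit.scripts, audit.text_chars)  # 1 0
```

A page that prints a large text count and zero (or deferred-only) scripts is the shape you want on critical paths.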
Layer 4 — Codename-Led Content. Facts, Not Adjectives.
Agents extract claims, not vibes. Every product page, agent page, and pricing page on indexableai.com leads with a codename and a measurable claim. Our SEO Manager agent, our Content Engineer, our Technical SEO — each has a codename, a concrete responsibility, and a numeric output. Agents quote facts. They don’t quote “industry-leading” or “next-generation.” Strip the adjectives. Lead with the codename and the numeric claim.
Layer 5 — Citation Density and Chunk Shape
A 42,971-citation analysis (Shashko, 2025) found that the median AI citation is 10 words long, that structured content (lists, tables, definitions) is cited with a 91.3% advantage over flowing prose, and that 35% of citations originate from content positioned in the first third of the page.
The implication is direct: agents extract chunks, not articles. Write each page as a sequence of self-contained, copy-pasteable chunks. The first sentence of every section should be a complete claim, readable out of context. Lead with the chunk, then explain.
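A simple lint for chunk shape: pull each section's opening sentence and count its words against the median citation length. This is a hypothetical editorial check, not a tool from the cited study; the sentence-splitting regex is deliberately naive.

```python
import re

MEDIAN_CITATION_WORDS = 10  # from the citation analysis discussed above

def first_claim(section: str) -> tuple[str, int]:
    """Return a section's opening sentence and its word count. An opener
    far longer than a typical citation is unlikely to be quoted verbatim."""
    opener = re.split(r"(?<=[.!?])\s+", section.strip(), maxsplit=1)[0]
    return opener, len(opener.split())

claim, words = first_claim(
    "Agents extract chunks, not articles. The rest of this section explains why."
)
print(claim, words)  # Agents extract chunks, not articles. 5
```

Sections whose opener runs to 30+ words are candidates for a rewrite: state the claim first, then explain.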
Layer 6 — Citation Graph as Authority Moat
This is the hidden layer. Google’s web.dev guide treats every site as if it stands alone. It does not. Agents weight citations from other authoritative sites pointing back at you — precisely like Google’s PageRank, but applied to the citation graph instead of the link graph.
A new domain can earn agent trust by being cited by 27 internal pages on its own site, plus a smaller set of trusted external sources. Three days ago, we shipped 27 internal inbound links to our flagship article on indexableai.com. That is not vanity SEO. It is citation graph engineering.
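A minimal way to see the internal citation graph in motion is to count inbound internal links per page from your crawl data. The edges below are made-up illustrations; the pattern to engineer is a flagship page that dominates inbound counts.

```python
from collections import Counter

# Internal link edges as (from_page, to_page); illustrative values only.
edges = [
    ("/blog/post-a", "/guides/flagship"),
    ("/blog/post-b", "/guides/flagship"),
    ("/pricing",     "/guides/flagship"),
    ("/blog/post-a", "/pricing"),
]

# Inbound internal links per destination page.
inbound = Counter(dst for _, dst in edges)
print(inbound.most_common(1))  # [('/guides/flagship', 3)]
```

From here it is a short step to running PageRank over the same edge list, but even a raw inbound count exposes flagship pages that existing high-traffic pages have not yet linked to.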
Layer 7 — WebMCP Isn’t the Destination. It’s Table Stakes.
Google’s preview of WebMCP is real and important, but it solves agent action — letting agents call your tools as functions. It does not solve agent citation — getting your brand mentioned in the first place.
WebMCP is the steering wheel. Layers 1 through 6 are the engine. Brands that ship WebMCP without the engine are giving agents a steering wheel attached to nothing. Sign up for the preview — but do not let it become a distraction from the foundational work.
We Built Our Own Site This Way
indexableai.com is built agent-first. The site you are reading this on is the case study. Live proof points (every one of these is verifiable from the open web):
- Zero JavaScript on the high-intent pages: /pricing, /how-it-works, /about, /why
- Seven JSON-LD schema types injected site-wide: Organization, WebSite, Service, Article, BreadcrumbList, Person, FAQPage
- Codename-led H1s on all 10 AI SEO Agent pages
- 27 internal inbound links from existing pages back to flagship articles — citation graph in motion
- Sub-200ms TTFB via Cloudflare Pages with immutable asset caching
- Stable canonical URLs — no redirect chains, no JavaScript-only routes
We did not run a study and write about it. We rebuilt our own site from scratch as the experiment, and we publish the live results as we ship. The agent-native publishing infrastructure we use to do this is the same infrastructure we deploy for customers.
Want the playbook deployed for you?
Indexable AI runs a 24/7 team of AI SEO Agents that operate the 7-Layer Playbook on enterprise sites — strategy, content, technical, AI visibility, schema, software engineering. Not a tool. A full team.
Request your audit
The 14-Day Agent-Readiness Sprint
You don’t need to overhaul your stack. You need a 14-day sprint with seven owners and one objective: score 90+ on the open agent-readiness scanner before Google I/O on May 19-20.
Days 1-2 — Run the scanner and get your baseline
Most enterprise sites we audit start in the 20s out of 100. The number is shocking the first time. Document it. The baseline becomes the proof of progress.
Days 3-5 — Strip JavaScript from your top 5 highest-traffic landing pages
The hardest step. The biggest score lift. Replace JS-rendered content with static HTML. Defer JS to non-critical interactions only. Most sites move from 25 to 60 on this step alone.
Days 6-8 — Inject Organization + Service + Article JSON-LD site-wide
One schema file, included in the site-wide <head>. Validate every type with Google’s Rich Results Test before deploy. Score climbs from 60 to 80.
Days 9-10 — Audit URL structure
Collapse all redirect chains. Pick a canonical for every duplicate. Redirect everything to the chosen canonical with a single 301 hop. No chains.
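Given an existing redirect map, collapsing chains is mechanical. The sketch below (a hypothetical helper, assuming your redirects are expressible as a source-to-destination dict) rewrites every alias to point directly at its final canonical.

```python
def flatten_redirects(redirects: dict) -> dict:
    """Rewrite a redirect map so every alias reaches its final canonical
    in a single 301 hop. Raises on redirect loops."""
    def final(url: str, seen: tuple = ()) -> str:
        if url in seen:
            raise ValueError(f"redirect loop at {url}")
        # Follow the chain until we hit a URL that redirects nowhere.
        return final(redirects[url], seen + (url,)) if url in redirects else url
    return {src: final(src) for src in redirects}

chain = {"/old": "/interim", "/interim": "/new-canonical"}
print(flatten_redirects(chain))
# {'/old': '/new-canonical', '/interim': '/new-canonical'}
```

Export your redirect rules, flatten them, and redeploy: agents (and humans) then see exactly one 301 per alias.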
Days 11-12 — Rewrite top 5 page H1s and first paragraphs
Codename-led. Fact-led. Chunk-shaped. Every section opens with a self-contained claim. Adjectives go in the marketing department. Codenames and metrics live on the page.
Days 13-14 — Inject internal inbound links from existing high-traffic pages to flagship pages
Citation graph in motion. Identify your top 20 highest-traffic existing pages, and add one contextual link from each into your most important new flagship article. Score lands at 90+.
By end of Day 14, you should be scoring 90+ on the open agent-readiness scanner. By Day 21, you should see citation rate climb in your AI visibility tracker for category prompts.
Why the Window Closes at I/O
Google I/O runs May 19-20, 2026 — 15 days from this article’s publication. Chrome will demonstrate the next phase of agent interaction. WebMCP will graduate from preview to broader rollout. Every brand that’s ready will compound. Every brand that isn’t will lose six to twelve months of catch-up work.
The dynamic is similar to mobile-first indexing in 2018. Brands that prepared for mobile-first before Google made it the default earned a lasting advantage that compounded for years. Brands that waited spent the next two years chasing.
This is the same shape. Google’s web.dev guide is the early warning. The 14-day sprint is the prepared response. I/O is the deadline.
If you wait for Google to tell you the agent shift has arrived, you’ve already lost the head start. They just told you. The clock is running.
The Operator Bet
I’m Vijay. I led SEO at Uber as the first SEO hire and scaled organic to 12 million monthly visits. Then I led SEO programs at Zendesk, Williams-Sonoma, and Pottery Barn — generating over $1 billion in cumulative organic revenue across those programs. I’ve been writing about LLMs and search since 2023, before GEO and AEO existed as categories.
The shift to agentic search is the largest infrastructure change to the open web since mobile-first indexing. The operator playbook above isn’t a thesis. It’s the work I’ve been doing on indexableai.com for 14 months and on customer sites for the past 6.
Build for agents. Not just humans. The window is 15 days.
Related Reading
The 7 layers above the floor map cleanly onto a 3-leg execution framework. Brand. Technical. Content.
Read The 3-Legged GEO Stool: Why Brand + Technical + Content Wins AI Search →
Frequently Asked Questions
What is an AI native website?
An AI native website is a site designed from the ground up to be readable, citable, and actionable by AI agents alongside humans. The defining traits are pure HTML on critical paths, comprehensive JSON-LD schema, stable canonical URLs, codename-led content, structured chunks, dense internal citation graphs, and (where available) WebMCP tool registration.
Will agents replace organic search?
No. They extend it. Google still drives the majority of total search volume. AI agents add a new citation layer on top. The brands that win in 2026 are the ones visible in both surfaces — traditional Google search and AI agent answers.
Do I need to ship WebMCP today?
Sign up for the preview, but Layers 1 through 6 are higher priority. WebMCP without schema and citation density is a steering wheel attached to nothing. Build the engine first. Add the steering wheel when WebMCP graduates from preview.
Can Agentic SEO and traditional SEO coexist?
They are the same discipline, expanded. Agentic SEO is the operator method for optimizing for AI agents within the broader practice of SEO. The 7 layers are pure SEO best practice extended to AI agents. A team that does both is just doing SEO well in 2026.
How do I measure agent-readiness?
Two metrics. (1) An open agent-readiness score (Cloudflare’s public scanner is the cleanest baseline). Target 90+. (2) Share of Voice in AI search engines for your category-specific buyer prompts. Track monthly. Both should rise together as the seven layers are built.
Is this only for tech companies?
No. The 7 layers apply to every sector with agentic search exposure: retail, finance, health, B2B SaaS, services. The discipline is universal. Only the surfaces differ.
Where does Indexable fit?
Indexable AI runs a full team of 10 specialized AI SEO Agents plus a Forward-Deployed Enterprise Strategist who embeds with the customer’s CMO on site. The agents operate the 7-Layer Playbook 24/7 — strategy, content, technical, AI visibility, schema, software engineering. Not a tool. Not a single hire. A full team with senior leadership in your building.
Build for agents. Not just humans.
The 7-Layer Operator Playbook is what we deploy for our enterprise customers. Series C+ B2B SaaS or $50M+ ARR. 7-day Free Enterprise Audit, PDF deliverable, 30-minute walkthrough.