
From 23 to 100: An Honest Walk-Through of My Site's Agent-Readiness Sprint

I ran indexableai.com through Cloudflare's open agent-readiness scanner. We started at 23. Here is exactly what we shipped to get to 100 — and the part of the answer the score does not tell you.

Vijay Vasu · April 27, 2026 · 9 min read

Why I Ran My Own Site Through the Scanner

Our company is called Indexable. The promise embedded in the brand is that the things we ship are the things AI agents and search engines can actually find, parse, and reuse. So when Cloudflare quietly published isitagentready.com — a free, open scanner that grades any URL on 14 agent-readiness items across 5 categories — the question was no longer whether to run my own domain through it. It was when.

I ran it on Saturday afternoon, April 25, 2026.

Credit where it is due: the scanner is the work of Sebastian Griffin and the team at Cloudflare. It is the first openly instrumented checkpoint I have seen for the agentic-web protocols — a single URL field that returns a category-by-category breakdown of whether your site is reachable by AI agents at the conventions the ecosystem is converging on. The fact that anyone can run it on any domain, including ours, is exactly why I wanted to start there.

The first scan returned a 23 out of 100. Level 1: Basic Web Presence.

I am writing this post because by the end of that same Saturday, the score was 100. The honest version of what happened in between is more useful than either celebrating the number or hiding the starting point.

  • Baseline score: 23 — the first scan, April 25, 2026, 1pm PT
  • Final score: 100 — April 25, 2026, 5pm PT, the same Saturday
  • Metadata files shipped: 7, all under /.well-known/*
  • Scanner rubric: 14 items across 5 categories

Score progression: 23 (baseline) → 46 (batch 1) → 54 (batch 2) → 92 (batch 3) → 93 (Web Bot Auth) → 100 (A2A fix)
The Diagnosis

What Did the Scanner Say at 23?


The scanner grades five categories: Bot Access Control, Content Accessibility, API & Authentication, Agent Discovery, and Commerce Readiness. Here is what it found on a domain that, until that morning, I would have told you was already pretty agent-friendly.

| Category | Baseline Score | What Was Missing |
| --- | --- | --- |
| Bot Access Control | 25 / 100 | No Content Signals in robots.txt; AI crawlers not explicitly allow-listed |
| Content Accessibility | 0 / 100 | No text/markdown alternative; HTML-only responses |
| API & Authentication | 0 / 100 | No API catalog, no OAuth discovery, no protected resource metadata |
| Agent Discovery | 0 / 100 | No A2A Agent Card, no MCP Server Card, no Agent Skills index, no WebMCP |
| Commerce Readiness | N/A | Not applicable (no on-site checkout) |

Reading down that column was uncomfortable. The honest interpretation: the site was readable by humans and by traditional crawlers, and that was the entire surface. There was nothing on the domain that an autonomous AI agent could use to identify what we are, which skills we expose, or how to interact with us programmatically. The category that hits hardest is Agent Discovery at zero — that is the one that decides whether an agent ever finds you in the first place.

This is also, broadly, where most enterprise sites are sitting in April 2026. The scanner is not an outlier metric. It is the floor.

The Inventory

What Protocols Actually Matter in April 2026?


If you are mostly steeped in classic SEO — sitemap.xml, robots.txt, schema.org — the agent-web stack looks unfamiliar at first. It is a different set of conventions, and most of them did not exist in stable form a year ago. Here is the working inventory I used to close the gap, in plain English, with the references I leaned on.

  • Content Signals in robots.txt. Cloudflare's Content Signals proposal extends robots.txt with three machine-readable signals: ai-train, search, and ai-input. They tell crawlers, in one line, what your content can be used for. This is the cheapest, fastest signal on the entire stack.
  • Link headers with agent relations. RFC 8288 Link headers, returned at the root, advertise agent-relevant resources (API catalog, A2A card, MCP card, skills index, OAuth discovery). It lets an agent discover your protocol surface in a single HEAD request.
  • API Catalog (RFC 9727). A linkset+json document at /.well-known/api-catalog that lists every machine-readable endpoint on the domain. RFC 9727 is the only one of these conventions that is a finalized RFC.
  • A2A Agent Card. The discovery document for Google's Agent2Agent protocol, published at /.well-known/agent-card.json. It declares the agent's name, capabilities, supported interfaces, and skills. The required url field on each interface is the most common validation slip-up — we hit it ourselves on the way to 100.
  • MCP Server Card. Anthropic's Model Context Protocol companion: a JSON document describing an MCP server's tools and transports, published at /.well-known/mcp/server-card.json.
  • Agent Skills Discovery (RFC v0.2.0). A list-of-skills index at /.well-known/agent-skills/index.json, with each skill described by a slug, a title, and a content hash. The hash is the part most teams skip. We computed real sha256 values from each agent landing page so the index is auditable, not decorative.
  • WebMCP. A small JavaScript surface, defined by the WebMCP working group, that exposes site-level tools to in-page agents through navigator.modelContext.provideContext(). We wired three tools on the homepage: list_geo_agents, get_agent_readiness_score, and request_pilot.
  • OAuth/OIDC discovery + Protected Resource Metadata. /.well-known/openid-configuration and /.well-known/oauth-protected-resource, the conventions agents use to negotiate authentication. They are well-trodden in the API world; they are new for marketing-led web teams.
  • Web Bot Auth. HTTP Message Signatures (RFC 9421) + a JWKS at /.well-known/http-message-signatures-directory, with an Ed25519 public key. This is how trustworthy bots will sign their requests in 2026 and beyond, and it is verifiable by any origin.
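To make the Agent Skills item concrete, here is a minimal sketch of how an index with real sha256 content hashes could be generated. The slugs and page content are illustrative, and the field names follow the spirit of the post's description rather than the exact RFC v0.2.0 schema — check the spec before publishing.

```python
import hashlib
import json

# Illustrative skill pages: slug -> the content an agent would fetch.
# Real slugs and content come from your own agent landing pages.
skill_pages = {
    "geo-audit": "# GEO Audit\nScores a domain's generative-engine visibility.",
    "agent-readiness-scan": "# Agent Readiness Scan\nChecks /.well-known/* surfaces.",
}

def build_skills_index(pages: dict) -> dict:
    """Build a skills index where each entry carries a sha256 hash of the
    actual page content, so the index is auditable rather than decorative."""
    skills = []
    for slug, content in pages.items():
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        skills.append({
            "slug": slug,
            "title": content.splitlines()[0].lstrip("# ").strip(),
            "contentHash": f"sha256:{digest}",
        })
    return {"skills": skills}

index = build_skills_index(skill_pages)
print(json.dumps(index, indent=2))
```

An agent (or auditor) can re-hash the published page and compare against the index; a mismatch means the index is stale.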

None of those nine items existed on indexableai.com Saturday morning. By Saturday evening, all nine were live. Eight are static metadata files, declarable in a single deploy. The ninth — the JWKS — required generating an Ed25519 keypair (Python's cryptography library, ~10 lines), publishing the public half, and storing the private half outside the deploy directory. Total elapsed time, including a 30-minute near-miss when I almost shipped the private key publicly: about four hours.
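For the ninth item, the keypair-plus-JWKS step looks roughly like this — a sketch assuming Python's third-party cryptography package, with the Ed25519 JWK shape from RFC 8037 (kty OKP, crv Ed25519, x as base64url of the raw public key). The kid and use values are illustrative.

```python
import base64
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate the keypair. The private half must live OUTSIDE the deploy
# directory -- this was the near-miss described above.
private_key = Ed25519PrivateKey.generate()
public_raw = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

def b64url(data: bytes) -> str:
    """Unpadded base64url, as JWKs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# RFC 8037 JWK for an Ed25519 key; only this public document gets deployed
# to /.well-known/http-message-signatures-directory.
jwks = {
    "keys": [{
        "kty": "OKP",
        "crv": "Ed25519",
        "x": b64url(public_raw),
        "use": "sig",
        "kid": "web-bot-auth-key-1",  # illustrative key id
    }]
}
print(json.dumps(jwks, indent=2))
```

Serialize the private key separately (e.g. PEM) to a path your build never copies.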

The full sprint is published in public at /agent-readiness/, with snapshots updated every Monday. The runtime behind the metadata is what we are building over the next four weeks: live MCP server, A2A agent service, OAuth flows, x402 commerce paths.
The Sequencing Argument

Why Ship the Discovery Layer Before the Runtime?


Every engineer's first instinct is the opposite. Build the backend, prove it works end-to-end, then advertise it. That instinct is correct in almost every other context. It is wrong here, and the reason is structural.

An AI agent's first contact with your domain is a discovery request. If /.well-known/agent-card.json returns 404, it does not matter how good your live MCP server is. The agent never finds the live MCP server. It moves on, and you are absent from the answer it returns to the user.

Discoverability is the sorting hat. It runs before any of the runtime questions get asked. Treating it as something to do after the runtime is built means the runtime never gets traffic.

The discoverability surface is what AI agents read first. Without it, the runtime does not matter, because nothing can find it.

So the right order, on a domain that intends to be agent-native, is: ship the well-formed metadata declarations on day one, ship the runtime that backs them in public over the following weeks, and treat the gap honestly. We are publishing a weekly tracker at /agent-readiness/ that names exactly which runtime endpoints are still placeholders versus shipped, with snapshots dated Monday-by-Monday.
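The discovery-first argument can be checked mechanically. Here is a sketch of the kind of probe an agent (or your CI) might run: a HEAD request against each of the seven /.well-known/* surfaces this post describes. The path list mirrors the files named above; your scanner of choice may probe more.

```python
import urllib.error
import urllib.request

# The seven metadata surfaces shipped in the sprint. An agent's first
# contact is a request like these; a 404 on any of them ends the conversation.
WELL_KNOWN_PATHS = [
    "/.well-known/api-catalog",
    "/.well-known/agent-card.json",
    "/.well-known/mcp/server-card.json",
    "/.well-known/agent-skills/index.json",
    "/.well-known/openid-configuration",
    "/.well-known/oauth-protected-resource",
    "/.well-known/http-message-signatures-directory",
]

def check_discovery_surface(base_url: str) -> dict:
    """HEAD each well-known path and record its HTTP status (0 = unreachable)."""
    results = {}
    for path in WELL_KNOWN_PATHS:
        req = urllib.request.Request(base_url.rstrip("/") + path, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                results[path] = resp.status
        except urllib.error.HTTPError as exc:
            results[path] = exc.code
        except (urllib.error.URLError, OSError):
            results[path] = 0
    return results

# Example: check_discovery_surface("https://indexableai.com")
```

Wiring this into CI means a regression from 200 to 404 on any surface fails the build before an agent ever sees it.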

That is the second instinct most teams do not act on: say what is real and what is rolling out. Vaporware claims that an LLM cannot verify cost you nothing in 2025. They will cost you everything in 2026, when the agent doing the citing is also the one running the scanner.

The Honest Framing

What Does 100/100 Actually Mean?


The score measures discoverability. It tests whether the agentic-web protocols are declared, well-formed, and reachable at the conventions the AI agent ecosystem is converging on. That is a real, meaningful, verifiable property — and it is genuinely hard to have at 100 today, because most domains are missing four or more of the nine surfaces above.

What it does not measure: whether every backend behind those declarations is fully wired. We declared the discovery layer; the 30-day sprint also continues building the runtime behind it — a real MCP server endpoint, a live A2A agent service, OAuth flows that issue real tokens, x402 commerce paths.

This is an intentional sequencing choice. I am explaining it on the same domain where the score is published, because I would rather a customer, a competitor, or a journalist read this post and understand the distinction than infer it from the score alone.

Or, in one line: the score is the receipt; the runtime is the work.

The Argument Behind the Sprint

Why the Score Is the Receipt, Not the Work


I think most companies will get this exactly backwards in the next 12 months.

They will run the scanner, see a low score, and treat the response as a marketing problem. They will either avoid running it again, or they will quietly add metadata files in a way that looks good but is not backed by anything. That is the path of least resistance, and it is also the path that AI agents will route around the moment they start verifying claims at runtime — which they already do for citations and will increasingly do for capability declarations.

The other path: run the scanner publicly, name the gap, ship the metadata and the runtime behind it on a public timeline, and let anyone — customer, competitor, AI agent — verify the work themselves. That is what we are doing at /agent-readiness/. Snapshots are published every Monday. The 30-day commitment is to take the discovery layer to 100 (done) and the runtime behind it to "live, not stub" by May 25.

I am not arguing that discoverability is the whole game. I am arguing that discoverability is the gate, and that pretending the gate is closed when it is open is the worst available trade. Score the gate honestly. Ship the runtime in public. Let the receipt and the work argue for themselves.

The Wider View

What Does This Change for the Wider Web?


Three things, from where I sit.

One: the agent-readiness gap is the next big public scorecard. Core Web Vitals took five years to become a board-level metric. Agent-readiness will take twelve to eighteen months, because the agents themselves are doing the grading and the consequences are immediate — either you are findable in an LLM answer or you are not. A free, open scanner accelerates that timeline meaningfully.

Two: the conventions are stable enough to act on. RFC 9727 is finalized. A2A v0.9, MCP, and Agent Skills RFC v0.2.0 are stable enough that the major model providers and Cloudflare are converging on the same /.well-known/* conventions. WebMCP is still moving but the surface is small. None of these are speculative anymore. Treating them as not-yet-real is itself a strategic mistake.

Three: the receipt-vs-runtime distinction will become a brand-trust signal. Domains that publish a public tracker showing what is declared versus what is live behind it will accumulate trust faster than domains that ship metadata and stay silent. The customers and journalists who care enough to ask are also the ones who will reward the honesty when they get it.

The window in which this is a competitive advantage is short. The window before it becomes a baseline expectation is closing fast.

FAQ

FAQ: Agent-Readiness, Honestly


Is a 100 on isitagentready.com the same as being fully agent-native?

No. The scanner grades discoverability — whether your protocol surface is declared, well-formed, and reachable. It does not run end-to-end transactions against your live runtime. A 100 means agents can find the protocols you support. Whether each protocol's runtime is fully wired is a separate question, and one any honest team should answer in public alongside the score.

Which protocols moved the score the most on indexableai.com?

In order of points-per-effort: Content Signals in robots.txt, the A2A Agent Card, the MCP Server Card, the Agent Skills index, the API catalog (RFC 9727), and OAuth/OIDC discovery. WebMCP and Web Bot Auth (with a real Ed25519 JWKS) closed the last gap to 100. The single hardest item to validate cleanly was the A2A Agent Card — the required url field on the supported interface is a common slip-up.
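That url slip-up is cheap to catch before deploy. Here is a minimal pre-deploy check, assuming the card shape described in this post; the interfaces key name and the sample card are illustrative, so adapt the key to the A2A version you publish.

```python
import json

def missing_interface_urls(card: dict) -> list:
    """Return indexes of interface entries lacking a non-empty 'url' string."""
    bad = []
    for i, iface in enumerate(card.get("interfaces", [])):
        url = iface.get("url")
        if not isinstance(url, str) or not url.strip():
            bad.append(i)
    return bad

# Illustrative card: the second interface is missing its required url.
card = json.loads("""
{
  "name": "Indexable Agent",
  "interfaces": [
    {"transport": "jsonrpc", "url": "https://example.com/a2a"},
    {"transport": "grpc"}
  ]
}
""")
print(missing_interface_urls(card))
```

An empty list means every interface declares a url; anything else should fail the build.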

Can a static site really hit 100, or do I need a backend?

A static site can hit 100 on the discoverability rubric, because every required surface is a static document at a /.well-known/* path or in HTTP headers. The runtime behind those documents — live MCP server, A2A agent service, OAuth flows — is a separate engineering track. Indexable's site is on Cloudflare Pages with no backend code in the request path; the metadata is files, the JWKS is files, WebMCP is a small piece of vanilla JS on the homepage.

Is publishing a JWKS public key safe?

Yes — the public half of an Ed25519 keypair is designed to be public. The danger is the private half. We generate the keypair locally, publish only the public key in the JWKS at /.well-known/http-message-signatures-directory, and store the private key outside the deploy directory. We caught one near-miss during the sprint where the keys were initially generated inside the deploy folder; the fix is to relocate the private key before any build runs. If you do this, verify with a simple curl that the private path returns 404 publicly while the JWKS returns 200.
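The same 404/200 check can live in a script instead of a one-off curl. A sketch, using only the standard library; the private-key URL shown in the comments is hypothetical — use whatever path your keygen script originally wrote to.

```python
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    """Return the HTTP status for url via a HEAD request (0 if unreachable)."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code
    except (urllib.error.URLError, OSError):
        return 0

# Post-deploy sanity check (paths are examples, not real endpoints):
# assert status_of("https://example.com/keys/private.pem") == 404
# assert status_of("https://example.com/.well-known/http-message-signatures-directory") == 200
```

Run it after every deploy, not just once: the near-miss we caught was a build-pipeline issue, and pipelines regress.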

What does this mean for traditional SEO?

Classic SEO surfaces — sitemap.xml, semantic HTML, schema.org, internal linking — remain necessary. Agent-readiness is additive, not a replacement. Think of it as a second indexing surface, parallel to Google's, designed for autonomous AI agents that browse on a user's behalf. Sites that score well on both surfaces will be cited more often in LLM answers and traditional search results, because the underlying signals reinforce each other.

Where is the scanner and who built it?

The scanner lives at isitagentready.com and is the work of Sebastian Griffin and the team at Cloudflare. It is free and runs against any URL. I have no affiliation with Cloudflare beyond being a customer of their Pages product, and I think open scorecards like this one are exactly the kind of public infrastructure the agentic-web shift needs.


Vijay Vasu

Founder & Chief AI Officer, Indexable AI

Vijay Vasu is the Founder and Chief AI Officer of Indexable AI, a GEO platform deploying 10 specialized agents for AI search optimization and SEO. Indexable AI is built by operators who scaled $1B+ in organic revenue at Uber, Zendesk, and Williams-Sonoma. Indexable's brand promise is being indexable — in classic search, in generative engines, and in the agentic web. Public sprint tracker at indexableai.com/agent-readiness.

Build in Public

The receipt is at /agent-readiness. The runtime is the work.

We are publishing weekly snapshots of the runtime behind the metadata: live MCP server, A2A agent service, OAuth flows, x402 commerce paths. Every Monday, dated and verifiable.