Every firm selling AI visibility promises growth. Almost none of them can show you the numbers that prove it. Ask them how they measure citation rate and you'll get vague language about “brand presence.” Ask for a baseline and they'll change the subject.
That's a problem. If you can't measure what you're buying, you can't tell when it stops working. Or worse — you can't tell if it ever started.
We publish our measurement framework openly so you can verify us against it, and steal the framework itself if you want to do the work in-house. Four numbers. Every month. Every engagement.
Layer 1: Are you actually cited?
The first and most important number: when we query ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews with the prompts your buyers actually use, does your brand show up?
We run between thirty and sixty target prompts per engagement, tiered by buyer intent. Informational (“what is X consulting”). Navigational (your brand plus terms, competitor comparisons). Transactional (“hire X consultant for Y”). For every prompt, we record whether your brand was named, where in the response, and how the AI framed you.
We then convert that into a position-weighted citation score. Being the first source cited is worth far more than being the fifth. The score goes up every month or we find out why.
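The position weighting can be sketched in a few lines. The exact weights below are illustrative assumptions, not the firm's actual scoring table; the shape is what matters: value decays steeply with citation position, and an uncited prompt scores zero.

```python
# Sketch of a position-weighted citation score. The weight values are
# hypothetical placeholders chosen to decay steeply with position.

def citation_score(results):
    """results: one entry per target prompt.
    None = brand not cited; 1 = first source cited, 2 = second, etc."""
    weights = {1: 1.0, 2: 0.6, 3: 0.4, 4: 0.25, 5: 0.15}
    # Positions past fifth get a small floor weight; uncited prompts get 0.
    total = sum(weights.get(pos, 0.1 if pos else 0.0) for pos in results)
    return round(100 * total / len(results), 1)

# Five prompts: cited first, uncited, cited third, cited second, uncited.
score = citation_score([1, None, 3, 2, None])  # 0-100 scale
```

Tracked monthly, a score like this makes "being the first source cited is worth far more than being the fifth" a number you can audit rather than a claim you have to trust.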
Layer 2: Can AI crawlers even see your site?
A site the AI crawlers can't read is a site the AI won't cite. This layer is plumbing, and most sites have broken plumbing.
Every month we verify: your robots.txt explicitly allows the bots that matter (GPTBot, OAI-SearchBot, ClaudeBot, PerplexityBot, CCBot, Google-Extended, and the rest). Your schema.org markup is complete and validates. Your llms.txt is present and up to date. Your Core Web Vitals pass Google's thresholds at the 75th percentile.
Any failure gets logged. Any broken access becomes the next sprint's priority.
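The robots.txt portion of that check needs nothing beyond the standard library. A minimal sketch, assuming the bot list above and a deliberately restrictive sample policy:

```python
# Check which AI crawlers a robots.txt blocks, using only the stdlib.
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot",
           "PerplexityBot", "CCBot", "Google-Extended"]

def blocked_bots(robots_txt, path="/"):
    """Return the AI crawlers that cannot fetch `path` under this policy."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, path)]

# Example policy: allows GPTBot, blocks everyone else.
sample = """\
User-agent: GPTBot
Allow: /

User-agent: *
Disallow: /
"""
```

Here `blocked_bots(sample)` flags every crawler except GPTBot, which is exactly the kind of silent failure this layer exists to catch: a blanket `Disallow: /` for `*` quietly locks out every bot you didn't name.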
Layer 3: Does AI know who you are?
AI systems don't want to cite sources they can't identify. Your brand needs to be resolvable across the web as a distinct entity — one company, one identity, same facts wherever the AI checks.
This is your entity graph: Wikidata, Google Knowledge Panel, Google Business Profile, LinkedIn, GitHub, Crunchbase, G2, Clutch. All cross-linked via schema so the AI can traverse them and confirm you're real.
In the firms we audit, this is the most under-invested layer. It's also the one that moves citation rate the fastest when it's weak, because fixing entity resolution often unlocks citations you weren't getting for reasons nobody had ever investigated.
Layer 4: Is your Google rank pulling its weight?
Classical Google rank still matters. AI retrieval often pulls from pages that already rank well in Google, so your organic position is a partial leading indicator of your citation rate.
We track your Search Console impressions, clicks, CTR, and average position per target query. We track your keyword rank and domain authority in Ahrefs or Semrush. We correlate movements against Layer 1 and look for non-linearities — because there are plenty of cases where improving one Google position moves AI citation more than expected, or where a content-side change helps AI citation without moving Google at all.
You see all four layers in one report every month. Actions taken, numbers moved, plan for next month.
Why this matters
- Four concrete numbers. No vanity metrics, no “brand sentiment score,” no vague progress.
- Every number is auditable. You can verify every claim in your own tools.
- Every number has a specific action tied to it. You know what moved and why.
- Every number compounds. This is the framework that keeps working as the models change.
If you want to see this framework applied to your engagement, the GEO service page has the full picture. If you want to see what it looks like at audit kickoff, the audit walkthrough goes step by step.