AI search visibility (GEO)
Entity recognition, retrieval-augmented generation, schema markup, Wikidata, and the per-assistant retrieval stack. The technical playbook for being cited in AI answers.
AI search surfaces local businesses two ways: as entities recommended in the answer, and as pages the model cites or quotes from. Most local-SEO writing focuses on the first and ignores the second. For a local business, both matter, and the right balance shifts depending on the query type and the surface the user is on. This guide covers both layers, surface by surface.
Two ways AI search surfaces a local business
Almost every conversation about AI visibility collapses into one of two narrow framings: "schema and Knowledge Graph" (entity-only) or "great content" (page-only). For local SEO neither is sufficient on its own. Here is the actual picture.
As an entity
The AI names your business in its answer. You are the recommendation. Driven by Knowledge Graph, GBP, Wikidata, and entity convergence across sources.
As a cited page
The AI quotes from one of your URLs. Your page is the source. Driven by topical authority, page-level schema, structure, and original content.
Via earned media
A third-party authoritative page that mentions your business gets cited. You appear by association. Driven by mentions in recognised press, sector publications, regulator listings.
For a local business optimising for AI visibility, all three pathways exist and all three deserve effort. They are not interchangeable. A perfectly built GBP with rich Knowledge Graph presence still loses for informational queries where the AI is pulling from a how-to article. A beautifully written guide on your site loses for "best plumber in Camden" if your GBP and reviews are weak. Earned media in TimeOut or a recognised trade publication can win both at once.
Local SEO has always been the messy middle between entity-shaped queries ("ABC Plumbing") and page-shaped queries ("how often should I service a boiler"). AI search inherits that mess. Optimising only the entity side or only the page side leaves half the surfaces uncovered.
Different queries trigger different surfaces
"Will I appear in AI search?" is the wrong question. The right question is "for which queries, on which surfaces, and as which type of result?". The same query can produce a Map Pack on Google, an AI Overview above the SERP, an AI Mode conversation, a Knowledge Panel on a brand search, or a synthesised paragraph in ChatGPT or Perplexity. Each surface ranks different things differently.
Brand query (e.g. 'ABC Plumbing London')
- •Surface: Knowledge Panel, Map Pack
- •Primary signal: entity (Knowledge Graph node)
- •What wins: clean GBP, strong Wikidata if notable, schema sameAs convergence
- •Pages quoted: rare; the answer is the entity itself
Local recommendation (e.g. 'best plumber Camden')
- •Surface: Map Pack, AI Overview, AI Mode
- •Primary signals: entity (GBP, reviews) plus pages (curated lists, third-party 'best of' content)
- •What wins: combination of strong GBP, review depth, AND mentions in third-party authoritative pages
- •Pages quoted: directories, local press, 'best of' round-ups
Informational with local intent (e.g. 'cost of boiler service in London')
- •Surface: AI Overview, AI Mode, Featured Snippet
- •Primary signal: page content with topical authority and clear answer structure
- •What wins: a well-structured, original-data page on your domain or a third-party site
- •Entity may be referenced, but the page is the citation source
How-to (e.g. 'how to bleed a radiator')
- •Surface: AI Overview, AI Mode, sometimes traditional SERP with HowTo rich result
- •Primary signal: page content (HowTo schema, clear step structure, expert authorship)
- •What wins: long-form, well-structured, accurate how-to pages
- •Local entity rarely featured unless the page itself ties to a service area
Conversational AI Mode (e.g. 'I have a leak in my kitchen, what should I do?')
- •Surface: Google AI Mode, ChatGPT, Claude, Perplexity
- •Primary signal: multi-source synthesis from many pages plus relevant entities
- •What wins: the AI fans the question out into sub-queries (urgent steps, when to call a pro, how to find a plumber locally) and pulls from a mix of pages and local-business sources
- •Both layers matter: pages explain what to do; entities surface as 'who to call'
Comparison (e.g. 'tankless vs combi boiler')
- •Surface: AI Overview, AI Mode, Perplexity
- •Primary signal: comparison-style page content with parallel structure (tables, side-by-side)
- •What wins: a page on your domain (or third-party) that genuinely compares the two with original analysis
- •Entity rarely featured; this is page-shaped territory
The four moving parts of an AI answer
Every AI assistant answer (and increasingly every AI Overview / AI Mode result) goes through four stages. Different products implement each stage differently, but the structure is the same.
1. Query interpretation
The model parses the natural-language question into structured intent. "Recommend a Saturday dentist near Brighton who takes nervous patients" gets decomposed into entity type (dentist), location (Brighton), temporal constraint (Saturday), and an attribute filter (nervous patient care). Conversational queries in AI Mode often get fanned out into multiple sub-queries here.
2. Retrieval (entity + page indexes in parallel)
The system queries multiple sources at once. A web search index (Bing, Google, Brave, or proprietary) returns candidate pages. A structured data source (Knowledge Graph, Maps API, Wikidata) returns candidate entities. Embedding-based vector retrieval may run against pre-indexed content. Hundreds to thousands of candidates feed the next stage.
3. Reranking, entity resolution, and source selection
Candidates are reranked by relevance, source authority, and recency. The model resolves multiple references (a Knowledge Graph node, a GBP record, a Wikidata entry, mentions on third-party sites) to a single canonical entity. Pages are scored for coverage of the answer, recency, and trust. Ambiguous business names or fragmented identities lose visibility here.
4. Synthesis with citation selection
The model writes the answer using the highest-scoring retrieved sources as grounding. Citation selection is partly about coverage (does this source carry the bit of the answer the model needs?) and partly about authority (is this a source the model trusts to cite?). The user reads the answer; whether they ever click through depends on the surface and the user.
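The shape of that pipeline is easier to see in code. Below is a toy, runnable sketch of the four stages; every function, source, and score in it is invented for illustration, and no assistant exposes anything like this as an API.

```python
# Toy sketch of the four-stage answer pipeline. All data is invented;
# real systems use learned rankers over vastly larger candidate sets.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str
    authority: float  # 0..1 trust in the source
    coverage: float   # 0..1 how much of the answer it carries

def interpret(query: str) -> list[str]:
    # Stage 1: decompose a conversational query into sub-queries (fan-out).
    return ["urgent steps for a kitchen leak", "when to call a plumber",
            "how to find a plumber locally"]

def retrieve(sub_query: str) -> list[Source]:
    # Stage 2: stand-in for parallel page + entity retrieval.
    return [
        Source("https://example-trade-body.org/leaks", "Turn off the stopcock...", 0.9, 0.7),
        Source("https://example-plumber.co.uk/guides/leaks", "Step-by-step fix...", 0.6, 0.9),
    ]

def rerank(candidates: list[Source]) -> list[Source]:
    # Stage 3: score by authority x coverage; real systems add recency
    # and resolve entity references before selecting sources.
    return sorted(candidates, key=lambda s: s.authority * s.coverage, reverse=True)

def synthesize(query: str, sources: list[Source]) -> str:
    # Stage 4: ground the answer in the top sources and cite them.
    cited = ", ".join(dict.fromkeys(s.url for s in sources[:2]))
    return f"Grounded answer to {query!r}, citing: {cited}"

query = "I have a leak in my kitchen, what should I do?"
candidates = [s for sq in interpret(query) for s in retrieve(sq)]
print(synthesize(query, rerank(candidates)))
```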
Layer 1: The entity layer
An entity in the search-engine sense is a disambiguated, structured identity for a thing in the world. Google's Knowledge Graph, launched in 2012, now holds billions of entity nodes. Wikidata is a public knowledge base with well over 100 million entity records. Each entity has a stable identifier (a Knowledge Graph machine-readable ID, a Wikidata Q-number), a type (LocalBusiness, Person, Organization, Product), and properties.
Your business is, or wants to be, one of those entities. The signals that let AI retrieval systems identify "you" come from multiple sources, which need to converge:
Knowledge Graph node
Google's structured record of your business. Drives Knowledge Panel, AI Overviews, Gemini, and AI Mode answers for brand and local queries.
Wikidata Q-number
Open structured-data entry. Notability bar far lower than Wikipedia. Direct feed into many AI systems' grounding data.
Google Business Profile
The canonical source for local-business attributes (NAP, hours, services, reviews). Feeds the Knowledge Graph for local entities.
Domain and schema markup
Your homepage, with LocalBusiness schema and a sameAs array pointing to the above. Closes the loop for entity convergence.
When all four sources reference the same entity, retrieval systems converge on a single canonical record with high confidence. When they disagree (different names, addresses, missing nodes), the system either picks one and gets it partially right, or fails to surface you at all. Entity convergence is the entity-layer goal.
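As a concrete sketch, this is roughly what convergence looks like in JSON-LD on a homepage. The business type, the @id, and every URL below are placeholders to adapt, not real records:

```json
{
  "@context": "https://schema.org",
  "@type": "Plumber",
  "@id": "https://www.example-plumbing.co.uk/#business",
  "name": "ABC Plumbing",
  "url": "https://www.example-plumbing.co.uk/",
  "telephone": "+44 20 7946 0000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "London",
    "postalCode": "NW1 0AA",
    "addressCountry": "GB"
  },
  "sameAs": [
    "https://www.wikidata.org/wiki/Q123456789",
    "https://maps.google.com/?cid=YOUR_GBP_CID",
    "https://www.facebook.com/exampleplumbing",
    "https://uk.trustpilot.com/review/example-plumbing.co.uk"
  ]
}
```

The sameAs array is what closes the loop: it asserts that the Knowledge Graph node, the Wikidata entry, the GBP listing, and this domain all describe the same entity.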
Layer 2: The page layer
A page in this context is one URL on your domain (or someone else's) that an AI might quote, summarise, or link to in a synthesised answer. Pages get cited based on different signals than entities get recommended. Treating "AI visibility" as purely an entity problem leaves all the informational, comparison, and how-to queries on the table for whoever puts in the page work.
What makes a page citation-worthy
1. Topical authority of the page and the surrounding cluster
Foundation: Pages embedded in a coherent topical cluster (a hub page plus several deep pages on related sub-topics, internally linked) are weighted as more authoritative than isolated pages on the same domain.
2. Page-level E-E-A-T signals
Strong: Named author with credentials, dated content, last-updated metadata, links to professional profiles via Person schema, transparent affiliations and conflicts of interest. Models favour pages that look credible to a human reviewer.
3. Original content not derivable elsewhere
Strong: Primary research, original data, expert commentary, case studies, first-party survey results. Models prefer to cite the source of a fact rather than the aggregator.
4. Clear answer-shaped structure
High leverage: Question-style headings, definition openers, parallel-structured lists, comparison tables, FAQ blocks. Pages structured around answering specific user questions get extracted more cleanly than narrative-style pages.
5. Page-level schema (Article, HowTo, FAQPage, Service)
High leverage: Structured data at the page level tells extraction systems what the page is for and how to chunk it. Article, NewsArticle, BlogPosting for editorial content; HowTo for instructional; FAQPage for Q&A; Service for service descriptions tied to your LocalBusiness.
6. Recency and freshness signals
Variable: Last-updated dates, fresh internal links from new content, refreshed primary data. Heavier weight for time-sensitive sectors and for Perplexity, which weights recency more strongly than competitors.
7. Outbound links to authoritative sources
Foundation: Pages that themselves cite recognised authorities (regulators, peer-reviewed sources, official documentation) tend to be cited more than pages that cite nothing. This is the citation-of-citations effect.
Page-level schema worth implementing
Beyond the LocalBusiness and Service schema covered in the entity layer, page-level schema is what tells AI extraction systems what a specific URL is for. The high-value types:
- Article (or BlogPosting, NewsArticle): for editorial pages, with author referencing a Person node, datePublished, dateModified, and publisher referencing your Organization.
- HowTo: for step-by-step instructional pages, with step entries, plus tool and supply if applicable. Maps directly to AI Overviews for how-to queries.
- FAQPage: for FAQ sections or pages, with Question and Answer entities. Maps directly to chunk-level extraction.
- BreadcrumbList: navigational context. Helps AI systems understand where the page sits in your site architecture.
- Person: for any named author, with jobTitle, worksFor, hasCredential, and sameAs linking to LinkedIn, regulator listings, academic profiles. Important for E-E-A-T in YMYL sectors.
- Service: on service pages, referencing the parent LocalBusiness via provider, with areaServed and optionally offers.
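A sketch of what this looks like for an editorial page, with the author expressed as a Person node. All names, dates, credentials, and profile URLs are placeholders, and the @id values assume a LocalBusiness node like the one sketched in the entity layer:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How often should you service a boiler?",
  "datePublished": "2025-03-10",
  "dateModified": "2025-09-02",
  "publisher": { "@id": "https://www.example-plumbing.co.uk/#business" },
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Heating engineer",
    "worksFor": { "@id": "https://www.example-plumbing.co.uk/#business" },
    "hasCredential": {
      "@type": "EducationalOccupationalCredential",
      "name": "Gas Safe registration (placeholder credential)"
    },
    "sameAs": ["https://www.linkedin.com/in/example-jane-doe"]
  }
}
```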
Chunk-friendly content structure
Retrieval systems break pages into chunks, embed each chunk into a vector, and retrieve the most relevant chunks per query. Chunks that read as standalone units win. Chunks that depend on context several scrolls earlier lose. The structural patterns that work (a skeletal example follows the list):
- Clear H2/H3 heading hierarchy with descriptive headings (not 'Why us' but 'Why choose a Brighton-based dentist for nervous patients')
- Self-contained paragraphs that quote cleanly without context
- Definition-style opening lines for each section (the question being answered, then the answer)
- FAQ blocks with FAQPage schema, since these map directly to chunk-level extraction
- Tables for comparison data (machine-readable, easily extracted)
- Lists with parallel structure (each item answers the same implicit question)
- One topic per page where possible; long mixed-topic pages chunk badly
- Original data, statistics, or primary research (citation-worthy chunks)
- Internal links to related pages on your domain (signals topical cluster membership)
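To make the list concrete, here is a skeletal chunk-friendly section. The headings and copy are invented placeholders, not real advice:

```html
<!-- Skeletal example only; headings and copy are placeholders. -->
<h2>How often should a boiler be serviced in London?</h2>
<!-- Definition-style opener: the question is answered in the first
     sentence, so the paragraph quotes cleanly as a standalone chunk. -->
<p>Most manufacturers recommend a boiler service once a year, and annual
   servicing is usually a condition of the warranty.</p>

<h3>Boiler servicing: frequently asked questions</h3>
<!-- Each question-answer pair is a self-contained chunk; mark the block
     up with FAQPage schema so extraction maps to it directly. -->
<h4>Does a new boiler need a service in its first year?</h4>
<p>Yes. Missing the first annual service can void the manufacturer's warranty.</p>
```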
Layer 3: Earned media (third-party citations)
For competitive "best of" queries and local recommendation queries, the page the AI cites is often not yours. It is a third-party authoritative page listing several businesses, your business included. Local press, sector publications, professional body directories, regulator listings, "best of" round-ups: each of these can be the citation source for an AI answer that names your business.
Practical patterns:
1. Earn mentions in recognised local press
Highest leverage: TimeOut, regional newspapers, BBC local, sector trade press. Each mention is a citation an AI model can pull from. Higher leverage than your own content for competitive 'best of' queries.
2. Get listed in professional body or regulator directories
Strong (especially YMYL): Law Society directory, GMC register, GDC register, FCA register, ICAEW directory. AI systems treat these as high-trust sources for medical, legal, and financial sectors.
3. Sponsor or contribute to local content
Moderate: Local festivals, community publications, charity events. Often produces unlinked or lightly-linked mentions that still feed entity recognition and earned-media citation.
4. Get into curated 'best of' content
Strong for hospitality, beauty, retail: Round-up articles in regional or vertical publications. Pitch directly; provide differentiation; provide a quote or original data point for the writer. Each inclusion is a citation source.
How each major assistant retrieves
Different assistants are wired into different stacks. Knowing which stack each uses tells you which signals you need to invest in for that surface.
ChatGPT (OpenAI)
ChatGPT search (formerly SearchGPT). Combines OpenAI's own index with third-party providers including Bing. Crawlers: GPTBot, OAI-SearchBot, ChatGPT-User.
Gemini, AI Overviews, AI Mode (Google)
Google's index plus the Knowledge Graph plus Maps API. The same underlying signals power Gemini, AI Overviews on the SERP, and AI Mode in Search.
Perplexity
Multi-source: PerplexityBot crawl plus multiple search APIs. Always cites sources. Strong on recency and on long-form citation-worthy pages.
Claude (Anthropic)
Brave Search for live web retrieval, plus the pre-trained corpus. ClaudeBot crawls; Claude-Web handles user-triggered fetches.
Two further surfaces worth knowing about. Microsoft Copilot (which absorbed Bing Chat in 2024) is also Bing-grounded, so the same Bing Webmaster Tools and Bing Places work that helps ChatGPT search visibility helps Copilot. Apple Intelligence in iOS surfaces businesses through Apple's own ecosystem (Apple Maps, Apple Business Connect, Spotlight) and routes some queries to ChatGPT when the user opts in, which means Apple Business Connect completeness is increasingly worth the same effort as GBP for iOS-heavy audiences.
What this means for local-SEO investment
Bing visibility helps ChatGPT and Copilot
- •Bing Webmaster Tools: claim and verify your domain; submit your sitemap
- •Consider IndexNow for faster Bing indexation of new or updated pages
- •Bing Places listing aligned with your GBP
- •Allow GPTBot, OAI-SearchBot, ChatGPT-User in robots.txt
- •Your pages get cited via Bing's index; your business gets named via Bing Places + Knowledge Graph crossover
Knowledge Graph richness drives Gemini, AI Overviews, AI Mode
- •Comprehensive GBP (the dominant Knowledge Graph signal for local businesses)
- •Wikidata Q-number with sameAs links to your domain
- •Schema.org Organization or LocalBusiness on your homepage
- •Page-level schema (Article, HowTo, FAQPage, Service) on relevant URLs
- •Mentions on authoritative co-references (Wikipedia, news, regulators)
- •Allow Google-Extended in robots.txt (a separate token from Googlebot; it controls Gemini training and grounding, while AI Overviews and AI Mode inclusion follow standard Search indexing)
Perplexity rewards depth, recency, and citation-worthy pages
- •PerplexityBot and Perplexity-User allowed in robots.txt
- •Updated, dated content (Perplexity weights freshness heavily)
- •Long-form content with clear section structure and original data
- •Page-level schema and Person markup for authors
- •Visibility across the search APIs Perplexity draws from (Brave, others; the exact mix has shifted over time)
Claude weights authority heavily across both layers
- •Authoritative third-party mentions (.gov, .edu, .org, recognised press)
- •Schema.org Person markup for authors and experts
- •Clear E-E-A-T signals (author bylines, credentials, dates)
- •ClaudeBot and anthropic-ai allowed in robots.txt
- •Brave Search visibility (Claude uses Brave for live retrieval)
AI Mode and query fan-out
Google's AI Mode (a more conversational AI search experience launched in 2025 and now widely available) is structurally different from AI Overviews. The key mechanic is query fan-out: a single user question gets decomposed into multiple sub-queries, each retrieved against its own most relevant sources, with the synthesis layer assembling a multi-source answer.
A query like "I run a small Italian restaurant in Hackney and I want more weekday lunch covers, what should I do?" might fan out into sub-queries on local marketing tactics, lunch-trade benchmarks for hospitality, restaurant SEO basics, common mistakes operators make, and what specific tools or services exist. Each sub-query has different best sources. The user sees one synthesised answer that draws from all of them.
Surface-by-surface signal mix
For a quick reference, here is which signals dominate which surface for local-intent queries.
Knowledge Panel
- •Triggered by: brand or specific entity queries
- •Primary signal: entity (Knowledge Graph node, GBP, Wikidata)
- •Pages cited: rare; the panel is the entity
- •What to invest in: entity convergence, sameAs, GBP completeness
Map Pack
- •Triggered by: local-intent queries with clear category and location
- •Primary signal: entity (GBP, reviews, proximity, NAP consistency)
- •Pages cited: not directly; the listing is the result
- •What to invest in: GBP, reviews, citations, location-specific content
AI Overview
- •Triggered by: most queries with a clear answer Google can synthesise
- •Primary signal: hybrid (entity for 'best of' queries; pages for informational)
- •Pages cited: typically 3 to 7, mix of authoritative third-party and original sources
- •What to invest in: both layers, plus earned media for competitive queries
AI Mode
- •Triggered by: conversational, exploratory, multi-part queries
- •Primary signal: query fan-out across multiple sub-queries; mix of pages and entities
- •Pages cited: wider set than AI Overview; rewards topical depth across a cluster
- •What to invest in: page library breadth, original content, page-level schema
AI assistant answers (ChatGPT, Perplexity, Claude)
- •Triggered by: any user query in the assistant
- •Primary signal: depends on assistant; ChatGPT and Copilot lean on Bing; Gemini on Google KG; Perplexity and Claude on diverse retrieval plus pre-training
- •Pages cited: Perplexity and ChatGPT search show citations; Claude often does
- •What to invest in: visibility across Bing, Brave, and Google indexes; schema; chunk-friendly structure
Featured Snippets / Rich Results
- •Triggered by: queries with a clear factual answer
- •Primary signal: page content (heavily structure and schema dependent)
- •Pages cited: a single source extracted directly
- •What to invest in: page-level schema (HowTo, FAQPage), clear question-answer structure
Crawler access: the audit nobody runs
AI crawlers obey robots.txt. Many sites accidentally block them via wildcard rules, agency-installed defaults, or templates inherited from years ago when AI crawlers did not exist. Audit your robots.txt and verify each of these is intentionally allowed (or intentionally blocked); a worked example follows the crawler list:
OpenAI / ChatGPT
- •GPTBot: training crawler
- •OAI-SearchBot: search crawler for ChatGPT search
- •ChatGPT-User: user-triggered fetches when ChatGPT browses on a user's behalf
Google
- •Googlebot: standard search crawler
- •Google-Extended: separate token controlling whether your content is used to train and ground Gemini; it does not affect standard Search indexing
- •Blocking Google-Extended while allowing Googlebot opts out of Gemini training without affecting Search (AI Overviews and AI Mode follow normal Search indexing)
Anthropic / Claude
- •ClaudeBot: primary crawler (consolidated since 2024)
- •anthropic-ai: legacy crawler name; some operators still match it for safety
- •Claude-Web: user-triggered fetches
Apple, Perplexity, others
- •Applebot-Extended: Apple's separate AI-training token (paired with the standard Applebot)
- •PerplexityBot and Perplexity-User: indexer plus user-triggered fetches
- •CCBot (Common Crawl): widely used as a training-data feeder
- •Bytespider, Meta-ExternalAgent: ByteDance and Meta training crawlers
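Pulling those together, here is a sketch of an explicit robots.txt policy. The grouping and the CCBot block are illustrative choices to adapt, not recommendations:

```txt
# Explicitly allow the AI crawlers you want. robots.txt allows by default,
# but explicit rules survive agency templates that block with wildcards.
User-agent: GPTBot
User-agent: OAI-SearchBot
User-agent: ChatGPT-User
User-agent: ClaudeBot
User-agent: Claude-Web
User-agent: PerplexityBot
User-agent: Perplexity-User
User-agent: Google-Extended
User-agent: Applebot-Extended
Allow: /

# An intentional block, if you choose to opt out of Common Crawl training data:
User-agent: CCBot
Disallow: /
```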
An emerging proposal worth knowing about is llms.txt, a Markdown file at your domain root describing your site's content for LLM consumers. Adoption through 2025 has been gradual: a growing share of publishers have published one, and some AI products have added experimental support, but it is not yet a universally honoured standard the way robots.txt is. Adding one is cheap and the signal-cost is zero if it never gets used.
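If you do add one, a minimal llms.txt following the llmstxt.org proposal looks like this; the business and every URL below are placeholders:

```markdown
# ABC Plumbing

> Family-run plumbing and heating firm serving Camden and north London.

## Guides

- [How often should you service a boiler?](https://www.example-plumbing.co.uk/guides/boiler-servicing): annual servicing guide
- [How to bleed a radiator](https://www.example-plumbing.co.uk/guides/bleed-a-radiator): step-by-step how-to

## Services

- [Services and coverage area](https://www.example-plumbing.co.uk/services): what we do and where
```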
Wikidata: the open Knowledge Graph
Wikidata is the structured-data sibling of Wikipedia. It powers parts of Google's Knowledge Graph, feeds many AI systems' training data, and has a far lower notability bar than Wikipedia. Most established local businesses, especially those with a few years of trading history and any media coverage, qualify for an entry.
Practical approach:
- Search Wikidata for your business. If an entry exists, claim it (or have a Wikidata-active editor add the missing properties on your behalf).
- If no entry exists and you meet notability requirements (some independent sources covering the business), create one. Editing your own entry is technically allowed but discouraged. Better to provide editors with sourced facts and let them create the entry.
- Populate key properties: P31 (instance of, e.g. Q4830453 for a generic business, or a more specific subtype where appropriate), P17 (country), P159 (headquarters location), P856 (official website), and any others relevant to your sector.
- Add the resulting Q-number URL (e.g. https://www.wikidata.org/wiki/Q123456789) to your domain's sameAs array in your Organization schema.
The 9 GEO levers, in priority order
For a local business covering both layers, this is the order of impact most consistently. The top items multiply the effect of items below.
1. Comprehensive GBP plus entity-side schema (sameAs convergence)
Highest: Your business as an entity Google can identify with high confidence. LocalBusiness, Service, Organization with sameAs to Wikidata, GBP, social profiles, regulator pages.
2. Page library covering the real questions in your sector
Highest: Service pages, FAQ pages, guide content answering 'how often', 'what does it cost', 'when do I need', 'what to look for' style queries. Page-level schema (Article, HowTo, FAQPage). The page-citation source layer.
3. Wikidata entity created and maintained
High: Open Knowledge Graph entry with structured properties and sameAs back to your domain.
4. Reviews with specific service phrasing
High: AI assistants extract specific phrases from reviews. Encourage reviews mentioning the actual service or attribute, not generic praise. Feeds both entity recommendation and AI-Overview citation context.
5. Earned mentions in authoritative third-party sources
High: Recognised press, regulators, professional bodies. Each authoritative mention is a citation source the AI can pull from. Often the difference for competitive 'best of' queries.
6. Crawler access audit (robots.txt for all major AI bots)
Foundation: Confirm none of the AI crawlers are accidentally blocked. Five-minute audit, often material impact.
7. Page-level Person schema for any expert authors
Strong (especially YMYL): Author bylines with credentials, qualifications, sameAs to professional profiles. Page-level E-E-A-T signal that translates directly to AI source-trust scoring.
8. Chunk-friendly content structure
Foundation: Clear headings, self-contained paragraphs, FAQ blocks, tables, parallel lists. Makes your content extractable at the section level for both AI Overviews and AI Mode.
9. Bing Places, Apple Business Connect, plus traditional SEO foundations
Floor: Bing visibility (ChatGPT search and Microsoft Copilot are Bing-grounded), Apple Business Connect (Apple Intelligence + Apple Maps), plus the underlying SEO work in our other guides. AI search rests on solid traditional SEO across the major indexes.
YMYL: where AI plays it safer
Your Money or Your Life queries (medical, legal, financial, safety) are handled more conservatively by every major assistant. Models default to authoritative sources, surface caveats more readily, and decline to recommend specific providers more often than in lower-risk sectors.
Sector heuristics
Patterns we have observed across sectors for what AI assistants weight most heavily, with the entity-vs-page balance broken out.
Hospitality / beauty / retail
- •Entity-heavy: review sentiment dominant, GBP completeness drives Map Pack
- •Pages: 'best of' round-ups in regional press are the main citation source
- •Specific phrases in reviews carry weight ('best brunch in Hackney', 'great for groups')
- •Wikidata rarely matters for small independents
Legal / medical / financial (YMYL)
- •Entity: authoritative sources dominate (regulator listings, professional bodies)
- •Pages: original content with expert authorship; Person schema critical
- •AI is more cautious recommending specific providers; 'see a qualified [profession]' is a common framing
- •Wikidata and Wikipedia mentions matter for established firms
Trades / home services / B2B
- •Entity: reviews plus specific service mentions plus aggregator sites
- •Pages: how-to and informational guides (e.g. 'how often boilers need servicing') get cited for AI Overviews
- •Local press and trade publication mentions matter for bigger trades businesses
- •Visual content matters less than for hospitality
Measuring AI visibility
AI search has no single "rank" to track. You measure visibility through structured query testing across surfaces and assistants, looking at both layers: are you the entity recommended, and are your pages cited.
The query test set
Cover the realistic phrasing space across surfaces (a template-expansion sketch in code follows the list):
- Brand: '[your business name]' (Knowledge Panel)
- Local recommendation: 'best [category] in [city]' and '[category] near me [city]'
- Local recommendation: 'recommend a [category] in [city]' and 'where should I go for [service] near [location]?'
- Specific need: 'I need a [category] who can [specific need]'
- Sub-area: 'a good [category] in [neighbourhood]'
- Comparison: 'compare [category A] to [category B]'
- Filter: 'cheap or affordable [category] near [area]', 'highly-rated [category] in [city]'
- Attribute: '[category] with [specific attribute] in [city]'
- Time-sensitive: 'open now [category] near me [city]'
- Conversational AI Mode: 'I have a [specific situation] and need help, what should I do?'
- Informational: 'what does a [category] cost in [region]?', 'how often should I [task]?'
- How-to: 'how do I [common task]?'
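A small sketch of that expansion, assuming you keep the templates and slot values in code. Both lists below are illustrative; swap in your own category, locations, and services:

```python
# Expand query templates into a concrete test set. Slot values are examples.
from itertools import product

slots = {
    "category": ["plumber"],
    "city": ["London"],
    "neighbourhood": ["Camden"],
    "attribute": ["emergency call-out"],
    "task": ["bleed a radiator"],
}

templates = [
    "best {category} in {city}",
    "recommend a {category} in {city}",
    "a good {category} in {neighbourhood}",
    "{category} with {attribute} in {city}",
    "what does a {category} cost in {city}?",
    "how do I {task}?",
]

def expand(template: str) -> list[str]:
    # Fill the template with every combination of the slot values it uses.
    names = [n for n in slots if "{" + n + "}" in template]
    combos = product(*(slots[n] for n in names))
    return [template.format(**dict(zip(names, c))) for c in combos]

query_set = [q for t in templates for q in expand(t)]
print(f"{len(query_set)} queries:", *query_set, sep="\n- ")
```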
Metrics to track
1. Entity inclusion rate
Primary: Percentage of relevant queries where you appear in the answer as a recommended business. Measure per assistant and per surface (Map Pack, AI Overview, AI Mode, ChatGPT, Perplexity, Claude). Track month over month.
2. Page citation rate
Primary: Where the assistant surfaces source links, are your pages cited? This is a different metric from entity inclusion and can move independently. Track both.
3. Earned-media citation rate
Important: Of the third-party pages cited in answers about your sector or area, which mention you and which do not? Often the highest-leverage gap to close for competitive queries.
4. Sentiment in mentions
Important: What language is used about you? Specific positive phrasing is what conversion happens on; generic neutral mentions do not move the needle.
5. Competitive set
Important: Who else is mentioned alongside you, and who is mentioned but you are not? Often reveals competitor strategy or content gaps.
6. Share of voice across the query set
Strategic: Across your full query set, what percentage of total mentions are you? Track relative to your top 3 to 5 competitors over time.
Running this manually across many queries weekly is impractical. Our AI Visibility Tracker automates this so you can scan dozens of queries weekly across all major assistants without hand-running each one.
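If you do run a manual pass first, the bookkeeping is simple. A sketch, assuming you record each test run as a (query, assistant, businesses-mentioned) row; the rows and names below are invented:

```python
# Compute entity inclusion rate and share of voice from hand-recorded
# test runs. The row format is invented; adapt to however you log results.
from collections import Counter

runs = [
    ("best plumber Camden", "ChatGPT",    ["ABC Plumbing", "North London Gas"]),
    ("best plumber Camden", "Perplexity", ["North London Gas"]),
    ("boiler service cost London", "Gemini", ["ABC Plumbing"]),
]

US = "ABC Plumbing"

# Entity inclusion rate: share of runs where we are named at all.
inclusion = sum(US in mentioned for _, _, mentioned in runs) / len(runs)

# Share of voice: our mentions as a fraction of all mentions in the set.
mentions = Counter(name for _, _, mentioned in runs for name in mentioned)
share = mentions[US] / sum(mentions.values())

print(f"Entity inclusion rate: {inclusion:.0%}")  # 67% on this toy data
print(f"Share of voice: {share:.0%}")             # 50% on this toy data
```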
The 90-day GEO programme
A practical phased plan for a local business. Each phase covers both the entity layer and the page layer.
1. Days 1 to 30: foundations on both layers
Audit robots.txt for all major AI crawlers. Implement comprehensive schema.org markup: LocalBusiness on the homepage, Service on each service page, FAQPage on FAQ sections, Article and Person on any editorial content. Validate every JSON-LD block. Confirm Bing Places and Apple Business Connect both match your GBP. Audit your existing page library for the obvious gaps in informational and how-to queries.
2. Days 31 to 60: entity convergence and page depth
Create or claim your Wikidata Q-number. Add sameAs linking your domain to Wikidata, GBP, social profiles, Companies House, regulator pages where relevant. Publish or refresh 3 to 5 citation-worthy pages: original data, expert commentary, primary research, or genuinely useful how-to content. Add Person schema for the authors. Aim for 1 to 2 earned mentions in authoritative third-party sources (recognised press, professional bodies, sector publications).
3. Days 61 to 90: measure across both layers and iterate
Run your structured query set across ChatGPT, Gemini, AI Overviews, AI Mode, Perplexity, Claude, and Microsoft Copilot. For each: note entity inclusion rate, page citation rate, earned-media citation rate, sentiment, and competitive set. Identify gaps (where competitors appear and you do not, on which surface, for which query type) and target the underlying signal. Re-test monthly.
Reference numbers
- 30 to 90 days: realistic signal lag between deploying a schema or content change and it surfacing consistently in AI answers.
- 3 layers to optimise across: entity (Knowledge Graph, GBP, Wikidata, schema), page (your URLs as citation sources), and earned media (third-party authoritative pages).
- 10+ AI crawlers to audit: GPTBot, OAI-SearchBot, ChatGPT-User, ClaudeBot, anthropic-ai, Claude-Web, PerplexityBot, Perplexity-User, Google-Extended, Applebot-Extended, CCBot.
- Monthly re-test cadence: models update their data and answers frequently; re-test the same query set every 30 days, across surfaces.
Where this is going
AI search is not replacing Google overnight, but it is eating top-of-funnel click-through and expanding the surfaces a local business has to be visible on. The businesses that win in three years are the ones whose entity identity converges cleanly across the Knowledge Graph, Wikidata, GBP, and their domain; whose page library covers the real questions in their sector with credible original content; and who maintain solid traditional SEO underneath. The ones that do not get cited slowly disappear, even if their organic rankings hold, because the user never sees the SERP: they already got the answer.