Advanced · 11 min read

AI search visibility (GEO)

How ChatGPT, Perplexity, Gemini and Claude pick businesses to mention, and how you get into those answers. Generative Engine Optimisation, explained.

AI assistants like ChatGPT, Perplexity, Claude, Gemini, and Google's AI Overviews are already the first place a meaningful share of users go for local recommendations. "Best dentist in Brighton". "Where can I get my watch repaired in central London?". Whether you're mentioned in those answers, and how, is decided by a different set of signals than traditional SEO. This is GEO: Generative Engine Optimisation.

Why GEO is different from SEO

Traditional SEO ranks pages. AI assistants don't show pages. They generate answers and optionally cite sources. The mechanics:

  1. User asks a natural-language question

    Often longer and more contextual than a typed Google query. "I need a dentist in Brighton open on Saturdays who takes nervous patients." A model can parse all three constraints at once.

  2. The model runs underlying searches

    The model interprets the user's question into one or more search queries, run against web indexes (Bing, Google, Brave, internal indexes), Maps APIs, news feeds, and partner data sources.

  3. The model synthesises an answer

    It picks which sources to cite, which businesses to name, and how to phrase it. The selection is partly about what's most relevant, partly about what's most reliably summarisable.

  4. The user sees a paragraph or two

    Sometimes with citation links, sometimes without. The user typically does not click through to the SERP. They've already got their answer.
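Step 2 above can be sketched in code. This is a deliberately simplified, template-based sketch: real assistants use the model itself to translate the question, and the function name and templates here are illustrative assumptions, not anyone's actual pipeline.

```python
# Sketch of step 2: expanding one natural-language need into several
# underlying search-engine queries. Templates are illustrative only.
def to_search_queries(category, city, constraints):
    """Turn a parsed question into a set of plausible search queries."""
    base = f"{category} {city}"
    queries = [base, f"best {base}"]
    for constraint in constraints:  # e.g. "open Saturdays"
        queries.append(f"{base} {constraint}")
    return queries

queries = to_search_queries(
    "dentist", "Brighton",
    ["open Saturdays", "good with nervous patients"])
for q in queries:
    print(q)
```

The practical takeaway: you need to rank not just for "dentist Brighton" but for the constraint-qualified variants the model fans out into.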

Two things follow from this. First, "ranking #1" doesn't exist for these queries. You're either in the answer or you're not. Second, the model decides what to mention based on what it can read about you across the web, plus its underlying-search results. Your job is to make sure both layers say something favourable.

Traditional SEO is about being found. GEO is about being cited. The mechanics differ, but the foundations overlap heavily.

The most useful framing we've found:

SEO vs GEO: the key differences

Traditional SEO

  • User sees a list of links
  • Position 1 to 10 is the visibility ladder
  • Traffic flows to your website
  • Click-through is the success metric
  • Rankings change slowly, on a daily-to-weekly cadence

GEO (AI search)

  • User sees a paragraph or two
  • You're cited or you're not
  • Traffic may not reach your website at all
  • Inclusion in the answer is the success metric
  • Answers can change daily to monthly as models and indexes update

What AI assistants actually use to pick businesses

Patterns we've observed, replicated, and stress-tested across the major assistants:

  1. Underlying search results dominate (foundational)

    ChatGPT (with browsing), Perplexity, Gemini, and Claude with web access all run real-time searches. If you don't appear in the top 10 for the query the model translated the user's question into, you're unlikely to be mentioned.

  2. Reviews and review sentiment (disproportionate weight)

    Models read review snippets and summarise sentiment. A business with 200 reviews averaging 4.8, with phrases like 'incredibly professional' and 'fixed it on the first visit', is dramatically more likely to be mentioned than one with 50 reviews at 4.3.

  3. Mentions in third-party 'best of' content (strong)

    If you're listed in TimeOut's 'best Italian restaurants in Soho', that piece of content becomes a reference the model can pull from. Earned media is GEO leverage.

  4. Wikipedia, Wikidata, Crunchbase (variable)

    Matters for larger businesses, less so for small local ones. Larger entities benefit from a Wikipedia and Wikidata presence for entity-resolution reasons.

  5. Schema markup (helpful)

    Clean LocalBusiness JSON-LD on your site makes it easier for crawlers (including AI training crawlers) to extract your details reliably.

  6. Your own website's content (foundational)

    AI assistants pull from web pages they can read. Pages with clear structure, headings, FAQs, and location information are easier to summarise from.
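To make signal 5 concrete, here is the shape of a LocalBusiness JSON-LD block, built as a Python dict and serialised. Every value is a placeholder; swap in your real details before publishing, and embed the output in a `<script type="application/ld+json">` tag on the page.

```python
import json

# Illustrative LocalBusiness JSON-LD. All values are placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Dental Practice",
    "url": "https://example.com",
    "telephone": "+44 1273 000000",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "Brighton",
        "postalCode": "BN1 1AA",
        "addressCountry": "GB",
    },
    "openingHours": ["Mo-Fr 09:00-17:30", "Sa 09:00-13:00"],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "200",
    },
}

print(json.dumps(local_business, indent=2))
```

Note how the structured fields (opening hours, rating, locality) line up with exactly the constraints users put into AI queries: "open on Saturdays", "highly rated", "in Brighton".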

The query patterns: how to test where you appear

Don't just check "best [your category] in [your city]". Real users phrase queries in a hundred different ways. Test across these patterns:

  • Recommend a [category] in [city]
  • Where should I go for [specific service] near [location]?
  • I need a [category] who can [specific need]
  • What's a good [category] in [neighbourhood]?
  • Compare [competitor] to alternatives
  • Best [category] for [type of customer] in [city]
  • Cheap / affordable [category] near [area]
  • Highly-rated [category] in [city]
  • [Category] with [specific attribute] in [city]
  • Open now [category] near me [city]

Run these across ChatGPT, Perplexity, Gemini, and Claude. Note where you appear, what's said about you, and which competitors are mentioned that you didn't expect. The competitive map in AI search is sometimes very different from the Google competitive map. Our AI Visibility Tracker automates this so you can scan dozens of queries weekly without hand-running each one.
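The pattern list above is easy to turn into a reusable query set. A minimal sketch, assuming template strings and a hand-filled dict of business facts (the example values are placeholders):

```python
# Generate a test query set from the patterns above.
# Placeholder values: replace with your own business facts.
PATTERNS = [
    "recommend a {category} in {city}",
    "where should I go for {service} near {city}?",
    "what's a good {category} in {neighbourhood}?",
    "best {category} for {customer} in {city}",
    "highly-rated {category} in {city}",
]

facts = {
    "category": "dentist",
    "city": "Brighton",
    "service": "a dental check-up",
    "neighbourhood": "Kemptown",
    "customer": "nervous patients",
}

queries = [pattern.format(**facts) for pattern in PATTERNS]
for q in queries:
    print(q)
```

Paste each generated query into each assistant and log the result; the same template set re-run monthly gives you a like-for-like comparison over time.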

What to actually do

  1. Make sure your traditional SEO is strong

    AI search uses web search as a foundation. Without organic visibility for your target queries, GEO is uphill. See the complete guide to local SEO for the foundations.

  2. Build review depth

    100+ reviews with specific service mentions is the realistic floor for being quoted. See how to get more Google reviews. Encourage specific reviews (mentioning the service, not just generic praise) because those are what AI assistants quote.

  3. Earn mentions in third-party 'best of' content

    Pitch local press and round-up writers. Sponsor local content where it makes sense. Each earned mention is a citation an AI model can pull from. Aim for 3 to 5 high-quality mentions per year, in publications a model might trust.

  4. Add comprehensive schema markup

    LocalBusiness, Service, FAQ, Review, and Product where applicable. The clearer your data is, the more accurately you'll be summarised. Models prefer structured sources over free-text ones because they're easier to extract from reliably.

  5. Write clear, direct service pages

    Pages structured around real questions with clear answers ("How long does a [service] take?", "What does it cost?", "Do you offer evening appointments?") map directly onto how AI models summarise. Pages full of generic marketing copy do not.

  6. Track and iterate

    AI assistants change their answers surprisingly often. Re-test monthly. Compare to where you rank in traditional search to see which signal is dominant for your sector. Some sectors (legal, medical) lean heavily on authoritative sources. Others (restaurants, beauty) lean on review sentiment.
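Step 6 only works if you keep a record. A minimal sketch of a visibility log, assuming you paste results in by hand after each monthly test run (the file name and field layout are our own convention, not a standard):

```python
import csv
from datetime import date

# Minimal visibility log: one row per (query, assistant) test,
# recording whether you were mentioned and which competitors were.
LOG = "ai_visibility_log.csv"  # file name is an assumption

def record(query, assistant, mentioned, competitors):
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            date.today().isoformat(), assistant, query,
            "yes" if mentioned else "no", ";".join(competitors)])

# Example entries (placeholder competitor names):
record("best dentist in Brighton", "ChatGPT", True, ["Acme Dental"])
record("best dentist in Brighton", "Perplexity", False,
       ["Acme Dental", "Smile Co"])

with open(LOG) as f:
    rows = list(csv.reader(f))
mention_rate = sum(r[3] == "yes" for r in rows) / len(rows)
print(f"mentioned in {mention_rate:.0%} of answers this run")
```

Diffing this file month over month shows which changes actually moved your inclusion rate, and which competitors are gaining.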


Sector-by-sector heuristics

We've seen consistent patterns across sectors for what AI assistants weight most heavily. Useful for prioritising effort.

Restaurant / hospitality / beauty

  • Review sentiment is dominant
  • Photos and visual content amplify reach where assistants include images in answers
  • Specific phrases in reviews carry weight ('best brunch in Hackney', 'great for groups')
  • Wikipedia rarely matters

Legal / medical / financial

  • Authoritative sources dominate (.gov, .org, regulator listings)
  • Wikipedia and trade-body mentions matter
  • Reviews matter but less than authority
  • AI assistants tend to be more cautious about recommending in YMYL ('Your Money or Your Life') areas

Trades / home services / retail

  • Reviews + specific service mentions
  • Mentions in trade publications and aggregator sites (Checkatrade, Trustpilot)
  • Local press for bigger trades businesses
  • Visual content matters less than for hospitality

Where this is going

AI search isn't replacing Google overnight, but it's eating click-through to websites at the top of the funnel. The businesses that win over the next three years are the ones that get cited in answers AND rank in traditional search. The ones that don't get cited slowly disappear, even if their organic rankings hold, because the user never sees the SERP; they simply get an answer.

  • ~15-25% of users: estimated to use AI assistants for at least some local discovery (and growing)
  • 60-90 days of signal lag: a realistic delay between making a change and seeing it surface consistently in AI answers
  • Monthly re-test cadence: models update their data and answers frequently, so re-test the same query set every 30 days

Start tracking your real rankings today

See where you actually rank on Google Maps, not where Google tells you. Get started free with 250 credits.