Advanced · 20 min read

Google's local search algorithm and ranking factors (May 2026)

The deepest current treatment of how Google's local algorithm works. Relevance, distance, prominence, behavioural signals, every named update from Pigeon to the May 2026 core refresh, and the AI Overview retrieval layer.

Google's local search algorithm is the system that turns a single user-typed query into a ranked, filtered, and personalised list of nearby businesses, usually inside 200ms. It is the most operationally important algorithm any local business needs to understand, and it has changed more between January 2024 and May 2026 than in the eight years before that. This is the deepest current treatment of how the algorithm actually works, the signal stack it ranks on, and what every major update from Pigeon through to the May 2026 core refresh changed in practice.

The mental model: three public pillars, one silent fourth

Google's own help documentation lists three inputs the local algorithm uses to rank results: relevance, distance, and prominence. That framing is real and load-bearing, and we will treat each one in depth below. What it understates, especially in 2026, is the role of a fourth class of signals Google reads continuously but does not name in the public help: behavioural data from the searches and interactions users have with listings.

Relevance (~35-40% of ranking weight)

How well your listing matches the query Google has just parsed. The largest single signal here is your primary GBP category. Then services, attributes, products, name, description, website content, reviews content, and schema markup, roughly in that order.

Distance (~25-30% of ranking weight)

Where the searcher is, relative to your business pin or service-area centroid. Weighted heavily on mobile (precise GPS), less precisely on desktop (IP geolocation). Modified by explicit geo modifiers in the query and by Google's local intent classifier.

Prominence (~25-30% of ranking weight)

How well-known your business is. Reviews are the dominant signal in this pillar by some margin. Then backlinks, brand mentions, citation footprint, news coverage, and Knowledge Graph completeness. This is the pillar that compounds slowest but compounds hardest.
Google's three publicly-stated pillars. Weights are approximate and based on our aggregate observational work across customer portfolios, not an official Google statement.

What changed in 2024 to May 2026

If your last serious look at the local algorithm was in 2022 or 2023, these are the most important shifts to internalise:

  • AI Mode and AI Overviews retrieve from the same local index. When a query carries local intent and Google's AI surfaces respond, they pull candidates from the same Map Pack index and re-rank them with an entity overlay. The Map Pack winners are not a separate problem from the AI Overview citations; they are the same candidate set scored slightly differently.
  • Multi-vector retrieval (MUVERA-style) is in production for local. Candidate selection now combines lexical match (traditional keyword matching), entity match (Knowledge Graph alignment), and semantic match (vector similarity to query embedding). A listing that wins on only one of the three is rarely a candidate against listings that win on two or three.
  • Query fan-out splits user queries. A single typed query like "best italian restaurant for date night" is internally decomposed into multiple sub-queries (italian restaurant near me, romantic restaurant, dinner, etc.). The candidate set is the union; the ranking weights the intersection. Listings that match more decomposed sub-queries win.
  • Schema.org v30.0 (March 2026) added entity types that matter for local. Credential, OnlineMarketplace, ConferenceEvent, and equivalence annotations to GS1 and Dublin Core gave Google more direct typing for regulated professions and specialised commerce.
  • Anthropic, OpenAI, and Perplexity now retrieve local data with dedicated pathways. Claude, ChatGPT, and Perplexity each call out to local sources (often via Google or Bing's local index) and re-rank with their own preferences. Whether you appear in their answers is correlated with Map Pack performance but not identical.
  • Photo quality is now scored. Google's image classifier weights sharpness, lighting, scene relevance, and freshness. A listing with 12 high-quality recent photos consistently outperforms one with 60 mixed-quality photos in our before-after testing.
  • Behavioural signal weight has grown materially. Compared to 2022 internal correlation work, behavioural signals appear to carry roughly twice the relative weight they did then. This is the single biggest practical shift in the algorithm.
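The union-and-intersection behaviour of query fan-out can be sketched in a few lines. The sub-queries, listings, and scoring rule below are invented for illustration; they are not Google's actual mechanics, only a toy model of "union forms the candidate set, intersection wins the ranking":

```python
# Toy sketch of query fan-out scoring: the candidate set is the union of
# sub-query matches, and listings matching more sub-queries score higher.
# Sub-queries and listings here are invented for illustration.

SUB_QUERIES = {"italian restaurant near me", "romantic restaurant", "dinner"}

LISTINGS = {
    "Trattoria Roma": {"italian restaurant near me", "romantic restaurant", "dinner"},
    "Luigi's Pizza":  {"italian restaurant near me", "dinner"},
    "Steak Corner":   {"dinner"},
}

def fan_out_rank(listings, sub_queries):
    # Union: any listing matching at least one sub-query is a candidate.
    candidates = {name: matched & sub_queries
                  for name, matched in listings.items()
                  if matched & sub_queries}
    # Intersection weighting: score by the fraction of sub-queries matched.
    return sorted(candidates,
                  key=lambda n: len(candidates[n]) / len(sub_queries),
                  reverse=True)

print(fan_out_rank(LISTINGS, SUB_QUERIES))
```

The listing that matches all three decomposed sub-queries ranks first, which is the practical reason broad-but-honest category, service, and attribute coverage wins fan-out queries.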

How a local query becomes a result

The end-to-end flow from typed query to ranked Map Pack runs through seven stages. The total wall-clock time on a warm cache is under 200ms, but inside that window the system is doing significant work:

  1. Query parsing and tokenisation

    The user query is tokenised, entities are extracted (brand names, locations, attribute words), and a normalised form is produced. "best italian rest. near me open now" becomes a structured representation: category=italian restaurant, modifier=best, time-constraint=open now, geo-anchor=user location.
  2. Local intent classification

    A binary classifier decides whether this query should surface local results at all. The classifier is conservative; queries like "Italian recipes" do not trigger a Map Pack, but "Italian food" usually does. The output also includes a local-intent strength score that influences whether the Map Pack appears, how many results, and whether AI Overviews are offered alongside.
  3. Geo-anchor selection

    Where to centre the search radius. On mobile this is the user's GPS coordinates. On desktop it is IP-based (less precise, typically accurate to city level). An explicit geo modifier in the query ("italian restaurant in Manchester") overrides both with the named location's centroid.
  4. Multi-vector candidate retrieval

    Candidates are pulled through three parallel retrieval paths: lexical (keyword matching against listing fields), entity (matching against the Knowledge Graph for businesses of the right type within the radius), and semantic (vector similarity between the query embedding and listing embeddings). The union forms the candidate set; the intersection earns a candidate-quality boost.
  5. Multi-signal ranking

    The candidate set is scored against the four pillars (relevance, distance, prominence, behavioural). The contribution of each pillar to the final score is query-dependent: short-distance queries weight distance more, long-tail specialist queries weight relevance more.
  6. Filters and de-duplication

    The Possum filter de-duplicates listings sharing addresses or sharing very similar names and categories within close proximity. The spam filter removes obviously fake or policy-violating listings. The quality filter demotes incomplete or stale listings. Personalisation then re-orders based on the searcher's history.
  7. Surface routing

    The top results are placed into one of several surfaces: the SERP Local Pack (typically three results), the Local Finder (when "more places" is clicked), the Maps app, an AI Overview with local intent, or AI Mode's conversational response. The ranking is similar across surfaces but not identical; each surface has its own re-ranking step.
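The first two stages above can be made concrete with a rough sketch. The category vocabulary, field names, and matching rules here are toy assumptions standing in for a far larger production system:

```python
# Minimal sketch of stages 1-2 of the pipeline described above: query
# normalisation into a structured representation, then a conservative
# local-intent check. The vocabulary and rules are illustrative, not
# Google's implementation.

CATEGORIES = {"italian restaurant", "plumber"}

def parse_query(q):
    """Stage 1: tokenise and normalise into a structured representation."""
    q = q.lower().replace("rest.", "restaurant")
    parsed = {"modifier": None, "time_constraint": None,
              "geo_anchor": "user location"}
    if "best" in q:
        parsed["modifier"] = "best"
    if "open now" in q:
        parsed["time_constraint"] = "open now"
    for cat in CATEGORIES:
        if cat in q:
            parsed["category"] = cat
    return parsed

def has_local_intent(parsed):
    """Stage 2: a conservative binary classifier stand-in.

    No recognised business category means no Map Pack."""
    return "category" in parsed

parsed = parse_query("best italian rest. near me open now")
print(parsed)
print(has_local_intent(parsed))
```

Even this toy version shows why the structured GBP fields matter: the parser is matching against a closed category vocabulary, and a listing that is not typed into that vocabulary never enters the candidate set.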

The ranking factor stack

Pulling together a decade of independent ranking-factor surveys, the observational pattern across our own customer base, and Google's own documentation, the approximate weight of each signal category on Map Pack ranking in May 2026 looks like this:

  1. Google Business Profile signals (~30%)

    Primary and secondary categories, services and products, attributes, completeness, photo depth and freshness, hours, Q&A activity, posts cadence. Primary category alone is the largest single field inside this category, and inside local SEO generally.

  2. Reviews (~17%)

    Volume, velocity, recency, response rate, response time, content (BERT-extracted), sentiment, and cross-platform consistency. Average rating is the part most operators focus on but the part that affects ranking least directly.

  3. On-page SEO (~14%)

    Title tags, H1, body content, internal linking, schema markup (LocalBusiness, Organization, FAQPage where relevant), and local-relevance signals in copy. Service-area businesses also benefit from city-level and service-level landing pages where the content is genuinely distinct.

  4. Behavioural signals (~13%)

    CTR from impression to listing, calls placed, direction requests, website clicks, photo views, "save" actions, dwell time on listing, and search-then-direction (a strong intent signal). These compound: a high-behavioural listing gets an algorithmic boost on top of its raw signals.

  5. Backlinks (~10%)

    Domain authority of the linking sites, topical relevance, local relevance (links from local publications and other local businesses), anchor text. Less dominant than for organic ranking, but still significant for the prominence pillar.

  6. Citations and NAP consistency (~8%)

    Presence on authoritative local directories, accuracy of Name, Address, Phone across the citation footprint. Has lost weight relative to a decade ago, but consistency across the high-trust sources remains a meaningful prominence signal, especially for newer businesses.

  7. Personalisation (~5%)

    The searcher's history with your business, their past clicks for similar queries, their preferred businesses, and their typical search behaviour. Not something you optimise directly, but worth knowing about when you compare what you see to what your customer sees.

  8. Entity and schema convergence (~3%)

    Clean LocalBusiness schema on your site, sameAs links from authoritative identifiers (Companies House, regulator IDs, Wikidata where applicable), and consistency between your GBP entity record and other entity sources. Small standalone weight, but qualifies you for more candidate sets and is the signal AI Overview retrieval pays most attention to.

Approximate signal weights for Map Pack ranking, May 2026. Weights are observational, not Google-stated. They add to approximately 100% but vary by query type.
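To make the weighting concrete, here is a toy linear scoring model using the approximate weights above. The linear combination and the per-listing signal scores are illustrative assumptions; as noted, the real contribution of each pillar is query-dependent:

```python
# Illustrative weighted-sum scoring using the approximate category weights
# from the table above. Per-listing signal scores (0-1) and the linear
# combination are assumptions for illustration only.

WEIGHTS = {
    "gbp": 0.30, "reviews": 0.17, "on_page": 0.14, "behavioural": 0.13,
    "backlinks": 0.10, "citations": 0.08, "personalisation": 0.05,
    "entity": 0.03,
}

def score(listing):
    # Each listing carries a 0-1 score per signal category;
    # missing categories contribute nothing.
    return sum(WEIGHTS[k] * listing.get(k, 0.0) for k in WEIGHTS)

complete = {k: 0.8 for k in WEIGHTS}          # solid across the board
thin = {"gbp": 0.9, "citations": 0.9}         # strong GBP, little else
print(round(score(complete), 3), round(score(thin), 3))
```

The point of the toy: a listing that is merely good across every category comfortably outscores one that is excellent in two categories and absent elsewhere, which matches the observed behaviour of breadth beating spikes.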

Pillar 1: Relevance, in depth

Relevance is how the algorithm decides whether your listing should be a candidate for a specific query at all. The signal stack inside relevance, in approximate descending order of impact:

  1. Primary GBP category (highest single field)

    The structured, machine-readable claim about what kind of business you are. Multiple ranking-factor surveys and our own correlation work consistently place it at the top.

  2. Secondary GBP categories (high)

    Up to nine additional categories. Each one opens additional candidate sets but dilutes if unrelated. Three to six honest ones is the working sweet spot.

  3. Services and service descriptions (medium-high)

    Structured offerings within your category. Each one widens the lexical match net; long-tail services pull in long-tail queries. Aim for 10 to 30 with brief descriptions.

  4. Attributes (medium-high)

    Wheelchair accessible, free Wi-Fi, outdoor seating, online appointments, LGBTQ+ friendly. Drives filtered-search appearances (queries with implicit or explicit attribute filters).

  5. Products (medium)

    For retail and product-led businesses, the structured product list is heavily weighted. Less impactful for pure service businesses.

  6. Reviews content (medium)

    Google's BERT-style language understanding reads review text. Reviews that mention specific services, attributes, or use cases feed back into the relevance signal for those terms.

  7. Business name (low-medium, but high-risk)

    Your name carries weight when it genuinely describes you. Adding descriptors not in your registered name is a documented suspension trigger, post-Vicinity.

  8. Description and website content (low-medium)

    Lower weight than the structured fields above. Useful as supporting context but should not be where you put your relevance bets.

  9. Schema markup (low, qualifying)

    Less about direct rank weight, more about qualifying your listing for entity-based candidate sets and AI Overview retrieval.

Relevance signals, in approximate order of impact for most local-intent queries.

Pillar 2: Distance, in depth

Distance is the most-misunderstood pillar because the phrase implies a simple straight-line measurement. In practice, distance is a relevance-weighted radius modified by user signals and query intent. Four mechanics are worth understanding:

Searcher geolocation precision

  • Mobile: GPS coordinates, typically accurate to a few metres
  • Desktop: IP-based geolocation, typically accurate to city level
  • Browser location permission: more precise on desktop if granted
  • Wi-Fi network triangulation: improves desktop precision in dense urban areas

Geo-anchor selection

  • Implicit local ('plumber'): centred on the searcher
  • Explicit local ('plumber in Manchester'): centred on the named location
  • 'Near me' modifier: same as implicit, with stronger proximity weighting
  • Travel-intent context: centred on the searcher's likely destination

Service-area vs storefront mechanics

  • Storefront: distance to your business pin
  • Service-area: distance to the centroid of your defined service polygon
  • Hybrid: distance to either pin or polygon, whichever is shorter
  • Service-area precision: smaller, well-defined areas outperform sprawling areas

Vicinity (December 2021) effects

  • Reduced ability of distant businesses to rank for proximity queries
  • Hit hardest: keyword-stuffed business names ranking far from searcher
  • Reduced businesses appearing across an entire metro from a single suburban address
  • Strongly tightened the proximity weighting on 'near me' queries
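The mechanics above can be sketched as a great-circle distance plus a sharp falloff past a category-dependent radius, echoing the Vicinity behaviour. The decay shape and the 5 km radius are assumptions for illustration, not known Google parameters:

```python
import math

# Sketch of distance as a "relevance-weighted radius": haversine distance
# with a steep falloff past a category-dependent radius. The decay curve
# and the 5 km default radius are illustrative assumptions.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def distance_score(km, category_radius_km=5.0):
    """Full score inside the radius, then a steep exponential falloff."""
    if km <= category_radius_km:
        return 1.0
    return math.exp(-(km - category_radius_km) / 2.0)

d = haversine_km(53.4808, -2.2426, 53.4839, -2.2446)  # two central Manchester points
print(round(d, 2), distance_score(d))
```

The useful intuition: inside the category radius, distance barely discriminates between candidates and the other pillars decide; past it, the score collapses quickly, which is why post-Vicinity a suburban address rarely ranks across a whole metro.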

Pillar 3: Prominence, in depth

Prominence is how well-known your business is to Google and to its users. It is the slowest pillar to move and the one with the highest ceiling. The signal stack:

Reviews

The dominant signal in this pillar. Volume, velocity, recency, response rate, content depth. Reviews are read for content, not just rated for stars.

Backlinks

Less dominant than for organic search, but still meaningful. Local link relevance and topical relevance matter more than raw count.

Mentions and coverage

Brand mentions on news sites and trusted publications, whether linked or not. Google's entity matching identifies mentions even without an explicit link.

Citation footprint

Presence on authoritative local directories and trade bodies. Has lost weight relative to 2015 but consistency across the high-trust sources still matters.

Knowledge Graph completeness

The completeness of your entity record in Google's Knowledge Graph. Driven primarily by GBP fields, schema markup on your site, and sameAs convergence.

Behavioural prominence

Aggregate behavioural signals across the listing: total interactions, click rate from impressions, saved actions. A prominence-by-engagement loop.

Pillar 4: Behavioural signals, the silent accelerant

Behavioural signals are the part of the local algorithm that has grown the most in the past three years. Google does not name them in its public documentation, but the patterns are visible in any decent before-and-after testing across listings with otherwise-identical signal stacks. The behavioural signals the algorithm reads:

CTR (impression to click)

If your listing is shown 100 times in the Map Pack and clicked 12 times, your CTR is 12%. Higher CTR for a position is a signal that the listing matched intent.

Calls (from the listing)

Tap-to-call from a Maps listing or Map Pack. A strong purchase-intent signal that Google can attribute to your listing.

Routes (direction requests)

Route requested to your address. One of the strongest behavioural signals because it implies physical-visit intent, not just informational lookup.

Dwell (time on listing)

How long users spend on your listing before bouncing. Long dwell implies they found something worth reading; short dwell implies a mismatch.

These signals compound with the other three pillars. A high-CTR listing for a category-relevant query gets a relevance bump above what its raw signal stack would predict. A high-direction-request listing gets a prominence bump. The compounding is the reason listings that "shouldn't" rank sometimes do, and listings that "should" rank sometimes don't.

The behavioural signal stack has roughly doubled in relative influence since 2022. A listing with strong behavioural signals outranks a listing with strictly higher raw signals approximately 62% of the time in our matched-pair testing.

our internal observation, May 2026
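A toy model of the compounding makes the mechanism easier to reason about. The boost curve, expected-CTR baseline, and cap below are invented for illustration; only the CTR arithmetic comes from the text above:

```python
# Sketch of behavioural compounding: a simple CTR-based multiplier applied
# on top of a raw signal score. The expected-CTR baseline, boost curve, and
# cap are invented assumptions, not known Google parameters.

def ctr(impressions, clicks):
    return clicks / impressions if impressions else 0.0

def behavioural_boost(raw_score, observed_ctr, expected_ctr=0.08, cap=1.25):
    """Boost listings that out-click the expected CTR for their position."""
    if observed_ctr <= expected_ctr:
        return raw_score
    return raw_score * min(cap, 1.0 + (observed_ctr - expected_ctr))

print(ctr(100, 12))                   # the 12% example from the text
print(behavioural_boost(0.70, 0.12))  # raw score lifted by strong engagement
```

A multiplier like this is the simplest shape that reproduces the observed behaviour: a listing whose raw signals say position four can end up at position two if users consistently prefer it, and vice versa.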

Filters Google applies after ranking

After candidates are scored, several filters operate on the ranked list before it is shown to the user. These filters are the cause of most "we should be ranking but we aren't" diagnoses we run for agencies:

  • Possum filter (September 2016, modified by Hawk in 2017). De-duplicates listings sharing the same address or very similar names and categories within close proximity. After Hawk, the filter only triggers when the listings are very close together; mid-distance siblings are no longer always filtered. The most common cause of unexpected ranking absence for businesses with multiple locations in the same building or business park.
  • Vicinity filter (December 2021). Reduced ability of distant businesses with keyword-stuffed names to rank for proximity queries. The filter applies a sharp distance falloff past a category-dependent radius.
  • Spam filter (continuous). Fake businesses, obvious lead-generation listings, and listings flagged through the report-a-problem flow. Suspension is an upstream binary, not a filter.
  • Quality filter (continuous). Incomplete or stale listings are demoted within candidate sets. Listings with no photos, no description, no Q&A activity, and no recent posts can rank, but they will lose head-to-head to a more complete peer with similar raw signals.
  • Personalisation re-ranking (continuous). Search history, past clicks for similar queries, and preferred businesses re-order results for the individual searcher. This is why two people standing next to each other can see different Map Packs for the same query.
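The Possum/Hawk de-duplication pass can be sketched as a single walk down the ranked list. The similarity measure, thresholds, and listing data below are illustrative assumptions, not the real filter:

```python
from difflib import SequenceMatcher

# Toy sketch of a Possum/Hawk-style de-duplication pass: walking the ranked
# list, drop any lower-ranked listing that shares an address with a kept
# listing, or that has a very similar name, the same category, and sits
# within a tight distance. All thresholds and data are illustrative.

def similar(a, b, threshold=0.85):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def possum_filter(ranked, max_metres=50):
    kept = []
    for cand in ranked:
        clash = any(
            cand["address"] == k["address"]
            or (similar(cand["name"], k["name"])
                and cand["category"] == k["category"]
                and abs(cand["metres"] - k["metres"]) <= max_metres)
            for k in kept
        )
        if not clash:
            kept.append(cand)   # higher-ranked sibling survives; duplicate is filtered
    return kept

ranked = [
    {"name": "Acme Plumbing",     "address": "1 High St", "category": "plumber", "metres": 0},
    {"name": "Acme Plumbing Ltd", "address": "1 High St", "category": "plumber", "metres": 0},
    {"name": "Borough Plumbers",  "address": "9 Low Rd",  "category": "plumber", "metres": 400},
]
print([r["name"] for r in possum_filter(ranked)])
```

Note the ordering dependence: because the pass walks the list top-down, it is always the lower-ranked sibling that disappears, which is why the filtered location in a multi-location business is usually the weaker of the pair.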

The surfaces: where local results actually appear

The same underlying algorithm produces results across several different surfaces. The ranking is similar across surfaces but not identical; each surface has its own re-ranking step and its own display constraints:

Local Pack (3-pack on SERP)

  • Triggered by: local-intent queries on SERP
  • Result count: typically 3 (sometimes 2 or 4)
  • Re-ranking layer: SERP-context re-rank; pack composition for visual diversity
  • Best to optimise for: category, distance, prominence

Local Finder ('More places')

  • Triggered by: clicking 'More places' on a Local Pack
  • Result count: up to 20 per page
  • Re-ranking layer: closer to raw ranking; minimal re-rank
  • Best to optimise for: same as Local Pack, with depth

Maps app (native maps)

  • Triggered by: any search in the Maps app
  • Result count: continuous list, scrollable
  • Re-ranking layer: map-context re-rank; visual map proximity weighted
  • Best to optimise for: mobile-first listing completeness

AI Overviews (synthesised answer)

  • Triggered by: strong local-intent + answerable query
  • Result count: 1-3 cited businesses inside the answer
  • Re-ranking layer: entity-quality + answer-quality re-rank
  • Best to optimise for: entity record, schema, citations

AI Mode (conversational)

  • Triggered by: conversational local query in AI Mode
  • Result count: variable, 1-5 mentioned
  • Re-ranking layer: conversational context + recent-mention re-rank
  • Best to optimise for: same as AI Overviews, plus prose explainers
Surface-by-surface comparison of how the same underlying algorithm produces different output.

Algorithm history: the named updates that matter

Local algorithm changes happen continuously, but a handful of named updates produced step-change effects that still shape how the system behaves today. The community names are not Google's, but they refer to documented or confirmed Google updates:

  1. July 2014

    Pigeon

    The first major local update to bring traditional ranking factors (links, content, on-page SEO) into closer alignment with the local algorithm. Before Pigeon, local and organic were more siloed. After Pigeon, they began to share signals.
  2. September 2016

    Possum

Diversified the local pack by filtering listings sharing addresses or sharing very similar names and categories close together. Also increased the weight on proximity. The first time many multi-location businesses noticed some of their locations disappearing from the pack.
  3. August 2017

    Hawk

    Tightened the Possum filter. Previously, listings within hundreds of metres of each other could be filtered against one another; after Hawk, only very close listings (typically same building or immediate neighbours) get filtered.
  4. November 2019

    Bedlam (neural matching for local)

    Brought BERT-style neural matching to local queries. Queries with implicit intent ("good place for steak") started producing more accurate matches even when none of the words in the query directly appeared in listings.
  5. December 2021

    Vicinity

    The proximity update. Reduced the ability of businesses to rank far from the searcher when their relevance was driven by keyword stuffing in the business name. One of the largest practical impacts on the everyday Map Pack since Possum.
  6. November 2022

    Local search update (spam targeting)

    A series of spam-fighting changes focused on lead-generation listings, fake business profiles, and category abuse. Visible in the bulk-removal of certain home-services category listings in the months following.
  7. September 2023

    Reviews and helpful-content integration

    Reviews and content signals began sharing more weight inside the local algorithm. Listings with thin or stale content on their primary website saw step-down ranking changes; listings with depth and recency saw step-ups.
  8. March 2024

    Core update with local impact

    A broad core update that had documented local effects. Behavioural signal weighting increased in our matched-pair testing, and listings with weak engagement signals lost ground regardless of raw signal completeness.
  9. August 2024

    Helpful content + reviews integration deepened

    Further integration between content quality assessments and local prominence scoring. Service-area landing pages with thin or programmatically-generated content saw substantial losses.
  10. February 2025

    AI Mode rollout begins

    Google's AI Mode (conversational search) rolled out to broader audiences. AI Overviews with local intent began citing GBP profiles directly as sources, alongside schema-rich web content.
  11. March 2026

    Schema.org v30.0

    Schema.org released v30 with new types (Credential, OnlineMarketplace, ConferenceEvent) and equivalence annotations to GS1 and Dublin Core. Most directly affects regulated professions and specialised commerce, but the broader entity-typing improvements rippled through local retrieval.
  12. May 2026

    Latest core update

    The most recent core update at time of writing. The observable pattern across our portfolio is further weight shift toward behavioural signals and entity-record cleanliness, with corresponding losses for listings relying on legacy citation footprint or thin programmatic content.
The named local updates since Pigeon and what each one changed.

AI search and local in May 2026

The local algorithm now feeds both traditional surfaces (Map Pack, Local Finder, Maps app) and the AI surfaces (AI Overviews, AI Mode, and through retrieval, Claude, ChatGPT, and Perplexity). The mechanics differ in subtle but important ways:

AI Overviews with local intent

  • Triggered on local-intent queries with strong informational component
  • Cites businesses inside the answer (typically 1 to 3)
  • Retrieval pulls from the Map Pack candidate set plus schema-rich pages
  • Re-ranks for answerability and entity quality, not raw rank
  • Strong entity record beats marginal rank position

AI Mode (conversational)

  • Multi-turn conversational search across topics
  • Local results surface inline when relevant to the conversation
  • Heavier weight on consistency between GBP, schema, and on-site content
  • Mentions are not always linked; entity recognition is what matters
  • Strongly prefers listings with rich Q&A and structured services data

Across third-party AI assistants, the pattern in May 2026 is that local data is retrieved through Google's local APIs (where licensed), through Bing's local index, or by sending a real-time search to a search engine and re-ranking the results. Pages associated with a clean entity record tend to be cited disproportionately, even when their raw rankings are middling.
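A clean entity record of the kind this retrieval layer favours starts with a consistent LocalBusiness JSON-LD block. The business details and identifier URLs below are invented placeholders; the `@type` and property names (`PostalAddress`, `sameAs`, and so on) are standard Schema.org vocabulary:

```python
import json

# Minimal LocalBusiness JSON-LD of the kind AI retrieval rewards: consistent
# NAP plus sameAs convergence to authoritative identifiers. Business details
# and identifier URLs are invented placeholders for illustration.

local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 High Street",
        "addressLocality": "Manchester",
        "postalCode": "M1 1AA",
        "addressCountry": "GB",
    },
    "telephone": "+44 161 000 0000",
    "url": "https://example.co.uk",
    # sameAs links tie the web entity to authoritative external records
    "sameAs": [
        "https://find-and-update.company-information.service.gov.uk/company/00000000",
        "https://www.wikidata.org/wiki/Q000000",
    ],
}

print(json.dumps(local_business, indent=2))
```

The NAP fields here should match the GBP record character-for-character; the convergence between the two records is the signal, not the markup on its own.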

Common myths and what is actually true

Myth

  • More citations always means better local rankings
  • Posts on GBP directly move rank position
  • Average review rating is the strongest review signal
  • Proximity is fixed and overrides everything else
  • Backlinks don't matter for local SEO
  • AI Overviews are killing local clicks

What is actually true

  • NAP consistency matters more than count. After about 20 to 40 high-quality citations, additional ones produce diminishing returns
  • Posts signal active management and can drive click-throughs, but they do not directly move ranking position. Treat them as engagement, not ranking
  • Volume, velocity, recency, content, and response rate together matter more than rating. A 4.5 with depth beats a 5.0 with three reviews
  • Distance is heavily weighted but combined with prominence inside a category-dependent radius. Prominence can outrank closer competitors
  • Backlinks have lower weight than for organic ranking but are still meaningful for prominence. Local-relevant links specifically matter most
  • Engagement patterns are more nuanced. AI surfaces can drive new long-tail traffic that Map Pack does not. The total picture is not net-negative for well-optimised listings

How to test and instrument

The local algorithm responds to changes on timescales ranging from hours (some behavioural and proximity changes) to weeks (review velocity changes, content updates) to months (link and citation changes, brand-mention compounding). Reliable instrumentation is the difference between knowing what is working and guessing:

  • Geo-grid rank tracking. Measure rank for your target queries at multiple geo-points around your service area, not just a single point. Rankings vary continuously across metres of geography; a single-point average is misleading. Our Geo-Grid Rank Tracking feature is purpose-built for this.
  • Mobile and desktop separately. Proximity weighting differs between mobile (GPS-precise) and desktop (IP-coarse). Track both for any geography you care about.
  • Before-and-after testing on single levers. Change one thing at a time, wait for the algorithm to absorb the change (a week is usually enough for GBP-field changes), then measure delta. Multi-lever changes are diagnostically useless.
  • Behavioural metrics from GBP Insights. Pull the Performance API into a warehouse and watch CTR, calls, direction requests, and website clicks over time. These leading indicators predict ranking changes before rank tracking catches them.
  • Cross-platform monitoring. Map Pack rank is one number. Total business visibility is rank across the Map Pack, Local Finder, Maps, AI Overview citations, and AI Mode mentions. The same listing optimisations move all of them, but not at the same speed.
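The geo-grid geometry behind the first instrumentation point is straightforward to generate. The grid size and spacing below are arbitrary choices; the metres-to-degrees conversion is the standard approximation:

```python
import math

# Sketch of geo-grid rank-tracking geometry: an n x n grid of sample points
# centred on a location, spaced a fixed number of metres apart. Each point
# would then be queried separately for rank. Grid size and spacing are
# arbitrary illustrative choices.

def geo_grid(lat, lon, n=7, spacing_m=500):
    """Return n*n (lat, lon) sample points centred on (lat, lon)."""
    dlat = spacing_m / 111_320                                   # metres per degree latitude
    dlon = spacing_m / (111_320 * math.cos(math.radians(lat)))   # shrinks with latitude
    half = (n - 1) / 2
    return [(lat + (i - half) * dlat, lon + (j - half) * dlon)
            for i in range(n) for j in range(n)]

points = geo_grid(53.4808, -2.2426)
print(len(points))   # 49 sample points for a 7x7 grid
```

Averaging rank across all 49 points (or mapping them as a heat grid) is what makes the proximity falloff visible; a single-point rank hides it entirely.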

The audit checklist

  • Primary GBP category is the narrowest accurate option Google offers; review against the full searchable list
  • Three to six secondary categories that each genuinely describe additional work you do
  • Services list populated with 10 to 30 entries, each with a brief description
  • Every applicable attribute ticked; reviewed at least quarterly
  • 20+ recent photos across exterior, interior, team, products, and work-in-progress
  • Description uses the full 750 characters, leads with what you do and who you serve
  • Every review from the last quarter has a reply within 48 hours of being posted
  • Q&A section has answers from you to the top 10 questions a customer would ask
  • Posts published within the last four weeks
  • Special hours scheduled for upcoming public holidays and closures
  • Website has LocalBusiness schema markup with consistent NAP
  • sameAs links from authoritative identifiers (Companies House, regulator IDs, Wikidata if applicable)
  • Citation footprint consistent across the high-trust local directories for your country
  • Backlinks from at least a handful of local publications, partners, or industry bodies
  • Geo-grid rank tracking running for your top three commercial queries
  • GBP Performance API or Insights reviewed monthly for behavioural-signal trend changes
  • Mobile and desktop ranking tracked separately for the same queries
  • AI Overview and AI Mode visibility checked for your top three queries each month

Where the algorithm is heading

Looking at the trajectory of the past four updates, the practical expectations for the rest of 2026 and into 2027 are:

  • Continued weight shift toward behavioural signals. Every update since 2022 has nudged behavioural weight higher. Expect this to continue.
  • Tighter entity-record requirements. Listings without clean schema, sameAs convergence, and consistent NAP will increasingly underperform listings with the same raw GBP signals but cleaner entity records.
  • AI surfaces becoming a larger share of impression counts. AI Overviews and AI Mode are expanding their query coverage; the Map Pack remains the dominant surface but its share of total local impressions has declined.
  • Reviews continuing to gain weight, but conditioned on authenticity. Review spam detection has tightened materially in 2024 and 2025. Volume and velocity matter, but only if the reviews look organic.
  • Lower returns from citation-volume tactics. Building citations on long-tail directories has been losing weight for years. The trend is continuing.

Sources

Factual claims on this page are drawn from Google's own documentation, Schema.org, our own observational testing across customer portfolios, and the standard set of community-named Google update designations.

Start tracking your real rankings today

See where you actually rank on Google Maps, not where Google tells you. Get started free with 250 credits.