The complete guide to local SEO
What local SEO actually is, why it's different from regular SEO, and the full ranking-factor stack: GBP, citations, reviews, schema, and beyond.
Local SEO is the discipline of getting your business identified, indexed, and recommended by the systems people use to find local services. Google Search and Maps are the obvious surfaces. Apple Maps, Bing, and AI assistants like ChatGPT, Perplexity, Gemini, and Claude are the increasingly important secondary surfaces. The mechanics overlap with general SEO at the retrieval layer but diverge sharply at the entity layer, where local SEO is about your business as a structured record, not your website as a set of pages.
What local SEO actually is
General SEO is about ranking pages for queries. Local SEO is about being the business a retrieval system identifies, retrieves, and recommends in response to a local-intent query. The unit of optimization is the business entity, not the page. An entity has structured properties (name, address, phone, category, opening hours, attributes, services, products, photos, reviews) that live across multiple sources: Google Business Profile, Bing Places, Apple Business Connect, directory listings, your website's structured data, and increasingly Wikidata.
When the systems behind Google Search, Google Maps, AI assistants, or third-party directories receive a query with local intent, they consult these entity sources to identify candidates, rank them, and surface the answer. That answer might be a Map Pack of three results above a SERP, a list of pins on Maps, an AI-generated recommendation paragraph, or a single Knowledge Panel for an explicit brand query. All of them rest on the same underlying entity layer.
Local SEO is making sure the world's retrieval systems agree on who you are, where you operate, what you do, and how well you do it. Everything else follows from that.
The official Google ranking factors
Google has publicly stated, in its own help documentation, that local results in Search and Maps are decided by three factors: relevance, distance, and prominence. Almost everything else you read about local SEO ranking factor weights is inference, observation, or survey-based. Useful, but not the same thing. Treat anything more specific than these three as a hypothesis worth testing rather than a confirmed mechanic.
Relevance
How well your entity record matches the query. Driven primarily by primary category, services, business name, description, and review content.
Distance
How close you are to the searcher, or to the location implied by the query. The proximity bias the algorithm applies before any other factor.
Prominence
How well-known and trusted your entity appears to be. Inferred from reviews, links, mentions, citations, age, and brand-search volume.
Distance is bounded by physics. You cannot move your premises. You can decide which queries you compete for, and you can set your service area honestly to match where you actually operate. Relevance is the easiest factor to influence within days of editing your GBP. Prominence is the long game where most defensible advantage is built.
What the industry surveys tell us about factor weights
Several established industry surveys attempt to quantify what works. The most widely cited polls expert practitioners annually on what they believe drives Map Pack and Localised Organic rankings. These surveys are observational and consensus-based, not algorithmic, but the results have been broadly stable for years and are useful as a sanity check on industry priorities.
The picture that emerges across recent years of these surveys, taken with appropriate caution:
1. Google Business Profile signals (top weight). Primary category, completeness, services, attributes, photos, posts, Q&A. Consistently the highest-weighted bucket in Map Pack ranking factor surveys.
2. Review signals (high weight). Count, rating, velocity, recency, response rate, and increasingly the language and specificity of review text. Inputs to both the Map Pack and AI search.
3. On-page signals (moderate weight). Title tags, headings, content relevance, schema markup, internal linking. Higher weight for Localised Organic ranking than for Map Pack ranking.
4. Link signals (moderate weight). Locally relevant inbound links carry more weight than generic ones. Link weight features more strongly in Localised Organic than in Map Pack.
5. Citation signals (foundation). NAP consistency across major directories. Direct ranking weight has decreased in recent years, but inconsistencies still actively hurt entity resolution.
6. Behavioural signals (inferred). Click-through, calls, direction requests, photo views, dwell time. Difficult to measure directly; influenced indirectly by being clickable.
7. Personalisation (per-query). Searcher's history, login state, prior interactions, device. Affects what each user sees and is the reason a single rank check is misleading.
The local SERP: four surfaces, different mechanics
"Ranking" in local SEO is not one thing. There are at least four distinct surfaces a prospect might use to find you, and they do not always show the same results. Each has its own ranking algorithm, its own click behavior, and its own commercial intent.
Map Pack (in SERP)
- Three local results above the blue links
- High commercial intent; most clicks land in the GBP listing rather than the website
- Heavily prominence-weighted
- Highly competitive with limited slots
Google Maps (app or maps.google.com)
- Many results visible at once
- More proximity-weighted than the Map Pack
- Mobile-first context; drives discovery and 'save for later'
- More forgiving of newer entities than the Map Pack
Localised organic blue links
- Below the Map Pack
- Closer to traditional SEO mechanics (links, on-page, content)
- Your website ranks here, not your GBP
- Lower CTR than the Map Pack for local intent
AI assistant answers
- ChatGPT, Perplexity, Gemini, Claude, AI Overviews
- Synthesised paragraph; you are cited or you are not
- Different per assistant: ChatGPT uses Bing, Gemini uses Google's Knowledge Graph, Claude uses Brave
- Drives less traffic but eats top-of-funnel awareness
The four surfaces compound. Strong Map Pack presence builds brand-search volume, which feeds the Knowledge Graph, which feeds Gemini and AI Overviews. Solid traditional SEO is the floor that lets ChatGPT cite you via Bing. Optimizing one surface in isolation tends to underperform optimizing the system. See the Maps ranking guide and the AI search visibility (GEO) guide for the per-surface mechanics.
The GBP layer: why it dominates
Google Business Profile is the canonical entity record for most local businesses. It feeds the Map Pack directly, populates Knowledge Panels, supplies most of the structured local data that Google's Knowledge Graph holds, and is what most AI assistants pick up when running underlying searches for local queries.
The fields Google has progressively added to GBP over the years (services, products, attributes, posts, Q&A, social links, AI-generated descriptions) are the ones the algorithm has reason to read. Empty fields are missed signals. Most businesses fill the obvious ones (NAP, hours, a few photos) and stop. The depth and accuracy of the less-obvious fields is one of the main differentiators between listings that rank and listings that do not.
The full optimization playbook for GBP, in priority order, is in the GBP optimization guide. The single biggest field, by a meaningful margin in every ranking factor survey we have seen, is the primary category.
The website layer: technical local SEO that still matters
Although most local-SEO leverage moves through GBP, your website still does material work, especially for Localised Organic ranking and for AI search retrieval. The mechanisms that matter:
Schema.org structured data
Schema.org markup is how you make your website's claims about your business machine-readable. The minimum viable stack for a single-location business:
- `LocalBusiness` on your homepage (or a more specific subtype like `Restaurant`, `Dentist`, `Plumber`, `FinancialService`). Use the most specific subtype Schema.org defines for your sector. Subtypes inherit all parent properties and add their own.
- Full set of properties: `name`, `image`, `address` (broken into `streetAddress`, `addressLocality`, `addressRegion`, `postalCode`, `addressCountry`), `telephone`, `geo` (with `latitude` and `longitude`), `openingHoursSpecification` (one entry per day-of-week range), `priceRange`, `paymentAccepted`, `areaServed`.
- A `sameAs` array linking your domain to your GBP listing, your Wikidata Q-number if you have one, and your major social profiles. This is the explicit declaration that those URLs all describe the same entity.
- On service pages: `Service` entries with `serviceType`, `provider` referencing your `LocalBusiness`, and `areaServed`.
- On any FAQ section: `FAQPage`. Both Google and AI assistants extract these directly into answers.
- For multi-location businesses: `Organization` on the corporate domain with `hasPart` linking to each location's `LocalBusiness` node. See the multi-location guide for the architecture.
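The minimum viable stack described above can be sketched as JSON-LD. Here is a minimal example generated from a Python dict; every business detail (name, URLs, coordinates, the Q-number) is a placeholder, and the `Plumber` subtype is just an illustration of "most specific subtype":

```python
import json

# Minimal LocalBusiness JSON-LD for a hypothetical single-location business.
# Every value below is a placeholder; swap in your real details.
local_business = {
    "@context": "https://schema.org",
    "@type": "Plumber",  # most specific Schema.org subtype that fits the sector
    "name": "Example Plumbing Co",
    "image": "https://www.example.com/images/storefront.jpg",
    "telephone": "+44 20 7946 0000",
    "priceRange": "££",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "London",
        "addressRegion": "Greater London",
        "postalCode": "SW1A 1AA",
        "addressCountry": "GB",
    },
    "geo": {"@type": "GeoCoordinates", "latitude": 51.5007, "longitude": -0.1246},
    "openingHoursSpecification": [
        {
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
            "opens": "09:00",
            "closes": "17:30",
        }
    ],
    # The explicit same-entity declaration: GBP, Wikidata, socials.
    "sameAs": [
        "https://www.google.com/maps?cid=0000000000000000000",  # placeholder GBP link
        "https://www.wikidata.org/wiki/Q00000",                 # placeholder Q-number
        "https://www.facebook.com/exampleplumbing",
    ],
}

# Emit the body of the <script type="application/ld+json"> tag for the homepage.
print(json.dumps(local_business, indent=2))
```

Paste the output into a `<script type="application/ld+json">` tag in the homepage `<head>` and validate it with Google's Rich Results Test before shipping.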
Site architecture for local intent
For a single-location business, the site architecture is straightforward. For multi-location businesses, the URL structure becomes the single most important decision (covered in the multi-location guide).
For a single location, the patterns that work:
- Homepage: positions the entity, contains the canonical NAP, carries the `LocalBusiness` schema, embeds a Google Map.
- One service page per major service: each ranks for service-plus-location queries, carries `Service` schema, has location-specific copy, links back to the homepage and to other service pages where relevant.
- About / team page with `Person` schema for staff or experts where credentials matter (especially for YMYL sectors like medical, legal, financial).
- Contact page with the canonical NAP, the Google Map embed, and contact methods.
Page experience and Core Web Vitals
Google has explicitly confirmed that page experience signals (Largest Contentful Paint, Interaction to Next Paint, Cumulative Layout Shift) are ranking factors, though their weight relative to relevance and authority is small. Run your homepage and top service pages through PageSpeed Insights. If any are failing Core Web Vitals, fix the worst one before optimizing further. The benefit compounds with everything else.
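PageSpeed Insights also has a public API (the v5 `runPagespeed` endpoint), which makes it easy to script the check for your homepage and top service pages rather than pasting URLs in by hand. A small sketch that builds the request URLs; the page URLs are placeholders, and for regular automated use Google asks you to attach an API key:

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url: str, strategy: str = "mobile") -> str:
    """Build a PageSpeed Insights API request URL for one page.

    strategy is "mobile" or "desktop"; local intent is mostly mobile.
    """
    query = urlencode({
        "url": page_url,
        "strategy": strategy,
        "category": "performance",
    })
    return f"{PSI_ENDPOINT}?{query}"

# Check the homepage and top service page, as suggested above.
for page in ["https://www.example.com/", "https://www.example.com/boiler-repair"]:
    print(psi_request_url(page))
```

Fetch each URL (or paste it into a browser) and read the Core Web Vitals assessment out of the JSON response's `loadingExperience` field.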
The reviews layer
Reviews are simultaneously a ranking input, a click-through input, and a conversion input. The full mechanics, scripts, and the operational playbook are in how to get more Google reviews. The summary for this guide:
- Count and velocity matter. A business accumulating reviews steadily over years signals genuine ongoing activity. A sudden spike or extended drought reads as suspicious or stale.
- Specific phrasing in review text matters more than it used to, especially for AI search. Reviews mentioning specific services or attributes are extracted as relevance signals.
- Response rate and time matter. Replies are public, evaluated for sentiment and recency, and influence the next reader.
- Recency matters. A 4.8-star average from reviews ten years old carries less weight than a 4.6-star average from active recent reviews.
Annual consumer review research consistently finds that the large majority of consumers consult reviews before choosing a local business and that rating, recency, and volume are the three properties they pay most attention to. Industry rank-factor surveys mirror this from the supply side: reviews are consistently in the top two or three buckets year after year.
The citation layer
Citations are mentions of your business across the web, with or without a link. Their direct ranking weight has decreased over the past decade as Google has improved at synthesising entity information from richer signals. They still matter for two reasons.
- Entity corroboration. When the same NAP appears across Companies House, GBP, Yell, Yelp, Apple Business Connect, sector-specific directories, and your website, Google has multiple independent confirmations of your existence and identity. This strengthens entity resolution, which feeds everything else.
- Inconsistency damage. The reverse is also true: a single outdated phone number on a long-running directory can split your prominence signals across two perceived entities. Fixing inconsistency unlocks signals that were already there but were not converging.
The full directory list, audit process, and duplicate-handling playbook are in NAP consistency & local citations. The summary: do the top 15 to 25 directories well rather than chasing volume. Volume is no longer a ranking signal in itself.
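The audit itself is mostly normalisation and comparison: strip formatting differences before deciding whether two listings disagree. A deliberately crude sketch (the directory names and business details are hypothetical, and real audits need to handle street abbreviations, suite numbers, and international phone formats):

```python
import re

def normalize_nap(name: str, address: str, phone: str) -> tuple:
    """Crude normalization before comparison. Real audits need more:
    street abbreviations, suite numbers, international phone formats."""
    norm_name = re.sub(r"[^a-z0-9]", "", name.lower())
    norm_addr = re.sub(r"[^a-z0-9]", "", address.lower())
    norm_phone = re.sub(r"\D", "", phone)[-10:]  # compare last 10 digits
    return (norm_name, norm_addr, norm_phone)

# Hypothetical listings pulled from three sources; GBP is canonical.
listings = {
    "GBP":  ("Example Plumbing Co", "1 Example St, London SW1A 1AA", "+44 20 7946 0000"),
    "Yelp": ("Example Plumbing Co.", "1 Example St London SW1A1AA", "020 7946 0000"),
    "Yell": ("Example Plumbing", "1 Example St, London", "020 7946 0099"),  # stale number
}

canonical = normalize_nap(*listings["GBP"])
mismatches = [src for src, nap in listings.items() if normalize_nap(*nap) != canonical]
print(mismatches)  # only the listing with the stale data is flagged
```

Note that the Yelp record passes despite different punctuation and spacing; the point of normalising first is to surface real disagreements rather than cosmetic ones.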
AI search and the entity convergence problem
AI search is a fourth surface, currently smaller than the others by traffic volume but growing as a share of top-of-funnel discovery. The mechanics are different enough to warrant its own guide. Briefly:
- AI assistants run a retrieval step (against Bing, Google, Brave, or proprietary indexes depending on the assistant) followed by a synthesis step that picks which entities to mention and which sources to cite.
- Your visibility depends on entity convergence: your Knowledge Graph node, Wikidata entry (if any), GBP record, and domain all referencing the same entity, with explicit `sameAs` links between them.
- AI crawlers (GPTBot, OAI-SearchBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, others) need access. Audit robots.txt to confirm none are accidentally blocked.
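The crawler-access audit can be automated with the standard library's robots.txt parser. A sketch that flags which of the named AI user agents a given robots.txt denies (the sample file and site URL are hypothetical):

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = [
    "GPTBot", "OAI-SearchBot", "ChatGPT-User",
    "ClaudeBot", "PerplexityBot", "Google-Extended",
]

def blocked_ai_crawlers(robots_txt: str, url: str = "https://www.example.com/"):
    """Return the AI user agents this robots.txt denies for the given URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [ua for ua in AI_CRAWLERS if not parser.can_fetch(ua, url)]

# Example: a robots.txt that accidentally blocks GPTBot site-wide.
sample = """
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""
print(blocked_ai_crawlers(sample))  # flags only GPTBot
```

In practice, fetch your live `/robots.txt` and run the same check; an empty result means none of the listed agents are blocked for your homepage.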
Full detail in the AI search visibility (GEO) guide. The point for this guide: traditional local SEO foundations are the floor. Without them, GEO efforts struggle.
Measuring local SEO honestly
Local rankings are personalized by location, device, and search history. Checking your rank from a single point gives you a flattering, mostly useless number.
Geo-grid ranking
The honest measurement is geo-grid ranking: defining a grid of points across your service area, checking your rank from each point for a given keyword, and plotting the result on a map. The boundary between where you win and where you lose tells you where to invest. The change in that boundary over months tells you whether your work is landing.
Geo-grid resolution varies by tool. Most products offer 5×5 to 21×21 grids, with the higher resolutions giving more accurate boundary detection at the cost of more API calls and higher pricing. For most single-location businesses, a 7×7 or 9×9 grid run monthly on the top five commercial keywords is sufficient. See geo-grid rank tracking for our implementation.
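How commercial tools lay out their grids varies, but the underlying geometry is simple: generate evenly spaced points around the business and run one rank check per point per keyword. A sketch using the flat-earth approximation (about 111 km per degree of latitude), which is adequate at service-area scale; the centre coordinates are a hypothetical London location:

```python
from math import cos, radians

def geo_grid(center_lat, center_lng, size=7, spacing_km=1.0):
    """Generate a size x size grid of (lat, lng) points around a centre.

    Uses the flat-earth approximation (~111 km per degree of latitude),
    which is fine at the few-kilometre scale of a service area.
    """
    half = size // 2
    lat_step = spacing_km / 111.0
    lng_step = spacing_km / (111.0 * cos(radians(center_lat)))
    points = []
    for row in range(-half, half + 1):
        for col in range(-half, half + 1):
            points.append((center_lat + row * lat_step,
                           center_lng + col * lng_step))
    return points

# A 7x7 grid at 1 km spacing centred on a hypothetical business.
grid = geo_grid(51.5007, -0.1246)
print(len(grid))  # 49 points: one rank check each, per keyword
```

Each point becomes the simulated searcher location for a rank check; plotting the ranks back onto the grid gives you the win/lose boundary described above.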
Other metrics worth tracking
- GBP profile views, calls, direction requests, website clicks (from the GBP Insights tab or via the GBP API)
- Review velocity (new reviews per week or month) and average rating
- Brand-search volume in Google Search Console (queries containing your business name)
- Localised organic ranking for non-Map Pack keywords (your service-page rankings)
- Citation accuracy across the top 15 to 25 directories (annual or quarterly audit)
- AI visibility for a defined query set across ChatGPT, Gemini, Perplexity, Claude (monthly)
- Conversion data tied back to GBP and search source (calls answered, leads booked, customers served)
The 90-day starting plan
For a business starting from a low or unmaintained baseline, this is the order of work that produces the most visible movement in the shortest realistic time.
1. Days 1 to 14: Claim, verify, and complete GBP. Search Google for your business. Click "Own this business?" if you do not already have access. Verify (postcard, video, or phone, depending on what Google offers your business). Add a backup owner. Then fill every field: primary category, 3 to 6 secondary categories, 10 to 30 services, every applicable attribute, 20+ photos, a 750-character description, opening hours, special hours for the next 12 months of bank holidays, and 10 Q&A entries you write yourself.
2. Days 14 to 21: Audit citations on the top 15 to 25 directories. Decide your canonical NAP format. Visit each major directory (Bing Places, Apple Business Connect, Yell, Yelp, Facebook, sector-specific directories, local council registries). Fix any inconsistencies. Hunt for duplicates and request removal where they exist. Confirm Companies House matches.
3. Days 21 to 30: Wire up reviews. Generate your direct GBP review link. Add it to receipts, post-service SMS messages, post-purchase emails, and in-store QR codes. Train staff on when to ask. Set a 48-hour SLA on review replies. Document the brand voice for those replies.
4. Days 30 to 45: Technical website work. Add LocalBusiness JSON-LD to your homepage. Add Service schema to each major service page. Add FAQPage schema to any FAQ sections. Validate everything. Audit Core Web Vitals on the homepage and top service page. Fix the worst issue.
5. Days 45 to 60: Entity convergence. Add `sameAs` to your homepage Organization or LocalBusiness schema. If your business meets Wikidata's notability bar (any independent press coverage), create or claim a Wikidata Q-number and link it back via `sameAs`. Audit robots.txt for AI crawlers per the GEO guide.
6. Days 60 to 90: Measure and iterate. Run your first geo-grid scan on five commercial keywords. Note where you rank from each grid point and where the boundary sits. Re-run monthly. Identify gaps where competitors appear and you do not, and target the underlying signal. By day 90 you should be seeing measurable boundary movement on at least two of your five keywords.
Common patterns we see go wrong
Quick-reference quarterly checklist
Run through this once a quarter. If any line is "no", that is a leak.
- GBP is verified and has a backup owner who is not an ex-employee
- Primary category is the most specific accurate match Google offers for your sector
- Services and products are populated with current offerings
- Every applicable attribute is ticked
- Photos uploaded in the last 30 days, including at least one new exterior or interior shot
- Description uses the full 750 characters and reflects current positioning
- Top 15 to 25 citations have matching NAP, with no known duplicates
- Review velocity averages at least a few new reviews per month, depending on customer volume
- Every review in the last 90 days has a reply within 48 hours of being posted
- LocalBusiness schema is on the homepage and validates cleanly
- sameAs array is on the schema and links to GBP, social profiles, and Wikidata if applicable
- robots.txt has been audited for AI crawler access
- Core Web Vitals on the homepage and top service page pass
- Geo-grid scans on top 5 keywords have been run in the last 30 days
- GBP Insights have been reviewed for trend changes (sudden drop in calls, photo views, profile views)
Where to go next
Each layer above has its own deep guide. Start with whichever is most broken in your business right now.
Keep reading
And when you are ready to measure what is actually working, that is what SearchOps is built for.