Stan Consulting · Marketing Atlas · Case File · Construction Marketing
case_type: composite · cluster: construction-marketing · published: 2026-05-10
It was a Tuesday morning in May. The owner of a California general contracting firm typed "general contractor [his city]" into his phone, on his own coffee table, sitting in his own living room. The map pack on the search result displayed three businesses. None of them was his. The third position was occupied by a contractor he had never heard of. He clicked the unfamiliar contractor's profile and read it for forty-five seconds.
Eleven months in business. Four-point-three star average across twenty-four reviews. A street address two miles from his own office. He scrolled back to his own profile. Four-point-eight stars across eighty-seven reviews. Fourteen years in business. License in good standing. The math made no sense to him. He typed a sentence into an email he sent to the audit team that afternoon: competitors with worse reviews outrank me.
The composite is a residential and light-commercial GC. Four-point-eight million annualized. Fourteen years operating in the same metropolitan area. Eighteen employees, three project managers, a dedicated estimating team. The firm built remodels, ADUs, and the occasional ground-up custom home. Roughly forty percent of revenue had come from Google traffic over the prior three years, almost entirely through the local pack and the firm's website. The firm had not been investing in Google Business Profile activity since the GBP rebrand from Google My Business years prior.
The owner's working theory was that the algorithm had changed and he was being penalized for something he could not name. The audit's working theory before pulling the data was that the algorithm had not changed; the inputs the algorithm reads had been static against a competitor's compounding inputs for three years.
The audit was scoped on a Friday. The brief was three sentences. Pull the GBP activity feed on both profiles for the trailing ninety days. Pull NAP citations across the major directories. Tell us what the algorithm is reading.
The owner's email contained two numbers and an implied question. Star average and review count. Those numbers told a story the local-pack algorithm was not weighting. The six numbers below show what the algorithm was reading instead.
Operator review average: 4.8 (87 reviews accumulated over 14 years)
Competitor review average: 4.3 (24 reviews accumulated over 11 months)
Operator review velocity: 0 (new reviews in the trailing 5 months)
Competitor review velocity: 2.2 (average new reviews per month over the trailing 11 months)
Operator GBP activities, trailing 90 days: 2 (posts, Q&A answers, photo uploads, and offer activities combined)
Competitor GBP activities, trailing 90 days: 47 (posts twice weekly, every Q&A answered, weekly photo uploads)
The owner had been measuring his profile against the two numbers a customer reads. The algorithm reads four additional numbers a customer does not typically see. The four additional numbers describe what the operator's profile has done lately, not what the operator's profile has accumulated historically. The competitor was winning the lately number. The accumulation number did not save the operator on the daily ranking refresh.
The owner had also assumed his website was carrying the local-pack ranking. The audit pulled the website ranking data. The website ranked page two on the primary query and page three on the long tail. The website had not been receiving meaningful organic clicks from the city term for the prior eighteen months. The GBP was carrying the local pack on its own, and the GBP had been static.
Four explanations were on the table when the audit started. Each one let the owner believe the situation was unfair rather than unmanaged.
"Google must be penalizing me." The algorithmic-conspiracy read. The argument is that some unknown action triggered a manual or automated penalty against the operator's profile, demoting it below newer entrants. The argument fails because penalties are rare against well-aged legitimate profiles, and when they fire, they produce diagnostic signals inside the GBP dashboard. There were none. The profile was not penalized. The profile was just not being read as active.
"My website needs to rank to feed the local pack." The website-first read. The argument is that the local pack ranking is a function of website domain authority, and the fix is to rebuild the site for SEO. There is a correlation, but the local pack is not primarily a website ranking. The local pack reads GBP signals first, citation consistency second, on-page signals from the linked website third. Rebuilding the site is real work that produces real outcomes after six months. The GBP activity gap could be closed inside three weeks. The owner was being offered a long install that did not address the immediate gap.
"The newer competitor must have shady review-buying." The bad-actor read. The argument is that the competitor's twenty-four reviews are artificially produced. The audit checked the reviewer profiles. Most were local Google accounts with photos, history, and other businesses reviewed in the area. The reviews looked real. Some may not have been; the operator's recourse on suspect reviews is to report them through the GBP flag-review tool. The flag-review tool would not solve the activity-feed gap, and the activity-feed gap was the larger ranking driver against this competitor.
"It's a temporary blip; the algorithm will sort itself out." The patience read. Local-pack rankings do fluctuate, with seventy-three percent of local businesses experiencing significant ranking shifts at least once per year. The audit confirmed the operator's competitors had been moving in and out of the pack across the prior year. The operator had not been in the top three for any of the trailing eleven weeks. That is not a blip. That is a sustained read of the operator's profile as below the competitor's profile against the current algorithm. Waiting does not change the inputs; the inputs are what the algorithm is reading.
All four explanations let the operator avoid the work of running the profile. The competitor was doing the work. The algorithm was reading the work. The ranking was the read.
The GBP was being run as a static directory listing. The competitor was running the GBP as a publishing surface. The algorithm reads "active" and "static" as two completely different inputs. The historical-accumulation profile lost to the recent-velocity profile because the algorithm grades freshness as a separate signal from authority.
Three structural defects compounded over thirty-six months. NAP inconsistency that crept in during a phone-number change two years prior. A primary category set during onboarding eight years ago and never reviewed against the current GBP category taxonomy. A review-velocity collapse that started when the firm's office manager retired and nobody picked up the review-request workflow she had been running.
The GBP activity-feed comparison is the artifact this case file is built around. Forty-seven competitor activities against two operator activities over ninety days. The decomposition below names the three defects that produced the ranking and ranks them in install order against the competitor's specific moves.
The competitor's GBP activity feed over ninety days held forty-seven entries. Twenty-six were GBP Posts (twice weekly). Twelve were photo uploads (weekly cadence with project-by-project tagging). Six were Q&A answers against incoming questions. Three were Offer activities with date-bounded promotions. The operator's feed held two entries: one photo upload from a holiday post in December and one Q&A answer from January. The algorithm reads activity as a freshness signal independent of historical depth. The operator was being graded against forty-seven recent signals at zero. The historical depth of eighty-seven reviews and fourteen years of operation did not offset the freshness gap because freshness is a separate signal, not a discount on authority.
The audit pulled the operator's NAP (name, address, phone) data across the major construction-industry directories. Yelp, BBB, Houzz, Angi, HomeAdvisor, Yellow Pages, Bing Places, Apple Maps, Foursquare, Manta, Hotfrog, MerchantCircle, the local chamber-of-commerce listing, and the operator's CSLB record. Fourteen surfaces. The operator's phone number had changed two years prior during a phone-system migration. Eleven of the fourteen directories still showed the old number. Two showed a transposed digit. One showed the new number correctly. Address listings were stable across all fourteen. Business-name capitalization varied across nine of the fourteen, with three using "LLC" suffixes and six using the bare-name version. The algorithm reads citation consistency as a trust signal. The operator's trust signal had been degraded for two years.
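The cross-directory comparison is mechanical enough to script. The sketch below is a minimal illustration of the consistency check, assuming hypothetical directory records and a made-up canonical record; the business names, addresses, and phone numbers are invented examples, not the operator's actual data.

```python
# Minimal sketch of a NAP (name, address, phone) consistency audit.
# All business data below is illustrative, not the operator's record.
CANONICAL = {
    "name": "Example Builders LLC",
    "address": "123 Main St, Sampletown, CA 95000",
    "phone": "555-0142",
}

directories = {
    "Yelp": {"name": "Example Builders",  # bare name, no LLC suffix
             "address": "123 Main St, Sampletown, CA 95000",
             "phone": "555-0199"},        # stale number from before the migration
    "BBB": {"name": "Example Builders LLC",
            "address": "123 Main St, Sampletown, CA 95000",
            "phone": "555-0142"},
    "Apple Maps": {"name": "Example Builders LLC",
                   "address": "123 Main St, Sampletown, CA 95000",
                   "phone": "555-0124"},  # transposed digits
}

def normalize(value: str) -> str:
    """Case-insensitive comparison with collapsed whitespace."""
    return " ".join(value.lower().split())

def audit(canonical: dict, records: dict) -> dict:
    """Map each directory to the list of fields disagreeing with canonical."""
    mismatches = {}
    for directory, record in records.items():
        bad = [field for field in canonical
               if normalize(record[field]) != normalize(canonical[field])]
        if bad:
            mismatches[directory] = bad
    return mismatches

print(audit(CANONICAL, directories))
# → {'Yelp': ['name', 'phone'], 'Apple Maps': ['phone']}
```

The normalization step matters: capitalization variance (nine of the fourteen directories in the audit) is tolerable noise, while a wrong digit in the phone field is a hard mismatch.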
The GBP's primary category had been set to "Contractor" during onboarding eight years prior. Google's GBP category taxonomy had evolved substantially over the prior five years, with "General Contractor" emerging as the dominant category for the operator's service mix and additional categories like "Home Builder," "Custom Home Builder," "Remodeler," and "Construction Company" available as primary or secondary options. The operator's "Contractor" primary was technically valid and read by the algorithm as a less-specific match than "General Contractor." The competitor's primary category was "General Contractor" with secondary categories of "Home Builder" and "Remodeler." The category mismatch was costing the operator query-relevance against the dominant search query.
The operator's review-velocity history showed roughly one new review per month from 2018 through early 2024, then dropped to zero. The drop coincided with the retirement of the firm's office manager, who had been running a post-completion review-request workflow against every closed project. The replacement office team had not picked up the workflow. The algorithm reads review velocity as a freshness signal. Five months of zero new reviews against a competitor producing two reviews per month is a gap the historical-depth column does not close.
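Review velocity, as the audit uses the term, is just a count over a trailing window divided by the window length. A minimal sketch, assuming review dates pulled from the profile; the dates below are illustrative, not the operator's review history.

```python
from datetime import date

def review_velocity(review_dates: list[date], as_of: date, months: int = 5) -> float:
    """New reviews per month over a trailing window of ~30-day months."""
    window_days = months * 30
    recent = [d for d in review_dates if 0 <= (as_of - d).days <= window_days]
    return len(recent) / months

# A profile whose newest review is older than the window reads as zero velocity,
# regardless of how many reviews sit in the historical column.
stale = [date(2023, 6, 1), date(2023, 9, 15), date(2023, 11, 20)]
print(review_velocity(stale, as_of=date(2024, 5, 10)))   # 0.0

active = [date(2024, 2, 1), date(2024, 4, 1)]
print(review_velocity(active, as_of=date(2024, 5, 10)))  # 0.4
```

The point of the sketch is the zero: the eighty-seven historical reviews never enter the calculation.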
Forty-seven to two. Eleven directories with a stale phone number. A primary category untouched for eight years. Five months of zero reviews. The historical-depth profile lost to the recent-velocity profile because the algorithm grades the two columns separately and the competitor was winning every column the algorithm checks weekly.
The verdict named the install order. Five steps, sequenced so the activity feed starts producing freshness signals on day one while the NAP cleanup runs in parallel. The fastest-moving signals get installed first.
The GBP primary category is moved from "Contractor" to "General Contractor." Secondary categories are added: "Home Builder," "Remodeler," "Construction Company," "ADU Builder" where the operator's service mix matches. Category changes are submitted through the GBP dashboard and verified inside seventy-two hours. The category update is a one-time fifteen-minute action with a measurable downstream signal against the dominant query.
A twice-weekly GBP Post schedule is installed. Tuesday and Friday. Each Post is a short project update, a finished-job photo with a caption, a seasonal note, or a service-specific update. Posts include the city name, the service term, and a call-to-action. The cadence is the activity signal the algorithm reads. The Posts are not marketing; the Posts are freshness data the operator owes the algorithm if the operator wants the algorithm to read the profile as active. The cadence becomes the office's standing Tuesday-Friday twenty-minute task.
Each of the fourteen directories receives an update against the canonical NAP record: legal business name (with LLC suffix where required), street address as it appears on the CSLB record, and the current phone number. The work is scheduled at two directories per week, clearing all fourteen inside seven weeks; the directories with the highest citation weight (Yelp, BBB, Apple Maps, Bing Places, Houzz) get priority in weeks one through three. A spreadsheet tracks the state of each directory and the date of last update. Citation aggregators (Whitespark, Moz Local, or BrightLocal) can compress the work if the operator opts into a $300-$600 annual subscription, but the manual approach is the verdict's baseline because the audit is not selling a tool.
The retired office manager's post-completion workflow is rebuilt against the new office team. Every closed project triggers a templated text-message review request within forty-eight hours of project completion, with a direct link to the operator's GBP review page. The template is short, personal, and asks for a review against a specific aspect of the project (cleanliness, communication, timeline, quality). Industry benchmarks on this workflow put review-conversion at roughly one in four requests, which would put the operator's velocity at roughly one to two new reviews per month inside the first ninety days and closer to three per month inside six months. The workflow is the long-term answer; the short-term answer is the activity feed.
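The one-in-four benchmark turns into a velocity projection with a single multiplication. A back-of-envelope sketch; the monthly project counts below are assumptions for illustration, not the operator's actual close rate.

```python
CONVERSION_RATE = 0.25  # ~1 review per 4 requests, the benchmark cited above

def projected_reviews_per_month(closed_projects: float,
                                request_rate: float = 1.0) -> float:
    """Expected new reviews per month if `request_rate` of closes get a request."""
    return closed_projects * request_rate * CONVERSION_RATE

# A firm closing 6 projects a month, requesting on every close:
print(projected_reviews_per_month(6))        # 1.5 reviews/month
# The same firm requesting on only half of its closes:
print(projected_reviews_per_month(6, 0.5))   # 0.75
```

The second call is the failure mode the case file describes: the workflow only produces velocity if it fires on every close, which is why it is tied to project completion rather than left to memory.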
The operator's office checks the GBP Q&A tab once daily and answers any new question within twenty-four hours. Past questions are answered if they have been sitting unanswered. Photos are uploaded weekly, tagged with project-specific captions, geotagged where possible. The combined cadence of Posts, Q&A, and photos puts the operator's ninety-day activity count above forty by the end of month three, matching the competitor's read. The matched activity count plus the higher review velocity plus the fixed citation profile produces the algorithm input the operator had been missing for three years.
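The above-forty claim is checkable with cadence arithmetic. The weekly rates below are the cadence the verdict installs; the Q&A rate is an assumption about incoming question volume, not a number from the audit.

```python
WEEKS_IN_90_DAYS = 90 / 7  # ~12.86 weeks in a trailing 90-day window

cadence_per_week = {
    "posts": 2,          # Tuesday and Friday GBP Posts
    "photos": 1,         # weekly photo upload
    "qa_answers": 0.5,   # assumed: roughly one incoming question every two weeks
}

total_activities = sum(rate * WEEKS_IN_90_DAYS
                       for rate in cadence_per_week.values())
print(round(total_activities))  # 45, clearing the forty-activity bar
```

At that cadence the operator's trailing-ninety-day count sits in the mid-forties, matching the competitor's forty-seven without matching the competitor move for move.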
The compounding mechanism is freshness-versus-authority confusion. Authority is what the operator has accumulated. Freshness is what the operator has done lately. The algorithm reads both as separate signals, and a profile with high authority and zero freshness loses to a profile with low authority and high freshness on freshness-weighted queries. The operator had been investing in authority for fourteen years and freshness for none.
The competitor was not doing anything sophisticated. The competitor was running a GBP the way GBP is meant to be run: as a publishing surface that produces a stream of freshness data the algorithm grades against the operator's stale feed. The competitor's eleven months of operation had been enough to compound forty-seven activities against the operator's two over ninety days. The activity count is the operating metric. The operator had not been measuring it.
What this case file is for: any contractor whose local-pack ranking has slid against newer competitors despite higher review counts and longer operating history. The diagnosis almost never lives in the review numbers the operator has been watching. The diagnosis lives in the activity feed the operator has not been watching. The Conversion Second Opinion produces the activity-feed comparison, the NAP audit, and the install order against a fourteen-week schedule.
Five Cents · Stan's note
The part of this case file I keep coming back to is the moment the owner stops blaming Google. He had been blaming Google for about ninety days when he wrote the email. The blame was a kind of comfort because the alternative was naming a defect in his own operation. The defect was small. The office manager retired. The post-completion review-request workflow she had been running for six years stopped running. Nobody noticed because the booked jobs kept coming. The booked jobs kept coming until they did not. Then the owner typed his own company name into his own phone.
What I keep seeing in contractor GBPs is the assumption that the profile is set-and-forget. Fourteen-year-old businesses with eighty-seven reviews think the profile carries itself. The profile does not carry itself. The algorithm reads the profile weekly. The weekly read is a freshness read. The freshness read is the column the operator has been ignoring. The work to fix the column is small. The compounding cost of not doing the work is the local pack going to an eleven-month-old competitor.
What I want operators to take from this is to open their own GBP dashboard tomorrow and count the activities in the trailing ninety days. Posts, Q&A answers, photo uploads, offers. If the number is under ten, the profile is being read as static. If the number is under five, the profile is being read as abandoned. The fix is twenty minutes twice a week against an indefinite horizon. The Conversion Second Opinion produces the activity-feed comparison and the install order. The work after that is small and weekly. The compounding outcome is the next quarter of leads the operator was missing.
Each link below points at a related Atlas page that handles a piece of the case file in more depth. Reference pages define the term. Position pages give the firm's defended doctrine. The hub gives the map.
If this is the pattern in your local pack
If the case file maps to your firm — aged business, strong reviews, slipping local-pack position against newer competitors — the engagement that runs this diagnostic is the Conversion Second Opinion. A written verdict against the GBP activity comparison, the NAP citation audit, and the category configuration, with the install order on a fourteen-week schedule.