Stan Consulting · Marketing Atlas · Case File · Construction Marketing

The HVAC Shop That Spent Five Figures on Tire-Kickers.

case_type: composite
cluster: construction-marketing
published: 2026-05-10
Section 01 · The setup.

It was a Tuesday morning in early April. The CFO of an HVAC contractor was reading the trailing-ninety-day P&L because she had been hearing the word "ROAS" in operating reviews for six months and had not yet seen the bank confirm anything the word described. Eighteen-thousand monthly on Google Ads. Seven-thousand monthly on Local Services Ads. Twenty-five-thousand a month against a four-point-eight-times return-on-ad-spend the marketing director had presented at the last quarterly.

She opened the books and pulled paid-tagged revenue against the same window. Trailing-ninety-day paid revenue read forty-six-thousand. Trailing-ninety-day paid spend read seventy-five-thousand. The real ratio was zero-point-six. The platform's ratio was four-point-eight. The platforms were not lying. The marketing director was not lying. The two numbers were measuring different things.
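For readers who want the two reads side by side, the arithmetic is below as a minimal sketch. The figures are the composite's trailing-ninety numbers (the platform-attributed revenue figure appears in the figures block in the next section); the only difference between the two ratios is which revenue number sits in the numerator.

```python
# Minimal sketch of the two ROAS reads in this composite, using the
# trailing-90-day figures named in this case file.

paid_spend = 75_000                     # Google Ads $54K + LSAs $21K
platform_attributed_revenue = 360_000   # platform-side read against 412 "conversions"
bank_confirmed_paid_revenue = 46_000    # revenue from intake leads tagged paid-source

platform_roas = platform_attributed_revenue / paid_spend    # 4.8
real_roas = bank_confirmed_paid_revenue / paid_spend        # ~0.61

print(f"Platform ROAS:       {platform_roas:.1f}x")
print(f"Bank-confirmed ROAS: {real_roas:.1f}x")
```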

The composite is an HVAC contractor running residential maintenance, repair, and replacement. Five-point-one million annualized. Twenty-four-person operation, eight install crews, two service vans on call, a small office team. The shop had been on Google Ads for four years and on Local Services Ads for two, with the LSA spend increasing every quarter on the theory that the platform "self-optimizes."

The marketing director had been hired eighteen months prior and was running paid in-house with a contractor PPC tool that pulled platform-reported data into a single dashboard. The dashboard read against platform conversions, and those conversions counted click-to-call events as conversions. The CFO did not know that.

The audit was scoped on a Friday. The brief was one sentence. Tell us where five figures of monthly paid spend is actually going, and tell us which conversion goal is counting the bank.

  • Trade · Residential HVAC · service, repair, replacement
  • Annualized revenue · $5.1M
  • Team size · 24, including 8 install crews
  • Monthly paid spend · $25K (Google Ads $18K + LSAs $7K)
  • Reported ROAS · 4.8x · per platform-reported conversions
  • Bank-confirmed ROAS · 0.6x · per trailing-90-day paid revenue
  • Counting conventions in conflict · 4 named in the verdict
  • Engagement · Conversion Second Opinion · written verdict
Section 02 · The visible problem.

The marketing director brought a dashboard. The CFO brought a P&L. The two documents agreed on the spend numbers. They disagreed on every revenue number. Below are the six figures that defined the disagreement.

  • Google Ads · trailing 90 · $54K · spend across Search, Performance Max, and remarketing
  • Local Services Ads · trailing 90 · $21K · paid calls and messages routed through the LSA dashboard
  • Platform-reported conversions · 412 · form-fills plus click-to-call events counted across both platforms
  • Platform-attributed revenue · $360K · platform-side rough revenue read against the 412 conversions
  • Bank-confirmed paid revenue · $46K · trailing-90-day revenue from leads tagged as paid-source on intake
  • Real ROAS · 0.6x · $46K bank-confirmed revenue divided by $75K paid spend

The marketing director's dashboard read against the four-hundred-twelve platform-reported conversions. The four-hundred-twelve conversions were almost entirely click-to-call events fired from the mobile ad on the search results page. Roughly thirty percent of those calls connected to the office and roughly twelve percent of the connected calls became jobs. The conversion goal was counting the click, not the call. The dashboard was reading the click as if the click were the revenue.
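The gap between counted conversions and booked jobs is easiest to see as a funnel. A minimal sketch, using the approximate connection and booking rates named above; treat the output as an order-of-magnitude read, and note that the bank's roughly fifty paid jobs also include the form-fill path.

```python
# Rough funnel math for the click-to-call path, using the approximate rates
# named above. Order-of-magnitude only; the bank's ~50 paid jobs also
# include the form-fill path.

platform_conversions = 412    # mostly click-to-call taps, trailing 90 days
connect_rate = 0.30           # share of taps that reached the office
booking_rate = 0.12           # share of connected calls that became dispatch tickets

connected_calls = platform_conversions * connect_rate       # ~124
booked_jobs = connected_calls * booking_rate                # ~15

print(f"Connected calls: ~{connected_calls:.0f}")
print(f"Booked jobs from click-to-call: ~{booked_jobs:.0f}")
```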

The CFO's P&L read against the bank. The bank only saw what the office collected. The office only collected against the dispatch tickets the call-takers booked. The two reads were both correct. They were measuring two different verbs.

Section 03 · The wrong explanation.

Four explanations were on the table when the audit started. Each one let the marketing director defer the question the CFO had asked.

Wrong reason 01

"The platform's ROAS read is fine; the bank tag is incomplete." The defense most marketers reach for. The argument is that the office isn't tagging every paid-source lead correctly, so the bank-confirmed revenue is understated. There is some truth at the edges; the office's UTM hygiene was not perfect. But the gap was not at the edges. The platform was reading four-hundred-twelve conversions where the bank read about fifty paid jobs. No amount of intake-tag cleanup closes a gap that wide. The platform was counting events the bank had no equivalent for.

Wrong reason 02

"Performance Max is doing the heavy lifting; we just can't see it." The Google-rep argument. The campaign type whose attribution surface is opaque on purpose was credited with the revenue the search campaigns could not source. The argument fails because the Performance Max campaigns were running heavily on YouTube remarketing and Display, with a small Search slice that was capturing branded queries the brand would have closed anyway. The "invisible lift" the rep was attributing to Performance Max was almost entirely existing-customer recapture and branded search interception.

Wrong reason 03

"Seasonality · Q1 is always soft." The patience argument. HVAC in this region does flatten in Q1 between the heating season tail and the cooling season start. The marketing director argued the trailing-ninety-day window was unrepresentative and asked the CFO to wait until Q2 to re-read the math. The CFO had been hearing the same argument for the same window the prior year. The seasonality argument was real for the gross revenue line and irrelevant for the ratio of paid spend to paid-attributed jobs. The ratio is normalized against the season already.

Wrong reason 04

"We need more conversion data; install a CDP." The tech argument. A customer-data platform stitching the dispatch system back into Google's offline conversions feed would absolutely improve the attribution surface. It would not change which conversion goal the platform was optimizing against. The platform was optimizing against click-to-call. The CDP would have given the platform clean offline data the platform would have used to optimize against the same click-to-call goal. The tool was real and the install order was wrong.

All four explanations let the spend continue at twenty-five-thousand a month. None of them required the marketing director to define which event on the funnel was the conversion the business runs against.

Section 04 · The structural cause.

The platform's conversion goal was the click. The business's conversion was the booked dispatch ticket. The two events sat four hand-offs apart on the funnel, and nobody had ever written which event the spend was optimizing against. The platform was honest. The marketing director was honest. The goal was wrong.

The decomposition lives in the next section. Before it, the cause names itself: four counting conventions were running against the spend, and only the bank was reading the office's own intake process as the conversion. The platforms had been reading the funnel one event upstream of revenue for two years.

Section 05 · The decomposition.

The audit pulled the branded-search query report on the Search campaigns, and that report told the rest of the story. Four convention failures, stacked, made the gap; the branded-query report was the artifact that made it visible.

C1 · Click-to-call counted as a conversion with no call-duration threshold · Conversion-goal failure

The Google Ads conversion goal was set to fire on click-to-call events from the mobile ad. There was no call-duration threshold. A user tapping the ad's call button, hearing the office line ring, and hanging up before connection was counted as a conversion. Roughly forty percent of the four-hundred-twelve platform conversions were calls under thirty seconds. The office's call-tracking software showed the connection rate on those calls was near zero. The goal was counting taps, not conversations. The marketing director did not know the goal was configured this way because it had been imported from a previous agency's setup three years earlier.

  • Conversion goal · click-to-call · no duration threshold
  • Calls under 30 seconds in the trailing 90 · ~165 of 412 platform conversions
  • Connection rate on under-30s calls · effectively zero
  • Goal config age · ~3 years · never reviewed
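If the call-tracking export is available as a flat file, the sub-thirty-second share is a one-pass check. A minimal sketch, assuming a hypothetical export with one row per call and a duration_seconds column; the file name and column name are illustrative, not the shop's actual schema.

```python
import csv

# Minimal sketch: count how many tracked calls never became a conversation.
# "call_tracking_export.csv" and "duration_seconds" are illustrative names,
# not a specific call-tracking vendor's schema.

THRESHOLD_SECONDS = 30

with open("call_tracking_export.csv", newline="") as f:
    calls = list(csv.DictReader(f))

short_calls = [c for c in calls if int(c["duration_seconds"]) < THRESHOLD_SECONDS]

print(f"Tracked calls: {len(calls)}")
print(f"Under {THRESHOLD_SECONDS}s: {len(short_calls)} "
      f"({len(short_calls) / len(calls):.0%} of tracked calls)")
```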
C2 · Out-of-area clicks at 70-80% of spend · Geo-targeting failure

The Search campaigns were running the default "presence or interest" location setting. The setting included people searching for the shop's service terms from anywhere in the country if Google's interest model believed they had local intent. Industry data on small-contractor Google Ads waste puts seventy-to-eighty percent of clicks on irrelevant traffic. The campaigns on this account ran inside that band. The Performance Max campaigns layered on top, with location signals even less constrained. The branded campaigns were the only campaigns where geo behaved as expected because the brand was the targeting itself.

  • Location setting · "presence or interest" (default)
  • Out-of-area click rate · ~74% on non-branded Search campaigns
  • Spend attributable to out-of-area clicks · ~$26K trailing 90
  • Industry benchmark for waste at this configuration · 70-80%
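The same kind of one-pass read works for the geographic waste. A minimal sketch against an in-memory location report; the rows and town names are illustrative placeholders, and in practice the input would be the platform's location report export for the trailing ninety days.

```python
# Minimal sketch: estimate out-of-area spend from a location report.
# The rows and the in-area town list are illustrative placeholders.

IN_AREA = {"Springfield", "Shelbyville", "Capital City"}

rows = [
    # (user location, cost)
    ("Springfield",  4_200.0),
    ("Shelbyville",  1_900.0),
    ("Phoenix",      3_600.0),   # out of area
    ("Tampa",        2_800.0),   # out of area
]

total_cost = sum(cost for _, cost in rows)
out_of_area_cost = sum(cost for loc, cost in rows if loc not in IN_AREA)

print(f"Out-of-area spend: ${out_of_area_cost:,.0f} of ${total_cost:,.0f} "
      f"({out_of_area_cost / total_cost:.0%})")
```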
C3 · Branded search eating organic-direct revenue · Attribution failure

The branded campaigns were bidding on the shop's company name. Users typing the shop's name into Google had already chosen the shop; they were searching the way they would have looked up the phone number. The ad above the organic result intercepted the click. The platform credited the click and the call to Google Ads. The bank credited the revenue to "Google" without distinguishing branded-paid from organic. The marketing director's dashboard showed the brand campaigns at six-times ROAS; the actual incrementality was near zero because those customers had already decided on the shop before they searched. Branded paid was double-counting revenue the shop would have received without the spend.

  • Branded campaign trailing-90 spend · ~$9K
  • Share of branded-query volume matching a repeat-customer pattern · ~87%
  • Estimated incrementality of branded-paid · near zero
  • Revenue stolen from organic-direct attribution · the gap visible in the query report
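The branded slice is visible with the same kind of filter over the search-terms report. A minimal sketch, assuming a hypothetical export with search_term, cost, and conv_value columns; the column names and the brand tokens are illustrative, and the incrementality judgment on the branded rows stays a human call.

```python
import csv

# Minimal sketch: split a search-terms export into branded and non-branded
# rows. "search_terms_report.csv", the column names, and the brand tokens
# are illustrative, not a specific account's schema.

BRAND_TOKENS = ("acme heating", "acme hvac")   # hypothetical brand names

def is_branded(term: str) -> bool:
    term = term.lower()
    return any(token in term for token in BRAND_TOKENS)

with open("search_terms_report.csv", newline="") as f:
    rows = list(csv.DictReader(f))

branded = [r for r in rows if is_branded(r["search_term"])]
branded_cost = sum(float(r["cost"]) for r in branded)
branded_value = sum(float(r["conv_value"]) for r in branded)

print(f"Branded rows: {len(branded)} of {len(rows)}")
print(f"Branded spend ${branded_cost:,.0f} · platform-credited value ${branded_value:,.0f}")
```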
C4 · Service-not-offered inquiries counted as conversions · Goal-definition failure

The shop did residential HVAC. The shop did not do commercial rooftop work, refrigeration, or new-construction HVAC. The Search campaigns were bidding on broad-match keywords that included commercial and new-construction terms. The form-fill conversion goal counted any form submission. Roughly fifteen percent of trailing-ninety form submissions were from commercial property managers, GCs, or refrigeration contractors. The platform counted the form as a conversion. The office answered the inquiry and said "we don't do that." The platform-attributed revenue against those leads was zero. The platform's optimization signal was still positive on those leads, which meant the algorithm was learning to find more of them.

  • Form submissions trailing 90 · ~240
  • Service-not-offered submissions · ~36 (15%)
  • Revenue from service-not-offered submissions · $0
  • Algorithm signal · positive, training Google toward more wrong-fit leads
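The service-not-offered share is a keyword filter over the form submissions. A minimal sketch with illustrative submission text; in practice the input is the form or CRM export for the trailing ninety days, and the disqualifying terms come straight from the services the shop does not sell.

```python
# Minimal sketch: flag form submissions that describe services the shop does
# not offer. The submissions below are illustrative placeholders.

NOT_OFFERED = ("commercial", "rooftop", "refrigeration",
               "new construction", "multi-family")

submissions = [
    "AC not cooling, need a repair visit this week",
    "Quote for rooftop units on a retail strip, 4 locations",
    "Walk-in cooler down, refrigeration tech needed",
    "Furnace replacement estimate for a single-family home",
]

def wrong_fit(text: str) -> bool:
    text = text.lower()
    return any(term in text for term in NOT_OFFERED)

wrong = [s for s in submissions if wrong_fit(s)]
print(f"Service-not-offered submissions: {len(wrong)} of {len(submissions)}")
for s in wrong:
    print(" -", s)
```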

Four conventions stacking. The platform-side number was technically correct against the goal it was given. The goal it was given was not the conversion the business runs against. The branded-search query report was the artifact that made the case to the CFO in fifteen minutes.

Section 06 · The fix, in install order.

The verdict named the install order. Six steps. Order matters; redefining the conversion goal before fixing the geo configuration produces a clean signal against a poisoned audience.

  1. Day one · Redefine the conversion goal as booked-dispatch

    The conversion goal is moved from click-to-call to a server-side event fired when the office's dispatch software books a job. The dispatch software's API or Zapier hook into Google Ads offline conversions becomes the conversion surface. Click-to-call survives as a diagnostic metric in the dashboard, not as the optimization signal. The platform now optimizes toward booked jobs. The marketing director's dashboard rebuilds against the new goal. The CFO is signing off on what counts.

  2. Week one · Reset geo to "presence" and shrink the radius to drive-time

    The Search campaigns' location setting moves from "presence or interest" to "presence." The radius collapses to the shop's actual ninety-minute service area defined as a custom polygon, not a default radius. Out-of-area negative-location lists are populated against the historical waste. Performance Max location signals are constrained to the same polygon. The geo change typically removes thirty-to-fifty percent of click volume on the day it ships and shifts the surviving clicks toward the audience the office can actually serve.

  3. Week two · Cut the branded campaign or scope it to defense-only

    The branded campaign is the trickiest call. If competitors are bidding on the shop's brand name (and on this account, two were), the brand campaign serves as defensive interception. If no competitor is bidding, the brand campaign is cut entirely; the click goes to organic at no cost. The verdict called for measuring competitor activity weekly and running the brand campaign with a defensive-only budget cap of about a third of prior spend, with the saved budget reallocated to non-branded high-intent terms.

  4. Week two · Negative-keyword the service-not-offered terms

    A negative-keyword list is built from the trailing-ninety query report against terms the shop does not service. Commercial rooftop, refrigeration, new construction, multi-family install, hospital HVAC, and the long tail of property-management queries. The list is applied across all Search and Performance Max campaigns. The form-fill conversion goal is reconfigured to require a service category match on submission. The algorithm stops training toward wrong-fit leads inside three to four weeks.

  5. Week three · Rebuild the dashboard against the spend-leak ledger

    A single weekly ledger replaces the prior dashboard. Spend by campaign. Booked-dispatch conversions by campaign. Cost-per-booked-dispatch by campaign. Out-of-area click rate by campaign. Service-not-offered form-fill rate by campaign. Branded versus non-branded revenue split. The ledger is the operating contract between the marketing director and the CFO. Variance from the ledger above ten percent is investigated before the next budget cycle. A minimal sketch of the ledger's core calculation follows this list.

  6. Month two onward · Renegotiate the LSA bid against booked-dispatch

    Local Services Ads spend gets a parallel treatment. The LSA dashboard does not allow native offline-conversion import; the workaround is a weekly reconciliation between the LSA call log and the dispatch system, producing a cost-per-booked-dispatch read for LSAs that runs alongside the Google Ads read. The LSA bid is moved off "maximum leads" and onto a target cost-per-lead set against the booked-dispatch math. Dispute templates are built for the credit-eligible calls. The seventy-five-thousand quarterly spend stabilizes at about forty-five-thousand against doubled booked-dispatch volume.
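A minimal sketch of the ledger calculation referenced in step five. The campaign rows are illustrative placeholders; in practice the spend comes from the platform exports and the booked-dispatch counts come from the dispatch system's paid-source tags.

```python
# Minimal sketch of the spend-leak ledger's core read: cost per booked
# dispatch, by campaign. The rows are illustrative placeholders, not the
# composite's actual campaign data.

campaigns = [
    # (campaign, weekly spend in dollars, booked dispatches per dispatch system)
    ("Search · non-branded", 2_400, 6),
    ("Search · branded",       250, 3),
    ("Performance Max",      1_100, 1),
    ("Local Services Ads",   1_600, 5),
]

print(f"{'Campaign':<22}{'Spend':>8}{'Booked':>8}{'$/booked':>10}")
for name, spend, booked in campaigns:
    cost_per_booked = spend / booked if booked else float("inf")
    print(f"{name:<22}{spend:>8,}{booked:>8}{cost_per_booked:>10,.0f}")
```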

Section 07 · The lesson.

The compounding mechanism is goal drift. Conversion goals do not stay aligned to the business by default. The default optimization surface inside Google Ads and inside Local Services Ads counts events upstream of revenue and reports those events as conversions. Every quarter the goal is not redefined, the platform drifts further from the bank. The dashboard the marketing director reads stays internally consistent. The disagreement with the CFO compounds.

The branded-search query report is the artifact most operators have not opened. The report is one click inside Google Ads. The report names which queries are eating the budget and which queries the spend is actually serving. In ninety percent of the contractor PPC accounts the audit touches, the report tells the story the dashboard cannot.

What this case file is for: any contractor running paid spend above ten-thousand monthly whose marketing read and bank read are more than twenty percent apart. The gap is a goal-definition gap, not a tracking gap. The Conversion Second Opinion produces the written goal definition and the spend-leak ledger that names the four conventions and ranks them by recoverable spend.

Five Cents · Stan's note


What I keep seeing in contractor PPC accounts is that the dashboard the marketing person built and the P&L the CFO reads have a quiet contract neither side has signed. The dashboard reports against platform-supplied conversion data. The P&L reports against the bank. The two have never been formally tied. As long as the dashboard's number is bigger than the spend, nobody calls the question. The day the bank's number is smaller than the spend, the question gets called. The conversation is usually a year too late.

The CFO in this composite was not asking a tracking question. She was asking a definition question. She did not know the question was about definitions because the marketing director's vocabulary made the disagreement sound technical. ROAS is not a technical word. ROAS is a contract about what counts as a return and what counts as a sale. The platforms have their answer. The bank has its answer. The operator owes both sides the document that says which answer the business runs against.

What I want operators to take from this is to look at the branded-search query report inside Google Ads tomorrow. It is one click deep. If the report shows your own company name and a tail of high-intent queries with low cost-per-click and impossibly good conversion rates, the brand campaign is eating revenue it did not produce. The fix is not always to cut it; competitor activity can make defensive bidding worth the spend. The fix is to know. The Conversion Second Opinion is the engagement that produces the written knowing.

Stan Tscherenkow · Marketing Atlas · 2026-05-10
Section 09 · Related Atlas entries.

Each link below points at a related Atlas page that handles a piece of the case file in more depth. Reference pages define the term. Position pages give the firm's defended doctrine. The hub gives the map.

If this is the pattern in your account

Define the conversion. Read the spend against the bank.

If the case file maps to your account — platform-reported ROAS that the P&L cannot confirm, branded campaigns running, click-to-call counted as conversion, the CFO asking the question — the engagement that runs this diagnostic is the Conversion Second Opinion. A written verdict against the four-convention framework with the trailing-ninety spend-leak ledger and a recoverable-spend number you can present.