Stan Consulting · Marketing Atlas · Case File · Construction Marketing
case_type: composite · cluster: construction-marketing · published: 2026-05-10
It was a Tuesday morning in October. The roofer's wife, who keeps the books, walked into the office with a yellow legal pad and a calculator. She had been reconciling Q3 credit-card statements. She had a number on the pad. Eleven-thousand-two-hundred dollars. That was the quarter's spend across HomeAdvisor, Angi, and Local Services Ads. She had a second number. Three. That was closed jobs sourced from those platforms over the same ninety days.
She did the math out loud. Three-thousand-seven-hundred-thirty-three dollars in lead cost per closed job. The average roofing ticket was fourteen-thousand-eight-hundred. The lead cost was eating twenty-five percent of revenue before any truck rolled, any material got ordered, any crew got paid. She put the legal pad in front of her husband and said the number twice.
The composite is a California roofer. Two-point-four million annualized. Twelve employees including the crew. Roughly twenty-five-percent residential reroof, the rest insurance and storm work. The shop had been running paid lead platforms for two seasons because every roofer in the local trade-association group was running them.
The roofer's read of the situation was that lead quality was bad. He had been saying that for three months. He had complained to his HomeAdvisor rep, who offered credits. He had complained to his Angi rep, who explained that credits do not refund as cash. He had stopped complaining to the Local Services Ads dashboard because there was nobody to complain to.
The audit was scoped on a Friday. The brief was one paragraph. Pull the forty-seven leads. Grade them against a written rubric. Tell us how many of the forty-seven were actually closeable. Tell us where the spend went on the rest.
The roofer brought four numbers to the audit. Each one looked fine in isolation. None of them named the leak.
HomeAdvisor · Q3 · 19 leads purchased at an average of $52 per lead
Angi · Q3 · 21 leads delivered against the subscription plus per-lead model
Local Services Ads · Q3 · 7 paid calls and messages routed through the LSA dashboard
Aggregate · 47 total Q3 paid leads · $11,200 in spend
Closed jobs from paid · 3 closed roofs at an average ticket of $14,800
Lead cost per close · $3,733 · spend divided by closes · 25% of average ticket
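The quarter's arithmetic can be checked in a few lines. A minimal Python sketch, using only the figures stated in the case file:

```python
# Q3 paid-lead figures from the case file.
spend = 11_200        # total Q3 spend across the three platforms, in dollars
leads = 19 + 21 + 7   # HomeAdvisor + Angi + Local Services Ads
closes = 3            # closed jobs sourced from paid platforms
avg_ticket = 14_800   # average roofing ticket, in dollars

cost_per_close = spend / closes
share_of_ticket = cost_per_close / avg_ticket

print(leads)                      # 47 paid leads
print(round(cost_per_close))      # 3733 dollars per closed job
print(round(share_of_ticket, 2))  # 0.25 -> lead cost eats 25% of the ticket
```

The numbers reproduce the legal pad exactly: $3,733 per close, a quarter of the average ticket gone before a truck rolls.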
The roofer's working theory was that the platforms were selling bad leads. The platforms' working theory was that the roofer was slow on response time and would be helped by their automated dialer add-on. Neither theory engaged the question the wife had written on the legal pad. The question was not why the closes were low. The question was how many of the forty-seven leads had ever been closeable to begin with.
The audit started by pulling every one of the forty-seven into a single spreadsheet. Name, contact, platform, service requested, area, time of inquiry, first-contact attempt by the office, time-to-first-contact, eventual disposition. Nine columns. Forty-seven rows. The spreadsheet took a half-day to build because the platforms each export differently and three of the leads had no working phone number to begin with.
Four explanations were live when the audit started. Each was almost right and pointed away from the layer that mattered.
"The lead quality is bad." The roofer's own read. True at the surface. False as a diagnosis. Lead quality is a property of the lead pool; the operator does not control the pool. What the operator controls is which leads in the pool get worked, in what order, at what speed, against what rubric. Calling the problem "lead quality" hands the steering wheel to the platform. The platforms have no incentive to improve the pool. The diagnosis terminated at the layer the operator could not change.
"Response time is killing us · install the dialer." The platform's read. The HomeAdvisor rep had pitched the automated dialer add-on as the close-rate solution. Faster response time helps. Faster response time on a lead the roofer does not service, in a city the roofer does not cover, for a job the roofer does not perform, helps nothing. The dialer add-on would have raised contact rate against the unfiltered pool and produced more conversations that ended in "we don't do that." The fix was real for a different problem.
"Buy more leads to average out the bad ones." The volume-scales argument. If 3 closes against 47 leads is the working ratio, then doubling the spend doubles the closes. The math is appealing and the math is wrong on this pool. The 3 closed roofs were the only 3 closeable leads in the 47. Doubling the buy doubles the pool size; it does not double the closeable count, because closeable is a property of the lead, not a property of the spend. The roofer was being asked to multiply the wrong fraction.
"Switch platforms." The peer-group read. Every roofer who has ever had a bad Angi quarter has been told to try a different lead platform. The other platforms have the same pool dynamics, the same windowing, the same area coverage rules, the same definition of "exclusive" that is not exclusive. Switching platforms moves the leak. It does not close it. The roofer had already switched once from a regional platform two years prior; the math had not improved.
All four explanations let the roofer keep buying leads. None of them required him to look at the forty-seven individually and ask whether the leads were the right leads to have bought.
There was no written rubric on the office wall for what a closeable roofing lead looked like, before the lead arrived. Every lead got the same intake treatment. The platform decided the pool. The office decided nothing. The structural defect was the absence of a grading rule, not the presence of bad leads.
This is the section where the spreadsheet started telling a different story. Forty-seven rows. The audit graded each row against four conditions: was the contact reachable, was the address inside the roofer's service area, was the service requested actually a service the roofer performed, and did the timing of the inquiry fall inside the roofer's response window. Pass on all four conditions, the lead was closeable. Fail on any one, the lead was diagnostic noise.
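The pass/fail logic of the rubric is simple enough to state as code. A minimal sketch, assuming boolean fields per lead (the field names are illustrative; the audit spreadsheet carried nine columns):

```python
# Four-condition grading rubric from the audit. A lead is closeable
# only if it passes all four conditions; fail any one and it is
# diagnostic noise, not pipeline.
def grade(lead: dict) -> bool:
    return (
        lead["reachable"]          # contact answered inside the response window
        and lead["in_area"]        # address inside the drive-time service area
        and lead["service_match"]  # requested work is a service the shop performs
        and lead["in_window"]      # inquiry timing inside available estimating hours
    )

# Illustrative rows: one closeable lead, one unreachable, one out of area.
rows = [
    {"reachable": True,  "in_area": True,  "service_match": True,  "in_window": True},
    {"reachable": False, "in_area": True,  "service_match": True,  "in_window": True},
    {"reachable": True,  "in_area": False, "service_match": True,  "in_window": True},
]
print([grade(r) for r in rows])  # [True, False, False]
```

The point of writing it down, on paper or in code, is that the rule exists before the lead arrives, not after.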
The graded spreadsheet is the artifact this case file is built around. The forty-seven leads decomposed into four buckets. The buckets are below, largest first.
Twenty-eight of the forty-seven leads were never reached by a human voice. The office had a callback window of four-to-six hours during peak season; the industry data on roofing speed-to-lead says the lead's intent collapses inside sixty minutes. By the time the office dialed, the prospect had already taken the second or third estimate from a competitor. Of the twenty-eight, the office made an average of one-point-four dial attempts before marking the lead as cold. Some were genuinely unreachable. Most were reachable inside the first hour and unreachable by hour five.
Eleven leads were physically outside the roofer's service area. The roofer's coverage map ran ninety minutes from the shop in three directions and forty-five minutes in the fourth, where a mountain pass made longer hauls uneconomic. The platforms were not respecting this geometry. HomeAdvisor and Angi both used a zip-code radius set in the original onboarding two years prior. The radius included three out-of-area zip codes that the roofer had never serviced. The Local Services Ads geo targeting was using a five-mile bidding boost on a market center that had no relation to the shop's drive-time reality. The roofer was paying for clicks and contacts from regions where the job economics did not work even if he closed them.
Five leads asked for services the roofer did not perform. Three were gutter installations. One was solar-panel removal. One was attic ventilation work as a standalone job. The Angi profile had been set up with the category-cluster default that bundled adjacent services without the roofer's awareness. The LSA primary category was correct; the secondary categories had auto-populated with "gutter installation" and "attic insulation" based on platform-side keyword matching that the roofer had never seen. Spend went out the door against keyword categories the shop could not service. Each of the five conversations ended in "we don't do that," but the platform charged for the lead regardless.
Three of the forty-seven leads passed all four grading conditions. Reachable inside the response window. Inside the service area. For a service the roofer actually performs. Inquiring during a time the roofer was available to estimate. The roofer closed all three. The close rate against the closeable pool was one-hundred percent. The close rate against the gross pool was six-point-four percent. The first number is the operator's number. The second number is the platform's number. The two numbers describe two different things, and the operator had been running the business against the wrong one.
Three of forty-seven. That was the closeable count. The other forty-four were never the roofer's leads to close. The platforms had charged for them anyway, and the office had treated them all the same way. The grading rubric had to live on the wall before the next lead arrived, not after.
The verdict named the install order. Order matters. Cutting platform spend before installing the rubric leaves the office working leads against the same unwritten standard at lower volume. The rubric had to ship first.
One sheet of paper. Four conditions. Reachable inside the response window. Inside the service area against the drive-time map, not the zip radius. Service requested matches a service the shop performs. Inquiry timing inside available estimating hours. Every inbound lead gets scored on the four conditions before the office dials. The scorecard is the operator's lead-quality test, not the platform's. The same scorecard becomes the dispute language sent back to the platform on the ones that fail.
The HomeAdvisor and Angi zip lists are rebuilt against the ninety-minute and forty-five-minute drive-time radii, not the historical radius from onboarding. The Angi category bundle is collapsed to roofing only, with gutter, attic, and adjacent categories explicitly removed. The Local Services Ads geo configuration is recentered against the shop address with a service-area polygon rather than the default market-center radius. Categories are confirmed against the LSA secondary list with operator sign-off.
The office adopts a five-minute first-attempt rule during business hours, owner-enforced. A missed-call text-back automation is installed against the office line: any inbound that rings without pickup triggers a templated text within sixty seconds. Industry benchmarks put text-back at recovering roughly seventy-eight percent of misses, with about thirty percent of those recoveries converting into a returned call. The cost of installation is one configuration step; the cost of running it is zero.
Pull the trailing forty-seven again. Score each lead against the four conditions. Calculate two numbers per platform: closeable-rate (closeable leads divided by leads received) and cost-per-closeable (spend divided by closeable count). The two numbers replace cost-per-lead as the platform-comparison metrics. The platforms with closeable-rate under twenty percent get the spend cut by half. The platforms with closeable-rate under ten percent get the spend cut to zero. The cut is conditional on the score, not on the operator's mood.
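The two replacement metrics and the conditional cut rule can be sketched directly. A minimal Python sketch; the thresholds are the ones named above, and the aggregate 3-of-47 figures stand in for a per-platform breakdown the case file does not publish:

```python
# Replacement metrics: closeable-rate and cost-per-closeable, plus the
# conditional spend cut keyed to the score rather than the operator's mood.
def platform_metrics(spend: float, received: int, closeable: int):
    closeable_rate = closeable / received
    cost_per_closeable = spend / closeable if closeable else float("inf")
    return closeable_rate, cost_per_closeable

def spend_decision(closeable_rate: float) -> str:
    if closeable_rate < 0.10:
        return "cut to zero"
    if closeable_rate < 0.20:
        return "cut by half"
    return "keep"

rate, cpc = platform_metrics(spend=11_200, received=47, closeable=3)
print(round(rate, 3))        # 0.064 -- the aggregate closeable-rate
print(round(cpc))            # 3733  -- cost per closeable lead
print(spend_decision(rate))  # cut to zero (under the 10% threshold)
```

On the aggregate numbers the verdict is unambiguous: a 6.4 percent closeable-rate falls under the ten-percent line, and the cut is defensible against the score.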
Each "wrong service," "out of area," or "unreachable inside the platform's own service-level promise" lead gets a dispute filed inside the platform's dispute window. The template is one paragraph naming which scorecard condition failed and citing the platform's own service-area or category-coverage promise. Credits are tracked in a running ledger; cash refunds are not pursued because the platforms do not offer them. The credit count is the operator's standing argument on the next renegotiation conversation.
The eleven-thousand-two-hundred quarterly spend gets reallocated against a Google Business Profile activity push, two paid Google Search campaigns against the shop's own brand and against three high-intent storm-roofing queries, and a referral-program rebate for past customers. The reallocation does not raise the total spend. It moves spend toward channels where the operator owns the geo and category configuration and the closeable-rate runs above forty percent rather than under ten.
The compounding mechanism is denominator confusion. The roofer was reading cost-per-close against the gross lead count instead of the closeable lead count. The two reads describe different operations. Reading against the gross pool blames the platform, the dialer, the response time. Reading against the closeable pool blames the absence of a scorecard. Same forty-seven leads. Two completely different diagnoses.
What this case file is for: any contractor running paid lead platforms above one thousand dollars per month who cannot say, in writing, what the closeable-rate is against the leads being received. The fraction is the read; the scorecard makes the fraction visible. Without the scorecard, the operator is running the business against the platform's definition of a lead and paying for the gap.
The wife's legal pad was the right document, asked the wrong way. Lead cost per close was the platform's metric. Lead cost per closeable was the operator's metric. The first number stayed at three-thousand-seven-hundred-thirty-three dollars. The second number, once the scorecard ran, fell to about nine-hundred dollars per closeable lead inside the first sixty days. The math was the same. The denominator had moved.
Five Cents · Stan's note
The thing I keep seeing in contractor accounts is that the operator has been told the problem is lead quality for so long that the words have become a kind of fog. Lead quality is real. Lead quality is not what the operator controls. What the operator controls is the rubric the office uses to decide which of the inbound leads is worth a forty-minute drive and a sit-down estimate. The rubric is one sheet of paper. The platforms do not want the rubric to exist because the rubric makes the operator the buyer of leads rather than the receiver of leads.
The wife on the Tuesday morning with the legal pad is the person I do the work for. She is right about the number. She is wrong about which number. Three-thousand-seven-hundred-thirty-three dollars is the cost per close. It is also the cost of running an office without a written grading rule. The platforms charged for forty-seven. The closeable pool was three. The other forty-four were a tax on the absence of the scorecard. The scorecard costs nothing to print and changes everything about how the next inbound is handled.
What this case file is the answer to: if you are paying for leads and the math is starting to look like the math on the legal pad, the question is not which platform is better. The question is which of the leads you have already paid for were closeable to begin with. The Conversion Second Opinion produces the written rubric and the retroactive grade. The grade tells you which platforms to keep, which to cut, and which to dispute.
If this is the pattern in your shop
If the case file maps to your account — paid lead platforms running, closes flat, the cost-per-close number sitting on the bookkeeper's pad — the engagement that runs this diagnostic is the Conversion Second Opinion. A written verdict against the four-condition grading rubric, with the retroactive grade on your trailing quarter of leads. If the verdict says keep spending, you keep spending against a known closeable-rate. If the verdict says cut, the cut is defensible against a number.