Stan Consulting LLC · Marketing Atlas · Position · Lead Quantity Is Not Lead Quality


Lead Quantity Is Not Lead Quality.

Most contractors are buying lead quantity from platforms that systematically deliver low-quality leads, and grading those leads by tone of voice instead of by structural signal. The diagnostic replaces platform-vendor grading with a seven-signal contractor-side scorecard. The contractor scores the last thirty leads. The score, not the platform, is the read.

Section 01 · The claim.

Most contractors are buying lead quantity from platforms that systematically deliver low-quality leads, and then grading those leads by tone of voice when they should be grading by structural signal. The fix is a seven-signal contractor-side scorecard scored against the last thirty leads.

The claim has two parts. The first is structural: lead-platform grading is a marketing instrument owned by the seller. The platform grades the leads it ships. The platform credits the leads the contractor disputes. The platform decides which disputes are valid. None of that produces a defensible read on what the contractor actually bought. A contractor who reads platform-side dispositions as quality data is reading the seller's marketing as truth.

The second part is operational: the read that does produce a defensible answer is the seven-signal scorecard scored from the contractor's own desk. Source quality. Geo match. Intent signal. Urgency window. Budget signal. Decision authority. History pattern. The signals are mechanical, the scoring is consistent, and the composite produces a number per lead that the contractor can act on. Below threshold gets a templated message and zero bench time. Above threshold gets the full callback and the full diagnostic call.

The position is not "all paid leads are bad." Paid leads are sometimes excellent. The position is the contractor needs a grading system the platform does not own. Without it, every spend conversation runs on the seller's read of the product the seller sold.

Section 02 · What most people believe.

The conventional read is that lead quality is what the platforms tell you it is. The contractor opens the Angi dashboard, reads "Premium Lead" against a price tag, and treats the label as a quality signal. When the lead does not close, the contractor blames sales follow-up, the market, or the season. The platform's grading stays unchallenged because the platform owns the only system the contractor has ever scored against.

Belief 01

"If I just buy more leads, the math will work out." The argument is that paid leads are statistical and that volume averages out the bad ones. The argument is mathematically defensible at very high volumes. The argument fails for the small contractor running 30 to 100 leads a month, because the bad fraction is structurally higher (out-of-area, out-of-service, tire-kicker) and the price-per-real-lead climbs as the volume climbs. The contractor is not averaging. The contractor is paying the platform a recurring fee to send more of the same.

Belief 02

"The platform's lead-quality grade is the quality grade." The argument is that Angi, HomeAdvisor, Local Services Ads, and similar platforms grade the leads they ship, and that the grade is a signal. The grade is the seller's marketing. A "Premium Lead" tag does not mean the lead is in-service-area, has a real job, has a budget, or has decision authority. The grade is correlated with the platform's price tier, not with the contractor's close rate. The grade is sales copy at the SKU level.

Belief 03

"The disputes process protects me." The argument is that the platform refunds bad leads on dispute, so the cost of low quality is bounded. The dispute process is a credit system, not a refund. Credits force the contractor to buy more leads to recover the spend on the bad ones. Throughout 2025, the LSA disputes process was reduced to the point where out-of-city and out-of-industry leads now route through automation that does not credit them. The dispute process is a friction tax, not a defense.

Belief 04

"Tone-of-voice grading on the first call is enough." The argument is that contractors learn to read a lead in the first thirty seconds of the call. The argument is partly true and dangerously partial. Tone-of-voice grading catches the obvious tire-kicker. It misses the polite caller who has no budget. It misses the in-spec caller who is the wrong decision-maker. It misses the urgent-sounding caller whose job is six months out. The signals that actually predict close are structural, not tonal, and they need a scorecard.

Every belief in this list is supported by the seller of the leads. The seller's read of the product the seller sold is not the audit. The audit is what the contractor builds on the contractor's side of the desk.

Section 03 · Why that belief fails.

The structural argument is that lead-platform grading produces a number that correlates with platform pricing tiers, not with contractor close rate. The two reads are decoupled. Improvements in platform grading do not move close rate. Improvements in scorecard grading do.

Five failure modes follow.

Failure mode one. The platform writes the grade against its own inventory. A platform that grades "Premium" against its own pricing tiers has no incentive to ship leads that close at fifty percent. It has incentive to ship leads that look like Premium leads at the price point. The two are different. The contractor reads the grade and pays the price and never builds the read that says "the grade did not predict the close." The seller does not produce the read against the seller's product. The buyer has to.

Failure mode two. Tone-of-voice grading is high-precision and low-recall. The contractor's gut on the first call is sharp on the obvious. The obvious tire-kicker is only one of several patterns that do not close. The polite caller with no budget closes at zero. The in-spec caller who is the renter, not the owner, closes at zero. The urgent-sounding caller whose job is six months out closes at five percent in week one and not at all after. Tone-of-voice catches the first pattern and lets the other three through. The composite scorecard catches all four.

Failure mode three. The disputes process is a credit ladder, not a refund. Platforms credit disputes in their own currency. The credits force the contractor back into the pipeline. The contractor who disputed twelve leads in a month finds the platform reduced future bills by the equivalent of three normal leads, none of which were guaranteed to be better quality than the disputed ones. The credit system is a marketing instrument that retains the contractor's spend without retaining the contractor's confidence. The fix is not better disputes. The fix is the contractor-side scorecard that prevents the contractor from paying full price for inventory that scored zero before the credit even applied.

Failure mode four. Bench time is the hidden cost. A bad lead does not cost the contractor only the platform's per-lead price. It costs the truck-roll, the estimator's hour, the dispatcher's call, the missed callback to a good lead that was on hold while the bad lead got the conversation. For a small contractor with three field crews, the bench-time cost of a fifty-dollar bad lead is often three hundred dollars in opportunity cost. The platform never carries the bench cost; the contractor always does. The scorecard reroutes bench time to the leads that actually score.

Failure mode five. The contractor cannot defend the spend conversation. When the spouse, the partner, the bookkeeper, or the lender asks why the lead spend keeps climbing and the closed-job count does not, the contractor has no defensible read. The platform's grading is the seller's. The contractor's gut is anecdotal. The scorecard is the missing artifact that makes the conversation defensible. With the scorecard in hand, the contractor can name how many leads scored above threshold, how many of those closed, and what the real cost per closed job was. Without it, the conversation is feelings against feelings.

The conventional view treats lead quality as something the platform measures and the contractor receives. The structural reality is that lead quality is something the contractor measures against the platform's inventory.

Section 04 · The SC position.

Lead quality is a seven-signal composite scored from the contractor's desk. Source quality. Geo match. Intent. Urgency. Budget. Decision authority. History. The composite produces a 0-100 score per lead. The score, not the platform tag, is the read.
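The roll-up can be sketched as a weighted composite. The equal weights, the 0-to-1 per-signal grades, and the `composite_score` name below are illustrative assumptions, not SC's published rubric; a contractor would calibrate the weights against their own close history.

```python
# Illustrative sketch of the seven-signal composite.
# Weights and grade scales are assumptions for demonstration,
# not Stan Consulting's published rubric.

SIGNALS = ["source", "geo", "intent", "urgency", "budget", "authority", "history"]

# Equal weights as a starting assumption; recalibrate against close history.
WEIGHTS = {s: 1.0 for s in SIGNALS}

def composite_score(grades: dict) -> float:
    """Roll per-signal grades (each 0.0 to 1.0) into a 0-100 composite."""
    total_weight = sum(WEIGHTS.values())
    weighted = sum(WEIGHTS[s] * grades.get(s, 0.0) for s in SIGNALS)
    return round(100 * weighted / total_weight, 1)

lead = {"source": 1.0, "geo": 1.0, "intent": 0.5, "urgency": 0.5,
        "budget": 0.5, "authority": 1.0, "history": 0.5}
print(composite_score(lead))  # 71.4
```

A missing signal scores zero by default, which matches the doctrine's bias: an unverified signal is not a credit.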

Each signal is named below with its scope, its scoring rule, and the test that says it is doing its job.

S1

Source quality

The lead source matters more than the platform's grade against the lead. Direct call from a Google Business Profile listing carries different baseline quality than a paid form-fill on Angi. The signal is the source channel and the channel's historical close rate, not the platform's tag on the lead.

  • Direct phone · from GBP, organic search, or referral · baseline high
  • Local Services Ads · service-match accuracy higher than HomeAdvisor average
  • Paid form-fill · HomeAdvisor / Angi / Meta · baseline medium-low
  • Aggregator resold lead · baseline low; high tire-kicker fraction

Test it is doing its job: the source-quality signal predicts close rate by source within ten percentage points after thirty leads.

S2

Geo match

The lead's geographic match against the contractor's service area. In-service-area at the address-line scores high. Adjacent-zip with a defensible drive scores medium. Out-of-area, regardless of how the platform billed, scores zero. The geo signal exposes the platforms shipping out-of-area inventory the contractor is charged for.

  • In-service-area · address-line match · full credit
  • Adjacent-zip · defensible drive · partial credit
  • Out-of-area · zero credit, dispute-eligible if platform supports it
  • Service-area-undefined · treat as zero until the prospect names the address

Test it is doing its job: the geo-match signal exposes the out-of-area inventory the contractor was previously paying for without knowing.

S3

Intent signal

The intent encoded in the lead itself. Stated specific job with scope detail scores high. Generic "looking for a quote" scores medium. Form-filler with no detail or wrong service entirely scores low. The intent signal separates the buyer who described the job from the prospect who filled the form.

  • Specific job stated · "replace 30-year-old water heater" · high
  • Service category stated · "need plumbing work" · medium
  • Form-filler · no scope, generic ask, browsing pattern · low
  • Wrong service entirely · HVAC form against roofing contractor · zero

Test it is doing its job: the intent signal predicts close by tier within fifteen percentage points after thirty leads.

S4

Urgency window

The window the prospect stated for service. Same-week scores high. 30-day scores medium. Six-months-out scores low. Urgency is a structural close predictor and a re-engagement trigger; leads that score low here go into the nurture sequence, not the active callback queue.

  • Same-week or emergency · high · route to active callback
  • 30-day window · medium · route to active callback with reduced priority
  • 90-day window · medium-low · route to scheduled follow-up
  • Six-months-plus · low · route to nurture sequence only

Test it is doing its job: the urgency signal correlates with first-30-day close rate within ten percentage points.

S5

Budget signal

The budget signal the prospect carries into the call. Stated budget at or above service floor scores high. No budget stated scores medium. Stated budget below floor scores zero, because the lead cannot be closed at margin and was never going to be. The budget signal exposes the leads the contractor was never going to close on the unit economics.

  • At or above service floor · full credit
  • No budget stated · partial credit, surface in diagnostic call
  • Below service floor · zero credit, surface tactfully
  • Free-estimate-only · partial credit, classify by other signals

Test it is doing its job: the budget signal exposes the leads that closed at zero margin or lost margin in the historical book.

S6

Decision authority

The lead's authority to say yes. Owner-occupant decision-maker scores high. One spouse without the other scores medium until the other is in the room. Renter or tenant scores low. The authority signal exposes the leads that produce a long quote conversation with no decision capacity at the end.

  • Owner-occupant, decision-maker present · full credit
  • One of two spouses · partial credit, route to "both on the call" follow-up
  • Renter / tenant · low credit, surface landlord pathway if applicable
  • Referral-from-someone-else · partial credit, route to source verification

Test it is doing its job: the authority signal predicts which quotes go silent versus which quotes get a decision.

S7

History pattern

The prospect's history with the contractor and the category. First call in this category scores neutral. Third call after two prior bids scores high (real urgency, real intent). Known quote-shopper or known tire-kicker scores zero. The history signal exposes the patterns the platforms do not see because they live in the contractor's own records.

  • First call in category · neutral baseline
  • Third call after two bids · high · real urgency
  • Known quote-shopper · zero · route to nurture only
  • Prior client in different category · high · route to warm callback

Test it is doing its job: the history signal flags the patterns that the contractor knew anecdotally and never documented.

Section 05 · The mechanism.

The working spec is ten numbered steps. The audit runs against the contractor's last thirty paid leads. The install completes in roughly seventy-two hours of audit time and produces the scorecard as an operating filter, not a one-time report.

M1 · The audit pass · Score the last thirty leads

Pull the last 30 paid leads in one sheet

Export the last thirty paid-platform leads into a single sheet with timestamp, source, contact details, stated job, geo, and current disposition. Angi, HomeAdvisor, Local Services Ads, Meta lead-form leads all in one column set. The single sheet is the audit foundation; the platforms cannot produce it and will resist exporting cleanly. The export step is half the work and is the step contractors most often skip.
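A minimal sketch of that single sheet as a CSV load. The column names follow the fields this step lists; the names and the `load_leads` helper are illustrative assumptions, not any platform's export format.

```python
import csv

# Unified column set for the audit sheet. Names are illustrative,
# matching the fields the export step names, not a platform's schema.
COLUMNS = ["timestamp", "source", "contact", "stated_job", "geo", "disposition"]

def load_leads(path: str) -> list[dict]:
    """Read the exported sheet and keep only the audit columns."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Drop platform-specific extras so every source shares one column set.
    return [{c: row.get(c, "") for c in COLUMNS} for row in rows]
```

Every source (Angi, HomeAdvisor, Local Services Ads, Meta) gets normalized into the same row shape before any scoring happens; that normalization is the "one column set" the step describes.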

Score Signal 1 · Source quality

Score each lead against the lead source. Direct phone from a GBP listing scores high. Local Services Ads ranks above HomeAdvisor on average for service-match accuracy, below organic search for intent quality. Aggregator-resold leads score lowest. The source-quality grade is a function of the channel, not of the platform's tag on the lead.

Score Signal 2 · Geo match

Score each lead against the geo signal. In-service-area at address-line scores high. Adjacent-zip with defensible drive scores medium. Out-of-area, regardless of platform billing, scores zero. The geo grade is the first place the audit usually surfaces money the contractor was paying without knowing.

Score Signal 3 · Intent signal

Score each lead against intent. Stated specific job with detail scores high. Generic "looking for a quote" scores medium. Form-filler with no detail or wrong service scores low. The intent grade is what separates a stated buyer from a prospect who filled a form.

Score Signal 4 · Urgency window

Score each lead against urgency. Same-week service requests score high. 30-day windows score medium. Six-months-out exploratory scores low. The urgency grade routes the lead into the right queue (active callback, scheduled follow-up, nurture) and the queue routing is what protects bench time downstream.

Score Signal 5 · Budget signal

Score each lead against budget. Stated budget within service tier scores high. Stated budget below floor scores zero. No budget stated scores medium. The budget grade exposes the leads that closed at zero or lost margin in the historical book and were classed as wins in the platform dispositions.

Score Signal 6 · Decision authority

Score each lead against authority. Owner-occupant decision-maker scores high. One of two spouses scores medium. Renter or referral-from-someone-else scores low. The authority grade exposes the long-quote-no-decision pattern that drains estimator hours.

Score Signal 7 · History pattern

Score each lead against the contractor's own history. First call in this category scores neutral. Third call after two prior bids scores high. Known quote-shopper or known tire-kicker scores zero. The history grade brings the contractor's existing pattern recognition into the scorecard as a signal.

Composite the score and re-grade the thirty

Composite the seven signals into a single 0-100 grade per lead. Re-grade the last thirty. The composite produces the contractor's first honest read on how many of the leads the contractor paid for were actually closeable inventory. Most accounts find the closeable fraction is twenty to forty percent, not the eighty percent the platforms imply through tagging.
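The closeable-fraction headline falls directly out of the composite scores. A minimal sketch, with an assumed threshold of 60 on the 0-100 scale and invented scores for illustration:

```python
def closeable_fraction(scores: list[float], threshold: float = 60.0) -> float:
    """Fraction of the scored book at or above the acting threshold."""
    if not scores:
        return 0.0
    # Booleans sum as ints, so this counts above-threshold leads.
    return sum(s >= threshold for s in scores) / len(scores)

# Thirty illustrative composite scores, not real account data.
book = [82, 15, 64, 40, 71, 22, 55, 90, 33, 61] * 3
print(f"{closeable_fraction(book):.0%}")  # 50%
```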

Install the scorecard as the operating filter

Move the scorecard from one-time audit to operating filter. Every inbound paid lead gets scored before it gets called back. Below-threshold leads get a templated message, not bench time. Bench time gets allocated to leads above threshold. The dispute-with-platform conversation becomes documented at the moment of scoring, not weeks later from memory. The audit completes; the system stays.
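As an operating filter, scoring becomes a routing decision made before any callback. The threshold, queue names, and urgency cutoffs below are assumptions that follow the routing the urgency signal describes:

```python
def route(score: float, urgency_days: int, threshold: float = 60.0) -> str:
    """Route a scored inbound lead; below-threshold never reaches the bench."""
    if score < threshold:
        return "templated_message"        # polite decline, zero bench time
    if urgency_days <= 7:
        return "active_callback"          # same-week or emergency
    if urgency_days <= 30:
        return "active_callback_reduced"  # 30-day window, reduced priority
    if urgency_days <= 90:
        return "scheduled_follow_up"      # 90-day window
    return "nurture_sequence"             # six-months-plus

print(route(41, 3))    # templated_message
print(route(82, 200))  # nurture_sequence
```

The design choice is that the threshold gate runs first: an urgent-sounding lead that scored below threshold still gets the templated message, which is exactly the tone-versus-structure distinction the position argues.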

Section 06 · Evidence and case links.

The Position page is the doctrine. The links below are where the doctrine has been applied or referenced for a different audience. Each link is a test the doctrine has had to pass.

Primary case

The Roofer Who Paid for 47 Leads and Closed 3

The composite case file where a roofer ran an Angi spend of $3,400 for forty-seven leads in eight weeks and closed three jobs. The disagreement between the platform's grading and the contractor-side scorecard was thirty-six points. The fix was the scorecard install, not a vendor change. The closed-job count moved before the spend did.

Read the case file →

Companion case

The HVAC Operator Who Spent Five Figures on Tire-Kickers

The composite case file where an HVAC contractor's Google Ads geo, intent, and conversion-goal misconfiguration delivered five months of out-of-area, wrong-service, and form-filler leads at five figures of spend. The scorecard, run retroactively, showed that ninety-one percent of the leads scored below threshold.

Read the case file →

Reference

Contractor Lead Grading

The Reference entry on contractor lead grading as a category. The scoring rubric definitions, the platform-vendor grading comparison, and the gap the scorecard fills. The reference page is the doctrine's vocabulary.

Read the reference →

Reference

Lead Quality Score

The Reference entry on the composite lead-quality score itself. How the seven signals roll into a 0-100 number. How the threshold gets set. How the scorecard becomes an operating filter rather than a one-time report.

Read the reference →

Section 07 · Where it breaks.

Every methodology has assumptions. Naming the assumptions is part of defending the position. The seven-signal scorecard assumes the contractor has a roughly stable book of paid leads and a roughly stable service area. The methodology does not handle every configuration.

01

Brand-new contractors with no lead history

The scorecard scores against a contractor's own pattern. A contractor with no historical book has nothing to score signal 7 against and a thin base for signals 5 and 6 calibration. The methodology defaults to a baseline-by-trade scorecard for the first thirty leads, with re-calibration after the contractor's own history accumulates.

02

Pure organic / referral books

Contractors with zero paid lead spend and a one-hundred-percent referral book do not have the inventory the scorecard is designed against. The methodology still applies to the referral channel for filtering and routing, but the dispute-against-platform component does not. The next-engagement scope is different.

03

Crisis-mode contractors mid-season

Contractors whose phone has stopped ringing mid-season and who need calls inside seven days do not have the calendar for an audit-first install. The methodology defers to crisis-mode routing (route to F7.4 Crisis Consulting if applicable); the scorecard ships once the immediate crisis is stabilized.

04

National contractors operating multi-state

Contractors operating across multi-state footprints have geo signals that vary by jurisdiction and a service-match signal that varies by trade-license. The methodology applies, with per-state calibration. The base scorecard is single-jurisdiction by default; the multi-state install is a longer engagement.

05

Heavy commercial or new-construction books

Commercial GCs and new-construction subs run on RFPs, plan-rooms, and bid lists, not residential paid-lead inventory. The scorecard partially applies to inbound residential remodels and service calls; the commercial-bid lane needs a separate qualification framework. The methodology does not currently extend into that lane.

Section 08 · What it costs to apply.

The seven-signal scorecard installs as the Conversion Second Opinion for contractors who want the read on its own. The audit runs against the last thirty paid leads. The deliverable is a written diagnostic, a scored set, a buyer-path map, and a 60-day follow-up call. The numbers below are the comparison.

Diagnostic only

Conversion Second Opinion

$999 · 72-hour verdict

A written diagnostic against the seven signals. Last thirty leads scored. Source-quality, geo-match, intent, urgency, budget, authority, and history grades broken out per lead. The three named install moves that will move scores inside sixty days. The buyer-path map. No restructure, no implementation. The read.

See the engagement →

Diagnostic plus install

Sprint or System Build

Engagement-scoped · read first, scope second

The diagnostic runs first as the scoping artifact. The Sprint or System Build engagement runs the install of the scorecard into the contractor's CRM, the templated below-threshold response, and the documented dispute trail. Pricing is set against the install scope after the read.

See the engagement formats →

The value equation, named.

Dream outcome
More closed jobs from the same lead spend. Measured in calls answered above threshold and jobs closed against scored leads, not in "leads received." A contractor closing three out of forty-seven moves toward eight out of forty-seven once below-threshold leads stop consuming bench time.
Perceived likelihood
Anchored to verified industry pattern: Core6 Marketing reports seventy to eighty percent of small-contractor ad spend goes to irrelevant traffic; ContractorTalk forums document the Angi-style "tire-kicker" complaint at scale. The scorecard surfaces the same fraction the industry has been complaining about for a decade.
Time delay
Seven business days from start to written verdict. The audit runs against the contractor's last thirty leads. The Conversion Second Opinion delivers the read in seventy-two hours of audit time; the engagement window is seven business days end-to-end including intake.
Effort
The contractor exports thirty leads into a sheet and answers a thirty-minute intake call. That is the contractor's effort. SC runs the scoring pass, builds the composite, names the three install moves, and writes the buyer-path map.
Risk reversal
We tell you which three changes will move calls inside sixty days, or you keep the diagnostic written report and pay nothing. The risk reversal is anchored to the diagnostic, not to a vague satisfaction guarantee. The contractor keeps the written deliverable in either path.
Value stack
Written diagnostic ($1,500 equivalent in agency-side audits) · three named install moves with sequencing ($1,200) · buyer-path map specific to the contractor's trade ($800) · ad-spend leak ledger across paid platforms ($600) · sixty-day follow-up call to recalibrate the scorecard ($400). Stack equivalent: $4,500.
Defends in 15 seconds against operator loss
If your last thirty leads cost more than $999 and you cannot name how many were in-service-area, you are paying for the answer you do not have.

What you are already paying.

  • One bad lead month on Angi · $1,200–$2,500 in fees against leads that scored below threshold
  • One agency month with no clear ROI · $2,000–$8,000 against a report the contractor cannot defend
  • One missed call per day · $30,000–$50,000 annually in lost-job revenue per ServiceBusiness.ai pattern

What this costs: $999. Once. With the risk reversal above.

The fix is cheaper than one bad week of inventory. The reason it does not get bought is the contractor still believes the platform's grade was the read.

Five Cents · Stan's note

The thing I want contractors to internalize is that the platforms are not bad actors and they are not partners. They are sellers of inventory. Sellers grade what they sell. A contractor reading the seller's grade as truth is reading the seller's marketing as truth. The fix is not anger at the platform. The fix is a system on the contractor's side of the desk that scores what arrives.

The piece I keep watching break is the bench-time math. A roofer who pays fifty dollars for a lead and rolls a truck to a wrong-address out-of-area form-filler has not spent fifty dollars. The roofer has spent fifty dollars plus three hundred in opportunity cost plus the goodwill of the actual buyer who got hold music while the wrong call took priority. The scorecard moves the bench time before it moves the spend. The spend conversation comes second; the calendar conversation comes first.

What this position is for: if you have run paid leads for ninety days and you cannot name which seven signals predict close for your trade, you have this position. The Conversion Second Opinion runs the audit in seventy-two hours and ships the scorecard back as a working filter. The next move is the scoring pass; the scoring pass is what reveals which leads were inventory and which were noise. Everything downstream of the scorecard becomes scopable for the first time.

Stan Tscherenkow · Marketing Atlas · 2026-05-10

Section 10 · Related Atlas entries.

The Reference pages in the construction cluster, the case files this position was written against, the companion positions, and the hub. The graph below is the cluster map.

The power object · reward for reading

The Lead-Quality Scorecard.

One sheet. Seven signals. Thirty rows. A composite score per lead and a threshold the contractor sets. The contractor scores the last thirty leads and walks out with a number per lead, a closeable-fraction headline, and the three install moves that move the number. The scorecard is the read; the scorecard is the deliverable; the scorecard is what makes the platform conversation defensible.

Open the Lead-Quality Scorecard → · tool forthcoming

If you read this and recognized your last thirty leads

Score the leads. Then defend the spend.

The Conversion Second Opinion runs this position against your last thirty paid leads in seventy-two hours. A written verdict across the seven signals, the three install moves named, the buyer-path map drawn. If the verdict says install, the engagement formats are scoped against the read. If the verdict says hold, you keep the read and act on it yourself.