
Google Ads · Reporting

How to evaluate a Google Ads performance report without being misled

Published April 22, 2026 · 11 minute read · By Stan Tscherenkow

Quick answer

A useful Google Ads performance report shows cost per acquisition against target margin, brand and non-brand split, platform ROAS reconciled against actual revenue, year-over-year comparison, impression share lost to budget, and verified conversion tracking. A report missing any of these is presenting platform activity, not commercial outcome. CTR, impressions, and quality score are context, not conclusions.

Key takeaways

What this guide will give you

  1. CTR, impressions, and quality score are activity metrics. They measure whether the account is doing things. They do not measure whether those things are producing revenue.
  2. ROAS reported by the Google Ads platform is not the same as actual revenue return. Attribution inflation from view-through conversions and modeled data makes platform ROAS unreliable without reconciliation.
  3. The metric that matters is cost per acquisition against your actual margin. If your report does not show this number, the report is not answering the commercial question.
  4. Month-over-month comparisons are less useful than same-period year-over-year comparisons for businesses with any seasonality. A 20% drop in October vs September may be the market, not the account.
  5. Conversion tracking accuracy must be verified before trusting any reported conversion metric. A broken conversion tag makes every other number in the report meaningless.
  6. A report that shows all metrics improving but revenue flat is not a reporting success. It is a signal that the wrong things are being measured.

What this article covers

  1. Why most Google Ads reports answer the wrong question
  2. The metrics that actually connect to revenue
  3. How to reconcile platform ROAS with actual revenue
  4. Reading conversion data: what to verify before trusting
  5. The comparison problem: month-over-month vs year-over-year
  6. What a useful Google Ads report actually looks like
  7. A six-question checklist for evaluating any Google Ads report
  8. Questions operators ask about Google Ads reports
  9. Final thoughts

Most Google Ads reports are written to the wrong audience. They are built to make the agency look like it did work, not to help the business decide whether the work is producing revenue. The distinction matters. After twenty years of paid media work and more than forty account audits, the pattern that shows up most often is a monthly report full of improving activity metrics over a period when the business bank account did not notice. CTR rose. Quality score improved. Impression share climbed. Revenue stayed flat. That is not a reporting success. That is a reporting failure the business is paying to receive. This guide walks through the adjustments that convert a platform report into a commercial one. For the full pillar, see the Google Ads guides collection.

Why most Google Ads reports answer the wrong question

The default Google Ads export shows what the platform measured. Impressions. Clicks. Click-through rate. Average cost per click. Quality score. Conversions. Conversion value. All real numbers. None of them answer whether the account is commercially viable. That question requires numbers from outside the platform: cost of goods, overhead, target margin, actual revenue in the accounting system. A report that stays inside the platform can only show activity. It cannot show outcome.

The five tells of a platform-only report:

  1. It leads with CTR, impressions, and quality score instead of cost per acquisition against target.
  2. It reports a single blended ROAS with no brand and non-brand split.
  3. Its ROAS is never reconciled against the revenue system.
  4. Its comparisons are month-over-month only.
  5. It says nothing about whether conversion tracking was verified this period.

The report can still be delivered on time and look professional. It just will not help the business decide what to do. Decisions require metrics that reconcile against revenue. Activity metrics are context for those decisions, not substitutes for them.

The metrics that actually connect to revenue

Four metrics govern whether a Google Ads account is working. Cost per acquisition against target. Revenue attributed, reconciled against the accounting system. Impression share lost to budget and to rank. Conversion rate segmented by campaign type and device. Everything else is context. CTR helps interpret why CPA moved. Quality score helps interpret why CPC moved. Search impression share helps interpret why volume moved. But the four metrics above are the numbers a business owner can take to a board meeting and defend.

The benchmarks I use as first-pass reads:

  1. ROAS variance against the revenue system: 10 to 30 percent is normal attribution difference.
  2. Impression share lost to budget: above 20 percent on a priority campaign is a capacity problem.
  3. Spend on irrelevant queries: below 15 percent is well-managed; above 30 percent is a leak.
  4. Click-through rate: 8 to 20 percent on brand, 3 to 7 percent on non-brand search, 0.5 to 2 percent on Shopping.

When the report leads with these four and uses the rest as supporting context, the report is doing its job. When the report leads with the rest and buries these four, the report is hiding the answer the business came to find.

How to reconcile platform ROAS with actual revenue

Platform ROAS is a calculated number, not a measured one. Google Ads takes the conversion values it received, applies the attribution model, layers on modeled conversions and view-through conversions where enabled, and produces a ratio. The ecommerce platform or CRM measures a different number: actual orders, actual revenue, across all channels. The two rarely agree. The useful question is how much they disagree and whether the disagreement is explainable.

The reconciliation pattern:

  1. Export the conversion value Google Ads attributed for the period.
  2. Pull actual paid-search revenue for the same window from the ecommerce platform or CRM.
  3. Compute the variance between the two numbers.
  4. Decide whether the gap is an explainable attribution difference or a tracking problem.

A 10 to 30 percent variance is normal and driven by legitimate attribution differences. A 40 to 60 percent variance typically means view-through is too generous, attribution window is too long for the buying cycle, or a second tag is double counting. Above 60 percent means tracking is broken and the reported ROAS is fiction. The reconciliation is a 15-minute exercise that changes how the next decision gets made.
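As a sketch of that arithmetic, using hypothetical figures and the variance bands described above:

```python
# Reconcile platform-reported ROAS against revenue-system ROAS.
# All figures are hypothetical placeholders, not benchmarks.

ad_spend = 10_000             # Google Ads spend for the window
platform_conv_value = 52_000  # conversion value Google Ads attributed
actual_revenue = 38_000       # paid-search revenue per the ecommerce/CRM system

platform_roas = platform_conv_value / ad_spend  # 5.2
actual_roas = actual_revenue / ad_spend         # 3.8

# Variance: how far the platform number overstates the measured one.
variance = (platform_conv_value - actual_revenue) / actual_revenue

if variance <= 0.30:
    verdict = "normal attribution difference"
elif variance <= 0.60:
    verdict = "check view-through, attribution window, and double counting"
else:
    verdict = "tracking is broken; reported ROAS is fiction"

print(f"platform ROAS {platform_roas:.1f}, actual ROAS {actual_roas:.1f}, "
      f"variance {variance:.0%}: {verdict}")
```

In this example the platform overstates revenue by about 37 percent, which lands in the middle band: not fiction, but not a number to scale budget against without checking attribution settings first.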

Reading conversion data: what to verify before trusting

Every metric that depends on conversion data is only as reliable as the conversion tracking beneath it. Cost per acquisition, ROAS, conversion rate, smart bidding performance: all are downstream of the conversion tag firing correctly. An account where the conversion action double counts through both GTM and the Google tag will report twice the real conversions, halving the apparent cost per acquisition, and smart bidding will optimize toward the wrong signal. The reported improvement is the system adjusting to bad data.

Four verification checks before trusting the report:

  1. Each primary conversion action has fired within the last seven days (Tools and settings › Conversions).
  2. The conversion count reconciles against the actual source, for example Shopify order count or CRM lead count, within 10 to 15 percent.
  3. No conversion action double counts through both GTM and the Google tag.
  4. The attribution window and view-through settings match the buying cycle, so modeled conversions are not inflating the count.

When these four pass, the downstream metrics are trustworthy. When any fails, every number that depends on conversion data is suspect. The correct sequence is fix tracking, collect 14 days of clean data, then evaluate performance. Evaluating performance on broken tracking produces decisions that scale the wrong thing.
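The count-reconciliation part of those checks can be sketched as a quick script. The counts and the ratio cutoff here are illustrative assumptions, not fixed thresholds:

```python
# Compare platform-reported conversions against the source system
# (e.g. Shopify orders or CRM leads). Hypothetical counts for illustration.

platform_conversions = 236  # conversions reported by Google Ads
source_count = 118          # actual orders/leads in the source system

ratio = platform_conversions / source_count  # 2.0 in this example

if ratio >= 1.85:
    finding = "likely double counting, e.g. GTM tag and Google tag both firing"
elif abs(ratio - 1) <= 0.15:
    finding = "within tolerance; downstream metrics are usable"
else:
    finding = "variance beyond tolerance; audit the tag before trusting the report"

print(f"platform/source ratio {ratio:.2f}: {finding}")
```

A ratio near 2.0, as here, is the signature of the double-counting case described above: two tags firing on the same conversion.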

If you want the exact priority list for your specific account rather than the general framework, the Conversion Second Opinion delivers it in 72 hours.

The comparison problem: month-over-month vs year-over-year

Month-over-month comparison is the default in most reports and the wrong default for most businesses. Any business with seasonality, holiday exposure, B2B buying cycles tied to fiscal calendars, or even weather-sensitive demand reads monthly comparisons as noise. A 20 percent drop from November to December can mean the account is broken, or it can mean the category normally drops 30 percent in December. One comparison cannot tell you which. Only year-over-year can.

When to use which comparison:

  1. Month-over-month: accounts in a genuinely linear growth phase with no calendar exposure, and as a flag for short-term changes that need immediate attention.
  2. Year-over-year: any business with seasonality or recurring calendar effects, which is most businesses.
  3. Both together: wherever possible, because the pair catches false trends neither catches alone.

Reports that show only month-over-month create urgency around normal seasonal movement. Reports that show only year-over-year miss short-term changes that need immediate attention. Both should be present. If only one is, the report is presenting an incomplete frame of the period.
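Reading the two comparisons side by side is simple arithmetic. The revenue figures here are hypothetical:

```python
# Read month-over-month and year-over-year together.
# Hypothetical revenue figures for illustration.

this_december = 80_000
last_november = 100_000  # prior month
last_december = 78_000   # same month, prior year

mom_change = this_december / last_november - 1  # -20%: looks alarming
yoy_change = this_december / last_december - 1  # about +3%: it is seasonality

print(f"MoM {mom_change:+.0%}, YoY {yoy_change:+.0%}")
```

The month-over-month number alone would trigger a panicked account review; the year-over-year number says the account is slightly ahead of the normal seasonal dip.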

What a useful Google Ads report actually looks like

A useful report is one page. It leads with cost per acquisition against target, segmented brand and non-brand. It states platform ROAS next to revenue system ROAS and notes the variance. It shows year-over-year for the primary metrics and month-over-month as secondary context. It reports impression share lost to budget and to rank. It lists the top 10 search terms by spend with a column for conversion. It states whether conversion tracking was verified this period. It ends with a specific recommendation tied to a specific metric that moved.

Nine things to demand in every report:

  1. Spend, split by brand and non-brand campaigns
  2. Conversions, split by brand and non-brand campaigns
  3. Cost per acquisition against target
  4. Revenue attributed, reconciled against the revenue system
  5. ROAS, platform and revenue-system, with the variance noted
  6. Impression share lost to budget and to rank
  7. Spend on irrelevant search queries
  8. Conversion tracking verification status
  9. Year-over-year comparison for the primary metrics

When those nine are present, the report is diagnostic. When they are absent, the report is ceremonial. Agencies that produce ceremonial reports tend to deliver ceremonial results.

The framework

A six-question checklist for evaluating any Google Ads report

  1. Does the report separate brand and non-brand? (2 minutes)

    Blended reporting hides that brand campaigns produce most of the ROAS and non-brand produces most of the growth. A single ROAS number is a compliance answer, not a diagnostic one. Demand the split.

  2. Is conversion tracking verified this period? (3 minutes)

    Any report should cite when tracking was last audited. A broken conversion tag invalidates every downstream metric. If the report does not state verification status, it is presenting numbers of unknown reliability as if they were reliable.

  3. Are comparisons year-over-year, not only month-over-month? (2 minutes)

Businesses with any seasonality require year-over-year to read performance honestly. October against September is noise. October against last October is signal. Both together put the period in proper context.

  4. Does ROAS reconcile against actual revenue? (3 minutes)

    Platform ROAS should be checked against ecommerce or CRM revenue for the same window. A 10 to 30 percent variance is tolerable. Above 30 percent means attribution, view-through, or modeled conversions are inflating the reported number.

  5. Does the report show impression share lost to budget? (2 minutes)

    Impression share lost to budget above 20 percent on a priority campaign is a capacity problem. A report that does not show this metric cannot answer whether scaling budget is the right move or whether structure must be fixed first.

  6. Does the report list spend on irrelevant queries? (3 minutes)

    The report should state what percentage of spend went to queries the business did not want. Below 15 percent is well-managed. Fifteen to 30 percent is typical. Above 30 percent is a leak no bid optimization can fix.

Questions operators ask about Google Ads reports

What metrics should I ask my agency to include in monthly reports?

Spend, conversions, cost per acquisition, revenue attributed, and ROAS, all split by brand and non-brand campaigns. Impression share lost to budget and to rank. Spend on irrelevant search queries. Conversion tracking verification status. Year-over-year comparison for seasonality. A report without those nine elements is selective, not complete.

What is the difference between platform ROAS and actual revenue return?

Platform ROAS is calculated by Google Ads from the conversion values it received and attributed. Actual revenue return is what the business bank account saw. The two differ because of view-through conversions, modeled conversions, cross-device double counting, and attribution across multiple channels. Variance of 10 to 30 percent is normal. Above 30 percent indicates tracking or attribution is misconfigured.

How do I verify my conversion tracking is working correctly?

Check three things. First, each primary conversion action has fired within the last seven days (Tools and settings › Conversions). Second, the conversion count reconciles against the actual source, for example Shopify order count or CRM lead count, within 10 to 15 percent. Third, no conversion action double counts through both GTM and the Google tag. Broken tracking invalidates every other metric.

What does a healthy click-through rate look like for Google Ads Search?

Brand campaigns should see 8 to 20 percent click-through rate. Non-brand search typically runs 3 to 7 percent on well-matched queries, lower on broader categories. Shopping campaigns run 0.5 to 2 percent. The number matters less than the conversion rate on the click. A 12 percent CTR that converts at 0.5 percent is worse than a 4 percent CTR that converts at 6 percent.
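The arithmetic behind that last comparison, as a quick sketch:

```python
# Conversions per 100 impressions = CTR x conversion rate x 100.
high_ctr_low_cvr = 0.12 * 0.005 * 100  # 12% CTR, 0.5% CVR
low_ctr_high_cvr = 0.04 * 0.06 * 100   # 4% CTR, 6% CVR

# The lower-CTR ad produces four times the conversions per impression.
print(high_ctr_low_cvr, low_ctr_high_cvr)
```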

Should I compare to last month or last year?

Year-over-year for any business with seasonality or recurring calendar effects, which is most businesses. Month-over-month for accounts in a genuinely linear growth phase with no calendar exposure. Compare both when possible. Month-over-month flags directional change. Year-over-year accounts for the seasonal market. The two together catch false trends neither would catch alone.

Final thoughts

A report is a diagnostic tool when it is built to help the business make a decision. It is a ceremonial object when it is built to justify the retainer. The difference is visible in the first viewport. If the top of the page leads with activity metrics and buries cost per acquisition against target, the report was written for the wrong reader. If the top leads with the commercial question and uses activity metrics as supporting context, the report was written for the business.

Most operators will not change agencies over a reporting problem. They should. A report that does not answer commercial questions is a downstream signal of an agency that is not thinking in commercial terms. The reporting layer is the cheapest diagnostic of the relationship, because it does not require any special access. The report arrives every month. If the report is not useful, the month behind it was probably not either.

When the report questions stack up with structural questions about match types, campaign overlap, and conversion tracking at the same time, a one-off diagnostic is the right first move. Stan Consulting offers Google Ads management once the diagnostic is complete, and the Conversion Second Opinion is the entry point for anyone who wants the findings before the management.

Related: the full marketing guides collection covers Shopify, conversion, strategy, and agency management.

The engagement format

If this is bigger than a campaign fix, the Revenue Sprint handles the full build.

$5,000. One engagement. Diagnosis, build, and fix. No retainer after.

See the Revenue Sprint