Quick answer
The Google Ads search terms report lists the actual user queries that triggered your ads, not the keywords in your account. To read it, open Insights and reports, then Search terms, set a 7 to 30 day window, sort by cost descending, and review the top 40 rows. Flag irrelevant queries for the shared negative list. Promote converting queries to exact match. Review weekly.
Most Google Ads accounts are optimized from the wrong document. Operators look at the keyword list, tune bids, rewrite ad copy, and never open the report that shows what Google actually bought with the budget. The search terms report is that report. It is the single diagnostic that separates accounts run by the keyword list from accounts run by what the algorithm is delivering against the keyword list. After twenty years of paid media work and 40-plus Google Ads account audits, the pattern is consistent: the accounts that get reviewed weekly at the query level outperform the accounts that do not, by margins large enough to change what the business can afford to spend. For the full pillar, see the Google Ads guides collection.
The keyword list is a statement of intent. The search terms report is the record of what Google delivered. These are not the same thing, and in broad match accounts they diverge far enough that most operators would not recognize their own account from the query log. A single broad match keyword for "running shoes" can trigger on "marathon training advice," "nike outlet near me," "shoe repair," and three dozen other queries that share no commercial intent with the original bid. Smart bidding makes this worse, not better. The algorithm will chase conversion signal into query space the advertiser never approved.
What the divergence looks like in practice:
- Under 15 percent of spend on queries the business would not approve: a tightly managed account.
- 15 to 30 percent: typical for accounts without a regular query-level review.
- Above 30 percent: a structural problem in match type strategy or negative list maintenance.
Knowing which end of that distribution you are on is the first honest read of the account.
The path in the current Google Ads interface is Insights and reports, then Search terms. From a specific Search or Shopping campaign, the Insights tab contains the same view scoped to that campaign. Set the date range to either the last 7 days for weekly maintenance or the last 30 days for a monthly strategic review. Sort by Cost descending. This is the only sort that matters for a diagnostic pass. Sorting by impressions or clicks surfaces volume, which is a vanity view. Sorting by cost surfaces where the budget went.
Read the top 40 rows. In most accounts, those 40 queries represent 65 to 80 percent of total spend. The tail below row 40 matters for long-term pattern spotting but does not drive the weekly decisions. Columns to keep visible:
- Search term
- Match type
- Cost
- Conversions
- Cost per conversion
Hide clicks and impressions during a diagnostic pass. They pull attention toward volume and away from money.
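For operators who pull the report outside the interface, the same diagnostic view maps to one query against the Google Ads API. A minimal sketch using the official google-ads Python client, assuming a configured google-ads.yaml credentials file; CUSTOMER_ID is a placeholder, not a value from this article.

```python
# Minimal sketch: pull the diagnostic view (top 40 search terms by cost) via
# the Google Ads API. Assumes the google-ads package and a configured
# google-ads.yaml credentials file; CUSTOMER_ID is a placeholder.
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"  # placeholder, not a real account

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

# GAQL mirror of the in-interface setup: last 30 days, cost descending, 40 rows.
query = """
    SELECT
      search_term_view.search_term,
      segments.search_term_match_type,
      metrics.cost_micros,
      metrics.conversions,
      metrics.cost_per_conversion
    FROM search_term_view
    WHERE segments.date DURING LAST_30_DAYS
    ORDER BY metrics.cost_micros DESC
    LIMIT 40
"""

for batch in ga_service.search_stream(customer_id=CUSTOMER_ID, query=query):
    for row in batch.results:
        cost = row.metrics.cost_micros / 1_000_000  # micros to account currency
        print(f"{row.search_term_view.search_term}\t"
              f"{cost:.2f}\t{row.metrics.conversions:.1f}")
```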
Wasted spend in the search terms report has a recognizable shape. The five patterns to flag on sight (two of them can be pre-flagged mechanically, as the sketch after this list shows):
1. Branded queries for companies you do not compete with, the category that surfaces fastest, appearing because broad match decided the queries were topically related.
2. Competitor brand names you did not intentionally target.
3. Job search terms: "careers," "jobs," "salary," "hiring."
4. Free-intent modifiers: "free," "DIY," "how to," "template," "tutorial."
5. Adjacent category terms, where someone is searching for a larger, smaller, or different product.
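Two of the five patterns, job search terms and free-intent modifiers, are mechanical enough to pre-flag before the human pass. A hypothetical helper over rows exported from the report; the term lists are illustrative starting points, not exhaustive, and brand, competitor, and adjacent-category queries still need a human read.

```python
# Pre-flag the two mechanical waste patterns (job terms, free-intent modifiers)
# in exported report rows. Substring matching is deliberately loose; treat the
# output as candidates for the human pass, not final exclusions.
JOB_TERMS = {"careers", "jobs", "salary", "hiring"}
FREE_INTENT = ("free", "diy", "how to", "template", "tutorial")

def flag_query(search_term: str) -> str | None:
    """Return the waste pattern a query matches, or None if it needs a human read."""
    lowered = search_term.lower()
    if JOB_TERMS & set(lowered.split()):
        return "job search"
    if any(marker in lowered for marker in FREE_INTENT):
        return "free intent"
    return None

# Illustrative rows, one dict per search term as exported from the report.
rows = [
    {"search_term": "nike careers", "cost": 8.15},
    {"search_term": "how to repair running shoes", "cost": 12.40},
    {"search_term": "trail running shoes", "cost": 44.02},
]
for row in rows:
    pattern = flag_query(row["search_term"])
    if pattern:
        print(f"{row['search_term']}: {pattern}, ${row['cost']:.2f} at risk")
```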
The test I use is blunt. Would the business knowingly pay $15 to be shown for this exact query? If the answer is no, the query is wasted spend and belongs on the negative list. The marginal time cost of flagging 10 wasted queries is about three minutes.
Negative keywords are the operational output of the search terms report. The review has no value if the flagged queries do not end up excluded. Most accounts fail here because negatives get added ad hoc at the ad group level, scattered across campaigns, with no central list anyone can audit. The correct structure is a single shared negative keyword list applied to every search-intent campaign, plus one or two campaign-specific lists for genuinely localized exclusions.
What to decide for each flagged query:
- Negative match type: exact blocks only that query; phrase blocks any query containing the phrase. Choose per the query's intent.
- Destination: the shared list for anything that should never trigger an ad anywhere in the account; a campaign-specific list only for genuinely localized exclusions.
Add negatives in batches, not one by one. Selecting 15 rows, clicking Add as negative keyword, and choosing the shared list takes 90 seconds. Doing the same work one query at a time takes ten minutes and produces the same outcome. Batch the decision once you have read the top 40.
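For accounts managed programmatically, the same batch maps to a single mutate call against the API. A sketch, again with the google-ads Python client; CUSTOMER_ID, SHARED_SET_ID, and the flagged terms are placeholders, and the phrase match type mirrors a common default, not a rule.

```python
# Sketch: add the week's flagged queries to the shared negative keyword list
# in one mutate call, mirroring the batch workflow above. CUSTOMER_ID,
# SHARED_SET_ID, and the flagged terms are placeholders.
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"
SHARED_SET_ID = "987654321"  # ID of the shared negative keyword list

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
service = client.get_service("SharedCriterionService")
shared_set = client.get_service("SharedSetService").shared_set_path(
    CUSTOMER_ID, SHARED_SET_ID
)

flagged = ["nike careers", "free running plan", "shoe repair"]
operations = []
for term in flagged:
    op = client.get_type("SharedCriterionOperation")
    op.create.shared_set = shared_set
    op.create.keyword.text = term
    # Phrase is a common default for negatives; choose per query intent.
    op.create.keyword.match_type = client.enums.KeywordMatchTypeEnum.PHRASE
    operations.append(op)

# One request for the whole batch, not one per query.
response = service.mutate_shared_criteria(
    customer_id=CUSTOMER_ID, operations=operations
)
print(f"Added {len(response.results)} negatives to the shared list.")
```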
The offensive half of the review is as important as the defensive half. Search terms that converted deserve to be promoted into the account as exact match keywords in their most relevant ad group. The reason is not cosmetic. Leaving a proven converting query in broad or phrase match means the algorithm decides when to show against it. Promoting it to exact match means the query has its own keyword with its own quality score, its own bid, and its own place in the auction insights. That is control, not semantics.
The promotion criteria I use:
- At least one conversion attributed to the query.
- Cost per conversion inside the account's CPA target.
- At least three clicks.
Once promoted, add the original broader-match version as a negative in the old ad group if the query was being captured there inefficiently. Otherwise the new exact match keyword will compete with its own ancestor for impression share.
A weekly search terms review on a well-structured account takes 20 to 30 minutes. On a neglected account the first review takes 90 minutes because the backlog is long. That is fine. The first review is a one-time cleanup. After that, the weekly cadence keeps the backlog from forming again. Accounts that get reviewed every 60 days produce a different shape of maintenance: the negative list is always catching up, the algorithm is always training on queries that should have been excluded, and the monthly report always has a "we found more waste" line.
What the 30-minute cadence looks like:
- Open the report, set the last 7 days, sort by cost descending.
- Read the top 40 rows and flag anything the business would not knowingly pay for.
- Batch the flagged queries into the shared negative list.
- Promote converting queries to exact match in their most relevant ad groups.
- Log the date, negatives added, promotions, and wasted spend in one spreadsheet row.
That cadence is the difference between account management and account maintenance. Both are necessary. Only one is billable in a well-run engagement.
The framework
1. The view. Insights and reports, then Search terms. Last seven days. Sort by cost descending. Keep search term, match type, cost, conversions, and cost per conversion visible. Hide clicks and impressions during the diagnostic pass.
2. Flag and exclude. Read each of the top 40 queries. Select any query the business would not knowingly pay for. Add them as a batch to the shared negative keyword list, not per ad group. Choose match type per query intent.
3. Promote winners. Filter for queries with at least one conversion inside the cost per acquisition target and at least three clicks. Add each as an exact match keyword in the most relevant ad group, where the ad copy should already align with the query.
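The promotion step also has an API equivalent. A sketch that adds one converting query as an enabled exact match keyword; CUSTOMER_ID, AD_GROUP_ID, and the query text are placeholders.

```python
# Sketch: promote one converting search term to an enabled exact match keyword
# in its most relevant ad group. CUSTOMER_ID, AD_GROUP_ID, and the query text
# are placeholders.
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"
AD_GROUP_ID = "111222333"

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
service = client.get_service("AdGroupCriterionService")

op = client.get_type("AdGroupCriterionOperation")
criterion = op.create
criterion.ad_group = client.get_service("AdGroupService").ad_group_path(
    CUSTOMER_ID, AD_GROUP_ID
)
criterion.status = client.enums.AdGroupCriterionStatusEnum.ENABLED
criterion.keyword.text = "trail running shoes"  # the converting query
criterion.keyword.match_type = client.enums.KeywordMatchTypeEnum.EXACT

response = service.mutate_ad_group_criteria(
    customer_id=CUSTOMER_ID, operations=[op]
)
print(f"Promoted to exact match: {response.results[0].resource_name}")
```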
4. Log it. Log the date, the number of negatives added, the exact match keywords promoted, and the weekly spend on irrelevant queries. One spreadsheet row. The log makes the trend visible across months.
5. Set the cadence. Every seven to fourteen days for an active account spending $3,000 or more per month. Smaller accounts can run a 30-day cadence. The frequency is dictated by spend velocity, not preference. A week of unreviewed broad match on a $10,000 monthly budget is roughly $2,300 of queries nobody approved ($10,000 × 12 ÷ 52 ≈ $2,308 per week). Short review intervals cost less than skipped ones.
What is the difference between keywords and search terms?
Keywords are the words you bid on. Search terms are the actual queries typed by users that triggered your ads. Match type determines how loosely Google pairs the two. A single phrase match keyword can produce hundreds of unique search terms. The keyword is your intent. The search term is what Google delivered against that intent.
How do I add negative keywords from the search terms report?
Select the rows you want to exclude, click Add as negative keyword above the table, then choose whether to add at the ad group, campaign, or shared negative list level. Most negatives belong on a shared list applied to every search-intent campaign. Account-wide negatives should live in one master list reviewed monthly, not scattered across ad groups.
What share of spend on irrelevant queries is normal?
Under 15 percent of total spend on irrelevant queries is a well-managed account. Fifteen to 30 percent is typical, fixable within a month. Above 30 percent is a structural problem tied to match type strategy, negative list maintenance, or thin keyword themes. The benchmark applies to spend share, not row count. Cost, not query volume, is the measure.
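The benchmark is easy to compute once a review pass has flagged the irrelevant rows. A small sketch, assuming rows exported as dicts and a set of flagged terms from the review; the thresholds are the ones above, and the measure is spend share, not row count.

```python
# Sketch: place a reviewed account against the benchmark. Rows and the flagged
# set are placeholders; thresholds follow the benchmarks stated above.
def waste_share(rows: list[dict], flagged: set[str]) -> float:
    """Share of total cost spent on queries flagged as irrelevant."""
    total = sum(r["cost"] for r in rows)
    wasted = sum(r["cost"] for r in rows if r["search_term"] in flagged)
    return wasted / total if total else 0.0

def benchmark(share: float) -> str:
    if share < 0.15:
        return "well-managed"
    if share <= 0.30:
        return "typical, fixable within a month"
    return "structural problem"

rows = [
    {"search_term": "trail running shoes", "cost": 44.02},
    {"search_term": "nike careers", "cost": 8.15},
    {"search_term": "free running plan", "cost": 12.40},
]
share = waste_share(rows, flagged={"nike careers", "free running plan"})
print(f"{share:.0%} of spend on irrelevant queries: {benchmark(share)}")
```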
Does Performance Max have a search terms report?
Partially. Google has released search theme and search category insights inside Performance Max, visible under the Insights tab at the campaign level, but raw query-level data remains restricted. The practical implication is that negatives must be added through account-level negative keyword lists or campaign-level exclusions, not reactively from a full query list as Search campaigns allow.
The search terms report is boring work. It is not the part of the account a new operator wants to spend time on. It is the part of the account that separates a diagnostic practitioner from a dashboard watcher. Twenty minutes a week of query-level review produces better outcomes than 20 hours a week of bid tweaking, because bids are a response to the conversion signal and the conversion signal is built from the queries the report shows you.
Most accounts that look "optimized" by agency standards are actually well-reported rather than well-managed. The reporting covers the conversions, the ROAS, and the month-over-month change. It does not cover whether the top 40 queries the account paid for were queries the business wanted to pay for. When the gap between those two things gets large enough, no bid adjustment can close it. Only negatives and promoted exact matches can.
When the pattern of wasted spend is too entangled to unwind alone, or when the account spans multiple campaign types with overlapping queries and the weekly review no longer fits in thirty minutes, that is the point to bring in structured help. Stan Consulting offers Google Ads management once the diagnostic is complete, and the Conversion Second Opinion is the entry point for anyone who wants the findings before the management.
Related: the full marketing guides collection covers Shopify, conversion, strategy, and agency management.
The engagement format
$5,000. One engagement. Diagnosis, build, and fix. No retainer after.
See the Revenue Sprint