The shape of a misleading report
Most monthly agency reports follow a predictable structure. They open with a screenshot gallery of ad creative or a prominent metric like impressions or click-through rate. They include several pages of platform metrics arranged by channel. They close with a list of recommendations, usually involving more spend or new channels.
Nothing in that structure is wrong on its face. Each element has a place. The problem is what is missing: cost per acquisition stated against your margin, revenue attributed to paid channels with the attribution model named, and a decision log showing what was changed in the account since the last report and what the change produced.
When those three are missing, the report is not measuring the work. It is presenting activity. Activity reports are easy to produce and look professional, which is why they are common. But they do not tell the operator whether the marketing budget is producing revenue.
The four numbers that should lead
Cost per acquisition against gross margin. Not ROAS in isolation. ROAS without margin context is meaningless: a 4x ROAS on a 20 percent margin product loses money. CAC against margin tells you whether the spend was profitable. This should be on page one of the report, with the math shown.
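The ROAS-versus-margin arithmetic above can be sketched in a few lines. All figures here are hypothetical, chosen to match the 4x-ROAS, 20-percent-margin case in the text:

```python
# Illustrative sketch of the CAC-vs-margin math described above.
# All revenue and spend figures are invented for the example.

def breakeven_roas(gross_margin: float) -> float:
    """ROAS needed just to cover cost of goods: 1 / margin."""
    return 1.0 / gross_margin

def paid_profit(revenue: float, spend: float, gross_margin: float) -> float:
    """Gross profit on attributed revenue, net of ad spend."""
    return revenue * gross_margin - spend

# A 4x ROAS on a 20 percent margin product:
revenue, spend, margin = 40_000.0, 10_000.0, 0.20
print(revenue / spend)                      # 4.0  -> the headline ROAS
print(breakeven_roas(margin))               # 5.0  -> need 5x just to break even
print(paid_profit(revenue, spend, margin))  # -2000.0 -> the spend lost money
```

This is the math that should appear on page one: at a 20 percent margin, anything below 5x ROAS is unprofitable, which is why 4x "loses money" despite looking healthy in isolation.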
Revenue attributed to paid channels. With the attribution model stated. Last-click, data-driven, or first-touch all produce different numbers; an agency that does not name the model in the report is leaving itself room to choose the most flattering one. The model should not change between reports.
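To see why the model must be named, here is a hypothetical single buyer's touchpoint path credited under three models (the path, revenue figure, and even-split stand-in for data-driven attribution are all assumptions for illustration):

```python
# Hypothetical illustration: the same sale credited under three
# attribution models. Channel names and revenue are invented.

def attribute(path, revenue, model):
    """Return {channel: credited revenue} for one conversion path."""
    if model == "last_click":
        return {path[-1]: revenue}          # all credit to the final touch
    if model == "first_touch":
        return {path[0]: revenue}           # all credit to the first touch
    if model == "linear":                   # simple stand-in for data-driven
        share = revenue / len(path)
        return {channel: share for channel in path}
    raise ValueError(f"unknown model: {model}")

path = ["paid_social", "email", "paid_search"]  # one buyer's journey
for model in ("last_click", "first_touch", "linear"):
    print(model, attribute(path, 300.0, model))
# Each model reports a different "paid channel" revenue for the same sale,
# which is why switching models between reports destroys comparability.
```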
Spend by campaign, with the outcome each campaign drives. A spend report alone is fine for the agency's records. A useful report tells you which campaign is driving acquisition, which is driving retention, which is brand defense, and which is pipeline. Spend on a brand defense campaign should be evaluated differently from spend on a cold acquisition campaign.
Decisions made since the last report. What was changed in the account, why, and what the numbers did in response. This is the only part of the report that proves the agency is actively managing rather than maintaining the account.
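A decision log does not need to be elaborate. One possible minimal shape, sketched below, captures the three things the section asks for: what changed, why, and what the numbers did. The field names and the example entry are assumptions, not a standard:

```python
# A minimal decision-log record, one way to capture what the section
# describes. Field names and the sample entry are hypothetical.

from dataclasses import dataclass

@dataclass
class Decision:
    date: str        # when the change was made
    change: str      # what was changed in the account
    rationale: str   # why, tied to a finding in the data
    result: str      # what the numbers did in response

log = [
    Decision(
        date="2024-03-04",
        change="Paused cold-prospecting ad set (hypothetical example)",
        rationale="CAC ran 40% above the margin break-even for three weeks",
        result="Blended CAC fell over the following two weeks",
    ),
]

for entry in log:
    print(f"{entry.date}: {entry.change}")
    print(f"  why: {entry.rationale}")
    print(f"  result: {entry.result}")
```

An account with no entries in a structure like this, month after month, is being maintained rather than managed.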
How to read the platform metrics section without getting lost
Platform metrics (CTR, CPC, impressions, view-through conversions, frequency, etc.) are diagnostic data, not commercial information. They tell you why a campaign is or is not working. They do not tell you whether to keep running it.
Read this section once a quarter, not monthly. The platform numbers change weekly; the diagnostic conclusions change quarterly. A monthly deep-dive into CTR variance at the ad-group level spends attention that should go to revenue questions instead.
If the agency is presenting CTR as a headline, ask why. The honest answer is usually 'it improved and the bigger numbers did not, and we wanted to lead with something positive.' Note it; ask for the bigger numbers next month.
Reading the recommendations section
Most agency recommendations follow one of three patterns: spend more on existing channels, add a new channel, or test a new creative direction. Each is a reasonable thing to do. None is automatically correct.
The test is whether the recommendation is tied to a finding in the data above. A recommendation to expand into Pinterest that follows a finding about Meta saturation is sound. A recommendation to expand into Pinterest that follows a finding about CTR improvement on existing campaigns is the agency manufacturing scope.
Read each recommendation back to the data and ask: what specifically in the report justifies this? If the answer is unclear, the recommendation is not load-bearing. Note it for the next review and watch what happens to the recommendation in the following report. Recommendations that disappear without explanation are worth flagging.
Five questions to ask in the next review call
1. Show me the cost per acquisition for last month against our gross margin. This is the single fastest test of whether the report is connected to the business.
2. What is the attribution model used for the revenue numbers in this report? If the answer involves shifting the model month to month, the comparison value of the report is zero.
3. Walk me through three decisions you made on this account in the last thirty days. A managed account has a decision log. A maintained one does not.
4. Which campaign is the worst performer this month, and what would happen if we paused it tomorrow? The answer should take seconds. If it requires a follow-up, the account is not being read at the campaign level by anyone senior.
5. What is the one number you are managing this account to? Every well-run account has one. CAC, ROAS-against-margin, blended revenue, qualified leads. If the agency cannot name a single governing number, there is no management layer above the tactics.