Vanity Metrics
Stan Consulting · Marketing Atlas · Reference · Agency Burn
Marketing metrics that signal effort or activity but do not predict commercial outcomes. The most-reported, least-decisive numbers in agency monthly decks.
Section 02 · Quick definition
A vanity metric is any marketing number that goes up when the agency works hard but does not move when the business gets healthier. The category includes impressions, reach, follower count, engagement rate, click-through rate, and most platform-reported activity counters. The metrics are real and the data is accurate. The problem is that the metric is not connected to a commercial decision the operator has to make. Vanity metrics get reported because they are easy to make go up and easy to defend in a thirty-minute monthly review.
Section 03 · Why it matters
Vanity metrics matter because they routinely substitute for outcome metrics in client reporting. When the agency report leads with impressions up 38% and the second slide says engagement is up 22%, the operator reads the deck as a positive month. It was a positive month for the agency's activity. It was not yet a positive month for the business. New customer count, blended CAC, contribution margin, and net new revenue from paid channels are absent or buried.
The structural risk is that an agency reporting on vanity metrics is reporting on the work it did rather than the result the work produced. Six months of activity reports compound into a relationship where the operator stops asking the outcome question because the activity question has been answered every month and the answer has always been good.
The practical stake is that vanity metrics are not lies, but they answer the wrong question and let the budget keep flowing while the bank account does not move. Reading a marketing report well starts with refusing to accept activity as a substitute for outcome.
Section 04 · How it works
Vanity metrics enter agency reporting through four reinforcing mechanisms. The platforms make them easy to pull. The agency's deck templates were built around them. The client's monthly review window does not have time to interrogate them. And the incentives punish any agency that volunteers a number it cannot fully control. The result is a stable reporting equilibrium where the agency reports on what it can pull cleanly and the operator accepts the pull because it looks complete.
Every ad platform exposes impressions, reach, frequency, click-through rate, and cost per click as default columns in the reporting interface. The numbers are accurate, granular, and auto-refreshing. They take one click to export. Outcome metrics like new customer count, contribution margin, and incremental revenue require a join against the operator's back-end data and rarely exist in the agency's tooling at all.
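The join that outcome reporting requires can be sketched in a few lines. Everything here is an assumption for illustration, not a real schema: the column names, the file shapes, and the first-order definition of a new customer are invented; the operator's back-end will differ.

```python
from datetime import date

# Hypothetical platform export: the activity side the agency can pull in one click.
ad_spend = [
    {"day": date(2024, 5, 1), "spend": 820.0, "impressions": 41_000, "clicks": 510},
    {"day": date(2024, 5, 2), "spend": 790.0, "impressions": 39_500, "clicks": 470},
]

# Hypothetical operator back-end orders: the outcome side the agency's tooling rarely sees.
orders = [
    {"customer_id": "c1", "day": date(2024, 5, 1), "revenue": 120.0, "first_order": True},
    {"customer_id": "c2", "day": date(2024, 5, 1), "revenue": 95.0,  "first_order": True},
    {"customer_id": "c1", "day": date(2024, 5, 2), "revenue": 60.0,  "first_order": False},
    {"customer_id": "c3", "day": date(2024, 5, 2), "revenue": 210.0, "first_order": True},
]

total_spend = sum(row["spend"] for row in ad_spend)
new_customers = sum(1 for o in orders if o["first_order"])  # new customers, not total orders
blended_cac = total_spend / new_customers                   # spend per new customer acquired

print(f"new customers: {new_customers}, blended CAC: {blended_cac:.2f}")
```

Nothing in this sketch is hard; the point is that both sides of the join must exist in one place before the headline metric can be an outcome metric.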
Agency monthly review decks are built from a slide template that has a standard set of headline metrics. The template was designed when the available data was activity data. Replacing those slides with outcome slides requires the agency to either build the join into the operator's data or accept that the headline number is no longer fully under the agency's control. Most templates persist because no one rebuilds them.
A thirty-minute monthly review is enough time to walk through a deck of activity numbers. It is not enough time to argue the methodology of how new customer count was calculated, what fraction was incremental, and what the contribution margin per acquired customer was. Vanity metrics fit the format. Outcome metrics do not.
An agency that reports outcome metrics has more to lose when the metric goes down. An agency that reports activity metrics has many ways to make the activity numbers go up regardless of what is happening commercially. This incentive asymmetry biases the entire industry toward activity reporting and explains why almost every operator has the same complaint about almost every agency they have ever hired.
The four mechanisms compound. The fix is not to ask for better vanity metrics. The fix is to require the report to lead with a number the bank statement can confirm.
Section 05 · Common misunderstandings
“Engagement rate is up, so the brand is getting healthier.”
Engagement rate is the ratio of platform interactions to served impressions on a single platform's native audience. It correlates with creative quality and audience match, not with brand health. Brand health is measured in unaided recall, branded search volume, and direct traffic from typed URLs. None of those move because of engagement rate.
“Click-through rate is the conversion-quality signal.”
Click-through rate measures whether the ad was interesting enough to click. It does not measure whether the click produced revenue, whether the customer was incremental, or whether the unit economics worked. A 4% click-through rate paired with a 0.6% conversion rate produces less revenue per impression than a 1.2% click-through rate paired with a 4% conversion rate.
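The comparison in that last sentence is one multiplication: expected revenue per impression is click-through rate times conversion rate times average order value. A quick check, assuming a hypothetical $80 average order value held constant for both ads:

```python
def revenue_per_impression(ctr: float, cvr: float, aov: float) -> float:
    """Expected revenue from one served impression: CTR x conversion rate x AOV."""
    return ctr * cvr * aov

AOV = 80.0  # hypothetical average order value, the same for both ads

high_ctr = revenue_per_impression(ctr=0.04,  cvr=0.006, aov=AOV)  # 4% CTR, 0.6% CVR
low_ctr  = revenue_per_impression(ctr=0.012, cvr=0.04,  aov=AOV)  # 1.2% CTR, 4% CVR

print(f"4% CTR deck-winner: ${high_ctr:.4f} per impression")
print(f"1.2% CTR quiet ad:  ${low_ctr:.4f} per impression")
```

The ad with one third the click-through rate earns twice the revenue per impression, which is exactly why CTR alone cannot carry a report.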
“Impressions show that the audience is being reached.”
Impressions count served creative units, which is not the same as exposure to a person, which is not the same as exposure to the right person, which is not the same as a person who will buy. Impressions are the loosest possible upstream proxy for outcome and they are the most-reported number in paid media.
“Follower count is brand momentum.”
Follower count is the cumulative result of follow-driven content paid for by the agency or earned by the founder over years of posting. It rises smoothly and falls slowly. It does not predict revenue, sales pipeline, or repeat purchase rate. The metric is sticky on the deck because it always goes up.
“Cutting vanity metrics from the report is hostile.”
Removing vanity metrics from the report is the opposite of hostile. It frees up the review meeting to discuss the small number of decisions the operator actually has to make. The agency that resists is the agency that depends on the vanity-metric defense; that resistance is itself the most useful diagnostic signal in the relationship.
Section 06 · Diagnostic questions
What is the headline metric on slide one of the agency monthly deck, and what commercial outcome does it predict?
How many of the report's top five metrics tie to bank-statement revenue versus platform-reported activity?
When was the last month the agency reported a metric going down, and how did that report read?
Is new customer count separated from total order count, and is incremental customer count visible at all?
What share of the report is platform-pulled activity data versus operator-side outcome data?
Does the report include contribution margin per acquired customer, or does it stop at revenue?
If the activity numbers all rose 25% next month and revenue stayed flat, would the agency report that month as a win?
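The contribution-margin question above reduces to a short calculation the report could carry every month. Every figure below is invented for illustration, not a benchmark; the point is the shape of the math, because a report that stops at revenue never computes the last two lines.

```python
# Illustrative month: swap in the operator's real numbers.
new_customer_revenue = 18_000.0  # revenue from new customers' first orders
cogs = 7_200.0                   # cost of goods sold on those orders
variable_costs = 2_300.0         # shipping, payment fees, fulfillment
ad_spend = 6_000.0               # what the agency deployed this month
new_customers = 150

contribution = new_customer_revenue - cogs - variable_costs
contribution_per_customer = contribution / new_customers   # where revenue-only reports stop
cac = ad_spend / new_customers                             # blended CAC
net_margin_per_customer = contribution_per_customer - cac  # the bank-confirmable number

print(f"contribution/customer: ${contribution_per_customer:.2f}")
print(f"blended CAC:           ${cac:.2f}")
print(f"net margin/customer:   ${net_margin_per_customer:.2f}")
```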
Section 07 · Related Atlas entries
Section 08 · Five Cents
Vanity metrics dominate agency reports for one reason. They are the easiest numbers to make go up and the easiest numbers to defend in a thirty-minute monthly call. The agency does not pick them because the agency is bad. The agency picks them because the agency is rational. The platforms expose them, the deck template features them, and the client's review meeting does not have time to interrogate them. The fix is not better vanity metrics. The fix is to require the headline of every agency report to be a number the bank can confirm and to keep asking that question until the report changes shape or the agency relationship does.
Section 09 · Sources