Stan Consulting LLC · Marketing Atlas · Position · Reporting Is Not Knowing


Reporting Is Not Knowing.

A marketing report describes what happened. Knowledge is the ability to predict what happens if you change something. Most agency reporting collapses the distinction and sells reports as if they were knowledge. The diagnostic separates the two and routes the work into the engagement format that produces each.

Section 01 · The claim.

Reports describe. Knowledge predicts. Most agency retainers produce reports and most operators read them as if they were knowledge. The mismatch is structural and it is the largest single source of agency burn in the funded growth bracket.

The claim has two parts. The first is definitional. Data is what was captured. Reporting is data formatted for review. Knowledge is the causal model that lets you predict what happens if you change a variable. Three different things, three different layers, three different deliverables. The agency's monthly deck is layer two. The board's question (what happens if we cut Meta budget by thirty percent) is a layer-three question. The deck cannot answer it because the deck is a description of the past, not a prediction model for the future.

The second part is structural. Operators routinely buy a layer-two retainer (deck production, dashboard maintenance, weekly reporting) and read its output as if it were a layer-three deliverable (channel-level decisions, budget reallocations, scope renegotiations). The retainer cannot answer the layer-three questions, because the retainer is not paid to produce layer three. Asking the retainer to answer them anyway is the structural cause of the friction that operators describe as "the agency cannot tell us what is working."

The position is not "agencies cannot do knowledge work." Agencies can. The position is retainer-priced reporting work and knowledge work are different deliverables, and the operator has to know which one is being paid for at any given time.

Section 02 · What most people believe.

The conventional read is that the agency knows what is going on because the agency wrote the report. The richer the report, the more the agency knows. This is so widely held that it is rarely articulated. It is the implicit theory under which most operators evaluate agency relationships.

Belief 01

"A long report means deep knowledge." The argument is that volume of reporting correlates with depth of understanding. Forty-seven charts, fifty-two KPIs, three attribution models compared on the same axis: the operator reads this as evidence that the agency has thought hard about the account. The argument fails because volume of reporting and depth of understanding are roughly uncorrelated. A long report can be the output of a tooling pipeline that produces every chart by default. A short report can be the output of a team that has decided which seven numbers actually matter. The volume signal is misleading.

Belief 02

"The agency is closer to the data; therefore the agency knows." The argument is that proximity to platform UIs and account-level access creates knowledge by exposure. The argument fails because data exposure produces familiarity, not causal understanding. The Google specialist who logs into Google Ads daily knows the UI intimately. The same specialist may not know what happens to the account's overall LTV if Meta budget is reallocated to brand search. Familiarity does not transfer to causal knowledge across the channels the specialist does not own.

Belief 03

"Knowledge will emerge from the data over time." The argument is that running enough campaigns and watching enough reports eventually produces knowledge as a side effect. The argument fails because knowledge requires structured experimentation: hypothesis, intervention, measurement, interpretation, written record. Most agency retainers do not include the experimentation scope. They include the activity-execution and reporting scopes. Without the experimentation layer, the data accumulates and the causal model does not.

Belief 04

"If the agency had knowledge, they would put it in the report." The agency-side internal version. The argument is that anything the agency knows would naturally surface in the deliverable. The argument fails because reports and knowledge have different formats. A report is structured around metrics. Knowledge is structured around predictions and causal claims. The agency could have a working model of the account in its head and produce a deck that contains none of it, because the deck format does not have a slot for "predicted impact of cutting Meta budget by thirty percent." Knowledge is invisible inside a metric-structured deliverable.

Each belief sounds reasonable in isolation. Stacked, they form the assumption that agency reporting is a form of agency knowledge. The structural reality is that reports and knowledge live in different deliverables and require different work.

Section 03 · Why that belief fails.

The structural argument is that reports are descriptions of the past and knowledge is predictions about counterfactuals. The two are not redundant; they live at different layers and require different inputs to produce.

Five failure modes follow.

Failure mode one. Reports are post-hoc; knowledge is counterfactual. A monthly deck answers the question what happened. A budget-decision question answers what would have happened if, or what will happen if. The two questions are not the same and they are not produced by the same work. The deck can have every chart in the world and still not answer the second question, because the second question requires a model the deck does not contain.

Failure mode two. The retainer pays for hours; knowledge is not hours-bounded. A retainer is a fixed monthly fee for an agreed scope of activity. The economics work because the activity is roughly predictable in hours. Knowledge work is not roughly predictable in hours. A change-prediction with a confidence interval requires a written experiment, a structured intervention, and a documented interpretation. The hours required vary by the question. Retainer economics break when the scope is open-ended; the retainer model selects against knowledge work for that reason.

Failure mode three. The reporting tooling does not produce knowledge. Looker Studio, native Google Ads reporting, Meta Ads Manager, GA4 explorations: all of these produce charts. None of them produces a causal model. The tooling that produces knowledge (incrementality tests, geo experiments, holdout cohorts, structured A/B regimes, written change predictions logged against outcomes) is a different tooling stack. It is not what the agency tooling layer ships by default.

Failure mode four. The retainer renews against reporting, not against predictions. The renewal conversation reads against the deck. The deck looks comprehensive. The renewal happens. There is no mechanism by which the retainer renews against the agency's prediction track record, because the agency is not making written predictions. Without predictions, there is no track record. Without a track record, the renewal cannot be evaluated on knowledge grounds. Retainer relationships persist on activity-and-reporting grounds even when the operator's underlying need is knowledge.

Failure mode five. The operator never asks the change-prediction question. The single fastest way to surface the gap is to ask: if we cut Meta budget by thirty percent for sixty days, what happens to net-new revenue, with a confidence interval. Most operators never ask. Most agencies have not been asked. The first time the question is asked, the agency redirects to existing reports; the operator reads the redirect as a thoughtful answer; the conversation moves on; the layer-two retainer keeps running. The question is the test that closes the loop. It is rarely asked.

The conventional view treats reporting and knowledge as the same product at different volumes. The structural reality is that they are different products at different price points produced by different work. The position is the distinction that makes the difference operational.

Section 04 · The SC position.

There are three layers. Data is what was captured. Reporting is data formatted for review. Knowledge is the causal model that predicts change. Operators pay for layer two and read it as if it were layer three. The position names the difference and routes the work accordingly.

Each layer is named below with its scope, its deliverable, and the test that says whether the operator is at that layer.

L1

Data captured

Pixel and tag firing across the conversion funnel. Conversions API and server-side feeds. GA4 enhanced ecommerce, Salesforce opportunity records, HubSpot lifecycle stages, Stripe contract values, Shopify orders and refunds. The data layer is the captured-event hygiene layer plus the system-of-record integrations the reporting and knowledge layers read against.

  • Pixel and tag fire-rate · per platform, validated
  • System-of-record integrations · CRM, billing, finance close
  • Captured-event coverage · against the funnel stages that matter
  • Data-hygiene tolerance · deduplication, completeness, latency
  • Layer-one ownership · engineering plus analytics

Test the operator is at this layer: the data sources are mapped, the integrations are documented, the captured events match the system of record within technical tolerance.

L2

Reporting compiled

The deck, the dashboard, the weekly check-in. Data formatted for review. Layer two answers the question what happened over this period. It is a description of the past. Most agency retainers ship a layer-two deliverable and most operators read against it. The layer is necessary; without it, the operator has no shared review surface. The layer is also not where knowledge lives.

  • Monthly deck · chart inventory, KPI summary, narrative
  • Dashboard refresh · weekly or daily, against the live data
  • Reporting cadence · standing meetings, ad-hoc pulls
  • Reporting tooling · Looker, native UIs, GA4 explorations
  • Layer-two ownership · agency or in-house analytics

Test the operator is at this layer: the deck arrives on cadence, the data is reasonably up-to-date, the deck describes what happened. There is no requirement for the deck to predict.

L3

Knowledge predicts

The causal model. The set of written predictions about what happens if you change a variable, with confidence intervals, against an outcome that gets logged when the change is made. Layer three answers the question what will happen if we change X. It is structured experimentation, written interpretation, and a documented track record. It is a different deliverable from layer two and it is produced by different work.

  • Written change-predictions · with reasoning and confidence
  • Structured experiments · holdout cohorts, geo splits, incrementality
  • Causal map · channel-to-outcome relationships, documented
  • Prediction track record · logged predictions versus observed outcomes
  • Layer-three ownership · senior judgment, not retainer-bounded

Test the operator is at this layer: the operator can ask the change-prediction question and get a direct numeric answer with reasoning, plus a track record showing the predictions have been honest in past periods.
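The layer-three artifact described above, written change-predictions with confidence intervals logged against observed outcomes, can be sketched as a simple record structure. This is a minimal illustration, not the engagement's actual tooling; the field names and the example numbers are assumptions.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ChangePrediction:
    """One written change-prediction, logged before the intervention runs."""
    asked: date                       # when the prediction was written down
    intervention: str                 # e.g. "cut Meta budget 30% for 60 days"
    metric: str                       # the outcome the prediction is about
    predicted_low: float              # lower bound of the confidence interval
    predicted_high: float             # upper bound of the confidence interval
    reasoning: str                    # the stated mechanism, in plain language
    observed: Optional[float] = None  # filled in once the change window closes

    def resolved(self) -> bool:
        return self.observed is not None

    def honest(self) -> bool:
        """Did the observed outcome land inside the stated interval?"""
        return (self.resolved()
                and self.predicted_low <= self.observed <= self.predicted_high)

def track_record(log: list[ChangePrediction]) -> float:
    """Share of resolved predictions whose interval contained the outcome."""
    resolved = [p for p in log if p.resolved()]
    if not resolved:
        return 0.0
    return sum(p.honest() for p in resolved) / len(resolved)
```

The design point is that the prediction is written before the change and the outcome is logged after it; the track record is then a computable number the renewal conversation can read against, rather than a deck.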

Section 05 · The mechanism.

The working spec runs as a five-step diagnostic followed by a routing decision. Inventory the data, audit the report, apply the change-prediction test, name the gap, route to the engagement format that meets the actual need. The diagnostic completes in roughly seventy-two hours of audit time on a typical operating account.

L1 · Inventory the data · First move: map the captured-data layer

List the captured sources

Inventory every captured data source the reporting reads against. Platform pixels, GA4, HubSpot, Salesforce, Stripe, Shopify, finance close. The inventory is mechanical and surfaces gaps where downstream analytics is being assembled against an incomplete data base.

Validate the technical layer

Confirm each source is firing within technical tolerance. Where the data layer is broken, the diagnostic stops here; layer two and layer three cannot be assessed against an unreliable data foundation. Data fixes are scoped first; the diagnostic resumes once the layer-one read is intact.
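The "within technical tolerance" check above can be sketched as a comparison of captured-event counts against the system of record. The 5% threshold, the source names, and the counts below are illustrative assumptions, not figures from the diagnostic.

```python
def within_tolerance(captured: int, system_of_record: int,
                     tolerance: float = 0.05) -> bool:
    """True if the captured-event count matches the system of record
    within a relative tolerance (5% here; an assumed threshold)."""
    if system_of_record == 0:
        return captured == 0
    return abs(captured - system_of_record) / system_of_record <= tolerance

# Example: purchase events per capture source vs Shopify orders
# for the same window. Numbers are hypothetical.
sources = {
    "ga4_purchases": 1880,        # 6% under the system of record
    "meta_capi_purchases": 1990,  # 0.5% under the system of record
}
shopify_orders = 2000  # system of record for the window

layer_one_read = {name: within_tolerance(count, shopify_orders)
                  for name, count in sources.items()}
# A failing source here stops the diagnostic: layers two and three
# cannot be assessed against an unreliable data foundation.
```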

L2 · Audit the report · Second move: map the reporting layer

Inventory the deck or dashboard

List every chart and KPI in the operator's monthly deck or always-on dashboard, by source. Most decks reveal a chart-to-question mismatch at this step: dozens of charts answering questions the board is not asking, while the question the board is asking has no chart at all.

Map each chart to the question it answers

For each chart in the inventory, name the question it answers in plain language. Then list the questions the operator actually needs answered. The two lists usually do not overlap on more than half their entries. The gap is the layer-two scope problem.

Identify the missing reads

Name the questions the operator needs answered that the current reporting does not address. These are layer-two scope items where the deck spec needs to be rewritten. They are not yet layer-three items; layer-two scope can usually answer them with the existing data and a different chart inventory.
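The chart-to-question mapping in this move reduces to a set comparison: the questions the deck answers versus the questions the operator needs answered. A minimal sketch follows; the chart names and questions are illustrative, not drawn from any real deck.

```python
# Each chart in the inventory, mapped to the question it answers
# in plain language. Names are hypothetical.
chart_questions = {
    "spend_by_channel": "how much did we spend per channel",
    "ctr_trend": "how did CTR move month over month",
    "roas_by_campaign": "what ROAS did each campaign report",
}

# The questions the operator actually needs answered.
needed_questions = {
    "what ROAS did each campaign report",
    "what is contribution margin after ad spend",
    "what happens if we cut Meta budget by thirty percent",
}

answered = set(chart_questions.values())
covered = needed_questions & answered   # questions the deck already answers
missing = needed_questions - answered   # the scope gap: rewrite the deck spec,
                                        # or route to layer three
unasked = answered - needed_questions   # charts answering questions nobody asked
```

On most accounts the `missing` and `unasked` sets are both non-empty, which is the chart-to-question mismatch the audit step is designed to surface.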

L3 · Apply the change-prediction test · Third move: test for knowledge

Pose a specific change-prediction question

Pick a real and consequential intervention the operator is considering. If we cut Meta budget by thirty percent for sixty days, what happens to net-new ARR, with a confidence interval. The question is specific, time-bounded, and asks for a numeric prediction with reasoning. Vague versions of the question produce vague answers and miss the diagnostic.

Read the answer for shape

A direct numeric answer with reasoning, a confidence interval, and a stated mechanism by which the change would affect the outcome indicates the operator (or the agency) has knowledge. A redirection to existing reports, a list of considerations without a number, or a "we would need to test" without a written test design indicates the operator has reporting in place of knowledge.

Name the gap explicitly

Where the answer is reporting in place of knowledge, the gap is named in writing. The operator is paying for layer two and reading it as if it were layer three. The gap is structural; it cannot be closed by adding more charts or by switching layer-two vendors. The fix is to route the layer-three need into a different engagement format.

R · Route to the engagement format · Final move: decision

Layer-two scope to the agency or in-house analyst

Reporting needs are met inside the agency retainer or an in-house analytics function. The deliverable is the deck on cadence, the dashboard refresh, the weekly check-in. Pricing is roughly retainer-market. Scope is bounded by the chart inventory and the data sources.

Layer-three scope to an outside engagement

Knowledge needs are met inside an outside engagement: the Conversion Second Opinion for the diagnostic, the Sprint or System Build for the install, the consulting tier for ongoing judgment. Pricing is set against the change-prediction work, not against retainer hours. Reporting and knowledge cannot share a single retainer line item; the economics break in opposite directions.

Section 06 · Evidence and case links.

The Position page is the doctrine. The links below are where the doctrine has been applied to specific composite operators or referenced in adjacent positions. Each link is a test the doctrine has had to pass.

Primary case

The Monthly Report With 47 Charts and No Margin

The composite case file where a B2B SaaS deck contained forty-seven charts and zero margin numbers. The board asked margin questions and got activity answers for nine months. The fix was the deck specification: the seven margin numbers, in order, on slide one. Layer two scope, redrawn against layer-three questions.

Read the case file →

Companion case

The Agency Report That Looked Good Until the Bank Account Disagreed

The composite case file where the agency deck claimed a 4.8x ROAS and the bank deposits read at 3.0x. The four reporting conventions inflated the picture and the operator had no layer-three read against the bank. The fix was the bank-honest source-of-truth document.

Read the case file →

Adjacent case

The Quarter Google, Meta, and GA4 All Claimed the Same Sale

The attribution-cluster case file where three platforms produced five conversion counts for one quarter. Layer-two reporting produced incompatible reads and the operator had no layer-three judgment to pick among them. The diagnostic shape is the same.

Read the case file →

Companion position

Why Agencies Sell Activity When They Cannot Sell Judgment

The companion doctrine on the structural reason agency retainers produce activity rather than judgment. The retainer model selects for hours-bounded execution; knowledge work is not hours-bounded; the market routes accordingly.

Read the position →
Section 07 · Where it breaks.

Every methodology has assumptions. Naming them is part of defending the position. The three-layer reporting diagnostic assumes the operator has functional reporting in place. It does not handle every operator-side configuration.

01

Layer-one broken accounts

Operators whose layer one (data captured) is broken cannot be diagnosed at layer two or layer three. Wrong tags, missing pixels, broken integrations, no single source of truth on order data. The methodology defaults to a layer-one install first; the three-layer diagnostic resumes once the captured data is roughly intact. The agency-burn pattern can hide a real layer-one problem; the diagnostic checks layer one before pronouncing.

02

Pre-reporting operators

Brands with no monthly deck, no dashboard, no recurring read against the data have no layer-two artifact for the diagnostic to audit. The methodology defaults to building the layer-two deliverable first; the change-prediction test runs once the layer-two surface exists and the operator has a chart-to-question mapping to read against.

03

Operators who genuinely only need reporting

Some operators do not have layer-three needs. Stable businesses with steady-state paid programs and no near-term restructure questions can run on layer-two reporting plus quarterly judgment touch-points. The diagnostic still applies; the routing decision is just to keep the layer-two retainer in place and not bolt on a layer-three engagement. The position is not "every operator needs knowledge"; it is "every operator needs to know which layer they are paying for."

04

Heavily seasonal or one-off launch businesses

Brands with a single annual launch window or extreme seasonality have a different prediction problem. The change-prediction test still applies; the time horizon and the structure of the predictions are different. The methodology applies but the standard intervention examples do not always translate cleanly.

Section 08 · What it costs to apply.

The three-layer reporting diagnostic runs in two formats: as the Conversion Second Opinion, for operators who want the read on its own, or as the scoping step of an install engagement. The methodology is the same in either format; the deliverable shape and the engagement length differ.

Diagnostic only

Conversion Second Opinion

$999 · 72-hour verdict

A written diagnostic verdict against the three reporting layers. Read across each layer, the change-prediction test applied to a real intervention the operator is weighing, the gap named, the routing decision documented. No restructure, no implementation. The read.

See the engagement →

Diagnostic plus install

Sprint or System Build

Engagement-scoped · read first, scope second

The diagnostic runs first as the scoping artifact. The Sprint or System Build engagement runs the install of the layer that the diagnostic identified as the gap. Layer-two scope rebuilds the deck spec; layer-three scope installs the change-prediction discipline. Pricing is set against the install scope after the read.

See the engagement formats →

Five Cents · Stan's note


The thing I keep wanting operators to internalize is that a marketing report is not a piece of marketing knowledge. The two read alike on the page. They are produced by different work, they cost different amounts, and they answer different questions. Operators ask the layer-three question (what happens if we change this) of a layer-two deliverable (the deck) and get a layer-two answer (a description of what already happened) and feel let down by the agency for not knowing. The agency is not failing. The deliverable is doing what the deliverable does. The mismatch is in the operator's read, and it has been quietly priced into the relationship from the day the retainer was signed.

What I want operators to take from this position is the change-prediction test. Once a quarter, ask the agency or the in-house team: if we cut this budget by thirty percent for sixty days, what happens, with a confidence interval. If the answer is a number with reasoning, the operator has knowledge. If the answer is a redirection to last month's deck, the operator has reporting. Both are valid; they cost different amounts; they need to be sourced separately. The test takes ten minutes. It surfaces nine months of accumulated misalignment in a single conversation.

What this position is for: if you have a long monthly deck, a working agency relationship, and a quiet sense that nobody can tell you what would actually happen if you cut Meta budget tomorrow, you have this position. The Conversion Second Opinion delivers the verdict in seventy-two hours. The next move is the routing decision; the routing is what the engagement produces, and the routing is what gives every line item in the marketing budget a defensible reason to exist.

Stan Tscherenkow · Marketing Atlas · 2026-05-07
Section 10 · Related Atlas entries.

The Reference pages in the Agency Burn cluster, the Attribution and Shopify clusters that adjoin it, the case files this position was written against, the companion position on activity-versus-judgment, and the hub.

If you read this and recognized your account

Apply the change-prediction test. Then route the work.

The Conversion Second Opinion runs this position against your account in seventy-two hours. A written verdict against the three layers, the change-prediction test applied to a real intervention you are weighing, the gap named, the engagement format that meets the actual need documented. If the verdict says install, the engagement formats are scoped against the read. If the verdict says hold, you keep the read and act on it yourself.