Stan Consulting · Marketing Atlas · Case File · Agency Burn
case_type: composite · cluster: agency-burn · published: 2026-05-07
A B2B SaaS company at three-point-eight million ARR. Forty-eight thousand monthly paid spend across LinkedIn, Google Search, and a small G2 program. The marketing function was a fractional CMO plus an outside agency on the paid program. The monthly board deck had forty-seven charts. None of them was the cost per qualified lead at the SDR-handoff stage. None of them was the gross margin contribution from paid acquisition. None of them was the payback period by channel. None of them was the relationship between paid acquisition and net-new ARR.
That is the composite. The names change. The shape does not.
The company sold a mid-market workflow tool with an annual contract value of around eighteen thousand and a sales-assisted motion. SDRs handled MQL-to-SQO. Account executives closed. The marketing org was the fractional CMO running ten hours weekly, an internal demand-gen manager, and an outside agency on a fourteen-thousand monthly retainer covering the LinkedIn and Google programs. The fractional CMO and the agency had been on the account for seven months at the point the audit was scoped.
The reporting was layered. The agency produced a weekly pull on campaign performance. The fractional CMO assembled the monthly board deck by combining the agency pull, a HubSpot dashboard export, a Salesforce report on opportunities, and a Stripe report on net-new ARR. The deck was forty-seven charts long, forty-five pages, and arrived in the board observer's inbox the Friday before the Monday board meeting each month.
The board read slide twenty-seven for ten minutes. Slide twenty-seven was a campaign-performance breakdown showing CTR, CPC, CPM, and platform-attributed conversions across LinkedIn and Google. The board asked the same three questions every month. Are we acquiring customers profitably. Is the paid program contributing to ARR growth. What is the payback period. The fractional CMO did not have direct answers from the deck. The deck was full of activity numbers and was missing every margin number. The board would discuss for ten minutes, get tactical answers in place of strategic ones, and sign the next quarter's budget anyway.
The CFO had been the one to scope the audit. The CFO had inherited the deck format from the previous marketing leader and had not had the bandwidth to challenge it. After three quarters of the same pattern, the CFO sent the deck to an outside reader and asked: tell us whether this deck is answering the board's question, and if not, what the deck should look like.
The deck contained forty-seven charts. The board needed seven numbers. The mismatch is what the audit was scoped to characterize. Below: a sample of what the deck contained, and the count of margin-relevant numbers it actually answered.
In the deck: 47 charts in the September monthly board deck
In the deck: 45 pages in the deck PDF the board observer received
In the deck: 12 charts on platform-attributed conversions and CTR breakdowns
In the deck: 8 charts on creative fatigue and audience-segment trends
In the deck: 6 charts on dayparting heatmaps and geo splits
In the deck: 5 charts on attribution-model comparisons across HubSpot, Google, and LinkedIn
Margin numbers needed: 7 margin-relevant numbers the board read should have led with
Margin numbers in the deck: 0 of those seven numbers actually answered in the deck's 47 charts
The deck was answering activity questions. The board was asking margin questions. The two reads did not intersect. The fractional CMO had inherited the deck format and the agency was producing the chart pulls; nobody had stepped back to ask whether the chart inventory matched the board's read pattern. The deck reproduced itself month after month against the agency's natural reporting cadence, while the board kept asking the same three margin questions and getting tactical answers.
The September board meeting was where the gap calcified. The CFO had quietly tracked which questions the board kept asking and which questions the deck kept answering. The two lists did not overlap. The CFO scoped the audit the following day.
Four explanations were on the table when the audit started. Each one was almost-right and let the team avoid the structural fix.
"More charts is more transparency." The fractional-CMO read. The argument is that the board appreciates seeing the underlying work and that a forty-five-page deck signals diligence. The argument fails because transparency at scale becomes opacity. Forty-seven charts present forty-seven questions to the board's attention. The board has ten minutes. The board cannot weight forty-seven questions in ten minutes, so the board reads the chart that catches the eye and ignores the rest. Transparency without selection is just a longer version of hiding the answer.
"The agency cannot produce the margin numbers; that is finance's job." The agency-side defense. The argument is that the agency has access to LinkedIn Ads Manager and Google Ads but does not have access to Stripe, Salesforce contract values, or gross-margin data. The argument is technically true and operationally a deflection. Producing the margin numbers requires assembling data from finance into the deck. Either the agency does it (with finance handing over the inputs), the fractional CMO does it (in the deck assembly), or finance does it (in a separate pack). Somebody has to. The deck was structured so that nobody had to and so nobody did.
"The board reads slide twenty-seven; just keep slide twenty-seven good." The pragmatic-streamlining read. The argument is that since the board only reads one slide, all the work should go into that one slide. The argument fails because the board reads slide twenty-seven only because slide twenty-seven is what the eye lands on in a forty-seven-chart deck. The board would read a different slide if a different slide were the answer. The fix is not to optimize slide twenty-seven; the fix is to put the answer on slide one and remove the noise around it.
"The board does not really want to know; they want to feel reassured." The cynical-political read. The argument is that the board does not really want margin numbers because margin numbers might be uncomfortable, and the deck's job is to provide cover. The argument fails as a description of the September meeting. The board kept asking the margin questions explicitly. The board was getting deflected with tactical answers. The board was not signing the budget because they were satisfied; the board was signing the budget because the alternative (firing the marketing org) was a larger decision than the board was willing to make on a Monday afternoon.
All four explanations let the team defer the decision the audit was scoped to force. The structural defect was upstream of any specific chart. None of the explanations went there.
The deck was a compilation, not a report. The agency assembled the activity charts because activity charts are what the agency's tools produce. The fractional CMO assembled the deck because someone had to. Nobody owned the question of what the deck was for. The board read it as if it were a report. The deck was not a report. The deck was a furniture-arrangement of agency outputs.
The audit decomposed the deck failure into four named structural defects sharing a single root: there was no document specifying which seven numbers the deck must lead with, in what order, against which time series. Without that document, the deck reproduced itself month after month as a compilation of whatever the agency tools made easy to produce.
Defect one. The deck was scoped to the agency's data sources, not the board's questions. The agency had LinkedIn Ads Manager, Google Ads, and the HubSpot pull. Those three surfaces produce campaign-performance, audience, creative, and attribution-model charts. They do not produce gross margin contribution or payback period or net-new ARR. The deck was the union of charts the agency could produce. The board's questions were the intersection of the activity numbers and the financial numbers. The intersection was not in the deck because the agency was not asked to produce it.
Defect two. The fractional CMO assembled the deck without a deck spec. The fractional CMO was operating ten hours a week. The deck was assembled by combining the agency pull and the HubSpot dashboard exports. There was no written specification for what the board deck should contain, in what order, with which inputs from finance. Without the specification, every month's deck looked like the previous month's deck. Iteration on the deck format never happened because nobody owned the format question and nobody had the time to build a new format from scratch.
Defect three. The CFO had not handed over the inputs the deck needed. Gross margin contribution requires CAC, COGS, and gross margin per ACV as inputs. Payback period requires ACV per cohort plus monthly contribution margin. Net-new ARR by channel requires Salesforce-to-source attribution plus contract values from Stripe. None of these inputs were flowing into the deck because the deck assembler never asked for them and the CFO never volunteered them. The data existed; the routing did not.
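The routing gap is arithmetic, not engineering. A minimal sketch of two of the missing computations, assuming illustrative inputs; every value and variable name below is hypothetical, not taken from the engagement:

```python
# Hypothetical inputs. In the case, these would route from Stripe,
# Salesforce, and the finance close, not from ad-platform pulls.
acv = 18_000             # annual contract value, dollars
gross_margin_pct = 0.78  # blended gross margin on the product (assumed)
paid_spend = 48_000      # monthly paid spend
new_customers = 4        # paid-sourced closed-won this month (assumed)

cac = paid_spend / new_customers                    # blended CAC
annual_gm_per_customer = acv * gross_margin_pct     # gross margin per ACV
monthly_contribution = annual_gm_per_customer / 12  # monthly contribution margin

# Gross margin contribution from paid acquisition, per customer, year one
gm_contribution = annual_gm_per_customer - cac

# Payback period: months of contribution margin needed to recover CAC
payback_months = cac / monthly_contribution

print(f"Blended CAC: ${cac:,.0f}")
print(f"GM contribution (yr 1, per customer): ${gm_contribution:,.0f}")
print(f"Payback: {payback_months:.1f} months")
```

With the agency's spend number and two finance inputs, the numbers fall out in a dozen lines. The defect was that nobody routed the finance inputs, not that the math was hard.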
Defect four. The board did not enforce a deck format. The board's only real lever in this configuration was the budget approval. The budget approval was happening every quarter regardless of whether the deck answered the board's question. The board had no enforcement mechanism short of firing the marketing org. The board chose not to fire the marketing org. The deck format reproduced itself for nine months because the board's verbal questions were not backed by an "answer this or no budget" enforcement.
Four defects, one missing artifact: the deck specification. The whole failure mode runs through that single missing document. The deck was never wrong on its own terms. The deck was wrong because there was no document defining what its terms should be.
The decomposition reads in three layers. The data, the report, and the read against the report. The data was largely there. The report was an unspecified compilation. The read was a board question with no operator-side artifact to anchor against.
The data layer was largely intact. LinkedIn Ads firing its pixel and the Conversions API. Google Ads running enhanced conversions. HubSpot tracking lifecycle stages. Salesforce mapping opportunities to source. Stripe holding contract-value data. The integrations were not glitch-free but they were functional. Most of the margin numbers the deck needed were retrievable with thirty minutes of engineering and a bit of glue.
The technical layer was not the source of the deck failure. The data was there and the deck was not reading it. That is a layer-three problem, not a layer-one problem.
The reporting layer was where the forty-seven-chart compilation lived. The agency was producing pulls against its own tools. The fractional CMO was assembling them into a deck. The deck was a compilation, not a report. A report has a thesis: this is the question, here is the answer, here is the supporting evidence. A compilation has a chart inventory: here are all the charts, in roughly the order the agency produced them, with section headers tying them together.
The board's questions were not decomposable into the deck's chart inventory. The board kept asking margin questions and getting activity answers because the deck did not contain a margin section. The reporting layer was failing not because individual charts were wrong but because the inventory was the wrong inventory.
The judgment layer was where the board's question lived without a corresponding artifact to read against. The board kept asking margin questions and getting tactical answers, then signing the budget anyway because the alternative (firing the marketing org) was a bigger decision than the data warranted on its own. Without an operator-side artifact addressing the margin questions, the board had no anchor for the harder conversation.
The fix was the deck specification: the seven margin numbers, in order, with the source for each, plus the supporting evidence in the appendix. Once the spec was written, the deck reorganized in a week. The board had its margin read on slide one. The harder conversations could happen because there was now an artifact to have them against.
The audit's written verdict named the install order. The deck was rebuilt from a forty-five-page compilation into a five-slide report plus an appendix. The seven margin numbers led the read. The activity charts moved to the appendix. The board's question had a place to land.
The audit fed into the Conversion Second Opinion engagement format and from there into a thirty-day install. The five-slide replacement framework below is what was installed.
The seven numbers, in order, with the source for each: cost per qualified lead at SDR-handoff, blended CAC across paid channels, gross margin contribution from paid acquisition, payback period by channel, net-new ARR sourced from paid, MQL-to-SQO conversion rate, SQO-to-close conversion rate. Each number has a defined formula, a defined source, a defined refresh cadence, and a defined owner. The seven-number spec is the deck-spec foundation.
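The spec is small enough to live in a single structured file. One way to sketch it as data; all formula, source, and owner values here are illustrative placeholders, not the engagement's actual artifact:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpecLine:
    order: int
    metric: str
    formula: str   # plain-language formula, not executable
    source: str
    cadence: str
    owner: str

# Hypothetical deck spec: seven numbers, in order, each with a
# formula, source, refresh cadence, and owner.
DECK_SPEC = [
    SpecLine(1, "Cost per qualified lead at SDR handoff",
             "paid spend / SDR-accepted leads", "HubSpot + finance", "monthly", "demand-gen manager"),
    SpecLine(2, "Blended CAC across paid channels",
             "total paid spend / paid-sourced closed-won", "ad platforms + Salesforce", "monthly", "fractional CMO"),
    SpecLine(3, "Gross margin contribution from paid acquisition",
             "(ACV x gross margin) - CAC", "Stripe + finance", "monthly", "finance"),
    SpecLine(4, "Payback period by channel",
             "channel CAC / monthly contribution margin", "Stripe + finance", "monthly", "finance"),
    SpecLine(5, "Net-new ARR sourced from paid",
             "sum of paid-sourced contract values", "Salesforce + Stripe", "monthly", "finance"),
    SpecLine(6, "MQL-to-SQO conversion rate",
             "SQOs / MQLs", "HubSpot", "monthly", "demand-gen manager"),
    SpecLine(7, "SQO-to-close conversion rate",
             "closed-won / SQOs", "Salesforce", "monthly", "fractional CMO"),
]

# A spec is only a spec if it is complete and ordered.
assert [s.order for s in DECK_SPEC] == list(range(1, 8))
```

The point of the structure is the four required fields per line: a metric with no named source or owner is exactly the gap that let the original deck reproduce itself.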
Slide one. The seven numbers, this month versus last month versus three-month trailing average versus target. No charts. A table. The slide is the answer to the board's recurring question. Everything else in the deck supports this slide. If the board only reads this slide, the board has the answer. The agency does not produce this slide; finance plus the fractional CMO produce it together from monthly close inputs.
Slide two: channel-level CAC and payback in one chart, the channels read against one another. Slide three: cohort-level gross-margin contribution by acquisition month, showing how the paid program's economics evolve as cohorts age. Slide four: pipeline coverage and SDR-funnel health, the activity charts that actually predict the next quarter. Slide five: budget allocation and proposed reallocation against the read. Five slides. The board reads it in ten minutes and arrives at the meeting with a defined question instead of a vague one.
The agency continues to produce its weekly pulls and its monthly chart inventory. Those charts live in the deck appendix as supporting evidence. The board can pull on any chart in the appendix; the appendix is searchable by chart number. The appendix is not the read. The appendix is the source data the read sits on. The agency's deliverable does not change; what changes is where the agency's deliverable lands in the deck.
The November deck is the rebuilt deck. Five slides plus the appendix. The board reads slide one for four minutes, slides two through five for six minutes, asks the three usual questions, gets direct answers from slide one. The conversation that follows is about the channel-level reallocation in slide five, not about which chart in the agency pull was most interesting. The CFO has the deck she scoped the audit to produce.
The agency retainer is renegotiated. The agency produces the appendix charts on the same cadence and the same scope. The fractional CMO's hours are reallocated from deck-assembly to slide-one ownership. Finance's monthly close inputs are routed into slide one as part of the close itself. The deck is now an artifact the team produces together, not a compilation the agency hands over. The board has a working contract.
A deck is not a report unless someone wrote down what the report is for. The fractional CMO had inherited a deck format and the agency was producing the chart pulls; nobody had written the spec that says this deck answers these seven questions in this order. Without the spec, the deck reproduced itself as a compilation of what was easy to produce. The board's questions never matched, because nobody had ever asked the deck to match.
The CFO's audit question was not really a reporting-volume question. It was a deck-spec question. The forty-seven charts were a symptom; the missing spec was the cause. Once the spec was written and the deck rebuilt around it, forty-seven charts were not the problem; forty-seven charts were just the supporting evidence in the appendix. The same data, the same agency, the same fractional CMO, a different first slide. The board got its read in week four.
The lesson is that any B2B SaaS company running paid spend through an outside agency or a fractional CMO needs the deck spec before the next quarterly board meeting. The spec is seven numbers, in order, with sources. Writing it takes a morning. Its absence costs the marketing function the credibility to defend its budget on a margin basis instead of an activity basis. The default reporting stack does not include the spec. The default is a compilation that grows by chart count and shrinks in usefulness.
Five Cents · Stan's note
The thing that struck me about this case, and I see versions of it a lot in the funded SaaS bracket, is how confidently a forty-five-page deck can hide the fact that nobody knows the answer to the board's question. The deck looks like work. The deck reads like rigor. The deck has charts with subtitles and methodologies and tooltips. None of that adds up to a read. A read is one slide with seven numbers and a thesis. Everything else is supporting evidence for the read. When the read is missing and the supporting evidence is forty-five pages long, the team is performing reporting in place of producing it.
I want operators to take from this that the test of a marketing report is not its volume. The test is whether a board member with ten minutes can answer the three questions that matter from page one. Cost per qualified lead, gross margin contribution, payback period. If the answer is yes, every additional chart is supporting evidence and the deck is doing its job. If the answer is no, no quantity of additional charts fixes it. The fix is the spec, written, signed, and held to. The agency cannot write it because the spec requires finance inputs. Finance cannot write it alone because it requires the marketing context. The fractional CMO was the natural author and never had the time. So the spec did not get written. So the deck did not get fixed. So the board kept asking the same three questions and signing the budget anyway.
What this case file is for: if your monthly board deck has more than fifteen charts, your board reads it for less than fifteen minutes, and the questions the board keeps asking are not directly answered on slide one, you have this case file. The Conversion Second Opinion delivers the verdict in seventy-two hours. The next move is the spec; the spec is what the engagement produces.
Each link below points at a related Atlas page that handles a piece of the case file in more depth. Reference pages give the definition. Position pages give the firm's defended doctrine. The hub gives the map.
If this is the pattern in your account
If the case file maps to your account — a long deck, a board reading slide twenty-seven, the same three margin questions every quarter, no direct answer in the chart inventory — the engagement that runs this diagnostic is the Conversion Second Opinion. A written verdict against the deck-spec framework, delivered in seventy-two hours. If the verdict says install, the Sprint engagement runs the rebuild. If the verdict says hold, you keep the read and act on it yourself.