AI Operator Training
Stan Consulting · F6.3 · Team preparation
Two sessions that build a shared AI operating standard for your marketing team. The protocol, the tool rationale, and the quality gate are written for your stack and your workflows. Not a workshop curriculum. Not a certificate.
01
Quick answer
AI Operator Training is a $2,500 engagement that installs a shared AI operating standard for a marketing team. Two sessions, 90 minutes each. Session 1 covers tool selection and workflow integration points. Session 2 builds the written protocol around real friction the team surfaced in the interval. Deliverables: shared AI operating protocol, tool selection rationale specific to the team stack, workflow integration map, and a written session summary. The fee is final on confirmation.
A shared standard built between sessions, not prescribed before them.
02
The AI Operator Training is built for a specific team state. Read the six cards. If four or more match, the team is ready for the protocol session. If fewer than four match, start with the AI Marketing Audit first.
01
Individual contributors are running their own ChatGPT accounts or tools. There is no shared login, no shared prompt library, and no common approach to what the tool is for.
02
AI-assisted output goes into production without a review gate. The marketing leader cannot tell which content was AI-drafted and which was not. Quality depends on individual judgment.
03
AI tools are used for one-off tasks. They are not connected to the content calendar, the brief process, the CRM, or the reporting cycle. Each use is a standalone experiment.
04
AI-generated content in a regulated category (healthcare, finance, legal, or any FTC-relevant vertical) goes out without a documented compliance review step. The risk is present and unnamed.
05
Different contributors have adopted different tools based on personal preference. There is no rationale for why one tool over another. Subscriptions are accumulating without a coherent stack decision.
06
The marketing leader cannot reconstruct which AI tools were used in which outputs this month. If a compliance question arrives, there is no record to consult. The work is invisible at the management level.
03
The sessions are not presentations. Both are working sessions with the team. Session 1 produces decisions. Session 2 produces the written protocol. The interval between them is where the protocol gets stress-tested.
Session 1 · 90 minutes
Tool selection rationale
The existing tool usage across the team is mapped. Overlapping tools are identified. A decision is made on which tools the team will use and why, based on the specific workflows in scope. The rationale is written, not assumed.
Existing workflow review
The team walks through the active workflows: content production, brief creation, reporting, and distribution. Each workflow is examined for where AI can close a gap and where it creates a quality or compliance risk if inserted without a gate.
Integration point identification
The three to five highest-value integration points are named and sequenced. These become the scope of the protocol built in session 2. Lower-priority integrations are noted but not built in this engagement.
Session 2 · 90 minutes · after the interval
Friction debrief
The team reports what broke, what slowed down, and what was unclear after using the session 1 decisions in their actual work. This is the data that makes the protocol specific rather than generic. Hypothetical friction does not count; only what actually happened.
Shared operating protocol build
The protocol is written in session 2, not before it. Each rule in the protocol closes a piece of friction the team surfaced in the interval. The result is a document the team actually uses because it answers questions they actually had.
Quality gate and compliance review step
The review gate for AI-assisted output is designed for the team's category. For regulated categories, the compliance review step is explicit: what gets reviewed, by whom, and what the standard is. For unregulated categories, the quality gate still stands.
04
A protocol written before the team has tried the tools is a document that answers questions nobody has asked yet. The interval is the mechanism that changes that.
Session 1 produces tool decisions and an integration map. The team then uses what was agreed for one to two weeks in their actual work. They hit real friction: a prompt that does not fit the brand voice, a workflow step where the AI output needs more review than expected, a compliance question nobody had anticipated.
That friction is the material session 2 works with. The written protocol is built around it. Each rule in the protocol closes a specific problem the team encountered, not a theoretical one. This is what makes the protocol useful past the first week after delivery.
A single six-hour workshop can produce the same volume of output. It cannot produce the same quality of protocol, because no team has surfaced their real friction in the same room where the protocol is being written. The interval creates the gap between "here is how we think it will work" and "here is how it actually works for this team."
Session 1
Tool decisions made. Integration points named. Team leaves with clear scope for the interval period: these tools, these workflows, these integration points.
The Interval · 1 to 2 weeks
Team uses the agreed tools in real workflows. Friction surfaces. Edge cases appear. Questions that could not have been anticipated in session 1 become concrete.
Session 2
Friction debrief opens the session. Written protocol is built around actual friction, not predicted friction. Quality gate is designed with real cases in hand. Protocol closes the engagement.
05
The engagement does not close with a presentation deck or a slide summary. Each deliverable is a working document the team refers to after the sessions end. The $2,500 fee covers the two sessions and all four documents.
01
Shared AI operating protocol
The primary document. Covers which tools the team uses and for which tasks, the quality gate for AI-assisted output, the compliance review step, and the handoff procedure for work that moves between contributors. Written in session 2, specific to this team and this stack.
02
Tool selection rationale
Written documentation of why each selected tool was chosen for this specific operation, what it replaces or augments, and what the decision criteria were. Not a generic comparison chart. The rationale document is for onboarding new contributors and for revisiting the stack decision when tool options change.
03
Workflow integration map
A written map of where AI tools enter the existing workflows: which step, what the input is, what the expected output is, and where the human review gate sits. The map is built against the team's actual workflows, not a theoretical content production process.
04
Written session summary
A written record of both sessions: decisions made, rationale stated, friction identified in the interval, and how each piece of friction was addressed in the protocol. The summary is the audit trail for the engagement and the reference point if the protocol needs revisiting after six months.
06
Both sessions run on Zoom. In-person delivery is available for operators in markets where the team has an active presence (New York, Los Angeles, Texas, Germany, Israel, and the greater Sacramento area). In-person delivery does not change the price or the session structure; contact the team before booking if in-person is preferred.
Attendance is one team per booking. The sessions are built around the team's specific stack and workflows; adding attendees from other departments dilutes the working quality. The recommended attendance is the marketing leader and the contributors who will use the tools daily. Three to eight people is the working range. A larger session is a different scope; contact the team before booking.
This is not a company offsite or an all-hands training. It is a working session for the people who will operate the tools. Executives who will not use the tools daily are welcome to attend session 1 for context but are not the primary audience for session 2. Session 2 is for the operators.
Price · $2,500 one-time
Format · 2 sessions · 90 min each
Delivery · Zoom or in-person
Attendance · 1 team · 3–8 people
Interval · 1–2 weeks between sessions
Fee policy · Final on confirmation
07
This engagement is for
This engagement routes to something else
08
Is there a cap on attendance?
There is no fixed cap, but the sessions work best with the marketing leader and the contributors who will actually use the tools daily. In practice this is three to eight people. Larger attendance dilutes the working quality because the protocol must be built for specific workflows, not delivered to a general audience. If the whole company needs to attend, that is a different scope. Write to the team at [email protected] before booking if attendance exceeds eight.
Can the sessions be recorded?
Yes. Both sessions can be recorded by the team for internal reference. The written deliverables (the shared AI operating protocol, tool selection rationale, workflow integration map, and session summary) are the primary reference documents; the recording supplements them. Stan Consulting does not retain or distribute session recordings.
What if the team has no AI tools in place yet?
The training is still appropriate. Session 1 covers tool selection rationale for the specific stack and workflow, which is the correct starting point for a team that has not yet committed to a toolset. The output of session 1 is a tool selection decision for the team, not a review of existing usage. Teams with no prior AI use frequently find session 1 the most valuable part of the engagement. If the team has no existing marketing workflows at all, the AI Marketing Audit is the better entry point.
What happens after the engagement ends?
The AI Operator Training closes with a written protocol and session summary. Ongoing support is available through the AI Stack Retainer ($3,000 per month, 3-month minimum), which covers tool maintenance, workflow monitoring, and protocol updates as the stack evolves. The training does not auto-enroll into the retainer. Each is a separate engagement with a separate agreement.
How is this different from a generic AI workshop?
A generic AI workshop delivers the same curriculum to every team. The AI Operator Training builds a protocol specific to this team, this stack, and this category. Tool selection rationale is written for the tools this team will actually use. Workflow integration points are mapped against existing workflows, not a theoretical content production process. The session 2 protocol is built around friction this team surfaced after session 1, not edge cases prepared in advance by the facilitator. The deliverables are not a slide deck summarizing what was discussed. They are working documents the team uses after the engagement ends.
What happens in the interval between sessions?
The team uses what was agreed in session 1 in real work and brings the friction back to session 2. Session 1 sets the tool selection and identifies the integration points. The interval, typically one to two weeks, is where the team encounters actual friction in actual workflows. A prompt does not fit the brand voice. A compliance question surfaces that was not anticipated. A workflow step takes longer than expected with the AI tool in it. Session 2 opens with a debrief of that friction and builds the written protocol around it. The interval is not downtime. It is data collection.
Can this run alongside the AI Workflow Build?
Yes. The AI Operator Training and the AI Workflow Build are designed to run in parallel or in sequence. When the team needs a shared standard before AI tools deploy at scale, the training runs first or concurrently with the build. The AI Marketing Audit identifies the correct sequence for the specific operation. Operators entering the AI Workflow Build without the training will typically find that the build surfaces the same protocol questions mid-engagement. Running the training first makes the build cleaner.
Section 09 · Book the training
Eight fields. Two acknowledgments. The booking is confirmed after fit is established. Not before. Response within one business day. The $2,500 fee is confirmed at booking, not at intake submission.
Book the training
The protocol is built for this team, this stack, and this category. Not a workshop curriculum adapted from last month's cohort. The tool selection rationale, workflow integration map, quality gate, and session summary are written during the engagement and owned by the team after it.
Book the Training · $2,500
Typical entry is from the AI Marketing Audit findings. Running solo and running alongside the AI Workflow Build are both valid. Not sure which comes first? Start with the audit.