The Sales-Marketing SLA That Actually Stops the Fighting
Almost every B2B company has a sales-marketing SLA somewhere in a Notion doc. Almost none of them work. The document gets written in a moment of peace, gets referenced for two weeks, and then quietly disappears the next time a quarter goes sideways. By the time the lead-quality argument restarts in QBR, no one can remember what the SLA actually said. That's not a discipline problem. That's a design problem. Most SLAs are written to make people feel aligned, not to settle disputes when alignment breaks.
Why most SLAs fail in the first quarter
The standard SLA template you find online has three sections: marketing commits to N MQLs per month, sales commits to following up within X minutes, and both teams agree to meet weekly. Every clause in that template is a measurement of activity, not of outcome. None of it answers the question that actually starts the fight: "is this lead good?"
The second failure mode is asymmetric accountability. Marketing's commitments are measured in volumes the team can directly produce. Sales' commitments are measured in behaviors. When the quarter misses, marketing can prove they hit their number and sales can prove they followed up on time, and neither team has to answer for the only metric that matters: did the pipeline convert? An SLA where both sides can be "compliant" while the business misses plan is not an SLA. It's a press release.
What actually belongs in a working SLA
A working SLA has four sections, in this order. The order matters. Most templates start with volumes; the working version starts with definitions, because every disagreement traces back to one.
1. Lead definition, with examples
Write down what makes a lead qualified, with three concrete examples and three concrete counter-examples. Not "fits ICP and shows intent" — that's a slogan. Write the firmographic bands, the trigger events, the behavioral signals and the disqualifiers. Then put three real lead records next to it and explain why each one passes or fails. The examples are what make the definition operational. Without them, the definition is just text everyone interprets to their own advantage.
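The definition-plus-examples pattern can be sketched as an executable qualification rule. This is a minimal illustration, not a real CRM schema: the field names, the 50-to-5000 employee band, and the two example industries are all assumptions standing in for whatever a real team writes down.

```python
from dataclasses import dataclass

# Hypothetical lead record; field names are illustrative, not a CRM schema.
@dataclass
class Lead:
    employees: int
    industry: str
    trigger_event: bool            # e.g. funding round, leadership hire
    demo_requested: bool
    contact_is_decision_maker: bool

# Assumed ICP band and industries; a real SLA would state its own.
ICP_INDUSTRIES = {"saas", "fintech"}

def is_qualified(lead: Lead) -> bool:
    """Pass only if the lead clears the firmographic band, shows at
    least one intent signal, and hits no disqualifier."""
    fits_firmographics = (50 <= lead.employees <= 5000
                          and lead.industry in ICP_INDUSTRIES)
    shows_intent = lead.trigger_event or lead.demo_requested
    disqualified = not lead.contact_is_decision_maker
    return fits_firmographics and shows_intent and not disqualified

# One example and one counter-example, mirroring the practice above:
passing = Lead(200, "saas", True, False, True)
failing = Lead(12, "saas", True, True, True)   # fails the size band
```

The point of the counter-example is the same as in the written SLA: it forces the definition to say why a lead fails, not just that it does.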
2. Conversion benchmarks, by source
Different lead sources convert at different rates. Webinar leads convert differently from inbound demo requests, which convert differently from outbound replies. The SLA should set expected MQL-to-SQL and SQL-to-opportunity conversion rates for each source, based on the last four quarters of data. When a source falls below benchmark, you know whether the problem is upstream (lead quality) or downstream (sales execution). Without source-level benchmarks, every conversion miss becomes a generic "leads suck" or "sales doesn't work them" argument.
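The benchmark check itself is simple arithmetic. A rough sketch, with made-up counts and benchmark rates in place of the four quarters of real data the SLA would use:

```python
# Hypothetical quarter counts; a real version pulls these from the CRM.
sources = {
    "webinar":      {"mqls": 400, "sqls": 24},
    "inbound_demo": {"mqls": 120, "sqls": 42},
    "outbound":     {"mqls": 200, "sqls": 30},
}

# Assumed per-source MQL-to-SQL benchmarks, for illustration only.
benchmarks = {"webinar": 0.08, "inbound_demo": 0.30, "outbound": 0.12}

def below_benchmark(sources, benchmarks):
    """Return (source, actual_rate) for every source under its benchmark."""
    flagged = []
    for name, counts in sources.items():
        rate = counts["sqls"] / counts["mqls"]
        if rate < benchmarks[name]:
            flagged.append((name, round(rate, 3)))
    return flagged
```

With the numbers above, only the webinar source gets flagged, which is exactly the signal that tells you the argument should be about that channel, not about "leads" in general.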
3. The disqualification protocol
This is the section every SLA skips and the only one that stops the recurring fight. Sales has 72 hours to disqualify a lead with a written reason against a fixed taxonomy: wrong ICP, no budget, no compelling event, wrong contact level, bad timing. Marketing reviews disqualifications weekly and either accepts them or contests them with evidence. Disqualification rates above 30% on any source trigger a joint review. This single mechanism eliminates roughly 80% of the lead-quality arguments we see, because both teams now have a structured way to disagree.
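The protocol's three rules (fixed taxonomy, 72-hour window, 30% review trigger) are mechanical enough to sketch in code. The taxonomy strings and return shape below are illustrative assumptions, not a real workflow tool's API:

```python
from datetime import datetime, timedelta

# The fixed taxonomy from the SLA; any other reason is rejected.
DQ_REASONS = {"wrong_icp", "no_budget", "no_compelling_event",
              "wrong_contact_level", "bad_timing"}

DQ_WINDOW = timedelta(hours=72)
REVIEW_THRESHOLD = 0.30   # DQ rate above this triggers a joint review

def record_disqualification(routed_at: datetime, dq_at: datetime,
                            reason: str) -> dict:
    """Accept a disqualification only inside the 72-hour window and
    only with a reason from the fixed taxonomy."""
    if reason not in DQ_REASONS:
        raise ValueError(f"reason must be one of {sorted(DQ_REASONS)}")
    if dq_at - routed_at > DQ_WINDOW:
        raise ValueError("72-hour disqualification window has passed")
    # Marketing's weekly review flips this flag if it contests the DQ.
    return {"reason": reason, "contested": False}

def needs_joint_review(dq_count: int, lead_count: int) -> bool:
    """True when a source's disqualification rate exceeds 30%."""
    return dq_count / lead_count > REVIEW_THRESHOLD
```

Forcing the reason through a closed set is what makes the weekly review workable: marketing contests a category with evidence, not a vibe.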
4. Joint outcome metrics
The last section is the one that aligns incentives. Sales-sourced and marketing-sourced pipeline are both reported, but the metric that matters in the QBR is total pipeline coverage against quota, owned by both leaders jointly. Bonuses, promotions and headcount discussions all reference the joint metric first, source attribution second. When the joint metric is the one that drives compensation, the lead-quality argument loses most of its political fuel.
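The joint metric reduces to one division. A minimal sketch; the 3x coverage target is a common rule of thumb, assumed here rather than taken from the text:

```python
def pipeline_coverage(sales_sourced: float, marketing_sourced: float,
                      quota: float, target: float = 3.0):
    """Both sources count toward one number: total pipeline / quota.
    The QBR reads this joint figure first, attribution second."""
    coverage = (sales_sourced + marketing_sourced) / quota
    return coverage, coverage >= target
```

For example, $4M of sales-sourced and $5M of marketing-sourced pipeline against a $3M quota yields 3.0x coverage, on plan under the assumed target, regardless of which team sourced more.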
The cadence that keeps the SLA alive
Even the best-written SLA dies if no one looks at it. The cadence that keeps it alive has three layers. Weekly: a 15-minute lead-quality review where the demand gen lead and the SDR manager walk through the disqualification queue and contested leads. Monthly: a 45-minute conversion review where both leaders look at source-level benchmarks and decide where to invest or cut. Quarterly: a 90-minute SLA stress test where the leadership team revisits the lead definition against the last quarter's closed-won data and updates the examples.
That last quarterly stress test is where most SLAs break and the working ones get sharper. Closed-won data shifts every quarter — new segments emerge, old segments cool off, the ICP evolves. An SLA that doesn't update its lead definition against current win data is calibrated to a customer that no longer exists.
Where the SLA can't save you
An SLA cannot fix an undefined ICP. If marketing and sales don't agree on who you sell to, no amount of definition writing will land — every lead becomes a Rorschach test, and every disqualification becomes a debate about taste rather than fit. If your SLA conversations keep circling back to "we just need to define the ICP again," the SLA isn't your bottleneck.
An SLA also cannot compensate for a demand mix overweighted on the wrong channels. If 70% of your pipeline is coming from a source that converts at a third of the others, the conversation isn't about lead quality — it's about channel mix. The SLA's source-level benchmarks make this visible, but fixing it requires a strategic call, not a process tweak.
Finally, an SLA cannot fix sandbagging or coasting on either side. If sales is disqualifying everything to keep their close rate clean, or marketing is gaming MQL volume by lowering the bar, the SLA will surface the pattern, but the executive team has to act on it. The document doesn't have authority on its own.
The role of leadership in keeping the SLA real
SLAs survive when leadership treats violations as visible. The single most effective practice we see is the CRO and the CMO co-presenting the SLA scorecard at every monthly leadership meeting — not just to their own teams, but to the executive group. That visibility eliminates the "marketing leadership says one thing in private, sales leadership says another in private" pattern that erodes most SLAs. When both leaders own the same scorecard publicly, the SLA stops being a process artifact and becomes a leadership commitment.
Leadership also has to be willing to hold the line on the disqualification protocol when sales pushes back. Reps will always prefer to silently ignore leads they don't believe in rather than disqualify them with a written reason — the written reason creates a paper trail that they may have to defend. Holding sales leadership accountable for closing the disqualification queue every week is what makes the protocol operational instead of decorative.
Where to start
If your sales-marketing arguments feel cyclical, don't start by rewriting the whole SLA. Start with one section: the disqualification protocol. Pick a fixed taxonomy of five reasons. Require sales to use them within 72 hours. Set up the weekly review. Run that for one month and watch what happens to the QBR conversation. In most companies, the recurring lead-quality fight quiets within four weeks, because both teams finally have a shared language for the disagreement.
From there, layer in the lead definition with examples, then the source-level benchmarks. Build the SLA in sequence, not all at once. The companies that get this right treat the SLA as a living instrument, not a one-time document — and they usually find that the GTM Diagnostic flags sales-marketing alignment going from a bottom-quartile score to a top-quartile one within two quarters of disciplined SLA work. That's a fast return for what is fundamentally a writing exercise.