Every B2B team running outbound eventually hits the same question: how do we know if our numbers are actually good? Reply rates, meeting rates, cost per meeting — these metrics mean nothing without a benchmark to measure against. This post gives you the benchmarks, the context behind them, and an honest picture of what separates high-performing outbound programs from average ones in 2025.

These benchmarks reflect data from programs running AI-augmented outbound across financial services, technology, and professional services verticals. They are not industry-average numbers from a survey — they represent what well-run programs actually produce.

Cold Email Benchmarks

Cold email remains the highest-volume channel in most B2B outbound programs. Here is what the numbers look like for programs that are running well:

  • Open rate: 35–55% — Anything below 30% is a deliverability problem, not a copy problem. Fix your sending domain reputation before testing subject lines.
  • Reply rate: 3–8% — The range is wide because it is highly dependent on ICP precision and personalization depth. Generic sequences land at 1–2%. Research-backed, GPT-4-personalized sequences consistently hit 5–8%.
  • Positive reply rate: 15–25% of total replies — Most replies are objections. A positive reply rate above 20% of total replies indicates strong ICP fit and message resonance.
  • Meeting rate from positive replies: 60–80% — If you are below 60%, your AI reply handling or human handoff process is losing warm prospects.

The compounding math: at a 6% reply rate, a 22% positive-reply share, and a 70% meeting booking rate, you convert roughly 1 in 100 prospects into a booked meeting. That means a list of 2,000 verified, well-targeted prospects should produce approximately 20 meetings — which is precisely why we use 20 meetings per 90-day pilot as our guarantee threshold.
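The funnel arithmetic works out as follows. A quick sketch in Python using the rates quoted above — treat them as illustrative inputs, and swap in your own program's numbers:

```python
# Funnel math: prospects -> replies -> positive replies -> booked meetings.
# Rates below are the benchmark figures quoted above; adjust for your program.
def meetings_from_prospects(prospects, reply_rate=0.06,
                            positive_share=0.22, booking_rate=0.70):
    replies = prospects * reply_rate
    positive = replies * positive_share
    return positive * booking_rate

prospects_per_meeting = 1 / (0.06 * 0.22 * 0.70)
print(round(prospects_per_meeting))              # ~108, i.e. roughly 1 in 100
print(round(meetings_from_prospects(2000), 1))   # ~18.5 meetings from a 2,000-prospect list
```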

LinkedIn Outreach Benchmarks

  • Connection acceptance rate: 25–40% — Rates above 40% typically indicate you are targeting people you have some existing relationship signal with (mutual connections, engagement on their content). Cold connections to strangers land at 20–30%.
  • Message reply rate (after acceptance): 8–18% — LinkedIn messages outperform cold email on reply rate but require more manual-feeling personalization. Automation-heavy LinkedIn sequences drop to 3–5%.
  • Meeting booking rate from LinkedIn positive replies: 50–65% — Slightly lower than email because LinkedIn conversations tend to be longer before a call is proposed.

Multi-Channel Outreach Benchmarks

Programs that combine email, LinkedIn, and retargeting ads consistently outperform single-channel programs on meeting rate by 40–60%. The mechanism is simple: a prospect who has seen your name in three contexts treats your email as coming from someone they know rather than as a cold contact.

  • Email-only meeting rate: ~1 meeting per 120–150 prospects
  • Email + LinkedIn meeting rate: ~1 meeting per 80–100 prospects
  • Email + LinkedIn + retargeting meeting rate: ~1 meeting per 60–80 prospects
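Inverting those figures into meetings per 1,000 prospects makes the channel lift easier to compare. A short sketch using the midpoints of the quoted ranges (illustrative only):

```python
# Prospects needed per meeting, by channel mix (midpoints of the ranges above).
channel_mix = {
    "email only": 135,                     # midpoint of 120-150
    "email + linkedin": 90,                # midpoint of 80-100
    "email + linkedin + retargeting": 70,  # midpoint of 60-80
}

# Convert to meetings per 1,000 prospects for an apples-to-apples view.
for mix, prospects_per_meeting in channel_mix.items():
    meetings = 1000 / prospects_per_meeting
    print(f"{mix}: {meetings:.1f} meetings per 1,000 prospects")
```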

The retargeting layer does not book meetings directly — it maintains visibility with warm prospects who have engaged but not replied, reducing the time-to-book for those who eventually do convert.

Meeting Quality Benchmarks

Volume is not the point. Meeting quality — whether the people showing up have budget, authority, need, and timeline — is where most outbound programs fall apart. Here is what to track:

  • Show rate: 75–85% — Below 70% suggests either poor ICP targeting (meetings being booked with people who should not have been) or a weak confirmation sequence.
  • SQL rate from meetings: 25–40% — The percentage of meetings that move into an active sales opportunity. Below 20% is a qualification problem — the ICP is too broad.
  • Close rate from SQLs: 20–35% — Varies significantly by deal size and sales cycle length. Shorter cycles and lower ACV close faster.
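To see how these quality rates compound downstream, here is a small illustrative calculation using the midpoints of the ranges above (not a forecast — substitute your own rates):

```python
# Downstream funnel: booked meetings -> held -> SQL -> closed deal.
# Defaults are midpoints of the benchmark ranges above.
def deals_from_meetings(meetings, show_rate=0.80,
                        sql_rate=0.325, close_rate=0.275):
    held = meetings * show_rate
    sqls = held * sql_rate
    return sqls * close_rate

# A 20-meeting quarter at midpoint rates yields roughly 1-2 closed deals.
print(round(deals_from_meetings(20), 2))  # 1.43
```

This is why the quality benchmarks matter as much as volume: the same 20 meetings at bottom-of-range rates produce well under one deal.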

Cost Per Meeting Benchmarks

This is the number that matters for ROI calculation. Here is the range across different outbound models:

  • Human SDR team (3 reps): $400–$900 per meeting — When you factor in fully loaded cost ($210K+ per year) divided by realistic meeting output (20–30 qualified meetings per month across the team).
  • Outbound agency (retainer model): $300–$600 per meeting — Retainers typically run $8K–$15K/month. Output varies significantly by agency quality.
  • Managed AI SDR system: $150–$350 per meeting — At $84K/year all-in producing 20+ meetings per month, the math works out to approximately $350 per meeting at the minimum performance threshold, with significant improvement as the system matures.
  • DIY with AI tools: $80–$200 per meeting (tool cost only) — But this excludes the internal labor cost of managing the system, which in practice adds $150–$300 per meeting in hidden overhead.
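Each per-meeting figure above is simply annual program cost divided by annual meeting output. A quick sketch using the numbers quoted in the bullets:

```python
# Cost per meeting = annual program cost / annual meeting output.
# Inputs are the figures quoted in the bullets above.
def cost_per_meeting(annual_cost, meetings_per_month):
    return annual_cost / (meetings_per_month * 12)

print(round(cost_per_meeting(210_000, 30)))  # 3-rep SDR team, high output: ~$583
print(round(cost_per_meeting(210_000, 20)))  # 3-rep SDR team, low output: ~$875
print(round(cost_per_meeting(84_000, 20)))   # managed AI system at threshold: $350
```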

What Separates Top-Quartile Programs from Average Ones

After running hundreds of outbound programs, the differences between the top 25% and the middle 50% consistently come down to four things:

  • ICP precision — Top programs have ICP definitions specific enough that the team could write a fully personalized email from the criteria alone. Middle programs have ICP definitions that describe an entire market.
  • Personalization depth — Top programs use company-specific research triggers (a recent hire, a news mention, a job posting signal). Middle programs use variable substitution: "Hi [First Name], I noticed you work at [Company]."
  • Reply handling speed — Top programs respond to positive replies within minutes. Middle programs respond within hours. Every hour of delay in a B2B buying conversation reduces conversion probability by a measurable margin.
  • Monthly optimization cadence — Top programs test and make decisions on subject lines, opening hooks, and ICP segments every month. Middle programs set and forget for quarters at a time.

The good news: all four of these are system problems, not headcount problems. They are fixable with the right infrastructure regardless of whether you are running a 3-person SDR team or a fully managed AI outbound system.