How We Booked 2,230 Meetings in 2025: What Actually Worked
Mitchell Keller
Founder & CEO, LeadGrow · Managed 3,626+ cold email campaigns. 6.74% average reply rate. Booked 2,230+ meetings in 2025.
TL;DR
- 2025 totals: 3,626 campaigns managed, 6.74% average reply rate (3 to 4x the industry average), 12.53% positive reply rate, 2,230+ meetings booked.
- Specificity was the biggest performance driver. Hyper-specific campaigns hit 21.7% reply rates; broad ones hit 2.9%. A 7x difference.
- Winning patterns: hypothetical questions, binary CTAs, "forgot to mention" follow-ups, named case studies, and specific numbers over vague claims.
- The 12% positive reply rate gate: never scale a campaign until positive replies exceed 12% of total replies. Below that, the message is not working.
- 80/20 rule for scaling: 80% of volume goes to winners, 20% keeps testing new angles. Never go all in on a single message.
This is not a theoretical framework. These are actual cold email results from 2025 across every campaign we managed. Every number is pulled from our campaign management platform. Every pattern is backed by real data.
2,230 meetings booked. 3,626 campaigns managed. Here is everything that worked, everything that failed, and the patterns that separated the top performers from the rest.
The Aggregate Cold Email Results: 2025 Numbers
| Metric | 2025 Result | Industry Average |
|---|---|---|
| Campaigns managed | 3,626 | N/A |
| Average reply rate | 6.74% | 1 to 2% |
| Positive reply rate | 12.53% of replies | 5 to 8% of replies |
| Leads per positive reply | 175:1 | 300 to 500:1 |
| Meetings booked | 2,230+ | N/A |
6.74% average reply rate across 3,626 campaigns. Industry average is 1 to 2%. That means our campaigns generate 3 to 4x the response rate of a typical cold email program.
*Source: LeadGrow EmailBison Campaign Dashboard, December 2025*
These results span multiple industries (EdTech, DevTools, data centers, manufacturing, SaaS), multiple geographies (US, Canada, UK, New Zealand, Australia), and multiple campaign types (cold outreach, event-based, signal-based).
The Biggest Driver of Cold Email Results: Specificity
If there is one finding from 3,626 campaigns that matters more than any other, it is this: specificity drives performance. The more specific your targeting and copy, the higher your reply rate. This is not a small effect. It is the single largest variable in our entire dataset.
| Specificity Level | Example | Average Reply Rate |
|---|---|---|
| Hyper-specific (state regs, deadlines) | NJ county coordinators, April 30th NJDEP deadline | 21.7% |
| Industry-specific (vertical + pain) | SaaS companies posting blogs, Reddit distribution | 7.5% |
| Segment-specific (geo + stage) | Toronto founders, Series A | 6.1% |
| Broad (all businesses in category) | New businesses under 3 years | 2.9% |
21.7% versus 2.9%. A 7x difference. Same agency, same infrastructure, same methodology. The only variable was how specific the targeting and copy were.
This is why we obsess over situation mining. We do not target "SaaS companies." We target "SaaS companies with a founder led sales motion who just hired their first SDR and are posting on LinkedIn 3x per week." That level of specificity changes the entire response curve.
What Separated High Performers from Low Performers
The Testing Sprint
Our highest performing campaigns all went through an aggressive testing sprint in month 1. The structure is the same every time.
- Day 1: Launch 12 variants across 3 angles. Each angle gets 4 variants, each testing a single variable.
- Day 4: Data comes in. We know which angles are generating replies and which are dead. We take winning principles from the top performers and apply them to the underperformers.
- Week 2: Second round of tests incorporating the learnings. 12 more variants.
- Month end: 24 to 48 total tests completed. Winning campaign identified and ready to scale.
The teams that skip the sprint phase and go straight to "one good email at scale" almost always underperform. Testing velocity in the first 30 days is the strongest predictor of long-term campaign success in our data.
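As a rough sketch, the day-1 launch grid described above can be generated programmatically. The angle names below are illustrative placeholders, not from the playbook itself:

```python
from itertools import product

def sprint_variants(angles, variants_per_angle=4):
    """Build the day-1 test grid: each angle gets N variants,
    each variant isolating one variable to test."""
    return [f"{angle}-v{i}"
            for angle, i in product(angles, range(1, variants_per_angle + 1))]

# 3 angles x 4 variants = the 12-variant day-1 launch
grid = sprint_variants(["pain", "worldview", "signal"])
```

Two rounds of a grid like this, plus follow-on iterations, is how a campaign reaches the 24 to 48 tests by month end.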
Offer Frame Over Personalization
This was counterintuitive. We expected hyper personalized emails to outperform everything else. They did not.
Well-framed offers (the same service positioned through a specific worldview) outperformed personalized emails (using {{first_name}}, {{company}}, and custom observations) in most head-to-head tests. One example: a LinkedIn founders campaign using Reddit pain framing, with zero AI personalization, hit a 36% positive reply rate. A heavily personalized variant of the same offer came in at 15%.
The takeaway is not that personalization is bad. It is that framing is more important. Get the frame right first. Then add personalization as the cherry on top.
Worldview Alignment Over Pain-Based Copy
Our data showed an interesting pattern with worldview-aligned copy (copy that aligns with the prospect's beliefs rather than poking at their pain).
Reply rates were sometimes lower with worldview copy (26% versus 36% in one test), but booking rate went up 2x and close rate went up 5x. These were ready-to-buy leads, not just people willing to chat.
A campaign with lower reply rate but higher booking and close rate is a better campaign. The vanity metric (reply rate) was worse. The business metric (revenue) was dramatically better.
The 5 Patterns That Worked Across Every Market
Across 3,626 campaigns and every industry we touched, these five patterns showed up in winning campaigns over and over.
1. Hypothetical Questions Over Direct Pitches
"If you could wave a magic wand and have $50k to $10m in your account, no equity, no board seat, just capital, would that be interesting?" This structure outperformed direct pitches ("We offer business funding up to $10m") in every market we tested.
2. Binary CTAs That Reduce Friction
"Worth a look or not really?" and "Relevant to what you are doing, or off base?" These binary CTAs consistently outperformed open-ended ones ("Would you like to learn more?"). Giving prospects an easy out paradoxically increases response rates.
3. The "Forgot to Mention" Follow-Up
Follow-up emails starting with "forgot to mention" or "one more thing" outperformed standard follow-ups ("Just following up on my previous email"). The opener reads like a natural continuation of a conversation rather than a sales nudge.
4. Named Case Studies Over Generic Claims
"Luke from Deljo Heating kept getting hammered with calls between 8pm and 2am" beats "Our clients see great results" every time. Named stories create mental images. Generic claims create skepticism.
5. Specific Numbers Over Vague Adjectives
"48 meetings from 1 event in 3 days" versus "significant meeting volume from event outreach." Specific numbers are believable. Adjectives are noise. We cut adjectives on the third editing pass of every email and replace them with data.
The Scaling Framework: When to Scale Cold Email Results
The 12% Positive Reply Rate Gate
We do not increase send volume in earnest until positive reply rate is above 12% of total replies. Below 12%, the message is not working, and more volume amplifies the failure, not the results.
| Positive Reply Rate | Action | Testing Behavior |
|---|---|---|
| 20%+ | Scale aggressively | Micro tests only. Do not break it. |
| 12 to 20% | Scale significantly | Conservative tests, small changes |
| 8 to 12% | Scale steadily | Moderate testing, controlled variance |
| 6 to 8% | Scale cautiously | Heavy testing, significant variance okay |
| Below 6% | Do not scale | Sprint for new angles on same list |
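The tiers in the table map directly onto a decision rule. A minimal sketch (thresholds come straight from the table above; the function name and return labels are ours):

```python
def scaling_action(positive_reply_rate):
    """Map positive reply rate (share of total replies, 0.0-1.0)
    to the scaling action from the tier table."""
    if positive_reply_rate >= 0.20:
        return "scale aggressively"   # micro tests only
    if positive_reply_rate >= 0.12:
        return "scale significantly"  # conservative tests
    if positive_reply_rate >= 0.08:
        return "scale steadily"       # moderate testing
    if positive_reply_rate >= 0.06:
        return "scale cautiously"     # heavy testing
    return "do not scale"             # sprint for new angles
```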
The 80/20 Rule for Sustained Performance
80% of volume goes to the winning message. 20% continues testing new markets and angles. Never allocate 100% to the winner.
Two winning campaigns plus two test campaigns is the standard operating structure at any point in time. This protects against message fatigue and gives you a pipeline of future winners ready to scale when the current ones start declining.
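In practice the 80/20 split is simple arithmetic over the active campaign set. A sketch, assuming the standard two-winners-plus-two-tests structure (campaign names are placeholders):

```python
def allocate_volume(total_sends, winners, tests):
    """Split daily send volume 80/20: 80% divided evenly across
    winning campaigns, 20% across active test campaigns."""
    winner_pool = int(total_sends * 0.80)
    test_pool = total_sends - winner_pool
    alloc = {name: winner_pool // len(winners) for name in winners}
    alloc.update({name: test_pool // len(tests) for name in tests})
    return alloc

plan = allocate_volume(1000, ["winner_a", "winner_b"], ["test_a", "test_b"])
# winner_a and winner_b each get 400 sends; test_a and test_b each get 100
```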
The 30/50 Refill Rule
Start planning a lead refill at 30% sent; the refill must be complete by 50% sent. Past 50%, you are mostly doing follow-ups with diminishing returns. If a winning campaign reaches 60%+ sent by the end of a week, that is the first campaign you refill.
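The 30/50 rule reduces to a threshold check on the fraction of the list already sent. A sketch (thresholds from the rule above; the status labels are ours):

```python
def refill_status(sent_fraction):
    """30/50 refill rule: start planning a lead refill at 30% sent,
    finish it by 50%; past 50% you are mostly running follow-ups."""
    if sent_fraction < 0.30:
        return "no action"
    if sent_fraction < 0.50:
        return "plan refill now"
    return "refill overdue"
```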
Common Failures We Saw in 2025
Not everything worked. Here is what consistently failed.
"Save money" frames. Scarcity-minded prospects (those focused on cost savings) are the hardest people to sell to. The pain of missing out beats the promise of saving every time.
Product-led copy. Copy that reads like a product description performs 6x worse than consultative copy. If someone can read your email and immediately know you are selling something, rewrite it.
Testing personalization before offer fit. Teams that jumped straight to personalization without validating the offer first wasted months and budget. The offer has to work before you start making it prettier.
Scaling before data. Any campaign with fewer than 500 contacts sent does not have enough data to support a scaling decision. We cap campaigns at 1,000 contacts, at which point we force a scale-or-kill decision to prevent infrastructure damage.
The Numbers That Matter Going Into 2026
2,230 meetings. 3,626 campaigns. 6.74% reply rate. 12.53% positive reply rate. 175:1 leads per positive reply.
These are not aspirational targets. These are actual results from real campaigns with real clients. The methodology that produced them (situation mining, worldview alignment, sprint testing, specificity obsession) is what we bring to every new engagement.
If you are running cold email and not seeing these kinds of numbers, the problem is almost certainly one of three things: targeting, offer framing, or testing velocity. We have the data to prove it.
Want results like these for your company?
We run the same sprint methodology for every client. 24 to 48 tests in month 1. Winning angles by week 2. Scaling by month 2.
See our detailed case studies or book a strategy call to discuss your market.