Experimentation Velocity: How to Run 100+ Experiments Monthly

Fast-growing companies such as Booking.com, Netflix, and Amazon run hundreds to thousands of experiments every month. How do they do it, and what would it take for you to close the gap?

Why Velocity Matters

The math is simple:

  • More experiments = more learnings
  • More learnings = faster optimization
  • Faster optimization = faster growth

Booking.com runs 25,000+ experiments yearly. Even if only 10% succeed, that's 2,500 product improvements per year. Your competition running 10 tests monthly doesn't stand a chance.

[Chart: Experimentation Velocity Impact]

Prerequisites for High Velocity

1. Technical Infrastructure

Without the right tools, you won't scale.

| Component | Purpose | Examples |
| --- | --- | --- |
| Feature flags | Quick on/off switching | LaunchDarkly, Unleash, custom |
| Event tracking | Behavior measurement | Segment, Amplitude, Mixpanel |
| Statistical engine | Automated analysis | Eppo, Statsig, custom |
| Dashboards | Real-time visibility | Looker, Metabase, custom |
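The feature-flag component above boils down to deterministic bucketing: the same user must always land in the same variant. A minimal, framework-agnostic sketch (the function name and variant labels are illustrative; real platforms like LaunchDarkly or Unleash expose their own SDKs for this):

```python
import hashlib

def assign_variant(experiment: str, user_id: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user: hash experiment + user id,
    then map the digest onto one of the variants."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always gets the same variant for a given experiment.
print(assign_variant("new_checkout", "user_42"))
```

Hashing on `experiment:user_id` (rather than `user_id` alone) keeps assignments independent across concurrent experiments.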

2. Organizational Setup

Decentralization is key:

  • Every team can run experiments
  • Central team provides tooling and best practices
  • Clear ownership — every experiment has a DRI
  • Streamlined approval — a go/no-go decision within 24 hours

3. Cultural Aspects

  • 🧪 Experimental mindset — "we don't know until we test"
  • 📊 Data-driven decisions — intuition is a hypothesis, not fact
  • 🧘 Tolerance for failure — 70% of tests "fail" — and that's OK
  • 📚 Learning culture — every test brings insight

Build vs. Buy: Decision Matrix

| Factor | Build | Buy |
| --- | --- | --- |
| Upfront cost | High (eng time) | Low-medium |
| Ongoing cost | Maintenance | Subscription |
| Customization | Unlimited | Limited |
| Time to value | Months | Days |
| Scalability | Depends on implementation | Usually good |

Recommendation:

  • <50 experiments/month: Buy ready-made solution
  • 50-200 experiments/month: Hybrid (purchased + custom integrations)
  • 200+ experiments/month: Consider custom platform

Platform Comparison

| Platform | Best for | Price | Pros | Cons |
| --- | --- | --- | --- | --- |
| Optimizely | Enterprise | $$$$ | Robust, full feature set | Expensive, complex |
| LaunchDarkly | Feature flags focus | $$$ | Excellent flags, fast | Weaker analytics |
| Statsig | Data-driven teams | $$ | Strong statistics, AI | Newer player |
| Eppo | Product teams | $$ | Warehouse-native | Smaller ecosystem |
| GrowthBook | Startups | $ (open source) | Cheap, flexible | Requires setup |

Process for Scaling

Experiment Proposal Template

Every experiment should have:

📋 EXPERIMENT PROPOSAL

1. Hypothesis: [What we're testing and why]
2. Primary metric: [One metric for decision]
3. Secondary metrics: [Guardrail metrics]
4. Segment: [Who's in the test]
5. Sample size: [How many users needed]
6. Duration: [How long it runs]
7. Success criteria: [When the test counts as successful]
8. Owner: [Who's responsible]
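Item 5 of the template (sample size) is where many proposals stall. A back-of-the-envelope sketch using the standard normal approximation for a two-proportion test, assuming a two-sided alpha of 0.05 and 80% power (a rough planning aid, not a replacement for your statistical engine):

```python
from math import ceil, sqrt

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # power = 0.80

def sample_size_per_variant(baseline_rate: float,
                            min_detectable_lift: float) -> int:
    """Users needed in EACH arm to detect a relative lift
    over the baseline conversion rate."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    p_bar = (p1 + p2) / 2
    numerator = (Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar))
                 + Z_BETA * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# e.g. 5% baseline conversion, detecting a 10% relative lift
print(sample_size_per_variant(0.05, 0.10))
```

Note how sharply the requirement grows as the detectable lift shrinks; this is why low-traffic teams should test bigger, bolder changes.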

Prioritization: ICE Framework

Score each experiment 1-10:

  • Impact: How big an impact do we expect?
  • Confidence: How confident are we in the hypothesis?
  • Ease: How easy is implementation?

ICE Score = (I + C + E) / 3

Highest score = highest priority.
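The ICE scoring above is simple enough to automate over a backlog. A small sketch (the backlog entries and field names are illustrative, not from any specific tool):

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """Average of three 1-10 scores; higher = higher priority."""
    return (impact + confidence + ease) / 3

backlog = [
    {"name": "Simplify signup form", "impact": 8, "confidence": 6, "ease": 9},
    {"name": "Rewrite pricing page", "impact": 9, "confidence": 5, "ease": 3},
]

# Sort the backlog by ICE score, highest first.
ranked = sorted(
    backlog,
    key=lambda e: ice_score(e["impact"], e["confidence"], e["ease"]),
    reverse=True,
)
for exp in ranked:
    score = ice_score(exp["impact"], exp["confidence"], exp["ease"])
    print(f'{exp["name"]}: {score:.2f}')
```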

Weekly Review Ritual

| Time | Activity | Participants |
| --- | --- | --- |
| 0:00-0:10 | New results review | Everyone |
| 0:10-0:30 | Deep dive on top 3 learnings | Everyone |
| 0:30-0:45 | New experiment proposals | Owners |
| 0:45-0:55 | Prioritization voting | Everyone |
| 0:55-1:00 | Action items | Lead |

Measuring Program Health

| Metric | Definition | Target |
| --- | --- | --- |
| Experiments launched/month | Number of tests started | 100+ |
| Win rate | % of tests with a positive result | 15-30% |
| Avg. experiment duration | Average test length | 2-4 weeks |
| Time to decision | Time from launch to decision | <3 weeks |
| Learnings documented | % of tests with documentation | 100% |
| Impact delivered | Cumulative lift from wins | Tracked over time |
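Most of these health metrics fall out of a simple log of finished experiments. A sketch, assuming a hypothetical record shape (`launched`, `decided`, `won`, `documented`) rather than any particular platform's export:

```python
from datetime import date

# Illustrative experiment log; real data would come from your platform.
experiments = [
    {"launched": date(2024, 3, 1), "decided": date(2024, 3, 15),
     "won": True,  "documented": True},
    {"launched": date(2024, 3, 5), "decided": date(2024, 3, 26),
     "won": False, "documented": True},
    {"launched": date(2024, 3, 8), "decided": date(2024, 3, 22),
     "won": False, "documented": False},
]

win_rate = sum(e["won"] for e in experiments) / len(experiments)
avg_days_to_decision = (
    sum((e["decided"] - e["launched"]).days for e in experiments)
    / len(experiments)
)
documented_pct = sum(e["documented"] for e in experiments) / len(experiments)

print(f"Win rate: {win_rate:.0%}")
print(f"Avg. time to decision: {avg_days_to_decision:.1f} days")
print(f"Learnings documented: {documented_pct:.0%}")
```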

Case Study: Booking.com

How they run 25,000+ experiments yearly:

  1. Democratization — any employee can run a test
  2. Low barrier — simple tooling, quick setup
  3. Automation — automatic analysis, automatic stopping
  4. Culture — testing is the company's DNA
  5. Learning loops — sharing learnings across teams
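The "automatic stopping" idea in point 3 can be illustrated with a toy check: flag an experiment once a two-proportion z-test crosses a threshold. This is a deliberately simplified sketch; production systems use proper sequential methods (e.g. mSPRT-style always-valid inference) so that repeatedly peeking at results doesn't inflate false positives.

```python
from math import sqrt

def z_statistic(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test with a pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def should_stop(conv_a, n_a, conv_b, n_b, threshold: float = 1.96) -> bool:
    """Toy stopping rule: stop once |z| crosses the threshold."""
    return abs(z_statistic(conv_a, n_a, conv_b, n_b)) >= threshold

# Clear lift (5.0% vs 6.0% on 10k users each) trips the rule;
# a negligible difference does not.
print(should_stop(500, 10_000, 600, 10_000))
print(should_stop(500, 10_000, 505, 10_000))
```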

Results:

  • Conversion improvements every week
  • Thousands of small wins = massive competitive advantage
  • Culture of continuous improvement

Conclusion

Experimentation velocity isn't about chaotically launching tests. It's about a systematic approach to learning and optimization.

Action steps:

  1. Assess current velocity (experiments/month)
  2. Identify bottlenecks (tooling? process? culture?)
  3. Select or upgrade experimentation platform
  4. Implement proposal template and review ritual
  5. Measure program health metrics
  6. Iterate and scale
