In defense of false positives (why you can't fail with A/B tests)


A pushback against the perfectionist gatekeeping around A/B testing. The core argument is that even imperfect A/B testing is beneficial for product teams because the stakes are fundamentally different from drug trials: false positives are cheap (a wrong button color), prior success rates are higher (tests are hypothesis-driven), and true positives have real business value. The practical takeaway is to pick the metric with the best signal-to-noise ratio before running the test, avoid ratio metrics, and not cherry-pick metrics after the fact. More A/B testing, even imperfect, beats paralysis from over-theorizing.
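The cost-benefit argument above can be sketched as a back-of-the-envelope expected-value calculation. All numbers here are illustrative assumptions (not from the post): the point is that a hypothesis-driven prior plus cheap false positives flips the sign of the expected value, even with the same imperfect test.

```python
# Hypothetical expected-value sketch of the post's argument: with a
# decent prior that a tested change helps, and cheap false positives,
# even a noisy A/B test has positive expected value.

def expected_value(prior, power, alpha, value_tp, cost_fp):
    """Expected value of shipping whenever the test comes back 'significant'.

    prior    -- P(change is genuinely an improvement); higher for product
                teams than drug trials because tests are hypothesis-driven
    power    -- P(test detects a real improvement)
    alpha    -- P(test fires on a null change), i.e. the false-positive rate
    value_tp -- business value of shipping a real improvement
    cost_fp  -- cost of shipping a neutral change (a wrong button color),
                assumed small in the product setting
    """
    p_true_positive = prior * power          # ship a real win
    p_false_positive = (1 - prior) * alpha   # ship a dud
    return p_true_positive * value_tp - p_false_positive * cost_fp

# Drug-trial-like regime: tiny prior, expensive false positives.
drug = expected_value(prior=0.01, power=0.8, alpha=0.05,
                      value_tp=100, cost_fp=500)
# Product-team regime: hypothesis-driven prior, cheap false positives.
product = expected_value(prior=0.3, power=0.8, alpha=0.05,
                         value_tp=100, cost_fp=5)
print(drug < 0 < product)  # → True: same test, opposite conclusions
```

With these made-up numbers the drug-trial regime is strongly negative (false positives dominate) while the product regime is solidly positive, which is the post's core claim in miniature.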

4 min read · from erikbern.com
