You want to test a new donation page. Or a different email subject line. Or a shorter signup form. Before you split your traffic and wait, you need to answer one question: how many people do I need in each group before the results mean anything?

Too few, and you won't be able to tell whether a difference is real or just noise. Too many, and you're wasting time running a test that could have been called days ago. This calculator gives you the number.

Your Current Rate

What's the conversion rate you're starting from? If your donation page converts 5% of visitors right now, enter 5. If you're not sure, check your analytics for the last few weeks and use that number. It doesn't need to be exact, but the closer it is, the more useful your result will be.

The Smallest Change Worth Detecting

If the new page converts at 5.01% instead of 5%, do you care? Probably not. But if it converts at 6%, that matters. The minimum detectable effect is the smallest improvement you'd want to catch. Smaller effects need bigger sample sizes, because telling apart 5% from 5.2% requires a lot more data than telling apart 5% from 8%.
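The inverse-square relationship between effect size and sample size can be checked with quick arithmetic. A rough sketch (it ignores the small difference in variance between the two scenarios, and the numbers are illustrative):

```python
# Required sample size per group grows roughly with the inverse
# square of the absolute effect you want to detect.
effect_small = 0.002   # detecting 5.0% vs 5.2%
effect_large = 0.030   # detecting 5.0% vs 8.0%

ratio = (effect_large / effect_small) ** 2
print(f"The small effect needs roughly {ratio:.0f}x the sample "
      f"of the large one")
```

So chasing a 0.2-point improvement takes on the order of hundreds of times more data than chasing a 3-point one.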

How Sure Do You Need to Be?

Two things can go wrong with a test. You might see a difference that isn't really there (a false positive). Or you might miss a real difference because you didn't have enough data (a false negative). These two settings control how much protection you want against each.

Significance level is your tolerance for false positives. At 95% confidence (a 5% significance level), you're saying "I'm okay with a 5% chance this result is just noise." Most people leave this at 95%.

Statistical power is your protection against false negatives. At 80%, you're saying "if there really is an effect this big, I want an 80% chance of detecting it." Higher power means bigger samples, but less risk of missing something real.
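Putting the four inputs together, the per-group sample size can be sketched with the standard normal-approximation formula for comparing two proportions. This is a minimal standard-library sketch, not the calculator's actual code; the function name and defaults are illustrative:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p1, p2, confidence=0.95, power=0.80):
    """Per-group sample size for a two-sided test of two proportions
    (normal approximation)."""
    alpha = 1 - confidence
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # false-positive guard
    z_beta = NormalDist().inv_cdf(power)           # false-negative guard
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

print(sample_size_per_group(0.05, 0.06))  # 5% -> 6%: 8155 per group
print(sample_size_per_group(0.05, 0.08))  # 5% -> 8%: 1057 per group
```

Note how tripling the detectable effect (1 point to 3 points) cuts the required sample by almost a factor of eight.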

How Long Will It Take?

If you know roughly how many visitors (or emails, or signups) you get per day, enter that here. The calculator will tell you how long your test needs to run.
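The duration estimate is simple arithmetic, assuming daily traffic is split evenly between the two variants (the numbers below are illustrative, not defaults from the calculator):

```python
from math import ceil

def days_to_run(n_per_group, visitors_per_day, groups=2):
    """Days needed when daily traffic is split evenly across groups."""
    return ceil(n_per_group * groups / visitors_per_day)

print(days_to_run(8000, 1000))  # 8,000 per group, 1,000 visitors/day -> 16
```

If the answer comes back as months rather than weeks, consider testing for a larger effect or accepting lower power before you start.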