Binomial Distribution
Your campaigner emailed 1,200 petition signers last week, asking each one to share the petition with friends. Based on six months of data, about 12% of signers typically share when asked. This time, 168 people shared. That's 14%, up from the usual 12%. Your campaigner believes the new email template made the difference. Your director says it was probably just a good week. Who's right?
The binomial distribution answers questions like this. Whenever you have a fixed number of independent yes-or-no situations, each with the same probability of success, the binomial distribution tells you how likely each possible count of "yeses" is. With 1,200 emails and a 12% historical share rate, the expected number of shares is 144. But the distribution fans out around that number, showing you the full range of what's plausible through chance alone.
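That phrase "how likely each possible count is" has an exact formula: the probability of exactly k successes is C(n, k) times p to the k times (1 − p) to the (n − k). A minimal sketch in plain Python (the helper name `binomial_pmf` is ours, not from any library):

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials,
    each with success probability p: C(n, k) * p**k * (1-p)**(n-k)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Petition example: 1,200 emails, 12% historical share rate.
# Even the single most likely count, 144, occurs only a few percent of
# the time -- the distribution spreads probability across many counts.
print(binomial_pmf(144, 1200, 0.12))
print(binomial_pmf(168, 1200, 0.12))
```

No single count is ever very likely on its own; what matters is the range that holds most of the probability.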
Think of it as a close relative of the normal distribution, but built for counting discrete outcomes. Where the normal distribution describes continuous measurements like test scores, the binomial distribution describes counts of successes out of a fixed number of tries. How many emails get opened out of 500 sent. How many event registrants actually show up out of 120 registered. How many donation page visitors convert out of 800 clicks. The answer is never a single number. It's a spread of possibilities, each with its own probability. And when the number of tries is large enough, that spread starts to look remarkably like a bell curve.
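The bell-curve resemblance can be checked directly: for a large number of trials, the binomial probability of each count sits close to the normal density with the same mean and standard deviation. A small sketch, assuming the 500-email example with a 22% open rate (the rate is borrowed from the open-rate discussion below):

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    # Exact binomial probability of k successes in n trials.
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    # Normal (bell curve) density with the same mean and spread.
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# 500 emails sent, assuming a 22% open rate.
n, p = 500, 0.22
mu = n * p                          # 110 expected opens
sigma = math.sqrt(n * p * (1 - p))  # about 9.3

for k in (100, 110, 120):
    print(f"k={k}: binomial {binomial_pmf(k, n, p):.4f}, "
          f"normal {normal_pdf(k, mu, sigma):.4f}")
```

The two columns track each other closely, which is why bell-curve intuitions like "95% within two standard deviations" carry over to counts.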
The entire distribution is defined by just two inputs: n, the number of trials, and p, the probability of success on each one. From those two numbers you can calculate the expected value (n times p) and the standard deviation (the square root of n times p times one minus p). For the petition example, n is 1,200 and p is 0.12. The expected value is 144, and the standard deviation is about 11.3. Roughly 95% of the time, you'd expect between 122 and 166 shares, and 168 falls just outside that upper bound. That result is unlikely to be chance alone, so the new email template deserves a closer look.
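The petition arithmetic is short enough to sketch directly, using 1.96 standard deviations for the usual 95% cutoff:

```python
import math

# Petition example from the text: n = 1,200 emails, p = 0.12 share rate.
n, p = 1200, 0.12

expected = n * p                      # n times p
std_dev = math.sqrt(n * p * (1 - p))  # square root of n * p * (1 - p)

# About 95% of outcomes fall within 1.96 standard deviations of the mean.
low = expected - 1.96 * std_dev
high = expected + 1.96 * std_dev

print(f"expected: {expected:.0f}")            # 144
print(f"std dev: {std_dev:.1f}")              # 11.3
print(f"95% range: {low:.0f} to {high:.0f}")  # 122 to 166
print(f"168 outside? {168 > high}")           # True
```

Swap in your own n and p to get the plausible range for any yes-or-no count you track.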
This matters because nonprofits constantly deal with yes-or-no outcomes at scale and rarely know what variation to expect. If your email open rate is usually 22% and you send 200 emails, you'd expect about 44 opens. But the binomial distribution tells you that anything between roughly 33 and 55 opens is ordinary random variation. If you ran an A/B test and one version got 48 opens while the other got 44, that gap tells you almost nothing. In grant applications, if your historical success rate is 30% and you submit 10 proposals, getting just 1 funded in a given cycle isn't a sign that your writing declined. The binomial distribution puts the probability of one or fewer successes at about 15%. That's bad luck, not rare bad luck. In event planning, if 60% of registrants typically attend and you have 80 registrations, attendance could range from about 40 to 56 without anything unusual happening.
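The grant and event figures above come from adding up binomial probabilities: the cumulative probability of some count or fewer. A sketch (the helper name `binomial_cdf` is ours):

```python
import math

def binomial_cdf(k_max: int, n: int, p: float) -> float:
    # Probability of k_max or fewer successes in n trials,
    # summing the exact probability of each count from 0 to k_max.
    return sum(
        math.comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(k_max + 1)
    )

# Grant example: 10 proposals at a 30% historical success rate.
# Chance of one or fewer awards in a cycle:
print(f"{binomial_cdf(1, 10, 0.30):.3f}")   # 0.149 -- about 15%, as in the text

# Event example: 80 registrations, 60% typical attendance.
# Probability of attendance landing between 40 and 56 (inclusive),
# which should cover roughly 95% of outcomes:
between = binomial_cdf(56, 80, 0.60) - binomial_cdf(39, 80, 0.60)
print(between)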
The binomial distribution turns gut feelings about "good" and "bad" results into actual probabilities. When a result falls outside the range that chance alone would produce, you've found a signal worth acting on.
See It
Drag the sliders to change the number of signers emailed and the historical share rate. The highlighted bars show the range where 95% of outcomes fall.
Reflect
Think about a yes-or-no outcome your organization tracks at scale. Email opens, petition shares, donation conversions, event attendance. Do you know the typical success rate? If last month's number was higher or lower than usual, was it actually outside the range that random variation would produce?
When your team reacts to a single campaign's results, are they accounting for the fact that even a perfectly stable process will produce different numbers every time?