Confidence Intervals

Your organization just surveyed 400 supporters about a proposed policy campaign. 62% said they'd sign the petition. The campaign director wants to put "62% of our supporters back this campaign" in a press release. The data analyst pushes back. "That's the number from the people we asked," she says. "The real number for all 50,000 supporters on our list could be somewhat different."

She's right, and this is a problem that every sample-based number shares. You surveyed 400 people, not 50,000. If you'd happened to survey a different 400, you might have gotten 59% or 65%. The 62% is your best estimate, but it carries uncertainty. A confidence interval puts boundaries on that uncertainty. For this survey, the 95% confidence interval runs from about 57% to 67%. That's the range of plausible values for the true level of support across all 50,000 people on your list.
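As a sketch of the arithmetic behind that range, here is the standard normal-approximation formula for a proportion applied to the survey's numbers (the function name is illustrative, not from any particular library):

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

low, high = proportion_ci(0.62, 400)  # 62% of 400 supporters
print(f"95% CI: {low:.1%} to {high:.1%}")  # → 95% CI: 57.2% to 66.8%
```

Rounded to whole percentages, that is the "about 57% to 67%" quoted above.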

The mechanics are straightforward. You start with your sample result, then add and subtract a margin based on how much variability you'd expect from sample to sample. That variability depends on two things you've already encountered. The standard deviation of the data tells you how spread out individual responses are. The sample size tells you how much that spread shrinks when you average across many observations. The central limit theorem says that sample averages are approximately normally distributed once the sample is reasonably large, and the 68-95-99.7 rule tells you how much of that distribution falls within one, two, or three standard errors. A 95% confidence interval spans roughly two standard errors in each direction from your sample result. A 99% interval spans about 2.6 standard errors, giving you a wider range but more certainty that the truth falls inside it.
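A quick sketch of how the multiplier sets the width, using the survey from above (1.96 and 2.576 are the standard normal cutoffs behind "roughly two" and "about 2.6"):

```python
import math

p_hat, n = 0.62, 400  # the survey result from above
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion

for label, z in [("95%", 1.96), ("99%", 2.576)]:
    margin = z * se  # wider multiplier, wider interval, more certainty
    print(f"{label} interval: {p_hat - margin:.3f} to {p_hat + margin:.3f}")
```

The 99% interval is noticeably wider than the 95% one for the same data: more certainty always costs precision.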

The "95%" part is where people trip up. It does not mean there's a 95% probability that the true value falls inside this particular interval. The true value is a fixed number. It's either in the interval or it isn't. What 95% confidence means is that if you repeated this process many times, drawing a new sample of 400 supporters each time and computing a new interval, about 95% of those intervals would contain the true value and about 5% would miss it entirely. You're describing the reliability of the method, not making a probability statement about any single result.

This distinction might feel like splitting hairs, but it matters when the stakes are high. If you tell a funder "we're 95% confident the true support rate is between 57% and 67%," you're not saying there's a 5% chance it's outside that range. You're saying you used a method that gets it right 95% of the time. The nuance changes how much weight you should put on the boundaries. A value just outside the interval is almost as plausible as one just inside. The edges aren't walls.

Confidence intervals address a blind spot that p-values leave open. A p-value tells you whether a result is unlikely to be chance alone, but it says nothing about how big the effect might be. A confidence interval gives you the range of plausible sizes. If you A/B test two versions of a campaign action page and the 95% confidence interval for the difference in conversion rates runs from 0.1 to 4.2 percentage points, you know the new version probably works, but the actual improvement could be anywhere in that range. If it runs from 0.1 to 0.3 percentage points, the improvement is real but so small it might not justify the effort of switching.
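For the A/B test case, the interval for a difference of two proportions can be sketched the same way (the counts below are made-up illustrations, not the numbers from the text):

```python
import math

def diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Normal-approximation 95% CI for the difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical test: version A converts 500 of 10,000; version B, 715 of 10,000
low, high = diff_ci(500, 10_000, 715, 10_000)
print(f"Difference: {low:+.2%} to {high:+.2%}")
```

If the whole interval sits above zero, the new version probably helps; how far above zero its lower edge sits tells you whether the help is worth acting on.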

In survey research, the confidence interval is what determines whether your sample is large enough to say something useful. A survey of 100 supporters might give you an interval so wide that it's consistent with anything from lukewarm to overwhelming support. A survey of 1,000 narrows that range enough to make decisions. In email campaign analytics, when you report that open rates increased from 18% to 21%, the confidence interval tells your team whether that difference is solid or whether the true improvement could be negligible. In grant reporting, presenting results as a confidence interval rather than a single number signals intellectual honesty. Funders who understand statistics will respect the transparency, and those who don't will learn something about how evidence actually works.
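The sample-size effect shows up directly in the margin of error. A sketch using the worst-case spread (a 50/50 split, which gives the widest possible interval):

```python
import math

# 95% margin of error for a proportion, at the widest case p = 0.5
for n in (100, 400, 1000):
    margin = 1.96 * math.sqrt(0.5 * 0.5 / n)
    print(f"n = {n:>4}: plus or minus {margin:.1%}")
```

Going from 100 to 1,000 respondents cuts the margin by roughly a factor of three, not ten: precision grows with the square root of the sample size.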

A confidence interval turns a single estimate into an honest range. It tells you not just what the data says, but how loudly.


See It

Click "Draw samples" to pull random samples and see their confidence intervals. About 95% will contain the true value (shown by the vertical line). Adjust sample size to see the intervals narrow or widen.


Reflect

Think about the last number your team reported from a survey or campaign test. Was it presented as a single value, or with a range? If you had reported a confidence interval instead, would it have changed anyone's confidence in the result or willingness to act on it?

When two campaign strategies produce results whose confidence intervals overlap, what should that tell you about claiming one is better than the other?