Skewness and Kurtosis
Your development team just ran the numbers on last quarter's individual donations. The average gift was €42. Someone on the team pictures the usual bell shape: most donors clustered around €42, with symmetrical tails fading off in both directions. But when you actually plot the data, it looks nothing like that. There's a giant pile of gifts between €5 and €25, a thin trail of gifts stretching out to €200, and one at €1,500. The picture isn't a bell. It's more like a ski slope.
That ski-slope shape has a name. It's called skewness, and it measures how lopsided your data is. A perfectly symmetrical distribution, like the classic bell curve, has a skewness of zero. When the long tail stretches to the right (toward higher values), statisticians call it right-skewed or positively skewed. When the tail stretches to the left (toward lower values), it's left-skewed or negatively skewed.
You've already seen the symptoms of skewness without knowing the term. In Day 1, we noticed that the mean donation was higher than the median. That gap between mean and median is the most visible sign of skewness. In a right-skewed distribution, the mean gets pulled toward the tail, so it sits to the right of the median. The more skewed the data, the wider that gap. In a perfectly symmetrical distribution, the mean and median are the same.
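The mean-median gap and the skewness itself are both easy to check directly. Here is a minimal sketch using only Python's standard library; the lognormal draw is an assumption chosen to mimic the ski-slope shape of donation data, not a real dataset:

```python
import random
import statistics

random.seed(1)

# Hypothetical donation amounts: lognormal draws produce the
# "pile of small gifts plus a long right tail" shape.
donations = [round(random.lognormvariate(3, 0.9), 2) for _ in range(1000)]

mean = statistics.mean(donations)
median = statistics.median(donations)

# Sample skewness: average cubed deviation divided by the cubed
# standard deviation. Positive means the long tail points right.
sd = statistics.pstdev(donations)
skew = sum((x - mean) ** 3 for x in donations) / (len(donations) * sd ** 3)

print(f"mean={mean:.2f}  median={median:.2f}  skewness={skew:.2f}")
```

Run it and the mean lands above the median while the skewness comes out positive, which is the right-skew signature described above.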
Donation data is almost always right-skewed. So is income data, website traffic, event attendance, and petition signatures. A few outliers at the high end stretch the tail out. Program satisfaction surveys tend to go the other way. Most respondents rate things 4 or 5 out of 5, with a small tail stretching down to 1. That's left-skewed. Recognizing the direction of skew tells you which summary statistics to trust. When data is heavily skewed, the median is usually more representative than the mean, and the interquartile range from Day 3 is more useful than the standard deviation from Day 2.
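To see why that advice holds, compare all four summaries on one skewed batch of numbers. The gift amounts below are invented to match the ski-slope example, and this is a sketch, not a prescription:

```python
import statistics

# Hypothetical quarter of gifts: a pile of small amounts plus outliers.
gifts = [5, 8, 10, 10, 12, 15, 15, 18, 20, 22, 25, 40, 60, 200, 1500]

q = statistics.quantiles(gifts, n=4)  # quartiles
iqr = q[2] - q[0]                     # middle 50% of gifts

print(f"mean   = {statistics.mean(gifts):.0f}")    # dragged up by the 1500
print(f"median = {statistics.median(gifts):.0f}")  # the typical gift
print(f"stdev  = {statistics.stdev(gifts):.0f}")   # dominated by the outlier
print(f"IQR    = {iqr:.0f}")                       # ignores the tail
```

The mean comes out several times larger than the median, and the standard deviation dwarfs the IQR, so a report built on mean and standard deviation would describe almost none of these donors.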
There's a second dimension to the shape of your data, though it comes up less often in daily nonprofit work. Kurtosis describes how much of the variation comes from extreme values versus moderate ones. Picture two distributions with the same average and the same standard deviation. One has a tall, narrow peak in the middle and heavy tails, meaning most values are either very close to the average or very far from it. The other is flatter and wider, with almost nothing in the extremes. They both have the same spread as measured by standard deviation, but they behave very differently. The first one (high kurtosis, sometimes called leptokurtic) produces more surprises. The second (low kurtosis, or platykurtic) is more predictable.
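The same moment-based recipe extends to kurtosis. One convention worth knowing: most libraries report "excess" kurtosis, which subtracts the bell curve's baseline value of 3 so that zero means normal-like tails. A sketch with toy numbers (not donation data) chosen to give the two shapes just described:

```python
import statistics

def excess_kurtosis(xs):
    # Fourth standardized moment minus 3, the bell curve's baseline:
    # 0 = normal-like tails, positive = heavy tails, negative = flat.
    m = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * sd ** 4) - 3

# Toy data: same mean, very different tail behavior.
peaky = [0] * 96 + [10, -10, 10, -10]  # concentrated core plus extremes
flat = list(range(-5, 6))              # evenly spread, no extremes

print(excess_kurtosis(peaky))  # strongly positive (leptokurtic)
print(excess_kurtosis(flat))   # negative (platykurtic)
```

The "peaky" list puts almost everything at the center and a few values far out, exactly the high-kurtosis pattern that produces surprises; the evenly spread list has no extremes at all.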
In fundraising, high kurtosis shows up when your donor base has a concentrated core of reliable mid-level givers and a handful of wildly generous major donors. Low kurtosis shows up in evenly distributed, grassroots campaigns where everyone gives roughly similar amounts with no extremes. Knowing which pattern you have tells you how much you should worry about tail events, like a single major donor leaving or a viral moment bringing a flood of one-time gifts.
This also matters in A/B testing and program evaluation. If your outcome metric has heavy tails, you'll need a larger sample to detect real differences, because those extreme values add noise that makes it harder to spot the signal. It's the same lesson from Day 2 about standard deviation, but amplified. Skewed data with heavy tails is the hardest kind to summarize honestly, which is exactly why it's worth understanding the shape before reaching for any single number.
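A quick simulation illustrates the sample-size point. Both hypothetical metrics below average roughly 20, but the heavy-tailed one scatters its sample means far more widely, and that extra scatter is exactly the noise that forces a bigger test. The distributions and parameters here are assumptions for illustration only:

```python
import random
import statistics

random.seed(7)

def mean_spread(draw, n=50, reps=2000):
    # Spread of the sample mean across repeated "experiments" of size n:
    # the wider this is, the harder it is to detect a real difference.
    means = [statistics.mean(draw() for _ in range(n)) for _ in range(reps)]
    return statistics.pstdev(means)

def light():
    return random.gauss(20, 10)           # thin-tailed outcome metric

def heavy():
    return random.lognormvariate(2.5, 1)  # heavy right tail, mean near 20

print(f"light-tailed: {mean_spread(light):.2f}")
print(f"heavy-tailed: {mean_spread(heavy):.2f}")
```

At the same sample size, the heavy-tailed metric's sample means wander noticeably more, so an A/B test on it needs more observations to reach the same confidence.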
Your data has a shape. Skewness tells you which direction it leans. Kurtosis tells you how much drama is hiding in the tails. Before you summarize anything with an average, look at the shape first.
See It
Drag the slider to reshape the distribution from left-skewed to right-skewed. Watch how the mean and median pull apart as the data becomes more lopsided.
Reflect
Think about a dataset your organization works with regularly, whether it's donations, event attendance, survey responses, or program outcomes. If you plotted every value, would the shape be symmetrical, or would it lean to one side? Which direction? Does your reporting reflect that shape, or does it assume a bell curve?
When someone on your team reports an average, do you ever ask "but what does the distribution actually look like?" What might change if you did?