Cross-Tabulations
Your fundraising team tracks two things about every new donor. They know which channel brought the person in (email, social media, or direct mail) and whether that donor gave again within twelve months. The team reports both numbers at the quarterly review. "65% of new donors came through email. Our overall retention rate is 38%." Both numbers are correct. But neither one answers the question that actually matters. Do email donors retain at the same rate as direct mail donors? When you lay the data out in a grid, with channels as rows and retention status as columns, the answer appears instantly. Email donors retain at 45%. Social media donors retain at 22%. Direct mail donors retain at 51%. That overall 38% was averaging together three populations with very different behavior.
What the team built is a cross-tabulation, sometimes called a contingency table or crosstab. It's what happens when you count data along two dimensions at the same time. Instead of asking "how many donors came from each channel?" and "how many donors were retained?" as separate questions, you ask both at once. The result is a grid where each cell holds the count for a specific combination of channel and outcome.
This is the natural two-dimensional extension of the frequency distribution. A frequency distribution counts values along one variable. A cross-tabulation counts combinations of two. The structure is the same, with an extra axis.
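The two-dimensional counting described above can be sketched in a few lines of Python. The donor records below are invented for illustration (they are not the exact data from the example); the point is that the same `Counter` that builds a one-variable frequency distribution builds a crosstab once you count pairs instead of single values.

```python
from collections import Counter

# Hypothetical donor records: (acquisition channel, retained within 12 months?)
# Invented for illustration -- not the actual figures from the example above.
donors = [
    ("email", True), ("email", True), ("email", False),
    ("social", False), ("social", True), ("social", False),
    ("direct mail", True), ("direct mail", True), ("direct mail", False),
]

# A frequency distribution counts values along one variable...
by_channel = Counter(channel for channel, _ in donors)

# ...a cross-tabulation counts combinations of two. Each key is a
# (channel, retained) pair: one cell in the grid.
crosstab = Counter(donors)

for channel in by_channel:
    retained = crosstab[(channel, True)]
    lapsed = crosstab[(channel, False)]
    print(f"{channel:12s} retained={retained} lapsed={lapsed}")
```

In practice a spreadsheet pivot table or `pandas.crosstab` does the same counting; the structure of the result is identical.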
The raw counts in each cell are useful, but the percentages are where the insight lives. If you calculate what fraction of each row falls in the "retained" column, you can compare retention rates across channels directly. Those row percentages reveal relationships that no single summary number can show. The overall retention rate of 38% is technically accurate, but it obscures the fact that your three channels produce donors with dramatically different loyalty. A cross-tabulation makes that visible. You could also flip the perspective and look at column percentages, which would tell you what share of all retained donors came from each channel. Same grid, different question, different insight.
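The row-versus-column distinction can be made concrete with a small sketch. The cell counts below are hypothetical, chosen so the per-channel retention rates match the ones in the opening example (45%, 22%, 51%) but otherwise invented:

```python
# Hypothetical cell counts (channel x outcome), invented for illustration.
counts = {
    "email":       {"retained": 90, "lapsed": 110},   # 90/200 = 45%
    "social":      {"retained": 11, "lapsed": 39},    # 11/50  = 22%
    "direct mail": {"retained": 51, "lapsed": 49},    # 51/100 = 51%
}

def row_percentages(table):
    """Of the donors in each channel, what fraction had each outcome?"""
    out = {}
    for row, cells in table.items():
        total = sum(cells.values())
        out[row] = {col: n / total for col, n in cells.items()}
    return out

def column_percentages(table):
    """Of all donors with each outcome, what share came from each channel?"""
    col_totals = {}
    for cells in table.values():
        for col, n in cells.items():
            col_totals[col] = col_totals.get(col, 0) + n
    return {row: {col: n / col_totals[col] for col, n in cells.items()}
            for row, cells in table.items()}

# Row percentages answer "do email donors retain at the same rate?"
print(row_percentages(counts)["email"]["retained"])   # 0.45

# Column percentages answer "what share of retained donors came via email?"
print(column_percentages(counts)["email"]["retained"])
```

Same grid both times; only the denominator changes, and with it the question being answered.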
Cross-tabulations show up everywhere in nonprofit work. In program evaluation, you might cross-tabulate completion rates by age group or intake method to check whether the program works equally well for everyone. In digital campaigning, cross-tabulating petition source by second action (did they donate, share, or do nothing?) reveals which acquisition channels produce engaged supporters and which produce one-time clickers. In grant reporting, showing a cross-tabulation of outcomes by participant demographics demonstrates that you've thought carefully about equity, not just averages. In survey analysis, cross-tabulating satisfaction ratings by respondent type often reveals that your "overall satisfaction score" is blending genuinely happy clients with genuinely unhappy ones, two groups with very different needs.
A single variable tells you what happened. Two variables, crossed, start to tell you why. The most actionable insights in nonprofit data almost always answer the question "does X differ by Y?"
See It
Click the buttons to switch between raw counts, row percentages, and column percentages. Watch how the emphasis shifts depending on the question you ask.
Reflect
Think about a key metric your organization reports as a single number, whether it's retention, open rates, event attendance, or program completion. What's the second variable you could split it by? Would you segment by source, region, demographic group, or program track? What pattern might be hiding inside the average?
When someone presents an overall rate or average, what is the most useful follow-up question you could ask?