Data Types: Nominal, Ordinal, Interval, Ratio
Your volunteer coordinator keeps a spreadsheet. Each row is a volunteer. One column tracks which program they serve in, coded as 1 for youth mentoring, 2 for food bank, 3 for housing assistance, and 4 for community garden. Another column tracks their satisfaction rating on a 1-to-5 scale. A third column tracks hours logged last quarter. At the end of the quarter, someone runs averages on everything. Average hours: 24. That's useful. Average satisfaction: 3.8. Looks reasonable. Average program area: 2.4. Wait. Your volunteers' "average program" falls somewhere between the food bank and housing assistance, which isn't a program. It's a number that exists only because a spreadsheet doesn't know the difference between a code and a measurement.
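The trap above is easy to reproduce. A minimal sketch, with made-up volunteer rows, showing that a spreadsheet-style average happily produces the meaningless 2.4 while counting gives an answer that actually means something:

```python
from statistics import mean, mode

# Hypothetical volunteer rows: (program_code, satisfaction, hours)
# Codes: 1=youth mentoring, 2=food bank, 3=housing assistance, 4=community garden
volunteers = [(1, 4, 30), (2, 3, 18), (2, 4, 22), (3, 5, 40), (4, 3, 10)]

programs = [v[0] for v in volunteers]
hours = [v[2] for v in volunteers]

print(mean(hours))     # 24 — meaningful: hours are a real measurement
print(mean(programs))  # 2.4 — nonsense: an average of category codes
print(mode(programs))  # 2 — meaningful: the most common program
```

The software computes all three without complaint; only the analyst knows which column is a code and which is a measurement.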
Not every column of data works the same way. Statistics recognizes four levels of measurement, and the level determines which operations make sense.
Nominal data is labels. Program area, donation channel, event type, city of residence. The numbers assigned to them (1, 2, 3, 4) are just shorthand. You could swap them around and nothing would change. The only meaningful operation is counting. Which program has the most volunteers? That's a question nominal data can answer. What's the average program? That's a question it can't. You can build a frequency distribution of nominal data and find the mode, but sorting or averaging is meaningless.
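One way to keep nominal data honest is to store the labels as labels, not codes. A small sketch with hypothetical program names, using a frequency distribution to answer the one question nominal data can answer:

```python
from collections import Counter

# Hypothetical program labels, one per volunteer record
programs = ["food bank", "youth mentoring", "food bank",
            "housing assistance", "community garden", "food bank"]

counts = Counter(programs)    # frequency distribution
print(counts.most_common(1))  # the mode: [('food bank', 3)]
# There is no mean or median: "food bank" + "youth mentoring" has no meaning.
```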
Ordinal data has a meaningful order, but the gaps between values aren't necessarily equal. A satisfaction rating of 1 (poor) to 5 (excellent) tells you that 5 is better than 3 and 3 is better than 1. But the difference in experience between "poor" and "fair" might be much larger than the difference between "good" and "very good." You can sort ordinal data and find the median (the value where half fall above and half below). Computing the mean is technically possible, but it assumes equal spacing that may not exist. This is the source of a long-running debate in survey research about whether it's OK to average Likert scale responses. The practical answer is "sometimes, carefully, with caveats."
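The median-versus-mean distinction is concrete in code. With hypothetical ratings, the median only uses the ordering, while the mean quietly treats the gap from 1 to 3 as exactly twice the gap from 4 to 5:

```python
from statistics import mean, median

# Hypothetical 1-to-5 satisfaction ratings (ordinal: ordered, unequal gaps)
ratings = [1, 3, 4, 4, 5, 5, 5]

print(median(ratings))  # 4 — safe: depends only on rank order
print(mean(ratings))    # ~3.86 — assumes equal spacing between adjacent ratings
```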
Interval data has equal gaps between values, but no true zero point. Temperature in Celsius is the classic example. The difference between 10°C and 20°C is the same as between 20°C and 30°C, but 0°C doesn't mean "no temperature." In nonprofit work, this shows up in standardized assessment scores where the zero point is arbitrary. You can compute means and differences freely, but saying "twice as much" doesn't make sense because there's no meaningful zero to anchor the ratio.
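The temperature example makes the "no meaningful ratio" point computable. Kelvin has a true zero, so converting shows that the apparent 2:1 ratio in Celsius is an artifact of where the scale happens to put its zero:

```python
# Interval scale: 20 °C is not "twice as warm" as 10 °C.
def c_to_k(celsius):
    # Kelvin is a ratio scale: 0 K is a true zero (absolute zero)
    return celsius + 273.15

print(20 / 10)                  # 2.0 — a ratio of scale positions, not of warmth
print(c_to_k(20) / c_to_k(10))  # ~1.035 — the physically meaningful ratio
```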
Ratio data has equal gaps and a true zero. Donation amounts, hours volunteered, event attendance, revenue. €100 is twice €50 in a way that is literally, physically true. All arithmetic operations work. This is the data type that gives you the most analytical freedom, and it is the type that most concrete nonprofit metrics (money, time, counts) turn out to be.
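With ratio data, everything is fair game. A quick sketch with hypothetical donation amounts, where sums, means, and ratios all carry their ordinary meaning:

```python
donations = [50.0, 100.0, 25.0, 125.0]  # hypothetical amounts in euros

print(sum(donations))                   # 300.0 — totals are meaningful
print(sum(donations) / len(donations))  # 75.0 — so are means
print(100.0 / 50.0)                     # 2.0 — and ratios: €100 really is twice €50
```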
The most common trap is survey data. When your team sends out a satisfaction survey with a 1-to-5 scale, those responses are ordinal. Reporting that "average satisfaction is 3.7" is standard practice, but it quietly assumes the gap between every pair of adjacent ratings is equal. A safer approach is to report the median and the percentage of respondents in each category, which respects the data's true nature without making spacing assumptions. When you cross-tabulate ordinal ratings by respondent group, stick to comparing medians or percentages rather than means.
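The safer report described above takes only a few lines. A sketch with hypothetical survey responses, producing a median plus the share of respondents at each rating instead of a single mean:

```python
from collections import Counter
from statistics import median

# Hypothetical 1-to-5 survey responses (ordinal)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
counts = Counter(responses)

print(f"median: {median(responses)}")  # respects order without spacing assumptions
for rating in range(1, 6):
    share = 100 * counts[rating] / len(responses)
    print(f"rating {rating}: {share:.0f}%")
```

The per-category percentages also surface polarization (lots of 1s and 5s) that a single average of 3 would hide entirely.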
Donor segments ("major donor," "mid-level," "grassroots") are nominal. You can count how many donors fall into each segment, and those counts are useful for planning and resource allocation. But ranking them or averaging them doesn't work, even if your CRM assigns them numeric codes. Program outcomes measured in concrete units (meals served, housing placements, hours of tutoring) are ratio data. You can add, average, and compare them freely. Grant reports built on ratio data are the most straightforward to analyze and the most credible to funders.
Before computing anything, check what kind of data you're working with. A number in a spreadsheet isn't always what it looks like. Some numbers are measurements. Some are rankings. Some are just labels wearing a disguise.
See It
Switch between data types, then try each operation. Watch which operations produce meaningful results and which break down.
Reflect
Think about the columns in your organization's main database or spreadsheet. Which ones are nominal (labels), which are ordinal (rankings), and which are ratio (true measurements)? Are any of them being averaged when they shouldn't be?
When someone reports an "average score" from a survey, what assumptions are they making about the gaps between response options? Would a frequency distribution of responses tell a more honest story?