Department of Sociology, CUNY Queens College, New York, NY

Measurement Error

A short video describing the issue of measurement error.

For more about the “Teaching through YouTube” series, see this post.

Original Video Description

This video describes the concept of measurement error, and the difference between random and systematic measurement error.

Transcription (Auto-Generated)

Any social research project has to deal with concerns about measurement bias. Measurement bias occurs when we construct our data collection methods in a way that influences our respondents’ answers. When that happens, the results we get may be less a reflection of how our subjects actually are and more a reflection of the biases we brought to the table and built into the study. There is no silver bullet for solving measurement problems; it takes judgment. Still, it’s good to know what measurement error is and how to deal with it, and that’s the topic of this video.

In a research project, people will give different answers to your questions. That’s a good thing: people are genuinely different, and part of the reason we ask questions is to find out how they differ. At the same time, the answers you receive are not just a product of people’s genuine differences. A good place to start when discussing measurement problems is to distinguish two sources of variation, two reasons people give the scores they give: true variation and measurement error. True variation means that a measure is capturing people as they really are; they give different scores because they actually are different. Measurement error arises when there is a problem in the question itself, so that it doesn’t completely capture the characteristic or information you are trying to get from your subjects.

Not all measurement error is the same, either. There are two types: random and systematic. Random error consists of unpredictable mistakes that don’t push a study’s results in any particular direction. When you conduct a study, there are all sorts of unanticipated reasons a respondent might not give an accurate account of themselves. Maybe a respondent misread the question, or, on a multiple-choice item, meant to fill in B but accidentally filled in C. Maybe your question asked them to perform a mental calculation and they made an arithmetic slip. That is error, because your measurement isn’t reflecting your respondents’ true characteristics, but it is random error: you don’t know where it will pop up.

Systematic error is different. It happens when something in the question or the measurement scheme itself pushes all respondents to answer erroneously, and in the same way. A common way to describe measurement error is with a bullseye graph. Imagine the center of the bullseye is the true answer for everybody, and the shots are the scores that registered when people were measured. Random error scatters the shots loosely around the center; systematic error clusters them tightly, but away from it. That is why systematic error is much more dangerous: you can get very precise answers that are wrong, and a confident wrong finding is much worse than an uncertain finding that doesn’t tell us much.
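To make the random-versus-systematic contrast concrete, here is a small simulation sketch. It is an editorial illustration, not from the video; the attitude scale, sample size, and error magnitudes are all invented for the example. It draws true scores for a sample of respondents, then “measures” them twice: once with random noise centered on zero, and once with a systematic push that nudges every answer upward, as a leading question might.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: 1,000 respondents' true scores on a 0-100 attitude scale.
true_scores = [random.gauss(50, 10) for _ in range(1000)]

# Random error: unpredictable noise centered on zero. Any single answer
# may be off, but the errors don't push the results in one direction.
measured_random = [t + random.gauss(0, 5) for t in true_scores]

# Systematic error: every answer is nudged the same way (+8 points here,
# as if the question wording pushed everyone toward agreeing).
measured_systematic = [t + 8 + random.gauss(0, 1) for t in true_scores]

print(f"true mean:             {statistics.mean(true_scores):.1f}")
print(f"with random error:     {statistics.mean(measured_random):.1f}")
print(f"with systematic error: {statistics.mean(measured_systematic):.1f}")
```

With random error, the sample mean lands close to the truth because the misses average out; with the systematic push, the answers cluster tightly but off-center, which is the bullseye picture of a precise-but-wrong result.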
Measurement error is an inescapable problem. It plagues all studies, and you can’t get rid of it. At best, you can be cognizant of its existence and keep an eye out for problems in your measurement design that would let these errors become egregious. You have to use your judgment, but there are some basic tips you might want to follow. Make your questions and potential answers as clear as possible; anything that makes a question hard to understand, hard to answer directly, or hard to code can create error. When you design a survey, pre-test it. That means bringing it to some people, giving them the instrument, and then going over the answers with them. Check that the answers you received are what they meant, and talk to them about whether or not the wording of your questions made them feel they should answer a particular way. And don’t ignore the problem: bad data creates a bad analysis, and if you have a lot of error in your survey, your analysis could be all for naught.