3 Questions You Should Ask of Any Survey

By Daniel Olson

In the Jewish community, the results of survey research often persuade funders and program developers to direct resources toward certain projects and away from others. These stakeholders have abstract goals like strengthening Jewish identity, and surveys alluringly quantify these imprecise outcomes.

While surveys can be influential, their findings can mislead, because even the best survey research contains error, or gaps between the survey's findings and the facts on the ground. In the 2016 U.S. presidential election, experienced, well-funded pollsters conducted surveys right up to Election Day predicting that Secretary Hillary Clinton would win Wisconsin, Michigan, and Pennsylvania, but she lost all three.

Honest survey researchers recognize that error is inevitable and will make transparent their attempts to account for and minimize it. This process, unfortunately, is sometimes glossed over, especially in survey research without peer review. Such surveys might contain too much error to say anything meaningful about a population, yet are published and used to persuade.

Savvy consumers of survey data know to ask the following three questions of any survey. Doing so can help you determine whether a survey’s results provide meaningful information or merely “alternative facts.”

1. Did the survey actually measure what it claims to measure?

Survey researchers interested in the Jewish community often claim to measure abstract concepts like Jewish identity or engagement. To measure these concepts, they ask about specific behaviors like Sabbath and kashrut observance, synagogue membership, and in-marriage. If the researchers do not make a convincing argument for why those questions best measure the concept, then you should be skeptical of a survey’s validity.

Furthermore, researchers can ask questions that respondents might misinterpret. Responsible researchers carefully word and test questions to avoid this “measurement error,” but even that is no guarantee that respondents will perfectly understand each question.

For example, a 2015 Pew study of Orthodox Jews asked: “Do you personally refrain from handling or spending money on the Jewish Sabbath?” Admirably, the researchers admitted in their report that this question, which contains a double negative, could have been confusing to respondents, especially non-native English speakers. Measurement error also occurs when respondents inconsistently interpret relative terms like “very emotionally attached” or “rarely.”

To look for measurement error, you should examine the questions, determine possible misinterpretations, and check if the researcher accounted for them.

2. What group was the survey trying to understand?

Every survey has a “target population,” or the group the survey researcher wants to understand, such as every American Jewish teenager, all alumni of Birthright Israel, or the Jewish seniors living in a metropolitan area. However, it is usually impossible to obtain accurate and complete contact information for everyone in a given target population. So, researchers must rely on incomplete or inaccurate lists, resulting in “coverage error.”

For example, not every Jew who lives in a metropolitan area is in a Federation or synagogue phone directory. If researchers use those lists alone to study all Jews who live in that area, the study will not reflect the opinions of Jews not affiliated with these institutions.

To look for coverage error, pay attention to how survey researchers define their target population, ask where they got their list(s) of names, and evaluate whether these lists cover the entire target population.
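To see how coverage error distorts results, consider a minimal simulation, sketched here in Python. All of the numbers are invented assumptions for illustration (40% directory coverage, 60% vs. 15% monthly attendance), not findings from any real study:

    import random

    random.seed(0)

    # Hypothetical target population: 10,000 Jewish adults in one metro area.
    # Invented assumption: 40% appear in Federation or synagogue directories
    # ("covered"), and covered people attend services more often than others.
    population = (
        [{"covered": True, "attends_monthly": random.random() < 0.60}
         for _ in range(4_000)]
        + [{"covered": False, "attends_monthly": random.random() < 0.15}
           for _ in range(6_000)]
    )

    def share_attending(people):
        return sum(p["attends_monthly"] for p in people) / len(people)

    # The sampling frame: only people who appear in the directories.
    frame = [p for p in population if p["covered"]]

    print(f"True share attending monthly:       {share_attending(population):.0%}")
    print(f"Estimate from directory frame only: {share_attending(frame):.0%}")

Because the directory-based frame systematically excludes unaffiliated Jews, the frame-only estimate overstates attendance, and no sample drawn from that frame, however large, can close the gap.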

3. Who didn’t take the survey?

Researchers administer their survey either to everyone on their lists or to a selected sample. Either way, not everyone selected to take the survey responds.

This “nonresponse error” is especially problematic if the reasons for nonresponse are also what the survey is trying to measure. For example, in a program evaluation survey, those with strong feelings about the program will likely respond at higher rates. The survey will fail to measure the full range of participants’ reactions, leading to an inaccurate portrait.

To account for this error, better researchers carefully upweight the responses of groups with lower response rates so that the results better reflect the target population; for example, giving women’s answers more weight if women responded at a lower rate than men.
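Here is a minimal sketch of that weighting, in Python with invented numbers (the 50/50 population split, the response counts, and the “excellent” ratings are all hypothetical):

    # Completed surveys by group, and the known target-population mix.
    respondents = {"women": 200, "men": 400}
    population_share = {"women": 0.5, "men": 0.5}

    # Hypothetical outcome: share of each group rating a program "excellent".
    rated_excellent = {"women": 0.30, "men": 0.60}

    total = sum(respondents.values())

    # Weight = (population share) / (respondent share). Groups that
    # under-respond get weights above 1; over-responders get weights below 1.
    weights = {g: population_share[g] / (respondents[g] / total)
               for g in respondents}

    unweighted = sum(respondents[g] * rated_excellent[g]
                     for g in respondents) / total
    weighted = (sum(respondents[g] * weights[g] * rated_excellent[g]
                    for g in respondents)
                / sum(respondents[g] * weights[g] for g in respondents))

    print(f"Weights: {weights}")                 # women 1.5, men 0.75
    print(f"Unweighted 'excellent' share: {unweighted:.0%}")  # 50%, skewed male
    print(f"Weighted 'excellent' share:   {weighted:.0%}")    # 45%, matches mix

Each group’s weight is simply its population share divided by its share of respondents, so the weighted estimate recovers what a fully representative respondent pool would have shown, given the assumption that respondents within each group resemble its nonrespondents.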

A high response rate decreases the risk of nonresponse error, but does not fully account for it. Still, responsible researchers take steps to increase the response rate, especially among subgroups that would otherwise respond at lower rates. Look to see if the researchers used any of the following strategies to increase response rates:

  • Administering the survey in person or by phone, instead of online or by mail.
  • Contacting non-respondents multiple times or through different means.
  • Offering incentives for completing the survey.

Error is inevitable in survey research, and even well-conducted studies can mislead. Answering these three questions will help you tell whether the researchers minimized error enough for their results to serve as a meaningful basis for evaluation.

Daniel Olson is a Steinhardt Fellow in the doctoral program in Education and Jewish Studies at NYU. He is also a Wexner Graduate Fellow/Davidson Scholar. He studies disability and inclusion in Jewish education.