By Leonard Saxe, Charles Kadushin, and Theodore Sasson
A bold headline in Ha’aretz on the day after the Israeli election read, “When I was told about the results, I thought I was dead.” The quote was not by the leader of the defeated Zionist Union party, but by Professor Camil Fuchs, a distinguished survey statistician and lead consultant for several pre- and post-election polls. His polls, along with others reported by the Israeli press, were “dead wrong,” underestimating Netanyahu’s support by 50% in pre-election polls and post-election exit polls, and overstating Herzog’s vote by more than 10%.
The polling expert was not the only one initially misled. Relying, in part, on these polls, the Israeli media predicted a Netanyahu debacle. Some of the press acknowledged that they were biased by wishful thinking, but they also failed to grasp the deep Israeli need for reassurances of security – the centerpiece of Netanyahu’s campaign.
There are a number of technical reasons for the failures of both pre- and post-election estimates, not the least of which is the challenge of understanding a ten-party election, with complex rules about the minimum share of votes required for a Knesset seat. As Fuchs pointed out in an interview, the statistical margin of error alone would account for most of the prediction error for the two top parties.
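The arithmetic behind Fuchs’ point is easy to reproduce. The sketch below uses the standard formula for the margin of error of an estimated proportion; the sample size and party share are hypothetical figures for illustration, not numbers taken from the Israeli polls.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a party polling around 20% of the vote
# in a survey of 500 respondents (both figures are assumptions).
moe = margin_of_error(0.20, 500)
print(f"+/- {moe:.1%}")  # roughly +/- 3.5 percentage points
```

With these assumed numbers, the uncertainty around each of the two leading parties alone spans about four seats in a 120-seat Knesset – enough to swamp the gap the polls reported between them.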
As well, the need for near-instant results, particularly on election night, forced pollsters to compromise their methods. Exit polls stopped collecting data before 8:30 PM, an hour and a half before the polls closed. Even if the pollsters’ methods were good, they could not have detected a late surge for center-right parties.
During the pre-election phase, the pressure for quick results also meant reliance on Internet polls when phone polling would have been more accurate. Phone polls are a particular challenge in Israel, where almost everyone relies on mobile phones and repeated calls to non-respondents may be needed to ensure an unbiased sample.
Finally, there is some evidence that non-response was related to political views; in particular, right-leaning and Russian immigrant voters did not cooperate because they mistrust the polls. Fuchs’ exit polls used a questionnaire and had 30% non-response; in two other exit polls that used a mock ballot box, the non-response rate was 20%. Weighting the data so that the sample looks like the true population of Israeli voters is tricky, and weights cannot compensate for non-respondents who differ systematically from the population sampled.
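The weighting step described here can be sketched in a few lines. This is a minimal post-stratification example with invented numbers – none of the group labels or shares come from the actual polls – showing both what weights do and why they cannot repair systematic non-response.

```python
# Post-stratification: scale each respondent group so the sample's
# composition matches known population shares. All figures below
# are hypothetical, chosen only to illustrate the mechanics.
population_share = {"right": 0.45, "center_left": 0.40, "other": 0.15}
sample_share     = {"right": 0.35, "center_left": 0.50, "other": 0.15}

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Each under-represented right-leaning respondent now counts as ~1.29
# people; each over-represented center-left respondent counts as 0.8.
print(weights)
```

The catch, as the article notes, is that weights only correct for under-representation on variables the pollster can observe. If the right-leaning voters who refused to answer differ from the right-leaning voters who cooperated, up-weighting the cooperators reproduces the cooperators’ bias at a larger scale.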
Polling methods and statistical issues notwithstanding, it is also the case that voters’ preferences can change. Thus, in the last week before the election there appears to have been a surge for Likud. By Israeli law, poll results may not be published during the three days before an election. This issue is particularly maddening because the last published pre-election polls may have affected voters: reports showing Likud in trouble likely drove voters to change their preferences.
As problematic and difficult to conduct as the Israeli election polls are, surveys of U.S. Jews are actually more difficult. In Israel, the Central Bureau of Statistics provides accurate counts of the basic demographic characteristics of both Jews and Arabs, as well as religious preferences. The U.S. Census Bureau does not collect data on religion and, thus, there is no easy way to weight U.S. survey data on Jews. Surveys of U.S. Jewish attitudes – whether about experiences of antisemitism or engagement with the community – have all the problems of Israeli surveys and the additional difficulty of not having a population benchmark.
As survey researchers, we do not want to give up on opinion polling. The Israeli case reminds us, however, that we should avoid “horse race” estimates (of the type “35% of Israelis say….”). Statisticians call these point estimates because they report a single value, with sampling error on either side of the point; even with that caveat, they provide only limited information. Had the polls focused more on understanding the basis of voters’ preferences, the movement toward Likud would likely have been detected.
The lesson is that we should keep the focus on understanding the tendencies, dynamics, and trends of the population and various subgroups. Consider Jewish intermarriage in the United States. The point estimate, according to the recent Pew study of American Jews, was 58 percent in 2005-13. This statistic once again sounded the alarm regarding American Jews’ eventual extinction. Yet more thorough analysis of the same data shows that Millennial generation children of intermarriage were nearly 40% more likely than older counterparts to identify as Jewish in adulthood and to have engaged in Jewish behaviors such as attending a Seder. This is critical information for any meaningful interpretation of the findings on intermarriage – and information not available from a point estimate.
In Israel’s election polls, point estimates were unavoidable, but the interpretive errors could have been minimized by better reporting of the values, perceptions, and concerns of Israeli voters, and the fluctuations in those attitudes throughout the election period. We need to apply the same logic to our demographic analyses of American Jewry. Understanding who we are is more than a collection of numbers about our size and characteristics.
Leonard Saxe is Klutznick Professor of Contemporary Jewish Studies and Social Policy and Director of the Steinhardt Social Research Institute/Cohen Center for Modern Jewish Studies at Brandeis University.
Charles Kadushin is Distinguished Scientist at the Cohen Center for Modern Jewish Studies at Brandeis University and Professor Emeritus at the Graduate Center of the City University of New York.
Theodore Sasson is Senior Research Scientist at the Cohen Center for Modern Jewish Studies at Brandeis University and Professor of International and Global Studies at Middlebury College.
Corresponding author: firstname.lastname@example.org