By Zev Eleff, Lauren Raff, William Gomberg,
and Alex Jakubowski
In August 2000, a team of Brandeis University researchers published “preliminary findings” of Birthright Israel’s impact on participants. By then, the nascent program had flown 5,000 college students and young adults to Israel for 10 days of intense tourism and learning. The purpose of the well-financed initiative was to “connect young Jews to their heritage and to strengthen their Jewish identity.” The 58-page pamphlet was the first of many valuable Birthright studies produced by Brandeis’s Cohen Center.
The Birthright report wasn’t the first research on the impact of Jewish programming, but it was the best. Since then, and in line with data-positive trends in other fields, many funders and stakeholders have urged, or even required, agencies to evaluate the outcomes of their programs on intended participants. Over the past eight months, the Jewish Impact Genome (JIG) has mined hundreds of program evaluations and other grey literature to develop a robust taxonomy and a universal language of impact across the arena of Jewish Engagement. To support the field, the JIG team has produced an Impact and Measures Guide that maps the field’s impact measures to its common outcomes.
Too often, impact-driven professionals assess their programs detached from a supportive community of practice, unable to compare results or learn from peers. Most Jewish agencies do not know how peer organizations measure impact, or how to identify a menu of “best practices” for assessing their outcomes. The Birthright research set the standard for the field, though at a bar that may be unreachable for most organizations: the study was repeated for each cohort, analyzed against a control group, and supported by other features that are difficult to duplicate.
These are some of the major issues that have emerged in conversations with practitioners in the Jewish Engagement field. Many organizations possess a well-grounded theory of change or a logic model that articulates the goals of their programs. They understand the relationship between their intervention activities and their outcomes. The challenge is to figure out the best way to measure impact: to ask program beneficiaries the questions that make the most sense for their interventions. Some organizations outsource this function, preferring to rely on outside evaluators to determine the success of their program efforts. In doing so, however, few take the time to understand how and why their evaluators selected specific measurements or, in some cases, how the evaluators defined the outcomes the programs aim to achieve.
In all, the JIG researchers identified 1,359 total measures of impact, more than half in the area of “Jewish Connectivity.” Very few, by contrast, fell into the outcome area of “Sustainability and Welfare.” Programs like Birthright Israel, Limmud, and PJ Library anchor themselves to a variety of outcomes, most residing in the Jewish Connectivity neighborhood, a “Zip Code” the JIG defines for agencies aiming to develop or strengthen engagement with Jewish life and enhance participants’ connections to a Jewish network.
Evaluators have tended to measure impact by surveying participants’ attitudinal change: Did the program deepen their relationship to Judaism and other aspects of Jewishness? Some also poll participants on behavior changes that “reflected” (or, more simply, measured) impact on Jewish identity, such as time spent browsing Jewish Internet sites, attending religious services, and engaging in Jewishly oriented activities.
Not all of these queries are intuitive on their own. Still, increases in each measure suggest a short-term surge in Jewish Connectivity. These “indicators” only approximate the broader goals of a program, but they are certainly more tangible and measurable than asking about improved feelings of Jewish peoplehood.
Owing to this, the JIG team rated each measure of impact on two fundamental criteria: “Measurability” and “Intelligibility.” For Measurability, JIG researchers considered the tangibility and discreteness of the survey question and how reliably responses could be compared with others in the same program cohort and in peer programs. An instrument that polls participants on meaningful growth in their network of Jewish friends is stronger than, say, one that asks whether an intervention “increased the likelihood of socializing with other Jews.” Likewise, a program that measures impact by asking about increased observance of Shabbat and holidays, if relevant for that population, poses a more measurable question than one that asks about participants’ “connection to Jewish traditions and customs.”
Intelligibility is also important. Some agencies invested in Jewish Culture ask participants whether a program moves them to attend additional cultural programs and to devote a greater share of their cultural activities to Jewish ones. Others ask about the depth of “spirituality” and “Jewishness,” vague language freighted with different meanings for different people. The JIG team deemed the latter instruments less intelligible to participants, program staff, and stakeholders.
This method accomplishes several important things. First, it moves the conversation away from what agencies ought to be measuring and toward what they can measure and, ultimately, assess. The JIG’s Impact and Measures Guide helps organizations select, from a suite of measures, those that best reflect the goals of their programs. It also aims to facilitate a conversation that helps agencies get a better handle on assessment and raises awareness of what others in the field are evaluating. No doubt, certain measures make more sense for particular organizations and lie well beyond the scope of others. But no matter which measures are chosen, every program staff member should be able to understand the data they collect, why it matters, and how to use the information gleaned to improve.

Second, a better handle on the language of impact in the Jewish Engagement arena will support a fieldwide effort to share and compare data. By assessing success in a more standardized and democratized language, agencies and stakeholders will be better positioned to understand the possibilities and expectations of impact in Jewish life.

The Path Forward
Moving forward, the JIG team will refine and update the Impact and Measures Guide as the field develops. The goal, then, is shared learning. To support this effort, please take a moment to complete a brief, five-minute outcome survey. The survey is intended to shed light on the Jewish Engagement field and how agencies in this space understand their own areas of impact. Your individual responses will not be shared, but your impact language will be vital to broadening the collective wisdom of Jewish Engagement practitioners.
Zev Eleff, Lauren Raff, William Gomberg, and Alex Jakubowski are members of the Jewish Impact Genome team. For more information please reach out to Zev Eleff at [email protected]