How to Fix Our Approach to Evaluation Research in Jewish Education – and Why We Need To

by Eran Tamir

The Jewish community is blessed with lay leaders, philanthropists and professionals committed to creating vibrant and innovative Jewish learning opportunities across the lifespan. Their relentless efforts have resulted in many exciting new educational initiatives. Still, it is no secret that while we hope each of these initiatives will succeed and have a lasting impact on the field, not all do. Identifying the most effective initiatives is a daunting task, one that makes solid evaluation research a must for policymakers and funders.

In 2007 I established the DeLeT Longitudinal Survey at the Mandel Center for Studies in Jewish Education at Brandeis University. The project tracks the careers of alumni of DeLeT, a professional preparation program for Jewish day school teachers, and measures the impact of the program and the day school environment on these teachers’ long-term commitments to teaching and to Jewish education (see: Tamir et al., 2010; Tamir & Magidin de-Kramer, 2011). In launching this project, I looked for validated measures (survey questions that had been carefully developed and tested in previous studies) in Jewish education, but found none. Instead, I borrowed validated measures from similar endeavors in public education, where there is widespread concern about teacher quality and retention.

A year later, I learned that JESNA had released findings from its Educators in Jewish Schools Study (EJSS) (Ben-Avie & Kress, 2008). Although the EJSS and the DeLeT study both focused on topics like teacher retention, satisfaction and school support, they used different survey questions to measure and discuss these issues. Then, in 2010, the Jim Joseph Foundation commissioned a study to evaluate the Pardes Educators Program (Kopelowitz, 2011). While it focused on the same issues investigated by the DeLeT and EJSS researchers, it employed yet another set of measures.

Evaluating the same phenomena with three different sets of survey items makes serious comparative research challenging. For example, a simple question about teachers’ career commitments was worded in three different ways. The EJSS survey asked respondents to agree or disagree with the following statement: “I would describe myself as having a career in Jewish education.” The Pardes survey asked: “Three years from now, do you intend to be a Jewish studies teacher in a day school?” The DeLeT survey asked: “What have you been doing this year? What do you anticipate doing next year? Five years from now?” So, when I tried to compare the responses of DeLeT teachers with those of Pardes and EJSS teachers to learn what kinds of career commitments these day school teachers anticipated, I was unable to do so. While the items I eventually chose likely correlate with one another, and comparing them across studies might hold some merit, the results would be far more reliable if they were based on teachers’ responses to the same question.
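To make the stakes concrete, here is a minimal sketch in Python, using entirely hypothetical response data, of the comparison that becomes trivial when studies share a single validated item, and that cannot be reconstructed after the fact when they do not:

```python
# A minimal sketch with hypothetical data. If all three studies had asked
# the same validated question on the same response scale, their results
# could be pooled and compared directly.
import pandas as pd

# Hypothetical responses to one shared Likert item, e.g.
# "I would describe myself as having a career in Jewish education."
# (1 = strongly disagree ... 5 = strongly agree)
delet = pd.DataFrame({"study": "DeLeT", "career_commitment": [5, 4, 4, 3, 5]})
ejss = pd.DataFrame({"study": "EJSS", "career_commitment": [4, 4, 5, 2, 3]})
pardes = pd.DataFrame({"study": "Pardes", "career_commitment": [5, 5, 4, 4, 3]})

pooled = pd.concat([delet, ejss, pardes], ignore_index=True)

# With a shared item, a direct cross-study comparison is a one-liner:
print(pooled.groupby("study")["career_commitment"].mean())

# With three differently worded items -- an agree/disagree statement, a
# yes/no intention question, and an open-ended career narrative -- there
# is no defensible way to place responses on a common scale afterward.
```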

I ran into similar problems trying to compare responses on many other variables, such as the intention to stay in Jewish education, teacher satisfaction, and various measures of school conditions. Each survey used a different set of items to measure the same phenomena, hindering comparison.

What can we learn from this? Having standardized, comparable measures can yield more dependable information regarding program impact, inform funding decisions, and advance knowledge and understanding. It is, therefore, critical that we do a better job of coordinating evaluation research.

In the process, we should move beyond a minimalist approach to evaluation, one that depends on the speedy creation of assessment tools tailored to a particular project, with little attention to previous efforts in the field. In the short run, this approach is no doubt quicker and less expensive, because it requires no investment in coordination and collaboration. Nor does it require evaluators to go the extra step of reviewing the relevant literature for validated measures that could be used to compare similar initiatives. In the long run, however, the minimalist approach is wasteful and self-defeating for all stakeholders in the field of Jewish education.

I recommend an approach to evaluation research that encourages researchers to build on each other’s work and, over time, create well-established, validated measures. This approach will gradually produce separate but standardized data sets that can feed into any future evaluation study and, as a result, strengthen the entire field.

To facilitate this process, funders who commission evaluation research should require evaluators to review past evaluations and look for appropriate measures to include in the current study. In addition, Jewish education needs a responsible body to collect and store searchable data and instruments from old and new evaluations. The new initiative to create a Jewish Survey Question Bank (led by Steven Cohen of the Berman Jewish Policy Archive @ NYU Wagner and funded by the Jim Joseph Foundation) is a step in that direction. Next, evaluators and researchers who receive funding could be required to list the names of their new projects, the types of questions they are trying to answer, and the methods and instruments they will use. Once an evaluation study is complete, researchers would be required to share their instruments and findings. Establishing a common repository – and requiring researchers to use it – could address both the current vast inefficiencies and the lost opportunity to gather valuable information that could improve the quality of decisions and programs in the field.
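As an illustration, here is one way an entry in such a repository might be structured so that instruments are searchable and reusable. This is a hypothetical sketch in Python, not the actual schema of the Jewish Survey Question Bank:

```python
# A hypothetical sketch of a shared question-bank record. Illustrative
# only; the real Jewish Survey Question Bank may be organized differently.
from dataclasses import dataclass, field

@dataclass
class SurveyItem:
    item_id: str                  # stable identifier, e.g. "career_commitment_v1"
    text: str                     # exact wording of the question
    response_scale: list          # permitted responses, in order
    construct: str                # what the item is meant to measure
    source_study: str             # where the item was first validated
    used_in: list = field(default_factory=list)  # later studies reusing it

item = SurveyItem(
    item_id="career_commitment_v1",
    text="I would describe myself as having a career in Jewish education.",
    response_scale=["strongly disagree", "disagree", "neutral",
                    "agree", "strongly agree"],
    construct="teacher career commitment",
    source_study="EJSS (Ben-Avie & Kress, 2008)",
)

# A new evaluation searching the bank by construct would find this item,
# adopt its exact wording, and thereby make future comparisons possible.
```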

Eran Tamir is a senior research associate at the Mandel Center for Studies in Jewish Education at Brandeis University.

References

Ben-Avie, M. & Kress, J. (2008). A North American Study of Educators in Jewish Day and Congregational Schools. New York: JESNA (Jewish Education Service of North America).

Kopelowitz, E. & Markowitz, S. (2011). Evaluation of the Pardes Educators Alumni Support Project: Promoting Retention of Pardes Educator Program Alumni. Jerusalem: Pardes Institute of Jewish Studies.

Tamir, E., Feiman-Nemser, S., Silvera-Sasson, R. & Cytryn, J. (2010). The DeLeT Alumni Survey: A Comprehensive Report on the Journey of Beginning Jewish Day School Teachers. Waltham, MA: Mandel Center for Studies in Jewish Education.

Tamir, E. & Magidin de-Kramer, R. (2011). Teacher Retention and Career Commitments Among DeLeT Graduates: The Intersection of Teachers’ Background, Preparation for Teaching, and School Context. Journal of Jewish Education, 77(1), 76-97.
