The Complex Challenge of Evaluation

by Meredith Woocher, Ph.D.

Last week on this blog, Rabbi Aaron Bisno challenged us to have a “courageous conversation” about the realities of Jewish life today: “Let us begin by acknowledging that every expectation upon which Jewish life has long relied is now suspect. And as the assumptions upon which we have built our financial models, budgets and future prospects for sustainability are undermined; as new patterns of affiliation and new demographic and sociological realities redefine what we can expect going forward… the sacred ground upon which we have built our house is shifting beneath our feet.” In other words, the Jewish world is highly complex and unpredictable, and simple, straightforward solutions to our problems are no longer sufficient (if they ever were).

This reality requires all of us in Jewish professional and leadership positions to rethink core assumptions about how we operate. In my own field of evaluation research, it means having the courage to ask some hard questions about how we define, measure and achieve impact. The standard framework for evaluation (which I myself have used many times) involves clearly defining the impact a program or initiative will have on a target population; identifying short- and long-term outcomes for changes in knowledge, attitudes and behaviors; specifying the time frame in which these changes will occur; and gathering data to determine whether the program has met its intended goals.

But as I contemplate the implications of our complex environment, I find myself questioning many of the assumptions behind this evaluation framework: Can we really identify our intended impacts with certainty and clarity when the Jewish world today is so full of change and ambiguity? What is a reasonable time frame in which to expect and be accountable for results, when each day that passes is reshaping our community in ways that could hardly have been predicted even a few years ago? If we focus data gathering on a pre-determined set of metrics, what significant impacts and changes will we miss because they fall outside the scope of our inquiry? These questions should be of more than abstract concern to anyone with a stake in meaningful and effective evaluation (that is, anyone who plans, runs, participates in, studies or funds programming within the Jewish community).

Fortunately, in recent years a new approach to evaluation has arisen in response to just these kinds of questions. Pioneered by renowned evaluator Michael Quinn Patton, “Developmental Evaluation” uses knowledge-gathering and analysis to “support innovation by bringing data to bear to inform and guide ongoing decision making” in complex environments (p. 36). Patton defines a complex environment as one in which change is constant and often unpredictable, and there is “no known solution to priority problems, no certain way forward, and multiple pathways are possible” (p. 23). Sound familiar? But within the framework of developmental evaluation, these conditions are to be viewed not as cause for alarm or despair, but as opportunities for innovation and impact. For if multiple pathways are possible, then the task of those who seek change is not to hope that they can hit on the one “right” answer, but rather to explore these pathways through constant experimentation, analysis and adaptation.

The role of developmental evaluation in this model is to facilitate the development (hence the name) of innovations (programs, initiatives, interventions, etc.) by providing ongoing feedback about how the innovation is interacting with the people and settings it touches. On the surface, the activities of developmental evaluation look the same as any other type of evaluation – surveys are distributed, people are interviewed, data are compiled and presented. But the focus and purpose of the evaluative work is quite different: making sense of complex patterns of behavior rather than testing pre-determined outcomes; developing highly adaptable “effective principles” rather than standardized “best practices”; exploring possibilities for innovation rather than judging a program to be a success or failure.

Another key difference in this approach is its potential impact on the innovation being evaluated. Because developmental evaluation involves a wide scope of investigation into how a given program interacts with its environment, it can lead to radical changes in the program’s structure and operation. These changes are seen not as evidence of failure of the initial program model, but as an appropriate and necessary response to increased knowledge and understanding. As an example, at the Partnership for Jewish Life and Learning in the Washington, DC metro area (where I serve as Director of Research and Evaluation) we have used evaluation data to guide us in rethinking and reworking signature initiatives in congregational education and youth philanthropy. In each case, the changes were made in response to knowledge gained in real time as the initiatives unfolded, showing where, how, and with whom we were making the greatest impact, and where we were meeting unforeseen obstacles. This process was not always easy, as it required letting go of some deeply held assumptions and beliefs about the best way to achieve change in our community. And because we certainly don’t believe we are done learning – evaluation being a constant and core aspect of our work – we know that we will have to continue to adapt as our knowledge grows and the world around us continues to change.

In general, developmental evaluation requires us to tolerate an often uncomfortable level of unpredictability: when we forge a new path, we can’t know exactly where we may end up. In the past decade or so, Jewish institutions and funders have increasingly relied on evaluation in its classic role as a tool to identify outcomes, demonstrate causal relationships between actions and results, and make decisions about the ultimate fate of a given program. While any emphasis on evaluation is to be commended – and is indeed a huge step forward for the Jewish community – it is possible to take this approach too far, such that our desire for answers stifles our ability to ask and explore critical questions. We may wish we could know at the outset what impacts we will achieve and how long it will take to achieve them, but in our complex and dynamic world this is simply not possible. Instead, we need the courage to accept uncertainty and the patience to allow change to emerge and unfold.

This is not to say we shouldn’t start with our best assumptions about what we can achieve, ideally grounded in as much knowledge and data as possible about the environment we’re working in. But we have to remember that these are only assumptions, which not only can change but undoubtedly will change as we continually observe, learn and adapt. The key evaluation question, then, is not “Did this program work?” or even “Did we achieve our intended goals?” but “What are we learning about the realities in which this program operates, and how can we use our growing knowledge to increase our ability to achieve impact?” Not quite as simple and straightforward, perhaps, but then neither is the world we live in.

Meredith Woocher, Ph.D., is Director of Research and Evaluation, Partnership for Jewish Life and Learning.
