By Zev Eleff and Alex Jakubowski
“Our program is entirely innovative. The design is unproven; the approach is untested; the outcomes are unknown,” imagined nonprofit blogger Vu Le in a ‘brutally honest’ practitioner-funder conversation. “We also have a tried-and-true service delivery model with outstanding results and a solid evidence base to support it. But you funded that last year and your priority is to fund innovative projects. So we made this one up. Please send money.”
The lampoon has struck a nerve within the nonprofit sector. It has been quoted or reproduced in a myriad of popular online venues. These lines serve as a more palatable means of speaking about a broken system, one that has needed fixing for a very long time.
For stakeholders in the Jewish Engagement arena, the problem is a lack of common language around defining success. To create that common language, the Jewish Impact Genome (JIG) team was not interested in furnishing another prescriptive script to replace an outdated model. Instead, the JIG operated descriptively, turning first to the peer-reviewed literature and then probing much deeper into the “grey literature” of program evaluations and grant reports to get the best possible handle on how Jewish Engagement talks about impact.
After hundreds of coding hours and dozens of conversations with field practitioners, this process produced the backbone of what has become the JIG’s Assessment Instrument, now fully integrated into the Impact Genome Project. Using a common language, this tool allows agencies to report on the impact of their programs, distilling the “what,” the “how,” and the “to whom” of the change a program seeks to make in Jewish life. The Genome then aggregates and assesses the data to obtain a broad perspective on “change” and “impact” in North American Jewish life.
But as any well-grounded Latin scholar will tell you, a language is only as useful as those who speak it. The JIG team therefore sought practitioner guidance. We assembled a Practitioner Council to pretest the instrument and make recommendations on how it could be used to maximize impact in the sector.
The group was diverse. Agencies differed on how they conceive of impact in the Jewish Engagement sector, as well as on the strategies used to accomplish those goals. Some were legacy organizations, while others had emerged more recently on the Jewish scene. No two collected or used data in the same way. The group was united by their appreciation for diversity and willingness to support a common cause.
The Practitioner Council feedback, outlined in a findings report, was constructive, helping us refine the tool and get the most out of our field learning. Their comments, offered during intensive debriefing sessions, sharpened our thinking on the taxonomies and increased the instrument’s utility. For instance, their guidance broadened some of our “outcomes” and polished some language in the taxonomies to make it more comfortable for staff in the field.
They learned, as well. The pretesting period deepened our partners’ understanding of the programs they run, providing an “opportunity to think through all the aspects of one program.” Our pretesters relayed that they were “forced to consider, ‘How does our context impact our beneficiaries and how does that relate to our outcomes?’” In addition, Practitioner Council members recognized the instrument’s fundraising implications. Pretesting was a valuable exercise, compelling staff members to “think of things we do within our programs that we don’t typically highlight.” Best of all, the process “enabled us to think of something unintentional happening in our program that we may want to make intentional.”
The Practitioner Council members agreed that the JIG instrument zeroes in on the areas that matter most. They appreciated the “simplicity of the idea and model and the ability to break through the noise of doing different evaluations for every single intervention we oversee.” All this, the pretesters agreed, will facilitate a more thoughtful conversation around funding and impact for all staff leaders and stakeholders.
That the Genome resonated with a diverse cluster of practitioners is important. But we also gained much from their concerns. One Executive Director shared her concern that funders would use the JIG instrument to “simply do the math” and fund the program with the lowest cost per outcome. We share that concern. No instrument or dataset should ever replace the crucial, personal conversations between funders and grantees. Instead, much like an SEC 10-K or the Bloomberg Terminal’s advanced financial data, the Genome’s information offers a framework for funders and grantees to engage in substantive and layered conversations about portfolio goals and program impact.
Moving forward – particularly in the context of our ecosystem launch – the JIG team will redouble its efforts to demonstrate the instrument’s usefulness in supporting substantive conversations, the kind envisaged in Vu Le’s oft-quoted lampoon. The Genome is meant to facilitate deeper funder-grantee conversations supported by common language and fieldwide learning.
The Practitioner Council was the first step in our outreach to the Jewish Engagement field. We have also convened learning groups and other conversations to capture the rhythm of Jewish Engagement.
In addition, we have launched a brief five-minute outcome survey to test, as broadly as possible, how our coding matches up with agencies’ expectations. Please take a moment to fill out this survey. Your individual responses will not be shared, but they will be crucial to empowering the field and democratizing the conversation around impact.
Zev Eleff and Alex Jakubowski are senior members of the Jewish Impact Genome team.