Opinion

Five Reasons Why New Initiatives Fail

We do not evaluate new initiatives adequately, nor do we always report on our findings honestly, whether these initiatives are small, local efforts or multi-year, multi-million-dollar initiatives. Why?

by Dr. Gabe Goldman

I have seen my share of “new initiatives” in Jewish education over the past 45 years. I saw the rise and fall of open schooling in the 1970s; the Back to Basics movement in the 1980s; the Continuity Commissions of the 1990s; and experiential education in the 2000s. I have been a part of local and national initiatives to improve curriculum development, experiential education, family education, informal education and teacher training. Year after year, decade after decade, one new initiative follows another. And each soon becomes a vague memory. Few of these initiatives delivered even a fraction of what they promised. Why didn’t we know at the time that these initiatives were not succeeding as well as we thought they were? Why is it that all new initiatives seem to be working exceedingly well throughout their trial periods (i.e., the period during which they receive funding) but ultimately fail to achieve their goals? In my experience, new initiatives fail for one or more of the following reasons.

1. “New and Innovative” Doesn’t Mean New

Most new initiatives do not represent anything new but rather the repackaging of practices already part of the educational system. I remember when one community funded a multi-million-dollar initiative to “introduce” informal Jewish education in the form of a retreat center. Though the retreat center was new, schools throughout the community had been taking students on retreats for decades. For many of us, it came as no surprise that the retreat center did not increase the number of students participating in retreats (one of its primary “goals”). It is impossible to measure the effects of a new practice without knowing in advance the extent to which that practice is already in use. This is something Jewish educators should bear in mind when evaluating the effects of the many “new” Experiential Education (EE) initiatives now being proposed. Again, though these initiatives are new, EE is not new to Jewish schools. Our schools have routinely used such EE practices as community service, model Passover Seders, sukkah experiences, retreats, hands-on projects and more for the last 50 years.

2. Failure to Report on Failures and Difficulties

We do not evaluate new initiatives adequately, nor do we always report on our findings honestly, whether these initiatives are small, local efforts or multi-year, multi-million-dollar initiatives. We have many ways of obscuring our difficulties and failures. One is to misuse statistical data. One summer, for example, I was asked to interpret evaluation data on a Jewish camp’s new teen-parent program. One of the evaluation questions was, “Can you describe one new thing you learned from this program?” Over 90% of the teens answered this question affirmatively, and this is the figure that I reported. But my review of their answers revealed that half of the teens had listed “it’s wrong to hurt another person” as what they learned. Obviously this was not something “new” that they learned, and had I been entirely honest, I would instead have reported, “No evidence of significant new learning took place.”

Another way of obscuring data is to report only the results of end-product evaluations, without reporting on the serious challenges that had to be overcome, and the changes that had to be made, to achieve those results. One example is the community that adopted a particular Hebrew curriculum after it had demonstrated its success in multiple communities; that is, students in the program were making consistent progress across all levels of Hebrew instruction. Only later did the new community realize that the other communities’ success had been made possible by extraordinary effort in the form of hiring special tutors and additional teachers to work with students who did not keep up with their studies.

Jewish educators are not dishonest people by nature, and fudging evaluations is not limited to the field of Jewish education. Part of the problem is the perception, common among researchers everywhere, that they will lose funding if they reveal problems with their research (i.e., their new initiatives). And part of the problem is our collective lack of understanding that failure is only “failure” if we do not learn from our mistakes.

3. New Initiatives Implemented for the Wrong Reasons

Who conceives of new initiatives? In many cases, the answer is not Jewish educators but rather professional and/or community lay leaders. For generations, central agencies of Jewish education have been tasked with creating new initiatives because their communities’ professional and lay leaders “wanted to see something done” and were able and willing to pay for it. The “something” varies from community to community, but the pressures to take on these new initiatives are the same: the opportunity to increase the agency’s operating budget and staff, and the need not to displease a lay leader who is probably a “friend” of the agency and a major donor to Jewish education.

Yes, Jewish educators play a role in helping to shape new initiatives. But, and this is a huge “but,” when educators disagree with the lay leadership funding an initiative, the educators rarely win the argument. The truth is that we educators – willingly or unwillingly – give in, though we call it compromise. And too often what we compromise is the very integrity of the initiative. Typically, for instance, we compromise on the number of staff we believe necessary, or on their level of training, or on the time frame in which we are required to achieve designated goals.

4. Failure to Design Effective Evaluations that Provide Timely Guidance

Evaluation design has always been the weakest part of new initiative planning. Few of the new initiatives I review demonstrate any understanding of evaluation theory, or any evidence that one design was chosen over another for specific reasons. In some cases, it is obvious that proposed evaluations simply parrot the language found in the funding proposal guidelines. In other cases, it is equally obvious that an evaluation design was included simply because it was required or expected. Even when Jewish educators make a real effort to design evaluations that will tell them whether their initiatives are succeeding, they are limited by their lack of experience.

An overwhelming majority of new initiatives, at local and national levels, use the same quasi-quantitative, summative evaluation approach used for the past 40 years. This approach identifies specific goals, quantifies their accomplishment and measures success against predetermined benchmarks. It is the simplicity of this approach that makes it so seductively attractive – and so ineffectual for guiding the success of new initiatives. For starters, summative evaluation is cumulative in nature, measuring the level of success obtained only at the end of the evaluation period. We know this approach does not protect against evaluation bias and probably encourages it. More importantly, this approach leads to confusing the means of achieving educational goals with the goals themselves (see Point 5 below).

There are alternative evaluation approaches, such as formative evaluation, that greatly reduce the risk of evaluation bias while providing timely data on what is working, what is not working and what needs to be removed entirely. These approaches require more time to design and more resources to conduct, but they are the only way to ensure that program planners are alerted to the midcourse corrections every new initiative inevitably requires, before small problems grow into serious ones.

5. Confusing Educational Goals with the Means for Achieving These Goals

Educational goals require changes in students’ attitudes, behaviors, beliefs, knowledge or skills. Everything else is a means by which these goals are achieved. Most Jewish educational goals cannot easily be measured, and some cannot be measured at all in the time frame required. Rather than admit this, we find ways to “re-state” goals so that they can be quantified and therefore measured. We set benchmarks for how many people will participate in a program, how many are satisfied with it or how many demonstrate they learned something new. In doing so, however, we are not really “re-stating” goals; we are eliminating them and substituting in their place the mere means by which goals are to be achieved. For example, having many students attend a given program is merely a means toward achieving whatever the goal of that program is; it is not in itself a goal! When Jewish educational planners evaluate the means of achieving goals rather than the accomplishment of the goals themselves, it is easy to misunderstand what is working, what is not working and what needs to be changed about their initiatives.

There are plenty of people who would argue that it is better to evaluate something than to evaluate nothing at all. Others would argue that since we cannot achieve our goals without effective means of achieving them, it is legitimate to evaluate the means rather than the goals themselves. These are logical arguments, but even so we cannot ignore two pressing facts. First, the quantitative benchmarks we set are completely arbitrary, without basis in statistical analysis or social change theory. Second, the success of a new initiative often depends on factors that exceed program design – factors that could not even be anticipated, let alone evaluated by the summative approach – such as an indefinable chemistry that develops between student and teacher, or a satori (eye-opening) experience triggered by a certain phrase or song.

In conclusion, I want to note that the continued failure of efforts to change Jewish education in significant ways does far more harm than merely wasting time and money. It breeds the feeling of hopelessness that “nothing is going to make a difference” and makes funders reluctant to support new initiatives. We can still regain the confidence of our clientele and funders if we start now by developing new initiatives that have real educational goals, are guided by a formative evaluation process, and are informed by complete transparency in our reporting of evaluation results.

Dr. Gabe Goldman is Director of Experiential Education at the Agency for Jewish Learning, Pittsburgh, PA.