It is always gratifying when one of these posts inspires reactions, challenges, and rebuttals. The post of several weeks ago on innovation funding seems to be one of those. This follow-up piece addresses some of the reactions I have received.
Evidence-based grantmaking: Some have read my earlier piece as a wholesale dismissal of the value of evidence-based grantmaking. That is not what I meant to imply. There are, indeed, many cases where there is significant documentation that one intervention is more effective than others. In addition, many funders and foundations don’t have the internal resources to do independent research or assessment of complex approaches. They are “mission aligned” with a much larger foundation that has done that work. Why not follow its lead? This can be a very efficient funding approach and can have a legitimate place in a grantmaking portfolio.
Moreover, even if not every evidence-based approach would withstand a longitudinal review, that does not mean that every unproven or untested program or intervention is equally worth funding. Nor does it exempt a grantee organization from articulating what success should look like, what an evaluation could examine, and what they would do if their assumptions prove false. All of us have met organizations – new and not so new – that have tried to dismiss accountability: “We are different,” “we have wonderful anecdotes,” or any of a variety of other rebuttals. Those responses are simply not credible. Innovation funding requires openness to uncertainty, high tolerance of failure [see below], and acceptance of early-stage indicators rather than clear evidence of success. But innovation funding does not dismiss accountability, knowledge of the field, or the legitimacy of those early-stage indicators.
Replication: Another area that only makes sense if there is demonstrated effectiveness is the one commonly referred to as replication. Replication is NOT simply copying something tried in one place and plopping it down somewhere else. That is in itself a recipe for failure. How often have we seen that? But a program, approach, or project that has been tested – and, crucially, documented so that we understand what made it successful – is a good candidate for “replication.” Here is another example where evidence matters.
Finally, some words about success and failure. Recently, a lot has been written about failure in our field. It is hard to imagine any funder who is honest with themselves looking at their portfolio over time and not finding some flops. And that is as it should be. After all, virtually all funding is for something that has not yet happened. Nothing in the future is or can be guaranteed.
But failure should never be because a funder got in the way: created unworkable conditions, knowingly underfunded, forced an organization to go beyond its reasonable competencies, overlooked organizational weaknesses that needed to be addressed…
I continue to be an unrepentant advocate for the importance of innovation funding. Funding innovation requires a high tolerance of failure. Sometimes a huge tolerance. But that failure should be for the right reasons and not because of funder myopia. And this is where strategy enters: asking the right questions, articulating what change might be possible, being courageous in helping those changes happen, and being responsible in your relationship with those who are implementing those noble efforts. What can be more strategic than that?
Richard Marker teaches and advises funders from around the world through both the NYU Academy for Grantmaking and Funder Education and the Wise Philanthropy Institute, both of which he founded. His blog can be found at Wise Philanthropy.