Prospect Research and Lists: the Do’s and Don’ts
By Gil Israeli
There was a time when Forbes published its annual lists in the magazine and that was it. There was no Internet (no deluge of online lists), and the magazine sold out repeatedly; this even helped keep Forbes afloat in difficult years when sales were weak the rest of the year.
Today, we have an abundance of lists – too many of them.
I’ve gathered more than 90 for prospect research purposes, and I’ve tried to winnow them down to those worth an ongoing, on-target annual review. To name a few: the wealthiest philanthropists in the US; major givers in higher education and medicine; leading philanthropic advisors; top 100 investment professionals; fastest-growing private companies; entrepreneurs of the year; the top “30 under 30;” the “Million Dollar List” of grants awarded each year; firms with recent IPOs; Most Powerful CEOs Under 40; 50 Most Powerful Women; Top 20 Rising Stars of Real Estate; Silicon Alley Insider’s 100 Coolest People in Tech; and so on.
But this glut can consume me. One way to differentiate these lists is to sort them into three categories for analysis: A, B and C lists. The A lists indicate philanthropic gifts, like those in The Chronicle of Philanthropy; the B lists indicate the wealthiest individuals, as in the Forbes 400. The third category (rising stars and relevant professionals) is also important; after all, these are the people who often have access to the people who have the wealth, or who may be involved in generating it (like investment professionals). But the consensus in fundraising is that, by far, the most important category is A: the best measure of a prospect’s inclination to give is his or her past giving, not current wealth.
So, how else to handle this glut of lists? We are swiftly approaching the point where the publications themselves will let us automatically scan their lists with a canned selection of parameters. The parameters are what you’d expect: list the gifts involving higher education, or those for $1,000,000 and up. And rightly so: the few publications that already offer this kind of list-filtering (such as the Million Dollar List) accelerate the review I now do throughout the year.
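To make the idea concrete, here is a minimal sketch, in Python, of what such parameter-driven filtering amounts to; the field names and sample records are hypothetical, not drawn from any actual published list:

    gifts = [
        {"donor": "Donor A", "cause": "higher education", "amount": 2_500_000},
        {"donor": "Donor B", "cause": "medical research", "amount": 750_000},
        {"donor": "Donor C", "cause": "higher education", "amount": 1_000_000},
    ]

    def filter_gifts(gifts, cause=None, min_amount=0):
        """Return the gifts matching an optional cause and a minimum amount."""
        return [
            g for g in gifts
            if (cause is None or g["cause"] == cause) and g["amount"] >= min_amount
        ]

    # E.g., gifts involving higher education, $1,000,000 and up:
    for g in filter_gifts(gifts, cause="higher education", min_amount=1_000_000):
        print(g["donor"], g["amount"])

Nothing here is more than a select-and-compare over records, which is exactly the sort of mechanical sweep a publication could offer out of the box.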
Yet, while this may be efficient, it will also bring a new mechanized aspect to prospect research, a tendency that has been creeping into the field and continues to do so. Consider, as another example, the Boolean search alerts I’ve programmed on Lexis-Nexis to scan articles and monitor major gifts to 50 competing organizations. (The donors of those gifts could be potential contributors to the organization I work for.) While the alerts do the first half of the task, I fortunately still need to review the articles myself to confirm a hit. Along the way, I always pick up information about our competitors that will be useful later. The best of it: learning what new funding sources our competitors are developing.
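The real alerts live inside Lexis-Nexis, but the logic of that first-pass screen can be sketched in a few lines of Python; the gift terms and organization names below are placeholders, not my actual saved searches:

    GIFT_TERMS = ["major gift", "donation", "pledged", "endowment"]
    COMPETITORS = ["Organization A", "Organization B", "Organization C"]

    def is_candidate(article_text):
        """First-pass Boolean screen: gift language AND a competitor's name."""
        text = article_text.lower()
        mentions_gift = any(term in text for term in GIFT_TERMS)
        mentions_org = any(org.lower() in text for org in COMPETITORS)
        return mentions_gift and mentions_org  # a human still confirms the hit

    print(is_candidate("Organization B announced a $5 million pledged endowment."))

The screen only narrows the stream; confirming the hit, and noticing everything else in the article, remains the human half of the task.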
Beware of too much efficiency. While this scanning of lists is expeditious in its summary sweep, it could also cost something in terms of sophisticated human analysis, which takes years of experience to develop in a person. Human sorting and analysis of data can lead to insight, even to an entirely new way to research prospects, and it’s not clear to me that a computer program can shuffle through sources and discover a new heuristic that way. (I’ll give an example shortly.)
To be fair, reviewing lists is part of the essential and natural human repertoire of everyday behaviors. We do it every day as we navigate the world, from the unconscious routine we follow the moment we wake up: shower, dress, breakfast, commute, work, lunch, work. We even consciously create and then explicitly prioritize our own lists to order our work. Thereafter, we let the list of formal activities dissolve for our remaining off-hours and vacations, when we can relax and idle (as Mortimer Adler described this valuable and often playful, reinvigorating time).
Making and reviewing lists is healthy. It can also be the launching point for creativity, for reordering information in new configurations and new contexts to express or achieve something new.
What’s the danger of automated list-analysis? When you remove a part of our daily round of prospect research (in this case, the positive though sometimes time-consuming work of analyzing lists and making connections), the sum total of our knowledge becomes “thinner.” We may lose the habit, that uniquely creative human capacity, of cross-indexing in rational and also in imaginative ways, the habit that enables us to make and take tangents.
And note that this also applies to innumerable areas beyond prospect research.
As an example, I’ll contrast the two ways I’ve reviewed foundation gifts over the years. I’ve accessed the Foundation Search database (at the Foundation Center) to search, for example, for New York-based foundations that support causes in Israel. In the results, at the end of each listed foundation, there appears a brief sample list of gifts, generally the largest grants awarded (just one useful criterion).
In contrast, after a light bulb ignited over my head, I started reviewing the annual 990 tax forms for gifts (nothing new here). But here’s what was new: I reviewed the list of grants for the last three years and placed each gift in one of four categories that I had decided represented the foundation’s giving, i.e., causes in Israel, higher education in the U.S., medical causes and social welfare. (In other words, I now defined the categories for analysis.) Then, based on these categories, I generated statistics on the foundation’s interests and multi-year giving trends. The latter, the multi-year giving by percentage to each category (indicated nowhere in the report or in the 990), painted a hitherto unseen picture and proved to be the best data for judging whether the foundation was a plausible candidate to be contacted and cultivated.
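For the record-keeping itself, here is a minimal sketch in Python, assuming the grants have already been transcribed by hand from the 990s into simple records; the categories match the four above, but the years and dollar figures are purely illustrative:

    from collections import defaultdict

    # Each record: (year, category, amount), transcribed from a 990.
    grants = [
        (2021, "Israel causes", 250_000),
        (2021, "US higher education", 100_000),
        (2022, "Israel causes", 300_000),
        (2022, "Medical causes", 50_000),
        (2023, "Israel causes", 400_000),
        (2023, "Social welfare", 100_000),
    ]

    # Total giving per year, and per (year, category).
    yearly_totals = defaultdict(float)
    category_totals = defaultdict(float)
    for year, category, amount in grants:
        yearly_totals[year] += amount
        category_totals[(year, category)] += amount

    # Percentage of each year's giving to each category: the multi-year
    # trend that neither the canned report nor the 990 itself shows.
    for (year, category), amount in sorted(category_totals.items()):
        share = 100 * amount / yearly_totals[year]
        print(f"{year}  {category:<20} {share:5.1f}%")

The arithmetic is trivial; the insight lies in choosing the categories, which is precisely the human step the canned reports skip.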
Lists are wonderful for gaining a particular view of the landscape, often close-up, but because they come bereft of finer-grained and more insightful analysis, they often stay locked in one lens and can suffer from myopia. As with so many other cases of data representation, what is really being measured is defined by the parameters of interpretation.
The reality is that I’d rather be initially confused by a quirk or variation in my daily diet of data, journals, articles, discussions with colleagues and lists than receive overly neat packages of canned analysis. The problem with canned analysis is that it often stays canned.
It’s a good thing to be pushed to be savvy, to dig in, to interpret results anew and to respond differently. The dividends come in individual cases of unexpected clarity and in the occasional “lucky” discovery of new ways to interpret the research data you work with.
Gil Israeli is Director of Prospect Research and Senior Writer, American Technion Society. The opinions he expresses are his own. He edits the blog fundraisingcompass.com.