This is my newest column for the Chronicle of Philanthropy.
Grant Makers Seek Specific Results, but That’s Rarely Possible
March 6, 2011 | Chronicle of Philanthropy
Human beings hate uncertainty. But in reality, the world is a dynamic, uncertain place, and predicting the outcomes of our actions is extremely difficult.
As a result, anybody who tries to craft a grant-making approach or design a nonprofit program needs to recognize the limits of our knowledge.
Whenever a grant is made or a program offered, a foundation or nonprofit is essentially predicting that a set of actions will lead to desired results. It might make that bet based on intuition or the findings of rigorous testing, but the underlying idea is that the future is knowable.
However, humans are lousy at making predictions.
In his book Expert Political Judgment: How Good Is It? How Can We Know?, Philip E. Tetlock, a business professor at the University of California at Berkeley, makes the case that not only are people bad at making predictions but they also dislike uncertainty so much that they often underperform pure chance because they invent fictitious cause-and-effect theories that serve them poorly.
In the book, Mr. Tetlock discusses a Yale University study “that pitted the predictive abilities of a classroom of Yale undergraduates against those of a single Norwegian rat.”
The rat and the undergraduates had to predict on which side of a maze food would appear.
The food was consistently located on the left 60 percent of the time and on the right 40 percent of the time.
However, Mr. Tetlock explains, the rat made better predictions than the Yale students:
“The rat went for the more frequently rewarded side (getting it right roughly 60 percent of the time), whereas the humans looked hard for patterns and wound up choosing the left or the right side in roughly the proportion they were rewarded (getting it right roughly 52 percent of the time).”
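The 52 percent figure follows from simple arithmetic. A guesser who "probability matches" picks left 60 percent of the time and right 40 percent, so the expected accuracy is 0.6 × 0.6 + 0.4 × 0.4 = 0.52, versus 0.6 for always picking the more frequent side. A quick simulation (my own illustration, not from Tetlock's book) makes the gap concrete:

```python
import random

random.seed(42)
TRIALS = 100_000

# Food appears on the left 60% of the time, on the right 40%.
food = ["L" if random.random() < 0.6 else "R" for _ in range(TRIALS)]

# The rat's strategy: always pick the more frequently rewarded side.
rat_correct = sum(side == "L" for side in food)

# The students' strategy (probability matching): guess left 60% of
# the time and right 40%, hunting for a pattern that isn't there.
human_correct = sum(
    ("L" if random.random() < 0.6 else "R") == side for side in food
)

print(f"Always pick left:     {rat_correct / TRIALS:.1%}")   # ~60%
print(f"Probability matching: {human_correct / TRIALS:.1%}") # ~52%
```

The simulation is just a sanity check on the arithmetic: chasing imaginary patterns costs the pattern-seeker roughly eight percentage points against the simple base-rate bet.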
The problem is that when faced with uncertainty, rather than making sensible bets on the best course of action, humans strive to conquer the uncertainty and devise a complex system intended to guarantee success.
We can laugh at the hapless humans in that study, but we must also recognize that our own brains get in the way of making good predictions.
And that is a problem in an era when nonprofits are urged to deploy “proven, effective” programs and grant makers are urged to demand “proof” that such programs are working.
To be sure, we can learn more about what works and what does not. We can strive to better understand what sorts of programs appear to work better than others. We can search for the characteristics demonstrated by high-performing organizations. But we must frame this effort in the language of probability, not as cause and effect “laws of nature” that simply need to be discovered.
Mr. Tetlock himself makes clear that our limited predictive ability should not paralyze us. “It would be a massive mistake to ‘give up,’ to approach good judgment solely from first-person pronoun perspectives that treat our own intuitions about what constitutes good judgments, about how well we stack up against those intuitions, as the beginning and end points of inquiry.”
If we expect to figure out where money, talent, and other resources should go, we need a blend of approaches to gathering knowledge. We need analytical studies, third-party evaluations, and statistical data—but we also need ideas drawn from beneficiaries, from an assessment of the character of nonprofit management teams, and from the intuition of experienced people in the nonprofit world.
In judging the validity of a decision-making process, Mr. Tetlock suggests we focus on two questions:
• How well do the expectations fit with what we can observe?
• Do decision makers update their expectations in response to evidence?
Grant makers would do well to ask those questions before allocating money, and nonprofits should make those their guideposts in crafting decisions.
We are on the cusp of what could be an era of high performance by nonprofit groups.
But no matter how much progress we make, our success hinges on accepting the limits of our knowledge and resisting the seductive idea that if we just try hard enough we can identify “proven” approaches that guarantee success.
I think it is important to keep in mind that, unlike the experiment above, there are two actors in play here. One, as you laid out, is the donor, who is working their way through the maze looking for cheese; the other is the partner, who is looking for some cheddar as well.
And for many who are on the receiving side of the equation, waiting for the rat to find the cheese is a process that can at times have varying degrees of impact on the partner organization as well. Impacts that can, and often do, require the partner to act as educator, take on risk, and shift the scope of programming.
So my question is: Is the design of the maze fundamentally improving the game, and is society benefiting from the added layers that have entered play? Or is the maze becoming part of the problem or, potentially worse, exacerbating it?
Should we keep pushing for metrics, third-party reporting, audits, and the like, all published on independent websites so that potential donors can “shop for deals”?
Or are we at a point where what really needs to occur is a full review of this multi-layered process to see what actually works versus what is adding unnecessary complexity to what was once a very personal undertaking?
It is something I struggle with as someone who (1) helps donors search for projects and (2) manages my own NGOs. At the end of the day, while I have no clear answers (I am a big believer in project auditing), I find myself going back to basics with clients: working with them to clearly define the issue they are passionate about and their expectations of success, and then helping them understand how to invest in something they have a tangible connection to.
Einstein once said we should make things as simple as possible, but not any simpler. I tend to think that the field of philanthropy makes things excessively complicated in an effort to overcome the fundamental uncertainty in social change. The point of my article is that trying to thwart uncertainty is a fool's errand.
Today I guest lectured for a Colby College course on philanthropy where, at the end of the semester, the class gives away $10,000. It is one of several such courses sponsored by Doris Buffett’s Sunshine Lady Foundation. In preparation I had the students read your column in the Chronicle of Philanthropy. It was a wonderful moment for them to pause, to talk about the dangers of over-analyzing in this ever more quantified world, and to reflect on the fact that there is no “one right answer” in philanthropy, even as we DO want to identify and fund approaches that appear to have a high probability of success. Nobody said this stuff was easy! Thanks once again for great content.
Thanks so much for letting me know! I grew up with a professor for a father, so I’m always thrilled to learn my work is being used in a university context!