This is my newest column for the Chronicle of Philanthropy. You can view my column archive here.
Human beings hate uncertainty. But in reality, the world is a dynamic, uncertain place, and predicting the outcomes of our actions is extremely difficult.
As a result, anybody who tries to craft a grant-making approach or design a nonprofit program needs to recognize the limits of our knowledge.
Whenever a grant is made or a program offered, a foundation or nonprofit is essentially predicting that a set of actions will lead to desired results. It might make that bet based on intuition or the findings of rigorous testing, but the underlying idea is that the future is knowable.
However, humans are lousy at making predictions.
In his book Expert Political Judgment: How Good Is It? How Can We Know?, Philip E. Tetlock, a business professor at the University of California at Berkeley, makes the case that people are not only bad at making predictions but dislike uncertainty so much that they invent fictitious cause-and-effect theories that serve them poorly, sometimes underperforming even pure chance.
In the book, Mr. Tetlock discusses a Yale University study “that pitted the predictive abilities of a classroom of Yale undergraduates against those of a single Norwegian rat.”
The rat and the undergraduates had to predict on which side of a maze food would appear.
The food was located on the left 60 percent of the time and on the right 40 percent of the time.
However, Mr. Tetlock explains, the rat made better predictions than the Yale students:
“The rat went for the more frequently rewarded side (getting it right roughly 60 percent of the time), whereas the humans looked hard for patterns and wound up choosing the left or the right side in roughly the proportion they were rewarded (getting it right roughly 52 percent of the time).”
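The 52 percent figure is not a fluke of that one classroom; it follows directly from the arithmetic of the two strategies. A minimal sketch, assuming only the 60/40 split quoted above:

```python
# Probabilities from the maze study as quoted: food on the left 60%
# of the time, on the right 40% of the time.
p_left = 0.60
p_right = 0.40

# The rat's strategy: always pick the more frequent side.
# It is right exactly as often as the food appears on that side.
rat_accuracy = max(p_left, p_right)

# The students' strategy ("probability matching"): guess left 60% of
# the time and right 40% of the time, independently of the food.
# They are right when guess and food happen to coincide.
student_accuracy = p_left * p_left + p_right * p_right  # 0.36 + 0.16

print(f"Rat:      {rat_accuracy:.0%}")      # 60%
print(f"Students: {student_accuracy:.0%}")  # 52%
```

In other words, the harder the students searched for a pattern that was not there, the further their score fell below the simple bet the rat was making.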
The problem is that when faced with uncertainty, rather than making a sensible bet on the best course of action, humans strive to conquer the uncertainty by devising complex systems intended to guarantee success.
While we can laugh at our haplessness, we must also recognize that our own brains get in the way of making good predictions.
And that is a problem in an era when nonprofits are urged to deploy “proven, effective” programs and grant makers to demand “proof” that such programs are working.
To be sure, we can learn more about what works and what does not. We can strive to better understand what sorts of programs appear to work better than others. We can search for the characteristics demonstrated by high-performing organizations. But we must frame this effort in the language of probability, not as cause and effect “laws of nature” that simply need to be discovered.
Mr. Tetlock himself makes clear that our limited predictive ability should not paralyze us. “It would be a massive mistake to ‘give up,’ to approach good judgment solely from first-person pronoun perspectives that treat our own intuitions about what constitutes good judgments, about how well we stack up against those intuitions, as the beginning and end points of inquiry.”
If we expect to figure out where money, talent, and other resources should go, we need a blend of approaches to gathering knowledge. We need analytical studies, third-party evaluations, and statistical data—but we also need ideas drawn from beneficiaries, from an assessment of the character of nonprofit management teams, and from the intuition of experienced people in the nonprofit world.
In judging the validity of a decision-making process, Mr. Tetlock suggests we focus on two questions:
• How well do the expectations fit with what we can observe?
• Do decision makers update their expectations in response to evidence?
Grant makers would do well to ask those questions before allocating money, and nonprofits should make those their guideposts in crafting decisions.
We are on the cusp of what could be an era of high performance by nonprofit groups.
But no matter how much progress we make, our success hinges on accepting the limits of our knowledge and resisting the seductive idea that if we just try hard enough we can identify “proven” approaches that guarantee success.