Anatomy of a Failed Grant

The Carnegie Corporation of New York (the foundation created by Andrew Carnegie) publishes a quarterly newsletter titled Carnegie Results. A recent issue includes a fascinating article by the director of their journalism initiative, telling the story of a failed grant.

The article begins:

This is an analysis of media grantmaking to support nonprofit issues. It is also the story of a first-time grantmaker who can say, in the end, that the grantee did what it was supposed to do but the strategy was a failure.

As longtime readers know, I’ve written frequently about sharing failures in philanthropy. I bring the issue back up today to follow up on my recent column in the Chronicle of Philanthropy and yesterday’s post on “superior knowledge” as the true “currency” in philanthropy.

Why did Carnegie publish this story of “failure”? Why did Susan King, the author, decide to write a six-page article detailing how her grant failed? Susan writes:

I must admit that any honest analysis of my first grant leads me to conclude I was naïve in making it, sensitive as a former broadcast journalist to news media needs more than issue impact, and unsuccessful in really improving the coverage of nonprofit organizations and priorities that were the twin goals of the grant.

But she then continues:

I can say I learned a great deal from this $354,000 investment about how working with media can advance ideas foundations care about, and I write this edition of Carnegie Results to share some of those lessons.

It is clear that Susan and the Carnegie Corporation recognize that by sharing Susan’s story, they may be able to help other grantmakers make better grants. Unlike in the for-profit world, where only the money an investor puts to work can produce a profit for the investor, Carnegie can produce impact in their focus areas simply by influencing the way other grantmakers deploy their resources in the areas Carnegie cares about.

“Failure” in philanthropy is much more like “failure” in science. When an experiment doesn’t work, scientists do not feel ashamed. The results go into public journals and become one more data point for other scientists to use.

So Carnegie has two resources at their disposal: financial capital and superior knowledge (as I laid out yesterday). My argument is that Carnegie’s knowledge is far more valuable than their financial capital. Carnegie gives roughly $120 million a year. But their knowledge, if effectively and broadly distributed, has the potential to influence the $300 billion given to charity each year. In fact, since the government funds many programs in Carnegie’s areas of interest, Carnegie’s knowledge can potentially influence the more than half a trillion dollars of public money spent in these areas.
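
To put the leverage claim in numbers (a back-of-the-envelope calculation from the figures above, not anything Carnegie has published):

\[
\text{leverage} \;\approx\; \frac{\$300 \text{ billion in annual charitable giving}}{\$120 \text{ million in annual Carnegie grants}} \;=\; 2{,}500\times
\]

Measured against the more than half a trillion dollars of public money, the multiplier rises past 4,000×. Influencing even a sliver of those flows would dwarf Carnegie’s direct grantmaking.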

$120 million is nothing. $120 million is what Madonna made last year. $120 million is what the video game maker Electronic Arts said their new Harry Potter game would generate in revenue this year.

If you want to play big, philanthropy is about knowledge. Everyone in philanthropy talks about “leverage”. Leverage means taking actions that have a magnified impact relative to the money deployed. The key to leverage in this field is sharing knowledge.

So play big. Carnegie is trying to. Let’s make this behavior the status quo.

9 Comments

  1. Archana says:

Carnegie reported on a failed grant to Zimbabwe last year, and their analysis of what went wrong and what they learned was some of the most fascinating philanthropy reading I’ve ever seen.

  2. Yes, yes, yes is all I can say. Sean, keep highlighting these stories, and before long this kind of honest, constructive feedback-reporting will be the norm. As argued in the document you tweeted about the other day (link below), it goes against any nonprofit organization’s rational grain to tout its failures. We have to shower praise on those who share the kind of information laid out in Carnegie’s example here, and keep pushing the conversation on why doing so matters, so that an organization’s cost-benefit analysis of whether to share ALL of its knowledge (good, bad, and ugly) falls on the side of sharing.

    http://ksghome.harvard.edu/~lpritch/ignorance_v2_r1.pdf

  3. The current issue of Foundation Review makes an excellent case about why foundations need to talk more about failure. The authors of Philanthropy and Mistakes: An Untapped Resource point out that although open grantmakers are not necessarily more effective than their “secretive” counterparts, they deserve kudos for helping advance knowledge and fostering a culture of learning and adaptation. You can download the full issue of the Review here: http://www.foundationreview.org.

  4. Thanks for all your comments. The Foundation Review article Bruce points to was my favorite in the issue.

  5. Individuals give 6.5X as much as foundations each year, but little of the foundations’ expertise is available to individuals who might want to make more effective philanthropic decisions. Pictures of baby seals are an effective way for an individual nonprofit to raise money, but a questionable way for our society to allocate hundreds of billions of dollars a year in scarce resources. The greatest opportunity for foundations is to leverage their resources with the far greater funds available from individuals, and yet how many foundations ever consider how individual giving could be grown or made more effective?

  6. I couldn’t agree more, Robert. I couldn’t agree more. I think influencing the way that individuals give is THE “next big thing” in philanthropy.

  7. Allison Fine says:

    Thanks for highlighting this topic and article, Sean. I’m not sure that it’s the best example of failed philanthropy since it was, in essence, providing grant money for a PR effort – even if it was a nonprofit, not exactly the stuff of systemic social change. Nonetheless, I was impressed by this quote from the grantee:
    “. . .foundations would have to spend hundreds of thousands (or perhaps millions) of dollars to do a comprehensive multi-year tracking survey that assessed listenership, attitude changes, and behavioral changes over a substantial period of time. Even advertisers do not have the resources to do this, although they believe that their ads work.”
    Doesn’t this strike at the heart of a lot of the problem: the disconnect between what foundations believe their rather small grants have accomplished and the reality of what would have to happen on the ground, and how much that would cost over many years, to create an impact?
    I am increasingly troubled by the acceptance of a social science paradigm as the answer to program evaluation; we need new models of measurement that match the agility and time pressures of the connected age. But that’s for a longer discussion, I think.
    Thanks again, I really love the topics you cover and your analysis.

  8. Tony Wang says:

    Great post, Sean. I agree with Allison’s comment pointing out the absurdity of trying to measure everything – sometimes theory, logic, and quick experimentation are more important than comprehensive systems of evaluation (it reminds me of the principles of angel investing and venture capital).

    I think you’re right on the money that financial resources + superior knowledge are the key to improved impact, and that each represents a variable in a Cobb-Douglas production function, where having a balance is superior to having one extreme or the other (a rough sketch follows below).

    I think, though, that one of the sticky challenges is understanding exactly how much impact superior knowledge has, and what the optimal allocation of resources toward information sharing is. It would be nice if there were some way to put a monetary or impact valuation on a foundation’s information assets. Just some random thoughts..
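
    To sketch the production-function idea concretely (the functional form and the exponent below are illustrative assumptions, not anything from the post or the Carnegie article):

    \[
    I = A \, F^{\alpha} K^{1-\alpha}, \qquad 0 < \alpha < 1,
    \]

    where I is impact, F is financial capital deployed, and K is knowledge shared. With a fixed budget F + K = B, impact is maximized at the interior split F* = αB and K* = (1 − α)B; putting the entire budget into either input alone drives impact to zero, which is the precise sense in which balance beats either extreme.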

  9. Tony, you just agreed that it is absurd to measure everything and then tried to quantify the value of superior knowledge!

    I think the advertising example is an important one. Advertisers strive to measure things, but they know they usually cannot. Google AdWords is so attractive because it makes measurement easy, but it certainly hasn’t displaced those big billboards in Times Square.

    It seems to me we should strive to measure, but recognize that we often cannot and come to grips with that reality.

    Allison, I agree re: the shortcomings of a social science evaluation model. One thing social science shows us is that while studying issues and quantifying them is useful, it still doesn’t prove anything, since social science theories are frequently disproved later. You can prove things like gravity in a way that you just cannot prove the reasons behind social trends. Even economics (a pseudo “hard” science that is really a true social science) can’t prove many things.

    Most businesses also do not “prove” to themselves why things work. They track their profits, but they often put massive resources into projects because they think they will work based on the evidence at hand, but without proof. The fact that businesses as a group turn a profit tells us that this process actually works in the for-profit world.

    Study hard, measure, quantify what you can. Then make a leap of faith and don’t fool yourself into thinking you are actually sure of anything.