This is a difficult post to write.
The cover story of this weekend’s edition of Barron’s (an important weekly Wall Street newspaper) is titled The 25 Best Givers and ranks the most effective philanthropists.
First off, I’d like to commend Barron’s for putting philanthropy on their cover and for focusing on effectiveness rather than the size of a donor’s giving. In 2007, Barron’s ran a similar cover story profiling 10 lesser-known but highly effective donors. I’d also like to highlight the role of Global Philanthropy Group, a philanthropy consulting firm, for working with Barron’s on the report and for helping push the concept of effective philanthropy into the mainstream.
But there’s a big problem with this list. Barron’s isn’t just highlighting effective philanthropists (as they did in 2007), they are attempting to rank the top 25. In the article Barron’s acknowledges the difficulty:
By its nature, this exercise involves a lot of subjective calls. Facts and figures about philanthropy are much harder to come by than data on corporations. One giver’s definition of success can differ sharply from another giver’s — or from ours.
The article goes on to explain the process:
Global Philanthropy Group and Barron’s considered scores of philanthropists, rating them on such criteria as innovation, quality of alliances with other groups, the ripple effects of their giving and the extent to which their successful projects can be replicated. We gravitated to philanthropists whose causes address severe problems, like children’s health in high-poverty regions of the world, but a broad range of causes, even in the arts, are reflected in the final cut.
However, while the 2007 report linked to full information on the methodology used by Geneva Global (Barron’s philanthropy consulting partner that year) and disclosed the firm’s relationships with any members of the list, this year’s report does neither.
Here’s the problem. The list perpetuates a myth of precision. It suggests that we can know that which is currently unknowable. According to the list, Pierre & Pam Omidyar are the most effective philanthropists in the world. I would agree that the Omidyars are doing great philanthropy. But is there any real basis for them to be ranked #1, The KIPP Foundation to be ranked #18, or Tom Siebel’s Meth Project to be ranked #5? Is Brad Pitt at #17 more effective than George Soros at #19?
The article notes:
But even if you disagree with some of our judgments, you are bound to learn some useful lessons from each of the 25 philanthropists on the list.
No doubt. It is a group of 25 outstanding philanthropists, and it is wonderful to see such a group profiled on the front page of an important paper. But Barron’s should not have ranked the donors in order, especially without any transparency about the methodology used.
One of the key things that got me interested in philanthropy was reading about how, when Ted Turner made a billion-dollar gift to the United Nations in 1998, his hand was shaking because he knew the gift was going to knock him way down in the rankings of the Forbes 400 list of richest people. As a direct response to his statement, the Slate 60 was created to rank those people who give the most. A year and a half ago, I wrote about the release of the 2007 Slate 60:
With the 2007 Slate 60 out today, I think it’s a useful time to think about how cultural expectations drive human behavior. Personally I would love to see some sort of list of the most innovative, or most effective, donors. Giving big is great, but giving well is better.
It is easy to measure how much someone gave, but really tough to measure how well they gave it away. In the financial markets, investors who generate the highest return on their investments are celebrated, not those with the largest portfolios. But then it is quite easy to measure for-profit investment returns.
When we discuss measurement, let’s be sure to remember that we must measure the right things, not those that are easiest to measure.
So I want to see lists like the one Barron’s has created. But not if the ranking is purely subjective. Without a strong, transparent methodology behind the ranking, there is no way for the list to actually affect behavior. To get higher on the Slate 60, you need to give more. But how might a donor not on the Barron’s list adjust their behavior so as to make the list next year? The article offers limited clues.
I’m bummed to offer a negative take on this story. I’ve advocated for this sort of list many times in the past. But if we are going to publicly rank donors based on effectiveness, we need to do it in a way that is transparent enough that we begin to set a bar for donors to attempt to clear. Otherwise, we risk making donors shrug off the need to be effective, since there are no obvious guideposts for what the concept actually means.