Rating Charities: A Qualitative Approach

"Not everything that counts can be counted, and not everything that can be counted counts." (Sign hanging in Einstein’s office at Princeton)

Quick, which movie is better?

Casablanca or Look Who’s Talking, Too?

The Godfather or Bio-Dome with Pauly Shore?

Star Wars or Grease 2?

How do you know? What metrics are you using? I can’t think of any scoring system or appropriate metric to rate these movies, yet I KNOW that the first choice is a better buy at the rental store.

This country has a robust movie rating system that is entirely qualitative. We have an enormous corps of professional movie critics who make their ratings public and provide detailed commentary on why they like or dislike each film. There is no reason why a similar system could not be developed for charities.

I think the focus on quantitative rating systems has to do with the sheer number of nonprofits. How could a start-up rating system even begin to offer educated opinions on the more than 1 million 501(c)(3) nonprofits in this country? I don’t think it has to. An effective program could start by rating the largest nonprofits. If successful, I think you would quickly see new startups rating “The best small nonprofits,” “The best Christian nonprofits,” “The best nonprofits to save the environment,” and so on.

The system doesn’t have to be limited to professional critics. I love the site IMDb.com, which aggregates all sorts of movie reviews from both pros and everyday movie fans. There is certainly a place for a user-generated content platform where donors could talk about their own experiences with nonprofits. As we know from film criticism, moviegoers and movie critics often like different films. However, we rarely see really bad movies doing great at the box office.

So enough with administrative expense ratios. Enough with the focus on the salaries of charity officials. I want to know which nonprofits are any good, and I don’t think there’s any number you can show me that will answer that question. To paraphrase Einstein’s sign: “Just because you can count the amount spent on overhead doesn’t mean it’s important, and just because you can’t count the amount of good done by a nonprofit doesn’t mean it’s not the single most important thing for me to know.”

So who’s going to do this? I hope Perla Ni is up to the task…

7 Comments

  1. Holden says:

    I can tell you how good a movie was because I’ve seen it. As long as you know me and think our tastes aren’t horribly mismatched, my opinion of the movie is evidence, because it’s based on the most relevant evidence there is: first-hand experience.

    What if movies were encrypted such that no one (and I mean *no one*) could actually see them? What if the only way to get any idea of what happens in a movie were to do an extremely difficult, extremely costly study that in the end could never be conclusive? And what if the expense and difficulty of these studies meant that they were rarely carried out, and that when they were, they were carried out by people concerned about “turning people off from Hollywood,” who therefore filled the studies with gushing optimism beyond what they’d actually studied? What if most people guessed at how good the movies were using only the reputation of the director (a reputation that itself was built on movies that no one had seen) and 2-sentence plot summaries?

    Then, Hollywood would pretty much suck … but the analogy to charities would be much more accurate.

    There is a part of your post that I agree with. Not everything has to be quantified. If you can show me that $1000 will save a child, I can check my gut to see if that’s a good deal, just like I can check my gut to see if I enjoyed a movie (once I’ve actually seen it). But without ANY formal analysis, I simply have nothing to evaluate – no experience, no evidence, nothing except that dang Form 990.

    The fundamental difference between charities and consumer products such as movies and restaurants is that the mere fact of being a charity’s customer doesn’t mean you know ANYTHING about it. If we want to separate the good charities from the bad, I simply see no way around doing difficult, costly, painful, formal evaluations.

  2. Laura Quinn says:

    I have deeply conflicted thoughts about this. On the one hand, I agree with you about administrative expense ratios and would go much further than you have. These are not only ineffective, they’re *damaging* to the sector. They create what Paul Light calls a “race to the bottom,” where nonprofits try to out-cheap each other on what they invest in infrastructure. When nonprofits feel it’s inappropriate to invest in technology, marketing, rational salaries, or even decent office equipment, they create environments in which they simply can’t succeed.

    On the other hand, if you can’t measure, in some way, the good you’re doing in the world, I would question what good you’re actually creating. This doesn’t mean summing it up into one tidy little figure – evaluation is a difficult science, but a far under-used one. Many in the nonprofit sector have a tendency to operate on an instinct that they’re doing good, and those instincts can definitely be wrong. I can’t find it right now, but there was a great study a little while back about some sort of child development program (mentoring, perhaps?) that sounded great but was shown, looking across many outcomes, to actually be *decreasing* the kids’ chances for the future (my recollection is that the hypothesis was that because the kids were identified for the program, they were given the feeling that they were “really far gone,” which contributed to them, well, becoming even further gone).

    My point is that it’s tempting to rely on a “gut check” for how useful something is, but guts can be wrong, and they’re certainly drawn to flashier programs rather than things that are just quietly working (one of my favorite quotes, from Esther Dyson: “Millions of people get hungry every day, even though it’s not very interesting.”). However, coming full circle, solid evaluation tends to be very expensive – yet another administrative expense – and has to be weighed against everything else an organization should spend money on.

  3. VanStokkom says:

    See my discussion with Bruce Sievers in the English magazine Alliance at:

    http://www.vanstokkom.nl/indepers/MightyMeasurablesAlliance0906.pdf

    Best regards,
    VanStokkom.

  4. Thanks so much for contributing to the conversation. My disagreement with your essay is that the core function of for-profits is objectively measurable to the decimal point. The core function of most nonprofits is not. However, I am not advocating that we skip evaluating outcomes, just that trying to force concepts like “people lifted out of poverty” or “raised awareness” into quantitative metrics is probably not the right approach.

    In my analysis I use film critics as an example. Certainly they evaluate films very extensively, but they do not try to return metrics to the reader. Instead they use language to review each film qualitatively.

  5. Laura, thanks for stopping by. I hear nothing but praise for IdealWare. I hope that my follow-up post refined my qualitative/quantitative point of view. I agree that there is a real push and pull between the two. I just think that metrics should be used in service of generating a qualitative evaluation rather than viewed as ends in themselves.

  6. Adam Martin says:

    Change.org allows people to “review” an NGO, and is powered by Guidestar, the largest database of U.S. nonprofits, as far as I know. This sounds like the IMDb of charities you were describing!

  7. Holden says:

    Only one problem: the reviews themselves. Look at them. They’re vague, impression-based, superficial … in a word, useless. In two words: useless. USELESS!

    My point about IMDb is that a good review isn’t about degrees or credentials. But in charity, unlike movies, writing a good review IS a heck of a lot of hard work. Social networking isn’t the miracle cure.