Last month the Wall Street Journal’s “The Numbers Guy” columnist Carl Bialik wrote about charity evaluators. Today he writes about the rating systems of movie critics. I was thrilled to see his follow-up because in March of 2007 I wrote a post about what charity evaluation can learn from movie critics.
In his column, Bialik writes:
Today, the star system is ubiquitous but far from simple for critics who must fit an Oscar hopeful and a low-ambition horror movie on the same scale. Even those critics who don’t assign stars or grades find their carefully wrought opinions converted into numbers — or a thumbs up or thumbs down — and mashed together with other critics’ opinions. Critics tend to loathe the system and succumb to it at the same time. It all makes for an odd scale that, under the veneer of objective numerical measurement, is really just an apples-to-oranges mess.
…Upon its debut in 1990, Entertainment Weekly also compiled critics’ judgment in numerical form, and required all its own critics to attach letter grades to their reviews. “Fellow critics told me they hated the system, because it would mean that readers wouldn’t read their entire review,” says Jeff Jarvis, founding editor of Entertainment Weekly, who is no longer involved with the magazine. “I said that I thought we owed them that favor; readers are busy.”
…The New York Times, like The Wall Street Journal, doesn’t assign stars. “We don’t seek to reduce our arguments about a particular piece of art to a number, or letter grade, or golden spatulas, or whatever,” says Sam Sifton, the Times’ culture editor. “These are numbers that aren’t based on any rational or countable thing.”
…The star system, however imperfect, has become entrenched beyond film with the advent of online reviews, complete with stars, for everything from cardiologists to consumer-electronic devices to contractors. Michelin has been assigning stars to restaurants, and teachers have been boiling down months of student work into a single grade since before the New York Daily News instituted its three-star system. “I teach at Columbia, and I’m required to assign a grade,” says Mr. Lopate. “It’s equally unscientific, but if those are the terms of employment, then I do it.”
In my comparison between movie reviews and charity evaluation I wrote:
“Not everything that counts can be counted, and not everything that can be counted counts.” (Sign hanging in Einstein’s office at Princeton)
Quick, which movie is better?
Casablanca or Look Who’s Talking, Too? The Godfather or Bio-Dome with Pauly Shore? Star Wars or Grease 2?
How do you know? What metrics are you using? I can’t think of any scoring system or appropriate metric to rate these movies, yet I KNOW that the first choice is a better buy at the rental store.
This country has a robust movie rating system that is entirely qualitative. We have an enormous network of professional movie critics who make their ratings public and provide detailed commentary on why they like or dislike each film. There is no reason why a similar system could not be developed for charities.
I think the focus on quantitative rating systems has to do with the large number of nonprofits. How could a start-up rating system even begin to offer educated opinions on the over 1 million 501(c)(3) nonprofits in this country? I don’t think it has to. An effective program could start by rating the largest nonprofits. If successful, I think you would quickly see new startups rating “The best small nonprofits”, “The best Christian nonprofits”, “The best nonprofits to save the environment”, etc.
The system doesn’t have to be limited to professional critics. I love the site IMDb.com, which aggregates all sorts of movie reviews from both pros and everyday movie fans. There is certainly a place for a user-generated content platform where donors could talk about their own experiences with nonprofits. As we know from film criticism, moviegoers and movie critics often like different films. However, we rarely see really bad movies doing great at the box office.
So enough with administrative expense ratios. Enough with the focus on the salary of charity officials. I want to know which nonprofits are any good and I don’t think there’s any number you can show me that will answer that question. To paraphrase Einstein’s sign, “Just because you can count the amount spent on overhead doesn’t mean it’s important and just because you can’t count the amount of good done by a nonprofit doesn’t mean that’s not the single most important thing for me to know.”
So who’s going to do this? I hope Perla Ni is up to the task…
Bialik’s column is so useful because it demonstrates how the quantification of an inherently qualitative exercise can bring some benefits, but also has some serious drawbacks. See the debate that my movie critic/charity review analogy set off, along with the refinements I made to my argument here.