Charity Navigator’s Vital Mission Hides Flawed Rankings

This entry to the One Post Challenge comes from Michael Soper. Michael is President of TeamSoper and the Development & Marketing Management Corporation. He was formerly Senior Vice President, Development at PBS and WETA.

By Michael Soper

Strong Marketing of a Weak Success Measure: Charity Navigator’s Vital Mission Hides Flawed Rankings

Everyone wants to figure out how to evaluate nonprofits: grantmakers, donors, volunteers, journalists, and nonprofit leaders.

Individuals who contribute and the nonprofits that use those funds to provide vital services would both benefit from rankings of effectiveness and efficiency. Such evaluations would encourage nonprofits to constantly improve their performance and allow funders to make smarter investments.

Those were the driving motivations behind the creation of Charity Navigator and other nonprofit rating / ranking services. Yet if you believe Charity Navigator or others have found the holy grail of evaluating nonprofit organizations, you’re sadly mistaken, and I encourage you to keep reading.

For-profits have one thing nonprofits do not: a clear set of financial measures of success. Across for-profits, it is relatively easy to measure and compare profits. Yet it is much tougher to measure effective and efficient service, the mission and goal of all nonprofits.

Charity Navigator’s rankings are the result of gross oversimplifications. For example, Charity Navigator’s:

  • Evaluation process begins and ends with creating ratios from almost any two numbers found in nonprofit organizations’ IRS Form 990 (see the sketch after this list).
  • Presumption is that all nonprofits complete their IRS Form 990s in the same manner, using precisely the same definitions of what income and expenses are reported in response to a given question on Form 990.
  • Ratings do not include an “affirmative confirmation” from nonprofits’ top management to guarantee the accounting basis of specific figures or that the resulting ranking is both correct and fair.
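To make the first point concrete, here is a minimal sketch in Python. The figures are entirely made up, and the ratio shown is a generic one rather than Charity Navigator’s actual methodology; the point is only how sensitive any 990-derived ratio is to classification choices:

```python
# A generic "program expense ratio" of the kind derived from Form 990 figures.
# All numbers below are hypothetical, purely for illustration.

def program_expense_ratio(program, fundraising, admin):
    """Share of total expenses reported as program services."""
    return program / (program + fundraising + admin)

# Two nonprofits doing identical work: $1M in total expenses, including a
# $200K educational mailing that doubles as a fundraising appeal.
# Org A classifies the mailing as a program expense; Org B as fundraising.
org_a = program_expense_ratio(program=800_000, fundraising=100_000, admin=100_000)
org_b = program_expense_ratio(program=600_000, fundraising=300_000, admin=100_000)

print(f"Org A: {org_a:.0%} to programs")  # Org A: 80% to programs
print(f"Org B: {org_b:.0%} to programs")  # Org B: 60% to programs
# Same money, same work, yet a ratio-based rating sees a 20-point gap.
```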

Imagine a nonprofit institution raising capital funds for a new facility. Simply looking at the IRS Form 990 could lead one to believe the organization had a dramatic increase in revenues. That’s good news for the nonprofit, unless, for example, Charity Navigator decides to use that year as a “base year” against which to evaluate future years’ revenues. Future ratings and rankings could then show the nonprofit in decline as a result of the apparent decrease in revenue.
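Here is a toy illustration of that base-year problem, again with invented figures:

```python
# Hypothetical revenue history for a nonprofit that ran a one-time
# capital campaign in 2005 to build a new facility.
revenues = {
    2004: 2_000_000,   # typical operating year
    2005: 10_000_000,  # capital campaign year
    2006: 2_200_000,   # back to normal, and actually up 10% over 2004
}

base_year = 2005  # a rater anchoring on the campaign year
for year, revenue in sorted(revenues.items()):
    change = (revenue - revenues[base_year]) / revenues[base_year]
    print(f"{year}: {change:+.0%} vs. {base_year}")
# 2006 prints as a -78% "decline," even though ordinary operating
# revenue grew from 2004 to 2006.
```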

Or consider how you would rate a nonprofit whose mission is to care for people during and immediately following natural disasters. Funding for the organization is like winding a clock-spring. All of the investments in infrastructure are “waste” if there are no disasters.

On the other hand, if there is a disaster and the expensive infrastructure doesn’t exist, the organization will fail to react instantly when conditions demand nothing less. Only when the spring is wound can the organization deploy resources and services when they are most needed.

These situations have me reflecting on the age-old loaded question, “Are you still beating your wife?” Charity Navigator’s ratings, rankings, and top ten lists are all presumed to be true as published until and unless they are challenged by a nonprofit that was damaged by an overly simplified ranking system that is not based on an apples-to-apples comparison.

In my view, Charity Navigator, its ratings, and its top ten lists are nothing more than great merchandising of a weak underlying product.

For example, wouldn’t any donor or nonprofit be interested in the following “Top Ten” Charity Navigator lists?

  • 10 of the Best Charities Everyone’s Heard Of
  • 10 Highly Rated Charities Relying on Private Contributions
  • 10 Charities Routinely in the Red
  • 10 Charities Stockpiling Your Money
  • 10 Charities Expanding in a Hurry
  • 10 Charities in Deep Financial Trouble

These lists, attractive as they are, take the “National Enquirer” approach to a topic that demands more substantive evaluation of nonprofits’ effectiveness and efficiency. It is fascinating, then, that Charity Navigator’s own recommendations for evaluating nonprofit institutions it does NOT rate (listed below) far more closely reflect the time, the questions, and the interaction with nonprofits required to evaluate effectiveness and efficiency, even though that approach is not as quick or easy as Charity Navigator’s overly simplistic rankings.

Charity Navigator’s Suggestions On Evaluating Nonprofit Success

  1. Can your charity clearly communicate who they are and what they do?
  2. Can your charity define their short-term and long-term goals?
  3. Can your charity tell you the progress it has made (or is making) toward its goal?
  4. Do your charity’s programs make sense to you?
  5. Can you trust your charity?
  6. Are you willing to make a long-term commitment to your organization?

I give up! If these are the questions that Charity Navigator recommends you and I ask of nonprofits, why don’t they use these same questions themselves?

I can think of four answers:

  • It would require an enormous investment of time and money to gather the answers.
  • Even if answers to the above questions were collected, they don’t lend themselves to numeric ratings.
  • Without numeric ratings, it is next to impossible to produce apples-to-apples rankings.
  • And finally, without low-cost, easy-to-produce nonprofit rankings, there is no Charity Navigator.

Regardless of a nonprofit’s size, rating its service is complicated and highly subjective. The list provided above by Charity Navigator is a good starting point for discussions with a nonprofit’s leadership, top management, and key professionals.

But herein, amid all this complexity and subjectivity, lies the beauty of making an individual decision to support a specific nonprofit organization.

Over time, you learn about the organizations dealing with the causes you care about most, and you become passionate about their missions.

After all, if it were that easy to determine the most successful nonprofits, everyone could invest in nonprofit mutual funds, and fund managers would invest only in those organizations whose services are rated at the top of the list in terms of effectiveness and efficiency.

Large donors and small donors. Very well funded and not-so-well-funded nonprofits. In all of these cases, half of the fun of investing in nonprofits, of giving away your hard-earned cash, is learning about the similarities and differences among the half-dozen organizations meeting needs you believe are essential.

In the end, a significant contributor has only two good options.

  • They can become involved, get engaged, and learn about the organizations they fund.
  • Or, they can simply hire me (just teasing) – or any other consultant with substantial hands-on, nonprofit experience, to collect information on nonprofits of interest and provide them with a thoughtful narrative report.

My real concern is that Charity Navigator’s rankings appear to be so powerful and easy to use that:

  • Individuals will fail to take the time and gather the information needed to determine which ratings may be solid and which are gross oversimplifications or just plain wrong.
  • Potential contributors will simply discard a deserving nonprofit from their list of giving priorities.
  • Donors will fail to use the rankings provided by Charity Navigator as one of many topics to discuss with the top management of the nonprofits that interest them.

Charity Navigator’s current ratings, rankings, and practice of publishing the “truth” until proven otherwise (a practice shared by other data aggregators / information providers) fail potential donors, some nonprofits, and Charity Navigator’s own mission.

I’m not suggesting that every poor rating of a nonprofit by Charity Navigator is incorrect or undeserved. I am urging the nonprofit industry to create better measures and / or methods of evaluating nonprofits’ mission-driven services in terms of effectiveness and efficiency.

Until that time, I would urge all nonprofits to be open and accountable and all current and potential contributors to become more involved with and knowledgeable about the nonprofits engaged in the causes that interest them the most.

20 Comments

  1. Amen! But why stop at Charity Navigator? The same goes for other numbers-based ranking systems such as AIP, and for those in which numbers are a factor, such as the Wise Giving Alliance.

    What’s more, even the number-crunching watchdogs can’t agree on how to rank the charities.

    The Christian Science Monitor recently published the rankings of the 50 largest US charities, and included the watchdog ratings for each. Here are some samples …

    FEED THE CHILDREN
    AIP: F
    CH.NAV.: **** (highest rating)
    WGA: Y (presumably a “yes”)

    BOY SCOUTS OF AMERICA
    AIP: A+/B
    CH.NAV.: **
    WGA: Y

    MEMORIAL SLOAN-KETTERING CANC.CTR.
    AIP: A
    CH.NAV.: N/A
    WGA: N (presumably a “no”)

    So, what’s a donor to do? Michael laid out the options, none of which are terribly practical unless you are a donor with the means — or knowledge — to do your own assessments.

    Most, however, will simply stick with the Buds and Millers of the charity world, or rely on somebody — anybody — giving them a magical ranking to justify their charitable choice.

    Unless the “microbreweries” of our sector start to do a better job of communicating their unique and worthy attributes, not only will they never gain the attention of donors, they won’t even make it on a ratings list.

    And those of us who have the platform and the voice to do so should be doing a better job of debunking the value of 990-based ratings.

    Thank you, Michael, for putting a spotlight on one of our sector’s pink elephants!

  2. Renata, your samples from the Christian Science Monitor story were perfect. They’re simple and powerful illustrations of how abused 990-based ratings have become.

    You’re right, there are more pink elephants than Charity Navigator — although I feel their merchandising of 990-based ratings (Top Ten Lists) puts a second strike against them.

    This is not the classic “win-win” situation. Those organizations large enough to be evaluated risk being rated unfairly or misjudged. Smaller organizations don’t get rated and fail to attain any visibility.

    To me, one sign of a well-managed nonprofit is that it can summarize its mission, and how it measures its achievements in support of that mission, in just a few hundred words. Nonprofits would all benefit from documenting their own “proof of performance.”

    As Sean pointed out, larger nonprofit organizations may be able to create numeric measures of their success. Smaller nonprofits can convey powerful stories about their impact.

    In short, every organization must “do good,” “signal (communicate) that good” so as to be deserving of support, and, therefore, “fund good.”

    Only when current and potential donors understand the limitations of 990-based ratings will they devote themselves to what I believe is the only practical approach remaining: to gather information from the organization itself and / or from other contributors whose opinions they trust.

  3. tom belford says:

    Right on Michael Soper! Wish I wrote it. Happy to second your assessment on Wednesday’s Agitator.

  4. Nick S says:

    While I feel a lot of people have already pointed out the pitfalls of 990-based evaluation, I am glad to see it reinforced so eloquently. Part of my job is evaluating potential grantees, and I have seen so many different ways of accounting on 990s that I no longer place much stock in them. An audit is a better tool, but it still falls short of really measuring a charity’s success.

    Interestingly, one of the other non-profit blogs I read most often is that of Trent Stamp, the CEO of Charity Navigator. I think he does a good job on his blog of highlighting some of the most egregious violators of public trust. These organizations are easily spotted through their 990s, with their grossly oversized fundraising expenses. I hope he sees this post and takes the time to respond to your criticisms.

  5. Charity Navigator does have great suggestions for how to evaluate nonprofits that they don’t rate. That was a fascinating discovery I made when looking at their site while writing my original post.

    Of course, CN’s and others’ rating systems are flawed at their base: 990s contain substantial amounts of “interpretation,” as Sean points out.

    I too would applaud hearing from Trent Stamp / CN regarding their reactions to these posts and if / when / what they may be considering to improve their overall rating / ranking system.

    Clearly CN and others are responding to a huge need / desire to better understand nonprofit performance and separate the wheat from the chaff.

  6. Before writing my Financial Times column about Charity Navigator’s flaws I called and spoke with them and asked them to respond to my various points.

    Personally, I believe that CN has every good intention. They have done a huge service by shining a spotlight on the concept of rating charities. They have the brand recognition such that, if they worked on transforming their rating system into something that made more sense, they could continue to dominate the field, and I for one would celebrate their move.

    I think it would be great if Trent Stamp wanted to comment on this post. The atmosphere might seem a little harsh right now. But I for one would welcome his comments. We’ve all heard the arguments against CN. What are the arguments for their system?

  7. Nick T says:

    Interesting post, particularly given the debate in the UK kicked off by this post by New Philanthropy Capital in the Guardian.

    Also, because Intelligent Giving attempted a much more nuanced and sensitive approach here in the UK.

  8. a fundraiser says:

    The Nonprofiteer had a great post this past week based on a conference call by the Nonprofit Finance Fund. I recommend her post:

    http://nonprofiteer.typepad.com/the_nonprofiteer/2007/11/re-the-990.html

    But, I also wanted to point out a post I put up in October of last year about the critical moment in CN’s lifecycle.

    http://donttellthedonor.blogspot.com/2006/10/is-charity-navigator-going-bankrupt.html

    At the time, it seemed to me that CN was going to go bankrupt. Since then, it seems they have reached out to secure more individual donations… I wonder if that is one of the reasons they have needed to use more attention-grabbing headlines…

  9. Charity Navigator’s “Top Ten” lists may be popular and position CN as a brave “truth seeker,” but the flaws in their underlying rating system remain.

    I appreciate “a Fundraiser” passing along the Nonprofiteer post (great!) and comments on a critical moment in CN’s evolution. Perhaps CN turned to more aggressive marketing to avoid bankruptcy.

    However, every organization benefits from asking whether its marketing brings it closer to its mission. CN appears to have decided that its marketing does. I’m not so sure.

    Do funders want a critical review of nonprofits so badly that they are failing to critically review the methods and results of organizations promising those nonprofit ratings / rankings?

    While I continue to believe CN and all others who seek to evaluate nonprofit success will add depth and a diversity of content sources, I am reminded of the marketing maxim that “nothing kills a bad product faster than great advertising.”

  10. This post was featured on the Chronicle of Philanthropy website. That post generated a number of comments. You can read them here.

  11. Erich Riesenberg says:

    You write “I’m not suggesting that every poor rating of a nonprofit by Charity Navigator is incorrect or undeserved.”

    Of course not. However, can you point to one, two, or a couple dozen examples of undeserved poor ratings?

    Thanks!

  12. Erich Riesenberg says:

    I am referring to the “worst 10” lists, such as this one.

    http://charitynavigator.org/index.cfm?bay=topten.detail&listid=28

    If any of those are actually responsible, effective charities meeting their mission with a passion, it would be a bit of a scandal. I am confused why, in all the criticism of CN’s ratings, specific examples of incorrect ratings are not discussed.

    As a donor, the two main sources I am aware of for broad information on charities are Guidestar and Charity Navigator. The 990 may not be an ideal or even great source, but it is better than no source, and donors and nonprofits would be more effective if such sources were widely used. I think Trent Stamp’s blog is great; it empowers donors to be comfortable asking tough questions. Obviously, CN advises people to go beyond its star rating.

    Finding a worthwhile charity to support is time consuming. If there is a better way to do it, hopefully the industry will hop to it in the next decade or two or three…

  13. Erich,
    I think the worst lists published by CN are probably accurate. I would advise against giving anything to a 1-star-rated charity without significant due diligence. However, the real issue is whether a 4-star charity is more deserving of a donor’s money than a 2- or 3-star charity. For instance, as I wrote in my Financial Times column a few months ago, the new book Forces for Good examined the universe of nonprofits and profiled 12 case studies of excellent nonprofits. The authors found no correlation with strong CN ratings. In fact, CN rates one of their case studies, Habitat for Humanity, as “Meets or nearly meets industry standards but underperforms most charities in its Cause,” while the authors profile it as one of the highest-impact nonprofits in existence today.

  14. Erich: You’re right. The 990 is a good starting point, but, in my view, it’s a bad end point in rating a specific nonprofit. I hope you agree.

    Could I cite specific cases where Charity Navigator has under-rated a nonprofit? Sure, but the problem is not the specific failures as much as the combined flaws of a) the rating system itself, and b) the hype given to ranking nonprofits by those ratings.

    Sean raises an excellent point. Let’s presume that all nonprofits ranked at 1 star are questionable.

    Note: Even these ratings could be in error because CN makes no affirmative effort to have the rating reviewed, commented upon, or ignored by the nonprofit itself.

    If Charity Navigator’s rating and rankings are good measures of nonprofits’ success, you could trust them enough to wisely invest in those institutions (within your area of interest) that receive the greatest number of stars.

    Even if you are willing to discount all the 1-star-ranked organizations, the lack of uniform reporting on 990s will make it almost impossible to discern the differences between 2 stars and 3 … or between 3 stars and 4.

    Until Charity Navigator and others develop a more comprehensive and diverse view of nonprofits than the 990 provides, perhaps individual donors should take it upon themselves to ask nonprofits (in their areas of interest) why they received less than a 4-star CN ranking.

  15. Erich Riesenberg says:

    It is quite a leap from: “In my view, Charity Navigator, its ratings, and its top ten lists are nothing more than great merchandising of a weak underlying product.”

    to: “Sean raises an excellent point. Let’s presume that all nonprofits ranked at 1 star are questionable.”

  16. I’m a veteran marketing researcher, business analyst, strategy person for Fortune 500 companies now dizzy from my first whiffs of air at the top of Maslow’s hierarchy (self-actualization plying my trade in ways that hopefully, eventually, somehow will do good).

    Any useful analysis of organizational performance requires both a qualitative and a quantitative component. One possible approach to the nonprofit evaluation challenge might be to marry the quantitative analyses with various sources of qualitative detail: permitting the rated organization to comment on and explain its own ratings, and integrating with a social networking site dedicated to unfettered feedback on programs and organizations from donors, the nonprofit community, and nonprofit clients. These qualitative responses could then be put through technological or human filters (I see Google foundation resources involved here) to deliver semi-quantitative reports and qualitative summaries that complement the numbers. Still not an ideal alternative to on-the-ground, firsthand interaction and investigation, but maybe a step better than what’s currently available in aggregate.
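    As a toy sketch of what that blending might look like (every weight, scale, and function name below is invented purely for illustration):

```python
# A toy blend of a 990-derived score with crowd-sourced qualitative feedback.
# All weights, scales, and names here are invented for illustration only.

def blended_score(ratio_score, feedback_scores, org_response_credit=0.0):
    """Combine a 0-100 quantitative score with 1-5 stakeholder feedback."""
    qualitative = sum(feedback_scores) / len(feedback_scores) * 20  # rescale to 0-100
    raw = 0.5 * ratio_score + 0.5 * qualitative + org_response_credit
    return max(0.0, min(100.0, raw))  # clamp to the 0-100 range

# A charity with mediocre 990 ratios but strong donor and client feedback,
# plus a small credit for publicly explaining its own numbers:
print(blended_score(ratio_score=55, feedback_scores=[5, 4, 5, 4],
                    org_response_credit=5.0))  # -> 77.5
```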

    Just thoughts…

  17. Thanks to Lauren Romero for her terrific comment above. I couldn’t agree more about the need for qualitative and quantitative parts to the analysis and possible “rating” of any nonprofit.

    I hope her comments help stimulate the nonprofit industry to develop one or more solid measures and to include a nonprofit’s questions, clarifications, and comments in the process.

    When better and more reliable qualitative and quantitative analysis is available — and not before — the industry may be prepared for “ratings” and “top ten lists.”

    Other reactions?

  18. Lauren, I agree completely. Have you read the posts and extensive comment conversation around this issue and the role Google.org might play? You’ve commented on an old post here, so I’m guessing you haven’t seen the more recent conversation. You should check out these posts:

    What to Measure and Why in Philanthropy

    Part Two

  19. Greg Timmons says:

    I ran into this post by accident not ten minutes after spending some time on Charity Navigator.

    I was there because I am on their mailing list. I am on their mailing list because I have been interested in and watching their progress for a number of years now.

    As the Director of a relatively young non-profit organization (Orphan’s Lifeline), I have struggled for a long time with frustration over the rating systems out there for non-profits.

    There are just so many things wrong with every one of them I have looked at that I don’t even know where to begin.

    But…since the comments here were driven by an article on Charity Navigator, I will comment from that standpoint as well.

    To begin with, I wholeheartedly agree that the 990 is a flawed source for rating or ranking. I would compare it to using math formulas to project when a glacier will melt based on readings from 10 different models of equipment that measured the temperature of the glacier in 10 different locations at 10 different times of the year.

    If the original data used in any measurement is not consistent and controlled in nature, the formulas using that data to find the resulting x are flawed beyond measure. There is quite possibly no place worse to find consistent data than the 990. Creative accounting and liberal use of generally accepted accounting principles versus straightforward accounting and conservative use of accounting practices create a vast disparity across the spectrum of 990 results.

    But that is just the beginning. It is relatively easy for a non-profit org. that has a billion dollars in revenue to look good under this type of methodology. By contrast, it can be very difficult for an org. with revenues of 1% of that volume to look very good at all.

    One big factor is the nature of the mission. The mission itself may be one that requires heavy management from day one. It may also require a heavy volume of cash grants versus program expenses that encompass a massive payroll, for example. It is almost impossible to compare two such organizations using data from a 990.

    Then there is the measure of effectiveness and efficiency: two abstract comparables that won’t be captured by any 990. One LARGE organization may be able to show 95% of their funds going to programs that feed children. But perhaps they do not have any due diligence or follow-up and can’t even prove that children were fed. They might spend 5 million dollars on food and never feed a single child. Another, smaller org. might spend one hundred thousand on food to feed 5,000 children, but delivers it themselves and watches the children eat it, taking pictures and documenting the entire process. They can prove they fed 5,000 children, whereas the other org. can’t prove it even fed one.

    But…the first org., by the numbers, might have the advantage of size on its side…showing huge dollars going into programs against a seemingly modest administrative overhead, whereas the smaller org., with smaller numbers, has a higher relative administrative cost ratio by nature.

    In this scenario, the smaller org. is far more effective, efficient, and responsible with its donors’ money, but the numbers alone won’t show it, and in fact work against it, perhaps even earning it a low rating, resulting in slow growth, which in turn perpetuates that lower rating.
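    To put rough, entirely hypothetical numbers on that comparison (a sketch, not data from any real organization):

```python
# Hypothetical figures illustrating the size bias described above:
# (program spending, admin spending, children verifiably fed)
orgs = {
    "Large org": (5_000_000, 250_000, 0),    # high ratio, impact unproven
    "Small org": (100_000, 25_000, 5_000),   # lower ratio, documented impact
}

for name, (program, admin, fed) in orgs.items():
    ratio = program / (program + admin)
    impact = f"${program / fed:,.0f} per child fed" if fed else "impact unproven"
    print(f"{name}: {ratio:.0%} to programs, {impact}")
# Large org: 95% to programs, impact unproven
# Small org: 80% to programs, $20 per child fed
# A 990-based formula rewards the first line; only the second documents results.
```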

    I know that was a long way around to get to my point, which is that there are currently NO trustworthy sources for ranking non-profits on their efficiency in fulfilling their stated missions. The variables involved go far beyond a mathematical formula.

    Right now, the best a donor can do is their own bit of due diligence…but faulty sources out there provide an easy way for them to get bad information, and being busy in their own lives, they will unfortunately take the easy route in most cases…perpetuating the problem for many non-profits and lending even more credence to a flawed system of rating and ranking.

  20. Thanks, Greg. I would just note that this post is almost two years old. Charity Navigator is today working hard to address many of the concerns raised in the post.