Someone at Google.org read my posts (here and here) about nonprofit information being available within Google Finance and invited me to meet with them early next month. If you have any thoughts you’d like me to share with them, shoot me an email or leave a comment.
Checking back on the Red Cross discussion I started on the Google Finance discussion board, I found a new reply. I commend the Red Cross employees who have taken a shot at my question of how they know if they are effective, but I’m a little shocked that so far the organization has been unable to provide even minor information related to whether they do a good job. “Hard-Nosed Philanthropy” is on the rise; nonprofits need to be able to answer a simple question like “How do you know if you’re being effective?”
Does anyone know of any other discussions on Google Finance nonprofit discussion boards? Here’s the new reply from the Red Cross:
From: ike.pig…@gmail.com Date: Wed, Dec 26 2007 7:36 am
Hello — this is Ike, and I am a regional communicator for the Red Cross. I stumbled across this over the holiday break.

I understand what you are talking about with regard to our internal measure of “effectiveness.” Unfortunately, you’re asking us the equivalent of choosing a favorite child.

Such a metric would be arbitrary, and could be easily fashioned to highlight whichever line of service we wished to justify. In doing so, number-crunchers would ask the question “Why in heck is Red Cross involved in things that AREN’T as high-payoff as _______?” Just look at the numbers. Why be involved in disaster relief when blood provides the higher “impact”? Or vice versa?

We’re dealing with two different dynamics here. As a large multi-purpose humanitarian organization, we’ve got a tradition of being involved in a number of different activities: disaster, blood, service to the armed forces, preparedness, first aid/safety, and some of the international initiatives Maura described. Whether we like it or not, there is a significant slice of America that expects the Red Cross to play a role in each of those arenas. Public expectation drives part of our mission. In some circumstances, we have made a promise to be there (like immediate disaster relief). In others, we end up getting involved because people think that’s what we’re supposed to do, and no one else is stepping up (like the Safe and Well website partnership).

The second dynamic is our volunteers. Some only have an interest in disaster. Some only want to teach first aid classes. Some want to volunteer to drive needed units of blood from the storage centers to the hospitals. As a volunteer-led group, we’d alienate so many people who are truly volunteering their time to make it all work.

Are you really asking us to pick the one most effective line of service, and do that to the exclusion of the rest? Because applying a universal metric to all the lines of service is an invitation to start feeding some and starving others. That would be akin to comparing the costs of helping 10 families in an apartment fire versus 10 single-home families spread out on different nights. Yes, one is more “cost-effective.” That doesn’t mean it’s time to abandon the rest.

I think the key element you are dancing around here is the way we handle donations. If someone wants to donate just to local fires in their local chapter jurisdiction, we can ensure that happens. If someone wants to donate just to Services to Armed Forces, their wishes are respected and followed through. We look at the business model of each of those lines differently, asking first “Are we meeting this mission?” and “Can we meet it more efficiently another way?”

From: sstannard-stock…@ensemblecapital.com Date: Thurs, Dec 27 2007 8:28 am
Thanks so much for jumping into the conversation. I’m not asking you to choose anything. I’m just asking how the Red Cross tracks whether you’re doing a good job.

For example, at my firm, Ensemble Capital Management, we look at hard numbers like revenue growth, assets under management, and assets per client. We also look at softer measures like visibility in the media and online, depth of relationships with referral sources, and client satisfaction. You can put good numbers on the first set, but not on the second.

All I’m asking the Red Cross is: how do you know if you are doing a good job? What do you track? And how do you compare yourself? For instance, what if I asked you why my money could do more good donated to you than to another similar organization, or even to FEMA? If an investor or prospective client asked me why I thought Ensemble was a better investment or firm to hire than our competitors, I could speak to the issue for hours, citing both hard data and soft qualities. I’m just asking the Red Cross the same question.
8 Comments
Perhaps it’s up to us, professional consultants working within the non-profit sector, to determine what the appropriate metrics are. We tend to straddle the worlds of philanthropy and business and therefore are generally more comfortable with the rigors of business reporting than many of our clients. Asking non-profits to self-report seems like it would yield uneven results at best – which ultimately would defeat the purpose of generating consistent data for donors.
Beyond the obvious – revenue, overhead, donor retention ratios, program growth, etc. – what would you and the rest of the TP readers like to see included in a standard set of metrics?
One item I’d offer: a measurement of “average cost of impact” – in other words, the organization’s total budget divided by the total number of people (or animals, or acres of land) it’s benefited within a specific time period. That metric would (1) force each organization to provide a definition of how it helps people (etc.) – and (2) force it to account for all the costs associated with providing that help.
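A sketch of the formula Leyla describes, with the time period T and the definition of a “beneficiary” left as choices each organization would have to make explicit:

\[
\text{average cost of impact} = \frac{\text{total budget for period } T}{\text{number of people (or animals, or acres) benefited during period } T}
\]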
Other thoughts?
Leyla Farah,
Cause+Effect – Public Relations with a Purpose
I do think that consultants play an important role in this debate, and that self-reporting is a critical step, but self-reporting is not enough by itself. Public for-profit companies self-report, but the reports they release are heavily analyzed by professional analysts.
This might sound strange, but I don’t really believe we will ever find a simple set of standard measures. Even in the for-profit world, where everyone is producing an identical outcome (namely money), there are many approaches to analysis. You can’t say, “Well, Cisco’s return on invested capital is higher than GE’s, so Cisco is clearly better.” It just doesn’t work like that.
I think the framework of “average cost of impact” is correct, but it will never be something that can be measured with precision. That’s OK; humans still evaluate many things that can’t be precisely measured. Is War & Peace a better book than some trashy romance novel? Can you show me stats that measure this, or come close to proving it?
I think this whole debate is key to the development of the sector. Glad to have you in the conversation Leyla.
Paradise Lost versus Gone with the Wind: what metrics do we use to determine which is better? Some subject matter requires judgment, taste, discernment, even wisdom. We have movie critics, book critics, and educators to help us make more discriminating judgments. Before we cry ourselves hoarse over metrics, we have to ask whether philanthropy is more like art or more like business. The call for metrics can be a bullying move by the half-educated to impose their MBA logic on a sector whose reason for being is that it stands in contrast to both government and business. As the old saying goes, “Do not attempt to cure what you do not understand.” Stressing metrics, Sean, is in terrible taste. You paint yourself as a Barbarian.
Metrics can be misplaced. Some things cannot be quantified.
But Phil, couldn’t you write up a paper comparing and contrasting Paradise Lost and Gone with the Wind and point to why you personally thought one was better than the other? Or at least to the ways in which each one is different, and the strengths and weaknesses of each? That’s what I’m asking for. I think the cop-out is saying “Gone with the Wind is about such an important topic” (which is the same as a nonprofit citing the importance of its cause) or “I just know it is great; you have to read it yourself” (which is the same as many nonprofits simply saying that they “know” they are doing a good job, or telling you to come volunteer so you’ll get it). Evaluation does not require metrics, but however it is done, it is critical.
If I may carry the Paradise Lost vs. Gone with the Wind analogy a little further, I think it raises some interesting points.
The first is that there are plenty of potentially relevant metrics with which one could back up a claim for each work’s superiority: their longevity in years, the number of universities that include them in introductory freshman humanities courses (as a proxy measure of their centrality to our cultural canon), a RottenTomatoes.com-style survey of critics. I can even imagine poor grad students counting allusions to them in last year’s bestsellers.
Relying solely on any one of these potentially valid measures, however, would obviously leave you wide open to criticism for the flaws of your methodology and the limits of the analysis. To construct a strong argument for your preferred choice, you could use both the metrics and qualitative measures. The same goes for nonprofits – the measures are neither perfect nor complete, but that is not the same as nonexistent.
I think the other point is the difficulty of comparing apples and oranges. Let me reframe the question as “Paradise Lost” the work of literature vs. “Gone with the Wind” the work of film. Both are widely considered seminal works in their mediums. It’s not hard to imagine metrics, like those above, that could easily distinguish each as a leader within its respective medium. It is much harder, however, to compare them convincingly across mediums. An author and a film buff might reach very different conclusions about which one matters more in today’s culture; their distinctive values and tastes will influence that decision.
The same, I think, is true for nonprofits. Too universal a measure, like “average cost of impact,” might not be helpful for deciding whether a great afterschool program in New York or a great community health program in Uganda is better; the costs and the measures of impact are on different scales. But metrics certainly might help you identify each as the seminal nonprofit within its own field. From there, your values and tastes can guide your choice.
Staffer – I would actually disagree that “[t]oo universal a measure like ‘average cost of impact’ might not be helpful for identifying whether a great afterschool program in New York or a great community health program in Uganda is better.”
If the cost of a 1% increase in the high school graduation rate in NY is $1M, and the cost of a 1% increase in live births in Uganda is $1M (both hypothetical figures, of course), I think that donors would actually be able to make an informed choice about how to spend their dollars – depending on their personal convictions. Without this type of universal measurement, it’s all just a shot in the dark, right?
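To make the comparison concrete using Leyla’s hypothetical $1M figures (the $100,000 donation size below is an illustrative assumption, not hers):

\[
\frac{\$100{,}000}{\$1{,}000{,}000 \text{ per percentage point}} = 0.1 \text{ percentage points of improvement, in either program}
\]

Since the same dollars buy the same marginal outcome in both places, the choice between graduation rates in NY and live births in Uganda rests entirely on the donor’s values, which is exactly the informed choice Leyla describes.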
Leyla Farah,
Cause+Effect – Public Relations with a Purpose
Please note that I’ve moved this discussion to a new post. Would really like all of you to continue adding your thoughts. The new post is here.
There are positions on nonprofit metrics for every taste, yet we have found the menu to be so confusing and divisive that many lose their appetite. The discussion on metrics becomes productive only when the conversation shifts from one of being threatened, compared, or evaluated to one of using outcomes (and their values) as a tool. We feel the highest and best use of that tool is in ensuring sustainable funding.
The bottom line on this discussion is that there will never be a universal metric to measure impact, effectiveness, or whatever term one prefers. Can metrics be used improperly? Yes. Can they be manipulated? Yes. Can they be difficult to quantify and communicate? Yes. But these possibilities do not outweigh the benefits of a well-designed and well-implemented program for demonstrating a nonprofit’s value.
Our position in this confusing arena does not come from a philosophical or academic perspective: it comes from the trenches of helping nonprofits get the funding they deserve. Those nonprofits that make it easier for funders to connect the dots to their respective outcomes deserve to have capital flow to them. Those that choose not to go down this path do so at their own financial risk.