I’m meeting with someone from Google.org next week to talk about what kind of information I think they should make available about nonprofits in Google Finance, and about other ways that Google’s mission “to organize the world’s information” can be directed at the Third Sector.
In preparation, I’d like to spend some time speaking as a community about this issue. I encourage you to leave comments or email me your thoughts.
In response to the thread I started on the Google Finance Red Cross board about how effective they are, I got a comment from Leyla Farah of Cause + Effect public relations:
One item I’d offer: a measurement of “average cost of impact” – in other words, the organization’s total budget divided by the total number of people (or animals, or acres of land) it’s benefited within a specific time period. That metric would (1) force each organization to provide a definition of how it helps people (etc.) – and (2) force it to account for all the costs associated with providing that help.
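To make the arithmetic behind this proposed metric concrete, here is a minimal sketch of how “average cost of impact” would be computed. The organizations and figures below are invented purely for illustration.

```python
def average_cost_of_impact(total_budget, beneficiaries):
    """Total budget divided by the total number of people (or animals,
    or acres of land) benefited within the same time period."""
    if beneficiaries <= 0:
        raise ValueError("beneficiaries must be a positive count")
    return total_budget / beneficiaries

# Hypothetical organizations, used only to show the arithmetic.
orgs = {
    "Afterschool program (hypothetical)": (1_200_000, 800),
    "Bed-net distributor (hypothetical)": (5_000_000, 250_000),
}

for name, (budget, served) in orgs.items():
    cost = average_cost_of_impact(budget, served)
    print(f"{name}: ${cost:,.2f} per beneficiary")
```

Note that the two resulting ratios are not directly comparable, since “benefit” means something different in each program; that is exactly the apples-and-oranges problem raised later in this thread.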
While Phil Cubeta of Gift Hub scolded me for focusing on metrics:
Paradise Lost versus Gone with the Wind. What metrics do we use to determine which is better? Some subject matter requires judgment, taste, discernment, even wisdom. We have movie critics, book critics, educators to help us make more discriminating judgments. Before we cry ourselves hoarse over metrics, we have to ask whether philanthropy is more like art or more like business. The call for metrics can be a bullying move by the half educated to impose their MBA logic on a sector whose reason for being is that it stands in contrast to both government and business. As the old saying goes, “Do not attempt to cure what you do not understand.” Stressing metrics, Sean, is in terrible taste. You paint yourself as Barbarian.
Personally, I’d like to state that I don’t intend to stress metrics as being valuable unto themselves. However, I do think that all things in life can be judged, at least in each person’s personal view, as being bad, good, better and best (I’m sure there are some exceptions, but you get the point). I think it is critical that we find ways to judge nonprofits so that philanthropic dollars can flow to the organizations that do the most good in the world. To me, funding the best of what is available is far more important than trying to invent the next big thing. I think that information about nonprofits is what is needed and this is why I care about nonprofits being in the Google Finance portal.
As a professional investor in for-profit companies, I can tell you that there are very few (arguably no) golden metrics that allow you to comprehensively judge one for-profit against others. Even very widely used metrics like price-to-earnings ratios, dividend yields, profit margins, and earnings growth rates have been shown in practice to be very useful, but in no way adequate on their own for judging the superiority of one investment choice over another.
In my Philanthropy Predictions for 2008 that I wrote for the Chronicle of Philanthropy, I made one reference to measurement:
A United Way-authored outcome-measurement template will be adopted by the sector as the standard format for nonprofit organizations to report on their effectiveness. The narrative-driven form will soon be available for download from the home pages of many nonprofits.
Note that I suggest a “narrative-driven form”. If you read analyst reports on for-profit investments, you’ll see a lot of numbers and metrics, but the heart of the report is a narrative about the company.
This brings me to an excellent comment from the thread mentioned above from an anonymous “young staffer”:
If I may carry the Paradise Lost vs. Gone with the Wind analogy a little further, I think it raises some interesting points.
The first is that there are plenty of potentially relevant metrics with which one could back up one’s claim for each work’s superiority: their longevity in years, the number of universities that include them in introductory freshmen humanities courses (as a proxy measure of their centrality to our cultural canon), a RottenTomatoes.com-style survey of critics. I can even imagine poor grad students counting allusions to them in last year’s bestsellers.
Relying solely on any one of these potentially valid measures, however, would obviously leave you wide open to criticism for the flaws of your methodology and the limits of the analysis. To construct a strong argument for your preferred choice, you could use both metrics and qualitative measures. The same goes for nonprofits: the measures are neither perfect nor complete, but that is not the same as nonexistent.
I think the other point is the difficulty of comparing apples and oranges. Let me reframe the question as “Paradise Lost” the work of literature vs. “Gone with the Wind” the work of film. Both are widely considered seminal works in their respective mediums. It’s not hard to imagine metrics, like those above, that could easily distinguish each as a leader within its medium. It is much harder, however, to compare them very convincingly across mediums. An author and a film buff might reach very different conclusions about which one matters more in today’s culture. Their distinctive values and tastes will influence that decision.
The same, I think, is true for nonprofits. Too universal a measure like “average cost of impact” might not be helpful for identifying whether a great afterschool program in New York or a great community health program in Uganda is better. The costs and the measures of impact are on different scales. But metrics certainly might help you identify each within its field as the seminal nonprofit. From there, one’s values and tastes might be expected to guide your choice.
So there you have it, a good beginning to an important conversation. If there was a single webpage, like this one for the Red Cross, or this one for Cisco Systems, that contained all the information you would like to see when you wanted to examine a nonprofit for the first time and decide if you might want to support them, what information would you like there to be on the site?
Google.org owes me nothing and anything I tell them might be ignored. But on the other hand, I will deliver the message that we co-create over the next week in this discussion. Someone from one of the largest (and oldest) foundations has already asked me to pass on their offer of help to Google.org after reading my posts on the subject. I do think that any effort that you the reader put into this discussion will be heard by the powers that be at Google.org, even if they do not take action.
With my for-profit hat on, as a venture investor, I understand measurement as a way to gauge success in hitting the bottom line: more money, sooner, at higher margin; pointing to monetary value against a short time frame.
On the other hand, we might have two worlds colliding when we talk about measuring the impact of nonprofits if we expect them to adhere to a simplistic, single dimension. You can make giving, a right-brain, passion-laden act, feel like cold-hearted investing and thus rob it of value, making it more like a commodity than a luxury good, where value is not as easily quantified.
The first bar for nonprofit metrics to pass is that they do no harm: that the process of measuring, the cost of measuring, and the way the information is presented (along with what is left out) do not detract from value. Making giving ape investing robs it of its essential value. Yes, you should be able to figure out the best nonprofits, but it’s a wicked problem, a nonlinear equation. The world of impact is run by different rules.
I realize my advice is not practical. I don’t intend it to be. I hope it introduces a level of what I think is necessary ambiguity into the equation. There is no one-to-one translation between a world where social and environmental externalities can be pushed off the balance sheet, the real world where we live with their consequences, and the nonprofit world where people try to mitigate those consequences as the core of their mission.
I move to suggest we clear the decks right now and all agree that for-profit investing is not a starting point for nonprofit analysis. It has some interesting parallels that we would be foolish to ignore, but the outputs of each (social impact vs. monetary profit) are fundamentally different.
When I point out the “narrative” nature of for-profit analysis, it is not to say that we then know that this is the correct approach for nonprofit analysis, but to defuse the argument that people with a business background think that metrics are more important than narrative.
But Kevin, professionally, at Good Capital, you analyze nonprofits and decide whether to fund them (as well as funding for-profit social enterprises). So can you share with us the framework that you use? What information might you request from a nonprofit you were considering funding?
Here is another angle: “For whom is this nonprofit a good fit?” Input/output is not generally how you think about, say, citizenship, or inspiration, or passion, or the creation of a self in community with others. For some givers it is this expressive, passionate, in community with others aspect of nonprofit life that makes it worthwhile. Somehow, if you are to do more good than harm, you have to engage with people for whom this is about identity, passion, and community, not “Results” or not only results. Here is a specific idea:
How About Donor Profiles? That is how about a tab that brings a reader to profiles of particular givers and why they gave. The profile would have enough personal info so a giver could see if she is among peers, pals, friends, kindred spirits. The bigger the gift, the more this kind of “engagement” is critical. Among gifts are time, attention, leadership, and social capital (referrals). Donor Profiles would be a key tab for me.
Sean, given how influential Google is, it is critical that you, as an investment type talking to business types, not drive the conversation to the exclusion of those who are driven not by the dream of efficient outputs but by something more like solidarity. The numbers make nice charts, but a better world has a personal/political dimension that such charts do not capture.
The narrative, human, humane dimension is what distinguishes the third sector in many ways. I hope this site will appeal as much to, say, Tracy Gary, as to Holden Karnofsky.
I beg to differ with Messrs. Jones and Phil: in nonprofits as in any other human enterprise it’s possible to ask and answer the questions, “What are we trying to do and for whom? Are we succeeding?” If the answer is “teach people to read using the whole-language method that every scholar has demonstrated is ineffective,” then all the ineffability and passion in the world won’t make the charity a worthwhile recipient of money.
At the same time, the burden of reporting shouldn’t be one more thing making the work of charities harder.
My suggestion–and I’m suggesting this as an exercise for every nonprofit just for the sake of mission clarity–is to ask a nonprofit to complete the statement, “We do ACTIVITY so that HOPED-FOR RESULT takes place.” This gives prospective donors the opportunity to bring to bear their own knowledge of and preferences for particular activities, as well as their own judgment about how well any given activity connects to the result it’s supposed to produce.
In any case, it must be clear whether the narrative that appears when someone Googles a charity was written by the charity, by Google, by Guidestar . . . people are very able to evaluate assertions if they’re given sufficient data about the source of those assertions.
I agree with Nonprofiteer. There are good questions to ask. What I object to is the transposition of business language and terminology into a new realm without realizing other elements are at play. Many of the people I deal with come from the business world, as I did, and think the same methods apply in a one-to-one relationship; that is, if they could just get nonprofits to act like a business, it would all be better.
In fact, I used to think like that when I first entered this realm from my traditional business career. I had to learn that other rules were in play, more issues were at stake, and my own simplistic approach had to be modified.
The first generation of social metrics applied to nonprofit social enterprises was too cumbersome and, in many ways, essentially arrogant and even colonial in approach; that is, it did not respect the new culture it was entering.
A difficult task you’ve taken onto your shoulders. As we approach the elections, I am reminded of a similar question:
What makes a good country?
Is it simply GDP growth? % of electorate who participate in elections? Average income? Average lifespan?
I think it can be very difficult to gauge a good non-profit organization. In fact, I think that we have a problem with the way that we gauge all organizations.
The over-reliance on financial measures for evaluating a for-profit company is a problem, as well. I am in the minority with this idea, but I do not believe that the purpose of a company is to make money. I believe that it is to provide a needed service or product to the community.
This is the case for non-profit and for-profit companies. Their distinction should be limited to tax designation.
Is a profitable company a good company if it causes social ills? Is a company that provides social benefits a bad company if it operates at a deficit?
These are the larger questions that we must ask ourselves.
As comical and naive as it may sound at this point, I think that the primary metric for NPOs should be transparency. The difficulty, of course, lies in the definition. To me, the ideal nonprofit would strive for transparency in every aspect of its operations (from personnel management to donor development) and, furthermore (this is the key for me), would bear the burden of proof of its value.
In other words, the NPO would proactively provide the information necessary for the due diligence process.
Just as board members are supposed to draft by-laws, announce meetings, post minutes, and disclose conflicts of interest, NPOs ought to strive to show what they do, why they do it, and how, including the self-evident and the sticky issues alike. (From my experience as the director of an NPO, the line between these two is often blurrier than one might anticipate, which comes out through the process of talking them through.)
In my mind, transparency is an essential part of good governance (whether in the political, profit, nonprofit, or private sector), and NPOs would be on safer ground and more effective if, when they designed their services (through the entire product cycle, from introduction to implementation and follow-up), they simultaneously constructed and reported their plan for self-evaluation: the metrics and the process.
Moreover, NPOs should make all of this information available on an easy-to-find page of their website and create a mechanism (blog, bulletin board, etc.) to invite public feedback and discussion. Ideally, if an NPO lacks policies (from the point of view of the NPO, an individual in the NPO, or the public), it ought to label the holes, too. Maybe a sort of grid of “helpful information an NPO should supply” could be applied here. The essential point I am making, which rejoins some of the comments made above, is that the NPO needs to take on the challenge of explaining itself over and over and over again.
In a sense what I’m calling for is a process of self-evaluation akin to the “talking cure”, whereby the people involved in the NPO constantly move between looking at themselves (as a group and as individuals working with that group), sharing their thoughts, feelings and questions between themselves and with the world and readjust their conceptions and actions in response. The major difference of course is that the money flows in the opposite direction, from the therapist to the patient.
I think anyone who believes they’re going to come up with a universal measure that gets to social impact is fooling themselves. Impact measures need to relate to goals — and the strategies by which organizations seek to achieve those goals.
And the goals of nonprofits are diverse. Sometimes, they’re even in direct opposition. So while we might be able to agree on a shared conceptual metric — impact relative to resources expended is what we at CEP talk about — it’s not actually calculable across organizations. Say we both spent the same amount of money. Let’s say, further, that we determined the causal connection between our funding and these outcomes: you saved a rainforest and I improved high school graduation rates in Boston by 10 percent.
Whose impact was greater? We could argue about that for a decade.
This is fundamentally different from business, where the ultimate goal is shared across very diverse companies: essentially, to maximize profit. eBay and Procter & Gamble are really different, but you can ultimately judge both by measures like profitability and stock appreciation.
Foundations and nonprofits absolutely need to do a better job assessing their performance and improving on the basis of what they learn. And that requires tapping into a diverse array of performance indicators: some comparative, some not; some quantitative, some qualitative; some closer to end impact, some based on strategic hypotheses about what needs to happen first to create the desired impact. Those indicators must be tied, always, to clear goals and coherent strategies.
I was recently at a meeting where I said this, and someone said, “I’m so tired of hearing about effectiveness. What about the moral imperative to do the work?” I said, “I think the moral imperative is to do the work effectively. And if you don’t assess, you simply don’t know if you’re effective.” And if you’re ineffective (and some are), you’re better off not doing the work.
So I wish we could all agree on this. Nonprofit and foundation effectiveness requires 1) clear goals; 2) sound strategies to achieve those goals; and 3) relevant, rigorous performance indicators that relate logically to goals and implementation of strategies to achieve those goals.
Then we could ask for this information from every organization we were considering funding, or supporting in some way or another. Or there could be some common platform, like Guidestar, where every organization could succinctly outline their goals, strategies, and indicators. That seems entirely reasonable — and feasible — to me.
But there’s no single metric that gets anywhere close to impact that we can calculate for all nonprofits. That’s pure fantasy.
I’m skeptical about the value of a one-size-fits-all metric to measure charitable delivery. As with GDP, you end up measuring the dysfunctional and the functional by the same yardstick. (When tapwater quality is so bad that people have to buy bottled water, the purchases show up in GDP as an increase.)
Rather than debating the ideal composition of the magic bullet, why not ask: Has the nonprofit established and published their own metrics to measure and track their effectiveness? Do they publish the criteria they use, justify the criteria in terms of their corporate mission, and publish the results? Do they explain how they use the results of the metrics to adjust their corporate strategy and their program design and operations?
If so, give us the links. If not, scold them until they do. Then take advantage of the ‘reserve army of the under-employed’ and provide a facility for the public to discuss the quality of their metrics, how good a job they are doing sticking to their guns, and how honest they are being about it. (aka: “Use the Net, Luke.”)
Phil, it seems to me that your three requirements for effectiveness (“1) clear goals; 2) sound strategies to achieve those goals; and 3) relevant, rigorous performance indicators that relate logically to goals and implementation of strategies to achieve those goals”) could be recast more simply as:
What are you trying to do? How? Are you actually achieving what you’re trying to do?
I would also add, Why?
That’s the information I want when I try to learn about a nonprofit for the first time. The answers to these questions are clearly not some magic-bullet “metric”. Numbers and metrics may certainly be important in demonstrating the answer to “Are you actually achieving what you’re trying to do?”, but the rest of the questions must be answered in a narrative.
In addition to hearing how the nonprofits answer these questions, I’d also like to hear what other people think.
Regarding Phil Cubeta’s request that the end result be attractive to both business types and passion-driven donors, I agree completely. In fact, I don’t think that the info portal we’re discussing should even be part of Google Finance. “Finance” is simply the wrong frame here.
I think a far better frame would be a Nonprofit Wiki. Maybe each nonprofit could “claim” their own Wiki page and then be able to post their own information in a section devoted to the nonprofit itself. Then the community could add other info. It would also need a feed for news and blog posts about the nonprofit as well as a discussion forum.
There already exists something similar to this for stocks, called ValueWiki; check out this entry for Apple.
Lastly, I want to comment on the idea that donors are “passionate” about nonprofits and so “data” is actually not what they need.
Donors are passionate about causes. Nonprofits are ways to affect a cause. I think that all of the work we do on helping donors figure out if nonprofits are effective will actually increase people’s passion as they realize that there are real tangible ways that they can have an impact on the causes they care about.
A major part of effectiveness is the people involved in the nonprofit and the “culture” of the organization. Those are both elements that donors get passionate about. So my hope is that a passion-driven donor like Tracy Gary would find a Nonprofit Wiki a way to discover nonprofits she did not know about and to learn about the amazing work they are doing and the amazing people involved, while a donor like Holden Karnofsky would use the Nonprofit Wiki as a way to find the “proof” he is looking for that a nonprofit’s programs are working.
Just because it is difficult to adopt a one-size-fits-all metric for nonprofits doesn’t excuse them from the need to prove that they are achieving something good. That’s effectiveness. That’s metrics.
Find me a major charity that asserts on its website that, you know what, all it really wants to do is make donors feel good about themselves. Charities understand that they’re trying to solve problems, and I suspect most of them *think* they’re fairly good at it. If donors must rely on charities’ optimistic views of their own actions, then there is no real information for donors at all.
As Nonprofiteer says, charities should be able to articulate not only their mission, broad or narrow (improve education in Boston, or teach reading using the whole-language method), but they should also be able to articulate their goals, and some measure of how well they achieve those goals (increase literacy, by some measurable standard, among 6th graders in schools our program works with to, say, 95%).
I’m also seeing, here and in other locations where this general area has been a hot topic in recent days, the vibe that charity should all be about matching donors with something they’re passionate about.
You know what? I’m not passionate about whole-language learning. I’m not passionate about insecticide treated bed nets for combating malaria.
I’m passionate about making people’s lives better. I know my donations are not going to fix all the world’s ills, but I’d like to fix as many as possible with the donations I make.
Yes, there are some values issues that are challenging to quantify or directly compare (education vs. health, local vs. international). But if I settle on local education as the area I choose to focus on, I don’t have any strong preferences as to which methods are best. If there are multiple charities working in a given area, I’d like those charities to persuade me that their method is best. If I give $X to a charity that causes a positive change for 20 people, but a different charity could have used that money to cause a very similar change for 40 people, then I’d like to know it and, hopefully, change my giving in the future.
Phil S (there are three Phils in this thread!), I totally agree. Imagine you wanted to improve primary school education in the Boston area. Imagine that the Google database let you identify all the nonprofits working on this issue. What information would you like to see on each one? Let’s be specific so that we can draw some ideas for what data sets might exist in the database.
Sean – I’m not sure that there is a standard metric that would fully measure even a relatively narrow target like education in Boston.
But I think there are some metrics that would, with subjective commentary, be useful:
**Measures of input**
– Number of full time equivalent field workers for the charity (i.e. How many teachers, tutors, and what not are they putting into the field)
– Amount of money spent on material contribution to their program services (i.e. For education, this would be a measure of textbooks and the like. For health, it might be a measure of medicine shipped)
– Perhaps some measures of logistics and distribution efforts (more important for charities doing international material aid)
– Perhaps some measure of money/effort spent on advocacy. Advocacy *results* are, admittedly, quite hard to measure. But I’d like to know if a charity is spending 80% of its ‘program expenses’ on advocacy and 20% on direct efforts, or if the ratios are flipped. Depending on the cause in question, and my personal views, I might favor 80/20 or 20/80, or something else, but in this case, at least understanding the split in focus would be useful.
**Measures of output**
– Number of individuals directly impacted by the program (i.e. number of students perhaps)
– Average time spent in the program, per student (i.e. 5 hours per week, for 3 months, say)
Most of the above could be reported in a relatively uniform way for many different charities. But I’d also want some free form fields, for the charities to list data that they feel is important. i.e. “We increased 6th grade literacy on a standardized test to 96% for those who were in our program, versus 82% for a similar group not in our program.”
I mentioned “with subjective commentary” above, but forgot to flesh that out.
While my suggested metrics might provide a somewhat general framework that might be applied to many kinds of charities, I think they’re not very useful unless the charity gives a bit more detail. So, when we talk about number of individuals impacted, the charity might provide some sort of textual description of those impacted (primarily lower to lower-middle class children in grades 4-6 in the greater Boston area).
In some cases, this information might be relatively obvious from a simple description of what the charity does, but that would not always be the case. Hence the need to combine numbers with context (i.e., what those numbers are describing).
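One way to picture the uniform-plus-free-form reporting described above is as a simple record type with standard input and output fields plus charity-supplied context. This is a speculative sketch only; every field name and figure below is invented rather than drawn from any real reporting standard.

```python
from dataclasses import dataclass, field

@dataclass
class CharityReport:
    # Measures of input
    fte_field_workers: float      # full-time-equivalent teachers, tutors, etc.
    material_spending: float      # dollars spent on textbooks, medicine, etc.
    advocacy_share: float         # fraction of program expenses spent on advocacy
    # Measures of output
    individuals_served: int
    avg_hours_per_week: float
    avg_weeks_in_program: float
    # Free-form, charity-supplied context
    notes: list[str] = field(default_factory=list)

    def direct_share(self) -> float:
        """Complement of the advocacy split (e.g. 0.2 advocacy -> 0.8 direct)."""
        return 1.0 - self.advocacy_share

# A hypothetical filing, mirroring the literacy example in the thread.
report = CharityReport(
    fte_field_workers=12, material_spending=40_000, advocacy_share=0.2,
    individuals_served=300, avg_hours_per_week=5, avg_weeks_in_program=12,
    notes=["6th-grade literacy: 96% in program vs. 82% in comparison group"],
)
print(f"advocacy/direct split: {report.advocacy_share:.0%}/{report.direct_share():.0%}")
```

The structured fields support rough cross-charity comparison, while the `notes` list carries the narrative context without which the numbers are hard to interpret.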
It seems to me that Google needs to know what is already out there and what they can provide that is both different and valuable. I would start by orienting them to all of the existing resources that provide data, metrics, measurements, or performance tools. An incomplete (and overly US focused) list would include –
• The NPO Reporter,
• New Philanthropy Capital,
• World Bank,
• The Philanthropic Initiative,
• Urban Institute,
• Keystone Accountability,
A full list would be a useful thing to generate collectively and for you to bring to Google. Given Google’s reach, a global list is key here – check out the Israeli organization I mentioned on philanthropy2173 on January 2 (Note: I have no affiliation with the Israeli organization).
Once the list exists, a gap analysis is the next step. What do we know, what don’t we know? What information do we have, what do we not have? What information is useful, and for what purposes? What are the advantages and disadvantages of each of the above resources (and others that would be listed)?
Then, given that it is Google, the question becomes “What data are already available, scalable and can be crawled/compiled by machine (not person) and are also useful?” This is the hardest question to answer. Data that are readily available and that can be crunched into easy-to-calculate ratios are not really useful for making decisions and comparisons. So…
• Is such data more useful if it is arrayed in different ways than it can now be found? Could Google display the info so sets of nonprofits are lumped together for comparison – Israeli arts orgs under $500,000 in budget are only compared to other such arts orgs, Kenyan health centers are only compared to Kenyan health centers of similar mission or size, etc.
• Are there single, common data points that all NPOs have that are useful? Overall budget? Earned revenue? Earned revenue ratios? Numbers served? Percentage of board members that donate to org? (I’m listing examples, I am not suggesting that these particular examples either exist or are necessarily useful)
• Can Google do more than just list the data as a finance site? e.g. can it make it part of Google Knol and unleash the power of data and humans?
• Can Google invest in/buy out GrantsFire and focus on using foundation grants as a proxy for something?
I’m not sure any of this is useful, and some of it is rather deliberately “out there.” At the very least, it would be great if we and you could collectively identify all the resources we do know of.
(Full disclosure: I am an advisor to NPO Reporter, a board member (at this moment) of GiveWell, and have partnered with TPI. I also have had professional conversations with staff at NPC, Keystone, and the organization that hosts SmartLink.)
I’m not convinced that this sector is a good one for Google to address, at least using the tools that are typical for Google.
IIUC, Google is very focused on automated tools – web crawls and such. I suppose a sophisticated web crawl could create summary data approximating what is already available at sites like GuideStar and Charity Navigator, but then, recreating what already exists isn’t all that useful, is it?
I think there IS a need for better information organization in this sector, but I think going beyond what already exists will probably require human intervention – either a single organization addressing various holes, or some sort of collective effort of many users (something like Wikipedia, focused on the charity sector).
If Google were willing to do either of the latter two (either dedicate a small staff to content creation of its own, or to tool creation and support and perhaps helping get a community started for the last approach), then I think they could make a real contribution.
Sean, perhaps you could provide more information on the type(s) of effort Google is willing to consider?
Phil S., I don’t have any knowledge of what Google.com or Google.org is “willing to consider”. I’m going to bring my thoughts and the thoughts generated by this conversation to the meeting. I don’t think we should constrain this conversation by any perceived limitation that Google might have. Google might not be the best place for all of this to move forward.
Why not a self-test that measures would-be donors’ understanding of charities, of what does not count as an appropriate measure of nonprofit performance and why it doesn’t? Then give these donors the option of clicking through to online tutorials.
I applaud this thread for this ambitious goal!
To me, the measurement issue has to be viewed through the lens of the different levels of service a group provides. I would recommend a multi-tier classification system with a cost-per-client ratio within that context, and recommended benchmark levels within each classification. These levels could be defined something like the following (these are by no means the ones I would recommend; I am just demonstrating how it could work):
1) Time intensive (3 or more times per week interaction with clients for multi-hours per interaction)
2) Time limited (1–4 interactions per month of an hour or more each)
3) Transactional (radio spots, flyer distribution, one time educational opportunities, etc)
4) Other classifications as needed
For example, a group whose mission is to educate people may have very limited interaction with each person (handing them a flyer) and would have a low cost-per-person ratio due to that limited interaction, but it could fairly be compared to others with transactional interactions. By comparison, a group that works day in and day out caring for those with severe disabilities would be classified as time intensive, since 24-hour care is required, and would have a very high cost per person served that could be benchmarked against others in the same classification.
These classifications would allow groups with similar interaction types to be compared more accurately against others doing similar work. I prefer a tiered approach since all the different types of work are valuable, but it is difficult to compare apples to apples without the further clarification the classifications provide.
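As a rough illustration of how the tiered comparison could work, here is a minimal sketch in Python. The tier names, benchmark figures, and example numbers are all hypothetical placeholders invented for the example, not recommendations:

```python
# Hypothetical sketch of a tiered cost-per-client comparison.
# Tier labels and benchmark dollar figures are placeholders only.

TIER_BENCHMARKS = {
    "time_intensive": 5000.0,  # e.g. daily, multi-hour care per client
    "time_limited": 400.0,     # e.g. 1-4 interactions per month
    "transactional": 2.0,      # e.g. flyers, radio spots
}

def cost_per_client(total_cost: float, clients_served: int) -> float:
    """Total program cost divided by the number of clients served."""
    if clients_served <= 0:
        raise ValueError("clients_served must be positive")
    return total_cost / clients_served

def compare_to_benchmark(tier: str, total_cost: float, clients_served: int) -> str:
    """Report how a group's ratio sits against its own tier's benchmark."""
    ratio = cost_per_client(total_cost, clients_served)
    benchmark = TIER_BENCHMARKS[tier]
    verdict = "at or below" if ratio <= benchmark else "above"
    return f"{tier}: ${ratio:,.2f} per client ({verdict} the ${benchmark:,.2f} benchmark)"

# A flyer campaign and a residential-care program are only ever
# compared within their own tiers, never against each other.
print(compare_to_benchmark("transactional", 15000, 10000))
print(compare_to_benchmark("time_intensive", 600000, 100))
```

The point of the sketch is structural: the benchmark lookup is keyed by tier, so a cross-tier comparison simply never happens.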
I hope this makes sense on a Friday after a long-work week. I look forward to all the replies!
Oh, and I forgot to add: this is just one small component, needed in addition to clearly stated goals and results and full disclosure. I think the Nonprofiteer, Lucy, and Phil Steinmeyer are on the right track. There is so much we can be doing and should be doing. It is a little dizzying.
Lucy thinks it’s a discovery problem: find out who is doing what, how it connects, and what the gaps are. I think that makes far more sense than yet another replication of a cumbersome set of metrics that adds cost but doesn’t deliver value.
What I’d like to see is a greater attempt to bridge the gap between academia and other ‘expert’ forums, and websites that target the public.
My gut feeling is that a lot of the things that charities do, with good intentions, don’t necessarily have much impact. Even if we say that charity X ran their program for Y people with Z dollars of funding, those metrics are moot if the program in question doesn’t work.
I assume that these issues are discussed in somewhat more rigorous form (randomized trials, or at least multi-factor outcome studies), and hopefully there is at least a degree of consensus among experts about what works and what doesn’t. But where can I, a non-academic, read about what really works and doesn’t?
I’ve read/skimmed a few books on aid, but they tend to be focused on government aid, rather than what private charities do, and/or their authors are such strong advocates of relatively extreme positions that I question their summaries of existing research.
I, too, am finding this a very valuable thread. My 2 cents are:
I agree with the idea of measuring a nonprofit by its stated mission, strategies and outcomes.
Additionally, I would look to see how a nonprofit implements its mission internally and externally. Basically, does an organization practice what it preaches? If the issue is ending poverty, does the nonprofit pay living wages to its employees? Does it support economic development projects that hire local residents and pay decent wages? If the issue is access to health care, does it provide health care benefits to its employees, does it have paid sick leave?
Looking at the mission and values held and implemented by a nonprofit can go a long way toward interpreting the “cost per unit of service” figure.
Sean, thanks for taking this on and good luck with Google.
If we’re really going to talk about metrics in this pretty retro, what-can-you-capture-about-inputs-and-outputs sort of frame, rather than discovery, then I have to recommend my favorite development economist, a woman who used to work with Amartya Sen and has now opened her own center at Oxford University, the Oxford Poverty and Human Development Initiative. Sabina, who is a good friend, believes you need to start by measuring how a well-meaning initiative creates empowerment and freedom for the recipient, and go on from there.
My main point, on which Sabina and I are totally in sync, is that metrics need to be multidimensional to be valid. The linear metrics being expressed on this thread really bore the heck out of me; this kind of thinking has been to little effect for decades. Check out early Jed Emerson.
It’s an epistemologically impoverished frame to impose a manufacturing metaphor, which is what most of this thread does, on something that has many more dimensions than inputs and outputs.
The first thing metrics have to prove is that their imposition does no harm. Stop and think before you stick some crazy metric dashboard into an enterprise. Most of the time, metric dashboards are killed by “friendly fire” from the people on whom they are imposed: people don’t cooperate with the measurement, or they feed it garbage, or they find a way to work around it.
As a former public school teacher and administrator and the current Director of an NPO, I have been involved with the challenge of evaluation for a very long time. And I couldn’t agree more with Kevin Jones when he wrote: “It’s (evaluation of NPO’s by fixed metrics) an epistemologically impoverished frame to impose a manufacturing metaphor” and “the first thing metrics have to prove is that their imposition does no harm. stop and think before you stick some crazy metric dashboard into an enterprise”.
Most people are aware of the numerous ways in which schools classify children and the role that this sorting plays in the reproduction of society (Bowles and Gintis, Illich, etc.). Many smart and dedicated people who are concerned with the problem have spent a lot of time and energy designing and experimenting with alternatives. Although there have been some noteworthy reforms (e.g. the introduction of state-wide standards such as the Vermont Curriculum Standards, the creation of new tools such as rubrics and the student portfolio, and the National School Reform Faculty’s retraining of teachers as “critical friends”), the general consensus among educators is the following: there is no silver bullet for school reform.
There is no single metric that can be applied to a student, because there is no model person and no given path to any particular end. That is not to say that standardized tests are of no use, but rather that there is no single tool or collection of tools that can tell us everything that is important about an individual. Likewise, there is no standardized collection that tells us all we need to know about an NGO, because NGOs are as diverse as people and there are innumerable ways in which they can be effective.
Evaluation is best considered as a process, not a score: a process of give and take between the evaluator and the evaluated. The tools that comprise the process need to be multiple, varied, frequent and responsive in order to accommodate organizations’ differences and growth and the ever-changing values of society.
Furthermore — and Kevin makes this point beautifully — it is critically important that the evaluating process not harm the evaluated. On the contrary, in my mind, a useful test is one that causes the test taker to learn something — a prompt towards self-evaluation and self-directed growth.
NGO’s, like schools, are in the people business, which requires a combination of clarity of mission, rigorous self-evaluation and reporting and constant revision and retelling of the story to garner support. The evaluation process needs to respect the organic nature of the work and support, rather than hinder, its development.
We shouldn’t let pressure (public or market) force us to provide a product that the sector doesn’t call for. Instead, we should try to teach the public how to conduct due diligence and encourage NPO’s to turn the public’s need to know into an opportunity for improved governance.
Comparing non-profits is more like comparing privately-held companies, rather than public corporations that have price-earnings ratios and such. When you invest in a private company, that usually involves talking directly with management, perhaps site visits, etc. That’s very hard to replicate on a large scale.
Additionally, investments in a public corporation are instantly and daily vetted by the price action in the stock market, which means you also instantly know what everybody else is thinking. I wonder if there would be a way to create something similar for non-profits, rather than an annual review of their 990 or other such slow-moving analysis.
As for metrics, I think you could compare them within categories – such as homeless people served per dollar – assuming you can also compare the quality of the care.
I’m a private equity investor; that’s my day job, and I’ve succeeded in the public markets when I put my mind to it. The metrics that matter with your investor hat on are simple: time to money, essentially. What brings in the most money in the shortest amount of time, most sustainably, at the highest margin? That’s all simple stuff, though I used it in my seven businesses, all of which became dominant in their markets before I sold for an average of 10x.
But that’s not the right frame for this. Metrics as conceived on this thread live in that small sphere of activity where the equations are linear: inputs equal outputs. That works only if you constrain the problem in a way that eliminates other dimensions of impact, unintended consequences, etc. This is a game where the imposition of metrics does not cause more money to flow; it only hampers operations at the behest or whim of imperialistic, they-know-best funders. Any valuable metrics in this space would be a dialogue, not a reporting to an all-powerful funder.
Development as Freedom by Sen, and Sabina’s work, deal with the power dynamic implicit behind all talk of metrics. It is easier to see in international development, where the power relationship has a colonial political base, but it is an underlying assumption behind nearly all of the comments on metrics in this debate, I think.
Look at the power issues behind your demand for metrics. Think how it would be to be on the other side of that intrusive tool. Would it help you work better or with more commitment? Or would it be a braying loudspeaker from some big-brother funder that made you stop doing what you were doing and report to big daddy, the company store?
I don’t see the use of metrics as primarily a means of badgering a charity I’ve already given to, but rather, primarily as a means of selecting charities to give to in the first place.
IMO, an organization that has been effective in the past, per dollar spent, is likely to be effective with additional donations.
Phil S, I think you don’t see the cost of applying your metrics, whatever they are, to the charity you are trying to give money to, and then forcing it to report and skew operations toward the path that will bring in the money. You don’t seem to realize the impact of your request, in my opinion. If you create a gate on the path to the money, you have imposed a severe constraint on operations. I expect that of the companies I invest in, but I don’t ask them to take on the cost of due diligence until we have vetted them pretty thoroughly, run them by our investment committee for approval, and they and we all say that if things continue on the track we see, with the visibility we have, we would invest $1 million plus in them with a five- to seven-year commitment. Only then do we ask them to comply with our due diligence process.
But nonprofits get subjected to intrusive measurement for placement of money all the time, with no assurance that if they pass they really will get something. The process for a for-profit investment seems much more fair: our analysis has a cost; we are sure there can be a materially positive outcome if you pass this test; and we won’t change the test, though we will sample the product, the team, the process, the social impact, the scalability, etc., on all the levels that we find, from the quantitative to the qualitative and subjective. We turned down one recent investment prospect that made financial sense when, after a three-hour meeting with the team, I asked our folks: so who wants to come up here and spend a minimum of a day and a half with these guys during a board meeting, talk to them 10 to 15 times a week, send dozens of emails a week on their behalf, and work to open doors for them to reach their potential? No one wanted to invest our time in them. It was not a good fit, and that was a good reason to say no, validated by other, more quantifiable things.
Metrics have a cost. Realize the cost before you impose them.
I understand measurement as a way to gauge success in hitting the bottom line: more money, sooner, at higher margin; pointing to monetary value against a short time frame.
Measurement is information, and information is needed to establish the reputation of the prospect you want to invest resources (attention, ideas, time, money, contacts) in, whatever your motives, for profit or not. You have to establish their reputation to determine to what extent you can trust them – that is, what the risk involved is: what is the probability that your investment will not have the desired effect (through incompetence, poor luck, or fraud)?
So the general call to arms here is correct: we need more information. Measurement is a fairly scientific (and thus usually more reliable) method of obtaining, displaying and weighing that information, and of fairly and reliably converting it into a metric for reputation (though other factors come into play, like the brand of the organisation and the overlap of their values with your own – e.g. caring about the same issues you do).
So yes, I feel we SHOULD get obsessive about hard metrics. Other means of determining reputation – experts’ analysis, or the aggregation of popular consensus – are faulty. Only with hard metrics do we get a relatively trustworthy estimate of the expected return on investment. Note, however, that talk of overhead, financial ratios, etc. is a misleading obsession. At best, these tell you about the organisation’s fundamentals – i.e. all you learn is whether or not it is likely to go bust, taking your investment with it.
It’s more important for an organisation to develop its own KPIs (key performance indicators) – which it should be doing anyway! – and then make its expected and actual performance against those KPIs public. These do NOT have to be the same metrics as a neighbouring organisation’s, but it helps if they are, since shared metrics make it easier to be fair in converting them to reputation (they don’t need interconverting and thus weighing).
HOWEVER, if, as some here have argued, the institutional model of philanthropy cannot effectively deliver these metrics, then the alternative is to cut out the middle man and get the donor (investor) closer to the point of impact – i.e. directly involved in how the money is spent, so he can judge for himself, first hand, how effective his investment is. This requires a huge shakeup in the organisation – even the basic philosophy – of how we do things, of a similar magnitude to the revolution UGC (in the form of YouTube etc.) is delivering to traditional media. It COULD happen. It has its obvious downsides, as does YouTube-style media production. But it is a very important counterpart to the traditional way of doing things, if done correctly.
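The idea of an organisation publishing expected versus actual performance against its own self-chosen KPIs can be sketched concretely. Here is a minimal illustration in Python; the KPI names and all the figures are invented for the example:

```python
# Hypothetical sketch: an organisation reports actual vs. expected
# performance on its own self-chosen KPIs. Names and numbers are invented.

from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    expected: float
    actual: float

    @property
    def attainment(self) -> float:
        """Actual as a fraction of expected (1.0 means the target was met)."""
        return self.actual / self.expected

def public_report(kpis: list[KPI]) -> list[str]:
    """One human-readable line per KPI, suitable for publishing."""
    return [
        f"{k.name}: {k.actual:g} of {k.expected:g} expected ({k.attainment:.0%})"
        for k in kpis
    ]

kpis = [
    KPI("households rehoused", expected=200, actual=180),
    KPI("job placements retained 6 months", expected=50, actual=55),
]
for line in public_report(kpis):
    print(line)
```

Note that the two KPIs here are deliberately different in kind: nothing forces a shared metric across organisations, but each line is still directly interpretable as a fraction of a stated target.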
GiveWell has made a promising start on that kind of platform, despite the glaring misfires in the execution of its leader. What is right and good and worthwhile about the GiveWell platform, if we can focus on that for a minute rather than on the tragedy of a founder who lost his way for a while?
I agree with Maureen and Kevin that there is no one way to evaluate a nonprofit. Some nonprofits, such as job training programs, lend themselves easily to a metrics approach. However, the majority of nonprofits have more challenging, multi-dimensional goals.
For example, I visited a homeless shelter near my house that’s been around for over 10 years and gives homeless people free breakfast and some housing. I asked the director, a Unitarian minister, how he measures his effectiveness. I had expected him to say something about the number of people he’s helped find jobs, or the number of meals they hand out, etc. Instead, he replied simply, “Last week one of our (homeless) regulars died. And 10 people, who he’d gotten to know here, showed up for his funeral. That’s how I measure our effectiveness.” That sobering encounter really made me think hard. Many nonprofits are trying to make a difference in people’s lives, and people are not products. We are complicated; changing attitudes, ideas and behaviors can take years, and it’s hard to isolate which factor contributed to any specific result. It’s much more complicated than running a business, where it comes down to profits, units sold, or, in the case of Google’s culture, milliseconds and bytes.
The challenge of evaluating a nonprofit using metrics is similar to asking a parent: by what metric do you evaluate whether you are a good parent? Whether your kid eats green vegetables? Whether your kid makes the basketball team? Whether your kid gets into an Ivy League college? It seems intuitive that none of these metrics adequately reflects whether we have been good parents or not. We may look to some of these things, but then we also use our judgment: is the kid happy and well-adjusted? And even then, we may not know for years whether we succeeded as parents, until we see our kids grown up.
Sociologists struggled with the evaluation question in the 1960s when evaluating government social programs, and most gave up on the idea that you could feasibly use scientific methods to evaluate nonprofit programs. MDRC does some great evaluations using control groups and random selection, but very few programs would qualify for the huge expense and time involved (plus the moral dilemma of assigning certain people to a control group where they don’t get the treatment or program).
A good balance, I think, is that suggested by Phil Buchanan: every organization should set and state its goals, strategies and indicators. Indicators, in my opinion, can embrace narrative stories of impact as well as some sensible numerical measures. Stories are just as valuable as metrics – sometimes a lot more so, because they are able to capture nuances. And in terms of galvanizing donors and volunteers, the research consistently shows that photos and stories are more powerful than numbers alone.
I’m enjoying this discussion. Thanks everyone.
Perla, your point about parenting is well taken. I’ve wondered if the perspectives people have expressed in the discussion of NPO evaluation mirror their child-rearing philosophies and political party inclinations. Of course, I’m not the first to make this association; George Lakoff developed the thesis in his book “Don’t Think of an Elephant! Know Your Values and Frame the Debate”.
I think that implicit in the conversation about NPO evaluation is a difference of opinion about what constitutes an adequate representation of an entity, whether it’s an individual, a group, or a culture, and that in order to proceed it’s important to flesh out the idea of identity – in this case, the NPO’s.
I’ve really enjoyed this discussion. I personally believe evaluating nonprofits is mostly about evaluating their output (the social good they produce). Since this output cannot be quantified (as Perla notes above), I think the focus on metrics as a framework for evaluation is misplaced. Metrics can be used, but they should be designed on a case by case basis for each situation.
Now what I’d really like everyone to comment on is what information you would like to see available on Google Finance. As noted in this thread, Google aggregates and finds information; it does not tend to create its own. So what data should it display on the Google Finance page?
Rather than continuing to comment on this thread, let’s move this conversation to my new post from today.
I hope to see you all there!
I want to point out to everyone who commented on this thread, that another version of this debate is forming in the comments to my podcast interview with Phil Buchanan of the Center for Effective Philanthropy.
You can find the new thread here. I’d love to have your input.
Due to the recent surge in spam comments on this post, I’m closing it to comments.