SoCap Coverage: Nonprofit Analysis, Beyond Metrics

This is a guest post by Adin Miller, owner of Adin Miller Consulting, who is providing coverage of the Tactical Philanthropy track at the Social Capital Markets conference. Follow him on Twitter: @adincmiller

By Adin Miller

The closing session for the Tactical Philanthropy track at the 2010 Social Capital Markets Conference (SOCAP10) was jokingly referred to as the Inquisition. In all honesty, it was a tremendous moment in transparency and candor and delivered on its intention to examine three different approaches to analyzing nonprofits. Taken together or separately, the assessment approaches should help donors and social impact investors determine how a donation or investment might be applied to create positive community impact.

The session served as a forum for three major nonprofit analysis and rating agencies to report on their examination of DC Central Kitchen, a willing partner in the exercise. It featured Colette Stanzler of the Social Impact Research Initiative at Root Cause, Timothy Ogden of Sona Partners representing GiveWell as a member of its Board of Directors, and Ken Berger of Charity Navigator. Michael Curtin represented DC Central Kitchen as its CEO.

The session’s subject – moving beyond the application of simplistic metrics as rating tools for nonprofits – probably generated the most Tactical Philanthropy-specific pre-conference discussion. I previewed this session and some of my questions in two earlier posts (first post and second post) prior to SOCAP10. The lack of demand for metrics by individual social impact investors was highlighted by Kevin Jones in a Social Edge discussion. Another pre-conference discussion hosted by Sean on Social Edge on moving beyond metrics generated 37 comments, some of which came from the panelists. In short, this is not a subject without its critics and opinions.

In introducing the session, Sean explained that he wanted to highlight how nonprofit analysis, which has focused simply on metrics and scores, is now evolving into a more holistic assessment of the whole organization.

Michael led off by presenting information about DC Central Kitchen as a social and philanthropic investment enterprise. He acknowledged the vulnerability of being under the microscope in a public setting, especially since he would be hearing the analyses for the first time. But, he also stressed the value of transparency, the ongoing need to develop more stories highlighting the organization’s effectiveness, and the importance of incorporating data to support its communication efforts.

The three rating agencies then presented their analyses. Without getting into the minutiae of each analysis, which will be available in some form in the future (GiveWell, for instance, will publish its entire report on DC Central Kitchen online), I’d summarize their assessments as a split decision. All of the agencies identified DC Central Kitchen as among the top programs in providing employment assistance. But only Root Cause and Charity Navigator recommended the organization as an investment through which donors and social impact investors could create positive community impact.

In presenting the Root Cause analysis, Colette concluded that DC Central Kitchen was a very impressive organization and offered a real investment opportunity. The analysis, which focused on organizational health, program performance, and social and economic outcomes, identified DC Central Kitchen as having strong financial health, a solid management team, and demonstrated results.

Likewise, the Charity Navigator analysis concluded that DC Central Kitchen was a strong organization worthy of additional donations and investments. The analysis applied the new Charity Navigator analysis model that will officially launch in July 2012. That model rates organizations on their effectiveness and results (50%), accountability and transparency (17%), and financial health (33%). The effectiveness and results segment demonstrates Charity Navigator’s move to link outcome performance with its assessments. The financial health segment includes an analysis of an organization’s efficiency and sustainability; it still uses overhead ratios as a criterion, but they now account for only 10% of the overall score.

Under the existing Charity Navigator model, DC Central Kitchen garners only two stars, an average score for nonprofits. Under the new analysis model field-tested for the session, the organization earned a completely different rating of four stars and a total score of 82 out of a possible 100 points. Specifically, it earned 41 out of 50 points for effectiveness and results, 14 out of 17 points for accountability and transparency, and 27 out of 33 points for financial health.

The somewhat dissenting review came from GiveWell. Channeling Holden Karnofsky, who founded GiveWell in 2006, Timothy highlighted GiveWell’s rigorous analysis process, which involves independent research and review, an examination of a nonprofit’s website, open-ended conversations, and follow-up requests for more information. GiveWell’s analysis asks several key questions focused on documenting evidence of a nonprofit’s impact, assessing what a person gets for a donation, and determining whether there is room for more funding and evidence supporting options for scaling growth.

The GiveWell analysis focused on the job training component at DC Central Kitchen. It concluded that there is not yet compelling evidence that DC Central Kitchen has impact over and above other organizations and efforts in the same space. Nor did the analysis clearly identify how additional funds could scale up the organization. As a general approach, GiveWell attempts to determine what options a donor has, beyond donating to a specific nonprofit, that will generate the best impact. In its analysis of DC Central Kitchen, GiveWell concluded that the organization does represent a top program in the employment assistance arena, but that it may not be as compelling a means for a donor to create significant impact.

In taking a few days to mull over this session, I’m still struck by how much the tenor of the discussion of nonprofit ratings has moved beyond simple metrics. While each of the rating agencies admitted that it could have used a bit more time to conduct its analysis, the information presented does appear to be fairly substantial. Each rating agency also presented its findings while noting its philosophical biases on what makes for an effective nonprofit and potential recipient of donations and investments. And while the session did not address my fundamental concerns about applying a retrospective analysis in order to justify prospective funding, it was fascinating nonetheless.

Several issues still need to be addressed as we continue to move away from applying simple metrics in assessing nonprofits. For example, the assessments represent a snapshot of an organization at a specific moment in time. The assessments also vary in cost: for Root Cause and GiveWell, they are not inexpensive, while Ken did state that an in-depth analysis from Charity Navigator costs only $250. With the possible exception of Charity Navigator’s new model, providing real-time and ongoing updates to an organization’s assessment may present challenges to the rating agencies.

The assessments also do not represent simple approaches that might appeal to the general donor population. As noted in the Behavioral Finance session, while 85% of the respondents in the “Money for Good” report (PDF) care about nonprofit performance, only 32% conducted research on a charity prior to giving and only 3% would use comparative data to make decisions on where to donate.

The analytical approaches may also require additional adjustments for organizations that are not providing direct services, such as advocacy organizations or animal welfare organizations (where it would be difficult to interview direct service recipients).

The assessment approaches will also need to reconcile their differences with social impact benchmark systems such as IRIS and GIIRS. The panel’s response to an audience question on integrating their assessment systems with IRIS and GIIRS left many of us concerned (see Nell Edgington’s post on SOCAP Day 2; it was also a point raised on Twitter after the presentation). GiveWell remains quite skeptical of social return on investment benchmarks and metric systems like IRIS and GIIRS, and Ken wasn’t familiar with the systems, though he was generally supportive of the concept of integrating data resources. Only Colette offered support for using standard indicators across fields, such as those in IRIS and GIIRS, that would be applicable to her assessment work.

Nell’s point that “the philanthropic and impact investing worlds aren’t communicating and collaborating” needs to be addressed as we move to improve nonprofit assessment tools. A convergence is taking place in measuring the impact of the social investment marketplace in response to “the need for coherent and ‘harmonized’ social and environmental performance system” (see Elly S. Brown’s wonderful post on the Harmonizing Tools to Measure Impact session). We need to keep pushing for analytical tools that can bridge both charitable giving and social impact investment opportunities. Ultimately, we all want evidence of what works and where to best allocate donations and social impact investments. This session reveals that we’re moving in the right direction, but we still have work to do in this area.