I have to move on to a bunch of other topics that have gotten stacked up in my blog post queue given all the Social Innovation Fund discussion. But I want to cover one more aspect of the situation first because I’ve gotten a number of questions about it (including from non-finalist grantees).
The event that drove the Fund to release so much information about the grantees and the grantmaking process was Paul Light, a reviewer for the Fund, announcing that he had given one of the grantees (New Profit) the lowest possible rating and asking how it was possible that they had gone on to win an award. [Update: a quick reader correctly points out that a number of issues pushed the Fund’s disclosure, including Light’s comments and questions from the New York Times, Chronicle of Philanthropy, Nonprofit Quarterly, and others.] While much attention has been paid to the fact that the Fund released the application materials and reviewer comments, the Fund also released detailed information on the various rounds of reviews and the scoring of the finalist grantees.
The Fund went through a four-stage process to score the applications. Notably, the Fund did not simply rank all the applications and give the grants to the top scorers. Instead, the Fund used a “playoff” system similar to how most sports leagues award championships.
Phase I
16 separate three-person panels of reviewers rated the 54 applications, with two panels assigned to each application. The panels rated the applications with one of four scores: Excellent (I), Strong (II), Satisfactory (III), or Weak/Non-Responsive (IV).
All applicants that received at least one Excellent rating moved to Phase II, as did all applicants that received Strong ratings from both panels. Of the 15 organizations that received one Strong and one Satisfactory rating, 11 moved forward and 4 were disqualified because reviewers raised issues that could not have been clarified in Phase II. In total, 31 applications moved to Phase II.
Note that New Profit was one of three groups that received an Excellent rating from one panel and a Weak/Non-Responsive rating from the other. Like all groups that received at least one Excellent rating, they moved on to Phase II. Also note that the single application that received Excellent ratings from both panels did not receive a grant. I’ll cover this further in a moment.
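For readers who think in code, the Phase I cut can be summarized as a simple rule set. The sketch below is mine, not the Fund’s: the function name, the numeric encoding of the rating scale, and the staff_approves flag are all illustrative assumptions.

```python
# Illustrative sketch of the Phase I advancement rules described above.
EXCELLENT, STRONG, SATISFACTORY, WEAK = 1, 2, 3, 4  # the Fund's four-point scale

def phase1_advances(panel_a, panel_b, staff_approves=False):
    """Return True if an application moves on to Phase II."""
    ratings = sorted([panel_a, panel_b])
    if EXCELLENT in ratings:
        return True            # any Excellent advances, even paired with a Weak
    if ratings == [STRONG, STRONG]:
        return True            # Strong ratings from both panels advance
    if ratings == [STRONG, SATISFACTORY]:
        return staff_approves  # 11 of 15 advanced; 4 were cut over issues
                               # that Phase II could not have clarified
    return False               # every other combination is out

# New Profit's split decision: one Excellent, one Weak/Non-Responsive -> advances.
assert phase1_advances(EXCELLENT, WEAK)
```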
Phase II
All applications were reviewed by a newly assembled review board and rated on the same rating scale. However, this Phase examined the applications only on their use of data, evidence and evaluation.
All applications that received an Excellent or Strong rating (all eventual grantees scored in this range) moved to Phase III, except that the Fund’s staff eliminated one application they determined was not responsive to the Fund’s requirements.

Applications rated Satisfactory in Phase II that had received two Excellent ratings, or an Excellent and a Strong rating, in Phase I also moved to Phase III. Those that scored lower in Phase I were examined by the Fund’s staff for alignment with the Fund’s portfolio criteria, and two of the 10 moved to Phase III.

All applications rated Weak/Non-Responsive in Phase II were dropped. In total, 16 applications moved to Phase III.
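With the same caveats (an illustrative sketch, not the Fund’s actual logic; the staff_responsive and staff_portfolio_fit flags stand in for the staff judgment calls described above), the Phase II cut might look like this:

```python
EXCELLENT, STRONG, SATISFACTORY, WEAK = 1, 2, 3, 4  # same scale as the Phase I sketch

def phase2_advances(phase2_rating, phase1_ratings,
                    staff_responsive=True, staff_portfolio_fit=False):
    """Return True if an application moves on to Phase III."""
    if phase2_rating in (EXCELLENT, STRONG):
        return staff_responsive  # one application was cut here as non-responsive
    if phase2_rating == SATISFACTORY:
        strong_phase1 = sorted(phase1_ratings) in ([EXCELLENT, EXCELLENT],
                                                   [EXCELLENT, STRONG])
        # Top Phase I scores carry a Satisfactory application forward; the rest
        # faced a staff portfolio review, which advanced 2 of the 10.
        return strong_phase1 or staff_portfolio_fit
    return False                 # Weak/Non-Responsive is dropped

# The double-Excellent Phase I applicant, rated only Satisfactory here, still advances.
assert phase2_advances(SATISFACTORY, (EXCELLENT, EXCELLENT))
```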
Phase III
The Fund’s staff and three external reviewers met and discussed the Phase III applications, with an emphasis on the strength of their relationships and collaborations, their opportunity for scale, their potential to impact public discussion, and the rigor and sophistication of their evidence and evaluation. At the conclusion of these discussions, 11 of the 16 applications advanced to Phase IV.
Phase IV
The Fund sent a long series of detailed clarifying questions to the Phase IV applicants. After reviewing their responses, the Fund awarded grants to all 11 finalists.
Remember the one applicant that received Excellent ratings from both review panels in Phase I? They were rated only Satisfactory in Phase II. They advanced to Phase III, but were eliminated at that point. The reason I refer to the process as a “playoff system” is that the applications had to make a certain cut to move to each Phase, but then started fresh against the new, smaller pool of applications. The applicant that received two Excellent ratings in Phase I was like the New York Yankees having the best regular-season record, but being beaten by another team deep in the playoffs.
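To make the playoff analogy concrete in code: each phase is a fresh filter over the survivors of the previous one, along the lines of the illustrative loop below (again my sketch, not the Fund’s).

```python
def run_playoff(applications, phase_filters):
    """Apply each phase's predicate to the survivors of the previous phase.

    Scores do not accumulate across phases (aside from the Phase I tiebreak
    applied to Satisfactory Phase II scores), so a perfect early round buys
    entry into the next phase but no head start after that.
    """
    pool = list(applications)
    for advances in phase_filters:  # one pass/fail predicate per phase
        pool = [app for app in pool if advances(app)]
    return pool                     # for the SIF: 54 -> 31 -> 16 -> 11 applications
```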
Is this process the right one for the Fund to use? That is up for debate. But it certainly is a rational system. One thing I like about the process is that rather than trying to achieve a false level of precision, it embraces ambiguity. The Fund didn’t approach this process as if it were hiring a government contractor, looking for whoever could deliver on specification at the lowest cost. Instead, the Fund recognized that the process of investing growth capital is steeped in uncertainty. Deciding which applications were best was not reduced to a simplistic set of metrics. While doing so might have made the Fund’s final decisions easy to understand, it would not have resulted in the best selections.
This sort of process is very similar to how for-profit investors build a portfolio. Rather than simply rating investment opportunities on a set of simplistic criteria, for-profit investors use multiple quantitative and qualitative screens and ultimately make a decision that attempts to holistically capture a broad range of inputs.
The investment process, both for-profit and nonprofit, is thick with ambiguity. Turn on a financial news station any day and you’ll see two professional investors, both with strong arguments, debating whether a specific company is a good or bad investment. Amazingly, two investors will frequently hold almost opposite opinions of the same investment opportunity. This disagreement is said to “make a market” because every buyer needs a seller to execute a transaction.
The key to evaluating an investment process, either for-profit or nonprofit, is to examine the validity of their process rather than their actual decisions. This is because a good process will hold up over time, while even a bad process can get lucky and make good decisions in the short term.
The Social Innovation Fund’s selection process is valid. It centered on the use of external experts to evaluate the applications based on a range of inputs. While these experts sometimes came to opposite conclusions, this actually validates the process. If the experts had all agreed all of the time, it would have been evidence that the rating criteria were too simplistic and/or too quantitative. Or, as one reader has argued, the range of expert opinions may in fact be evidence of innovation.
4 Comments
[Update: a quick reader correctly points out that a number of issues pushed the Fund’s disclosure, including Light’s comments and questions from the New York Times, Chronicle of Philanthropy, Nonprofit Quarterly, and others.]
Update to the update, Sean. Paul Light’s comment was the basis for all of the subsequent reporting, including the Times, the Chronicle, NPQ, and various blogs and tweets. No other reviewer expressed similar surprise on the record. The only other “fact” that was the basis for all of the “questions” was that SIF Director Paul Carttar once worked for New Profit, one of the grantees, a fact that he disclosed in writing and which led him to recuse himself from all New Profit-related matters. No independent facts were presented other than Light’s “surprise,” which later proved inconsequential due to a contrary rating by another panel, and Carttar’s past employment, which was disclosed beforehand and had no effect whatsoever on the grants. The rest of the fuss was based entirely on re-reporting of the above.
Steve, it isn’t particularly important to me one way or another, but NPQ was certainly questioning SIF prior to Light’s comments. The Fund even responded by saying it would release the finalist applications before Paul’s post. From what I’ve heard, some of the other media stories were launched before Paul’s post as well. I know that my Chronicle of Philanthropy story was mostly written before Paul’s post, but his post forced me to do a total rewrite.
Prior to Paul’s post, the issue was theoretical in nature. People like me were asking for more transparency as a way to advance social impact. People like NPQ were asking questions from an accountability point of view. Paul’s comments just added fuel to the fire.
Great explanation Sean, thank you.
Sean, terrific overview of the SIF process. I particularly liked the parallels you drew to the for-profit investor approach to building a portfolio. As an SIF reviewer, I’d like to add a couple of points. Reviewers easily spent 50 hours, if not more, on the process. I can only speak to the qualifications of my fellow reviewers, but they were smart, dedicated, and knowledgeable about the nonprofit sector. Our team members separately reviewed each application, then came together on a conference call to discuss each one. We rotated the duties of facilitating and writing up the consensus. For me, the conference call was the most interesting part of the process. I learned a lot by listening to other people’s viewpoints, and I will use that learning in other things I do.