A few weeks ago, I wrote a post titled Getting Results: Outputs, Outcomes & Impact, in which I explained the "jargon" of tracking results in the social sector and argued that these metrics were critically important. In a follow-up post, I argued that tracking results in this way should not be seen as a "finger wagging campaign by the funding side of the table," but instead is the key to a nonprofit becoming a high-performance organization.
One of the comments on the first post (which generated a lively discussion in the comments section) came from Isaac Castillo, head of evaluation at the Latin American Youth Center. LAYC is known for its diligent efforts to track its performance (it even consults for other nonprofits) and was recently announced as one of the few pre-selected subgrantees of the Social Innovation Fund. Isaac wrote:
“I actually think measuring outputs, outcomes, and impact is fairly easy and straightforward. The truly difficult part is getting nonprofits to identify the specific things they want to track.”
This, of course, flies in the face of a lot of thinking about measuring results in the social sector. While I do think that measuring results conclusively can be difficult, it is possible to take a "fairly easy and straightforward" approach to results measurement.
So today I want to rerun a piece on measuring impact from the Mulago Foundation. When I first published this piece, Paul Brest, head of the Hewlett Foundation, left a comment saying, "This is excellent. Should be bottled and distributed." The piece popped up again recently when it re-circulated via Twitter.
The Mulago Foundation: how we think about impact
We measure impact because it’s the only way to know whether our money is doing any good. In fact, we don’t invest in organizations that don’t measure impact – they’re flying blind and we would be too. Those organizations that do measure impact perform better and evolve faster, and discussions around measuring impact almost always lead to new ideas about effectiveness and efficiency.
Everyone’s got their own definition of impact and here’s ours: Impact is a change in the state of the world brought about by an intervention. It’s the final result of behaviors (outcomes) that are generated by activities (outputs) that are driven by resources (inputs).
We’re a small shop, so we needed to develop an approach with enough rigor to be believable, but simple enough to be doable. When we work with organizations, we use these five steps to determine impact and calculate bang for the donor buck:
1. Figure out what you’re trying to accomplish: the real mission.
You can't think about impact until you know what you're setting out to accomplish. Most mission statements don't help that much. We re-formulate the mission in a phrase of roughly eight words or fewer that includes 1) a target population (or setting), 2) a verb, and 3) an ultimate outcome that implies something to measure – like this:
- getting African one-acre farmers out of poverty
- preventing HIV infection in Brazil
If we can't get to this kind of concise statement, we don't go any further – either because they don't really know what they're trying to do or because we simply wouldn't be able to know if they're doing it.
2. Pick the right indicator
Try this: figure out the single best indicator that would demonstrate mission accomplished. Ignore the howls of protest; it's a really useful exercise. Here are some examples relating to the missions shown above:
- Change in farm income
- Decrease in HIV infection rates
Sometimes that best indicator is doable, and that’s great. Other times you might need to capture it with a carefully chosen – and minimal – combination of indicators. When there is a behavior with a well-documented connection to impact – like children sleeping under mosquito nets – you can measure that and use it as a proxy for impact. Projects that can’t at least identify a behavior to measure are too gauzy for us to consider. Notice that while things like “awareness” or “empowerment” might be critical to the process that drives behaviors, we’re interested in measuring the change that results from that behavior.
We don't pretend that this method captures all of the useful impacts and accomplishments of a given organization and its intervention. What it does do for us as philanthropic investors is answer the most critical question: did they fulfill the mission?
3. Get real numbers
You need to 1) show a change and 2) have confidence that it's real. This means that:
- You got a baseline and measured again at the right interval, and
- You sampled enough of the right people (or trees, or whatever) in the right way.
There are two parts to figuring this out: the logical side and the technical side. With an adequate knowledge of the setting, you can do a lot just by eyeballing the evaluation plan – looking carefully at the methods to be used to see if they make sense. Most bad schemes have an obvious flaw on close examination: they didn't get good baseline data, they're asking the dads when they ought to ask the moms, or they're asking in a culturally inappropriate way. The technical part has mostly to do with sample size, and a competent statistician can easily help you figure out what is adequate.
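To make the sample-size point concrete, here is a minimal sketch of the kind of calculation a statistician might run for the HIV-prevention example above. It is not from the Mulago piece; the infection rates and study parameters are invented for illustration, and it assumes the standard two-proportion approximation (and the scipy library):

```python
# Rough sample-size arithmetic: how many people per group would you need
# to detect a drop in infection rates from 10% to 6%? (Illustrative numbers only.)
from scipy.stats import norm

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group to detect a change from proportion p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)            # two-sided significance
    z_beta = norm.ppf(power)                     # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

print(round(sample_size_per_group(0.10, 0.06)))  # roughly 700+ people per group
```

The exact formula matters less than the point: "enough of the right people" is a number you can work out before the evaluation starts, not a feeling you defend afterwards.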
4. Make the case for attribution
If you have real numbers that show impact, you need to make the case that it was your efforts that caused the change. This is the hardest part of measuring impact, because it asks you to be able to say what would have happened without you. When real numbers show there has been a change, a useful thing to ask is “what else could possibly explain the impact we observed?”
There are three levels – in ascending order of cost and complexity – of demonstrating attribution:
- Narrative attribution: You've got before-and-after data showing a change and an airtight story that shows it is very unlikely that the change came from something else. This approach is vastly overused, but it can be valid when the change is big, tightly coupled with the intervention, involves few variables (factors that might have influenced the change), and you've got a deep knowledge of the setting.
- Matched controls: At the outset of your work, you identified settings or populations similar enough to the ones you work with to serve as valid comparisons. This works when there aren't too many other variables, you can find good matches, and you can watch the process closely enough to know that significant unforeseen factors didn't arise during the intervention period. This is rarely perfect; it's often good enough (a simple worked illustration follows this list).
- Randomized controlled trials: RCTs are the gold standard in most cases and are needed when the stakes are high and there are too many variables to be able to confidently say that your comparison groups are similar enough to show attribution.
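As a hypothetical illustration of how matched controls help answer "what would have happened without you," here is the arithmetic evaluators call a difference-in-differences. The term and the numbers are mine, not Mulago's, and the figures are invented:

```python
# Hypothetical before/after figures, e.g. average annual farm income in dollars.
program_before, program_after = 210, 340        # farmers the organization worked with
comparison_before, comparison_after = 205, 250  # matched farmers it did not work with

program_change = program_after - program_before            # 130
comparison_change = comparison_after - comparison_before   # 45

# The comparison group's change stands in for "what would have happened anyway."
attributable_change = program_change - comparison_change   # 85
print(f"Change plausibly attributable to the intervention: {attributable_change}")
```

If the comparison farmers had improved just as much as the program farmers, the before-and-after story alone would have badly overstated the impact.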
5. Calculate bang-for-the-buck
Now that you know you’ve got real impact, you need to know what it cost. You can always generate impact by spending a ton of money, but it won’t give good value for the philanthropic dollar and it won’t be scalable (and it probably won’t last). Stick with the key impact you’ve chosen; don’t get sucked into the current trend of trying to monetize every social impact you can think of.
The easiest – and arguably most valid – way to calculate bang-for-the-buck is to divide the total donor money spent by the total impact. In organizations that do more than one kind of project, it is often possible to split out what they spent for their various impacts. Remember that start-ups are expensive, so don't worry too much about their current figures; do see if their projections for steady-state operations make sense, and assume (as we learned the hard way) that those projections are usually at the way-optimistic end of the scale.
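As a minimal worked example of that division – the figures are invented, not Mulago's – the calculation is simply cost over impact:

```python
# Made-up numbers purely for illustration of the bang-for-the-buck division.
total_donor_money_spent = 500_000   # dollars spent over the period measured
total_impact = 2_000                # e.g., farm families moved out of poverty

cost_per_unit_of_impact = total_donor_money_spent / total_impact
print(f"${cost_per_unit_of_impact:,.0f} per family moved out of poverty")  # $250
```

The number only means something next to the impact measurement in the earlier steps: $250 per family is a bargain if the impact is real, and worthless if it isn't.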
In the end, though, the key to figuring out real impact is an honest, curious, and constructive skepticism. A healthy dose of skepticism – not cynicism – is a gift to doers, funders and the social sector as a whole.
6 Comments
True, it can be this simple, although not easy. In our experience, the organizational characteristics that support this type of thinking are realism, honesty, critical thinking, and most of all the ability to keep one's ego in check. Fortunately, none of these requires additional money or a consultant. They rest and live in the organizational culture.
Sean….
If it is not too late to comment, I'd like to add an observation to the conversation on Outputs, Outcomes, and Impacts, and the Mulago document.
It is particularly important to underscore the differentiation Mulago makes between Figuring Out What You’re Trying to Accomplish, and Picking the Right Indicator. Many nonprofits get lost transitioning between the first and second steps because they are confused by the difference between an Impact and an Outcome.
The Mulago piece mentions the fact that mission statements rarely are of any help. The reason for this is that mission statements are usually written from a broad perspective…sort of the view from 30,000 feet. It is at the Indicator level that organizations begin to get closer to the Outcomes that they can and ought to pursue.
Beyond this, however, while the Mulago document does mention that the best indicator is/should be doable, it does not point out other characteristics that would significantly help nonprofits hit the targets they select.
Yes, a good indicator (or outcome) is doable, often with a stretch of effort. But it also should be:
1. A Positive Improvement, not merely the absence of a problem.
2. Meaningful, and not a change that is in actuality largely cosmetic.
3. Sustainable, something that will outlast the intervention that brings the change about.
4. Bound in Time, so that both the implementer and the funder/investor know when the results will show.
5. Bound in Number, so that it is neither couched nor attempted in terms of a reach beyond the capacity of the organization.
6. Narrowly Focused, targeting changes that the organization can bring about.
7. Measurable, a change that is discernible and amenable to some sort of quantification. The Mulago document notes that "Projects that can't at least identify a behavior to measure are too gauzy." The key is not to exclude goals such as the "awareness" or the "empowerment" that Mulago mentions, but rather to find measurable proxies that would indicate that these qualities have improved.
8. Verifiable, something that can be discerned by an outside observer.
These are the qualities I would add to the excellent points made in the Mulago document. Nonprofits that apply them to the indicators or outcomes they choose, and the targets they pursue, will be in a much better position to bring about and demonstrate real impact to their funders and to those they serve.
Dr. Penna is an advisor to Charity Navigator and the author of the forthcoming book The Nonprofit Outcomes Toolbox, to be published by Wiley & Sons this winter.
Sean, I’d be interested to hear how you or your readers would apply these concepts to an arts organization. Art is subjective by nature, which makes its effects exceedingly difficult to track.
Describing the arts as a means to some larger end (e.g., better test scores, improved neighborhood economies) seems to be missing the point. But because of donors’ increasing demand for hard data, it’s often the only thing we’re left with.
A school kid comes into a museum, has an epiphany in front of a work of art, and goes home with his or her outlook totally changed. How in the world are we going to put that in a spreadsheet?
Sean……
In response to Bob Arnold’s question about applying outcomes and impacts to the arts, I’d like to offer the following.
Yes, this is an issue artistic and cultural organizations have grappled with for years as the outcomes movement seeped across the boundaries of the human and direct services sector where it began, and into the grant applications being made by nonprofit performing and visual arts groups. "How," they ask, "do we measure our impact? What are the outcomes we deliver to our stakeholders and can promise to our investors?"
As Mr. Arnold writes, it is tough to quantify an epiphany. What can be captured, however, are proxies for the impact an artistic experience may have. Mr. Arnold mentioned the example of a school kid. If one presumes that this child visited a museum as part of a school outing, then the opportunity exists to work in concert with his or her teachers to capture at least some of the impact the visit had. Was what the kids saw worked into their curriculum? Was there an assignment that might demonstrate what the kids took away from the visit? Was something learned later demonstrated in a follow-up assignment?
Too often the "outcomes" or impacts that artistic organizations claim to pursue are described in broad, community-wide terms. Mr. Arnold mentioned some of them himself: better test scores and improved neighborhood economies. In a previous post I wrote that the best outcomes or impacts are often described in narrowly focused terms. This, I suggest, is at least part of the answer Mr. Arnold seeks. If museums, for example, want to have an impact on kids, then at least some of their targeted outcomes should be couched in terms that reflect that intent. If one goal is to broaden kids' view of what art is and encompasses, then together with the schools in a community, a local arts organization might capture information on whether a trip to a museum in fact accomplished that.
All arts organizations would probably love to be responsible for an "epiphany." But epiphanies make poor targeted outcomes precisely because of their elusive nature. Art is intensely personal and subjective. That experience, as Mr. Arnold writes, cannot be put on a spreadsheet. What can be captured, however, is whether a visit to a museum taught anything, whether it resonated in any way, whether a performance changed a perspective. Rather than focusing on the epiphanies we cannot get at, perhaps the arts community could try focusing on the proxies which, with a little thought and work, we often can get at.
Bob, you bring up an important point. Robert is far more of an expert on outcomes than I am, and I think his advice is good.
I do think it is important that we use outcome data to help prove that something works, but NOT to view the absence of outcome data as proof that something does not work. Science works the same way: a good scientist will not tell you that aliens don't exist, just that there is an absence of evidence. In some cases, certain sorts of nonprofit work may find it incredibly difficult to prove that the work makes a difference. It is important that people who care about proof do not make the mistake of thinking they know that these sorts of nonprofits do not make a difference.
Thank you both for your thoughtful replies. Dr. Penna, I very much like the idea of finding proxies to indicate a program’s effect. I think we often feel like we have to jump directly from the outputs of a specific program (e.g., students made art) to the fulfillment of our mission statement (these students’ lives were enriched). Thinking of outcomes as narrowly defined — and as separate from the broader impact — is a very helpful concept.
And Sean, very true — as they say, absence of evidence isn’t necessarily evidence of absence.