The Mulago Foundation: How We Think About Impact

[Update: The Mulago Foundation has launched a website. You will find more information about them here]

Measuring impact is always a hot topic with institutional foundations. However, methodologies for measuring impact are often densely theoretical and difficult to implement in practice on a broad scale. This is especially true if you try to apply impact measurement to the activities of smaller funders.

So I was fascinated to recently read a two-page paper by the Mulago Foundation on how they think about measurement. To my way of thinking, this framework lends itself to use by funders of any size as a way to think about measuring impact while adapting its specific protocols to their own situations. The Mulago Foundation does not have a website, but it does maintain a website for the Rainer Arnhold Fellows Program, which it runs.

The Mulago Foundation: how we think about impact

We measure impact because it’s the only way to know whether our money is doing any good. In fact, we don’t invest in organizations that don’t measure impact – they’re flying blind and we would be too. Those organizations that do measure impact perform better and evolve faster, and discussions around measuring impact almost always lead to new ideas about effectiveness and efficiency.

Everyone’s got their own definition of impact and here’s ours: Impact is a change in the state of the world brought about by an intervention. It’s the final result of behaviors (outcomes) that are generated by activities (outputs) that are driven by resources (inputs).

We’re a small shop, so we needed to develop an approach with enough rigor to be believable, but simple enough to be doable. When we work with organizations, we use these five steps to determine impact and calculate bang for the donor buck:

1. Figure out what you’re trying to accomplish: the real mission.

You can’t think about impact until you know what you’re setting out to accomplish. Most mission statements don’t help that much. We re-formulate the mission in a phrase of ~8 words or less that includes 1) a target population (or setting), 2) a verb, and 3) an ultimate outcome that implies something to measure – like this:

  • getting African one-acre farmers out of poverty
  • preventing HIV infection in Brazil

If we can’t get to this kind of concise statement, we don’t go any further – either because they don’t really know what they’re trying to do or because we simply wouldn’t be able to know if they’re doing it.

2. Pick the right indicator

Try this: figure out the single best indicator that would demonstrate mission accomplished. Ignore the howls of protest; it’s a really useful exercise. Here are some examples relating to the missions shown above:

  • Change in farm income
  • Decrease in HIV infection rates

Sometimes that best indicator is doable, and that’s great. Other times you might need to capture it with a carefully chosen – and minimal – combination of indicators. When there is a behavior with a well-documented connection to impact – like children sleeping under mosquito nets – you can measure that and use it as a proxy for impact. Projects that can’t at least identify a behavior to measure are too gauzy for us to consider. Notice that while things like “awareness” or “empowerment” might be critical to the process that drives behaviors, we’re interested in measuring the change that results from that behavior.

We don’t pretend that this method captures all of the useful impacts and accomplishments of a given organization and their intervention. What it does do for us as philanthropic investors is answer the most critical question: did they fulfill the mission?

3. Get real numbers

You need to 1) show a change and 2) have confidence that it’s real. This means that

  • You got a baseline and measured again at the right interval, and
  • You sampled enough of the right people (or trees, or whatever) in the right way.

There are two parts to figuring this out: the logical side and the technical side. With an adequate knowledge of the setting, you can do a lot by just eyeballing the evaluation plan – looking carefully at the methods to be used to see if they make sense. Most bad schemes have an obvious flaw on close examination: they didn’t get good baseline data, they’re asking the dads when they ought to ask the moms, they’re asking in a culturally inappropriate way. The technical part has mostly to do with sample size, and a competent statistician can easily help you figure out what is adequate.
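To give a feel for the sample-size side, here is a rough sketch (mine, not anything Mulago prescribes) of the standard formula for how many people you need per group to detect a change between two proportions – say, an HIV infection rate. The significance and power figures are conventional defaults, and every number in the example is invented:

```python
# Illustrative sketch only: per-group sample size needed to detect a
# change between two proportions (e.g. infection rates before/after).
from math import sqrt, ceil

def sample_size_two_proportions(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Per-group n to detect a change from p1 to p2 at 5% significance
    (two-sided) with 80% power; z values are the standard defaults."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g. to detect a drop in infection rate from 10% to 5%:
n = sample_size_two_proportions(0.10, 0.05)  # 435 per group
```

A statistician would adjust this for clustering, attrition, and so on, but even the raw formula shows why small convenience samples rarely settle the question.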

4. Make the case for attribution

If you have real numbers that show impact, you need to make the case that it was your efforts that caused the change. This is the hardest part of measuring impact, because it asks you to be able to say what would have happened without you. When real numbers show there has been a change, a useful thing to ask is “what else could possibly explain the impact we observed?”

There are three levels – in ascending order of cost and complexity – of demonstrating attribution:

  1. Narrative attribution: You’ve got before-and-after data showing a change and an airtight story showing that it is very unlikely that the change came from something else. This approach is vastly overused, but it can be valid when the change is big, tightly coupled with the intervention, involves few variables (factors that might have influenced the change), and you’ve got a deep knowledge of the setting.
  2. Matched controls: At the outset of your work, you identified settings or populations similar enough to ones you work with to serve as valid comparisons. This works when there aren’t too many other variables, you can find good matches, and you can watch the process closely enough to know that significant unforeseen factors didn’t arise during the intervention period. This is rarely perfect; it’s often good enough.
  3. Randomized controlled trials: RCTs are the gold standard in most cases and are needed when the stakes are high and there are too many variables to be able to confidently say that your comparison groups are similar enough to show attribution.
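To make the matched-controls level concrete, here is a minimal sketch (my illustration, not Mulago's, with invented numbers) of the arithmetic behind comparing before-and-after change in the program group against the same change in matched controls:

```python
# Illustrative sketch only: a simple difference-in-differences estimate
# separates the intervention's effect from background change that
# affected the matched controls too. All figures are made up.
def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Change in the treated group minus change in the matched controls."""
    return (treat_after - treat_before) - (control_after - control_before)

# e.g. farm income rose $120 in program villages but only $40 in
# matched comparison villages over the same period:
effect = diff_in_diff(300, 420, 310, 350)  # attributable change: $80
```

The subtraction is trivial; the hard part, as the paper says, is finding matches that are genuinely comparable and watching for unforeseen factors during the intervention period.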

5. Calculate bang-for-the-buck

Now that you know you’ve got real impact, you need to know what it cost. You can always generate impact by spending a ton of money, but that won’t give good value for the philanthropic dollar and it won’t be scalable (and it probably won’t last). Stick with the key impact you’ve chosen; don’t get sucked into the current trend of trying to monetize every social impact you can think of.

The easiest – and arguably most valid – way to calculate bang-for-the-buck is to divide the total donor money spent by the total impact. In organizations that do more than one kind of project, it is often possible to split out what they spent for their various impacts. Remember that start-ups are expensive and don’t worry so much about their current figures, but do see if their projections for steady-state operations make sense and assume (as we learned the hard way) that they are usually at the way-optimistic end of the scale.
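The arithmetic here is deliberately simple. A minimal sketch (my illustration with made-up figures, not Mulago's numbers):

```python
# Illustrative sketch only: cost per unit of impact is total donor
# money spent divided by total impact achieved. Figures are invented.
def cost_per_unit_impact(total_spent, total_impact):
    """Dollars of donor money per unit of mission-level impact."""
    return total_spent / total_impact

# e.g. $500,000 of donor money spent; 2,000 farm families moved out
# of poverty on the chosen indicator:
cost = cost_per_unit_impact(500_000, 2_000)  # $250 per family
```

The division is the easy part; the discipline is in spending it only on the key impact you committed to measuring, rather than monetizing every side effect.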

In the end, though, the key to figuring out real impact is an honest, curious, and constructive skepticism. A healthy dose of skepticism – not cynicism – is a gift to doers, funders and the social sector as a whole.


  1. Paul Botts says:

    Wow, I really really really like that piece! Clear choices and conclusions expressed in plain common-sense language — so rare in our sector and so great!

    The opening and closing paragraphs in particular fit squarely into the “WHAT HE SAID!” category for me. Hmm, what would be the foundation-sector version of shouting “DITTO!!!”….?

    Where can I find (for purposes of sharing it with everyone I work with) that essay?

  2. Phil Steinmeyer says:

    I agree. Excellent, concise, clear.

    This method may be harder in practice than in theory, but as a goal, I commend it.

  3. Tye Johnson says:

    Excellent article. As a social scientist on the periphery of the non-profit sector, I’ve noticed that the evaluation of impact is frequently non-existent or violates basic methodological principles.

    The issue of attribution is particularly problematic, and while it is true that an experimental methodology is the only hope of demonstrating “causation”, random assignment and variable manipulation are fraught with ethical issues that may not be possible to overcome.

    “Natural” experimental methods may be appropriate in some venues, but solid methodologies short of experimental methods may be all that can be used.

    I’m certainly available to consult on impact studies.

  4. Paul Brest says:

    This is excellent. Should be bottled and distributed.
    Paul Brest

  5. Thanks for all the kind comments. I’ve made sure Mulago saw them. Paul, Mulago is a foundation with three staff members. As far as I know, this document is not in a distributable format. But you can always distribute this blog post!

    I’m intrigued whether part of the reason why this post resonated with people is because of the informal language used to describe what is traditionally an academic subject. Since the four of you liked the post, would you let me know whether you think the fact that it was just two pages and used informal language was part of its appeal?

  6. Paul Botts says:

    I would rank the document’s plain language and its conciseness as its second and third most positive attributes.

    Those are important, and welcome, and all too rare in our sector. But the number one reason for the appeal of the piece is its actual content: it states a tightly-reasoned and specific case for impact assessment in both philanthropy specifically and in non-profit work generally. And then in the final paragraph it also has a clear strong statement of overall philosophy towards the subject.

    That content would grab me even if it wasn’t so nicely put together, in other words.

  7. Randy Newcomb says:

    Well done. So much of the discussion about measuring impact gets lost in its own vernacular. Congratulations Mulago!

  8. Thank you so much for this. As a nonprofit that has long tracked outputs of our work such as acres converted to sustainable farming practices, but is just now trying to start tracking the broader impact of our work on poverty alleviation and the environment, this is very helpful for getting our heads around the basic issues we need to consider.

  9. It’s not just the informal, non-academic language here. What makes this a great piece is that it cuts away the usual blather about “mission” and “mission statement” to ask
    1) what would have happened without you, and 2) “what else could possibly explain the impact we observed?” That’s it, folks. That’s what we should all be asking all the time.

    Any organization has to have a clear and real set of beliefs and purposes, and a real understanding of not just why the organization exists, but why it needs to exist, to be able to see whether it is actually making a difference. Whether it should, in fact, continue to exist.

    This is impressively straightforward. Loved it.