Philanthropy Evaluation: The Courtroom Approach

Steven Mayer is one of the people behind the Pathways to Progress website, dedicated to social justice philanthropy. Albert Ruesga recently wrote that “metrics-based” philanthropy and “social justice” philanthropy are often viewed as two warring camps. But Albert suggested that in fact, “We fail to appreciate how closely united these two camps are in their rejection of philanthropy as usual.”

Today I want to highlight a recent essay by Steven Mayer that I think shows an approach to metrics with the potential to bridge the divide between “metrics” and “social justice”. Because of the way Steven frames his approach to evaluation, I think he even presents a way to think about these issues that bridges the gap between the participants in the philanthrocapitalism debate.

Steven Mayer:

Our website JustPhilanthropy.org presents many productive avenues for pursuing social justice using the resources of philanthropy. Funders, nonprofits, and potential donors exploring these options frequently ask, “How can we evaluate these efforts?”

Needed: useful evaluation questions

“What is being achieved through this effort?” and “What kind of results are you getting?” are worthwhile questions that must be addressed to be fair to those who support this work. But demands for “measurable impact” and “outcome measures” are inappropriately placed on separate, local efforts; they apply more to the bigger picture, the picture indicated by disparities data. This is not to avoid the questions, but instead to find better ways of answering them. More satisfying data that inform next steps, stimulate innovation, engage participating stakeholders, and make better use of scarce philanthropic capital would come from asking for “evidence of progress” or even “early signs of impact.”

Think courtroom, not science

To appreciate these better questions, try this mental exercise: assume the program you support or operate has been accused of being trivial or ineffective, doing nothing to reduce disparities or improve social justice. What evidence could you provide in its defense? Think of a parade of witnesses testifying from their unique expertise, vantage point, experience, and vested interest. What “portfolio of evidence” could make a case good enough to persuade a jury of peers that this work, when considered in context, is useful and necessary for closing a key disparity?

No less rigorous or accountable

Asking for “evidence of progress” is by no means a diminished demand for rigor. Instead, it frames evaluation in more familiar and approachable terms. Data of all kinds can be considered — numbers, stories, graphs, pictures, records, opinions, artifacts, etc. There is no single “measure” that communicates effectiveness or truth, just as in a courtroom no single witness provides all the testimony. In a court, multiple lines of evidence are entered and judged on their merits, resulting in conclusions that stand tests of credibility and accountability.

I’ve seen lots of frameworks borrowed from different disciplines in an attempt to find a good way to evaluate the effectiveness of nonprofits and philanthropy. I even once suggested that we should turn to movie critics as a model for how evaluation should be performed. But I think Steven’s courtroom framework is elegantly simple, and it captures the way I think of evaluation and impact analysis better than I have ever been able to describe it myself.

The fact is, I think that the best way to evaluate the social sector is via a system similar to investment research on publicly traded stocks. But to most people this is an alien system, one they incorrectly believe is concerned only with quantitative evidence. In actuality, stock market analysis is much more like the idea that Steven presents: “Data of all kinds can be considered — numbers, stories, graphs, pictures, records, opinions, artifacts, etc. There is no single ‘measure’ that communicates [the potential of an investment idea].” But most people do understand that in courtrooms all sorts of evidence are presented and evaluated as a composite whole.

Steven’s framework is brilliant. The one point I would add (I think Steven would agree, although he doesn’t make it explicitly) is that in a courtroom things must be proved “beyond a reasonable doubt.” That standard makes sense because the ramifications of deciding incorrectly are very high. But in the nonprofit/philanthropy world, we simply need to get to a point where it can generally be agreed that some funding opportunities are better than others.

Bravo Steven!

4 Comments

  1. young staffer says:

    I do think the framework captures a good deal of what seems valuable to me in evaluation — kudos to Steven. The hardest point to get across in these discussions is often the notion that evaluation involves considering varied types of evidence and wading through it with all of its contradictions, inadequacies, and ambiguities.

    I would make a slightly different point about where the analogy falls a bit flat for me (and, to be clear, it’s not meant to contradict anything that’s been said): court proceedings are intended to be one-time affairs, judging the past. As a funder, it’s important to know that organizations are evaluating, learning, changing, responding to a new environment, and then re-evaluating on an ongoing basis. You don’t want to just “rule” on whether the organization is good or bad at achieving its mission and leave it at that. You want to know whether it is dynamic and responsive to its environment or static and unchanging. How it is learning and planning is crucial, not just its past success or failure.

  2. Incredibly important point. Thanks for making it. Funders are not all-knowing judges in this situation. And it is certainly not a one-time decision.

  3. Tony Pipa says:

    I like Steven’s common-sense approach. It’s a useful elaboration of the concept that Jim Collins presented in his monograph “Good to Great and the Social Sectors”: “It doesn’t really matter whether you can quantify your results. What matters is that you rigorously assemble evidence — quantitative or qualitative — to track your progress. If the evidence is primarily qualitative, think like a trial lawyer assembling the combined body of evidence. If the evidence is primarily quantitative, then think of yourself as a laboratory scientist assembling and assessing the data.”

  4. Excellent advice, thanks Tony.