An Experiment in Outcome Measurement

Recently, Charity Navigator CEO Ken Berger said that one of the problems they’ll have in moving to an outcome measurement rating system is that fewer than 10% of nonprofits are actually measuring outcomes internally. While measuring outcomes (the effects a nonprofit’s programs actually have) might seem simple in theory, in practice it gets very complicated and can be very expensive to implement.

So that’s why I’m excited to highlight a project being sponsored by the San Diego chapter of Social Venture Partners. SDSVP member and Tactical Philanthropy reader David Lynn writes:

We have a great opportunity for performing a quality case study on outcome measurement, and we’re looking for other groups that may want to get involved and help form some best practices.  We would like to publicize this process and provide enough transparency so that we can all learn and benefit.

San Diego Social Venture Partners is working with our Investee, Elder Law & Advocacy, to launch a new program focused on scams against seniors.  The program is a ground-up initiative within a well-established organization that already has staff, funding, plans, and plenty of SDSVP consulting – all the elements needed to embark on a new initiative, but early enough in the process that we can still focus the program.

However, while the need to prevent elder abuse is well defined and fairly obvious within the sector, the outcomes of a prevention program are not.  Thus, the open questions:

– What are the outcomes that we should measure?
– How should we measure those outcomes?
– How can we best track and report on those measurements?

Here’s the summary of the program:

Senior Shield is a community-wide initiative designed to help seniors avoid being victimized by unethical individuals who perpetrate scams and fraud and, as a result, to help these seniors preserve their assets, maintain their physical and emotional health, and age with the best possible quality of life.

Through education and legal assistance to seniors, including a Fraud Hotline, and through advocacy at the local and state level, Elder Law & Advocacy will achieve these goals:

1) Protect unsuspecting seniors from abuse by family, caregivers, and strangers;
2) Help seniors avoid falling victim to Medicare fraud;
3) Promote awareness among legislators of the increase in scams targeting seniors and the dramatic rise (and related cost) of Medicare fraud.

SDSVP is interested in partnering with foundations, universities, nonprofits or anyone interested in researching the best ways to measure outcomes. If you’d like to participate, contact SDSVP directly or shoot me an email.

Note that SDSVP is committed to running this program in a transparent way so that the field at large can learn from their experiment.

5 Comments

  1. Outcome measurement press/help seeking for SDSVP at Tactical Philanthropy http://tinyurl.com/cglp88

  2. robert M says:

    In a March 20, 2009 Tactical Philanthropy article, Charity Navigator CEO Ken Berger was quoted as saying that fewer than 10% of nonprofits are actually measuring outcomes internally. It was further stated that while measuring outcomes might seem simple in theory, in practice it gets very complicated and can be very expensive to implement. In point of fact, however, not only is this statement not exactly accurate, but it misses the broader lack of knowledge that holds many nonprofits back from effectively achieving and measuring the intended outcomes of their efforts.

    Decades ago, Joseph Wholey and his colleagues realized that measuring the outcomes of many programs was difficult because those efforts had not been designed or managed to produce outcomes in the first place. While the language of outcomes has certainly permeated the nonprofit world in recent years, many organizations are still unclear about the difference between activity and outputs on the one hand, and outcomes on the other. Often too, being unaware of the characteristics of good, viable outcomes, more than a few nonprofits struggle to achieve goals that are imperfect for a variety of reasons.

    Adding to this situation is an often evident lack of understanding among nonprofits regarding their capacity to achieve the goals they have set for themselves and their programs, “capacity” all too frequently viewed as merely a question of money or staff size.

    Finally, although there are many very good outcome frameworks that can help nonprofits identify good outcomes, manage towards them, gauge their actual capacity, track their progress, make verification and reporting more useful, and help these organizations profit from their experience, many nonprofits are unaware of these models or of how to use them.

    In many cases it is this lack of awareness, rather than any inherent difficulty or expense involved, which holds nonprofits back from incorporating outcome management and measurement into their work.

    The vast majority of nonprofits want to be effective and do their absolute best on behalf of those they serve. Those who have developed and refined the many outcome-based tools available stand ready to help. But in the end, it is up to the investor community, the philanthropies, foundations and governmental agencies who underwrite the work of most nonprofits, to make this expertise available to their client, partner and grantee organizations.

    Robert M. Penna, Ph.D.

    Dr. Penna is an outcomes consultant and the lead author of Outcome Frameworks. His latest book, The Outcomes Toolbox, is being published by The Rensselaerville Institute. He can be reached at rmpc52@aol.com.

  3. Robert, I believe that funders should expect their donations to cover all costs, not just program costs. So having funders expect to pay for outcome measurement jibes with my thinking. It seems to me that the key is making sure the outcomes measured are relevant to the nonprofit in managing their business. If they are, then the cost is an investment rather than an expense; if they are not, then the funders are probably paying attention to the wrong outcomes.

  4. thomast says:

    How do you measure the outcome of something like “increase legislator awareness”? You can measure how many calls and meetings you have or how many printouts and emails you send, but you can’t actually measure something like legislator awareness. Even if you sit them and their staff down with a questionnaire, they will fib.

    Same with protecting seniors from abuse. You can measure your activities, but you have no way of measuring whether you PREVENTED something from happening. Do you have a questionnaire for a potential abuser with a checkbox for “Was going to abuse my grandfather till I took your seminar?”

    I am totally not on board with this outcome measurement business. Social change and services do not take place through questionnaires. Find the right people, give them resources, let them learn and document their learning, help them implement what they’ve learned and continually improve it.

    The bottom line is hiring the right people and then trusting them and working and communicating with them consistently. This obsessive measurement does not take the place of trust and communication.

  5. Dr. Robert M. Penna says:

    Blogger Thomast asks how one might measure something that was prevented. Good question. One way is to measure results against a baseline of known occurrences of a particular situation or condition. If, for example, statistics showed that during the school year X number of teenaged girls in a school or district could be expected to become pregnant, then one way for a program designed to prevent pregnancies to measure its effectiveness is to see if it can achieve a reduction in the average number of pregnancies within a target population. In this case results are measured against statistically predicted occurrences.
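
    In code form, a minimal sketch of that baseline comparison might look like the following; the figures are hypothetical, and a real program would substitute its own statistically predicted baseline and observed counts.

    # A minimal sketch of measuring a prevention outcome against a
    # statistically predicted baseline. All figures are hypothetical.

    def reduction_vs_baseline(expected: float, observed: float) -> float:
        """Percent reduction of observed occurrences relative to the predicted baseline."""
        return (expected - observed) / expected * 100

    expected_occurrences = 40  # hypothetical: occurrences predicted from prior-year statistics
    observed_occurrences = 32  # hypothetical: occurrences actually recorded during the program year

    print(f"Reduction vs. baseline: "
          f"{reduction_vs_baseline(expected_occurrences, observed_occurrences):.1f}%")
    # Prints: Reduction vs. baseline: 20.0%

    The same arithmetic applies whether the baseline is teen pregnancies, reported scams, or any other occurrence for which historical statistics exist.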

    Similarly, with protecting seniors from abuse and scams, depending upon what initiatives made up the effort, it should be possible to ascertain whether a program contributed to a reduction in levels traditionally seen in past years.

    As for measuring increased legislator awareness, while I admit that after 13 years of service in the NY State Senate I too wonder about how to really get through to elected officials, one barometer I might suggest would be such indirect indicators as mentions of an issue in legislators’ newsletters, mailings, comments and other forms of communications.

    The point is that there are no “perfect” measures. Often indirect, secondary or proxy measures are all we have. But that should not stop nonprofits from working towards meaningful results and making the effort to measure, gauge or substantiate their impact.