During the conversations about what to measure in philanthropy, a dominant theme has been that no universal metric will ever work (although some participants do not agree). This idea is validated by measurement practices in for-profit markets, where different metrics are believed to be important for different companies.
So how should an individual nonprofit think about measurement?
I got the following email from a reader recently:
Your blog. I read it every day. It’s great. But frustrating.
How do WE measure success? We’re trying to implement a program like [deleted to protect privacy]. It will be difficult to quantify success, especially short term. We could have 5 students and really change their lives now–or maybe not be able to point to the impact for years. We could have 50 students and not connect at all. When we discuss this among the staff and with well-meaning supporters, everyone says to just make something up. That really grates on me. And we can’t be the only program with the same problem.
This was my answer:
For a minute, don’t think about numbers. Just tell me what you think your organization would look like in five years if it were successful. For instance, if you raised and spent $1 million and over those five years worked with 5 students, would that be a success? What about 500 students? Or 5,000? If you had a choice between working with 500 students and feeling like you exposed them all to music but didn’t really change their lives, would that be better or worse than working with just 5 students and feeling that you totally changed all of their lives for the better?
After you have an idea of what success would look like, then we can think about ways to measure it.
4 Comments
So how should an individual nonprofit think about measurement?
There are useful and practical ways to think about measuring one’s effectiveness. To your reader, I would say:
1- Set your Goal: Where do you want to be with your program in, say, three years?
2- Describe your Strategies: How do you plan to get to your goal?
3- Select your Progress Benchmarks: What intermediate results could signal progress toward your goal? An example: at the end of the first year, 70% of the students decide to enroll in next year’s activities. This benchmark would help show how ‘connected’ the students are to the program.
Once you’ve defined your benchmarks, it will be easier to determine whether you are making progress and whether your measurements are solid. And if you’re not making the expected progress, it’s time to revise your strategies and tactics, or perhaps even your measurements.
Hope it’s helpful! This is a difficult topic to discuss in the short space of a blog comment…
Sean et al
I just want to point you to another take on this conversation, as well as on the ideas in TP’s earlier discussion about Google.org and data/metrics. It is unfolding over at HybridVigor under the headline “Metrics run amok are killing nonprofits.”
http://hybridvigor.net/2008/01/07/metrics-run-amok-leaving-nonprofitsto-bang-their-empty-tin-cups/
Disclosure: I blogged (at philanthropy2173 and SIIR) about the Re*Framing piece that Caruso wrote (and cites).
Edith, my point and yours (I hope I’m representing you correctly) is that measurement is not about fitting your organization into pre-defined metrics. It is about identifying or creating metrics that can help you perform better.
Thanks for your comments.
Sean,
Yes, measurements only work to the extent that they’re designed to inform your own strategy. There have been plenty of entries on this and other blogs arguing that measuring nonprofit effectiveness is hard, that applying business-like metrics to nonprofits is useless, that the nonprofit sector is different, etc. (see the article Lucy B. refers to in her post above for a convincing explanation of the short-sightedness of project-specific giving and its restrictive measurements).
I, for one, believe that nonprofits greatly benefit from embracing evaluation, especially the kind of evaluation that is coupled with planning and thought of at the beginning of a program, not at the end. An evaluation that:
• Challenges assumptions
• Is dynamic and conducted in real-time to help flag what’s working and what’s not
• Helps make course corrections
• Is culturally appropriate
• Is adequately staffed and funded.
Furthermore, an evaluation you can discuss and share with your peers and funders. An evaluation that makes your organization–and your field–better.
Just my two cents.