I believe that in many aspects of life, a kind of pendulum effect exists: rather than reflecting reality, people’s opinions tend to swing back and forth around it, vacillating in an arc around the truth. This creates a kind of boom/bust scenario that is very evident in the stock market (dot com stocks will make me rich! Ahh! Dot com stocks are poison!), but also shows up in politics, education, pop culture, etc. I ran across a well-put description this morning on Yahoo Answers:
I think the idea is that: somebody gets a good idea, and then a whole lot of people 1/2 understand it, and make it absolute, taking it to an extreme. Until somebody discovers “another” great idea, (the way things were originally done), and everybody jumps on that runaway train to hell.
The lesson: Moderation. A little common sense goes a long way. Don’t just “swing with the pendulum” of fashion in teaching/learning methods.
I think that there is a strong pendulum effect in philanthropy. I see it at work when we talk about metrics, evaluation, “philanthrocapitalism”, venture philanthropy, etc. Today I want to share with you an excellent article in the Financial Times by Gara LaMarche, the president of The Atlantic Philanthropies. I think LaMarche describes well the way in which approaches to evaluation have “swung too far”, and his recommendations for a middle ground make a ton of sense.
The philanthropic world, poked and prodded by a wave of new donors fresh from success in the business world, is grappling with the issue of evaluation. How do we know that grants – or, as they are now often called, reflecting the influence of the profit-making sector, “investments” – are making an impact?…
…Evaluation is a learning tool for the organisation and the funder, not a stick with which to beat grantees.
…Doing this correctly takes money… Funders should recognise and support their grantees in their efforts to learn what works.
…Evaluation should measure only what is important. Data should never be collected for the sake of it. The “metrics” obsession that has overtaken some funders has not always recognised this. Funders should never make grantees jump through hoops, distracting them from their core mission and costing valuable staff time, for reporting on trivial things. And there is nothing more demoralising, from the grantee’s perspective, than doing all this paperwork only to have it ignored.
…Both funders and the organisations they support need more humility about cause and effect. Organisations working for social or policy change should understand that no significant change was brought about by one organisation working alone.
…Finally, the most important thing: start with what you believe. If you have a passion about ending the death penalty or the isolation of older people – whatever it is – find a way to advance it first and worry about how to measure it second.
You can read the full, excellent article here.
Contrary to public belief (and truth) about many government programs, the Combined Federal Campaign (CFC), the Federal government’s workplace giving program, has less red tape than almost any grant application, and it has absolutely less red tape on the evaluation side – there’s none.
$270 million annually to thousands of local, national and international non-profits — and the money is unrestricted, reliable and predictable.
Bill Huddleston, CFC Expert
I’ve been working on this article for a while now, and this post seemed like the perfect time to share it:
The Good Samaritan & “Performance Measurement”
Currently, there’s a lot of hype in the world about being “results oriented”, and the culture of “performance management” has seeped into almost every realm of American life, including business, government and now the non-profit world as well.
Well, why shouldn’t it? It sounds like the only way to be; after all, who could be “against results” or against “performance measurement”? It sounds great, but like the question “When did you stop beating your wife (or husband)?” it sets the stage in an extremely negative and skewed fashion.
Let’s use a historical example: the story of the Good Samaritan from the Bible, which I believe is so widely known that it qualifies as a societal story, not just a religious one.
To recap: in the parable, a traveler is robbed, beaten, stripped of his clothes and left for dead. Two different people walk by, leaving the robbery victim alone. Then a man from Samaria (the Good Samaritan) comes upon him, and even though Samaritans and Jews hated each other, he stops to render aid. The Samaritan takes pity on the victim, bandages him, pours oil and wine on his wounds, then puts him on his donkey, brings him to an inn and takes care of him. The next day, the Good Samaritan gives the innkeeper two denarii (this was about a month’s earnings at the time) and tells him, “Look after him, and when I return I will reimburse you for any extra expense you have.” (The story is from Luke 10:29-35.)
Now let’s apply modern performance measurement and outcome techniques to this story. After 2,000 years the story still resonates: how many people have been helped because someone remembered the story of the Good Samaritan and acted in a way that was perhaps not their first impulse?
We will never know, and to the performance-management crowd this incident would be recorded today as “too expensive” and “ineffective” – after all, the Samaritan only helped one person. We don’t know if the Samaritan ever came back and paid those extra expenses, and it cost a month’s earnings to help just this one person.
It would also receive the rating “Results Not Demonstrated” – we don’t know if the victim ever recovered, was permanently injured, or suffered mental impairment from his injuries. All we know is that he had the crap beaten out of him and that multiple people walked by until the “unclean” Samaritan stopped to help.
According to the performance measurement tools, the Good Samaritan was a failure.
I think not.
Copyright Bill Huddleston, All rights reserved.
Evaluation. My co-founder and I spent 2 hours standing on a street corner ‘after’ work discussing how to measure and convey the profound experiences we have daily. My stomach churns and my shoulders ache when our expert non-profit adviser talks about metrics. I struggle to add one more thing to my to-do list and I know metrics don’t say enough about why what we’re doing works and why it is important.
Interesting story, Bill. I think the social sciences are a better framework for measuring philanthropy than the hard sciences, but I would still note that even the hard sciences do not consider something worthless if it is not measurable; it is merely not proven. Philanthropists should keep that in mind.
I would argue that the person who had the real impact was the person who told the story of the Good Samaritan and inspired others to act.
I suppose that an organization that does a particularly great job of spurring people to practice good Samaritanism might, under the lens of scientific evaluation, do quite well.
Might not those who use evaluation to surface and reward the most effective promoters of Samaritanism be more likely to increase overall Samaritanism than those who don’t?