In my last post on the Charting Impact project, which seeks to use nonprofits’ answers to five questions as a way to help donors decide who to support, I wrote:
“I’ve come to the conclusion that with these types of questions, it is how the nonprofit answers them, not the specifics of the answers that matter… the quality of the answers will be a stronger indicator than the information cited in the answer.”
Adin Miller questioned this line of thinking in a blog post of his own where he wrote:
“…we can’t be blinded by beautifully written prose or even poorly written responses. The information cited in the answer – whether well or poorly written – should carry tremendous weight in our analysis.”
The nuance here is important. The best analogy to tease out what I’m trying to get at is the job interview.
A strong job applicant is able to communicate their effectiveness by providing details of their past work experience and highlighting accomplishments within the context of a broader narrative that expresses their passion for the job in question.
Job interviewers need to be careful not to be taken in by frivolous attempts to simply look good (the “beautifully written prose” that Adin rightly warns we shouldn’t be blinded by). But at the same time, interviewers do not request large volumes of past work so that they can conduct their own analysis of the applicant’s past performance.
If you are operating in a static environment, in which past performance is very likely to be repeated with almost identical results, then an evaluation process should focus on data analysis. This is the realm in which many scientific disciplines operate. If it can be proven that a medical drug works in a certain way or a vehicle survives crashes with a certain degree of safety, these results are very likely to stay constant no matter where or by whom they are implemented in the future.
But in a dynamic environment, where past performance is only a potential indicator of future results, the more important analysis is of the process rather than the results (although the two are obviously related). Since the future environment will be different from the past, we can’t rely on the data produced in previous environments. This data is useful, but not as important as understanding the ability of the process or organization to perform well regardless of environment.
If nonprofit programs operated in a static environment, any competent nonprofit could simply implement a “proven effective program” and create the results they are seeking. But unfortunately the social sector is a dynamic landscape, where the key to producing results is a strong organization, not merely the use of a specific program.
Charting Impact provides a common framework, a common “playing field” if you will, where nonprofits have an opportunity to make their case. The questions asked are the sorts of questions that highly competent organizations should be able to answer in a compelling way that demonstrates their mastery within their area of expertise. Weaker organizations will find it difficult to answer the questions with the same level of detail and conviction.
Charting Impact is a starting point for understanding the effectiveness of organizations. It is a good starting point because it begins the process at the level of organizational goals and competency. As with a job interview, a couple of pages of text isn’t enough to make a final decision. But the answers to the Charting Impact questions are a much better screen than any of the metrics, stats, ratios or other data analysis approaches that have historically been the focus of nonprofit ratings.