In mid-December, the Corporation for National & Community Service released a Draft Notice of Funds Available (NOFA) for the Social Innovation Fund. It is now accepting public comment on the document until January 15 (simply email your comments to SIFinput@cns.gov). Until that date, I’ve opened the Tactical Philanthropy blog as a public forum for discussion of the Fund and the draft NOFA. For background, please read an explanation of the Fund, why it matters, and the highlights from the draft NOFA.
This is a guest post from Katya Fels Smyth, founder of the Full Frame Initiative. This article was originally published in the Chronicle of Philanthropy.
Last month federal officials announced news many nonprofit groups and foundations have eagerly awaited: how the government’s new Social Innovation Fund plans to award its first year of grants, expected to total $50-million.
It’s perhaps surprising that so much attention has been paid to such a small sum. After all, state, local and federal governments give charities more than $350-billion in grants and contracts every year.
But the excitement seems to be less about the dollar amount than the fund’s laudable purpose: to solve some of the nation’s most challenging social issues by finding existing community-based solutions and using government dollars to spread the approaches that work best to communities nationwide. That’s an important goal, and I am impressed by the thoughtfulness of the fund’s crafters. But the approach the government is taking to determine "what works best" could undermine its entire effort.
My concerns stem from the work of the nonprofit organization I head, the Full Frame Initiative (which is not seeking any money from the Social Innovation Fund, so no hidden agenda here). For the past five years, we have been doing work that resonates with much of what the Social Innovation Fund wants to do: finding out what works best to reduce poverty, violence, and health problems in the nation’s poorest neighborhoods and developing ways to spread those ideas.
What we have learned about how to evaluate those innovations makes me and my colleagues question whether the Social Innovation Fund can have the impact it clearly seeks. Even more worrisome is that the Social Innovation Fund’s requirements will cause foundations and other grant makers to change how they award money in ways we believe are harmful and long lasting.
Attracting private donations is a key goal of the Social Innovation Fund.
The fund plans to give its money to grant makers, which will be responsible for reinvesting that money to expand organizations that have already succeeded on a small scale.
To be eligible to distribute federal money, grant makers must provide one dollar of their own for every Social Innovation Fund dollar they give away. The organizations that receive the money to expand their projects are required to match every dollar they receive with money from sources other than the federal government. That turns the federal government’s $50-million into a total of $200-million focused on innovation and spreading effective programs.
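The matching arithmetic above can be sketched in a few lines of code. The figures come from the draft NOFA as summarized in this article; the function name and default ratios are illustrative, not terms used by the Fund itself.

```python
def total_mobilized(federal_dollars: float,
                    intermediary_match: float = 1.0,
                    grantee_match: float = 1.0) -> float:
    """Total dollars in play after both matching requirements.

    Each federal dollar must be matched 1:1 by the grant maker
    (the intermediary), and each dollar a nonprofit receives must
    be matched 1:1 again from non-federal sources.
    """
    # Grant makers add one dollar of their own per federal dollar...
    pool_after_intermediary = federal_dollars * (1 + intermediary_match)
    # ...and grantees then match every dollar they receive once more.
    return pool_after_intermediary * (1 + grantee_match)

# The Fund's $50-million, doubled twice, yields $200-million total.
print(total_mobilized(50_000_000))
```

Of that $200-million, $50-million is federal and up to $150-million is the private money the matching requirements pull in, which is why the Fund’s rules reach well beyond its own appropriation.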
As the Social Innovation Fund determines which grant makers to support, it plans to give extra weight to ones that pledge to pick nonprofit groups that can prove their effectiveness using a special type of assessment called experimental-design evaluation. This evaluation approach sets up situations in which a group of people getting a "treatment" — whether a drug or an after-school program — is compared with an equivalent group that didn’t receive the same help.
The Social Innovation Fund says it will make some exceptions "where these types of evidence are not available" but says that groups must still spend time and money building a pool of evidence based on the experimental-design methodology.
The simplicity of comparing results through experimental design suggests why these evaluations are so popular with policy makers. But the devil is in the details.
After-school programs are not pills. And experimental design studies alone cannot tell us whether something works absolutely all the time, as the Government Accountability Office and other experts have noted.
Such studies require a very narrow definition of who is being studied, and people who face multiple intertwined challenges, the very people most in need, are excluded. So, for example, if a new approach to helping homeless mothers is under scrutiny, an experimental-design evaluation would exclude battered women, women with chronic health problems, and women involved in the criminal-justice system, unless every participant shared the same set of problems. And that’s not real life.
Furthermore, the approach a nonprofit group takes must be static (rather than evolving in response to demand) so that what is studied is the same protocol at the beginning and end.
Those are just a few of the requirements that this type of evaluation demands, and they make it not just inappropriate but also unusable for many very promising community groups that work with a diverse mix of people who need the help of a wide range of social services and systems simultaneously.
What’s more, those studies obscure the effect of systemic forces, such as public-housing availability, child-welfare policy, antipoverty benefits requirements, and other social services, on the way people live. They also seek to minimize, rather than build on, the cultural context and local knowledge that might help a program succeed, making experimental-design evaluation even less useful.
Many innovative social programs can be better and more rigorously assessed using a combination of research methods.
Experimental design and its offshoots have their place. But it is a limited place, and a place poorly suited to assessing approaches that work with those people and communities that most need and deserve highly effective solutions to dealing with deep-rooted problems. And they don’t tell us much about innovation. They certainly don’t illuminate for whom, when, why, and how a new approach works or doesn’t — rather important things to know.
There are ways out of this miasma, ways that evaluators and nonprofit groups should suggest to the Social Innovation Fund.
But still, since it is "only" $50-million, why pick this battle? Because this is not simply a mandate that affects government dollars.
By requiring grant makers and community groups to match the federal money and requiring that all the money be awarded following the same procedure, the Social Innovation Fund is pushing experimental-design studies squarely into the philanthropic world.
That will have a chilling effect far beyond the $50-million in federal funds. It will lead to less innovation and far less accountability to the communities that could most benefit from effective new ideas for curbing poverty and related ills, not only because they won’t have access to those federal dollars but because they will lose access to private dollars as well.
Grant makers have long had the freedom to decide, for their particular purposes, the best ways to determine what approaches are most effective. Some grant makers have done it well; some have ducked the question, to be sure.
Among those committed to grant making based on the best evidence possible, however, few make their decisions by referring to experimental-design studies. Far from it. And it’s not because those grant makers are wishy-washy. Most of them have found that experimental-design studies are not just impractical for some of their most promising grantees and projects — they are impossible.
But rather than jettisoning good grantees, they seek out rigorous approaches to evaluation that are relevant to the task at hand. Those foundations and their grantees are committed to rigor in context — that is, there is no definition of rigorous evaluation that applies to every approach to help people stay healthy, get a decent education, and thrive in other ways.
Instead, rigor must be married with relevance to determine the right methodology for evaluation. As Hallie Preskill, who directs evaluation at FSG Social Impact Advisors, has noted, "Rigor has a lot to do with the credibility, relevance, and usefulness of the evaluation findings."
The Social Innovation Fund’s potential lies in using the attention it is garnering to lead by example — to demonstrate that there are new ways to spread good ideas — and to set a new tone in government, in philanthropy, and in the public square that we can make gains in solving what often seem like intractable social problems.
By hitching its wagon to the false promise of experimental-design studies, the Social Innovation Fund will miss some of the most important innovations, and the ones that hold the most promise for real change.
More significantly, perhaps, the Social Innovation Fund will change conversations in foundation boardrooms to further favor experimental-design studies, making it even harder for many great and transformative new social ideas to get support. The example it is setting is dangerous.
Up to $150-million of private dollars will be instantly locked up in nonprofit projects that generally do not reach the deepest needs of society. Far, far more could be diverted from approaches whose promise can be seen only by using different ways to gauge results than experimental design.
No one benefits. Not the best ideas for helping the nation’s most vulnerable, not the taxpayers, not philanthropy, and, most important, not the communities that most need help achieving a decent quality of life.