This is a guest post by Ian Thorpe. Ian writes the new blog "KM on a dollar a day". He has worked for over 20 years in the international development field and is currently Senior Information and Knowledge Manager with UNICEF. Ian’s blog is written in a personal capacity and does not necessarily represent the views of UNICEF.
By Ian Thorpe
In the organization where I work, as in many other development organizations, there has been a big push over the past few years for “evidence-based” policies and programmes. So when I tell people I work on Knowledge Management, they often imagine that I’m working on strengthening academic research, or on building massive all-encompassing databases full of peer-reviewed scientific knowledge.
Although I am working on some databases, this isn’t what I actually do with most of my time – nor, despite what some of my professional colleagues think, is it what I think we should be doing.
Development is a complex business; if it weren’t, we would have gotten further along in solving the world’s problems by now. One common reason cited for why we haven’t done better is that we don’t have enough data and we don’t have enough evidence.
A number of remedies are commonly proposed to help address this:
- Collect more statistical data – more surveys, more administrative data collection. More recently we have started to say that we need more real time data collection.
- More research – more academic studies, more randomized controlled trials, more papers published, papers published more quickly.
- More evaluation – we need to more systematically evaluate more of our programmes to understand what worked, what didn’t, and what lessons we can learn. We need to use better evaluation techniques.
- More, bigger and more open databases – we often acknowledge that a lot of research has already been done or data collected, but that it is not easily available because it is stuck behind paywalls, fragmented, and not well disseminated or easily searchable. To address this we strive to build big, well-organized mega-databases that are the preeminent knowledge sources on their particular topics, and advocate for more free access to data and research.
Guess what – I actually agree that all these things are worthwhile. I mean, how couldn’t I? BUT – too many people seem to believe that if we keep collecting more and more data, doing more and more research and evaluations, and making more and more comprehensive databases, then we will have everything we need to do evidence-based development work. Basically, if we look hard enough, the truth is out there…
There are a couple of reasons why I don’t agree with this:
- There are limits to how much evidence you can collect
- There are other important dimensions to knowledge that are actionable, yet tend to get overlooked when we take too strong a focus on “evidence”
Firstly, the limits of what knowledge you can collect. In developing country contexts in particular, high quality, timely and relevant data can be difficult and expensive to come by. Existing data collection systems are often weak, and while they can be strengthened, there are still limits in terms of access to marginalized populations and the cost of extending surveys far enough to provide the data needed to answer many of the development policy questions we have.
Similarly for research: there are a large number of questions we would like to address, but the availability of data, costs, time and limitations in the research methods themselves mean that many questions can’t be answered quickly enough to inform the development of policies and programmes.
Evaluation is also limited in that it can be very costly, yet tells you only part of what you need to know about whether a programme was effective and why.
One particular challenge for any knowledge-related work is generalizability. To what extent can the results of a study or evaluation be generalized to other contexts and other timeframes, and how much do they tell you about what you should do and what will work in the future?
Another important limitation of “evidence” is that even when it exists and is fairly clear (which, for the reasons stated above, frequently isn’t the case), it often isn’t sufficient to motivate policy makers, politicians, families etc. to take action. Any findings or recommendations also need to be contextualized to the local culture and to the power relations of the situation where you are trying to use the evidence. People often choose to interpret evidence in a way that supports their current beliefs, will not necessarily use peer review in a reputable journal as their benchmark for whether to trust a source, and may not accept advice they don’t like or that they perceive may weaken their current influence or power.
None of this means that data, research and evaluation aren’t needed. But it does mean that they are not enough. So what’s missing?
An important aspect of knowledge transfer and change is personal relationships. Most people don’t have the time or the skills to examine all the available evidence first hand. This means they rely on the opinions of others whom they trust. Similarly, standard methods for collecting, storing and disseminating research often have little impact, with people too busy to seek out the evidence they need, or even to develop the skills to do so. Again, people frequently ask others rather than access the evidence directly themselves.
Also, there is a whole range of knowledge that isn’t captured by research: that of personal experience. Often you can understand a situation, and describe it to share it with others, but you can’t back it up with scientific research (a trivial example is that I’m pretty sure I know the quickest way to walk to the station in the morning – but I have neither measured it nor timed it). It might be that it would be too expensive and difficult to prove it through research, or that by the time you knew the answer, it would already be too late. Some knowledge is in the form of skills or even instinct which doesn’t easily lend itself to being formally captured at all. This type of knowledge is known in the business as tacit knowledge. Here is a handy diagram that explains the difference between the two (link to original).
So, in order to take advantage of the part of knowledge that lies below the surface (the part which isn’t “evidence” in the formal sense), you need to take other approaches. These can involve using tools that try to capture some of what is currently hidden to make it more shareable (tools such as after action reports, end of assignment reports, self-reflection exercises, lessons learned, storytelling etc.) and approaches that make it easier for people with shared knowledge interests to find each other, trust each other, share with each other and collaborate (approaches such as knowledge fairs, communities of practice, social networking, co-creation).
In fact I find that the most interesting and promising work I do in the area of knowledge management is not about evidence at all, but about the social dimension of knowledge. What I need to do is make a better case for this with my colleagues – but then I’m sure they are going to ask me to show them the evidence!
6 Comments
Beautiful post Ian and thank you so much for explaining these differences in knowledge management!
My assumption is that a large part of getting that Tacit Knowledge out of participants would be best done through conversations, so I can immediately see how something like Google Groups or a Facebook Page might be a great tool for gathering some of that.
My question is, have you come across any other tools online that you’ve seen to be a great use to NPOs or philanthropists for gathering Tacit Knowledge, either directly or indirectly?
Thanks again – it’s really valuable to see the full picture!
Hi Daniel
There are lots of different tools being used by different people – perhaps part of the challenge is there are so many that it can be hard for people to find one another. Many large organizations (like ours) have developed their own online platforms to support tacit knowledge sharing.
It’s important to distinguish, though, between the tools and the communities (i.e. groups of people) who use them. Ultimately it is getting the right group of people together, or finding the group that is talking about what you care about, which is most important.
I’m working more with programme implementation people rather than fundraising or philanthropy management, so I don’t really know the best communities and tools from this side.
For me though a good place to start is Twitter, since its openness means that you can easily find people who share your interests. These can then be a gateway to more specialized communities.
I’d also recommend looking at Beth Kanter’s blog/website (www.bethkanter.org) as well as many of the blogs listed on the blogroll of this site, as many have tips on how to apply various tools for social good.
I hope this discussion grows because this is where true value comes from. The company that I work for is developing a tool, or set of tools, that will allow our 40,000+ employees to connect, discuss and innovate with each other. A knowledge management tool alone will not accomplish that; it needs a social interaction component. The participants must be drawn into it and wowed by what they can learn. I am looking for ways to engage people in our company, to keep them engaged and sharing information. We have SharePoint 2010 in-house and it has some capabilities to host such an exchange. I’m not sure whether a wiki is the best format, or a library/list with excellent search features. Any ideas?
Thank you for starting this discussion; I hope others will jump in.
Ian,
Thanks for this post. I think the idea of tacit knowledge vs explicit knowledge is an interesting and potentially useful one. And I think we need to be careful about how we use it.
The main problem, as I see it, is that nearly every charitable program that has been subjected to a rigorous randomized controlled trial — the highest standard of evaluation — has shown either no impact or only a relatively small impact. So it seems like there is a significant divergence between what our explicit knowledge is saying (“most of these programs can’t be shown to work”) and what our tacit knowledge is saying (“this program works great, we just need to grow it”). I agree that gathering and processing explicit knowledge is slow and expensive, and it may be so slow and so expensive that we’ll never be able to gather enough.
But when our two types of knowledge are in conflict, my bias is to trust the explicit type. That’s because it’s more objective and less subject to bias. Going back to your example of how you walk to the train station, it’s no surprise that you think you have the best way: if you thought otherwise, you would change your habit to whatever way you thought was better, and then you’d once more think you have the best way. Your interpretation of the facts is biased by the way you arrived at that conclusion. That doesn’t invalidate your conclusion, but it does suggest that if you came across objective evidence that your route wasn’t best, even if you didn’t know what was better, you should trust that evidence more than your own intuition.
In conclusion, placing too much trust in, or too much emphasis on, tacit knowledge may lead us to be self-congratulatory and to pat ourselves on the back. We need explicit knowledge to keep us honest. Otherwise, we are no better than Gideon Gono, who watched over the destruction of Zimbabwe’s economy while ignoring all the evidence that the collapse was in fact the result of his actions.
Respectfully yours,
–Ian Turner
Hi Ian, I liked Ian Thorpe’s post a lot as well. But you raise a very good point. When we value non-measurable input higher than measurable input, we run the risk of using very bad inputs that we “believe” are good. However, when we value measurable inputs over non-measurable inputs, we run the risk of having very high conviction in our understanding of things that we actually are measuring poorly.
For instance, a randomized controlled trial might show that an intervention is successful, but only at an 85% confidence level. Against the conventional 95% threshold, the RCT would be reported as finding that the intervention did not work, since the level of confidence was not high enough. This, however, does NOT mean that the intervention does not work. It only means that the particular study found it likely that the intervention worked, but with significant remaining doubt. Yet those who value explicit data would likely conclude (with a high level of conviction) that the intervention did not work.
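This point lends itself to a quick simulation. Below is a minimal sketch, with entirely hypothetical numbers (not drawn from any real trial): give a simulated intervention a genuine positive effect, keep the trial arms small, and count how often a standard two-sample t-test clears the conventional 95% confidence bar.

```python
# Minimal sketch of an underpowered RCT (all numbers hypothetical).
# The simulated intervention has a real positive effect, but with small
# trial arms most runs still fail the conventional p < 0.05 threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

true_effect = 0.3   # the intervention genuinely works
n_per_arm = 40      # but each trial arm is small
n_trials = 10_000   # simulate many such trials

significant = 0
for _ in range(n_trials):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    treated = rng.normal(loc=true_effect, scale=1.0, size=n_per_arm)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:  # i.e. at least 95% confidence
        significant += 1

# With these assumptions the test's power is only roughly a quarter,
# so around three in four trials report "no significant effect" even
# though every simulated intervention works by construction.
print(f"trials reaching 95% confidence: {significant / n_trials:.0%}")
```

Under assumptions like these, only about a quarter of the simulated trials reach significance, so “no significant effect” is largely a statement about the size and noise of the study, not proof that the intervention failed.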
At the end of the day, I think we all need to recognize that there are real limits to our ability to know things. Both explicit and tacit knowledge are needed. Both are also dangerous.
Ian, Sean
I think you are right to also draw attention to some of the limits of tacit knowledge. Given that it is subjective, based on the viewpoint and experience of the person holding the knowledge, it is subject to a variety of unconscious cognitive biases. That said, it still has a lot of value since it is real experience, and it includes interpretation of meaning, which is essential for knowledge to be put into practice.
It’s always useful to collect whatever measurable data and “hard evidence” you can about a situation to understand it better. And where the explicit knowledge and tacit knowledge don’t agree, you need to look carefully at why; often the hard evidence will be more reliable, but not always. Evidentiary methods are good at telling you what a situation is, but less good at telling you why, and so tacit and explicit knowledge together can often give you a more complete understanding of a situation.
One of the problems with evidence is that it’s not always possible to get it. It may be unavailable or unreliable, rely on heroic assumptions, or be too expensive to collect, and it might also be too late to use by the time you have it. In development aid these problems are particularly acute – which is why not all action can be based on high quality explicit knowledge; more likely it will be based on a combination of incomplete explicit knowledge and incomplete tacit knowledge.
Certain contextual factors don’t lend themselves to evidence and experimentation, especially tricky issues around politics and human relations. It’s also worth recognizing that, whatever the quality of the evidence, tacit knowledge is sometimes more effective in leveraging policy change by politicians and other decision makers.
I don’t think that one type of knowledge is “better” than the other; rather, both are useful, but they may be more or less applicable in different circumstances and tell you slightly different things.