This post is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read other entries by Deval Desai and Rebecca Tapscott, Lisa Denney and Pilar Domingo, Michael Woolcock, and Morten Jerven.
There’s a commendable search for rigor in social science. But there’s also an illusion that numbers ipso facto represent rigor, and that sophisticated mathematical analysis of social scientific datasets can expand the realm of explanatory possibilities. Social scientific researchers working in what the Justice and Security Research Programme calls “difficult places”—countries affected by armed conflict, political turbulence and the long-lasting uncertainties that follow protracted crisis—should be extremely cautious before setting off on this path.
There’s a simultaneous search for policy relevance: for bridging the gap between the academy and the executive. We want our research to be useful and to be used; we want policy-makers to listen to us. But we risk becoming trapped in a self-referential knowledge-creating machine.
The holy grail seems to be to emulate economists and epidemiologists, whose highly technical analyses of real-world data (and, in the case of the latter, double-blind clinical trials) set a gold standard of methodological rigor, alongside a truly enviable record of influencing policy and practice. But before embarking on this quest, it would be advisable to examine what social scientific scholarship might look like if it actually reached this goal.
Let me give two brief examples, add a cautionary tale, and conclude with some observations about what actually influences policy. The first case concerns the perils of doing analytical economics with bad data. Morten Jerven (2013; 2015) has elegantly debunked the great majority of econometric analysis of African data over the last twenty years. It boils down to the old adage: garbage in, garbage out. The quality of most of the data is simply too poor to support anything other than elementary conclusions. The edifice that was built on this sand, by (for example) Paul Collier (2007), has been extraordinarily influential. But it is still built on sand.
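To make the garbage-in, garbage-out point concrete, here is a minimal illustrative sketch in Python, using invented numbers rather than anyone’s actual data, of how severe measurement error in a poorly recorded variable distorts even a textbook regression:

```python
import random
import statistics

# Purely illustrative: a toy regression in which growth truly depends on
# investment, but the recorded investment figures carry large measurement error.
random.seed(1)
N = 200
TRUE_SLOPE = 0.5

true_invest = [random.gauss(20, 5) for _ in range(N)]            # the real values
growth = [TRUE_SLOPE * x + random.gauss(0, 2) for x in true_invest]
noisy_invest = [x + random.gauss(0, 10) for x in true_invest]    # what the dataset records

def ols_slope(xs, ys):
    """One-variable OLS slope: cov(x, y) / var(x)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

print(f"slope with accurate data: {ols_slope(true_invest, growth):.2f}")   # close to 0.5
print(f"slope with noisy data:    {ols_slope(noisy_invest, growth):.2f}")  # roughly 0.1
# Classical measurement error shrinks the estimate toward zero by roughly
# var(x) / (var(x) + var(noise)) = 25 / (25 + 100) = 0.2, so the analysis
# understates the true relationship several-fold.
```

The particular numbers are made up; the point is the mechanism. When the recorded figures are largely noise, a perfectly executed estimator faithfully reports a badly distorted relationship, and sophistication downstream cannot substitute for better measurement.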
There are of course exceptions: social scientists who build their own datasets with extreme care, and event-based datasets grounded in the measurement of real things, such as births and deaths. But generating reliable data points is so laborious, and requires such an investment of time and energy (including mastery of languages and familiarity with local contexts), that too few are ready to do it. Most who deal with numbers are content to play with the quantitative hand they are dealt. They don’t want to ask too many questions about how the data points were derived or, especially for survey data, about the validity of translations.
Some of the enhancements in data collection give a plausible illusion of greater validity, and therefore greater analytical traction. But we should be deeply wary of rejoicing in a thicket of numbers simply because we once despaired at the sparseness of the data points we had before. Patrick Ball provides a salutary caution about using crowd-sourced data on social phenomena in difficult places: increasing the number of data points may simply magnify the biases of data collection, amplifying error rather than correcting it (Ball 2015).
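Ball’s warning can be illustrated with another minimal sketch, again using made-up reporting rates rather than any real dataset: if events in one setting are far more likely to be reported than events in another, accumulating more crowd-sourced reports makes the biased estimate more precise, not more accurate.

```python
import random

# Purely illustrative: a toy model of reporting bias in crowd-sourced event data.
# Hypothetical assumption: 60% of violent events occur in rural areas, but rural
# events are reported at a rate of 10% versus 80% for urban events.
TRUE_RURAL_SHARE = 0.6
REPORT_PROB = {"rural": 0.10, "urban": 0.80}

def observed_rural_share(n_events, seed=0):
    """Simulate n_events and return the rural share among *reported* events."""
    rng = random.Random(seed)
    reported_rural = reported_total = 0
    for _ in range(n_events):
        place = "rural" if rng.random() < TRUE_RURAL_SHARE else "urban"
        if rng.random() < REPORT_PROB[place]:
            reported_total += 1
            if place == "rural":
                reported_rural += 1
    return reported_rural / reported_total

for n in (100, 10_000, 1_000_000):
    print(f"events={n:>9,}  observed rural share = {observed_rural_share(n):.3f}")
# The observed share settles near 0.16, far from the true 0.6: more data points
# shrink the statistical noise but converge ever more confidently on the bias
# of the collection process.
```

More data, in other words, buys narrower confidence intervals around the wrong answer; only an understanding of how the reports were generated can correct for it.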
The second example concerns epidemiological studies and clinical trials, especially large-n investigations into the efficacy of medical interventions such as drugs. John Ioannidis (2005) wrote a provocatively titled essay, “Why most published research findings are false,” examining the multiple sources of unreliability and bias in the medical and epidemiological literature. It is a sobering read, and Ioannidis’s conclusion is stark: “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”
It would be interesting to supplement Ioannidis’s study with a measure of the policy influence of the studies he cites. My initial hypothesis would be that the more a study reflects the prevailing bias, the more influential it is likely to be. However, that would only be a correlation, not a demonstration of cause. In which direction does the influence flow?
My cautionary tale concerns climate change. It is a well-rehearsed trope that climate change will cause conflict. And no less an authority than the United Nations Secretary-General has said so, in a newspaper of record (The Washington Post). But he’s wrong. In 2007, Ban Ki-moon published an op-ed on Darfur titled “A climate culprit in Darfur” (Ki-moon 2007). He wrote that drought was one factor that contributed to the Darfur conflict, citing the author Stephan Faris (2007), who had recently published an article in the Atlantic Monthly. That hardly counts as proof, and still less so because the article pivots on an account of Darfur in the mid-1980s, when the camel-herding Jalul Arabs were impoverished by drought and saw their traditional way of life threatened. Their ageing tribal sheikh, Hilal Mohamed, darkly foresaw the end of the world. When he was incapacitated in 1986, he passed his chiefship to his younger son, Musa, who nearly twenty years later gained infamy as the leader of the Janjawiid militia. It’s a compelling anecdote that ties together drought, social disruption, and atrocious violence. And the original author of the anecdote was me (Flint and de Waal 2008). And I am the first to point out that the links are tenuous at best, not least because the drought ended in 1985/86, and the Jalul nomads in fact resumed their traditional camel herding thereafter.
The claim that Darfur is the world’s first “climate change war” is a factoid and an example of “policy-based evidence making” (Boden and Epstein 2006): shamelessly searching for tidbits of information to support a policy position, or an intuition. In fact, a host of statisticians and quantitative researchers have published their own climate-change version of “why most published research findings are false,” debunking the claims of demonstrable causal links (Buhaug et al. 2014). Most avowed findings in this field tell us more about the process of filtering and screening results (aka “prevailing bias”) than about the realities.
I could go on (and on). Most analyses of the impacts of global health on world politics turned out to be false: compare the dire predictions of the 2000 U.S. National Intelligence Council report on infectious diseases with its 2008 report, which admitted (implicitly) that its warnings were exaggerated. Most quantitative research findings on African civil wars are false because of the poor quality of the data. For example, all existing conflict datasets omit most inter-state and transnational conflicts. I compiled a listing of inter-state and transnational conflicts and major incidents in the Horn of Africa and adjoining countries, based on personal knowledge, which contained 92 cases; fewer than ten of them appear in the most-used datasets. That is just a beginning. Those who compile event-based datasets, such as ACLED, are far more cautious in using their material than external researchers, who are all too ready to plunge in regardless of the warnings.
The problem is the global hierarchy of power, and in particular the power to set an agenda. The agenda for poor and troubled countries is set by rich and powerful countries. The policies of rich and powerful countries are not attuned to the complicated realities of how politics and society function in “difficult places.” They are attuned principally to their own requirements of crisis management. This is elegantly framed by Jean-Marie Guéhenno (2015) in his memoir of his eight years as head of United Nations peacekeeping. Known as an intellectual, Guéhenno confessed he didn’t find social and political science useful when he became a senior “operator”:
I do believe that a lot of the ‘thinking’ that goes on is useless for operators. The most useless way to pretend to help is to offer detailed, specific solutions, or recipes. … Operators do not read much. They do not have the time. I, who was an avid reader, read much less during those eight years than I used to. … I would either read memoirs, history books, or real philosophy. What I needed was the fraternal companionship of other actors before me who had had to deal with confusion, grapple with the unknown, and yet had made decisions.
The last sentence is the key: Guéhenno—the archetypal political intellectual—was seeking an intellectual and moral fraternity, and he didn’t find it among social and political scientists. A survey of what senior U.S. policymakers read found something similar: history and area studies stood out as useful, while academic work in political science and international relations didn’t (Avey and Desch 2014). Avey and Desch’s findings are summed up by the remark of one of their respondents: “most of the useful writing is done by practitioners or journalists. Some area studies work is useful as background material/context.”
The process of “policy-based evidence making” consists, I suggest, of senior decision makers formulating policy based on their own intellectual capital (acquired years or decades earlier), their experience of decision-making under stress and uncertainty, and their reflections on the dilemmas faced by others in similar circumstances. A newspaper or magazine article may also prompt their thinking about a broader issue such as global health or climate change. Their junior aides then dress these policies up with a semblance of rigor by seeking out the abstracts of academic papers that appear to support their approach.
The way out of this thicket, I suggest, begins with the challenge of making research accessible to those who are its subjects, and who are supposed to benefit from it. Research in difficult places must make sense in the vernacular of the people who live in those places. Research will be more rigorous, and more valid, when it is driven by a sense of accountability to its subjects.
References
Avey, P. and M. Desch. 2014. “What Do Policymakers Want From Us? Results of a Survey of Current and Former Senior National Security Decision-makers.” International Studies Quarterly. https://www3.nd.edu/~carnrank/PDFs/What%20Do%20Policymakers%20Want%20from%20Us_MC.pdf
Ball, P. 2015. “Digital Echoes: Understanding patterns of mass violence with data and statistics,” Open Society Foundations, transcript.
Boden, R. and D. Epstein. 2006. “Managing the Research Imagination? Globalisation and research in higher education.” Globalisation, Societies and Education 4(2): 223-236.
Buhaug, H., J. Nordkvelle, T. Bernauer, T. Böhmelt, M. Brzoska, J. W. Busby, A. Ciccone, H. Fjelde, E. Gartzke, N. P. Gleditsch, J. A. Goldstone, H. Hegre, H. Holtermann, V. Koubi, J. S. A. Link, P. M. Link, P. Lujala, J. O’Loughlin, C. Raleigh, J. Scheffran, J. Schilling, T. G. Smith, O. M. Theisen, R. S. J. Tol, H. Urdal, and N. von Uexkull. 2014. “One effect to rule them all? A comment on climate and conflict.” Climatic Change 127(3-4): 391-397.
Collier, P. 2007. The Bottom Billion: Why the Poorest Countries are Failing and What Can Be Done About It. Oxford: Oxford University Press.
Faris, S. 2007. “The Real Roots of Darfur.” Atlantic Monthly, April, http://www.theatlantic.com/magazine/archive/2007/04/the-real-roots-of-darfur/305701/
Flint, J. and A. de Waal. 2008. Darfur: A New History of a Long War. London: Zed.
Guéhenno, J-M. 2015. The Fog of Peace: A memoir of international peacekeeping in the 21st century, Washington DC: Brookings.
Ioannidis, J.P.A. 2005. “Why most published research findings are false.” PLoS Med 2(8): e124.
Jerven, M. 2013. Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It, Ithaca: Cornell University Press.
Jerven, M. 2015. Africa: Why Economists Get it Wrong. London: Zed.
Ki-moon, B. 2007. “A Climate Culprit in Darfur.” Washington Post, June 16, 2007, http://www.washingtonpost.com/wp-dyn/content/article/2007/06/15/AR2007061501857.html