Turning the Gaze on Ourselves: Acknowledging the Political Economy of Development Research

This post is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read the other entries, beginning with Deval Desai and Rebecca Tapscott's piece.

While researchers (ourselves included) now consistently underline the importance of understanding the political economy of developing countries and of the donors that support them in order to achieve better aid outcomes, the research industry remains largely ambivalent about questions of its own political economy. Desai and Tapscott's paper is therefore a refreshing attempt to start unpacking this and the ways in which 'evidence' is produced within the development industry.

Here, we offer reflections on three stages of this process: building evidence, translating evidence and dislodging evidence. But a word of caution is merited upfront. The fact that we are talking about "evidence" rather than "research" is itself telling and underscores a shift in the development industry over the last ten years. Speaking about "evidence" suggests something much more concrete and indisputable. Evidence is taken as proof. But surely research is also debate. While there are of course things for which largely indisputable evidence can be found (the effects of vaccines on disease, for instance), the use of this terminology, particularly in the social sciences where little is concrete or universal, suggests that final answers are discoverable. It can thus be used to close down debate as much as to encourage it. Research, on the other hand, recognizes that most findings are contributions to knowledge that help move us towards deeper understanding and greater awareness but do not claim to be the final word on a given topic.

The politics of knowledge production: building evidence

As Desai and Tapscott note, not all 'evidence' is created equal. Who a researcher is, which institution she is housed within and what language she speaks (or writes in) have a huge bearing on how research questions are formulated, whether they are likely to acquire prominence as "relevant" for the policy world and the chances that her research findings will influence policy. It is hardly surprising that the researchers closest to the various policy-making machines of donor agencies are predominantly from prestigious American or European universities or think tanks. Their research may, of course, be very good. But research of the same quality coming from unknown researchers at African universities, for instance, is not likely to attract the same level of policy interest. This is an important feature of the political economy of development research that should be taken seriously. It is all well and good to commit to no more all-male panels and to including Southern voices on research panels,[1] but despite these (not unimportant) gestures, whose ideas end up shaping the world?

In part, this is related to the narrow pools of thematic experts that donors often draw on. Problematically, these 'pools' are often self-referential and self-contained, with relatively monolithic forms of knowledge emerging that limit genuine debate. An extension of this is that the pools of familiar "experts" tend to share similar disciplinary or ideational backgrounds. This means, for instance, that there is very little cross-referencing across disciplines, often resulting in siloed knowledge production.

But while these researchers and "experts" themselves have significant power, this power is not unmediated. Donors set evidence standards (such as DFID's How to Note on Assessing the Strength of Evidence), which put in place benchmarks for what they believe 'counts' as evidence.[2] At least in theory, only research that lives up to these evidence principles then gets translated into policy. Donors also set research budgets, which have important effects: allowing only short stints in country for international researchers, for instance, increases the reliance on tried and tested local researchers and respondents who are more likely to suffer from the "saturation" Desai and Tapscott speak about. At the same time, local researchers and research organisations (now often referred to more "horizontally" as country partners) rarely lead the process of knowledge production that is funded by donor agencies. They have limited scope to frame the research questions or to shape the analytical frameworks that underpin the research exercise at hand. And answers from saturated respondents are re-hashed and become static truths. In the course of fieldwork, "go-to" respondents increasingly indicate their unwillingness to speak to researchers anymore: they feel they have told researchers everything before and they never see any changes. Researchers rarely get back to them with final reports (or the photos taken of them to adorn report covers), and the breakdown in the research-policy-practice transfer means our respondents see no changes in the problems we talk to them about – so why talk to us?

Translating evidence – dealing with the imperative of “context specificity”

An under-examined issue is how evidence acquired in one context is then translated to others. Policy makers are ultimately interested in evidence that is generalizable enough to be widely applicable – that is what will help them program better. While context specificity is recognized as important, more important still is knowing "what works"; "what works" is then adapted to what we know about the context. The problem is that this treats evidence as acontextual. It says that "intervention x works to achieve outcome y" rather than "intervention x worked to achieve outcome y in z setting." The consequence is that 'evidence' is not appropriately translated. A more contextualized understanding of evidence would start by trying to understand why and how something worked (or did not) in a given context. When considering transferring what worked to another context, a process of translation would be needed – both to understand why something worked in the first context and then to ask whether those reasons hold, or would play out differently, in the second. For the same reason, things that have not worked in one case may well be reasonable to try in another context if a similar translation takes place: the intervener reasons through why intervention x did not work in country y and, on that basis, shows how it could nonetheless work in a different context. This is important – and might prevent the faddishness of policy approaches to evidence.

Dislodging evidence

The transmission from research report to policy influence is rarely smooth sailing. Ideas (some with better, some with worse evidence supporting them) can be remarkably resilient and "sticky," with new insights finding it difficult to dislodge received truths. And practice is even more stubborn.

'Evidence' is easily frozen in time and becomes resilient. For instance, the 2003 statistic that 44% of poor, conflict-affected countries relapse into conflict within five years[3] was later found by the authors to be incorrect and revised to 20%, but the old figure stuck and is routinely cited.[4] Similarly, the quip that no fragile state has met a single MDG was first cited by the World Bank in 2011 and then repeated again and again, even after the Bank acknowledged two years later that this was no longer the case.[5]

And even when research chips away consistently at dominant ideas, the translation to practice remains far from straightforward. For instance, despite the fact that non-state justice and security providers are now widely accepted in many contexts, including by donors themselves, programs continue to be overwhelmingly state-centric. This stems from at least two things. First, it can be incredibly difficult to dislodge ideas, and it takes many, many reports to do so. Second, donors are trapped within the confines of political risk concerns and bureaucratic logic – Weber's "iron cage" – which prevents the transmission of knowledge into practice. This is especially so when donors are challenged to take on board the "complexity" of social, political and economic change.

All of this is to suggest that the production of "evidence" for policy raises deeply political questions, yet it is often treated as a straightforward, unproblematic exercise. Discussions that start to lift the lid on the political economy of this research process are needed both to temper expectations about the holy grail of evidence-based policy and to ensure that those involved are self-aware and reflective about their role, power and responsibilities.

Notes

[1] H. Schulz, “Why I say no to all male panels,” Washington Post, October 13, 2015.

[2] DFID, “How to Note: Assessing the strength of evidence,” London: DFID, 2014.

[3] P. Collier et al., “Breaking the Conflict Trap: Civil war and development policy,” Washington, DC and Oxford: World Bank and Oxford University Press, 2003.

[4] A. Suhrke and I. Samset, “What’s in a figure? Estimating recurrence of civil war,” International Peacekeeping 14 (2007): 195-203.

[5] World Bank, "20 fragile states make progress on Millennium Development Goals," May 1, 2013.
