Evidence Based Policy or Policy Based Evidence? Supply and Demand for Data in a Donor Dominant World

This post is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read other entries by Deval Desai and Rebecca Tapscott, Lisa Denney and Pilar Domingo, and Michael Woolcock.

In 2010 I was doing research for Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It. On a Wednesday afternoon in Lusaka, Zambia, I was having a free and frank conversation with a specialist at the UK’s Department for International Development (DfID) office there, as part of the ethnographic component of my book. One of the themes we kept returning to was that donors demanded evidence that was not necessarily relevant for Zambian policy makers. We also discussed how results-oriented MDG reporting had created real outside pressure to distort statistics, with donors having the final say on what gets measured, when and how. Indeed, whenever I asked anyone in Zambia (and elsewhere in sub-Saharan Africa) “what do we know about economic growth?”, a recurring issue was how resources were diverted from domestic economic statistics to MDG-relevant statistics.

Two days later I was sitting in the Central Statistical Office in Lusaka, talking to the then only remaining member of the economic statistics division. In 2007, this division was manned by three statisticians, but when I returned in 2010, there was only one person there. The other two had been pulled from economic statistics to social and demographic statistics where there was more donor money for per diem payments. The phone rang. DfID Lusaka was on the other end. They had a problem. They had financed a report on social statistics, but the office statistician tasked with completing the report had recently travelled to Japan to participate in a generously funded training course, leaving the report incomplete. Could someone help out? And so it was that the last remaining economic statistician for the Zambian government temporarily left the office to come to the rescue.

The donors and the statistical office

The anecdote illustrates a classic coordination problem. Every donor laments the inconsistent workings of national statistical offices (NSOs), and every donor would in principle agree to support some kind of national statistics strategy. Yet in practice donors follow their own imperatives and objectives, and lean on NSOs to get the job done. In this case, one DfID official did what another in the very same office had decried as destructive practice just two days earlier.

The problem here is not want of funds; instead, the politics of this money (where the funds come from and the purposes they can be used for) is the constraining factor. Budget information on statistical offices is notoriously hard to come by, but reports indicate that the majority of funding comes through donors. Yemi Kale at Nigeria’s National Bureau of Statistics (NBS) told me in an interview in 2015 that donors provide about 70 percent of his budget. And the provenance of this money is not always clear and transparent. Most development agencies cannot cover recurring expenditure or regular salaries. The funds thus go to cover ad hoc projects; new surveys; per diems and associated costs; and, if channelled by a clever director of statistics, occasionally new capital investments in cars and computers. This creates an awkward set of principal-agent dynamics for NSOs. In effect, NSOs are often survey agencies for donor hire (frequently through arcane and opaque funding sources) rather than more, or less, independent actors in, and providers of facts for, the domestic political system.

While NSOs need external support to survive, that support has to be balanced against the constraints it places on the type of data the office is able to supply. Producing survey statistics on infant mortality for the Danish development agency will come at the expense of the regular reporting of industrial statistics to the domestic central bank. Again, according to Yemi Kale, download statistics from the NBS website show that there is higher demand for the datasets that the NBS itself chooses to collect than for datasets requested by donors. At the same time, if a donor asks for a survey that feeds into its global monitoring needs, the NBS would not be in a position to decline. And even when NSOs want more funds, they have to make the case that they need them, but they must not make this argument too forcefully lest they create the impression that they are currently unable to do their job. A similar logic pertains to a health ministry. The ministry may want to make the point that at current capacity it is unable to achieve universal vaccination, and seek resources for this purpose. However, if it makes that point too well, the donor or development agency may ask why it would put its funds through a dysfunctional institution.

In 2013, there was some public reaction to Poor Numbers that seemed to indicate that such logic was indeed at play. The Director of Statistics in Zambia eventually came across my book. In a conference presentation in May 2013 he responded to my observations: “It is clear from the asymmetrical information that he had collected that Mr. Jerven had some hidden agenda which leaves us to conclude that he was probably a hired gun meant to discredit African National Accountants and eventually create work and room for more European based technical assistance missions.”

I will not offer any evidence here to prove that I am not a hired gun, nor will I document that no one paid me to create room for more technical assistance missions. You will just have to take my word for it. The response was part of a more general outburst of dismay at having the validity of some official statistics questioned in my book, and perhaps particularly at some of the headlines generated by media interest in the research. In part I also view it as statistical offices having to assert and protect their political role as the sole credible fact provider in these countries. It is in this light, I think, that a new Statistics Act proposed this year in Tanzania makes the most sense: it contained clauses suggesting that only the National Bureau of Statistics can publish statistics on matters regarding the country.

Seeing like a donor, seeing like a state…

Thus as we move into the so-called era of the data revolution in development, it appears that some statistical offices are worried about their own role in this increased demand for evidence. It may of course just lead to more resources for statistical offices. However, it may also increase the use of “helicopter statistics”: answering a specific question by flying in an externally-funded team of statisticians, who organize the enumerators and then fly out again some time later, with a brand new Excel sheet in hand. This parallels widely-criticized practices of institutional bypassing in other areas of development. A donor might consider a health system to be broken and thus build temporary parallel structures, thereby leaving the broken system in place, or making it worse. It is widely recognized that such an intervention in a ministry of health would be inadvisable, yet there is not yet the same consensus that these dynamics also prevail when it comes to supplying statistics.

The biggest problem of the data revolution, manifested in the “evidence-based policy” paradigm or, even worse, the notion of “paying for results” in development, is that it is not accompanied by an idea of where “evidence” should come from. Sandefur and Glassman show a classic example from education data in Kenya. A donor-supported intervention abolished school fees and rewarded Kenyan authorities financially for putting students through primary school. According to administrative data there was a rapid increase in enrollment following the abolition of fees. The evidence showed the policy worked. So far so good.

But donor-funded surveys showed a different picture of school enrollments. The Demographic and Health Survey (DHS) showed enrollment rates that were flat over time. Why? Schools got more funding if they reported more pupils. Even more damningly, the discrepancy between the administrative data and the enrollment rates recorded by the DHS emerged after the intervention that paid for results. In sum, the policy had only one effect that could be clearly established: it left the Kenyan government with no reliable evidence on how many children actually go to school. The policy implication, in the context of the “data revolution in development”, would have to be that interventions should invest in information, evidence and data-gathering itself. Interventions that use information instrumentally, to evaluate success or failure, are likely to backfire when knowledge production is embedded in its own complex political economy.
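To make the mechanics of that comparison concrete, here is a minimal sketch, with purely invented numbers (neither the figures nor the intervention year are from Sandefur and Glassman), of how one might check whether the gap between administrative enrollment counts and survey-based estimates widens once a pay-for-results scheme begins:

```python
# Illustrative sketch only: the numbers below are invented, not Kenyan data.
# It compares administrative enrollment counts with survey-based estimates
# and reports the average gap before and after a pay-for-results intervention.

INTERVENTION_YEAR = 2003  # hypothetical start of the pay-for-results scheme

# year -> (administrative enrollment, survey-based enrollment), in millions
enrollment = {
    2001: (5.9, 5.8),
    2002: (6.0, 5.9),
    2003: (7.2, 5.9),  # administrative counts jump once reporting is rewarded
    2004: (7.6, 6.0),
    2005: (8.0, 6.0),
}

def mean_gap(years):
    """Average gap between administrative and survey figures for the given years."""
    gaps = [enrollment[y][0] - enrollment[y][1] for y in years]
    return sum(gaps) / len(gaps)

before = [y for y in enrollment if y < INTERVENTION_YEAR]
after = [y for y in enrollment if y >= INTERVENTION_YEAR]

print(f"Mean gap before intervention: {mean_gap(before):.2f} million")
print(f"Mean gap after intervention:  {mean_gap(after):.2f} million")
```

The point of the exercise is not the arithmetic but the design: the check only works if an independent data source, here the survey, exists alongside the administrative series that is being rewarded.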

A more constructive example comes from data on fiscal spending in Uganda. Reinikka and Svensson report that surveys of central government expenditures on primary schools in Uganda from 1991 to 1995 showed that only 13 percent of the funds allocated actually reached the schools. In response, a campaign was launched to publish in local newspapers how much public funding had been allocated to each school. Administrators and stakeholders in local schools could then compare allocations with the funds actually received. It was estimated that this intervention reduced graft considerably, and by 1999, 90 percent of the funds were reaching their destination.
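As a back-of-the-envelope illustration of the arithmetic behind those figures, the sketch below computes the share of allocated funds that schools report receiving, and the implied leakage, from tracking-survey-style totals; the allocation and receipt numbers are placeholders chosen only to echo the 13 and 90 percent figures, not Reinikka and Svensson’s underlying data:

```python
# Illustrative only: the allocation and receipt figures are placeholders,
# not the actual Ugandan expenditure-tracking data.

def share_received(allocated, received):
    """Fraction of allocated funds that schools actually report receiving."""
    return received / allocated

# hypothetical totals before and after the newspaper campaign
# that publicised school-level allocations
periods = {
    "1991-1995": {"allocated": 100.0, "received": 13.0},
    "1999":      {"allocated": 100.0, "received": 90.0},
}

for period, funds in periods.items():
    share = share_received(funds["allocated"], funds["received"])
    print(f"{period}: {share:.0%} of allocated funds reached schools "
          f"({1 - share:.0%} leakage)")
```

The calculation itself is trivial; what made it powerful in Uganda was that the allocation side of the ratio was made public, so the people with the strongest interest in the answer could do the checking themselves.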

Hand in hand with the demand for a data revolution comes the idea that the development community should not only embrace evidence-based policy, but also move towards paying for results. This would be a great idea if poor countries were closely observed labs where you could make objective observations, and if Hawthorne effects (where the act of observation affects the facts) were not so profound. It may be a terrible idea if this is not the case. What you get instead is policy-driven evidence, the opposite of what we need. When we falsely assume that a child is vaccinated, has escaped the poverty line, goes to school and has enough food, it is not just a statistical error but a real tragedy.

Local demand for data needs to come into focus. A statistical office is only sustainable if it serves local needs for information. The development community needs to remember that demanding evidence for policy means investing in the accountability of the evidence-makers. Obtaining data is not a technocratic exercise but an exercise in building institutions. Statistical offices are first and foremost institutions that provide information to promote a discourse between citizens and states. We should spend more resources on finding out which data matters to the citizens of these countries.
