Jeff Tollefson
Trials are showing that offering incentives can boost attendance at vaccination clinics.
In 70 local health clinics run by the Indian state of Haryana, the parents of a child who starts the standard series of vaccinations can walk away with a free kilogram of sugar. And if the parents make sure that the child finishes the injections, they also get to take home a free litre of cooking oil.
These simple gifts are part of a massive trial testing whether rewards can boost the stubbornly low immunization rates for poor children in the region. Following the model of the randomized controlled trials (RCTs) that are commonly used to test the effectiveness of drugs, scientists randomly assigned clinics in the seven districts with the lowest immunization rates either to give the gifts or not. Initial results are expected next year. But smaller-scale experiments suggest that the incentives have a good chance of working. In a pilot study conducted in India and published in 2010, the establishment of monthly medical camps saw vaccination rates triple, and adding incentives that offered families a kilogram of lentils and a set of plates increased completion rates more than sixfold1.
“We have learned something about why immunization rates are low,” says Esther Duflo, an economist at the Massachusetts Institute of Technology (MIT) in Cambridge, who was involved in the 2010 experiment and is working with Haryana on its latest venture. The problem is not necessarily that people are opposed to immunization, she says. It is that certain obstacles, such as lack of time or money, are making it difficult for them to attend the clinics. “And you can balance that difficulty with a little incentive,” she says.
This is one of a flood of insights from researchers who are revolutionizing the field of economics with experiments designed to rigorously test how well social programmes work. Their targets range from education programmes to the prevention of traffic accidents. Their preferred method is the randomized trial. And so they have come to be known as the ‘randomistas’.
The randomistas have been particularly welcomed in the global development arena. Despite some US$16 trillion in aid having flowed to the developing world since the Second World War, there is little empirical evidence on whether that money improves recipients’ lives (see page 144). The randomistas see their experiments as a way to generate such data and to give governments tools to promote development, relieve poverty and focus money on things that work.
Not everyone is convinced. Sceptics argue that the randomistas’ focus on evaluating specific aid programmes can lead them to lose sight of things such as energy, infrastructure, trade and corruption – macroeconomic issues that are central to a country’s ability to prosper, but that are effectively impossible to randomize. “Development is ultimately about politics,” says Angus Deaton, an economist at Princeton University in New Jersey.
Nonetheless, the randomista movement is gaining momentum (see ‘Scale the heights’). Universities are pumping out more economics graduate students with experience in RCTs every year. Organizations ranging from the UK Department for International Development to the Bill & Melinda Gates Foundation in Seattle, Washington, are throwing their financial support behind the technique. “There are hundreds and hundreds of randomized trials going on, and ten years ago that just wasn’t the case,” says economist Dean Karlan at Yale University in New Haven, Connecticut, who is at the forefront of the movement. “We’ve changed the conversation.”
Demand is only rising. This September, governments will gather in New York under the auspices of the United Nations to approve a new set of Sustainable Development Goals, which are intended to guide investments over the coming decade. And in December, questions about financial aid will be high on the agenda at the UN climate summit in Paris, where governments expect to sign a new climate agreement that will probably include commitments by industrialized nations to funnel money into sustainable development in poorer countries. In both cases, the effectiveness of the programmes is likely to be a key concern.
“This is front and centre on a lot of people’s agenda,” says Ann Mei Chang, who is executive director of the Global Development Lab at the US Agency for International Development (USAID) in Washington DC. “Where do we get the biggest bang for our buck?”
RCTs have been used to test the effectiveness of social programmes at least since the 1960s. But the modern era began in 1997, when one of the most famous and influential RCTs in public policy began in Mexico.
The experiment had its origins three years earlier, when Mexican President Ernesto Zedillo assumed office in the middle of an economic crisis and assigned economist Santiago Levy to devise a programme to help poor people. Sceptical of the conventional approach – subsidies for products such as tortillas and energy – Levy designed a system that would provide cash payments to poor families if they met certain requirements, such as visiting health clinics and keeping their children in school. “And because people were very critical about what I was doing,” says Levy, who now leads strategic development planning at the Inter-American Development Bank in Washington DC, “I wanted to ensure that we had numbers so that we could have an informed debate.”
As it happened, Levy had a natural control group for his experiment. The government was rolling out its payment programme in stages, so he could collect data on families in villages that were included in the initial roll-out, and in comparable villages that were not. Within a few years, his team had data suggesting that the programme, dubbed PROGRESA, was working remarkably well.