GiveWell and Easterly Talk Meta-Research

Conversation between GiveWell (Holden Karnofsky and Stephanie Wykstra), a funder, and Dr. William Easterly (Professor of Economics, NYU) on July 18, 2012

These notes reflect answers that Dr. Easterly gave during our conversation.

General understanding of development can inform thinking about aid projects:

One discussion is about how to do aid well, and another is about what causes development. These are often treated as identical questions, but obviously they are not.

However, the two discussions fertilize each other and some thinking about development can be surprisingly useful for thinking about specific aid projects. I think that a lot of the “what works in aid” debate is phrasing the question wrong. You really want to know what works for whom, which will then lead to the question at the heart of economics and politics: who gets to decide what happens? This isn’t necessarily answered by randomized controlled trials (RCTs) that show that an intervention improves some quantitative measure of well-being. Markets and democracy are better feedback mechanisms than RCTs, and they provide resolution on “who gets to decide?” Seeing what people buy and asking them what they want gives better indicators of what works for them than quantitative indicators coming from RCTs.

Everything that you do has positive effects on some people and negative effects on others. As an example: recently the World Bank funded a project in Uganda. The project ended up burning down farmers’ homes and crops and driving the farmers off the land. Quantitative indicators like GDP would have registered the project as an improvement, but many people’s rights were grossly violated in the process.

As Angus Deaton has repeatedly emphasized, RCTs give an average result. Treatment effects vary a lot depending on the context. When we average over many contexts, it’s almost certain that we’re getting some negative treatment effects, even when the average is a positive and significant result. You want a safeguard against having one enormous beneficiary while everyone else loses. You want a safeguard against harming a lot of people unacceptably.
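A minimal simulation sketch in Python (with invented parameters, not data from any study) makes the arithmetic of this point concrete: an average effect can be strongly positive and statistically significant even when a large minority of individuals are harmed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical individual treatment effects: positive on average,
# but varying widely across people and contexts. (In a real RCT,
# individual-level effects are unobservable; this only illustrates
# the arithmetic of averaging.)
n = 10_000
effects = rng.normal(loc=0.2, scale=1.0, size=n)

ate = effects.mean()                   # average treatment effect
se = effects.std(ddof=1) / np.sqrt(n)  # standard error of the mean
share_harmed = (effects < 0).mean()    # fraction with a negative effect

print(f"average treatment effect: {ate:.3f} (t = {ate / se:.1f})")
print(f"share of individuals harmed: {share_harmed:.0%}")
# With these parameters the average is highly "significant",
# yet roughly 40% of individuals have a negative effect.
```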

But researchers don’t want their job to be more difficult than it is. If you ask for not only an RCT but also a guarantee that it’s not concealing unacceptable harm, you’re making the job harder, and RCTs are already expensive and hard to begin with. It’s inconvenient for researchers to acknowledge these problems.

Development happens when people have the opportunity to choose what they want, choose whether or not to give consent for an intervention that affects them, protest if they don’t like what’s being done to them and have a mechanism to exit if they don’t like what’s being done.

I think it’s really important to have a system in place to ensure that you’re actually making people better off rather than harming them. Others would be better placed than I am to say how to do this in practice, but just to start the discussion: this could mean taking surveys of beneficiaries. Or it could mean offering them a menu of options that they can choose from, and learning from their responses. More broadly, promoting the rights of poor people might have indirect positive consequences that are a lot larger than the benefits of individual interventions. It’s hard to quantify what the benefits would be, but it’s hard to see how it could do any harm. This is more than you can say about a lot of direct interventions; a lot of the time these interventions benefit some people while hurting others (for example, distributing goods might put producers of competing goods out of business).

There are a lot of things that people think will benefit poor people (such as improved cookstoves to reduce indoor smoke, deworming drugs, bed nets and water purification tablets) that poor people are unwilling to buy for even a few pennies. The philanthropy community’s answer to this is “we have to give them away for free because otherwise the take-up rates will drop.” The philosophy behind this is that poor people are irrational. That could be the right answer, but I think that we should do more research on the topic. Another explanation is that people do know what they’re doing and rationally do not want what aid givers are offering. This is a message that people in the aid world are not getting. The rational choice paradigm has never been fully accepted in the development community. We should try harder to figure out why people don’t buy health goods, instead of jumping to the conclusion that they are irrational.

Giving out cash is a possible answer to the question of how to give poor people choice. You could also give them the option of cash instead when goods are being distributed. You could study the smallest amount of cash at which people would prefer cash to a bed net. That would give us information about the value of the intervention.
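As an illustrative sketch of how such a study might be analyzed (the data, amounts, and setup below are entirely hypothetical, not something discussed in the conversation), one could offer randomized cash amounts as an alternative to a bed net and fit a curve to the choices to estimate the indifference point:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: each person is offered a random cash amount
# (in dollars) as an alternative to a free bed net, and chooses.
offers = np.array([0.5, 0.5, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
chose_cash = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

def logistic(x, midpoint, slope):
    # Probability of choosing cash as a function of the offer.
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

params, _ = curve_fit(logistic, offers, chose_cash, p0=[2.0, 1.0])
midpoint, slope = params

# The fitted midpoint is the cash amount at which a recipient is
# indifferent between cash and a net -- a rough measure of how much
# recipients themselves value the net.
print(f"estimated indifference point: ${midpoint:.2f}")
```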

In response to a comment about research showing that people’s choices are sensitive to framing, which may challenge the “rational choice” framework to a degree:

It’s easy to catch people doing irrational things. But it’s remarkable how fast and unconsciously people get things right, solving really complex problems at lightning speed. There’s publication bias in descriptions of the ways that people act irrationally rather than rationally, because this is a newsworthy, saleable topic. People may not be acting rationally according to the way we define a problem, but may nevertheless be following a good algorithm for their circumstances.

Health education could help increase take-up in the developing world:

It may be that one problem is that people (i.e., potential recipients) don’t believe the scientific model of medicine. They may not believe the scientific theory of malaria transmission, of germs in the water causing health problems and so on. We should be trying to solve the problem of educating poor people about the scientific model of medicine. Historically, training programs have been designed poorly with insufficient feedback, and so the community has given up on them prematurely.

Funding is biased toward a technocratic approach. Aid agencies do not want to deal with additional complexities like asking the people who they work with for consent or giving them choice. They already have hard jobs. They don’t want to hear about research that makes their job harder. Moreover, if we make aid sound more difficult, then maybe donors will give less. People on the ground are really pressed to keep the money flowing and feel as though they cannot afford these kinds of complications. So there’s very little demand for such research.

Data mining and selection effects:

Researchers have very strong incentives to find a result. If tenure-track researchers don’t get published, they can’t keep their jobs. The worst thing that can happen to a researcher is to get no result. Hypothesis registration is a way of keeping people disciplined. It’s a human tendency to see more patterns in the world than are actually there. After playing with the data enough, researchers convince themselves that they’ve found something real. It’s been scientifically proven that researchers do this: there’s an anomalous gap in effect sizes reported in papers just below the standard that you need to get published. I’ve seen publications on this issue for 25 years. There’s a tragedy of the commons problem: if a single researcher takes this problem seriously, all it will do is lessen the probability of him or her getting published and keeping his or her job.
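The “anomalous gap” can be illustrated with a small simulation (invented parameters, not results from any actual study): if significant results are far more likely to be published, the distribution of published test statistics is depleted just below the significance threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100,000 hypothetical studies of a weak true effect, each yielding
# a z-statistic.
z = rng.normal(loc=0.5, scale=1.0, size=100_000)

# Selective publication: significant results (|z| > 1.96) are always
# published; non-significant results appear only 10% of the time.
published = z[(np.abs(z) > 1.96) | (rng.random(z.size) < 0.10)]

# Compare counts just below vs. just above the threshold.
below = np.sum((published > 1.60) & (published <= 1.96))
above = np.sum((published > 1.96) & (published <= 2.32))
print(f"published z in (1.60, 1.96]: {below}")
print(f"published z in (1.96, 2.32]: {above}")
# Absent selection, the lower bin would hold MORE results than the
# upper one (the underlying density is falling); with selection it
# is sharply depleted, producing the gap just below the threshold.
```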

As Deaton has also pointed out, the sites at which RCTs are carried out are not random. NGOs and aid agencies have a strong incentive to generate positive PR, and there’s a bias toward choosing sites where a program is likely to go well. Also, where “convenience samples” are collected is not random; they were collected where they were for some reason, and that reason will often be correlated with outcomes. I think people place too much weight on RCTs.
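A similar sketch (again with invented numbers) shows how non-random site selection inflates estimates: if trials are run at the sites an agency expects to go well, the measured effect exceeds the effect at a typical site.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical true program effects at 100 candidate sites.
site_effects = rng.normal(loc=0.1, scale=0.5, size=100)

# The agency's (noisy) forecast of success is correlated with the
# true effect; trials are run only at the 10 most promising sites.
forecast = site_effects + rng.normal(scale=0.3, size=100)
chosen = np.argsort(forecast)[-10:]

print(f"mean effect across all sites: {site_effects.mean():.2f}")
print(f"mean effect at chosen sites:  {site_effects[chosen].mean():.2f}")
# The estimate from the chosen sites overstates what scaling the
# program to a typical site would deliver.
```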

On dissidents:

There are some dissidents, such as myself, who say things that people don’t want to hear. Angus Deaton, Lant Pritchett, Ross Levine, and Andrei Shleifer are examples. Dissidents are a positive feature of a system: they make it more robust. A consensus model is prone to groupthink. Even if we dissidents were wrong, it would still be important for people like us to challenge the mainstream consensus and make it rethink what it’s doing. Cass Sunstein wrote a book about this (Why Societies Need Dissent). There are probably many more dissidents whom we haven’t heard of. There are a lot of dissident aid workers who can’t speak publicly without losing their jobs, and so keep quiet or write anonymous blogs.