5 Ways to Improve Your Impact Evaluation

Impact evaluations are supposed to tell us what works in development, and a lot of time and money goes into them. It's unfortunate, then, when they fail to report their results clearly. One of the things I found most shocking, looking through a large database of impact evaluations, was how often academic papers omitted information critical for interpreting their results and figuring out how well they might apply to other contexts. This blog post draws on data from over 400 studies that AidGrade found in the course of its meta-analyses. Here are five embarrassing things many papers neglect to report:

1) Attrition

It's normal for some people to drop out of a study. Attrition becomes a problem, however, when it differs between the treatment group and the control group, as this self-selection can bias the study's results. And while it is well known that attrition ought to be reported, only about 75% of papers did so.
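
To make this concrete, here is a minimal Python sketch of the kind of check a paper could report: attrition rates by arm, plus a simple two-proportion test for whether they differ. The sample sizes are made up for illustration.

```python
# Minimal sketch: reporting and testing differential attrition.
# All sample sizes below are hypothetical, for illustration only.
from statsmodels.stats.proportion import proportions_ztest

enrolled = {"treatment": 1000, "control": 1000}   # assigned at baseline
followed_up = {"treatment": 820, "control": 910}  # reached at endline

groups = ("treatment", "control")
dropped = [enrolled[g] - followed_up[g] for g in groups]
totals = [enrolled[g] for g in groups]

for g in groups:
    rate = 1 - followed_up[g] / enrolled[g]
    print(f"{g} attrition: {rate:.1%}")

# Two-proportion z-test: is attrition plausibly equal across arms?
stat, p_value = proportions_ztest(count=dropped, nobs=totals)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```

Reporting the rates themselves matters more than the test; readers can always run their own test if the numbers are there.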

2) The standard deviation of key variables

Without knowing how much variation there is in an outcome variable, it's hard to know whether a paper found a relatively large or relatively small effect. Why? Studies often report outcomes on scales particular to the paper, such as scores on a certain academic test. There is no way to compare results across papers that use different tests unless you standardize the data; then you can at least say that program A was found to affect test scores by 0.1 standard deviations, while program B found an effect size of 0.2 standard deviations.
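
Here is a minimal sketch of that standardization step, using simulated test scores (all numbers are hypothetical): the raw effect is divided by the control group's standard deviation, yielding an effect size in standard deviation units that can be compared across papers.

```python
# Minimal sketch: converting a raw effect into a standardized effect size.
# Data are simulated; a real paper would report (or let you compute)
# the mean difference and the standard deviation of the outcome.
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(loc=50, scale=10, size=500)    # test scores, control
treatment = rng.normal(loc=52, scale=10, size=500)  # test scores, treated

raw_effect = treatment.mean() - control.mean()      # in test-score points
sd = control.std(ddof=1)                            # control-group SD

effect_size = raw_effect / sd  # standardized mean difference
print(f"raw effect: {raw_effect:.2f} points")
print(f"standardized effect: {effect_size:.2f} SD")
```

Dividing by the control-group SD is one common convention; some meta-analyses use a pooled SD instead, which is why papers should report the SDs and let readers choose.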

3) Whether the results include people who did not take advantage of the program

Intent-to-treat (ITT) estimates capture an intervention's effect on everyone assigned to receive treatment, regardless of whether they actually took advantage of the program. The alternative is to estimate the treatment effect on the treated (TOT). For example, suppose that only 10% of people who were offered a bed net used it, and suppose bed nets were 90% effective at preventing malaria. The TOT estimate would be 90%, while the ITT estimate would be only 9%. Clearly, if the authors don't take care to explain which they are reporting, we really don't know how to interpret the results!
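
The arithmetic is simple but worth writing down. Here is a minimal sketch using the numbers above, under the usual assumption that people who didn't use the bed net got no benefit:

```python
# Minimal sketch of the ITT/TOT arithmetic from the bed net example.
# Assumes non-users receive no effect (the standard scaling logic).
take_up = 0.10   # share of those offered a bed net who actually used it
tot = 0.90       # effect among users: 90% reduction in malaria risk

# The ITT estimate dilutes the TOT by the take-up rate.
itt = take_up * tot
print(f"ITT estimate: {itt:.0%}")            # 9%

# Conversely, a reported ITT can be scaled back up to a TOT estimate
# by dividing by the take-up rate.
recovered_tot = itt / take_up
print(f"TOT estimate: {recovered_tot:.0%}")  # 90%
```

The scaling only works if the paper also reports the take-up rate, which is one more reason to report it.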

4) Characteristics of the context of the intervention

Are the people in your study rich or poor? It could affect how well they respond to a cash transfer. Does your intervention aim to decrease an infectious disease? It probably matters what the underlying infection rate is within the population, especially if people can catch it from each other. When did the intervention start and end relative to data collection? It is difficult to know what results mean without knowing anything about the people in the study, and the lack of this information makes comparing results across different settings harder still.

5) Comparable outcome variables

Finally, papers seem to "run away from each other" in terms of which outcome variables they cover. If one paper addresses the effect of HIV/AIDS education on the incidence of the disease, another will focus on whether people got tested. This makes sense given researchers' incentives to be the first to show a particular result and to differentiate their findings. However, a single paper cannot tell us how general its result is. For that, you need more studies, and to compare those studies, their outcome variables need to be as comparable as possible.

Better reporting is not an impossible problem to solve. The Experiments in Governance and Politics network (EGAP), for example, decided to fund projects clustered around comparable interventions and outcome measures. In psychology, it was journals that started demanding better reporting. Something similar should happen in economics to give researchers the right incentives to maximize the usefulness of their studies.