Our Research: Full Index


Research related to: Methodology

Site Selection Bias in Program Evaluation

“Site selection bias” can occur when the probability that a program is adopted or evaluated is correlated with its impacts. I test for site selection bias in the context of the Opower energy conservation programs, using 111 randomized controlled trials involving 8.6 million households across the U.S. Predictions based on rich microdata from the first ten replications substantially overstate efficacy in the next 101 sites...
Hunt Allcott
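The mechanism the abstract describes can be illustrated with a stylized simulation (all numbers below are made up for the sketch, not taken from the paper): if early-adopting sites tend to have larger true effects, then extrapolating from the first sites overstates the effect in later ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration of site selection bias: 111 sites whose true
# treatment effects shrink with adoption order (earlier adopters happen
# to be the sites where the program works best).
n_sites = 111
adoption_order = np.arange(n_sites)
true_effects = 2.0 - 0.01 * adoption_order + rng.normal(0, 0.2, n_sites)

# Naive extrapolation: use the mean effect of the first 10 sites as a
# prediction for the remaining 101 sites.
prediction = true_effects[:10].mean()
actual = true_effects[10:].mean()

print(f"predicted effect: {prediction:.2f}")
print(f"actual mean effect in later sites: {actual:.2f}")
```

Because adoption order is correlated with the effect size, the prediction from the early sites is systematically too optimistic.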

From Local to Global: External Validity in a Fertility Natural Experiment

Experimental evidence on a range of interventions in developing countries is accumulating rapidly. Is it possible to extrapolate from an experimental evidence base to other locations of policy interest (from “reference” to “target” sites)? And which factors determine the accuracy of such an extrapolation? We investigate these questions by applying the Angrist and Evans (1998) natural experiment (the effect of having boy-boy or girl-girl as the first two children on incremental fertility and mothers’ labor force participation) to data from IPUMS International on 166 country-year censuses. We define an external validity function in which extrapolation error depends on covariate differences between reference and target locations, and find that smaller differences in geography, education, calendar year, and mothers’ labor force participation lead to lower extrapolation error...
Rajeev Dehejia, Cristian Pop-Eleches, and Cyrus Samii
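The idea of an external validity function can be sketched with simulated data (the covariate, effect sizes, and noise below are assumptions for illustration, not the paper's estimates): when site-specific effects vary with a covariate, extrapolation error grows with the covariate distance between reference and target sites.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: site-specific treatment effects that depend on a
# single covariate x (think of it as, e.g., average education).
n = 166                                         # one per country-year census
x = rng.uniform(0, 1, n)                        # site-level covariate
tau = 0.5 + 1.0 * x + rng.normal(0, 0.05, n)    # site-specific effect

ref = 0                                         # pick one reference site
dist = np.abs(x - x[ref])                       # covariate distance to targets
err = np.abs(tau - tau[ref])                    # extrapolation error

# A linear fit of error on distance plays the role of a (one-covariate)
# external validity function.
slope, intercept = np.polyfit(dist, err, 1)
print(f"extrapolation error grows with covariate distance at rate {slope:.2f}")
```

Smaller covariate differences between reference and target sites produce smaller extrapolation error, which is the qualitative pattern the abstract reports.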

Randomized Evaluation of Institutions: Theory with Applications to Voting and Deliberation Experiments

We study causal inference in randomized experiments where the treatment is a decision-making process or an institution such as voting, deliberation, or decentralized governance. We provide a statistical framework for the estimation of the intrinsic effect of the institution. The proposed framework builds on a standard set-up for estimating causal effects in randomized experiments with noncompliance...
Yves Atchade and Leonard Wantchekon

Noncompliance Bias Correction Based on Covariates in Randomized Experiments

We propose practical solutions for estimating causal effects when compliance with assignments is only partial and some of the standard assumptions do not hold. We follow the potential outcomes approach but, in contrast to Imbens and Rubin (1997), require no prior classification of compliance behavior. When noncompliance is not ignorable, it is known that adjusting for arbitrary covariates can actually increase the estimation bias. We propose an approach in which a covariate is adjusted for only when the estimate of the experiment's selection bias provided by that covariate is consistent with the data and with prior information on the study. Next, we investigate cases in which the overlap assumption does not hold and, on the basis of their covariates, some units are excluded from the experiment or, equivalently, never comply with their assignments. In that context, we show that consistent estimation of the causal effect of the treatment is possible based on a regression model estimate of the conditional expectation of the outcome given the covariates. We illustrate the methodology with several examples, such as the access-to-influenza-vaccine experiment (McDonald et al. (1992)) and the PROGRESA experiment (Schultz (2004)).
Yves Atchade and Leonard Wantchekon
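The standard noncompliance set-up this paper builds on can be sketched with a small simulation (the compliance rate and effect size below are assumptions for illustration): assignment Z is random, take-up D is only partial, and the Wald (instrumental-variables) ratio of intention-to-treat effects recovers the effect for compliers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical simulation of a randomized experiment with one-sided
# noncompliance: 60% of units comply with their assignment, the rest
# never take the treatment. True treatment effect is 2.0.
n = 10_000
z = rng.integers(0, 2, n)                  # random assignment
complier = rng.random(n) < 0.6             # compliance status
d = np.where(complier, z, 0)               # actual take-up
y = 1.0 + 2.0 * d + rng.normal(0, 1, n)    # outcome

itt_y = y[z == 1].mean() - y[z == 0].mean()   # effect of assignment on Y
itt_d = d[z == 1].mean() - d[z == 0].mean()   # effect of assignment on D
wald = itt_y / itt_d                          # complier average causal effect
print(f"Wald estimate: {wald:.2f}")
```

Scaling the intention-to-treat effect by the compliance rate undoes the dilution from never-takers; the covariate-selection and overlap-violation corrections proposed in the paper extend this baseline when its assumptions fail.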