Can We Push for Higher Expectations in Evaluation? The Millennium Villages Project, continued

There's been some good discussion, both here in the comments of yesterday's post and on other blogs, about the Millennium Villages and what sort of evaluation standard they can (realistically) and should (ideally) be held to. Yesterday on Aid Thoughts, Matt was distressed that over 70 percent of the student body at Carleton University voted in favor of a tuition hike ($6 per student, per year, every year) to fund the Millennium Villages. The students apparently concluded that the MV program "offers the most effective approach for providing people with the tools they need to lift themselves out of extreme poverty. It is also the only approach that, if scaled, would see countless lives saved, an African Green Revolution and, amongst other things, every child in primary school."

How is it that students are coming away with that glowing impression of a project that, as Matt points out, has so far provided little evidence that its benefits are scalable, sustainable, persistent, or transferable?

Focusing on the results published for one specific MVP intervention, blogger Naman Shah pointed us to his analysis of the MVP scientific team's early malaria results. He found the project's claims to have reduced malaria prevalence “disproportionate to the evidence (especially given the bias of self-evaluation)” and marred by “bad science,” such as a failure to discuss pre-intervention malaria trends.

Chris Blattman stepped into the wider debate to offer an evaluator's reality check, questioning whether rigorous evaluation of the MVP is feasible. He wrote:

[T]here are other paths to learning what works and why. I’m willing to bet there is a lot of trial-and-error learning in the MVs that could be shared. If they’re writing up these findings, I haven’t seen them. I suspect they could do a much better job, and I suspect they agree. But we shouldn’t hold them to evaluation goals that, from the outset, are bound to fail.

But if industry best practice is moving toward funding interventions that are measurable and proven to work, why is the MVP encouraging the international community to shift limited aid resources toward a highly visible project that appears designed so it can't be rigorously evaluated?

Fact is, none of us know exactly what kind of evaluation the Millennium Villages Project is doing, or the reasoning behind its approach, since the project hasn't yet shared either in any detail. Perhaps someone at the MVP will respond to our request to weigh in.