Top 5 reasons why “failed state” is a failed concept

1) “State failure” is leading to confused policy making. For example, it has the military attempting overly ambitious nation-building and development as an approach to counter-terrorism, under the unproven assumption that “failed states” produce terrorism.

2) “State failure” has failed to produce any useful academic research in economics.

You would expect a major concept to be the subject of research by economists (as well as by other fields, but I am using economics research as an indicator). While there has been research on state failure, it has failed to generate any quality academic publications in economics. A search of the top economics journals [1] reveals that “state failure” (and all related variants like “failed states”) has been mentioned only once EVER. And this article mentions the concept only in passing. [2]

3) “State failure” has no coherent definition.

Different sources have included the following:

a) “Civil war”
b) “infant mortality”
c) “declining levels of GDP per capita”
d) “inflation” [3]
e) “unable to provide basic services”
f) “state policies and institutions are weak”
g) “corruption”
h) “lack accountability” [4]
i) “unwilling to adequately assure the provision of security and basic services to significant portions of their populations” [5] (wouldn’t this include the US?)
j) “inability to collect taxes”
k) “group-based inequality… and environmental decay.” [6]
l) “wars and other disasters”
m) “citizens vulnerable to a whole range of shocks” [7]

Most of these concepts are clear enough in themselves, and often apply to a large number of countries. But is there any good reason to combine them with arbitrary weights to get some completely unclear concept for a smaller number of countries? “State failure” is like a destructive idea machine that turns individually clear concepts into an aggregate unclear concept.

4) The only possible meaningful definition adds nothing new to our understanding of state behavior, and is not really measurable.

A narrower definition of “state failure” is: a loss of the monopoly of force, or the inability to control national territory. Unfortunately this is impossible to measure: how do you know when a state has control? The only data I have been able to find that might help comes from the Polity research project, which classifies the history of states as democracies or autocracies. [8] It describes “interregnums” that sound like the narrow “state failure” idea:

A "-77" code for the Polity component variables indicates periods of periods of “interregnum,” during which there is a complete collapse of central political authority. This is most likely to occur during periods of internal war.

If interregnums are indeed a good measure, then “state failure” is primarily just an indicator of war: the rate of “state failure” in the 20th century spiked during the two World Wars, and then rose again (though not as much) after decolonization, almost always in association with wars.
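For readers who want to check this themselves, here is a minimal sketch of how one might tally interregnum episodes by year from the Polity country-year file. The file path and column names (“year”, “polity”) are assumptions based on the standard Polity IV release, so verify them against the codebook.

```python
import pandas as pd

# Load the Polity IV country-year file (path and column names are assumptions;
# check them against the Polity IV users' manual).
polity = pd.read_csv("p4_country_year.csv")

# -77 is the Polity code for "interregnum": a complete collapse of
# central political authority.
interregnum = polity[polity["polity"] == -77]

# Count country-years in interregnum per calendar year. If the narrow
# "state failure" idea mostly tracks war, the spikes should appear around
# the two World Wars and after decolonization.
failures_per_year = interregnum.groupby("year").size()
print(failures_per_year)
```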

Even this measure does not really capture the narrow definition. Many countries were created as “states” by colonial powers rather than through any natural state-building process in which states gain more and more control of territory. Almost all ex-colonies failed to control national territory after independence, and many still do not do so today – many more than the usual number of “failed states.” (Africa is the most striking example, as shown in the great book by Herbst, States and Power in Africa. [9])

Hence, if we use the measure described above, then state failure is just synonymous with war, and if we don’t (as we probably shouldn’t), then “state failure” is something more common and harder to measure than the current policy discussion recognizes.

[Chart: “state failures” (Polity interregnums) over time]

5) “State failure” appeared for political reasons.

The real genesis of the “state failure” concept was a CIA State Failure Task Force in the early 1990s. Its first report, in 1995, said state failure is “a new term for a type of serious political crisis exemplified by recent events in Somalia, Bosnia, Liberia, and Afghanistan.” All four involved civil war, confirming the above point that “state failure” often just measures “war.” And we have just seen from the data (and common sense about decolonization) that either the claim of “newness” is false, or we are still not sure what “state failure” means.

Nevertheless, “state failure” became a hot idea in policy circles. Judging by the number of articles in Foreign Affairs mentioning “state failure” or its variants, the concept first appeared around the same time as the CIA task force, and then really took off after 9/11.

[Chart: Foreign Affairs articles mentioning “state failure” over time]

One can only speculate about the political motives for inventing an incoherent concept like “state failure.” It gave Western states (most notably the US superpower) much more flexibility to intervene where they wanted to (for other reasons): you don’t have to respect state sovereignty if there is no state. After the end of the Cold War, there was less hesitation to intervene because of the disappearance of the threat of Soviet retaliation. “State failure” was even more useful as justification for the US to operate with a free hand internationally in the “War on Terror” after 9/11.

These political motives are perfectly understandable, but they don’t justify shoddy analysis using such an undefinable concept.

It’s time to declare “failed state” a “failed concept.”


[1] Kristie M. Engemann and Howard J. Wall, A Journal Ranking for the Ambitious Economist, Federal Reserve Bank of St. Louis Review, May/June 2009, 91(3), pp. 127-39. We included all 69 journals that they studied, which they said were meant to capture all likely members of the top 50.

[2] Sujai J. Shivakumar, Towards a democratic civilization for the 21st century, Journal of Economic Behavior & Organization, Vol. 57 (2005) 199–204.

[3] a through d: Robert I. Rotberg, Failed States in a World of Terror, Foreign Affairs, Jul/Aug 2002, Vol. 81, Iss. 4, p. 127.

[4] e through h:  World Bank

[5] USAID Fragile State Strategy, 2005

[6] j through k: Fund for Peace

[7] l through m: Overseas Development Institute

[8] Monty G. Marshall and Keith Jaggers, Polity IV Project: Dataset Users' Manual, George Mason University and Center for Systemic Peace, 2009.

[9] Jeffrey Herbst, States and Power in Africa: Comparative Lessons in Authority and Control, Princeton University Press, 2000.


Teasing my friends at Center for Global Development: censoring for Hillary?

More updates on coverage of the big Clinton Development Speech, following up on the previous post: Chris Blattman has a negative take. Change.org has some negatives and some positives, so a mixed review. The Center for Global Development (CGD) blog is positive, although mostly only about the idea of the Secretary of State even giving a whole speech devoted to development. Duncan Green at Oxfam liked some of the specific ideas in the speech. The Chronicle of Philanthropy gave an overview, citing "mixed reviews."

Mead Over disagreed with my "selectivity" complaint, saying Clinton was right to be LESS selective in health (don't just do AIDS treatment, strengthen health systems!) I confess that Mead is right on that one.

The speech host, CGD, has now aggregated all the news coverage they could find, except, wait, they don't include any negative coverage... They cited Foreign Policy — but they just gave the FP posting of the speech itself, not the review column (mine) at FP.

Oh, my dearest CGD friends, this couldn’t be some unconscious censorship of a negative view, could it?

The speech at least seems a focal point for a good discussion! Please continue to post your comments.

UPDATE: CGD has now put out a new press coverage aggregate that includes negative coverage. I knew I could count on them, they're good people. (They certainly are a LOT more responsive than the USAID that Hillary wants to reform, which would either not answer or refuse to change or both.)


The power of searchers

[Photo: DARPA red balloon challenge]

The Defense Department just sponsored a contest in which it randomly placed 10 large red balloons across the United States and challenged teams to find them all. The first team to find all 10 would get $40,000.

The National Department of Supervisory Agencies for Universal Surveys for Many Different Types of Objects took on the challenge from its massive Washington DC headquarters. It dispatched instructions by secure mail pouch Circular #10-A643 to its 135 regional offices, notifying them to add large red balloons to the Watch List in their multiyear project for surveying the entire United States for Many Different Types of Objects. When last we heard, the regional offices were contacting Washington headquarters for clarification as to what diameter balloon should be considered “large.”

The winning team, at the MIT Media Lab, found all 10 balloons in 8 hours and 56 minutes. They used decentralized search through the Internet, spreading the message through web sites and social networks that there would be cash rewards to any chain of people that resulted in a balloon find. In the end, they drew on the efforts  of 4,665 people.

As Dr. Riley Crane, the leader of the MIT group, explained:

If you heard about our Web site and went to sign up directly, and you found a balloon, you would get $2,000…. If instead you signed up and then you told your friends, and one of your friends found a balloon, that person would still get $2,000 because they found the balloon. And you, because you signed someone up who found the balloon, would also be rewarded with $1,000...

Wow, the Defense Department has just simulated an entrepreneurial economy! Entrepreneurs search for things that will pay off, or search for other people who will find things that pay off.
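For the curious, here is a minimal sketch of the recursive payout rule Dr. Crane describes above. The halving of rewards further up the referral chain is my assumption about how the pattern continues beyond the two levels he quotes, and the function name is mine, not MIT's.

```python
def referral_payouts(chain, finder_reward=2000.0):
    """Split rewards up a referral chain, halving at each step.

    `chain` is ordered from the balloon finder back up the chain of people
    who recruited each other, e.g. ["finder", "recruiter", "you"].
    """
    rewards = {}
    amount = finder_reward
    for person in chain:
        rewards[person] = amount
        amount /= 2  # assumed: each step up the chain gets half as much
    return rewards

# The finder gets $2,000; whoever signed up the finder gets $1,000; and so on.
print(referral_payouts(["finder", "recruiter", "you"]))
# {'finder': 2000.0, 'recruiter': 1000.0, 'you': 500.0}
```

The point of the design is that recruiting friends pays off even if you never see a balloon yourself, which is exactly what turned 10 balloons into a 4,665-person search party.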

Searchers also work in aid, finding techniques or projects that work where you least expect to find them. That’s how aid found microcredit, conditional cash transfers, mobile banking, water purification tablets, nutritional supplements, oral rehydration therapy, and on and on.

The first and only time I met Bill Gates, he complained about my book: “what is all this nebulous crap about searchers?” The funny thing about very successful entrepreneurs is that not even they realize that they are part of a decentralized search network. They think it was all their own brilliance – the equivalent of the 10 (out of the 4,665) who actually spotted the balloons thinking “we are so brilliant at balloon finding.”

Hat tip to the great searcher Michael Clemens, for drawing our attention to the story.


Forensic analyst busts Victoria’s Secret

Forensic analysts look for abnormal data patterns that allow them to catch bad guys doing bad things, with many economics applications. One of their recent non-economics triumphs has been to catch Victoria’s Secret’s blatant photo-shopping of its ads, notably the example below (HT to Tyler Cowen as usual).

[Image: the doctored Victoria’s Secret ad]

The most obvious giveaway is that they snatched the young lady’s handbag out of her right hand, leaving her holding – nothing. This made the forensic photo expert suspicious, and he also caught Victoria’s Secret in more subtle photo-shopping. Most predictably, they increased the young lady’s bust size. (This is documented in way more expert detail than you really want.) Not only does Victoria’s Secret objectify women to be like their gorgeous models, but even the models have to be objectified to fit their concept of a fantasy woman.

I’m not a marketing expert, but I'm not sure “wear our stuff and you might look good enough to be photo-shopped” is the best ad campaign.

To Victoria’s Secret’s credit, after they got caught, they undid some of the photo-shopping and reposted the picture on their web site. They gave the young lady back her handbag. However, they did not undo the fake bust.

I realize that this is all pretty tame compared with the expectations raised by the headline. But Aid Watch NEVER exploits supermodels! Even here, I refrained from giving a far more sexy, hyper-objectified female example of photo-shopping.

Forensic economics does similar things with patterns in data rather than photos. Ray Fisman at Columbia famously showed that some Indonesian companies were corruptly linked to Suharto, because their stock prices would fall whenever Suharto got sick. Ray has made a specialty of this – he also caught some countries smuggling art and antiques, using discrepancies between a country’s reported exports of these items to country X and country X’s own data on imports of the same items from that country.
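The mirror-statistics trick is easy to sketch: compare what the exporting country says it shipped with what the importing country says it received, and flag large gaps. The data below are made up for illustration; Fisman’s actual analysis uses real trade statistics and adjustments (freight, insurance, misclassification) that this ignores.

```python
import pandas as pd

# Hypothetical mirror trade data for art and antiques, in US$ millions.
trade = pd.DataFrame({
    "exporter": ["A", "B", "C"],
    "exporter_reported": [10.0, 4.0, 2.5],  # exports to country X, as reported at origin
    "importer_reported": [55.0, 4.2, 2.4],  # imports from that country, as reported by country X
})

# The forensic red flag is a large excess of reported imports over reported
# exports: goods arriving that were never declared on the way out.
trade["gap"] = trade["importer_reported"] - trade["exporter_reported"]
suspicious = trade[trade["gap"] > 0.5 * trade["exporter_reported"]]
print(suspicious)
```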

So here’s the challenge – can we use forensic economics to keep tabs on aid agencies? Oops, I forgot, there are a lot fewer people who care about aid than about Victoria's Secret models. Can both of you please forward your suggestions on forensic aid evaluation?


Satire Wars! Owen Barder on "The universal cynics’ answer to why your aid project won’t work"

Happy 2010, Aid Watchers! New viewers on Jan 4: See Update 2 below. Since you don't really expect me to work on a holiday (do you?!), I will just start off the New Year with a link to Owen Barder's hilarious spoof. Are we in a satirical face-off?

Update 1: Yes we are! Aid Thoughts has responded to Owen's Universal Cynic with a very funny counter spoof on the Universal Project Advocate.

This update motivates me to correct the inexplicable omission of a link to my original satire "How to write about poor people," which may have motivated Owen's counter-blast, which in turn motivated Aid Thoughts' counter-counter-blast. Plus for good measure, Aid Watch readers' additional 15 pointers (through my editorial filter) on writing about poor people.

Any other related blogs that want to launch their own missiles of maximum sarcasm? Yes, I mean you Chris Blattman! And of course, we have got to hear from you, Wronging Rights, you're a natural at this.

New Update 2 (Jan 4): Wronging Rights came through, they are as hilarious as I expected! And it's their blogiversary, so please go there. Blattman blew me off with some flimsy excuse that he's giving two presentations at the All-Galactic Social Science Association, currently meeting.

It doesn't get any better than this, aid watchers. Somehow satire brings deeper insights than yet another aid and growth regression.


Peter Singer and I on Tough Love for Our NGOs at NYT (the 6 minute video excerpt)

I am so grateful and humbled that my message on the accountability of aid has finally reached this extremely high-profile audience -- wait, I just realized, there is NO audience, it's the holidays. For those of you who didn't have enough heavily spiked eggnog to listen to the whole 46-minute version, here is the New York Times' 6-minute excerpt of the conversation, emphasizing microcredit, evaluation, overhead costs, and the limits of generic "answers."

The audience gave us rave reviews (both of you):

There is a superb Bloggingheads debate between Peter Singer (author of The Life You Can Save) and Bill Easterly (author of I Hate Puppies and Christmas The White Man’s Burden). (Chris Blattman, what a card)

Peter Singer and Bill Easterly on Bloggingheads.TV (Tyler Cowen, Marginal Revolution, "assorted link". OK this is not really a review but at least we made it into one of the hundreds of links Tyler chooses.)

I sense a juggernaut slowly (VERY slowly) building up toward that day when we demand results of our NGOs, of our official aid agencies, of our favorite celebrities, until we will all be able to join hands and say "Accountable at Last! Accountable at Last! Accountable at Last!"


Madagascar textile workers ask President Obama to keep their jobs for Christmas, but nobody is listening

Here's an excerpt from an ad that appeared in the print edition of Politico today, paid for by the owners of apparel factories in Madagascar and one of their American investor partners. We have blogged about this seemingly obscure issue already many more times than you, our patient readers, may have wanted, but we see this as one of those rare, clear opportunities for the US to do good by first doing no harm. And yet US leadership seems blithely set on a course of action that will punish vulnerable textile workers and their families without touching the fortunes of the political elite responsible for Madagascar's current predicament.

President Obama: Please don’t harm one half million of the poorest people in Africa.

Your advisors have recommended that you decertify Madagascar as a beneficiary country for benefits under the Africa Growth and Opportunity Act (AGOA). This action will revoke the duty free treatment for products such as trousers, t-shirts and sweaters produced in Madagascar next year—re-imposing steep US import taxes as high as 32% on a polyester t-shirt. This action will make garments produced in Madagascar uncompetitive with similar products made in China — which already produces 100 times more apparel for the US market compared to Madagascar.

We ask you to consider the human tragedy of an action that will wipe out 100,000 good jobs in the Madagascar apparel sector created under AGOA. Surely there is a way to send a strong message to feuding politicians in Madagascar — without punishing innocent workers and their families who have nothing to do with the disputed control of Government. Your advisors will tell you that they are doing this to help people in Africa.

We respectfully disagree.

While we hope that the destabilized situation in the Government of Madagascar is only temporary, we know that the exit of our American retail customers from the country as a result of AGOA decertification will be permanent.

Over 28,000 of our employees signed a petition to President Obama asking him to save their jobs. A copy of that letter can be viewed at www.gefp.com.


The effects of foreign aid: Dutch Disease

This blog post was written by Arvind Subramanian, Senior Fellow at the Peterson Institute for International Economics and Center for Global Development, and Senior Research Professor at Johns Hopkins University.

The voluminous literature on the effects of foreign aid on growth has generated little evidence that aid has any positive effect on growth. This seems to be true regardless of whether we focus on different types of aid (social versus economic), different types of donors, different timing for the impact of aid, or different types of borrowers (see here for details).  But the absence of evidence is not evidence of absence. Perhaps we are just missing something important or are not doing the research correctly.

One way to ascertain whether absence of evidence is evidence of absence is to go beyond the aggregate effect from aid to growth and look for the channels of transmission. If we can find positive channels (for example, aid helps increase public and private investment), then the “absence of evidence” conclusion needs to be taken seriously. On the other hand, if we can find negative channels (for example, aid stymies domestic institutional development), the case for the “evidence of absence” becomes stronger.

One such channel is the impact of aid on manufacturing exports. Manufacturing exports have been the predominant mode of escape from underdevelopment for many developing countries, especially in Asia. So, what aid does to manufacturing exports can be one key piece of the puzzle in understanding the aggregate effect of aid.

In this paper forthcoming in the Journal of Development Economics, Raghuram Rajan and I show that aid tends to depress the growth of exportable goods. This will not be the last word on the subject because the methodology in this paper, as in much of the aid literature, could be improved.

But the innovation in this paper is not to look at the variation in the data across countries (which is what almost the entire aid literature does) but at the variation within countries across sectors. We categorize goods by how exportable they could be for low-income countries, and find that in countries that receive more aid, more exportable sectors grow substantially more slowly than less exportable ones. The numbers suggest that in countries that receive additional aid of 1 percent of GDP, exportable sectors grow more slowly by 0.5 percent per year (and clothing and footwear sectors, which are particularly exportable in low-income countries, grow more slowly by 1 percent per year).
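For readers who want the mechanics: this is a within-country interaction design, regressing sector growth on an exportability-times-aid interaction with country and sector fixed effects, so the aid effect is identified from differences between more and less exportable sectors inside the same country. The sketch below uses simulated data and hypothetical variable names; it illustrates the kind of specification described, not the paper's actual one.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a hypothetical country-sector panel just to show the mechanics.
rng = np.random.default_rng(0)
rows = []
for c in range(30):                                  # countries
    aid_gdp = rng.uniform(0.0, 0.15)                 # aid as a share of GDP
    for s in range(10):                              # sectors
        exportability = rng.uniform(0.0, 1.0)        # sector exportability score
        growth = (0.03
                  - 0.5 * exportability * aid_gdp    # built-in "Dutch Disease" effect
                  + rng.normal(0.0, 0.02))
        rows.append((f"c{c}", f"s{s}", growth, exportability, aid_gdp))
panel = pd.DataFrame(rows, columns=["country", "sector", "growth", "exportability", "aid_gdp"])

# Country and sector fixed effects absorb anything common to a country or a
# sector; the coefficient of interest is on the exportability x aid interaction.
# (A real version would also cluster standard errors by country.)
fit = smf.ols("growth ~ exportability:aid_gdp + C(country) + C(sector)", data=panel).fit()
print(fit.params["exportability:aid_gdp"])  # negative: exportables grow slower where aid is high
```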

We also provide suggestive evidence that the channel through which this effect is felt is the exchange rate. In other words, aid tends to make a country less competitive (reflected in an overvalued exchange rate) which in turn depresses the prospects of the more exportable sectors. In the jargon, this is the famous “Dutch Disease” effect of aid.

Our research suggests that one important dimension that donors and recipients should be mindful of (among many others that Bill Easterly has focused on) is the impact on the aid-receiving country’s competitiveness and export capability. That vital channel for long run growth should not be impaired by foreign aid.


1-800-How’s My Spending?

Local people are the experts on whether they are being well-served by a development project or organization.

This observation, simple on the face of it but downright revolutionary in its implications, is at the heart of the story presented by Dennis Whittle of GlobalGiving at last week’s NYU conference  on the privatization of aid.

Local people may be the experts, but for outsiders deciding where their donations can do the most good, getting access to local knowledge and acting on it appropriately requires real-time feedback loops that most aid projects lack.

Over a little more than a year, GlobalGiving combined staff visits, formal evaluation, third-party observer reports called visitor postcards, and internet feedback from local community members to create a nuanced, evolving picture of a community-based youth organization in Western Kenya that had received $8,019 from 193 individual donors through the GlobalGiving website.

Initially, youth in Kisumu were happy with the organization. Among other things, the founder used the money to fund travel and equipment for the local youth soccer team. But the first tip-off that something was going wrong came when a former soccer player complained through GlobalGiving’s online feedback form that “currently the co-ordinator is evil minded and corrupt.” The view that the founder had begun stealing donations and was stifling dissent among his members was expanded upon by other community members, visitors to the project, and a professional evaluator.

In the end, a splinter group broke off and started a new sports organization, and the community shifted their support to the new group. Reflecting the local consensus, GlobalGiving removed the discredited organization from its website. Marc Maxson and Joshua Goldstein, the authors of the case study, write:

We consider this story a seminal case because it illustrates that true community building is neither tidy nor predictable, but is nevertheless possible when feedback facilitates a dialogue….Rapidly spreading new technologies, particularly mobile phones, and SMS-to-web interfaces (e.g. twitter), now allow villagers to report continuously on project progress, and ultimately to guide implementers.

Just having the technology to create the feedback loop isn’t enough to make it happen, though. GlobalGiving first had to explicitly tell beneficiaries that they wanted to hear from them, and then spread the word effectively (through bumper stickers in this case).

In this story, the community consensus seemed clear. Of course, you could easily imagine another scenario in which the real-time feedback loop elicited wildly contrasting opinions, or provoked concerns about who was telling the truth, whether you were seeing a truly representative sample of opinions, or who might be promoting some hidden agenda. We wouldn’t want to dismiss these considerations, but we’ll happily take problems in aggregating or verifying feedback over the alternative, which has for far too long been very little feedback at all.


The secret to aid is people

Editors' Note: This will be the last Aid Watch post until Monday after the holiday weekend. Happy Thanksgiving!

Which attribute of an aid project makes it more likely to succeed:

  1. It will have rigorous evaluation based on some output indicators to make sure it’s working, OR
  2. It is staffed by people who really, really want it to succeed?

[Photo: Sister Shewaye Alemu, Area Director for Addis Ababa, introduces the staff of Marie Stopes]

This question came out of a tour of maternity and family planning clinics of Marie Stopes International in Addis Ababa. The dedicated staff of Marie Stopes courageously confronts a sensitive issue responsible for about a third of the deaths behind Ethiopia’s high maternal mortality rate – deaths of mothers during unsafe abortions. Marie Stopes workers offer safe abortions consistent with Ethiopian law. They also provide the whole package for reproductive choices AND safe childbirth for women: contraception alternatives, testing and counseling for HIV, prevention of mother-to-child transmission of HIV, prenatal care, and a clinic for difficult, life-threatening deliveries referred from Ethiopia’s official hospitals. (Although I’m NOT a fan of family planning fanatics who decide on behalf of the poor that they should have fewer children; I AM a fan of family planning people who respect their clients enough to just give them more choices.)

[Photo: Asfaw Fantaye, laboratory technician at Marie Stopes in Addis Ababa]

One afternoon’s visit is not enough to verify a great aid project, and my brief stop at the Marie Stopes project is pathetically inadequate. But since informal site check-ups are much cheaper and more universal than more rigorous methods like randomized controlled trials (RCTs), which will only EVER be available for a small sample of aid projects, it’s worth pointing out a few advantages of the humble site visit.

First, a site visit tells you something about whether facilities are clean, well maintained, and high quality; whether medicines and equipment are available; not to mention whether the health workers are present and whether there are patients waiting because they value the services. A government hospital in a regional town failed many of these same tests during another brief visit on this trip.

Second, gut instincts tell you at least a little something about the attitudes of the PEOPLE involved, workers and patients.[1] At Marie Stopes, I was very impressed with the eloquence and dedication of our host, Sister Shewaye Alemu, an Ethiopian who is the Addis Ababa Area Manager.  The Danish country director of Marie Stopes (the only non-Ethiopian employee), Grethe Petersen, told me her mission was to be the LAST non-Ethiopian country director of Marie Stopes.

RCTs, on the other hand, don’t have a good way of getting at the intangible human element in aid projects – is there good team spirit and morale? Are there good relationships between management and workers, and amongst coworkers, and between workers and patients? There are no scientific recipes on how to DO human relations, just tacit knowledge on managing people, and personal attributes like trust, humility, patience, and respect. Getting to know the people involved can give you a sense of how well this intangible stuff is going.

[Photo: Sister Shewaye shows saintly patience while pestered by inquisitive farangi]

RCTs could possibly identify the right actions, but if PEOPLE’S motivation to get good results is low, these actions will not be implemented in the right way, or not at all. This will usually be obvious in a site visit.

I am not saying that getting to know the aid workers and more rigorous methods like randomized controlled trials are mutually exclusive – both have value. But even one brief visit to Marie Stopes in Addis was enough to increase HOPE in the potential for determined PEOPLE to make a difference in aid, and was strangely more persuasive than randomized trials.

Think of the analogy to the private sector: venture capitalists don’t do randomized trials but they DO talk to the entrepreneur and inspect the operation in situ. We need venture capitalists and entrepreneurs as well as randomized experimenters in aid.


[1] A related observation: the best evaluations of actual project implementation I have ever read BY FAR are written by anthropologists, such as James Ferguson’s all-time classic on a World Bank project in Lesotho.


USTR Replies to Our Campaign to Save Madagascar Jobs

After sending an email to Constance Hamilton, Deputy Assistant U.S. Trade Representative for Africa, we received the following email in response:

Thank you, Mr. Easterly, for your email. We, of course, want to have as many sub-Saharan African countries as possible be eligible for AGOA benefits. We are working with all the countries -- including Madagascar -- to encourage their governments to abide by the AGOA eligibility criteria, particularly rule of law. There has been some recent progress amongst the Malagasy actors involved which gives us some hope. But at the end of the day, an unstable political environment, no regard for rule of law, etc. will undermine Madagascar's future, ongoing investment, and the lives of its people more than any one preference program or initiative. We hope that it is their understanding of that point that will keep them moving forward with restoring democracy and rule of law in Madagascar.

Regards, Connie Hamilton


Famine Cover-Ups vs. Fake Famines

Is Ethiopia having a famine? As is often the case, there are two forces pulling in opposite directions that make it hard to answer the question. On the one hand, the authoritarian government wants to cover up any famine to mute criticism of its performance. Ethiopia is due for elections next year, and the government is determined not to go the way of previous regimes toppled in part because of anger at the famines of the 1970s and 1980s. The government’s solution? Prohibit journalists from entering the worst-off areas, and fight tooth and nail with aid agencies to repress or delay information on humanitarian needs.

Complicating the situation further is that the government army is operating against insurgents in the suspected famine areas in the South and cites security reasons for not allowing outsiders to enter, so nobody really knows what is happening there.

On the other hand, NGOs have a well known tendency to cry wolf and exaggerate—to see famine where there is no famine—perhaps in order to raise more money for their own organization (I am echoing here fierce accusations of exactly this from Ethiopians I talked to during my visit who were NOT allied with the government).

For example, aid organizations and journalists saw signs of famine in Mali in the summer of 2005. Reuters reported that aid and donations were urgently needed in Mali “where the same famine that struck neighboring Niger is intensifying.” In another article, an Oxfam official was quoted saying: “They say there's no famine in Mali, but that's false. People aren't able to eat for three or four days. Forget the political or academic definitions.” While Mali had suffered a series of droughts and an invasion of locusts which exacerbated the chronic food insecurity there, deaths did not approach famine levels. The predicted high numbers of deaths from famine in Niger in 2005 and Malawi in 2002 also thankfully did not materialize. It’s impossible to know how last minute appeals for funds may have affected these outcomes, but the fact remains that desperate pleas to end exaggerated famines are a blunt  instrument in addressing the causes of chronic malnutrition and long term food insecurity.

In his classic book Famine Crimes, Alex De Waal observes that NGOs engage in “habitual inflation of estimates of expected deaths.” De Waal notes that during the pre-Christmas prime fundraising season, “‘One million dead by Christmas’ … has been heard every year since 1968 and has never been remotely close to the truth.”

Put into the current mix a credulous Western media that is happy to check the box "Ethiopia = famine," and is unable to handle subtleties like chronic food insecurity and chronic malnutrition vs. emergency famine. Between unreliable media, NGOs, and government, it is tragically difficult to know when tragedy is happening.


Is the agency that’s all about country ownership giving up on country ownership?

The Millennium Challenge Corporation was created in 2004 to be a different kind of aid agency, a model that would correct the mistakes of other development agencies and put lessons gleaned from decades of experience into practice. Belief in country ownership—the widely-accepted idea that country-led development is critical to the success of sustainable development—was one of MCC’s founding principles. At an event this fall the acting CEO said, “Country ownership is not just a catchphrase at the Millennium Challenge Corporation.  Though it has its share of challenges, it is—and will remain—a guiding principle.”

So why are people close to the organization saying that MCC’s commitment to ownership is eroding?

In the MCC model, a country competes with other countries to be pronounced eligible for funding by showing dedication to growth-oriented policies along 17 different indicators. From there, the country comes up with its own proposal, which should be developed with the input of the country’s citizens, “including women, non-governmental organizations, and the private sector.” The project, which should be “fully implemented, managed and maintained by the country,”  must be “able to measure both economic growth and poverty reduction,” and, for the real kicker, must be done in five years.

These guidelines conform to industry best practice. And the five-year limit on compacts creates pressure for countries to focus on the project and show results.

Only problem is, meeting these targets is easier said than done for even the most capable of poor country governments. In theory, five years is enough to finish the projects, but in practice, those five years have at times been whittled away by the time it takes to complete thorough pre-project studies and build the capacity of a brand new agency to guide the process from start to finish.

MCC is also responding to pressure from Congress—which determines the level of funding the organization will receive each year—to get compacts signed and dollars out the door. “We would love to convince people that our constituency is the poor...but we answer to Congress,” said an MCC employee. As a result, some observers say that the agency has shifted from its intended supervisory role to taking on greater participation in program design. The idea is to empower the host country government, said the MCC employee, but “sometimes we had to take the pen and write the terms of reference ourselves.”

Evidence for this shift can be found on the organization’s website, where publicly posted information on procurements shows that the MCC is acting as counterparty to contracts for firms to work on compact development.

In the past, MCC hired independent engineering firms to provide due diligence and implementation oversight for infrastructure projects.  Now, the scope of work allows the MCC to hire firms to work on compact design as well. This seemingly small and easily-overlooked detail actually represents a “fundamental shift in the way we operate,” according to a source with detailed knowledge of the matter.

MCC leadership disagreed with this interpretation of their contracting procedures. “I can assure you that we’re not departing from our full embracing of our concept of host country ownership. I think we have gotten more sophisticated over the last five years in terms of what it looks like,” said Dick Day, Managing Director for Compact Development.

“We have very much learned from our experience,” said Day. “Some countries have sufficient capacity to do it all on their own.... Others need and want us to come alongside them and help them along the way, including during the compact development phase.” MCC’s experience with Senegal, Jordan and Moldova illustrate that country governments have varying levels of capacity to complete project design work in a timely way, he said.

Day described two areas of “evolution” in the way MCC does business. The first is the “overall concept of being more engaged as a partner.” The second is shifting the bulk of the compact design work (feasibility study, environmental impact assessment and some early engineering design work) to the period before the compact is signed with the host country government. This allows the host country governments to take more of the five years to carry out implementation of the project, but it may also give the MCC more latitude to increase its level of engagement in designing the project when required, or – as a skeptic might phrase it less carefully – “do it for them.”

From at least one MCC employee’s perspective, these changes in MCC procedures mean “we are now contracting with engineering firms to do design work that the countries used to do, clear and simple.”


The New Evangelists: Bill and Melinda Gates Spread the Good News on Global Health Aid

People usually come to the capital to criticize the government, Bill Gates joked at the start of his speech on Tuesday in Washington, but “we’re here to say two words you don’t often hear about government programs: Thank you.” The Gateses’ mission wasn’t just to express gratitude, but to sell the simple—and, some might argue, simplistic—message that US government investment in global health works. They weren’t asking for money for themselves (the Gates foundation already has so much money to spend each year that they discourage individual donations), but rather to lobby US policy makers and citizens to continue the increasing American investment in global health.

Americans only hear the horrible stories about disease and malnutrition in the developing world, the Gateses said. The idea behind their new public advocacy initiative, the Living Proof Project, is to tell the stories of people in the developing world who are alive today because of US interventions in global health.

The reduction in mortality for children under five, from 20 million deaths per year in 1960 to eight million per year in 2008, is, Bill Gates said, one of the biggest accomplishments of the last 100 years. This happened because of higher incomes and smart spending on global health, and Bill says the US is largely to thank for it.

The Gateses talked about success in decreasing prices and increasing access to anti-retroviral treatments for AIDS patients, and praised the “American tax dollars” that have enabled “slow but real progress” towards finding an AIDS vaccine.

Bill Gates also talked about making “substantial progress” against malaria for the first time since the 1970s, arguing that scaled up indoor spraying and bednet distribution since 2004 has led to large reductions in malaria cases. [We’ve written posts on the Gateses’ erroneous use of African malaria data three separate times, with spectacularly non-existent effect on the Gateses.]

Gates went on to address some arguments that “skeptics” (who could they possibly be?) might level against the optimistic approach to global health.

There have been problems with corruption, he acknowledged, “if you look back at the history of aid” and “some of it ended up in the pocket of the local dictator.” But today’s global health spending, he argued, is different because it is more measurable. With health interventions, “we can measure the impacts, we can make sure the vaccines are getting to the children,” he said, though he left unclear how you identify the corrupt link in the chain from funding to inputs to outputs involving many separate actors.

To those concerned that aid creates a culture of dependency, Gates again pointed at history, saying that nearly twice as many countries in the 1960s received aid compared to today. Countries like Egypt, Brazil and Thailand, he said, are “not net recipients of aid.”  He predicted that the world will see increasing numbers of countries currently on aid becoming self-sufficient. We hope that includes the many countries that have become steadily more aid-dependent for five decades.

There’s been little substantive commentary on the speech in the news or blogosphere so far. Judging from the tenor of the enthusiastic real-time comments from viewers during the speech (“What can we do? Who to call or write?” and “I love hearing about the positive progress we have made...it is so rare that this fact is broadcasted,” for example), the Gateses were preaching to the choir.

This NPR interview,  though just seven minutes long, is actually meatier than the Gateses’ speech. In it, the interviewer gets Bill and Melinda Gates to talk honestly about why the Gates Foundation behaves differently than governments (“we can take risks where a government won’t or can’t”), and how their entrepreneurial approach to development problems allows them to acknowledge failures and change their approach midstream. Great!

Melinda Gates retells the story of delivering the rotavirus vaccine (but without the relentlessly optimistic spin from the speech). They worked with a scientist to develop a lifesaving vaccine, but failed with something much more mundane: producing the right packaging. They didn’t realize that they needed to put the doses in small containers so that it could be refrigerated all the way from the lab to remote locations in Nicaragua. She said: “You just learn from it and say okay, that’s a small mistake we made, and we’re not going to make that mistake again.” Kudos again! Would you mind if we called you “searchers”?

But all of this left us with one big unanswered question.  If the Gateses indeed have a much-improved aid model, then why this big campaign to defend US government aid agencies (including USAID), whom we and many others have documented do not change in response to – or even acknowledge – failures?


Econometric methodology for human mating

I recently helped one of my single male graduate students in his search for a spouse. First, I suggested he conduct a randomized controlled trial of potential mates to identify the one with the best benefit/cost ratio. Unfortunately, all the women randomly selected for the study refused assignment to either the treatment or control groups, using language that does not usually enter academic discourse.

With the “gold standard” methods unavailable, I next recommended an econometric regression approach. He looked for data on a large sample of married women on various inputs (intelligence, beauty, education, family background, did they take a bath every day), as well as on the output: marital happiness. Then he ran an econometric regression of output on inputs. Finally, he gathered data on available single women on all the characteristics in the econometric study and made an out-of-sample prediction of marital happiness. He visited the lucky woman who had the best predicted value in the entire singles sample, explained to her how he calculated her nuptial fitness, and suggested they get married. She called the police.
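For the methodologically curious (and the similarly reckless), here is a minimal sketch of the approach my student was following – regress the output on the inputs in a “training” sample of married women, then make an out-of-sample prediction for the singles sample. The data and variable names are entirely made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

def draw_sample(n):
    # Made-up inputs for illustration only.
    return pd.DataFrame({
        "intelligence": rng.normal(100, 15, n),
        "education": rng.integers(10, 22, n).astype(float),
        "daily_bath": rng.integers(0, 2, n).astype(float),
    })

# Step 1: "training" sample of married women, inputs plus output (marital happiness).
married = draw_sample(200)
married["happiness"] = (0.02 * married["intelligence"]
                        + 0.10 * married["education"]
                        + 0.50 * married["daily_bath"]
                        + rng.normal(0, 1, 200))

# Step 2: regress output on inputs.
fit = smf.ols("happiness ~ intelligence + education + daily_bath", data=married).fit()

# Step 3: out-of-sample prediction on the singles sample; propose to the argmax.
singles = draw_sample(50)
predicted = fit.predict(singles)
print("Candidate with highest predicted nuptial fitness:", predicted.idxmax())
# (The police-involvement step is left as an exercise for the reader.)
```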

After I bailed him out of jail, he seemed much more reluctant than before to follow my best practice techniques to find out “who works” in the marriage market. Much later, I heard that he had gotten married. Reluctantly agreeing to talk to me, he described an unusual methodology. He had met various women relying on pure chance, used unconscious instincts to identify one woman as a promising mate, she reciprocated this gut feeling, and without any further rigorous testing they got married.

OK, all of us would admit love is not a science. But there are many other areas where we don’t follow rational decision-making models, and instead skip right to a decision for reasons that we cannot articulate. A great book on this is by Gerd Gigerenzer, Gut Feelings: The Intelligence of the Unconscious. There is also the old idea that not all useful knowledge can be explicitly written down, but some of it is “tacit knowledge” (see any writings by Michael Polanyi).

Is the aid world more like love or science? Probably somewhere in between. Obviously, there is a BIG role for rigorous research to evaluate aid interventions. Yet going from research to implementation must also involve a lot of gut instincts and tacit knowledge. I know experienced aid workers who say that they can tell right away from a site visit whether the project is working or not.

I don’t know if this is true, but certainly implementation involves non-quantifiable factors like people who have complicated motivations and interactions. A manager of an aid project must figure out how to get these people to do what is necessary to get the desired results. The manager (who also has complicated motivations) must adjust when the original blueprint runs into unexpected problems, which again relies more on acquired tacit knowledge than on science. (How to keep the bed net project going when the nets were first impounded and delayed at customs, the truck driver transporting the nets got drunk and didn’t make the trip, the clinic workers are off at a funeral for one of their coworkers, the foreign volunteer is too busy writing a blog and smoking pot, and the local village head is insulted that he was not consulted on the bed net distribution.) Certainly something similar is true also in running a private business or starting a new one – there is no owner’s manual for entrepreneurship.

So for donors and managers of aid funds, is finding the right project to fund more like econometrics or is it more like falling in love? How about a bit of both?


Will Aid Escalation Finally Crash in the Mountains of Afghanistan?

There has been a remarkable escalation in the scale and intrusiveness of aid interventions over the years (this was one of the major conclusions of my survey paper on aid to Africa). It seems to be reaching the reductio ad absurdum in the current debate on whether to escalate US intervention in Afghanistan.

Let’s review the record:


Big Plans vs. Real Plans

This guest post, by Jeffrey Barnes, Portfolio Manager at Abt Associates, is in response to yesterday's What must we do to end world poverty? At last, an answer.

Aid Watch and other Easterly work, notably “The White Man’s Burden,” rail against the big plans of development. As this body of work rightly points out, there is a lot of paternalism involved in the Big Plans to “save” the poor. Easterly’s preferred alternative is “searching,” which at times sounds like semi-spontaneous experimentation that miraculously results in solutions to social and economic problems. Clearly, this is an exaggeration.

Top down vs. bottom up is not an either/or option. It is a question of balance. Top down plans made in Washington, New York and Geneva by people with little stake in or understanding of local situations are generally worthless. But bottom up approaches can also benefit from outside expertise, new technologies or external support. At some level, even searchers must plan.

Planning in the commercial world can be hugely successful.  A good business plan can mobilize a lot of capital and create significant wealth. What is it about this type of planning that can be emulated in the development world?

My experience of development planning leads me to the conclusion that there are, in fact, very few real plans in development. There are a lot of documents that are called plans, but these documents don’t really qualify as plans. A real plan describes in specific detail how the human, financial and technological resources that are under your control will be mobilized to produce a measurable result in a given time period.  This is what business plans do.  An entrepreneur (think searcher) would never begin implementing a business plan until all the financing, staff and equipment were in place.

Development plans fail to meet this definition of plans, either because the resources described in the plan are not under anyone’s control, or because the plan lacks specificity.   In the first case, development plans are better described as wish lists or advocacy documents. Think of all those national AIDS plans that describe every possible strategy against AIDS, but with the sources of financing and the implementing agencies to be determined. Such non-plans don’t serve as a guide to implementation—they are simply advocacy tools for governments to get donors to make pledges for different parts of the national wish list.   Even as one part of the plan starts, other parts of the plan remain inactive awaiting donor support. This is akin to starting a bicycle trip and hoping to find the handlebars and pedals along the way.  Little wonder that so few of these plans arrive at their intended destination.

The second kind of non-plan one sees in the development world is more accurately described as a process statement.  While financing may be in place, this kind of plan lacks specifics on who will be doing what, when and where. Most project proposals fall into this category. A proposal will name the principal staff and the stakeholders to be consulted, and it will describe the technical approaches and working principles to be adhered to (e.g. pro-poor, environmentally safe, gender equitable). But all the details of actual implementation are to be worked out later.

Eventually, such proposals might lead to some real plans, but this means that a large share of project “implementation” time is actually spent making and revising annual plans.  Sometimes, proposals lack specificity because so many of the variables (government stability, complementary activity by other groups) are outside the control of the plan’s authors. In business, plans with too many unknowns are not financed.  But in development, wishful thinking gets the better of such assessments.  Little wonder that so little is achieved in the typical project life cycle.

So please, Aid Watch, don’t give planning a bad name. Searchers use real plans, not wish lists or process statements. A real plan is made at the operational level, with little or no ambiguity about what resources are available, and who is to perform what activity at which time. If the development industry would ensure that all its plans met these criteria, it could prevent the top down processes that are so often doomed to failure.


Millennium Villages Comments, We Respond

We received the following comment this morning from the Director of Communications at the Earth Institute, regarding the Aid Watch post published yesterday, Do Millennium Villages Work? We May Never Know. My response is below.

-----

It’s unfortunate that the author of this post chose to publish such an uninformed blog on the Millennium Village Project’s monitoring and evaluation activities. She and William Easterly at Aid Watch were invited to meet with our scientists and discuss the science and research behind the Villages and the details of the MVP monitoring and evaluation process before publishing any commentaries. Instead the author hastily chose to publish without talking with MVP researchers. The inaccuracy of the blogpost is a reflection of the lack of rigor and objectivity with which the Aid Watch authors approach this subject time and again.

For readers interested in reading factually accurate information about the Millennium Villages project and its monitoring and evaluation strategy, please see: http://www.millenniumvillages.org/progress/monitoring_evaluation.htm

Erin Trowbridge
Director of Communications
The Earth Institute

-----

Dear Erin,

I had hoped for a different kind of response, one that addressed the specific points made by the piece. Your only comment on content is to say the piece was "uninformed." It would be helpful if you would clarify exactly what you think the piece got wrong, and offer what you view as the correct information to replace it. I would be happy to post such a response on Aid Watch.

Your comment is in any case an inaccurate characterization of our interaction over the past two months. I sent seven separate emails to you and one to CEO John McArthur beginning in mid-August, asking for information on the overall MV evaluation strategy, and eventually asking specifically for an explanation of how the thinking of the team had evolved from 2006 (when Jeff Sachs said there were no controls) to 2009 (when we were informed that there are comparison villages for 10 MV sites). Your responses were represented fairly in the blog post that Aid Watch published yesterday. We expressed willingness to meet with the research scientists after you offered this; it is unfortunate that we were unable to find a mutually convenient time to meet before our publication deadline, which we had already postponed several times.

Thank you for sharing further details of the MVP evaluation process with the information that has now appeared on the link you provided. Interested readers can now independently judge for themselves the merits and demerits of the ongoing MVP evaluation.

Frequent readers here may tire of hearing it, but it is our belief that greater transparency and a greater willingness on the part of donors and aid practitioners to share information with supporters and skeptics alike will make aid better.

Laura

Laura Freschi
Associate Director
Development Research Institute


Can We Push for Higher Expectations in Evaluation? The Millennium Villages Project, continued

There's been some good discussion—here in the comments of yesterday’s post and on other blogs—on the Millennium Villages and what sort of evaluation standard they can (realistically) and should (ideally) be held to. Yesterday on Aid Thoughts, Matt was distressed that over 70 percent of the student body at Carleton University voted in a tuition hike—$6 per student, per year, every year—to fund the Millennium Villages. The students apparently concluded that the MV program "offers the most effective approach for providing people with the tools they need to lift themselves out of extreme poverty. It is also the only approach that, if scaled, would see countless lives saved, an African Green Revolution and, amongst other things, every child in primary school."

How is it that students are coming away with that glowing impression from a project that—as Matt points out—has yet provided little evidence that its benefits are scalable, sustainable, persistent or transferable?

Focusing in on results published on one specific MVP intervention, blogger Naman Shah pointed us to his analysis of the MVP scientific team’s early malaria results.  The project's claims to have reduced malaria prevalence were “disproportionate to the evidence (especially given the bias of self-evaluation)” and suffered from some “bad science” like a failure to discuss pre-intervention malaria trends.

Chris Blattman stepped into the wider debate to offer an evaluator's reality check, questioning whether rigorous evaluation of the MVP is feasible.  Chris said:

[T]here are other paths to learning what works and why. I’m willing to bet there is a lot of trial-and-error learning in the MVs that could be shared. If they’re writing up these findings, I haven’t seen them. I suspect they could do a much better job, and I suspect they agree. But we shouldn’t hold them to evaluation goals that, from the outset, are bound to fail.

But if the industry standard of best practice is moving towards funding interventions that are measurable and proven to work, why is the MVP encouraging the international community to shift limited aid resources towards a highly-visible project apparently designed so that it can't be rigorously evaluated?

Fact is, none of us know exactly what kind of evaluation the Millennium Villages Project is doing, or the reasoning behind why they’re doing what they’re doing, since they haven't yet shared it in any detail.  Perhaps someone at the MVP will respond to our request to weigh in.


Do Millennium Villages work? We may never know

Jeffrey Sachs’ Millennium Villages Project has to date unleashed an array of life-saving interventions in health, education, agriculture, and infrastructure in 80 villages throughout ten African countries. The goal of this project is nothing less than to “show what success looks like.” With a five-year budget of $120 million, the MVP is billed as a development experiment on a grand scale, a giant pilot project that could revolutionize the way development aid is done.

But are they a success? To address that question, we need to know: What kind of data is being collected? What kinds of questions are being asked? Three years into one of the highest-profile development experiments ever, who’s watching the MVPs?

The most comprehensive evaluation of the project published so far is a review by the Overseas Development Institute, a large UK-based think tank. The review covered two out of four sectors, in four out of ten countries, with data collected in the MVs only, not in control villages. The report’s authors cautioned that “the review team was not tasked and not well placed to assess rigorously the effectiveness and efficiency of individual interventions as it was premature and beyond the means of the review.”

Despite this, a Millennium Villages blog entry on Mali says, “With existing villages showing ‘remarkable results,’ several countries have developed bold plans to scale up the successful interventions to the national level.” Millennium Promise CEO John McArthur described Sachs’ recent testimony to the Senate Foreign Relations Committee: “Sachs noted the success of the Millennium Villages throughout Africa and the tremendous development gains seen in the project over the past three years.”

The Evaluation that Isn’t?

In contrast, evaluation experts have expressed disappointment in the results they’ve seen from the Millennium Villages Project to date. This isn’t because the MVPs fail to produce impressive outcomes, like a 350 percent increase in maize production in one year (in Mwandama, Malawi), or a 51 percent reduction in malaria cases (in Koraro, Ethiopia). Rather, it has to do with what is—and is not—being measured.

“Given that they’re getting aid on the order of 100 percent of village-level income per capita,” said the Center for Global Development’s Michael Clemens in an email, “we should not be surprised to see a big effect on them right away. I am sure that any analysis would reveal short-term effects of various kinds, on various development indicators in the Millennium Village.” The more important test would be to see if those effects are still there—compared with non-Millennium Villages—a few years after the project is over.

Ted Miguel, head of the Center of Evaluation for Global Action at Berkeley, also said he would “hope to see a randomized impact evaluation, as the obvious, most scientifically rigorous approach, and one that is by now a standard part of the toolkit of most development economists. At a minimum I would have liked to see some sort of comparison group of nearby villages not directly affected by MVP but still subject to any relevant local economic/political ‘shocks,’ or use in a difference-in-differences analysis.” Miguel said: “It is particularly disappointing because such strong claims have been made in the press about the ’success’ of the MVP model even though they haven't generated the rigorous evidence needed to really assess if this is in fact the case.”
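To make Miguel's suggestion concrete, here is a minimal difference-in-differences sketch under the simplest possible assumptions: one baseline and one follow-up measurement in Millennium Villages and in a comparison group. All data and variable names are hypothetical; a real evaluation would also have to worry about how comparison villages were chosen, parallel pre-trends, and clustered standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_villages = 40

# Hypothetical village-level panel: an outcome (say, malaria prevalence)
# at baseline (post=0) and follow-up (post=1), for Millennium Villages
# (mv=1) and matched comparison villages (mv=0).
panel = pd.DataFrame({
    "village": np.repeat(np.arange(n_villages), 2),
    "mv": np.repeat(rng.integers(0, 2, n_villages), 2),
    "post": np.tile([0, 1], n_villages),
})
panel["outcome"] = (0.40
                    - 0.05 * panel["post"]                  # common time trend
                    - 0.10 * panel["mv"] * panel["post"]    # built-in "treatment effect"
                    + rng.normal(0, 0.03, 2 * n_villages))

# The coefficient on mv:post is the difference-in-differences estimate:
# the change in MVs relative to the change in the comparison villages.
did = smf.ols("outcome ~ mv * post", data=panel).fit()
print(did.params["mv:post"])  # should be close to -0.10
```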

An MVP spokesperson told me that they are running a multi-stage household study building on detailed baseline data, the first results from which will be published in 2010. The sample size is 300 households from each of the 14 MV “clusters” of villages (which comprise about 30,000-60,000 people each). She also said that their evaluation “uses a pair-matched community intervention trial design” and “comparison villages for 10 MV sites.”

But Jeff Sachs noted in a 2006 speech that they were not doing detailed surveying in non-MV sites because—he said— “it’s almost impossible—and ethically not possible—to do an intensive intervention of measurement without interventions of actual process.” A paper the following year went on to explain that not only is there no selection of control villages (randomized or otherwise), there is also no attempt to select interventions for each village randomly in order to isolate the effects of specific interventions, or of certain sequences or combinations of interventions.

CEO John McArthur declined to comment on this apparent contradiction. The MVP spokesperson could say only that the evaluation strategy has evolved, and promised a thorough review of their monitoring and evaluation practices in 2010.

Comparison villages could be selected retroactively, but the MVP has failed to satisfactorily explain how they chose the MVs, saying in documents and in response to our questions only that they were “impoverished hunger hotspots” chosen “in consultation with the national and local governments.” If there was no consistent method used in selecting the original villages (if politics played a role, or if villages were chosen because they were considered more likely to succeed), it would be difficult to choose meaningful comparison villages.

Living in a Resource-Limited World

Imagine that you are a policymaker in a developing country, with limited resources at your disposal. What can you learn from the Millennium Villages? So far, not very much. Evaluations from the MVP give us a picture of how life has changed for the people living in the Millennium Villages, and information about how to best manage and implement the MVP.

Sandra Sequeira, an evaluation expert at London School of Economics, sums up the quandary neatly. “Their premise is that more is always better, i.e. more schools, more clinics, more immunizations, more bed nets. But we don't live in a world of unlimited resources. So the questions we really need to answer are: How much more? Given that we have to make choices, more of what?”

These are tough questions that the Millennium Villages Project will leave unanswered. For a huge pilot project with so much money and support behind it, and one that specifically aims to be exemplary (to “show what success looks like”), this is a disappointment, and a wasted opportunity.
