Let’s take the “con” out of aid and growth econometrics.
Steve Radelet seems to be retreating from his claim that aid (unconditionally) raises growth; his original paper was one source of what he now calls the “fictitious” claim that “the impact is large.” Radelet now claims only a “modest” effect. Of course, even a modest effect on growth would be wonderful news for aid: simply handing over aid money (of Radelet’s preferred “short-impact” type) to a poor country’s government would produce a permanent increase in the annual growth rate of personal income. It would also imply remarkably irresponsible behavior on the part of the donors, who could have reallocated aid from other types to “short-impact aid” (which is only half of aid now), and from countries experiencing Radelet’s “diminishing returns to aid” to countries that still had high growth payoffs. Just think of the lost opportunities for poverty reduction in Nigeria, which gets tiny amounts of aid as a percent of GDP. All that poverty reduction could have been achieved by giving much more money to the country’s corrupt politicians, whom most Nigerians (who have apparently not yet heard about the latest positive aid-and-growth regressions) condemn for wasting the country’s past oil and aid money.
Alas, all these statistical results are built on sand, as Professor Lal rightly notes. They suffer from the econometric equivalent of “even a broken clock is right twice a day.” If you hypothesized that your clock worked (even though its hands were stuck at twelve o’clock), and you tested the clock only on readings gathered around 12 noon and 12 midnight, you could “prove” that your clock works.
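To see the broken-clock fallacy in miniature, here is a hypothetical Python sketch (the numbers and setup are invented purely for illustration, not drawn from any aid data): a clock stuck at twelve passes with near-zero error when tested only at times around noon and midnight, and fails badly when tested at times drawn at random.

```python
import random

def stuck_clock(true_time):
    """A broken clock: always reads 12:00, no matter the true time."""
    return 12.0

def dial_error(reading, true_time):
    """Distance in hours on a circular 12-hour clock face."""
    d = abs(reading % 12 - true_time % 12)
    return min(d, 12 - d)

random.seed(0)

# Cherry-picked test: sample only within a minute of noon and midnight.
picked = [random.choice([0.0, 12.0]) + random.uniform(-1/60, 1/60)
          for _ in range(1000)]
# Honest test: sample true times uniformly over the whole day.
honest = [random.uniform(0, 24) for _ in range(1000)]

for label, times in [("cherry-picked times", picked), ("random times", honest)]:
    avg = sum(dial_error(stuck_clock(t), t) for t in times) / len(times)
    print(f"mean error on {label}: {avg:.2f} hours")
```

On a typical run the stuck clock is off by under a minute on the cherry-picked sample but by roughly three hours on average over the whole day. The clock did not change; only the choice of test data did.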
I am not accusing any individual researcher of blatantly selecting data to fit the hypothesis that “aid raises growth.” Unfortunately, the same thing often happens unconsciously, when econometric researchers report statistically significant results and leave unreported those that show zero effects. It need not even involve the same researcher: the one with significant results gets publicized, while the one finding zero effects gives up and works on something else. The selection happens not by choosing some data points over others, but by choosing different slices of the data, adding some right-hand-side variables and leaving out others. The aid and growth literature suffers from this syndrome big time, with its dizzying array of control variables, its extreme flexibility in how aid itself enters the growth regression, and a bias toward reporting, out of the myriad possible regressions, those that show significant effects of aid on growth.

The only real check on this is for subsequent researchers to add new data to the old results while following the exact same specification: the original literature could not have anticipated the new data, so the specification will fit it only if the “aid works” hypothesis is actually true. Another possible check is to make small, hard-to-dispute improvements to the specification and see whether the results still hold. Such exercises have consistently failed to confirm the positive aid and growth results; they have found zero effect of aid on growth. Q.E.D.
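To see how much mischief this kind of specification search can do, here is a minimal Monte Carlo sketch in Python (pure numpy; the sample size, variable names, and coefficients are all invented for illustration, not taken from any aid dataset). Growth is generated with an aid effect of exactly zero, but because aid is correlated with the candidate controls, running the regression under every possible choice of controls turns up many specifications in which aid appears significantly positive.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 8                              # "countries" and candidate controls

controls = rng.normal(size=(n, k))
# Aid is correlated with the first two controls (think initial income, policy).
aid = 0.8 * controls[:, 0] + 0.8 * controls[:, 1] + rng.normal(size=n)
# TRUE data-generating process: growth depends on those controls, NOT on aid.
growth = 1.0 * controls[:, 0] - 0.5 * controls[:, 1] + rng.normal(size=n)

def aid_tstat(subset):
    """OLS t-statistic on aid, controlling only for the given subset."""
    X = np.column_stack([np.ones(n), aid, controls[:, list(subset)]])
    beta = np.linalg.lstsq(X, growth, rcond=None)[0]
    resid = growth - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])   # column 1 is aid
    return beta[1] / se

# Run the growth regression under every possible choice of controls.
tstats = [aid_tstat(s) for r in range(k + 1)
          for s in itertools.combinations(range(k), r)]
sig = sum(t > 1.96 for t in tstats)
print(f"{len(tstats)} specifications; {sig} show a 'significant' positive "
      f"aid effect, though the true effect is zero (max t = {max(tstats):.1f})")
```

Report only the significant runs and you have “aid raises growth”; report all 256 and you have nothing. That is the whole case for insisting that new data be tested against the old specification, rather than against a fresh round of searching.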