Terence Kealey’s insightful essay is likely to provoke a vigorous debate among libertarians on the utility of publicly funded science. He concludes that “the public funding of research has no beneficial effects on the economy.” I will argue that the situation, at least in a prominent environmental science, is worse, inasmuch as the more public money is disbursed, the poorer the quality of the science, and that there is a direct cause-and-effect relationship.
This runs counter to the reigning myth that science, as a search for pure truth, is ultimately immune to incentivized distortion. Indeed, James M. Buchanan once stated that he thought science was one of the few areas not subject to public choice influences. In his 1985 essay The Myth of Benevolence, Buchanan wrote:
Science is a social activity pursued by persons who acknowledge the existence of a nonindividualistic, mutually agreed-on value, namely truth…Science cannot, therefore, be modelled in the contractarian, or exchange, paradigm.
In reality, public choice influences on science are pervasive and enforced through the massive and entrenched bureaucracies of higher education. The point of origin is probably President Franklin Roosevelt’s November 17, 1944, letter to Vannevar Bush, who, as director of the wartime Office of Scientific Research and Development, managed and oversaw the Manhattan Project.
Roosevelt expressed a clear desire to expand the reach of the government far beyond theoretical and applied physics, specifically asking Bush, “What can the Government do now and in the future to aid research activities by public and private organizations?” In response, in July 1945, Bush published Science, The Endless Frontier, in which he explicitly acknowledged Roosevelt’s more inclusive vision, saying,
It is clear from President Roosevelt’s letter that in speaking of science that he had in mind the natural sciences, including biology and medicine…
Bush’s 1945 report explicitly laid the groundwork for the National Science Foundation, the modern incarnation of the National Institutes of Health, and the proliferation of federal science support through various federal agencies. But, instead of employing scientists directly as the Manhattan Project did, Bush proposed disbursing research support to individuals via their academic employers.
Universities saw this as a bonanza and added substantial overhead charges: a typical public university imposes a 50% surcharge on grant-funded salaries and fringe benefits (at private universities the rate can approach 70%).
These fungible funds often support faculty in the many university departments that do not recover all of their costs; thus does the Physics Department often support, say, Germanic Languages. As a result, the universities suddenly became wards of the federal government, in thrall to extensive programmatic funding. The roots of statist “political correctness” lie as much in the economic interests of the academy as they do in the political predilections of the faculty.
As an example, I draw attention to my field of expertise, climate change science and policy. The Environmental Protection Agency claims to base its global warming regulations on “sound” science, a science for which the federal government is virtually the sole provider of research funding. In fact, climate change science and policy is a highly charged political arena, and its $2 billion per year in public funding would not exist save for the perception that global warming is very high on the nation’s priority list.
The universities and their federal funders have evolved a codependent relationship. Again, let’s use climate change as an example. Academic scientists recognize that only the federal government provides funds sufficient to produce the volume of original research required for tenure at the higher levels of academia; their careers therefore depend on it. Meanwhile, elected officials who hope to gain from global warming science will lose political support for the issue if science dismisses it as unimportant.
The culture of exaggeration and the disincentives to minimize scientific and policy problems are an unintended consequence of the way we now do science, which is itself a direct descendant of Science, The Endless Frontier.
All the disciplines of science with policy implications (by far most of them) compete with one another for finite budgetary resources, resources that are often allocated via various congressional committees, such as those charged with responsibility for environmental science, technology, or medical research. Each constituent research community must therefore demonstrate that its scientific purview is more important to society than those of its colleagues in other disciplines. So, to continue the example, global warming inadvertently competes with cancer research and other fields.
Imagine if a NASA administrator at a congressional hearing, upon being asked if global warming were of sufficient importance to justify a billion dollars in additional funding, replied that it really was an exaggerated issue and that the money should be spent elsewhere, on more important problems.
It is a virtual certainty that such a reply would be one of his last acts as administrator.
So, at the end of this hypothetical hearing, having answered in the affirmative (perhaps more like, “hell yes, we can use the money”), the administrator gathers all of his department heads and demands programmatic proposals from each. Will any one of these individuals submit one which states that his department really doesn’t want the funding because the issue is perhaps exaggerated?
It is a virtual certainty that such a reply would be one of his last acts as a department head.
The department heads now turn to their individual scientists, asking for specific proposals on how to put the new monies to use. Who will submit a proposal with the working research hypothesis that climate change isn’t all that important?
It is a virtual certainty that such a proposal would mark his last year as a NASA scientist.
Now that the funding has been established and disbursed, the research is performed under the obviously supported hypotheses (which may largely be stated as “it’s worse than we thought”). When the results are submitted to a peer-reviewed journal, they are going to be reviewed by other scientists who, being prominent in the field of climate change by virtue of their research productivity, are funded by the same process. They have little incentive to block any papers consistent with the worsening hypothesis and every incentive to block one that concludes the opposite.
Can this really be true? After all, what I have sketched here is simply an hypothesis that public choice is fostering a pervasive “it’s worse than we thought” bias in the climate science literature, with the attendant policy distortions that must result from relying upon that literature.
It is an hypothesis that tests easily.
Let us turn to a less highly charged field in applied science to determine how to test the hypothesis of pervasive bias, namely the pedestrian venue of the daily weather forecast.
Short-range weather models and centennial-scale climate models are largely based upon the same physics, derived from the six interacting “primitive equations” that describe atmospheric motion and thermodynamics. The difference is that in the weather forecasting models the initial conditions change: twice a day, a simultaneous sample of global atmospheric pressure, temperature, and moisture is taken in three dimensions, measured by ascending weather balloons and, increasingly, by downward-sounding satellites. The “boundary conditions,” such as solar irradiance and the transfer of radiation through the atmosphere, do not change. In a climate model, the base variables are calculated rather than measured, and the boundary conditions, such as the absorption of infrared radiation in various layers of the atmosphere (the “greenhouse effect”), change over time.
It is assumed that the weather forecasting model is unbiased, that is, free of remaining systematic errors, so that each run, every twelve hours, has an equal probability of predicting, say, that it will be warmer or colder next Friday than the previous run did. If this were not the case, the chances of a warmer or a colder forecast would be unequal. In fact, in the developmental process for forecast models, the biases are subtracted out and the output is forced to have a bias of zero, and therefore an equal probability of a warmer or colder forecast.
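To make the zero-bias assumption concrete, here is a minimal sketch of that calibration step, with invented numbers; the uniform, additive correction is an illustrative simplification, not any forecast center’s actual procedure:

```python
import numpy as np

# Hypothetical verification sample: past forecasts vs. observations (deg C).
# These numbers are invented for illustration.
forecasts    = np.array([21.3, 18.9, 25.1, 17.4, 22.8])
observations = np.array([20.1, 18.0, 24.2, 16.3, 21.9])

bias = np.mean(forecasts - observations)  # mean systematic error (+1.0 C here)
print(f"estimated bias: {bias:+.2f} C")

# Calibration subtracts the bias, forcing the mean error to zero, so a
# corrected forecast is equally likely to verify too warm or too cold.
new_forecast = 23.5
print(f"corrected forecast: {new_forecast - bias:.2f} C")
```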
Similarly, if the initial results are unbiased, successive runs of climate models should have an equal probability of producing centennial forecasts that are warmer or colder than the previous one, or of projecting more or less severe climate impacts. It is a fact that the climate change calculated by these models is not a change from current or past conditions; rather, it is the product of subtracting the output of a model run with low greenhouse-gas concentrations from that of a run with higher ones. Consequently, the systematic biasing errors have been subtracted out, a rather intriguing trick. Again, the change is one model minus another, not the standard “predicted minus observed.”
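The differencing argument is easy to see in miniature. In the toy sketch below (my own illustration, assuming a purely additive systematic error), the same bias contaminates both runs and therefore cancels exactly in the subtraction:

```python
def model_run(true_temperature, systematic_bias):
    # Toy stand-in for a climate model: truth plus a systematic error.
    return true_temperature + systematic_bias

bias = 1.5                        # same unknown error in both runs
low_ghg  = model_run(14.0, bias)  # run with low greenhouse-gas levels
high_ghg = model_run(17.0, bias)  # run with high greenhouse-gas levels

# The "climate change" is model minus model, not predicted minus observed,
# so the additive error cancels: the result is 3.0 whatever the bias is.
print(high_ghg - low_ghg)
```

Note that the cancellation holds only for errors that are identical in both runs; an error that depends on the climate state itself does not subtract out.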
The climate research community actually believes its models are zero-biased. An amicus brief in the landmark Supreme Court case Massachusetts v. EPA, by a number of climate scientists claiming to speak for the larger community, explicitly stated this as fact:
Outcomes may turn out better than our best current prediction, but it is just as possible that environmental and health damages will be more severe than best predictions…
The operative words are “just as possible,” indicating that climate scientists believe they are immune to public choice influences.
This is testable, and I ran such a test, publishing it in an obscure journal, Energy & Environment, in 2008. I, perhaps accurately, hypothesized that a paper severely criticizing the editorial process at Science and Nature, the two most prestigious general science journals worldwide, was not likely to be published in such prominent places.
I examined the 115 articles that had appeared in these two journals during a 13-month period in 2006 and 2007, classifying each as “worse than we thought,” “better,” or “neutral or cannot determine.” Twenty-three were neutral and were removed from consideration; nine were “better” and 83 were “worse.” Under the hypothesis of unbiased equiprobability, this is equivalent to tossing a coin 92 times and coming up with nine or fewer heads or tails. The probability that this would occur in an unbiased sample can be calculated from the binomial probability distribution, and the result is striking: there would have to be 100,000,000,000,000,000 iterations of the 92 tosses for there to be merely a 50% chance of observing even one realization of nine or fewer heads or tails.
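For readers who want to check the arithmetic, here is a minimal sketch of the coin-toss calculation, assuming a fair-coin null hypothesis; the exact figures depend on the tail-counting convention used:

```python
from math import comb, log, log1p

n, k = 92, 9  # 92 non-neutral papers; 9 in the smaller ("better") category

# Under the unbiased hypothesis each paper is a fair coin toss, so the
# chance of a split at least this lopsided (9 or fewer heads, or 9 or
# fewer tails) is twice the lower tail of the binomial distribution.
lower_tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
p = 2 * lower_tail
print(f"two-tailed probability: {p:.2e}")  # on the order of 1e-16

# Number of repeated 92-toss experiments needed for a 50% chance of
# seeing at least one such split: solve 1 - (1 - p)**N = 0.5 for N.
print(f"experiments for a 50% chance: {log(0.5) / log1p(-p):.1e}")
```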
In subsequent work, I recently assembled a much larger sample of the scientific literature and, while the manuscript is in preparation, I can state that my initial result appears to be robust.
Kealey tells us that there is no relationship between the wealth of nations and the amount of money that taxpayers spend on scientific research. In reality, it is “worse than he thought.” At least in a highly politicized field such as global warming science and policy, the more money the public spends, the worse the quality of the science.