In the second round of comments here, Brad Wilcox chose to focus on my argument that marriage promotion doesn’t work—that is, it doesn’t lead to more marriages. I have two brief responses to his comments.
First, Wilcox asserts that I have ignored salient evidence, and he mentions two studies. He writes:
But Cohen did not do justice to the existing literature on the HMI [Healthy Marriage Initiative] or of interventions like those used within it. For instance, he ignores evidence of modest success for the Oklahoma Marriage Initiative in fostering family stability (the longest running local effort working on this issue) and research that found that spending on the HMI was “positively associated with small changes in the percentage of married adults in the population” (italics in the original).
However, in my essay I linked to my book, Enduring Bonds: Inequality, Marriage, Parenting, and Everything Else That Makes Families Great and Terrible. There I dealt with the subject in much greater depth.
In particular, with regard to the claim that marriage promotion was associated with more marriage, the link is to this study (paywalled) in the journal Family Relations, by Alan Hawkins, Paul Amato, and Andrea Kinghorn. In my book I devote more than two pages to debunking this single study in detail. Since Wilcox appears disinclined to read my analysis in the book, I provide some key excerpts here:
[Hawkins, Amato, and Kinghorn] attempted to show that the marriage promotion money had beneficial effects at the population level.
They statistically compared state marriage promotion funding levels to the percentage of the population that was married and divorced, the number of children living with two parents or one parent, the nonmarital birth rate, and the poverty and near-poverty rates for the years 2000–2010. This kind of study offers an almost endless supply of subjective, post hoc decisions for researchers to make in their search for some relationship that passes the official cutoff for “statistical significance.” Here’s an example of one such choice these researchers made to find beneficial effects (no easy task, apparently): arbitrarily dividing the years covered into two separate periods. Here is their rationale: “We hypothesized that any HMI effects were weaker (or nonexistent) early in the decade (when funding levels were uniformly low) and stronger in the second half of the decade (when funding levels were at their peak).”
This is wrong. If funding levels were low and there was no effect in the early period, and then funding levels rose and effects emerged in the later period, then the analysis for all years should show that funding had an effect; that is the point of the analysis. This decision does not pass the smell test. Having determined that this decision would help them show that marriage promotion was good, they went on to report their beneficial effects, which were “significant” only if you allowed them a 90 percent confidence level rather than the customary 95 percent (a threshold that is kosher under some house rules).
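To make the point concrete, here is a small simulation of my own (illustrative numbers only, not the authors’ data): when there is no true effect at all, testing the full period and each half separately, and counting a win if any test clears the bar, produces “significant” findings far more often than the nominal error rate, especially at the 90 percent level.

```python
# Illustrative simulation (mine, not the Hawkins et al. analysis): with no
# true effect, testing the full period AND each sub-period, and claiming
# success if ANY test is "significant," inflates the false positive rate
# well above the nominal alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_states, n_sims, alpha = 51, 10_000, 0.10  # their 90 percent confidence level

hits_full = hits_any = 0
for _ in range(n_sims):
    funding = rng.exponential(1.0, n_states)   # unrelated to the outcomes
    early = rng.normal(0, 1, n_states)         # outcome, first half of decade
    late = rng.normal(0, 1, n_states)          # outcome, second half
    full = early + late                        # outcome, all years combined

    pvals = [stats.linregress(funding, y).pvalue for y in (full, early, late)]
    hits_full += pvals[0] < alpha
    hits_any += min(pvals) < alpha

print(f"false positive rate, all years only: {hits_full / n_sims:.3f}")  # ~0.10
print(f"false positive rate, best of three:  {hits_any / n_sims:.3f}")   # well above 0.10
```

That is the trouble with post hoc splits: every extra cut of the data is another lottery ticket.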
However, they then admitted that their effects were significant only with Washington, D.C., included. Our nonstate capital city is a handy wiggle-room device for researchers studying state-level patterns; you can justify including it because it’s a real place, or you can justify excluding it because it’s not really a state. It turns out that the District of Columbia had per capita marriage promotion funding levels about nine times the average. With an improving family well-being profile during the period under study, this single case (out of fifty-one) could have a large statistical effect on the overall pattern. Statistical outliers are like the levers you learned about in physics: the farther they are from the fulcrum, the more weight they can move. To deal with this extreme outlier, they first cut the independent variable in half for D.C., bringing it down to about 4.4 times the mean and a third higher than the next most-extreme state, Oklahoma (itself pretty extreme). That change alone cut the number of significant effects on their outcomes from six to three.
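It is easy to see how much work a single case like that can do. Here is a toy simulation (my own made-up numbers, not the study’s data): fifty unremarkable states with no true funding effect, plus one D.C.-like case with funding roughly nine times the mean and an improving outcome.

```python
# Toy illustration (made-up numbers, not the study's data): one extreme
# case can manufacture a "significant" slope across 51 observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

funding = rng.normal(1.0, 0.3, 50)   # 50 states with similar funding levels
outcome = rng.normal(0.0, 1.0, 50)   # no true effect of funding on outcome

dc_funding, dc_outcome = 9.0, 2.5    # one outlier: ~9x mean funding and an
                                     # improving family well-being profile
x = np.append(funding, dc_funding)
y = np.append(outcome, dc_outcome)

with_dc = stats.linregress(x, y)
without_dc = stats.linregress(funding, outcome)

# Typically "significant" with the outlier in, nothing without it:
print(f"with D.C.:    slope={with_dc.slope:+.3f}, p={with_dc.pvalue:.3f}")
print(f"without D.C.: slope={without_dc.slope:+.3f}, p={without_dc.pvalue:.3f}")
```

One observation, sitting far from all the rest, carries the whole “effect.”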
Then, performing a tragic coup de grâce on their own paper, they removed D.C. from the analysis altogether, and nothing was left. They didn’t quite see it that way, however: “But with the District of Columbia excluded from the data, all of the results were reduced to nonsignificance. Once again, most of the regression coefficients in this final analysis were comparable to those in Table 2 in direction and magnitude, but they were rendered nonsignificant by a further increase in the size of the standard errors.”
Really. These kinds of shenanigans give social scientists a bad name. (Everything that is nonsignificant is that way because of the [relative] size of the standard errors—that’s what nonsignificant means.) And what does “comparable in direction and magnitude” mean, exactly? This is the kind of statement one hopes the peer reviewers or editors would check closely. For example, with D.C. removed, the effect of marriage promotion on two-parent families fell 44 percent, and the effect on the poor/near-poor fell 78 percent. That’s “comparable” in the sense that they can be compared, but not in the sense that they are similar. Again, the authors helpfully explain that “the lack of significance can be explained by the larger standard errors.” That’s just another way of saying their model was ridiculously dependent on D.C. being in the sample and that removing it left them with nothing.
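The arithmetic behind that tautology is simple. A coefficient is “significant” when it is large relative to its standard error; shrink the coefficient, grow the standard error, and significance evaporates. A minimal sketch (the numbers are hypothetical, not the paper’s actual coefficients):

```python
# The tautology, in arithmetic: "nonsignificant" just means the coefficient
# is small relative to its standard error. Hypothetical numbers for
# illustration only (not the paper's actual estimates).
from scipy import stats

def two_sided_p(coef: float, se: float, df: int) -> float:
    """Two-sided p-value for a t-test of coef against zero."""
    t = coef / se
    return 2 * stats.t.sf(abs(t), df)

# e.g., a coefficient that shrinks and an SE that grows once D.C. is dropped:
print(two_sided_p(coef=0.50, se=0.24, df=49))  # ~0.04: "significant"
print(two_sided_p(coef=0.28, se=0.30, df=48))  # ~0.36: "nonsignificant"
```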
Oh well. Anyway, please keep giving the programs money, and us money for studying them: “In sum, the evidence from a variety of studies with different approaches targeting different populations suggests a potential for positive demographic change resulting from funding of [Marriage and Relationship Education] programs, but considerable uncertainty still remains. Given this uncertainty, more research is needed to determine whether these programs are accomplishing their goals and worthy of continued support.”
In short, this paper provides no evidence that HMI funding increased marriage rates or family well-being.
The other link Wilcox provides (“modest success for the Oklahoma Marriage Initiative”) goes to an essay on his website by the same Alan Hawkins. The evidence about Oklahoma’s “modest success” in that essay is limited to a broken link to another page on Wilcox’s site, and—I find this hard even to believe—an estimate of the effects of HMI funding in Oklahoma extrapolated from the paper I discussed above! That is, they took the very bad models from that paper and used them to predict how much the funding should have mattered in Oklahoma based on the level of funding there (and remember, Oklahoma was an outlier in that analysis). There was no estimate of the actual effect in Oklahoma. In fact, as I explained in a follow-up debunking, Oklahoma during this period experienced a greater decline in married-parent families than the rest of the country, even as it sucked up much more than its share of marriage promotion funds. This is, to put it mildly, not good social science. (The Oklahoma program, incidentally, is the subject of an excellent book by Melanie Heath: One Marriage Under God.)
Wilcox also argues that I am too demanding of federal programs, expecting demonstrable success. He concludes, “If the United States had adopted Cohen’s standard a half century ago, this would have resulted in the elimination of scores of federally funded programs that now garner hundreds of billions of dollars every year in public spending—from job training to Head Start.”
Amazingly, because Wilcox has made this argument before, I also addressed it in my book. Specifically, I wrote:
Of course, lots of programs fail. And, specifically, some studies have failed to show that kids whose parents were offered Head Start programs do better in the long run than those whose parents were not. But Head Start is offering a service to parents who want it, a service that most of them would buy on their own if it were not offered free. Head Start might fail at lifting children out of poverty while successfully providing a valuable, need-based service to low-income families.
As you can imagine, I am all for giving free marriage counseling to poor people if they want it (along with lots of other free stuff, including healthcare and childcare). And if they like it and keep using it, I might define that program as a success. But it’s not an antipoverty program.
Finally, in response to the idea that we just need more funding and more research to know if marriage promotion works, here’s my suggestion: in the studies testing marriage promotion programs, add a third group—in addition to the program and control groups—that simply gets the cash equivalent of the cost of the service (a few thousand dollars). Then check to see how well the group getting the cash is doing compared with the group getting the service. That’s the measure of whether this kind of policy is a success.
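Here is a sketch of what that comparison could look like, as a simulation. Everything in it is a hypothetical assumption (the arm sizes, the effect sizes, the outcome index); it illustrates the design, not any existing study.

```python
# Hypothetical sketch of the three-arm evaluation proposed above:
# randomize families to control, the relationship program, or an
# equivalent cash transfer, then compare outcomes across arms.
# All numbers are illustrative assumptions, not real estimates.
import numpy as np

rng = np.random.default_rng(42)
n_per_arm = 1_000

# Simulated outcome (e.g., an index of family well-being) under assumed
# true effects: none for control, small for the program, larger for cash.
outcomes = {
    "control": rng.normal(0.00, 1.0, n_per_arm),
    "program": rng.normal(0.05, 1.0, n_per_arm),
    "cash":    rng.normal(0.15, 1.0, n_per_arm),
}

for arm, y in outcomes.items():
    se = y.std(ddof=1) / np.sqrt(n_per_arm)
    print(f"{arm:8s} mean={y.mean():+.3f} (se={se:.3f})")

# The policy test: does the program beat handing out its own cost in cash?
diff = outcomes["program"].mean() - outcomes["cash"].mean()
print(f"program minus cash: {diff:+.3f}")
```

If the cash arm does as well as or better than the program arm, the program is not earning its budget as an antipoverty tool.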