Social risk aversion is an undertheorized element in Tyler’s framework. Let’s take for granted that society should maximize sustainable growth, properly understood, subject to a near-absolute human rights constraint. There’s still the question of what counts as sustainable. Since the effects of many actions are uncertain, it’s not just a question of which actions maximize absolute or even expected growth over millions of years. Suppose an action resulted, with 50-50 probability, in either an instantaneous tripling of wealth or the complete destruction of human civilization. In expected terms, this is a 50% gain, an incredible return on global wealth worth trillions of dollars. But I expect Tyler would reject this gamble as unethical because of the 50% chance of losing everything.
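To make the arithmetic concrete, here is a minimal sketch of that gamble. The 50-50 triple-or-nothing numbers come straight from the example above; everything else is just illustrative bookkeeping of my own.

```python
# The hypothetical gamble from above: a 50% chance of tripling global wealth
# (gross multiple 3.0) and a 50% chance of losing everything (gross multiple 0.0).
outcomes = [3.0, 0.0]            # gross multiples of current global wealth
probabilities = [0.5, 0.5]

expected_multiple = sum(p * x for p, x in zip(probabilities, outcomes))
print(expected_multiple)          # 1.5 -> a 50% expected gain
print(min(outcomes))              # 0.0 -> the worst case that makes the gamble hard to accept
```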
There are a number of variables we can play with here: the number of outcomes, the percent return of each, the likelihood of each, and the population the gamble affects (all of human civilization vs. a subset). I would expect the acceptability of the gamble to vary positively with its expected value and with the lower bound of its outcomes, and negatively with the proportion of the population affected. I don’t think Tyler would reject all such gambles by adopting a minimax decision rule, as that would make economic growth nearly impossible. Society, in Tyler’s framework, should therefore be somewhat but not completely risk averse.
One way of instantiating that partial risk aversion would be discontinuous: minimax for gambles that involve the possibility of complete civilizational annihilation, and risk neutrality for everything else. Another would be to write the Wealth Plus function so that it explicitly accounts for uncertainty, as we do in economics with utility-of-wealth functions. What is the right answer here, Tyler? And if it’s the latter, how do we pick a coefficient of social risk aversion?
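To see how differently those two options behave, here is a toy sketch of each applied to the triple-or-nothing gamble above. The constant-relative-risk-aversion (CRRA) functional form, the particular gamma values, and the function names are my own illustrative assumptions, not anything from the book.

```python
import math

def crra_utility(wealth, gamma):
    """CRRA utility of wealth; gamma is the coefficient of (social) risk aversion.
    gamma = 0 is risk neutrality; larger gamma is more risk averse."""
    if wealth <= 0:
        # Ruin: worth zero when gamma < 1, unboundedly bad once gamma >= 1.
        return 0.0 if gamma < 1 else float("-inf")
    if gamma == 1:
        return math.log(wealth)
    return wealth ** (1 - gamma) / (1 - gamma)

def accept_discontinuous(outcomes, probabilities):
    """Option 1: minimax when annihilation is possible, risk neutrality otherwise."""
    if min(outcomes) == 0:                      # civilizational annihilation is on the table
        return False                            # minimax rejects any such gamble outright
    ev = sum(p * x for p, x in zip(probabilities, outcomes))
    return ev > 1.0                             # otherwise accept any positive expected return

def accept_expected_utility(outcomes, probabilities, gamma):
    """Option 2: evaluate the gamble with an explicitly risk-averse social utility."""
    eu = sum(p * crra_utility(x, gamma) for p, x in zip(probabilities, outcomes))
    return eu > crra_utility(1.0, gamma)        # compare against keeping current wealth

gamble = ([3.0, 0.0], [0.5, 0.5])               # the triple-or-nothing example
print(accept_discontinuous(*gamble))                  # False
print(accept_expected_utility(*gamble, gamma=0.0))    # True  (risk neutral)
print(accept_expected_utility(*gamble, gamma=0.3))    # True  (mild risk aversion still accepts)
print(accept_expected_utility(*gamble, gamma=0.5))    # False (modest risk aversion rejects)
print(accept_expected_utility(*gamble, gamma=2.0))    # False (ruin is infinitely bad)
```

Under these illustrative assumptions, the verdict on the very same gamble flips somewhere between a gamma of roughly 0.3 and 0.5, which is exactly why the choice of coefficient does real work.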
Another issue is whether there is information we could receive that ought to make us more or less socially risk averse. For example, Tyler ties the sustainability criterion to the degree of irreplaceability of civilization (p. 86). If civilizations are scarce, then that pushes us in the direction of social risk aversion. But if civilizations are abundant in the universe, then for consistency we should accept some level of commensurability among civilizations just as we do for individual lives (I agree with Tyler that we all do), and that should drive us in the direction of risk neutrality. Does the detection of ‘Oumuamua, an interstellar object of possible artificial origin—almost immediately after humans gained the capability to find such projectiles—change Tyler’s coefficient of social risk aversion? At the margin, civilizations now appear less scarce than they did before we saw ‘Oumuamua.
Or what about the question of whether the universe is finite or infinite, or whether the multiverse contains infinitely many universes? Perhaps we will never know for sure, and that uncertainty alone pushes us toward some risk aversion; but if there are infinitely many civilizations, even if most of them lie outside our light cone, that seems like an argument for social risk neutrality.
I suspect Tyler has thought about the issue of social risk aversion and that readers want to understand his view as much as I do.