Cathy O’Neil wants to puncture a particular perception of algorithms: the view that artificial intelligence and other mathematical decisionmaking models are a type of “fairy dust” we can sprinkle over all our problems to make them disappear. This is an important project, and O’Neil does vital work in her essay and her book pointing out where and how unquestioning faith in these processes can lead us astray. Libertarians should take heed of her warning: algorithms can certainly be skewed with historically biased data, or targeted toward unethical ends. Yet this should not lead us to storm Silicon Valley with torches and pitchforks. Algorithms are powerful tools that also have tremendous potential for good.
In her effort to push back on the fairy dust viewpoint, O’Neil goes too far in the opposite direction. In her lead essay, she claims there “is no such thing as a morally neutral algorithm” and that “Every algorithm has, at its heart, an ethical dilemma.” While this certainly sounds like a stern and prudent warning, it’s not immediately obvious that this is true. Does a simple AI trained to play checkers house an intractable moral debate? Does matrix multiplication need to be preemptively regulated for fear of opening Pandora’s Box?
If this seems silly, it’s because algorithms, in the abstract, are morally neutral; any moral weight they have comes from their real-world applications. And the decisions we make don’t gain or lose any ethical significance because we asked an algorithm to help. An individual wrongly imprisoned is just as much a tragedy whether human bias or “algorithmic bias” is responsible.
Put this way, it becomes clear that an algorithm is like any other tool – neither inherently good nor evil, and raising questions of ethics only when we ask it ethical questions. Like a hammer, it requires careful and thoughtful application to hit the nail rather than our thumb. Granted, it’s much easier to understand how a hammer works, and so perhaps easier to avoid causing unintended harm. But it doesn’t make sense to render an ethical evaluation of the hammer outside of its applications.
This matters because it indicates that we may want to govern algorithms differently depending on the specific use case, rather than as an impossibly broad blanket category. We probably have more to fear from an algorithm that can recommend a longer prison sentence than from one that gives us Tinder dates. And it’s worth pointing out that the vast majority of applications fall closer to the Tinder side of the spectrum. As such, we should demand the strongest legal protections and accountability measures when algorithms are used by governments, as governments are the only entities legally entitled to curtail our civil liberties.
Government Incentives Are Different
Libertarians will argue that private companies have strong reputational and competitive reasons to make sure they are using algorithms in a productive manner, and they will be skeptical of excessive government regulation as a result. And rightly so! After all, many companies are responsibly using algorithms to create amazing innovations, and we don’t want to put the brakes on those developments. Governments already have established legal frameworks for governing discriminatory outcomes in sensitive areas, and while the courts may need to interpret and tweak their application for the digital age, there is no reason to think we need a sweeping new regulatory apparatus. For the most part, we should let the private sector experiment with this developing technology and avoid buying into technopanic. New regulations, if necessary, should be narrowly tailored to address specific bad outcomes rather than theoretical ones.
The most compelling concerns about the improper use of AI and algorithms stem primarily from government use of these technologies. Indeed, all the tangible examples of harm O’Neil cites in her essay are the result of poor incentives and structures designed by government: hiring models at a public teaching hospital, teacher value-added models, recidivism risk models, and Centrelink’s tax-fraud detection model. The poor results of these kinds of interactions, in which governments purchase algorithms from private developers, could be viewed primarily as a failure of the government procurement process. Government contracting creates opportunities for rent-seeking, and the process doesn’t benefit from the same kinds of feedback loops that are ubiquitous in private markets. So it should be no surprise that governments end up with inferior technology.
Libertarians, then, should be especially supportive of strong oversight and accountability for the use of algorithms and artificial intelligence when the government is exerting its power over individuals in areas like criminal justice. Take the Wisconsin case of Eric Loomis, for example. He was deemed “high risk” by proprietary risk-assessment software and sentenced to six years in prison, partly because of that designation. He appealed, claiming that he should be able to view the algorithm and make arguments about its validity as part of his defense, but the Wisconsin Supreme Court denied his request. Regardless of the specific merits of the Loomis case, the larger idea of unviewable algorithms helping to send people to prison is deeply problematic; a fundamental aspect of due process is that defendants can understand why they were sentenced and that the process is open to public scrutiny. As the use of artificial intelligence in criminal justice continues to grow, this will only become more of an issue.
But the answer here isn’t to abandon the project or collapse into what I might call bias fatalism – the belief that bias is inevitable, so why bother. We need to push forward and advocate for strong institutional accountability over the use of AI – especially when people are being sent to prison. The safest path forward would be open sourcing every algorithm used in the criminal justice system. Requiring that all algorithms used to strip individuals of their civil rights be made available as open source software would mean government and civil society groups could regularly audit everything from the underlying data to the variable weights to help identify and root out problems. This may, on the margin, decrease the incentives for private developers to innovate and develop new solutions, but constraining the coercive power of the government requires a strong weighting toward transparency.
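To make the idea concrete, here is a minimal sketch of the kind of audit such a requirement would enable. It assumes a hypothetical risk model published as a simple logistic regression alongside its validation data; the variable names, weights, and records are all illustrative, not drawn from any real jurisdiction’s tool.

```python
# Minimal sketch of auditing a hypothetical open-sourced risk model.
# The model form (logistic regression), variable names, weights, and records
# are illustrative assumptions, not any real jurisdiction's tool.
import math

WEIGHTS = {"prior_convictions": 0.8, "age_at_first_arrest": -0.05, "unemployed": 0.3}
INTERCEPT = -2.0

def risk_score(defendant):
    """Predicted probability of reoffending under the published weights."""
    z = INTERCEPT + sum(w * defendant[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def false_positive_rate(records, threshold=0.5):
    """Share of people who did NOT reoffend but were still flagged 'high risk'."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = [r for r in non_reoffenders if risk_score(r) >= threshold]
    return len(flagged) / len(non_reoffenders)

# With the underlying validation data also published, auditors could compare
# error rates across demographic groups to look for disparate impact.
validation = [
    {"prior_convictions": 1, "age_at_first_arrest": 25, "unemployed": 0, "group": "A", "reoffended": False},
    {"prior_convictions": 4, "age_at_first_arrest": 17, "unemployed": 1, "group": "B", "reoffended": False},
    # ...a real audit would use the full published dataset
]
for group in ("A", "B"):
    subset = [r for r in validation if r["group"] == group]
    print(group, round(false_positive_rate(subset), 2))
```

Nothing in this sketch is sophisticated, and that is the point: once the weights and data are public, checks like this are cheap enough for any watchdog group to run on a regular basis.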
Open source is a fantastic tool in particular situations, like the justice system, but it seems unlikely to be a silver bullet for all possible government applications, from tax-fraud detection to child protection services. Sometimes we need to keep the exact weighting of the variables opaque to prevent gaming of the system, and sometimes transparency isn’t nearly as important as getting accurate predictions: although it is disputed, there is reason to believe there is some fundamental trade-off between accuracy and transparency in “black box” machine learning algorithms. But this brings us back to the earlier point that our weighing of the trade-offs between different governance systems should change based on the specific use case of the algorithm in question, rather than on the fact that it is “an algorithm.” There are many potential options, and a robust discussion around this topic needs to continue.
A Positive Vision
My biggest fear is that someone reading O’Neil’s work would go on to become an activist against the use of algorithms, rather than for their responsible use. O’Neil herself recognizes that the pre-algorithm world often isn’t a better alternative, but she doesn’t spend much time laying out a positive vision for adopting these tools in the first place. One could be forgiven for thinking that the takeaway is despair: if adopting mathematical decisionmaking models invites accusations of entrenching racism while offering no benefits, then why bother?
But the places where human bias is most prevalent offer some of the most exciting opportunities for algorithms. We appear to be really, really bad at administering justice on our own. We judge people based on how traditionally African American their facial features look, we penalize overweight defendants, we let unrelated factors like football games affect our decisionmaking, and, more fundamentally, we can’t systematically update our priors in light of new evidence. All this means we can gain a lot by partnering with AI, which can offset some of our flaws.
Algorithms can also be a catalyst for other, more fundamental reforms of our system. New Jersey, for example, recently reformed its bail system, largely replacing cash bail with a risk-assessment tool. The result has been a 20 percent decline in the jail population in the first six months alone. But without algorithms to augment the new system, it seems unlikely this reform would have happened.
We cannot afford to collapse into bias fatalism; our own human failings are too great to leave unaddressed. Greater integration of algorithms into our society poses risks, and O’Neil certainly raises many important questions. But with the proper safeguards, we can slowly find and remove many forms of machine bias while beginning to constrain our own.