In an astonishing New York Times op-ed last week, former homeland security advisor Richard Falkenrath greeted news of a technology ban announced by the rulers of the United Arab Emirates with “approval, admiration and perhaps even a touch of envy.” In the name of national security, the UAE—soon to be mimicked by Saudi Arabia and India, among others—was threatening to limit the use of BlackBerry mobile devices unless their Canadian manufacturer, Research in Motion (RIM), agreed to restructure its secure network to allow the government easier access to encrypted messages.
Of course, the Emirates had their own conception of what counts as “national security”: The announcement came mere weeks after the arrest of Badr Ali Saiwad Al Dhohori, an 18-year-old activist who had been using BlackBerry’s Messenger service to plan a (canceled) protest against rising gas prices. Indeed, for those familiar with encryption technology, it was hard to see the proposed BlackBerry bans as a useful anti-terror measure: Committed criminals and jihadists would have no difficulty securing their communications with freely available software that could be installed on any number of laptops or smartphones—and would have advance warning not to rely on the security provided by RIM’s network.
But the proposed ban soon led RIM to agree to accommodate a number of authoritarian regimes known to practice pervasive monitoring and filtering of the Internet as a means of political and social control. The message was delivered loud and clear to its real targets: Ordinary BlackBerry users who might have incidentally benefited from the network’s security, but lacked the resources, commitment, and technical savvy of criminals and terrorists.
The BlackBerry controversy helps to illustrate why perhaps the most frequently invoked metaphor—one might say cliché—in surveillance studies is the Panopticon, a prison designed for total, centralized surveillance, conceived by the English political philosopher Jeremy Bentham but popularized by French theorist Michel Foucault. The significance of the Panopticon for our purposes is that it is an explicitly architectural metaphor: It exerts a structural disciplinary power that extends far beyond the individual acts of observation it enables. Ideally, the warders can put up their feet and watch Seinfeld reruns all day, trusting that it will be enough for the prisoners to be aware that someone always could be watching them. A group of academics and journalists who brought a lawsuit challenging the NSA’s warrantless wiretapping program in 2006 alleged that just such a “chilling effect” was afflicting their communication with foreign sources.
I mention this because it highlights my lone point of agreement with the critics of Glenn Greenwald’s masterful—though, depressingly, far from comprehensive—summary of the explosive growth of American surveillance since 9/11. For it is, as Paul Rosenzweig argues, a “pointillist” portrait that emphasizes particular “abuses” and “excesses.” And this really does risk missing the forest for the trees—though pace Rosenzweig, I believe that if anything it understates the potential problems with the burgeoning surveillance state. More disturbing than the quantitative increase in surveillance Greenwald documents—and it is disturbing, when we consider that the sheer number of National Security Letters and FISA warrants issued annually dwarfs any plausible estimate of the number of terror supporters in the United States—are the qualitative and structural shifts in the nature of that surveillance.
Some of those qualitative changes are themselves driven by increases in the quantity of surveillance requests. A Sprint executive, recorded by security researcher Christopher Soghoian at last year’s ISS World surveillance conference, explained how his firm was dealing with a growing number of demands from law enforcement:
[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.
Debates about surveillance policy typically focus on the formal legal constraints on government monitoring, but physical and technological architecture are often just as important in determining the real scope of surveillance in practice — a point pithily summed up by Lawrence Lessig’s maxim that “code is law.” Consider, as a thought experiment, the difference between a society in which police may, pursuant to some legal process, install cameras and microphones in private homes, and a society in which, pursuant to precisely the same process, they may activate the cameras and microphones required to be installed in all homes.
The plummeting cost of data storage, the increasing ubiquity of network communications, and the vastly increased capacity of law enforcement to fruitfully analyze “transactional data”—subject to far more anemic protections than the contents of communications—all combine to make an extraordinary degree of monitoring both more feasible and more attractive to investigators, even holding constant the legal framework within which that monitoring occurs. A few decades ago, intelligence agents might have found it convenient to compare a list of everyone reading unsavory publications with a list of people who share group memberships with a suspicious number of subjects already under investigation—but they would have had no practical way of doing so. Now it is not only feasible; inundated telecom providers and profit-seeking contractors are racing to offer plug-and-play solutions that make the process ever cheaper and easier.
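To make concrete just how trivial this kind of cross-referencing has become, here is a minimal sketch in Python. Everything in it (the names, the lists, the threshold) is a hypothetical illustration, not a description of any agency’s actual tooling; the point is only that a query an analyst of a few decades ago could not practically run is now a few lines of code over the right databases.

```python
# Toy cross-reference: flag subscribers who share more than a threshold
# number of group memberships with subjects already under investigation.
# All data here is invented for illustration.

subscribers = {"alice", "bob", "carol", "dave"}  # hypothetical subscriber list

memberships = {  # hypothetical affiliation records, keyed by person
    "alice": {"chess_club", "activist_forum"},
    "bob": {"gun_club"},
    "carol": {"activist_forum", "book_circle"},
    "dave": {"book_circle"},
}

suspects = {  # subjects already under investigation and known affiliations
    "eve": {"activist_forum"},
    "mallory": {"activist_forum", "book_circle"},
}

SHARED_GROUP_THRESHOLD = 1  # the "suspicious number" of shared affiliations

# Pool every group any suspect belongs to, then intersect per subscriber.
suspect_groups = set().union(*suspects.values())

flagged = {
    person
    for person in subscribers
    if len(memberships.get(person, set()) & suspect_groups) > SHARED_GROUP_THRESHOLD
}

print(flagged)  # {'carol'}: flagged purely by pattern of association
```

Note what the sketch does not require: any individualized suspicion about the people it flags. The cost of asking the question, once the records exist in machine-readable form, is effectively zero.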
There’s also ample evidence suggesting that individualized, subject-based monitoring of communications themselves is yielding to a broader algorithmic approach that seeks to monitor entire data streams. John Yoo, for example, who wrote the (now repudiated) memoranda providing the legal basis for the NSA wiretapping program, has described a system in which “computers are initially searching through communications first and only bringing correlations to the attention of a human, to a security officer when there’s a certain level of confidence that they might involve terrorism.” Where once we identified targets and then looked for suspicious behavior or incriminating communications, the “new” approach—whose closest precedent may be the NSA’s scandalous SHAMROCK program uncovered by the Church Committee’s investigations in the 1970s—involves monitoring behavior patterns and communications streams in search of targets.
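A minimal sketch of the threshold-based filtering Yoo describes may help fix the idea: software inspects every message in a stream and escalates only high-scoring items to a human analyst. The watchword list, scoring function, and threshold below are invented for illustration; a real system would presumably use far more elaborate classifiers, but the architecture (monitor everything, surface a few) is the same.

```python
# Toy model of algorithmic stream monitoring: every message is scored,
# and only those clearing a confidence threshold reach a human analyst.
# Watchwords, scoring, and threshold are hypothetical stand-ins.

from typing import Iterable, Iterator, Tuple

WATCHWORDS = {"detonator", "safehouse", "wire transfer"}
CONFIDENCE_THRESHOLD = 0.7

def score(message: str) -> float:
    """Toy scoring: fraction of watchwords present in the message."""
    text = message.lower()
    hits = sum(1 for word in WATCHWORDS if word in text)
    return hits / len(WATCHWORDS)

def flag_for_review(stream: Iterable[str]) -> Iterator[Tuple[float, str]]:
    """Scan the entire stream; yield only high-confidence matches."""
    for message in stream:
        confidence = score(message)
        if confidence >= CONFIDENCE_THRESHOLD:
            yield confidence, message  # escalate to a human analyst

intercepts = [
    "lunch at noon?",
    "the wire transfer for the safehouse cleared; detonator arrives friday",
]
for confidence, msg in flag_for_review(intercepts):
    print(f"{confidence:.2f} -> {msg}")
```

The inversion matters: the human analyst never chooses whom to watch. Every message in the stream has already been inspected by the machine before a target is ever identified.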
To the extent that intelligence surveillance has been moving to this model, it is a mistake to view (for instance) the explosion in the use of National Security Letters to acquire transactional data as a separate concern from legislation authorizing broad “programs” of surveillance or “roving” wiretap warrants that specify neither an individual target nor a particular communications facility to be monitored. These are complementary pieces of a broader investigatory strategy geared toward identifying targets.
I’ll have more—much more—to say about the specific empirical and legal arguments raised by our discussants as the conversation continues here. But the crucial macro-level point I’d like us to bear in mind is that the architectural shift in surveillance is potentially much more significant than a temporary spike in the number of warrants or NSLs issued over the past decade. History provides abundant proof that this sort of large-scale monitoring, even when undertaken for initially legitimate purposes, invites abuse. And perhaps still more worrying, even in the absence of such abuse, the scope of state control is in myriad ways a function of what James C. Scott, in his seminal Seeing Like a State, has dubbed the “legibility” of populations. Surveillance infrastructures and databases built for benign purposes tend to persist even when their administrators cease to be benign.