Ben Wittes would like to distinguish two kinds of concerns about NSA surveillance — one he regards as a topic for legitimate debate, the other as an unhelpful distraction. While it is all well and good to debate the appropriate contours of the formal rules that should govern NSA spying, Wittes writes, we must not “mingle” arguments about whether these rules may permit too much or protect too weakly with “fears about the capabilities themselves.”
While I agree that this is a useful distinction to make for the sake of clarity, I prefer to talk about “architectures” rather than “capabilities,” because the latter term is vaguer and obscures important differences between mechanisms for achieving the same broad objectives.
I also reject the idea that we should, in effect, permit any architecture of surveillance, however potent, that the intelligence community might find useful to construct, provided only that the formal rules governing its use are adequate.
If we were confident that an adequate set of rules would both remain in effect and be followed scrupulously, arbitrarily far into the future, perhaps we could safely follow Ben’s advice. But rules are not always followed scrupulously, and indeed, rules can change far more quickly in times of panic than architectures can. We had a fairly stringent set of rules embedded in statute on September 11, 2001, and soon thereafter loosened them substantially — rather too substantially for my taste — in order to further empower our intelligence community to prevent future terrorist attacks. And yet, as we learned only many years later, President Bush determined that even the loosened rules fettered him too tightly, secretly directing the NSA to launch an expansive program of warrantless surveillance, relying on a legal theory to which only a tiny handful of government officials were made privy — and one ultimately discarded as indefensible.
To dismiss as “speculative” the fear that something similar might recur in a future crisis strikes me as requiring something bordering on a Humean skepticism. It’s like regarding as mere conjecture the belief that the sun, having risen this morning and the morning before, is likely to rise tomorrow as well. We should certainly debate what rules make sense — but if it is reasonably foreseeable that in times of crisis, rules will sooner or later be either violated or improvidently diluted by legislators disinclined to give much weight to the rights of unpopular minorities, we should act now to ensure that the tools the government wields are not limited only by those formal rules.
NSA’s defenders sometimes deflect concerns of this sort by observing that, after all, any authority can be abused. Which is true. But some authorities are more susceptible to abuse than others, and some abuses are more catastrophic in their effects. It is easier to misuse a secret authority than a publicly scrutinized one, and it is easier to evade oversight in a system of surveillance operating on a massive scale than in a more narrowly targeted one.
A helpful concrete example of what I mean is provided courtesy of the transparency report recently issued by the telephone carrier Vodafone. In the United States, as in many other countries, the normal protocol for a telephone wiretap works roughly as follows: A law enforcement or intelligence official applies to a judge for a warrant authorizing monitoring of a particular line, and, if the warrant is approved, the order is served on the appropriate communications provider, whose lawyers review it and whose technicians then implement the wiretap, arranging for communications over the targeted facility to be transmitted to the appropriate government agency.
Vodafone revealed, however, that in six countries governments demand “direct access” to the company’s network (and presumably to those of other carriers), enabling them to implement taps without involving the carrier. This approach has some obvious advantages in agility and secrecy, and may even enable the rapid addition of “capabilities” (taps targeting a suspect’s voiceprint rather than a phone number, say, or temporary recording of all calls for later retrieval) that would be difficult to implement quickly through the carrier’s own systems.
Equally obviously, however, an architecture of direct access eliminates important structural checks on the use of wiretaps. It makes it easy for the government to radically expand the scale of surveillance without raising any red flags, and it eliminates external chokepoints: The implementation of the tap is carried out by employees of the executive branch, and any records that might subsequently enable investigators to assess the legality of wiretaps remain within the exclusive control of the executive branch. The choice of architecture cannot truly be separated from the choice of optimal rules, because one of these architectures makes it far easier to violate the rules — or to secretly “interpret” them into irrelevance — undetected.
As we now know, American companies were initially willing to obey presidential directives that required both acquisition of records and electronic surveillance outside the procedures established by FISA, but they did eventually begin to insist on court orders. Under an architecture that did not require their cooperation, it seems extremely likely that those surveillance programs would have continued far longer without any FISC involvement.
Ben is obviously correct that we are not going to forswear signals intelligence tout court — a policy so insane that I haven’t seen anyone seriously propose it. But that scarcely resolves the question of whether there are “capabilities” — or, as I prefer, “architectures” — that we would be better off not constructing in service of the SIGINT mission, even subject to the best imaginable rules.
NSA appears, for example, to conduct “upstream” collection under §702 via some form of “direct access” to the Internet backbone. If the current architecture bypasses the backbone provider as an intermediary or chokepoint for either individual tasking decisions or even significant programmatic changes to the operation of the surveillance equipment, we can reasonably debate whether this is wise — and whether restoring the chokepoint function, at least in the latter case, would establish an architecture with less danger of misuse, even if it entails some attenuation of SIGINT capabilities.
Of course the United States is not going to “disarm” in the face of state adversaries that all conduct substantial intelligence operations, but that is hardly the same as saying we must ensure, at any cost and by any means, that every e-mail sent on the planet is at least technically accessible to the NSA. If Ben’s analogy entails that we must match China in every particular — having no more qualms than they do about installing systems of surveillance on domestic networks, or degrading the security of widely used encryption systems — then I question whether that is an “arms race” worth winning.