In October of last year, Senator Elizabeth Warren posted an ad on Facebook that included the false statement that Mark Zuckerberg had endorsed President Trump. Hoping to focus attention on the platform’s lack of fact-checking, Warren explained, “What Zuckerberg has done is given Donald Trump free rein to lie on his platform—and then to pay Facebook gobs of money to push out their lies to American voters. If Trump tries to lie in a TV ad, most networks will refuse to air it.”
Broadcasters, however, are generally required to run candidate ads under Section 315 of the Communications Act of 1934. Because of the specter of running afoul of this federal communications law, networks tend to extend a hands-off policy company-wide and run ads regardless of their veracity. Warren may be expressing a popularly held belief about political ads, but as PolitiFact explained, “We could find no evidence that most networks reject false candidate ads.”
John Samples’s opening essay this month tackles the question that Warren was trying to spotlight: “should social media suppress ‘lies’ in political ads?” Where Warren has pressed social media companies to take a larger role in fact-checking, Samples instead finds his footing in free speech doctrine, which “assumes that a direct connection between politicians (speakers) and voters (listeners) serves autonomy and the social good.” None of the alternatives is palatable, not even the course Google has taken in disallowing microtargeting: “‘Reach, not speech’ contravenes an idea undergirding free speech: people have the right and ability to discern truth and falsehood.”
If Samples falters, he does so by sidestepping the deeper political and social concerns that gave rise to his essay. In one of the closing paragraphs, he readily admits,
Policies about speech are unlikely to solve the complex political dilemmas faced by the platforms. The Facebook policy may help get elected officials off the company’s back. But it also led many on the left to conclude Facebook was helping Donald Trump’s re-election effort by spreading lies. Of course, if Facebook refused to run the Biden ad (and others later), the President and his supporters would conclude that the company was determined to oppose his re-election. Whatever social media does on speech will be seen as helping or hurting one side or the other in our polarized, zero-sum politics. The same may be true elsewhere. The companies should figure out what they stand for on these issues and stick with their principles. That will be a lot easier to say than to do. But it might offer in the longer term a way to avoid being governed by an “FCC for social media.”
While he argues that the position Facebook has taken is “a mainstream policy that reflects the traditional value of free speech,” he doesn’t take the next step and ask: why are these platform companies facing backlash for their policies in the first place?
Like broadcasters, most organizations in the communications business aren’t in the business of fact-checking political ads. The U.S. Postal Service doesn’t open political campaign mailers to check their validity. Telecommunications companies typically aren’t blamed for political robocalls that stretch the truth. There is no federal truth-in-advertising law that applies to political ads, and very few states have legislated on the issue. Moreover, while social media will be a major channel for 2020 political ads, only about one-fifth of total spending will go to digital. Broadcast will get about half, cable another 20 percent, and radio will pick up the rest. While social media platforms are routinely criticized for their hands-off approach, their position isn’t aberrant.
To understand the source of the ire, some table setting is needed. Google and Facebook are uniquely situated agents within the information ecosystem. Unlike one-to-many media outlets, platforms perform two types of actions, which might be dubbed operations of legibility and operations of traction.
The first category is a catchall for efforts to attach clickstream data and other interactional data to profiles, forming a detailed network map. It is through this assemblage that inferences about individuals can be made. The term legibility comes from James C. Scott, a political theorist whose work has focused on early state formation. As he defined it, legibility refers to
a state’s attempt to make society legible, to arrange the population in ways that simplified the classic state functions of taxation, conscription, and prevention of rebellion. Having begun to think in these terms, I began to see legibility as a central problem in statecraft. The premodern state was, in many crucial respects, partially blind; it knew precious little about its subjects, their wealth, their landholdings and yields, their location, their very identity. It lacked anything like a detailed “map” of its terrain and its people. It lacked, for the most part, a measure, a metric, that would allow it to “translate” what it knew into a common standard necessary for a synoptic view.
Social media platforms are likewise blind to their users; they must model the social networks of individuals to make them legible. As researchers have found, Facebook Likes can be used to accurately predict highly sensitive personal attributes, including sexual orientation, ethnicity, religious views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, gender, and, most important for this discussion, political opinions.
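How such inference works can be sketched in a few lines. The toy example below mimics the dimensionality-reduction-plus-regression recipe reported in that research, but everything in it, from the simulated Like matrix to the attribute labels and the parameter choices, is a hypothetical stand-in for illustration, not the study’s actual data or pipeline.

```python
# Illustrative sketch only: predicting a sensitive attribute from Likes.
# The data is randomly generated; real studies used millions of user-Like pairs.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 2000

# Binary matrix: rows are users, columns are pages they Liked (~2% density).
likes = (rng.random((n_users, n_likes)) < 0.02).astype(float)

# Hypothetical binary attribute (say, a political leaning) weakly correlated
# with a small subset of Likes, so there is real signal to recover.
signal = likes[:, :50].sum(axis=1)
attribute = (signal + rng.normal(0, 1, n_users) > signal.mean()).astype(int)

# Compress the Like matrix into latent components, then fit a linear model,
# mirroring the SVD-plus-regression approach described in the literature.
components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)
X_train, X_test, y_train, y_test = train_test_split(
    components, attribute, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point is the mechanism, not the numbers: many individually weak correlations between Likes and a trait aggregate into a usable prediction, and that aggregation is what makes users legible.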
Initiatives to make social and informational networks legible are inherently connected to the second grouping of actions, efforts of traction. Traction includes all those measures meant to persuade or influence people through the presentation of information. It is best exemplified by advertising, by search engine rankings, and by the ordering of Facebook’s News Feed.
Users are on the other side of this process and cannot easily peer behind the veil. They are forced to make their own determinations about how legibility and traction work in practice. As Sarah Myers West, a postdoctoral researcher at the AI Now Institute, described the process,
Many social network users develop “folk theories” about how platforms work: in the absence of authoritative explanations, they strive to make sense of content moderation processes by drawing connections between related phenomena, developing non-authoritative conceptions of why and how their content was removed.
Research on moderation efforts confirms this finding. Users tend to think Facebook is “powerful, perceptive, and ultimately unknowable” even though there are limits to both legibility and traction.
As surveys from Pew have found, most users aren’t aware that Facebook automagically sorts individuals into segments for advertising purposes. Yet when asked how well these categories actually track their preferences, only 13 percent of users said the descriptions are very accurate. Another 46 percent thought the categories were somewhat accurate. On the negative side of the ledger, 27 percent of users “feel it does not represent them accurately,” and another 11 percent weren’t assigned categories at all. In other words, over a third of all users (38 percent) are effectively illegible to Facebook. Other examples abound. A Phoenix man is suing the city for false arrest because the Google location data used against him placed him in two places at once. A group of marketers sued Facebook for misstating ad placement data.
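For readers keeping score, the “over a third” figure is just the sum of the last two groups. A minimal tally of the Pew percentages quoted above (the variable names are mine):

```python
# Tallying the Pew survey figures cited in the text.
very_accurate = 13      # categories describe the user very accurately
somewhat_accurate = 46  # categories are somewhat accurate
not_accurate = 27       # user feels the categories misrepresent them
no_categories = 11      # no ad categories were assigned at all

legible = very_accurate + somewhat_accurate  # 59 percent
illegible = not_accurate + no_categories     # 38 percent
print(f"legible: {legible}%, effectively illegible: {illegible}%")
assert illegible > 100 / 3  # i.e., "over a third of all users"
```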
As for traction, online ads seem to be slightly more effective than traditional methods, but in many cases they yield nothing in return, as eBay found out. Online political ads tend to be less effective because they are often met with competing messages. Users tune out advertising, a phenomenon known as ad blindness. Ad blockers are popular; about 30 percent of Americans use them. In short, people aren’t blithely consuming advertisements. Yet when individuals are asked whether ads are effective, they are quick to claim they aren’t fooled but are convinced that others are, a pattern researchers call the third-person effect.
Taken together, these folk theories go a long way toward explaining why there is so much frustration with online filtering mechanisms. They also explain why political advertising has become a target. If users believe that platform companies know everything about them, their friends, and their social networks, then those companies should easily be able to parse fact from fiction in ads. Online ads are likewise seen as powerful shapers of opinion, again shifting the burden onto social media. Add in a general concern for the health of the country’s democracy, and it is easy to see why Senator Warren’s ad hit a nerve.
Samples is right to come down on the side of free speech for the reasons he lays out. But readers will probably still find his position deeply unsatisfying, not because he is wrong, but because of all the baggage left undiscussed in the essay.