In Political Ads, Do Two Wrongs Make a Right?

Last week the political campaign of 2020 Democratic presidential candidate Elizabeth Warren “intentionally” posted an ad containing false claims on Facebook as a way to test the social media platform’s policies on political advertising. On Twitter, Warren announced to her followers, “We intentionally made a Facebook ad with false claims and submitted it to Facebook’s ad platform to see if it’d be approved. It got approved quickly and the ad is now running on Facebook.” The ad, shown above, ran under the headline “Breaking news: Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election,” a claim that the ad itself admitted wasn’t true. “You’re probably shocked, and you might be thinking,” the ad continued, “‘how could this possibly be true?’ Well, it’s not. (Sorry.) But what Zuckerberg *has* done is given Donald Trump free rein to lie on his platform––and then to pay Facebook gobs of money to push out their lies to American voters.”

The “free rein” Warren was referring to, according to The New York Times, was the recent purchase of ads across social media by Trump’s campaign accusing another Democratic presidential candidate, Joseph Biden, of corruption in Ukraine. That ad, viewed more than five million times on Facebook, falsely said that Biden offered $1 billion to Ukrainian officials to remove a prosecutor who was overseeing an investigation of a company associated with Biden’s son Hunter Biden. This past week, the Biden campaign demanded that Facebook, Twitter and YouTube take down the ad. Facebook refused, telling the Biden campaign that it would keep the Trump ad up because of its belief that statements by politicians add to important discourse and are newsworthy, even if they are false. Facebook has said that it allows ads made by politicians themselves (that is, their political campaigns), even if they are believed to be false, but scrutinizes ads for candidates made by third parties. Twitter and YouTube also kept the ad online.

That social media platforms may be running ads that are demonstrably false is one important issue; CNN and NBCUniversal, by contrast, refused to run the Trump campaign’s false ad. Another important issue, however, is Warren’s decision to counter one false ad with another. Warren claims her ad was meant to reveal Facebook’s preference for profit over truthfulness. “Facebook holds incredible power to affect elections and our national debate. They’ve decided to let political figures lie to you—even about Facebook itself—while their executives and their investors get even richer off the ads containing these lies,” she tweeted.

Although she may have had good intentions in submitting the ad to Facebook, there is something unsettling about countering falsity with falsity; as the old adage goes, “two wrongs don’t make a right.” Of course, most people would agree that false claims made in political advertising should be identified and brought to light. Commenting on the Warren ad, Subramaniam Vincent, Director of Journalism and Media Ethics at the Markkula Center for Applied Ethics at Santa Clara University, said in a tweet, “we need more ways for counter-speech to debunk deceptive speech in front of a real public.” Perhaps it’s best that “counter-speech” not resort to deception in order to counter deception. If the intentional airing of falsities to address prior falsities becomes acceptable in political discourse, then where does the normalizing of falsity end? To fight the propagation of such false advertising, educating the public seems like a logical solution; but what if such efforts fall on deaf ears? What should be done here?


Facebook Wants to End the “Popularity Contest”

Facebook has announced that it will begin testing hiding like counts on posts in users’ news feeds. Although authors will see likes on their own status updates, they will be unable to see the number of likes on updates posted by those whom they follow. (They will still see the number of comments their followers’ posts get and be able to see the different emojis used to express likes on their posts.) According to Tech Crunch, the testing—which will begin in Australia—is being initiated in an effort to improve users’ well-being by eliminating the psychologically damaging “popularity contest.” How the change would look to the average user is illustrated in the graphic below.

Interestingly, although like counts would disappear from friends’ status updates, Facebook reports that the data would still be used by algorithms to drive traffic on users’ news feeds. So although one would not see the number of likes that a friend’s update is getting, a popular update would presumably still rise to the top of one’s news feed. Whether users would ultimately get wise to this remains to be seen.
What also remains to be seen is how hiding like counts will reduce or eliminate the practice of social comparison that critics have long believed leads to envy and ultimately to anxiety. A number of studies have shown that such problems occur most often as a result of “passive” Facebook use, in which users casually scroll through their news feeds. As reported on Tech Crunch, one 2013 study “found that 20 percent of envy-inducing situations that experiment participants experienced were on Facebook, and that ‘intensity of passive following is likely to reduce users’ life satisfaction in the long-run, as it triggers upward social comparison and invidious emotions.’” In other words, users “compare their seemingly boring life to the well-liked glamorous moments shared by friends or celebrities and conclude they were lesser.” Additionally, researchers assert that “Facebook users can even exhibit a ‘self-promotion – envy spiral’ where they increasingly adopt narcissistic behaviors and glorify their lives in an attempt to compete with the rest of their social graph.” The New York Times’ Jenna Wortham describes this as “success theater”: people show only the best side of themselves on social media and hide all the warts of real life.

Over the years there has been no shortage of criticism leveled at Facebook for profiting from such user distress. Now, however, Facebook seems to be recognizing that taking care of users is in its best interest. As Mark Zuckerberg said in a recent conference call, “protecting our community is more important than maximizing our profits. It’s important to remember that Facebook is about bringing people closer together and enabling meaningful social interactions; it’s not primarily about consuming content passively.” Time will tell if this particular test proves him right. According to Tech Crunch, “if the test improves people’s sense of well-being without tanking user engagement, it could expand to more countries or even roll out to everyone.” But what if the test does tank user engagement? Will we return to the popularity contest?