TWITTER’S POLICY CHANGES UNDERSCORE THE ELECTION-RELATED MISINFORMATION THREAT
The tech company is making it harder for users
to spread misleading claims—and the White House isn’t happy.
OCTOBER 10, 2020
Twitter announced a series
of measures targeting election-related misinformation and falsehoods on Friday,
product and enforcement changes—some of which are temporary—that the platform
will be rolling out in the coming weeks. The social media giant said that
users, including political candidates, are prohibited from claiming an election
win “before it’s authoritatively called,” a determination that the platform
requires via “an announcement from state election officials, or a public
projection from at least two authoritative, national news outlets that make
independent election calls.” Starting next week, tweets that fail to meet this
criterion will be flagged, and users will be directed to Twitter’s official U.S.
election page. The company also announced it would remove any tweets “that
encourage violence or call for people to interfere with election results or the
smooth operation of polling places,” a policy that applies to all congressional
races as well as the presidential election.

And while Twitter was already
slapping warning labels on tweets containing misinformation of this kind, as
well as those spewing coronavirus-related falsehoods, the platform said Friday
that there will be new prompts and more warnings on misleading posts beginning
next week. People who go to retweet a post that has been labeled as misleading
will be prompted to seek credible information on the topic. Tweets “with a
misleading information label from US political figures (including candidates
and campaign accounts), US-based accounts with more than 100,000 followers, or
that obtain significant engagement” will receive more warnings and
restrictions: users will have to “tap through a warning” to see the content and
will be barred from liking, retweeting, or replying to the post, leaving the
quote-tweet feature as the only way to amplify it. Twitter also said it will not
algorithmically recommend such tweets, which remain subject to removal.
These changes are “likely to have a direct
impact” on Donald Trump’s use of the app, the New York
Times writes, noting the
“Twitter tear” the president has been on since returning to the White House
after he was hospitalized for coronavirus. On Tuesday night, the president
tweeted or retweeted posts from other accounts roughly 40 times. The White House
railed against Twitter’s changes on Friday, criticism that comes as no surprise
given how often the president has been the recipient of such warning labels. A
recent study by Cornell University researchers found that Trump
alone “was likely the largest driver of the COVID-19 misinformation
‘infodemic.’” Yet according to Samantha Zager, deputy national press
secretary for the Trump campaign, Twitter’s changes are “extremely dangerous
for our democracy,” with the company “attempting to influence this election in
favor of their preferred ticket by silencing the President and his supporters.”
There will also be temporary changes across
the platform aimed at stopping the spread of election-related misinformation.
Users who go to retweet a post will be encouraged to “add their own commentary
prior to amplifying content” through “the Quote Tweet composer,” a prompt that
“adds some extra friction for those who simply want to Retweet” but that the
platform hopes will bring “more consideration” to what users are amplifying.
(It will still appear as a retweet if users don’t add anything before retweeting.)
Another temporary change is that users will no longer see “liked by” and
“followed by” recommendations from people they don’t follow in their
timeline, nor will they receive notifications for such posts, a measure that
Twitter hopes will slow the spread of misinformation.
The news comes as other tech companies have issued similar countermeasures in
preparation for Election Night, with Facebook announcing earlier this week that
it would ban all political advertisements after the polls close, a step Google
has already committed to as well. As Bloomberg notes, Facebook’s
misinformation risk is particularly high given that the social media company
does not fact-check political ads. But will these temporary efforts make a
difference? In a statement, Democratic lawmakers Cheri Bustos and Catherine
Cortez Masto wrote that Facebook and Google’s post-Election Day ad
ban will “do little to stop bad actors from pushing dangerous disinformation
organically” and questioned whether the companies were ready for “potential
run-off scenarios or urgent announcements like protocol around recounts.”
Further, posts labeled as misinformation will, in many cases, still be left up
on Twitter and Facebook, something that Michael Serazio, an
associate professor of communication at Boston College, called “just as
problematic” in today’s online ecosystem. “In 2020 there is a real fear and
anxiety about what will circulate on social media in the absence of a declared
winner,” he said.
And while Facebook is labeling misleading
posts with links to additional information, the labels themselves do not
directly state that the post is wrong, which means users must take it upon
themselves to figure that out. “It’s just more information for people to parse
through,” said Vanderbilt Law School professor Gautam Hans, “and
people are notoriously bad at doing that.”