Trump allies, largely unconstrained by Facebook’s
rules against repeated falsehoods, cement pre-election dominance
From a pro-Trump super PAC to the president’s
eldest son, conservatives have blown past Facebook’s fact-checking guardrails,
with few consequences.
November 1, 2020 at 5:00 a.m. CST
In the
final months of the presidential campaign, prominent associates of President
Trump and conservative groups with vast online followings have flirted with,
and frequently crossed, the boundaries Facebook has set on the repeated
sharing of misinformation.
From a
pro-Trump super PAC to the president’s eldest son, however, these users have
received few penalties, according to an examination of several months of posts
and ad spending, as well as internal company documents. In certain cases, their
accounts have been protected against more severe enforcement because of concern
about the perception of anti-conservative bias, said current and former
Facebook employees, who spoke on the condition of anonymity because of the
matter’s sensitivity.
These people
said the preferential treatment has undercut Facebook’s own efforts to curb
misinformation, in particular the technologies put in place to downgrade
problematic actors. Toward the end of last year, around the time Facebook-owned
Instagram was rolling out labels obscuring fact-challenged posts and directing
users to accurate information, the company removed a strike against Donald
Trump Jr. for a fact-check on the photo-sharing service that would have made
him a so-called repeat offender, fearing the backlash that would have ensued
from the accompanying penalties, according to two former employees familiar
with the matter.
These
penalties can be severe, including reduced traffic and possible demotion in
search. One former employee said it was among numerous strikes removed over the
past year for the president’s family members.
A
spokesman for Donald Trump Jr. did not respond to multiple emails seeking
comment. Facebook spokeswoman Andrea Vallone did not dispute the detail, saying
the company is “responsible for how we apply enforcement, and as a matter of
diligence, we will not apply a penalty in rare cases when the rating was not
appropriate or warranted under the program’s established guidelines.”
The
kid-glove treatment contradicts claims of anti-conservative bias leveled by
Trump and his children, as well as by Republican leaders in Congress. It also renews
questions about whether Facebook is prepared to act against the systematic spread of falsehoods that
could intensify as vote tallies are reported this week.
Facebook,
in a bid to avoid previous election missteps, has issued a slew of new
policies, including limits on political ads and rules against
premature assertions of victory. But the current and former employees say the
company’s four-year-old fact-checking program, introduced in response to the
flood of fake news that marred the 2016 election, has failed to constrain the
most prolific purveyors of false and misleading content.
The
program relies on independent fact-checkers rather than involving the company
in judgments about the veracity of content, and it makes available a range of
ratings for dubious material, from false to partly false to missing context,
which appear as labels on the offending posts. Exceptions and political
considerations, however, shape the consequences these ratings trigger,
consistently steering the company toward less robust enforcement, according to
people who have been involved in the program’s execution in the run-up to the
election.
Fact-checking
at Facebook has faltered, said Mike Ananny, an associate professor at the
University of Southern California who completed a 2018 review of the program,
“because its business model requires a scale and speed and level of engagement
mismatched to controlling misinformation.”
One of
the people familiar with internal deliberations said some efforts to improve
fact-checking and content review have been stymied by concerns about a
disproportionate impact on conservative users. Members of Facebook’s public
policy team recently floated a proposal that a new system for escalating
harmful posts do so evenly along ideological lines, the person said, so that 50
percent of the escalated material would be conservative and 50 percent
liberal, even if the material was not equivalent in potential risk.
“No
such policy exists,” Facebook’s Vallone said.
But the
person who was involved in the discussions said the idea showed how efforts to
combat misinformation are viewed internally as a political liability. “Too
often we’ve made politically expedient exceptions at the expense of our own
rules, which we generally believe to be fair,” the person said.
Delayed
and uneven enforcement of the company’s rules is evident in particular on
prominent right-leaning Facebook pages involved in sharing news about the
election. More than a dozen such pages identified by The Washington Post shared
content debunked by Facebook’s own third-party fact-checkers twice within 90
days over the last six months, meeting the definition of repeat offenders
described by multiple people familiar with the company’s process and backed up
by internal communications. But many of these pages were still attracting
significant engagement and still purchasing ads, despite rules for repeat
offenders that prescribe steep penalties, including reduced distribution of
content and the revocation of advertising privileges.
Some
had clearly violated Facebook’s two-strike rule for “false information,” the
strictest rating available. Several had three or more fact-checks against them
within 90 days, though some were lesser ratings of partly false or missing
context.
The
largest outside group supporting Trump’s reelection, for example, has
repeatedly posted material judged as false by Facebook’s third-party
fact-checkers. The false claims circulated by the group, America First Action,
involve hot-button domestic policy issues core to the presidential campaign.
One video, accusing former vice president Joe Biden of seeking to defund the
police when he has in fact resisted that call, was labeled as false. A
different video leveling the same claim, posted three days later, earned an
identical label.
Even
though it received two false ratings within 90 days, in addition to repeated
fact-checks applied to posts about Biden’s energy agenda and tax plan, America First Action is still able to
advertise, according to Facebook’s public archive. There is no evidence that
its distribution has been reduced, according to engagement data from the social
media analysis tool CrowdTangle and Facebook fact-checking partners who spoke
on the condition of anonymity because the company is their client.
“I’m
baffled by the policy,” said the head of one such organization, singling out
America First Action for getting away with repeat offenses. “We repeatedly flag
offenders that nevertheless seem to prosper and continue to do ads.”
The
super PAC did not respond to a request for comment. Vallone declined to comment
on the status of America First Action’s page or of any other. She also declined
to make anyone from the fact-checking or news integrity team available for an
interview. She maintained that “many” of the pages inquired about by The Post
“have been penalized for repeatedly sharing misinformation in the past three
months." She did not specify the penalties or to which pages they had been
applied.
“We
don’t disclose the details of these thresholds publicly for very real concerns
about gaming the system, but we do send notifications to groups, pages,
accounts and advertisers when they’ve received a strike and are receiving
reduced distribution, and when they are a repeat offender,” Vallone said.
She also defended the fact-checking program, saying it makes Facebook the “only
company that partners with over 80 fact-checking organizations to apply
fact-checks to millions of pieces of content.”
The
fear of appearing biased against Trump and other conservatives runs up to
the highest levels of Facebook, and has shaped
everything from the algorithm deciding what appears in the News Feed to the
process of reviewing potentially harmful content. Allegations of preferential
treatment in the fact-checking process leaked into public view this summer,
when a Facebook engineer published information, first reported by BuzzFeed, showing that company
managers were intervening on behalf of right-leaning publishers.
Meanwhile,
conservatives command some of the largest audiences of any publishers on
Facebook, a trend that has continued through the election. Over the last week,
the political Facebook pages receiving the largest increases in views were
mostly right-leaning, including those of Donald Trump, Fox News and Breitbart
News, along with Joe Biden’s page and the left-leaning NowThis Politics,
according to an internal report on traffic viewed by The Post.
Only by
disclosing the underlying data on fact-checks and the consequences they yield,
said Matt Perault, a former director of public policy at Facebook who now runs
Duke University’s Center on Science and Technology Policy, can the company
“show that it’s enforcing its policies consistently.”
As
Facebook declines to count certain fact-checks as strikes against a page or
user, other posts pushing dubious claims — already addressed by Facebook’s
third-party fact-checkers — are not even getting labeled. The pattern is stark
for some pages, including one operated by talk radio host Rush Limbaugh, who
boasts more than 2.3 million followers.
In
August, Limbaugh publicized the claim that Anthony S. Fauci, the nation’s leading
infectious-disease expert, owns half the patent for a coronavirus vaccine. One of Facebook’s third-party
fact-checkers debunked the claim, but no label was
applied to Limbaugh’s post, which has gained more than 17,000 shares, comments
and likes.
Later
the same month, Limbaugh shared a link to a story on his website questioning
whether Biden had delivered his convention speech live or in fact prerecorded
it — a conspiracy theory that gained traction among some right-wing
commentators. Again, one of Facebook’s third-party fact-checkers debunked the
claim, but no label was applied to Limbaugh’s post. Limbaugh did not respond to
a request for comment.
The
same pattern can be observed in Facebook’s treatment of right-wing blogger
Pamela Geller, whose page racked up a false rating last month but, she claimed,
has not been penalized. A post this month, amplifying a news story debunked by one of Facebook’s third-party
fact-checkers, was never labeled. Geller deleted the post following an inquiry
this week and said in an email she had received no notification from Facebook
about repeat offender penalties.
Facebook’s
Vallone declined to disclose the average time it takes a post to get labeled
but said “we do surface signals to our fact-checking partners to help them
prioritize what to rate.”
Some
users do say they have been punished for what Facebook claims are repeated
falsehoods. Peggy Hubbard, a former Republican congressional candidate who
posts pro-Trump memes to her more than 350,000 followers, wrote in an email
that she had been “locked out of all accounts.” And in August, Facebook barred
one pro-Trump super PAC, the Committee to Defend the President, from
advertising following what a company spokesman, Andy Stone, called “repeated
sharing of content determined by third-party fact-checkers to be false.”
At
least four times in July and August, the Gateway Pundit, a right-wing news site
recently cited in a congressional hearing as
a victim of anti-conservative bias, posted stories rated as false or misleading
by Facebook’s independent fact-checkers.
The
site publicized the claim that Fauci would “make millions” from a coronavirus
vaccine (he will not); that a Democratic fundraising platform was
making payments to Black Lives Matter protesters (the platform, ActBlue, denied this); that the common cold was being treated as a covid-19 positive
result (it was not); and that Sen. Kamala D. Harris (D-Calif.),
the vice-presidential nominee, was haunted by a “dark secret” that her
ancestors owned enslaved people (an ancestral fact common to many African Americans).
Some of
these stories were later modified, avoiding sanction by Facebook. But false
posts are shared rapidly, while fact-checks are slow to be applied. And false
posts frequently outperform true ones: in the case of the Gateway Pundit, an
examination of more than 800 posts this summer found that those labeled as
false or misleading earned, on average, nearly 50 percent more likes, comments
and shares than the page’s posts overall during the same period.
Jim
Hoft, the creator and editor of the Gateway Pundit site, did not respond to a
request for comment about what penalties, if any, his Facebook page has faced.
Pinned to the top of the site’s page on Facebook are instructions for its more
than 616,000 followers to ensure they can see posts from the Gateway Pundit “at
the top of your News Feed.”