Tuesday, March 03, 2015

ADL Cyberhate Panel at 1871, Moderated by 1871 CEO Howard Tullman

3 ways for Facebook users to handle offensive or abusive content


Facebook is a platform for communication but not necessarily for free speech.
So said Monika Bickert, global policy manager for Facebook, on Monday night at the 1871 tech hub.
“When you’re dealing with this global community where 85 percent of the people are using Facebook from outside the U.S. and Canada, talking about the First Amendment isn’t particularly useful,” Bickert said.
Bickert took questions about how the company handles hate speech and online abuse in a panel moderated by 1871 CEO Howard Tullman. Also on the panel was Steve Freeman, director of legal affairs for the Anti-Defamation League, which hosted the event and fights bigotry with a focus on anti-Semitism.
Freeman said Facebook is not bound by free speech laws the way government entities are. The social media platform has the right to control the types of speech it allows, he said.

“In our community standards, we are focused on promoting expression and sharing,” Bickert said, “and that necessarily means that if something is dangerous or attacking a person, we want to take that seriously because it’s going to chill speech and sharing.”

Facebook policy prohibits harmful or hateful speech, including that which glorifies violence or threatens others, Bickert said. She said the company relies on community members to report abuse, which staffers review and deal with accordingly.
“We want to give people a variety of weapons,” Bickert said.
She outlined three ways Facebook users can handle offensive or abusive content:
• Social resolution tools: Facebook suggests that users who don’t like a post they are tagged in privately message the poster and ask him or her to remove it. The company provides a polite, pre-written message to make the process easier and encourage use. “Our data shows that in 85 percent of cases, this sparks a dialog and in many cases leads to just a voluntary removal of the photo,” Bickert said. This kind of self-policing reduces the strain on Facebook, for which reviewing reported posts is an expensive, labor-intensive process, Bickert said.
• Block users: Facebook users can block harassers or people who post content they find offensive or annoying. In the absence of threats, this can be a good way to hide inappropriate posts and prevent future contact, Bickert said. But she noted a limitation: blocked users aren’t notified that they’ve been blocked, so the poster learns nothing from the action. “If you just blocked the photo that I posted of you, you said, ‘Whatever, I’ll just defriend Monika and I won’t deal with her anymore,’” and the poster won’t learn anything from that action, Bickert said.
• Report to Facebook: When a user sees intentional hate speech, glorification of violence, evidence of child abuse, bullying or other threats, Bickert recommended reporting the post to Facebook. She said the team responds to reports within 48 hours unless a report needs special attention from a foreign-language or category expert. Facebook’s response depends on the severity of the content, she said. “I think the vast majority of people using Facebook are well-intentioned,” Bickert said, suggesting that offensive posts sometimes come from teenagers who aren’t being thoughtful. In such a case, Facebook would send the person a warning and remove the content from the site. “However, if we see repeated patterns of violations, the consequences go up.” She said penalties range from warnings, to feature blocking, which prevents posting for 24 hours, to being banned from the site, an escalation sketched after this list.
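
For readers who think in code, the escalation Bickert describes maps to a simple penalty ladder. The Python sketch below is purely illustrative: the thresholds, record structure and action names are my assumptions for the sake of the example, not Facebook's actual policy engine, which the panel did not describe in any technical detail.

# Hypothetical sketch of the escalation ladder described on the panel:
# penalties scale with repeated violations, from a warning to a 24-hour
# feature block to a ban. Thresholds here are assumed, not Facebook's.

from dataclasses import dataclass

FEATURE_BLOCK_HOURS = 24  # per the panel: feature blocking stops posting for 24 hours

@dataclass
class UserRecord:
    user_id: str
    violations: int = 0  # confirmed policy violations so far

def apply_penalty(record: UserRecord) -> str:
    """Return the action taken for this user's latest confirmed violation."""
    record.violations += 1
    if record.violations == 1:
        # First offense: warn the person and remove the content.
        return "warn_and_remove_content"
    if record.violations <= 3:
        # Repeated offenses: temporarily block posting.
        return f"feature_block_{FEATURE_BLOCK_HOURS}h"
    # Persistent pattern of violations: remove the account from the site.
    return "ban_account"

# Example: a repeat offender walks up the ladder with each violation.
record = UserRecord(user_id="example_user")
for _ in range(5):
    print(apply_penalty(record))

Run as-is, the example prints a warning for the first violation, 24-hour feature blocks for the next two, and a ban thereafter, mirroring the "consequences go up" pattern Bickert described.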
