Facebook Releases Censorship Stats In First Ever Such Report

Nellie Chapman | 16 May, 2018, 00:02

Facebook took down or applied warning labels to 3.4 million pieces of violent content in the three months to March - a 183 per cent increase from the final quarter of 2017.

In total the social network took action on 3.4m posts or parts of posts that contained such content. The company said its technologies were able to detect 85.6% of the posts before they were flagged, much higher than the previous quarter's 71.6%. For every 10,000 content views, an estimated 22 to 27 contained graphic violence and 7 to 9 contained nudity and sexual violence that violated the rules, the company said. The post said Facebook found nearly all of that content before anyone had reported it, and that removing fake accounts is the key to combating that type of content.

"As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse", Guy Rosen, vice president of product management, wrote in a blog post announcing the report.

By comparison, the company was first to spot more than 85 per cent of the graphically violent content it took action on, and nearly 96 per cent of the nudity and sexual content. The estimated prevalence of graphic violence, at 22 to 27 views per 10,000, was an increase from estimates of between 0.16 per cent and 0.19 per cent in the fourth quarter of last year.

The number of pieces of nude and sexual content that the company took action on during the period was 21 million, the same as during the final quarter of last year.

Facebook says the number of views of terrorist propaganda from organisations including ISIS, al-Qaeda and their affiliates that happen on the platform is extremely low. The company has come under fire for failing to remove content that incited ethnic violence in Myanmar, leading Facebook to hire more Burmese speakers. The Facebook executive hoped that the public would read the report to see how the company is working to curb a range of harmful activity. "While not always flawless, this combination helps us find and flag potentially violating content at scale before many people see or report it".

However, Facebook's ability to find this hate speech before users had reported it was not as good as other categories, with the company picking up only 38%.

Facebook also recently published a community standards report, finally releasing the full internal guidelines its content moderators use to police the social network. The company also took down over 837 million pieces of spam during 2018's first quarter. Last week, Alex Schultz, the company's vice president of growth, and Rosen walked reporters through exactly how the company measures violations and how it intends to deal with them. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important". And more generally, as Schultz explained last week, technology needs large amounts of training data to recognise meaningful patterns of behaviour, which the company often lacks in less widely used languages or for cases that are not often reported.

Facebook stated that artificial intelligence has played an essential role in helping the social media company flag violating content.