
Twitter Is Improving Its Troll-Detecting Capabilities

Twitter Revises Strategy To Fight Abusive Internet ‘Trolls’
Nellie Chapman | 15 May, 2018, 23:03

Using a set of signals, the platform will now determine whether an account, while not violating Twitter's terms of service, is engaging in "behaviors that distort and detract from the public conversation". Tweets that don't violate the rules but were posted by users Twitter deems problematic will remain visible; you'll just have to click "Show more replies" to access them. Relying entirely on users hitting the report button was, however, a very 2014 way to look at content moderation, and I think it's grown pretty apparent of late that Twitter needed to lean on its algorithmic intelligence to solve this.
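
To make that mechanism concrete, here is a minimal sketch of how replies in a conversation might be split into a default view and a collapsed "Show more replies" section, assuming a hypothetical Reply record and an author_flagged signal set by an upstream behavioural model. None of the names, fields or logic below come from Twitter; they are illustrative only.

```python
# Hypothetical sketch of the reply-collapsing behaviour described above.
# The Reply record and the author_flagged signal are invented for this
# example; Twitter has not published its actual data model.
from dataclasses import dataclass

@dataclass
class Reply:
    author_flagged: bool   # account judged to "distort the conversation"
    violates_rules: bool   # an actual policy violation (handled separately)
    text: str

def partition_replies(replies):
    """Split replies into those shown by default and those tucked
    behind a 'Show more replies' control."""
    shown, collapsed = [], []
    for reply in replies:
        if reply.violates_rules:
            continue  # rule-breaking content is removed, not merely hidden
        (collapsed if reply.author_flagged else shown).append(reply)
    return shown, collapsed

shown, collapsed = partition_replies([
    Reply(author_flagged=False, violates_rules=False, text="Interesting point."),
    Reply(author_flagged=True, violates_rules=False, text="You're all sheep."),
])
print(len(shown), "visible,", len(collapsed), "behind 'Show more replies'")
```

The key design choice is that nothing is deleted: borderline replies are simply ranked out of the default view rather than removed.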

The result is that people contributing to healthy conversation will be more visible, while those who try to poison or undermine the debate with negativity will be digitally sidelined. Not all trolling is in the crosshairs, though; as the company's announcement acknowledged, "Some troll-like behaviour is fun, good and humorous".

Twitter has made little effort to be transparent about the kind of signals it looks for when seeking to identify accounts that, in its words, "distort the conversation".

Twitter has finally (FI-NAL-LY) unveiled a way to silence the trolls: it's using machine learning to hide tormentors' tweets from the feeds of the tormented.

"Less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what's reported does not violate our rules".

"That means fewer people are seeing Tweets that disrupt their experience on Twitter", the post said. The hope is that by understanding what makes a Twitter conversation healthy, the company can promote methods to foster more worthwhile tweets rather than their bellicose counterparts.

Del Harvey and David Gasca, who wrote the company's post announcing the changes, said there are many new signals that Twitter is taking in, most of which are not visible externally. "Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them", they wrote.

Abusive accounts, according to Twitter CEO Jack Dorsey, will be monitored by signals such as how often they tweet at people who do not follow them, whether they have confirmed their email address, and whether their language is appropriate. "We're encouraged by the results we've seen so far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it", the post added.
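
As a purely illustrative example, those three signals could be combined into a simple score like the toy one below; the weights, threshold and field names are invented for this sketch and are not Twitter's.

```python
# Illustrative only: a toy score built from the signals named above
# (tweets at non-followers, unconfirmed email, inappropriate language).
# Weights, threshold and field names are placeholders, not Twitter's model.
def troublemaker_score(account):
    score = 0.0
    # Repeatedly tweeting at people who don't follow the account
    score += 0.5 * min(account.get("tweets_to_non_followers_per_day", 0) / 50, 1.0)
    # Accounts that never confirmed an email address are weighted up
    if not account.get("email_confirmed", False):
        score += 0.3
    # Crude stand-in for a language/toxicity classifier
    score += 0.2 * account.get("toxic_language_ratio", 0.0)
    return score

account = {"tweets_to_non_followers_per_day": 40,
           "email_confirmed": False,
           "toxic_language_ratio": 0.6}
flagged = troublemaker_score(account) > 0.5  # placeholder threshold
print(f"score={troublemaker_score(account):.2f} flagged={flagged}")
```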

The results so far have been a 4 per cent decrease in abuse reports originating from search and an 8 per cent drop in abuse reports from conversations, as people see fewer tweets that disturb their experience on the platform.