Published: Wed, May 16, 2018
Money | By Armando Alvarado

Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts

The inappropriate content includes hate speech, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam and fake accounts.

"As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse," Guy Rosen, Facebook's vice president of product management, wrote in a blog post, adding that the company must remain "accountable to the community".

Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, put the company's content moderation into the spotlight.

The report also explains some of the reasons for the large swings in the number of violations found between Q4 and Q1, which were usually external factors or advances in the technology used to detect objectionable content.

The company previously enforced its community standards by having users report violations, which trained staff then dealt with. "The rate at which we can do this is high for some violations, meaning we find and flag most content before users do," the company said. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important."

The renewed attempt at transparency is a welcome start for a company that has come under fire for allowing its social network to host all kinds of offensive content. Facebook estimates that between 7 and 9 of every 10,000 pieces of content viewed on the platform violated its adult nudity and pornography standards.

Over the past year, the company has repeatedly touted its plans to expand its team of reviewers from 10,000 to 20,000.

During the press call, Schultz noted that the team will be a mix of full-timers and contractors spread across 16 locations around the world. The problem is perhaps most salient in non-English-speaking countries.

Graphic violence: During Q1, Facebook took action against 3.4 million pieces of content for graphic violence, up 183% from 1.2 million during Q4. It attributed the increase to "improvements in our ability to find violating content using photo-detection technology, which detects both old content and newly posted content".

The company also removed 21 million pieces of content that contained adult nudity or sexual activity, flagging nearly 96 per cent of the content with its own systems.

The company also took down 2.5 million pieces of hate speech during the period, only 38% of which was flagged by its algorithms, and it found and flagged almost 100% of spam content in both Q1 and Q4.

Facebook's renewed moderation effort across its almost 1.5 billion accounts resulted in 583 million fake accounts being disabled in the first three months of this year, according to The Guardian.

"In addition, in many areas - whether it's spam, porn or fake accounts - we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts," Rosen wrote.