FACEBOOK is struggling to block hate speech posts, conceding its detection technology “still doesn’t work that well” and that flagged content must be checked by human moderators.
The world’s largest social network published enforcement numbers for the first time on Wednesday, revealing millions of standards violations in the six months to March.
The inappropriate content includes vilification, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam and fake accounts.
Facebook took down or applied warning labels to 3.4 million pieces of violent content in the three months to March – a 183 per cent increase from the final quarter of 2017.
Almost 86 per cent was found by the firm’s technology before it was reported by users.
Facebook removed 2.5 million pieces of hate speech in the three months to March, an increase of more than 50 per cent on the previous quarter.
Only 38 per cent of these were flagged by automation, which fails to interpret nuances like counter speech, self-referential comments or sarcasm.
Facebook has faced fierce criticism from governments and rights groups for failing to do enough to stem hate speech and prevent the service from being used to promote terrorism, stir sectarian conflict and broadcast acts including murder and suicide.
“We have a lot of work still to do to prevent abuse,” Facebook Product Management vice president Guy Rosen said.
It uses both software and an army of moderators to take down text, pictures and videos that violate its rules.
Facebook’s detection technology “still doesn’t work that well” in the hate speech arena and needs to be checked by the firm’s review workers, Mr Rosen said.
He said technology like artificial intelligence is still years from effectively detecting most bad content because context is so important.
“(And) technology needs large amounts of training data to recognise meaningful patterns of behaviour, which we often lack in less widely used languages,” Mr Rosen said.
More than a quarter of the human race accesses the platform, with two billion monthly users.
Under pressure from several governments, Facebook has been beefing up its moderator ranks and hopes to reach 20,000 by the end of 2018.
“Whether it’s spam, porn or fake accounts, we’re up against sophisticated adversaries who continually change tactics to circumvent our controls,” Mr Rosen said.
Facebook has been in hot water following allegations of data privacy violations by Cambridge Analytica, an election consultancy that improperly harvested information from millions of Facebook users for the Brexit campaign and Donald Trump’s US presidential bid.
On Monday it temporarily suspended 200 apps that had access to large amounts of user data prior to 2014.
INAPPROPRIATE CONTENT REMOVED/ADDRESSED BY FACEBOOK
(percentage in brackets is the share found by technology before users reported it)

Graphic violence
Q4 2017: 1.2m pieces (72 pct)
Q1 2018: 3.4m pieces (86 pct)

Hate speech
Q4 2017: 1.6m pieces (24 pct)
Q1 2018: 2.5m pieces (38 pct)

Adult nudity and sexual activity
Q4 2017: 21m pieces (94 pct)
Q1 2018: 21m pieces (96 pct)

Terrorist propaganda
Q4 2017: 1.1m pieces (97 pct)
Q1 2018: 1.9m pieces (99.5 pct)

Spam
Q4 2017: 727m pieces (99.8 pct)
Q1 2018: 836m pieces (99.7 pct)

Fake accounts
Q4 2017: 694m pieces (99 pct)
Q1 2018: 583m pieces (99 pct)