Facebook removed 2.2bn fake accounts in first quarter

Facebook said it took down a record 2.2bn fake accounts in the first three months of this year, only slightly fewer than the total number of monthly active users on the social media network, in a sign that its battle against bad actors is far from over.

The company disabled 1.2bn fake accounts in the final quarter of 2018, and 754m in the quarter before that, according to its twice-yearly report on community standards enforcement, published on Thursday.

The sharp rise to 2.2bn in the latest quarter was driven by “automated attacks by bad actors who attempt to create large volumes of accounts at one time”, Guy Rosen, Facebook’s vice-president of integrity, said in a blog post.

But the company added that many were taken down within minutes of being registered and therefore were not included in its monthly active user count, which stands at 2.38bn.

Facebook has ramped up its security spending in an effort to better prevent abuse on the platform as well as the spread of misinformation, after evidence emerged of attempts by Russia to interfere in the US 2016 election using the network.

It now employs 30,000 security staff, many as content moderators, and has invested in developing artificial intelligence and other technologies for automatically detecting content that violates its policies.

In a call with reporters, Facebook’s chief executive, Mark Zuckerberg, hit back at calls, which have become common among some Democratic politicians, for the company to be broken up by regulators over competition concerns.

He argued that the firm’s size allows it to spend more on tackling bad actors. “I think the amount of our budget that goes toward our safety systems . . . I believe is greater than Twitter’s whole revenue this year,” he said.

Facebook revealed on Thursday that it had also invested in tackling content related to the promotion, sale or purchase of illegal goods. As part of this, it took action on 900,000 pieces of content related to drugs and 670,000 related to firearms in the first quarter of the year.

The social network has also been cracking down on hate speech recently, particularly following the terror attacks on mosques in Christchurch, New Zealand, earlier this year.

It said that the proportion of hate speech it found “proactively” — before users reported it — rose to 65.4 per cent in the first quarter of the year, up from 38 per cent a year earlier.