Meta nearly doubled the amount of violent content it removed from the Facebook platform. During the first quarter of 2022, the company deleted about 21.7 million posts for violating its rules on violence and incitement to violence, an increase from 12.4 million in the previous quarter.
Removals were also up on Instagram for the quarter, though only slightly: the company removed 2.7 million posts for violating its violence rules, up from 2.6 million during the fourth quarter of 2021.
The company shared the new metrics as part of its quarterly Community Standards Enforcement Report. In the report, Meta attributed the increase in removals to an expansion of its proactive detection technology.
According to the company, more than 98% of the posts it deleted were removed before users reported them. The report comes as Meta faces scrutiny over its response time following the recent mass shooting in Buffalo, New York.
The company was slow to remove new copies of the live recording of the shooting as they circulated on Facebook and other platforms.
One copy posted to Facebook was shared more than 46,000 times before it was removed more than nine hours after it was originally posted.
As with previous mass shootings, such as the one in Christchurch, users' ability to quickly download and re-upload copies of live recordings has tested Meta's ability to enforce its policies.
Facebook faces questions about its response to mass shootings
“One of the challenges we see with events like this is people creating new content, new versions, and new external links to try to evade our policies,” said Guy Rosen, Meta's Vice President of Integrity. “We are continuing to learn and improve our processes and systems to ensure we can remove violating content more quickly in the future.”
Meta also shared updated stats about content it mistakenly removes. For violent content, the company said it had restored 756,000 Facebook posts that were appealed after they were initially removed.
The company said it is also developing more robust error metrics, but it did not explain what those might measure beyond restorations of contested content.
The company also revealed that it removed 1.6 billion fake accounts in the three months from January to March. The report is designed to show how Meta enforces its policies across 14 policy areas on Facebook and 12 on Instagram.
Areas include bullying and harassment, hate speech, spam, violent content, and more. “We estimate that fake accounts represented approximately 5% of our worldwide monthly active users on Facebook during the first quarter of 2022,” the company said.
The company aims to remove as many fake accounts as possible, prioritizing accounts that seek to cause harm through spam or financial scams.
In the first quarter of 2022, Facebook took action on 1.6 billion fake accounts, down slightly from 1.7 billion in the last quarter of 2021.