Facebook has disclosed enforcement metrics for ten policy areas on Facebook and four on Instagram across the world, including Pakistan, describing the categories of content that violate its policies and lead to removal or a ban.
In the fourth edition of its Community Standards Enforcement Report, covering Q2 and Q3 2019, the company provided details about content that violated its policies and was subsequently removed or blocked on its social networking platforms.
Facebook reports several metrics: prevalence, or how widely violating content was seen; actioned content, or how much content it took action against; and the proactive rate, or how much violating content it detected before anyone reported it.
The metrics also include appealed content, which describes how much content people appealed after Facebook took action, and restored content, which describes how much content was restored after Facebook initially took action.
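To make the relationship between these figures concrete, here is a minimal sketch in Python of how such rates can be derived from raw counts. The counts and variable names are hypothetical and purely illustrative; Facebook does not publish its internal calculations in this form.

```python
# Hypothetical counts for one policy area in one quarter.
# These numbers are illustrative only, not Facebook's actual data.
actioned = 2_000_000           # pieces of content acted on
found_proactively = 1_922_000  # actioned content detected before any user report
appealed = 120_000             # actioned content that users appealed
restored = 30_000              # content restored after an initial action

proactive_rate = found_proactively / actioned  # share found before a report
appeal_rate = appealed / actioned              # share of actions contested
restore_rate = restored / actioned             # share of actions reversed

print(f"Proactive rate: {proactive_rate:.1%}")  # 96.1%
print(f"Appeal rate:    {appeal_rate:.1%}")     # 6.0%
print(f"Restore rate:   {restore_rate:.1%}")    # 1.5%
```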
In this first report covering Instagram, Facebook shares data on four policy areas: child nudity and child sexual exploitation; regulated goods, specifically illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda.
"While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services," the company noted.
Facebook has also recently strengthened its policies around self-harm and made improvements to its technology to find and remove more violating content.
“On Facebook, we took action on about 2 million pieces of content in Q2 2019, of which 96.1% we detected proactively, and we saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.1% we detected proactively,” the company said in a statement.
Instagram saw similar progress: about 835,000 pieces of content were removed in Q2 2019, of which 77.8% were detected proactively, and about 845,000 pieces in Q3 2019, of which 79.1% were detected proactively.
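As a quick back-of-the-envelope check (a sketch, assuming the quoted percentages apply to the rounded totals above), the proactive share of these removals works out roughly as follows:

```python
# Removal totals and proactive rates as quoted in the report
# (counts are rounded, so derived figures are approximate).
reports = {
    "Facebook Q2 2019":  (2_000_000, 0.961),
    "Facebook Q3 2019":  (2_500_000, 0.971),
    "Instagram Q2 2019": (835_000,   0.778),
    "Instagram Q3 2019": (845_000,   0.791),
}

for period, (removed, proactive_rate) in reports.items():
    detected = round(removed * proactive_rate)
    print(f"{period}: ~{detected:,} of {removed:,} pieces found proactively")
```

On these figures, roughly 1.92 million of Facebook's Q2 removals and about 2.43 million of its Q3 removals were caught before any user report.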
Facebook's Dangerous Individuals and Organizations policy bans all terrorist organizations from having a presence on its services. The company has identified a wide range of groups, based on their behavior, as terrorist organizations. Previous reports only covered its efforts against al Qaeda, ISIS and their affiliates, as measurement was focused on the groups understood to pose the broadest global threat. The report has now been expanded to include the actions taken against all terrorist organizations. While the rate at which Facebook detects and removes content associated with al Qaeda, ISIS and their affiliates has remained above 99%, the rate at which it proactively detects content