For the month of January, Meta, formerly Facebook, removed over 11.6 million pieces of content across 13 policies for Facebook and over 3.2 million pieces of content across 12 policies for Instagram, according to the company.
Meta actioned 1.4 million pieces of adult nudity and sexual activity content, 233,600 pieces of bullying and harassment content, and 1.8 million pieces of violent and graphic content, among other categories.
"In accordance with the IT Rules, we have published our monthly compliance report for the 31-day period of January 1 to January 31." "This report will contain details of the content that we have proactively removed using our automated tools, as well as details of user complaints received and action taken," a Meta spokesperson said.
According to the company, between January 1 and 31, Meta received a total of 911 reports through the Indian grievance mechanism and responded to 100 percent of them. In the case of Facebook, the most complaints (270) concerned hacked accounts, followed by fake profiles (107) and bullying and harassment (106), among other categories.
Instagram received 1,037 reports, and the company responded to 100 percent of those complaints as well. Again, hacked accounts drew the most complaints (677), followed by fake profiles (252) and other categories.
"Meta mentioned that they have consistently invested in technology, people, and processes over the years to further our agenda of keeping our users safe and secure online while allowing them to express themselves freely on our platform." "To identify and review content against our policies, we use a combination of Artificial Intelligence, reports from our community, and review by our teams," the company said in a statement.