Instagram’s Failure to Remove Self-Harm Images Exposed: Damning Study Casts Doubt on Meta’s Moderation Practices

Copenhagen, Denmark – A recent study conducted by Danish researchers revealed alarming findings about Meta’s handling of self-harm content on Instagram. The researchers created a private network on the platform and shared 85 pieces of self-harm-related content of increasing severity, including images of blood and razors. Despite Meta’s claims of improved content moderation using artificial intelligence, the study found the social media giant’s efforts to be “extremely inadequate.”

Digitalt Ansvar, an organization promoting responsible digital development, criticized Meta’s moderation practices, stating that the platform’s handling of such content was not in compliance with EU law. The Digital Services Act requires digital services to identify systemic risks, including negative impacts on physical and mental well-being. Meta responded by emphasizing its commitment to removing harmful content, citing the removal of more than 12 million pieces of content related to suicide and self-injury on Instagram in the first half of 2024.

The study also found that Instagram’s algorithm actively contributed to the formation and spread of self-harm networks. Rather than combating these networks, the algorithm appeared to facilitate their growth, connecting 13-year-olds with members of the self-harm group. Ask Hesby Holm, CEO of Digitalt Ansvar, expressed shock at the results, warning that the lack of intervention in small private groups could have severe consequences, including an increased risk of suicide.

Psychologist Lotte Rubæk, who resigned from Meta’s global expert group on suicide prevention, condemned the platform for neglecting to remove explicit self-harm content. Rubæk emphasized the detrimental impact of such content on vulnerable individuals, linking it to rising suicide figures. She described the issue as a matter of life and death for young users of the platform, criticizing Meta for prioritizing profit over the well-being of its users.

In light of these findings, Meta’s approach to self-harm content moderation on Instagram faces serious scrutiny for its ineffectiveness. The failure to address harmful content not only breaches the platform’s own policies but also raises concerns about the safety of users, particularly young people. As calls for improved moderation and accountability grow louder, the neglect of self-harm content on social media platforms remains a pressing issue demanding immediate action.