Instagram launches new technology to remove suicide and self-harm posts

Instagram has launched new technology to recognize self-harm and suicide content in its app in the UK and Europe.

The new tools can identify both images and words that break its rules on harmful posts.

Such posts will be made less visible in the app and, in the most critical cases, removed automatically.

Adam Mosseri, Head of Instagram, detailed the new system, which uses artificial intelligence, in a blog post on the company's website.

Outside of Europe, the technology on Facebook and Instagram also includes human referral: posts the algorithm identifies as harmful can be passed to human moderators, who can decide whether to take further action, such as directing the user to organizations that offer help or to emergency services.
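As a rough illustration only, the workflow described above (automated scoring, demotion or removal, and optional human referral) might look something like the following Python sketch. The classifier score, thresholds, and region handling are hypothetical assumptions for illustration, not Instagram's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    DEMOTE = auto()        # make the post less visible
    REMOVE = auto()        # take the post down automatically
    HUMAN_REVIEW = auto()  # refer to a human moderator

@dataclass
class Post:
    id: str
    harm_score: float       # hypothetical score from an image/text classifier, 0.0-1.0
    reported_by_user: bool  # whether a community member reported the post

def moderate(post: Post, region: str,
             demote_threshold: float = 0.6,
             remove_threshold: float = 0.9) -> Action:
    """Map a classifier score to a moderation action.

    Thresholds and the scoring model are illustrative assumptions.
    """
    if post.harm_score >= remove_threshold:
        return Action.REMOVE
    if post.harm_score >= demote_threshold:
        # Per the article, in the EU/UK automated detections are not passed to
        # human reviewers unless the post was reported directly by a user.
        if region in ("EU", "UK"):
            return Action.HUMAN_REVIEW if post.reported_by_user else Action.DEMOTE
        return Action.HUMAN_REVIEW
    return Action.NONE

# Example: a borderline post reported by a user in the EU goes to human review.
print(moderate(Post("p1", harm_score=0.7, reported_by_user=True), region="EU"))
```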

However, Instagram told the UK's Press Association news agency that human referral is not part of the new tools in the UK and Europe because of data privacy considerations, mainly the General Data Protection Regulation (GDPR). The social media firm said implementing a referral process would be its next step.

Instagram's public policy director in Europe, Tara Hopkins, said: "In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community."

She added that because, in a small number of cases, a judgment would be made by a human reviewer on whether to send additional resources to a user, regulators could consider this a "mental health assessment" and therefore special category data, which receives greater protection under GDPR.

The change follows criticism of the social media firm in recent years for failing to adequately moderate suicide and self-harm material.
