Facebook has been working extensively to prevent suicide and self-harm among its users through a range of tools and features.
“In relation to World Suicide Prevention Day, we are sharing an update on what we’ve learned and some of the steps we’ve taken in the past year, as well as additional actions we’re going to take, to keep people safe on our apps, especially those who are most vulnerable,” said Antigone Davis, Global Head of Safety at Facebook.
From April to June 2019, we took action on more than 1.5 million pieces of suicide and self-injury content on Facebook and found more than 95% of it before it was reported by a user. During that same period, we took action on more than 800,000 pieces of this content on Instagram and found more than 77% of it before it was reported by a user.
Experts have told us that one of the most effective ways to prevent suicide is for people to hear from friends and family who care about them. Facebook has a unique role in facilitating those kinds of connections and we are taking additional steps to support those who are discussing these sensitive topics, especially young people.
To help young people safely discuss topics like suicide, we’re enhancing our online resources by including Orygen’s #chatsafe guidelines in Facebook’s Safety Center and in resources on Instagram when someone searches for suicide or self-injury content.
The #chatsafe guidelines were developed together with young people to support those who might be responding to suicide-related content posted by others, or who might want to share their own experiences of suicidal thoughts, feelings or behaviours.
We will continue to invest in people, technology and resources so that we can do more to protect people on our apps.
Earlier this year, Facebook began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of sad content online and newsworthy depictions of suicide.
As a result of these consultations, we have made several changes to improve how we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm, even when someone is seeking support or expressing themselves to aid their recovery. On Instagram, we have also made it harder to search for this type of content and kept it from being recommended in Explore.
Our engagement with experts has proven so valuable that we are also hiring a health and well-being expert to join our safety policy team. This person will focus exclusively on the health and well-being impacts of our apps and policies and will explore new ways to improve support for our community, including on topics related to suicide and self-injury.
And for the first time, we are also exploring ways to share public data from our platform on how people talk about suicide, beginning with providing academic researchers with access to the social media monitoring tool CrowdTangle. To date, CrowdTangle has been available primarily to help newsrooms and media publishers understand what is happening on Facebook.
In addition to finding more opportunities and places to surface resources, Facebook is continuing to build new technology that helps find and take action on potentially harmful content, for example by removing it or adding sensitivity screens.