How social media is preventing suicide

Date: 18/09/2017 | Written by: Rosie | 5 minutes to read

Facebook has rolled out a feature allowing people to 'report' a post that indicates possible self-harm or suicidal thoughts, and will shortly be rolling out artificial intelligence to identify people who may be considering taking their own lives. This is a prime example of tech for good: these features are literally saving lives. But is this the right thing to do? Should we really be relying on technology to save lives?

Facebook is constantly changing. But one feature deserves a high five - its suicide prevention technology.

In 2015, Facebook partnered with organisations like the National Suicide Prevention Lifeline and Now Matters Now to explore ways to prevent suicide. Following research and collaboration, Facebook rolled out a feature that allows people to ‘report’ a post that indicates possible self-harm or suicidal thoughts. Shortly, the tech giant will roll out AI to recognise when someone may be considering taking their own life - a great example of how technology can literally save lives. But it raises an important question: should we put human lives in the hands of the bots?

ABOVE: How to report suicidal thoughts and self-harm on Facebook

This year, Facebook has continued to develop its reporting features, allowing users to report live streams and content on Facebook Messenger; previously only 'posts' could be reported. When it first announced this development, many people asked why Facebook would even allow suicidal themes to be streamed at all, with some saying it would be best to 'cut the live stream'. Facebook quickly responded:

“Some might say we should cut off the live stream, but what we’ve learned is cutting off the stream too early could remove the opportunity for that person to receive help.” - Jennifer Guadagno, Facebook Researcher

Obviously, if a live stream were displaying graphic content, Facebook would take it down as soon as possible. Still, it is comforting to know that the company is thinking about the long-term benefits of its reporting features and, to some degree, allowing vulnerable people to carry out their 'call for help'. A viewer of the video can quickly alert Facebook; the Facebook team then sends the person on the live stream a caring message saying that someone is worried about them, and sends the person who raised the alert suggestions on ways to support or help their Facebook friend.


In the US, Facebook is currently testing an update that uses AI and pattern-recognition algorithms to spot warning signs. Put simply, Facebook is starting to automatically detect when someone is expressing suicidal thoughts on its site.

A flagged post is automatically sent to Facebook’s ‘Community Operations Team’, which reviews it and, if the reviewers deem it necessary, contacts the person to offer support and signposting. This new technology no longer depends solely on a human spotting the warning signs in the first place. Facebook said it “might be possible in the future” to bring this sort of AI and pattern recognition to Facebook Live and Messenger, but the team says it is too early to be sure.
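To make the general idea concrete, here is a toy Python sketch of pattern-based flagging. Everything in it - the phrase list, the function names - is invented for illustration; Facebook's real system is a machine-learned classifier trained on large volumes of reported posts, not a hand-written keyword list, and a flagged post still goes to the human Community Operations Team for review rather than triggering any automatic action.

```python
import re

# Hypothetical phrase patterns, for illustration only. A production system
# would use a trained classifier reviewed with clinical experts.
RISK_PATTERNS = [
    r"\bwants? to die\b",
    r"\bend it all\b",
    r"\bno reason to (?:live|go on)\b",
]

def flag_post(text: str) -> bool:
    """Return True if the post matches any high-risk pattern.

    A flagged post would be queued for human review, mirroring the
    Community Operations step described in the article.
    """
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in RISK_PATTERNS)

# Posts that match are routed to the (human) review queue.
posts = [
    "Had a great day at the beach!",
    "I just want to die, nothing matters anymore",
]
review_queue = [p for p in posts if flag_post(p)]
```

The key design point the article highlights survives even in this sketch: the algorithm only *surfaces* posts, and the decision to reach out stays with people.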

What do other social media sites do?

Similarly, Twitter allows users to flag content that implies self-harm; the tweet is sent to a team “devoted to handling threats of self-harm or suicide”. The team then reaches out to the at-risk person and offers support, guidance and encouragement to pursue further help. Crucially, as with Facebook, the team lets that person know that someone expressed concern for them and prompted the team into action.

On Instagram, users have the option to report an image for self-harm or suicidal content. The content will then be taken down by Instagram and, although there is little concrete detail, its website does say it will reach out to the at-risk person and offer guidance, resources and support.

Snapchat does not have an in-app way for users to report suicidal content, but you can fill out a safety concern form on its website. The at-risk person is then given contact information for suicide hotlines, along with words of encouragement to seek further help.

Instagram and Twitter are following in Facebook's blue footsteps with simple reporting features that are quick and easy to use, but they don't match up to Facebook's upcoming automatic detection technology, which takes the responsibility out of users' hands and reduces the risk of warning signs being missed.

What we need to ensure with the roll-out of this technology is that machines do not replace the duty of human beings to spot the signs of declining mental health in our friends and family. We can care; robots can't. However, in a fast-paced, busy world where it is easy to get lost in the sea of social media posts, I think this is a great example of how humans and machines can work together for the greater good.

If you agree, then here is a quick list of what to look out for on social media (so we don't leave it all to the machines).

High-risk warning signs

These are comfortably in the realm of common sense when it comes to spotting suicidal tendencies.

A person may be at high risk of attempting suicide if they:

Other warning signs

These warning signs are sometimes difficult to spot, especially if you don’t know you should be looking out for them.

A person may be at risk of attempting suicide if they:
