There’s no denying that suicide is a very real and serious problem. In 2013 over six thousand people in the UK took their own lives, a tragic figure which suicide prevention programmes rightly hope to reduce. But you might be surprised at just how common suicidal thoughts are among people who don’t go on to act on them. Research suggests that up to 17% of us will consider suicide at some point – that’s almost 11 million people in this country alone. I’m one of them; maybe you are too.
It’s this that has prompted Facebook, the world’s biggest social media platform, to introduce its own suicide prevention tool (interestingly, Radar, the disastrous attempt at such a tool from the Samaritans, has finally been put to bed today). It makes sense; as more and more of us spend our lives online, more and more people are talking about suicidal thoughts in their blogs or on social media. The plan, already in place in the US, is to introduce an option to report a suicidal comment and have that report reviewed by a third party. If the report is deemed worrying, the next time that user logs in they will be greeted with a pop-up box saying, “Hi [name], a friend thinks you might be going through something difficult and asked us to look at your recent post.” The user is then offered the choice of talking to someone (a suicide hotline) or being given “tips and support”.
So what’s not to like? Firstly, the online mental health community needs to remain a safe space. Although most people who have suicidal thoughts never act on them, the experience itself can be frightening and isolating. Thoughts of taking your own life can seem too shocking to share with your nearest and dearest, which is why the anonymity of social media can be so valuable. Developing internet friendships with people experiencing similar issues can be a genuine lifeline, and the online environment offers people the chance to speak the truth about their feelings without worrying loved ones. Being able to be honest is vital. With the reporting structure in place I fear people will begin self-censoring, frightened their words will prompt someone to hit that button.
And when people talk about suicide there is a great deal of complexity and nuance. Suicidal thoughts can be fleeting and nebulous – wondering, for example, whether friends and family might be better off without you. Some people may express a desire to “not be”, perhaps posting thoughts such as, “I just want to go to sleep and never wake up” – but with no real indication that they are planning to end their life. These kinds of statements are incredibly common within the online mental health community. I’ve said them many times myself. But most users will not know this and may well hit the report button in good faith.
Let’s not forget it’s not only your Facebook friends involved in the process. I’d particularly like to know more about the “third parties” charged with reviewing the reports. Who are they? What training will they be given? Will they understand how common suicidal thoughts are, how often people express them, and how the online mental health community uses social media? I can only imagine the support they can offer will, inevitably, be limited. Almost anyone expressing suicidal feelings is likely to be aware of services like the Samaritans, while offering someone who feels suicidal a list of generic hints and tips feels almost derisory.
Finally, it’s invasive. The user can hit skip, but there is no way to opt out of the system altogether, so the dialogue box may be presented again and again. It has all the subtlety of the Annoying Paperclip that graced Microsoft Word some years ago. “Hi! It looks like you’re writing a letter!” “Hi! It looks like you’re going through something difficult!” No shit, Sherlock. Whilst nowhere near as intrusive as last year’s disastrous Samaritans Radar tool, this still hands power to the person hitting the button, leaving the recipient to dismiss the dialogue box – without knowing who has reported them.
Some might say this isn’t meant for the adult online mental health community, but there’s your real problem. In the pool of online suicide references, the vast majority will come from that community. Maybe Facebook is thinking of younger users expressing first tentative suicidal feelings, but that’s not unproblematic either. There is clear potential for people, young or old, to have malicious reports made about them. Imagine being faced with the paperclip – sorry, dialogue box – every time you log in to Facebook. My fear is that this will drive people off the platform, disconnecting them from their friends and their coping mechanism. And of all the suicidal people I meet online, it’s the ones who disappear I really worry about. This move looks like a great way for Facebook to generate goodwill, but their invasive tool might just hurt the people it wants to help.