The tragedy of suicide must be tackled with policies that work – a crackdown on technology is not one of them
After a series of suicide tragedies involving children who researched methods online, or sought peer group approval through groups and sites, some commentators have linked social media use to suicides.
Demands for action currently range from a safeguarding duty of care for social media providers, through prohibition in schools, to outright bans on phone use by children altogether. The impulse is perfectly understandable. But there is no evidence that social media has ‘caused’ any suicide beyond being a correlating or enabling factor, one easily substituted should social media be otherwise unavailable. Nor has there been much reflection on whether such measures would be either effective or proportionate.
The causes of suicidal thoughts are complex, and difficult to discuss, given that those having them may withdraw from seeking help or feel incapable of finding it. They are, though, not uncommon; far more people have such thoughts at some point in their lives than is reflected in the much smaller number attempting to act on them, let alone those succeeding. During a lifetime about 1 in 5 think about it, 3 in 1,000 attempt it, and 1 in 10,000 succeed. It is likely, then, that even if you yourself have never experienced a suicidal impulse, you will know someone who has.
Measurement is difficult and contested; however, World Health Organisation figures suggest suicide accounts for around 1.4% of all deaths worldwide (or 800,000 people a year), and is generally, though not exclusively, more prevalent in lower income countries, or those with authoritarian regimes. Within the UK the rate of suicide in the lowest income decile is about double that of the highest. There is also a clear link between increased suicide risk and being divorced or widowed. Adolescence is a time of increased risk for self-harm compared to childhood (suicide is the third highest cause of death for this age group in some studies), and the peak period for experiencing suicidal thoughts.
But it would be a mistake to assume adolescents are the group at highest risk of actually killing themselves; they are in fact the lowest risk group. Adolescence provokes the thought but not the deed. The ‘mid-life crisis’, conversely, is aptly named: suicide rates peak among those aged 45-49, decline towards the 80s, and then rise again with growing infirmity.
Importantly, the overall risk of suicide and self-harm does not correlate with technology use; if anything the reverse is true, given the inverse relationship with development, and the fact that technology is principally used for prevention. Technology may facilitate suicide, but it is almost always an explanation of method, not the cause. Cyberbullying, for example, is a form of bullying, any kind of which can provoke suicidal thoughts. It is the bullying that causes the harm, not the ability to communicate by text or message platform. Pre-digital means of communication, such as telephones, could be used to harass people until they either pulled the cord or changed their number; digital phones and social messaging systems, conversely, can be used to instantly block trolls and curtail the activities of stalkers. Forums can be used to find support and achieve insight. An adolescent who feels they cannot talk to their parents, peers or teachers may find virtual aid from around the world. They may also find the wrong kind of support, but that risk is general, very rare, and not unique to digital media.
There are other social aspects to suicide, for example the media contagion effect. Ill-judged or salacious reporting of suicides, particularly high-profile cases, can trigger dark thoughts and encourage the execution of copycat fantasies. As a result the media industry and social media companies have responsible-reporting guidelines, generally encouraging those affected to seek help. That is what self-regulation looks like, and it is not obvious that it is the wrong approach. Peer pressure did not start with the printed word, let alone the internet; suicide pacts and cults have existed throughout history, from Masada to Jonestown, and are extremely rare.
Nor is suicide prevention an under-regulated area unfamiliar to policymakers. The church considered it a mortal sin, and ‘self-murder’ was illegal in England from the mid-13th century to 1961, with your possessions forfeit to the crown until 1822. This was both barbaric and ineffective. It was considered a health matter for much of the late 20th century, with the emphasis shifting to mental health in the 21st. The last Labour administration brought in the first national suicide prevention strategy for England and Wales in 2002, updated by the Coalition in 2012, with progress reports in 2014, 2015 and 2017. The plans, to their credit, recognise the complexity of the causes of suicide and the localised and personalised nature of effective prevention interventions, but perhaps less wisely attempt to set national targets. General targets are not obviously helpful: localised practical measures, such as university efforts to train their staff in suicide awareness, are unlikely to be influenced by small year-on-year movements in low numbers across the general student population.
The 2015 plan also noted “limited systematic evidence” of any link between social media, self-harm and suicidal behaviour, identifying risks of improper use but also noting the positive side of the support available online. The 2017 Digital Economy Act led to an Internet Safety Strategy Green Paper and a draft social media code of practice in 2018. This too focused on providing links to off-platform support, training, and reminding people that online abuse is abuse.
So what we know is that suicide rates have fallen consistently over time, both overall and in most age categories. Despite some dubious recent statistical claims, then, we are self-evidently not facing a child suicide crisis provoked either by the rapid rise of internet use in the 2000s or by social media use in the last decade. We know further that cutting down the options an at-risk adolescent might have to find support is not obviously wise, and that the most effective interventions are those as close to the individual as possible, rather than general prescriptions. The call for providers of technology to act in loco parentis, or rather in loco nannystatis, seems then both bizarre and ineffective. Forcing companies to prove they did enough to prevent something that is not their responsibility is going to yield tick-box responses; prohibiting access will simply remove potentially valuable sources of support; and more draconian measures, such as fines for failure to take down content the providers have not themselves published, just add cost for little obvious benefit.
Self-regulation works because those that do not act on public concerns tend to see their reputations trashed, in the case of social media notably through their own tools. That is a far more effective corrective to irresponsibility than a fine for not having the correct link in a state-approved format for a safeguarding policy. In short, a pause for thought is the right reaction to a tragic case, not a national strategy or prohibition. Social media does not cause suicide.