Facebook is scanning posts to check if users might kill themselves

Mark Zuckerberg, chief executive officer and founder of Facebook Inc (Getty)

Facebook is using artificial intelligence to identify users who are at risk of suicide by ‘reading’ their posts – and the tool is sparking privacy fears.

Once Facebook’s AI has identified an ‘at risk’ user based on warning signs in their posts, trained employees examine the posts, and call emergency services if necessary.

But some experts describe the new tool, currently under test in America, as akin to signing up users to medical research programmes without their consent.

A new Harvard study questions whether a private company such as Facebook should be dealing with such data.

Lead author Dr John Torous of Beth Israel Deaconess Medical Centre at Harvard Medical School in the US, said: ‘Facebook is now monitoring how you use Facebook and somehow they’re running an algorithm to determine your risk of committing suicide.’


‘Facebook’s suicide prevention efforts lead to the question of whether this falls under the scope of public health.’

The analysis comes in the wake of the death of 14-year-old Molly Russell, who took her own life after corresponding with people about suicide on Facebook-owned Instagram.

Her father, Ian Russell, blamed the platform for his daughter’s death.

Dr Torous said: ‘The approach Facebook is trialing to reduce death by suicide is innovative and deserves commendation for its ambitious goal of using data science to advance public health, but there remains room for refinement and improvements.’

What should Facebook do to prevent suicide? (Getty)

Facebook has been passing the information along to law enforcement in the US for wellness checks since March 2017, following a string of suicides which were live-streamed on the platform.

Dr Torous said: ‘The scope of the research seems more fitting for public health departments than for a publicly traded company whose mandate is to return value to shareholders.

‘What happens when Google offers such a service based on search history, Amazon on purchase history, and Microsoft on browsing history?

‘In an era where integrated mental health care is the goal, how do we prevent fragmentation by uncoordinated innovation?

‘And even if this falls outside the scope of public health, discussions of regulation, longevity, and oversight are still needed for this approach to be equitable and successful.’

Writing in Annals of Internal Medicine, the researchers said Facebook has offered some details on its algorithms.

But less is known about the credentials of the Community Operations staff who review the data – or the outcomes of the roughly 3,500 calls made to local emergency services so far.

The tool is being tested only in the US at present.

The researchers said Facebook does not claim its suicide prevention efforts are research. But the company has conducted experiments on users before while denying it was doing so.

It comes as the NSPCC has proposed new regulations under which social media companies that breach ‘duty of care’ laws designed to keep children safe online could face criminal investigation and unlimited fines.

Firms that do not introduce measures to protect children from harm such as sexual abuse, bullying and self-harm could be prosecuted for a breach of their duty of care, if legislation suggested by the children’s charity is accepted into law.

The Daily Telegraph reports that companies will have to appoint named executives to be personally responsible for upholding the duty of care.

If they are found to be in breach of the rules, they could be banned from directorial roles for up to 15 years, in line with current disqualification rules.
