The six simple questions Facebook refused to answer about its creepy suicide-detection AI

Code can work out if you're close to topping yourself

Analysis Facebook is using mysterious software that scours material on its social network to identify and offer help to people who sound potentially suicidal.

It's some new technology CEO Mark Zuckerberg mentioned in his 6,000-word manifesto earlier this year. Yet, his lieutenants have kept quiet about how the borderline-creepy thing actually works. It is described as software invisibly monitoring and assessing the state of mind of more than a billion people in real time, through the stuff they talk about and share online. From what we can tell, it alerts human handlers to intervene if you're sounding particularly morose on the social network, prioritizing reports by friends that you're acting suicidal.

What could go wrong? Where could it go next? Will it be used in emotion experiments like we saw in 2014? Will it be used for adverts, if not already; if you hit your glum post quota for the week, will you get banners for vacations and weekend breaks? These aren't even the questions we put to Facebook: they were far simpler, and yet, silence.

In a blog post this week, Guy Rosen, veep of product management, said "pattern recognition" and "artificial intelligence" will detect posts and live videos in which someone may be expressing thoughts of suicide, flagging them up faster to specialist reviewers.

If the software and Facebook's review team deem someone may hurt or kill themselves, a dialog box pops up the next time they visit the site or app, offering details of mental-health helplines, a suitable friend to talk to, and "tips" to "work through a difficult situation." At least it's a helping hand. Only a stone-cold cynic would argue Facebook is only doing this because dead people can't view ads. Right?

Digging in, you have to wonder how smart this software really is – and whether it warranted the soft headlines some journalists gave it. The video detection seems interesting, but the text analysis sounds like a glorified grep. "We use signals like the text used in the post and comments (for example, comments like 'Are you ok?' and 'Can I help?' can be strong indicators)," said Rosen. "In some instances, we have found that the technology has identified videos that may have gone unreported."
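
If the text side really is little more than keyword matching, a toy version could look something like the sketch below. To be clear, the phrases, weights, and threshold are our own guesses for illustration, not anything Facebook has confirmed.

```python
import re

# Purely illustrative: the phrases, weights, and threshold below are our guesses,
# not Facebook's actual signals.
CONCERN_PATTERNS = {
    r"\bare you ok\b": 1.0,
    r"\bcan i help\b": 1.0,
    r"\bplease don'?t do it\b": 2.0,
    r"\bi can'?t go on\b": 2.0,
}

def concern_score(post_text, comments):
    """Sum crude keyword weights across a post and its comments."""
    score = 0.0
    for text in [post_text] + list(comments):
        lowered = text.lower()
        for pattern, weight in CONCERN_PATTERNS.items():
            if re.search(pattern, lowered):
                score += weight
    return score

# Anything over an arbitrary threshold gets queued for human review
if concern_score("Feeling really low tonight", ["Are you ok?", "Can I help?"]) >= 2.0:
    print("flag post for human review")
```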

Bafflingly, this machine-learning technology will not be deployed in the European Union, seemingly due to the region's General Data Protection Regulation aka GDPR – Europe's new privacy rules.

We asked Facebook six straightforward questions. No snark; we simply sought to scrutinize a system watching over more than a billion people. And although it acknowledged it was thinking them over, the product team failed to come up with any responses. Here are the six questions Facebook couldn't or wouldn't answer:

Q. How does the AI system work?

It’s unclear how sophisticated Facebook’s black-box software is, or what kind of neural network it uses, if it uses one at all. It may be something as simple as trivial sentiment analysis applied to a person's Facebook post to detect the user’s mood – are they happy, sad, hyperactive, and so on. Or it could be something much more complex that works out the likelihood of suicide by looking carefully at the wording and considering a range of other factors, such as the person’s gender, location, relationship status, favorite sports team, age, race, you name it.
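
To give a sense of the "trivial sentiment analysis" end of that spectrum, here's a quick sketch using NLTK's off-the-shelf VADER analyzer. The tool and the mood thresholds are our picks for illustration; Facebook hasn't said what, if anything like this, it actually runs.

```python
# A sketch of the "trivial sentiment analysis" end of the spectrum, using NLTK's
# off-the-shelf VADER analyzer. Our choice of tool and thresholds, not Facebook's.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

analyzer = SentimentIntensityAnalyzer()
post = "I don't see the point of anything anymore"
scores = analyzer.polarity_scores(post)  # keys: 'neg', 'neu', 'pos', 'compound'

# Bucket the post into a crude mood based on the compound score
if scores["compound"] <= -0.5:
    mood = "sounding morose"
elif scores["compound"] >= 0.5:
    mood = "sounding upbeat"
else:
    mood = "neutral"

print(mood, scores)
```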

Q. Where did they get the training data from?

Facebook did not disclose details on what prior information was used to teach such a model to decide whether a person is suicidal or not. What biases are present, or could be, in the data? For example, could suicidal people of color not living in and around expensive coastal cities, where Facebook engineers tend to live, go undetected because the system isn't aware of their situations? AI models are typically trained on huge datasets. ImageNet, used to train object recognition systems, has something like ten million pictures of things. What kind of GlumNet is Facebook using? Register articles?

Q. How representative is that data?

Perhaps Facebook dug into spreadsheets of suicide figures in the US and abroad to build profiles of people potentially or likely vulnerable to self-harm. It’s important to question the validity of that data. Biases in the training data are passed on to the algorithms trained on it, meaning the model's decisions could carry negative consequences when they are incorrect.

People might be incorrectly labelled as suicidal and find messages from Facebook telling them that “someone thinks you might need extra support right now and asked us to help” annoying. A worse problem would be if the system failed to notice warning signs just because someone didn’t fit the assumptions learned by the model. Facebook said the system was tested in America, and “will eventually be available worldwide, except the EU.” Attitudes towards suicide vary across cultures, and it’s unknown whether the same model trained and tested on American values would perform as well for users in Asia, Africa, and beyond.
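
One crude way to probe the representativeness question is to slice a labelled training set by group and count who is actually in it. Here's a minimal sketch with invented column names and rows, since Facebook hasn't described its dataset.

```python
# A crude representativeness check on a labelled training set. The column names
# and rows are invented; Facebook hasn't described its actual dataset.
import pandas as pd

df = pd.DataFrame({
    "region": ["us_coastal", "us_coastal", "us_rural", "asia", "us_coastal", "us_rural"],
    "label":  [1, 0, 0, 0, 1, 0],  # 1 = post labelled as expressing suicidal thoughts
})

# How many examples, and how many positive labels, each group contributes
coverage = df.groupby("region")["label"].agg(examples="count", positives="sum")
coverage["share_of_examples"] = coverage["examples"] / len(df)
print(coverage)

# Groups with few or no positive examples are exactly the ones a trained model
# is most likely to miss in the wild.
```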

Q. What is the problem with Europe?

It’s probably down to the General Data Protection Regulation, a set of strict rules on privacy and data security that will be enforced from next year. It’s unclear exactly how Facebook’s software falls foul of the new regulations, and what this means for any future tools that require user data in the EU. If moneybags Facebook can't get AI and the GDPR working together, what hope is there for anyone else?

Q. What kind of human intervention is used?

The decisions aren’t all left to machines: the social media giant did say that once a potentially suicidal post is flagged, it is passed on to a trained member of its community operations team to assess. The blog post mentioned that Facebook, with the help of some of its users, has prompted "over 100 wellness checks" on people judged to be truly at risk. What could these checks be? Who gets to review someone's posts and videos?

A video on the blog post shows police talking about intervening in high-risk situations after being alerted by Facebook. But it’s unclear what other types of action are taken for people at lower levels of suicide risk. At what point does a website call 911 or 999 on you? Facebook said it prioritizes the order in which the team reviews reported posts, videos, and live streams, but doesn’t discuss how its ranking system works.
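
For what it's worth, here is one guess at how such a review queue might be ordered: a simple priority heap in which the model's score, the number of friend reports, and whether the stream is live all bump urgency. Facebook hasn't described its ranking, so the fields and weights below are invented.

```python
# One guess at how a review queue might be ordered; Facebook doesn't say how its
# ranking works, so the fields and weights here are invented for illustration.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: float
    post_id: str = field(compare=False)

def make_report(post_id, model_score, friend_reports, is_live):
    # Higher urgency -> more negative priority, so heapq pops the most urgent first
    urgency = model_score + 0.5 * friend_reports + (1.0 if is_live else 0.0)
    return Report(priority=-urgency, post_id=post_id)

queue = []
heapq.heappush(queue, make_report("post-1", model_score=0.4, friend_reports=0, is_live=False))
heapq.heappush(queue, make_report("post-2", model_score=0.9, friend_reports=2, is_live=True))

most_urgent = heapq.heappop(queue)
print(most_urgent.post_id)  # post-2: flagged by the model, reported by friends, and live
```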

Q. Are there any significant results so far?

What percentage of people go through with suicide after posting on Facebook? How many are stopped? How soon? There are other interesting questions, too, whose answers could provide useful insights for social media platforms looking to prevent suicide, and it’d be helpful if Facebook shared them.

Above all: if you need help, there are real, trained experts on standby for you, and they will listen. In the US, call them on 1-800-273-8255, free, 24/7, or talk to them online. In the UK, the Samaritans. In Canada, Crisis Services Canada. In Australia, Lifeline. ®
