The six simple questions Facebook refused to answer about its creepy suicide-detection AI

Code can work out if you're close to topping yourself


Analysis Facebook is using mysterious software that scours material on its social network to identify and offer help to people who sound potentially suicidal.

It's the new technology CEO Mark Zuckerberg mentioned in his 6,000-word manifesto earlier this year, yet his lieutenants have kept quiet about how the borderline-creepy thing actually works. It is described as software invisibly monitoring and assessing the state of mind of more than a billion people in real time, through the stuff they talk about and share online. From what we can tell, it alerts human handlers to intervene if you're sounding particularly morose on the social network, prioritizing reports by friends that you're acting suicidal.

What could go wrong? Where could it go next? Will it be used in emotion experiments like the one we saw in 2014? Will it be used for adverts, if it isn't already? If you hit your glum-post quota for the week, will you get banners for vacations and weekend breaks? These aren't even the questions we put to Facebook: ours were far simpler, and yet, silence.

In a blog post this week, Guy Rosen, veep of product management, said "pattern recognition" and "artificial intelligence" will detect posts and live videos in which someone may be expressing thoughts of suicide, flagging them up faster to specialist reviewers.

If the software and Facebook's review team deem that someone may hurt or kill themselves, a dialog box pops up when they visit the site or app, offering details of mental-health helplines, a suitable friend to talk to, and "tips" to "work through a difficult situation." At least it's a helping hand. Only a stone-cold cynic would argue Facebook is only doing this because dead people can't view ads. Right?

Digging in, you have to wonder how smart this software really is – and whether it warranted the soft headlines some journalists gave it. The video detection seems interesting, but the text analysis sounds like a glorified grep. "We use signals like the text used in the post and comments (for example, comments like 'Are you ok?' and 'Can I help?' can be strong indicators)," said Rosen. "In some instances, we have found that the technology has identified videos that may have gone unreported."
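If the text analysis really is not far off keyword matching on posts and the comments beneath them, it wouldn't take much code at all. Here's a minimal, purely illustrative sketch in Python of that sort of approach – the phrase lists and scoring are our own guesses, not anything Facebook has published:

```python
# Illustrative "glorified grep": count phrase hits in a post and its comments.
# Beyond the two comment examples Rosen cited, the phrase lists below are our
# guesses, not Facebook's real signals.
CONCERNED_COMMENT_PHRASES = ["are you ok", "can i help"]
AT_RISK_POST_PHRASES = ["want to die", "end it all", "can't go on"]

def naive_risk_score(post_text, comments):
    """Return a crude score: one point per matched phrase."""
    text = post_text.lower()
    score = sum(phrase in text for phrase in AT_RISK_POST_PHRASES)
    for comment in comments:
        c = comment.lower()
        score += sum(phrase in c for phrase in CONCERNED_COMMENT_PHRASES)
    return score

# Anything over some threshold would presumably be queued for human review.
print(naive_risk_score("I just can't go on like this",
                       ["Are you OK?", "Can I help?"]))  # prints 3
```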

Bafflingly, this machine-learning technology will not be deployed in the European Union, seemingly due to the region's General Data Protection Regulation aka GDPR – Europe's new privacy rules.

We asked Facebook six straightforward questions. No snark; we simply sought to scrutinize a system watching over more than a billion people. And although it acknowledged it was thinking them over, the product team failed to come up with any responses. Here are the six questions Facebook couldn't or wouldn't answer:

Q. How does the AI system work?

It’s unclear how sophisticated Facebook’s black-box software is, or what kind of neural network it uses, if it’s even using one. It may be something as simple as trivial sentiment analysis applied to a person’s Facebook posts to detect the user’s mood – are they happy, sad, hyperactive, and so on. Or it could be something much more complex that works out the likelihood of suicide by looking carefully at the wording and considering a range of other factors, such as the person’s gender, location, relationship status, favorite sports team, age, race, you name it.
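At the simpler end of that spectrum, off-the-shelf sentiment scoring would do the trick. Here's a hedged sketch using NLTK's VADER analyzer purely as a stand-in for whatever mood detection Facebook might run – we have no idea what it actually uses:

```python
# Sketch of the "trivial sentiment analysis" possibility, using NLTK's
# off-the-shelf VADER scorer as a stand-in. This is not Facebook's code.
from nltk.sentiment import SentimentIntensityAnalyzer  # pip install nltk
# One-off setup beforehand: nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

def mood_of(post):
    """Bucket a post into a crude mood label from its compound sentiment."""
    compound = analyzer.polarity_scores(post)["compound"]  # -1.0 to 1.0
    if compound <= -0.5:
        return "very negative"
    if compound < 0:
        return "negative"
    return "neutral or positive"

print(mood_of("Everything feels hopeless and I'm exhausted"))
```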

Q. Where did they get the training data from?

Facebook did not disclose details on what prior information was used to teach such a model to decide whether a person is suicidal or not. What biases are present, or could be present, in that data? For example, could suicidal people of color who don't live in and around expensive coastal cities, where Facebook engineers tend to live, go undetected because the system isn't aware of their situations? AI models are typically trained on huge datasets: ImageNet, used to train object-recognition systems, has something like ten million pictures of things. What kind of GlumNet is Facebook using? Register articles?

Q. How representative is that data?

Perhaps Facebook dug into spreadsheets of suicide figures in the US and abroad to build profiles of people potentially or likely vulnerable to self-harm. It’s important to question the validity of that data: biases in training data are passed on to the algorithms trained on it, which means the model’s decisions could carry negative consequences when they are wrong.

People might be incorrectly labelled as suicidal and find messages from Facebook telling them that “someone thinks you might need extra support right now and asked us to help” annoying. A worse problem would be the system failing to notice warning signs just because someone didn’t fit the assumptions learned by the model. Facebook said the system was tested in America, and “will eventually be available worldwide, except the EU.” Attitudes towards suicide vary across cultures, and it’s unknown whether the same model, trained and tested on Americans, would perform as well for users in Asia, Africa, and beyond.
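If Facebook ever published per-group numbers, one concrete way to probe that worry would be to compare the model's miss rate across different groups of users. A hypothetical sketch of that check – the group labels and figures below are invented for illustration:

```python
# Hypothetical bias check: compare false-negative rates across groups.
# The groups and outcomes below are made up for illustration only.
from collections import defaultdict

def false_negative_rates(cases):
    """cases: iterable of (group, was_at_risk, was_flagged) tuples."""
    missed, at_risk = defaultdict(int), defaultdict(int)
    for group, was_at_risk, was_flagged in cases:
        if was_at_risk:
            at_risk[group] += 1
            if not was_flagged:
                missed[group] += 1
    return {g: missed[g] / at_risk[g] for g in at_risk}

sample = [
    ("urban US", True, True), ("urban US", True, True), ("urban US", True, False),
    ("rural US", True, False), ("rural US", True, False), ("rural US", True, True),
]
print(false_negative_rates(sample))  # wildly different miss rates = a biased model
```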

Q. What is the problem with Europe?

It’s probably down to the General Data Protection Regulation, a set of strict rules on privacy and data security that will be enforced from next year. It’s unclear exactly how Facebook’s software falls foul of the new rules, and what this means for any future tools that require user data in the EU. If moneybags Facebook can't get AI and the GDPR working together, what hope is there for anyone else?

Q. What kind of human intervention is used?

The decisions aren’t all left to machines: the social media giant did say that after a potentially suicidal post is flagged, it is passed to a trained member of its community operations team to assess. The blog post mentioned that, with the help of some of its users, Facebook has prompted “over 100 wellness checks” on people it decided were truly suicidal. What do these checks involve? Who gets to review someone's posts and videos?

A video on the blog post shows police talking about intervening in high-risk situations after being alerted by Facebook. But it’s unclear what actions are taken for people at lower levels of suicide risk. At what point does a website call 911 or 999 on you? Facebook said it prioritizes the order in which the team reviews reported posts, videos, and live streams, but it won't discuss how that ranking system works.
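Facebook won't say how that ranking works, but the general shape of a triage queue isn't hard to imagine. A speculative sketch – the scoring inputs are our guesses at what such a system might weigh, not anything Facebook has confirmed:

```python
# Speculative triage queue: review flagged items in order of estimated urgency.
# The scoring inputs are guesses, not Facebook's confirmed ranking signals.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedItem:
    priority: float               # lower value = reviewed sooner
    post_id: str = field(compare=False)

def enqueue(queue, post_id, model_score, is_live_video, friend_reports):
    """Push a flagged post; live video and friend reports bump its urgency."""
    urgency = model_score + 0.3 * friend_reports + (0.5 if is_live_video else 0.0)
    heapq.heappush(queue, FlaggedItem(priority=-urgency, post_id=post_id))

queue = []
enqueue(queue, "post_a", model_score=0.4, is_live_video=False, friend_reports=0)
enqueue(queue, "post_b", model_score=0.7, is_live_video=True, friend_reports=2)
print(heapq.heappop(queue).post_id)  # "post_b" gets looked at first
```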

Q. Are there any significant results so far?

What percentage of people go through with suicide after posting on Facebook? How many are stopped? How soon? The answers to these and other questions could provide useful insights for other social media platforms looking to prevent suicide, and it’d be helpful if Facebook shared them.

Above all: if you need help, there are genuine, trained experts on standby for you, and they will listen. In the US, call them on 1-800-273-8255, free, 24/7, or talk to them online. In the UK, the Samaritans. In Canada, Crisis Services Canada. In Australia, Lifeline. ®
