Boffins' neural network can work out from your speech whether you'll develop psychosis

Software trained from patient transcripts. The 'normal' dataset? From, er, Reddit

Machine-learning algorithms can help psychologists predict, with 90 per cent accuracy, the onset of psychosis by analyzing a patient's conversations, according to research published in npj Schizophrenia.

Psychosis can be a symptom of psychiatric disorders such as schizophrenia or bipolar disorder. People experiencing psychosis find it difficult to tell what's real and what isn't; some report visual and auditory hallucinations, and are led to believe delusional thoughts. Psychosis can also be brought on by stress, drugs, or lack of sleep, though.

Mental illnesses typically develop in a person's early twenties, and warning signs start showing at around age 17. It is estimated that about 25 to 30 per cent of young people suffering from at least some symptoms, such as psychosis, eventually develop full-blown psychiatric disorders such as schizophrenia.

By studying a patient’s speech patterns, it should be possible for software to work out whether that person will eventually suffer from psychosis, and thus whether or not they may develop a psychiatric disorder, Phillip Wolff, coauthor of the paper published in npj Schizophrenia and a psychology professor at America's Emory University, explained.

“It was previously known that subtle features of future psychosis are present in people's language, but we've used machine learning to actually uncover hidden details about those features," he said late last week. Some of the features include how coherent someone’s speech is, and how often they talk about voices and sounds.

Wolff and his colleagues used AI algorithms to analyse speech from two datasets. The first dataset was the North American Prodrome Longitudinal Study (NAPLS), which documents, among other things, interviews and conversations with 40 participants in their early twenties who have been identified as being at risk of developing psychosis. Speech from 30 of the participants was used to teach the software how potential psychosis sufferers talk, and the remaining ten were used to test the trained model for its accuracy in predicting psychosis risk.
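That 30/10 hold-out split can be sketched like so. To be clear, the participant identifiers and the random seed below are invented for illustration; the paper doesn't describe how its split was drawn:

```python
import random

# 40 at-risk NAPLS participants (identifiers made up for this sketch)
participants = [f"napls_{i:02d}" for i in range(1, 41)]

random.seed(7)  # arbitrary seed, purely so the sketch is reproducible
random.shuffle(participants)

# 30 participants to train on, the remaining 10 held out for evaluation
train, test = participants[:30], participants[30:]
print(len(train), len(test))  # 30 10
```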

"Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes," said Neguine Rezaii, first author of the paper, who completed the research at Emory University and is now a neuropsychiatrist at Harvard University. "The automated technique we've developed is a really sensitive tool to detect these hidden patterns. It's like a microscope for warning signs of psychosis."

The second dataset was scraped from Reddit, a popular internet messageboard, and contained 401 million words from online conversations between 30,000 netizens. This was considered "normal" chatter. Given some, and certainly not all, subreddits we've seen, the word "normal" here is doing some extremely heavy lifting. To be clear, the NAPLS dataset was used to train and test the model, and the Reddit dataset was used to see how the software reacted to seemingly normal people.

Here's how the paper put it, and no, you're not developing psychosis yourself, it truly is written like this all the way through:

By contrasting the smaller body of text [the NAPLS dataset] with the larger body of text [the Reddit dataset], the unique aspects of the smaller body of text can be made more obvious and accentuated. Without such a comparison, a content analysis of a small body of text would contain information not only about what is unique to that text, but also information about what is common to other texts. Text from the social media site Reddit was used to construct a corpus reflecting the content of normal conversations.

In other words, we think, the researchers, after running their algorithms through the Reddit dataset, concluded the Reddit dataset had a higher semantic density than the NAPLS data, meaning Redditors are unlikely to be psychotic. More on that in a moment. We asked the research team to expand on this in plain English, and we've not heard back.

We note that transcripts of video-taped conversations were taken from NAPLS for the training and evaluation sets, whereas the Reddit dataset includes entirely typed-in forum messages. People express themselves online in ways they wouldn't in face-to-face encounters, we reckon, so take this comparison of the two with a pinch of salt.

Word2Vec and vector unpacking

The team used Word2Vec, a neural network model pre-trained on the New York Times archive, to convert words in the NAPLS dataset into vectors. Words with similar meanings are grouped closer together: the word ‘queen’ is more closely associated with ‘woman’ than with ‘man’, for example, so ‘queen’ and ‘woman’ sit nearer one another in vector space.
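That nearness is usually measured with cosine similarity. Here is a minimal sketch using cosine similarity over toy vectors; the numbers are invented, not taken from any real Word2Vec model:

```python
import numpy as np

# Toy four-dimensional embeddings -- purely illustrative values
vectors = {
    "queen": np.array([0.9, 0.8, 0.1, 0.2]),
    "woman": np.array([0.8, 0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.2, 0.9, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_queen_woman = cosine(vectors["queen"], vectors["woman"])
sim_queen_man = cosine(vectors["queen"], vectors["man"])
print(sim_queen_woman > sim_queen_man)  # True: queen sits nearer woman than man
```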

Next, the software used a technique known as vector unpacking to determine the meaning of a given sentence uttered by a person, from the words used in that sentence. The algorithms showed that people at risk of psychosis often use a lot of words to talk about random ideas that don’t have a clear meaning, something the researchers describe as “low semantic density.” High semantic density therefore indicates someone is speaking coherently about a subject.
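The paper's actual vector-unpacking method is more involved, but the intuition can be sketched with a crude proxy: score a sentence by how tightly its word vectors align. The toy three-dimensional embeddings below are hand-picked, invented values, not output from any trained model:

```python
import numpy as np

# Hand-picked toy embeddings: on-topic words share a direction,
# while vague filler words point in unrelated directions.
lexicon = {
    "view":   np.array([0.9, 0.1, 0.0]),
    "the":    np.array([0.8, 0.0, 0.2]),
    "latest": np.array([0.9, 0.0, 0.1]),
    "news":   np.array([1.0, 0.0, 0.0]),
    "sometimes": np.array([0.0, -0.7, 0.7]),
    "things":    np.array([0.0, 1.0, 0.0]),
    "are":       np.array([0.0, 0.0, 1.0]),
}

def semantic_density(sentence):
    """Crude alignment proxy for semantic density.
    Aligned word vectors -> their mean keeps its length (high density);
    scattered word vectors -> their mean shrinks toward zero (low density)."""
    vecs = np.array([lexicon[w] / np.linalg.norm(lexicon[w])
                     for w in sentence.lower().split()])
    return float(np.linalg.norm(vecs.mean(axis=0)))

low = semantic_density("sometimes things are things")
high = semantic_density("view the latest news")
print(f"{low:.2f} < {high:.2f}")  # word salad scores lower than coherent speech
```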

“The sentence ‘Sometimes things are things’ was used as an example of a sentence with low semantic density and the sentence ‘View the latest news’ was used as an example of a sentence with high semantic density,” the paper stated. At-risk individuals are also more likely to talk about voices and sounds.

Of the ten NAPLS participants used to test the final model, half did not go on to develop schizophrenia, while the other half did. The model was 90 per cent accurate at predicting, from their speech patterns alone, who was more likely to enter a psychotic state: low semantic density meant they were likely to develop psychosis.
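A hold-out evaluation like that boils down to thresholding a score and counting hits. In the toy sketch below, the density scores and the cut-off are entirely invented (chosen so that exactly one of the ten is misclassified, mirroring the 90 per cent figure), not the paper's numbers:

```python
# participant id -> (semantic density score, later developed psychosis?)
# All values are invented for illustration.
heldout = {
    "p01": (0.41, True),  "p02": (0.38, True),  "p03": (0.45, True),
    "p04": (0.57, True),  "p05": (0.36, True),
    "p06": (0.71, False), "p07": (0.66, False), "p08": (0.58, False),
    "p09": (0.74, False), "p10": (0.69, False),
}

THRESHOLD = 0.55  # assumed cut-off: speech below this density is flagged

def predicts_conversion(density):
    return density < THRESHOLD  # low semantic density -> flagged as at risk

correct = sum(predicts_conversion(density) == converted
              for density, converted in heldout.values())
print(f"accuracy: {correct / len(heldout):.0%}")  # prints "accuracy: 90%"
```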

Rezaii hoped that the model could help psychologists diagnose and treat mental illnesses more objectively. "In the clinical realm, we often lack precision. We need more quantified, objective ways to measure subtle variables, such as those hidden within language usage."

"If we can identify individuals who are at risk earlier and use preventive interventions, we might be able to reverse the deficits," Elaine Walker, coauthor of the paper and a psychology professor at Emory University, concluded. "There is good data showing that treatments like cognitive-behavioral therapy can delay onset, and perhaps even reduce the occurrence of psychosis." ®
