Boffins develop method of driving computers insane

Test machine 'claimed responsibility for terrorist bombing'

Boffins in America report that they have successfully developed a method for driving computers insane in much the same way as human brains afflicted by schizophrenia. A computer involved in their study became so unhinged that it apparently "claimed responsibility for a terrorist bombing".

The research involved meddling with a neural network setup dubbed DISCERN, built by Professor Risto Miikkulainen, a top comp-sci boffin at the University of Texas. Grad student Uli Grasemann assisted in the process of driving the machine bonkers.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann. "Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In the computer pottiness experiments, DISCERN was taught simple stories. The researchers then tinkered with the automated mind in a fashion equivalent to the effects of an excessive release of dopamine in a human brain. In humans this tends to cause "hyperlearning" – the forging of incorrect connections between stored information.
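As a very rough analogy – not the paper's actual mechanism – you can picture hyperlearning as an association memory with its learning rate cranked too high. At a sane rate, only pairings seen repeatedly get stored; at an excessive ("dopamine-flooded") rate, a single chance pairing gets burned in as if it were a real memory. The words and thresholds below are invented for illustration:

```python
# Crude sketch of "hyperlearning", with made-up data. Each time two
# words appear in the same story, their link strengthens by the
# learning rate. Links above a threshold count as stored associations.
from itertools import combinations

def learn(stories, rate):
    links = {}
    for story in stories:
        for pair in combinations(sorted(set(story)), 2):
            links[pair] = links.get(pair, 0.0) + rate
    return links

def associations(links, threshold=0.5):
    """Pairs whose link is strong enough to count as a memory."""
    return {pair for pair, w in links.items() if w > threshold}

# Six genuine pairings, plus one stray co-occurrence.
stories = [("i", "dinner")] * 6 + [("bombing", "suspect")]

normal = associations(learn(stories, rate=0.1))  # only ("dinner", "i")
hyper = associations(learn(stories, rate=1.0))   # stray pair stored too
```

With the normal rate the one-off pairing stays below threshold; with the excessive rate it is stored alongside the real memories – the flavour of "incorrect connection" the researchers were after.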

This meddling duly sent DISCERN off its rocker in short order. According to a Texas Uni statement describing the research:

DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" — replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

The crazed, burbling computer was evaluated by a top psychologist, Ralph Hoffman of Yale Uni, who confirmed that it did indeed have bats in its belfry in much the same way as a human suffering from schizophrenia.

"We have so much more control over neural networks than we could ever have over human subjects," says Grasemann. "The hope is that this kind of modeling will help clinical research."

The research is published in the journal Biological Psychiatry. ®


Biting the hand that feeds IT © 1998–2022