DeepMind boffins brain-damage AI to find out what makes it tick

All that effort and they still aren't sure how it works


Researchers trying to understand how neural networks work shouldn't focus solely on interpretable neurons, according to new research from DeepMind.

AI systems are often described as black boxes: it's difficult to understand how they work and how they arrive at particular outcomes, which makes people nervous about using them for important decisions in areas such as healthcare or recruitment.

Making neural networks more interpretable is a hot topic in research. It's possible to look at the connections between different groups of neurons and visualise which ones correspond to a specific class.

If an image classification model is fed different types of pictures, say images of cats and dogs, researchers can find the 'cat neurons' or the 'dog neurons'.

These interpretable neurons are considered important because they appear to push the neural network towards a particular answer: in this case, whether the animal in the image is a cat or a dog.
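For the curious, here is one rough way to score units for class selectivity. This is not DeepMind's actual code, just a minimal sketch using NumPy and stand-in activation data: it compares each unit's average response to "cat" images with its average response to everything else.

```python
# Rough illustration (not DeepMind's code) of scoring neurons for class
# selectivity. The activation matrix here is random stand-in data; in practice
# it would be recorded from a real layer of a trained classifier.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_neurons = 1000, 256
activations = rng.random((n_images, n_neurons))      # layer outputs per image
labels = rng.integers(0, 10, size=n_images)          # 10 classes; class 3 = "cat"
CAT = 3

cat_mean = activations[labels == CAT].mean(axis=0)   # mean response to cats
other_mean = activations[labels != CAT].mean(axis=0) # mean response to non-cats

# Selectivity in [-1, 1]: close to 1 means the unit fires mostly for cats.
selectivity = (cat_mean - other_mean) / (cat_mean + other_mean + 1e-8)
cat_neurons = np.argsort(selectivity)[-5:]           # five most cat-selective units
print("Most cat-selective units:", cat_neurons)
```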

A paper from DeepMind, due to be presented at the International Conference on Learning Representations (ICLR) in April, shows that studying these interpretable neurons alone isn't enough to understand how deep learning truly works.

“We measured the performance impact of damaging the network by deleting individual neurons as well as groups of neurons,” according to a blog post.

Deleting neurons removes their contribution to the rest of the network and can make the network's performance drop. For example, if the cat neurons are deleted and the model is shown a picture of a cat, it may have a harder time identifying the animal correctly, and its accuracy decreases.
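To give a rough idea of the mechanics, the sketch below zeroes out a handful of hidden units in a toy PyTorch classifier via a forward hook and re-measures accuracy. This is not the paper's code: the untrained model and random images are stand-ins for a real trained network and test set, so it demonstrates the ablation mechanism rather than reproducing the results.

```python
# Minimal sketch (not DeepMind's code) of ablating units by zeroing their
# activations with a forward hook, then re-measuring accuracy.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))
images = torch.rand(512, 1, 32, 32)          # stand-in "test images"
labels = torch.randint(0, 10, (512,))        # stand-in labels

def accuracy():
    with torch.no_grad():
        return (model(images).argmax(dim=1) == labels).float().mean().item()

units_to_delete = [0, 7, 42]                 # e.g. the most "cat-selective" units found earlier

def ablate(module, inputs, output):
    output[:, units_to_delete] = 0.0         # silence the chosen units
    return output

print("accuracy before ablation:", accuracy())
handle = model[2].register_forward_hook(ablate)   # hook the hidden layer's (ReLU) output
print("accuracy after ablation:", accuracy())
handle.remove()
```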

But the results showed that these class-specific neurons weren't all that important after all: deleting the interpretable neurons barely changed the network's performance.

On one hand, it's a little disheartening to find that studying the interpretable neurons isn't enough to untangle the inner workings of a neural network. On the other, it's not too surprising: the networks that were least affected by having neurons deleted were those that didn't rely on memorising training data and generalised better to new images. And that's how neural networks should work, really.

DeepMind hopes to "explain the role of all neurons, not just those which are easy-to-interpret".

“We hope to better understand the inner workings of neural networks, and critically, to use this understanding to build more intelligent and general systems,” it concluded. ®
