AI threatens yet more jobs – now, lab rats: Animal testing could be on the way out, thanks to machine learning

Time for rodents to retrain as PHP programmers


Machine learning algorithms can help scientists predict chemical toxicity with an accuracy similar to that of animal testing, according to a paper published this week in Toxicological Sciences.

A whopping €3bn (over $3.5bn) is spent every year studying the negative impacts of chemicals on animals such as rats, rabbits, and monkeys. In Europe in 2011, the nine most frequently performed safety tests resulted in the death of the poor critters 57 per cent of the time.

By using software, chemists may be able to spend less on animal testing and save more creatures.

To demonstrate this, a team of researchers first scoured a range of databases to label 80,908 different chemicals. These labels cover properties such as skin corrosion, irritation, serious eye damage, and hazard to the ozone layer.

Next, they used a mixture of unsupervised and supervised learning to build a statistical model that groups chemicals based on how chemically and toxicologically similar they are to one another. The unsupervised step uses the k-nearest-neighbors algorithm to build, for each compound, a vector counting how many times each label occurs among its most chemically similar neighbors.
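The paper itself doesn't come with code, but the gist of that unsupervised step can be sketched in a few lines of Python. Everything below – the array sizes, the choice of ten neighbours, the scikit-learn calls – is an illustrative stand-in rather than the researchers' actual pipeline:

```python
# Rough sketch of the neighbour-counting idea, not the paper's real pipeline.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical inputs, one row per chemical:
# fingerprints - binary structural descriptors; labels - 0/1 flags per hazard property.
fingerprints = rng.integers(0, 2, size=(1000, 256))  # stand-in for the 80,908 real chemicals
labels = rng.integers(0, 2, size=(1000, 74))         # 74 hazard properties

# Find each chemical's most structurally similar neighbours.
knn = NearestNeighbors(n_neighbors=11, metric="hamming").fit(fingerprints)
_, neighbour_idx = knn.kneighbors(fingerprints)
neighbour_idx = neighbour_idx[:, 1:]  # drop the chemical itself so its own labels don't leak

# Count how often each hazard label appears among a chemical's neighbours.
# These counts are the feature vectors fed to the supervised models below.
feature_vectors = labels[neighbour_idx].sum(axis=1)  # shape: (1000, 74)
```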

These vectors are then used to train a supervised learning model. Using logistic regression and random forest algorithms, the model learned to assign labels – dangerous, corrosive, and so on – to new test compounds based on their chemical makeup. It was accurate 70 to 80 per cent of the time, which is on par with OECD figures suggesting that results from animal tests are themselves repeatable only about 78 to 96 per cent of the time.
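Again purely as a hypothetical sketch rather than the team's real code, the supervised half – logistic regression and a random forest scoring a single hazard label – might look like this, using stand-in data shaped like the neighbour-count vectors above:

```python
# Illustrative sketch: train two classifiers on made-up neighbour-count features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_vectors = rng.integers(0, 11, size=(1000, 74))  # label counts among 10 neighbours
y = rng.integers(0, 2, size=1000)                       # one hazard label, e.g. skin corrosion

X_train, X_test, y_train, y_test = train_test_split(
    feature_vectors, y, test_size=0.2, random_state=0)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=200)):
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{type(model).__name__}: {acc:.2f}")  # the paper reports roughly 0.70-0.80 on real data
```

On random stand-in data the scores will hover around 0.5, of course; the point is the shape of the pipeline, not the numbers.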

Accuracy sometimes drops when the software's output is compared to real tests on creatures, because individual animals don't always react to a chemical in the same way. “The reproducibility of an animal test is an important consideration when considering acceptance of associated computational models and other alternative approaches,” the paper, published on Wednesday, concluded.

“These results additionally show that computational methods, both simple and complex, can provide predictive capacity similar to that of animal testing models and potentially stronger in some domains.”

At the moment, the model is still quite simple and covers only 74 properties. Machine learning is also frustrating in that expanding a model with more data can actually make it harder for scientists to understand and explain its predictions, so it'll be a while before animal testing can really be phased out. ®
