AI software that can reproduce like a living thing? Yup, boffins have only gone and done it

Talk about telling your code to go screw itself


A pair of computer scientists have created a neural network that can self-replicate.

“Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems,” they argue in a paper popped onto arXiv this month.

It’s a key process in the reproduction of living things, and a necessary step for evolution through natural selection. Oscar Chang, first author of the paper and a PhD student at Columbia University, explained to The Register that the goal was to see whether AI could be made continually self-improving by mimicking the biological self-replication process.

“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection - just like in nature - if there was a self-replication mechanism for neural networks.”

The researchers compare their work to quines, a type of computer program that produces a copy of its own source code as its output. In a neural network, however, it isn't the source code that gets copied but the weights - which determine the strength of the connections between the different neurons.
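For a flavour of what a quine looks like in ordinary code, here is the classic two-line Python version. Run it and it prints its own source exactly; the whole trick is a string that serves as a template for the entire program and gets formatted into itself:

    s = 's = %r\nprint(s %% s)'
    print(s % s)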

The researchers set up a “vanilla quine” network, a feed-forward system trained to produce its own weights as its output. The network can also be set up to solve a task in addition to replicating its weights. They decided to use it for image classification on the MNIST dataset, where computers have to identify the correct digit from a set of handwritten numbers from zero to nine.

These networks are small, with a maximum of 21,100 parameters, compared to the several million found in standard image recognition models.
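To make the idea concrete, here is a minimal sketch in Python of a network being nudged to output its own weights. The layer sizes, the random index encodings and the bare-bones training loop are our own illustrative assumptions rather than the authors' exact recipe; the point is simply that the self-replication loss measures how far the network's predictions are from its current weights.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sketch only: sizes, encodings and the training loop are
    # assumptions, not the exact setup used in the paper.
    IN, HID = 16, 32
    W1 = rng.normal(0, 0.1, (IN, HID))   # input -> hidden weights
    W2 = rng.normal(0, 0.1, (HID, 1))    # hidden -> output weight

    def flat_params():
        return np.concatenate([W1.ravel(), W2.ravel()])

    n_params = flat_params().size
    # A fixed random code vector identifies each weight: feeding code i into
    # the network asks it to predict the value of weight i.
    codes = rng.normal(0, 1, (n_params, IN))

    lr = 1e-3
    for step in range(2000):
        theta = flat_params()        # current weights are the targets this step
        h = np.tanh(codes @ W1)
        pred = (h @ W2).ravel()      # the network's guess at its own weights
        err = pred - theta           # self-replication error
        # Gradient descent on the squared error, treating the targets as
        # constants within each step.
        dW2 = h.T @ err[:, None]
        dW1 = codes.T @ (err[:, None] @ W2.T * (1 - h ** 2))
        W1 -= lr * dW1 / n_params
        W2 -= lr * dW2 / n_params

    final = (np.tanh(codes @ W1) @ W2).ravel() - flat_params()
    print("mean squared self-replication error:", np.mean(final ** 2))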

Accuracy?

The network was trained on 60,000 MNIST images and tested on another 10,000. After 30 runs, the quine network reached an accuracy of 90.41 per cent. It's not a bad start, but its performance doesn't really compare to larger, more sophisticated image recognition models out there.

The paper states that “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the more of the network's capacity is taken up by self-replication, the less is left over for the image recognition task.

“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,” the paper said.
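As a toy illustration of why the two objectives compete (our own example, not taken from the paper), consider a single shared parameter pulled toward different targets by a task loss and a self-replication loss. Gradient descent on the combined objective settles somewhere between the two, satisfying neither fully:

    # Toy example: one shared parameter, two deliberately conflicting losses.
    w = 0.0
    task_target, selfrep_target = 1.0, -1.0
    lam = 1.0    # weight on the self-replication term
    lr = 0.1
    for _ in range(100):
        grad = 2 * (w - task_target) + lam * 2 * (w - selfrep_target)
        w -= lr * grad
    print(w)     # lands between the targets; neither objective is fully met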

Chang explained he wasn't sure why this happens, but noted that similar trade-offs show up in nature too.

“It's not entirely clear why this is so. But we note that this is similar to the trade-off made between reproduction and other tasks in nature. For example, our hormones help us to adapt to our environment and in times of food scarcity, our sex drive is down-regulated to prioritize survival over reproduction,” he said.

So at the moment it looks like self-replication in neural networks isn’t all that useful, but it’s still an interesting experiment.

“To our knowledge, we are the first to tackle the problem of building a self-replication mechanism in a neural network. As such, our work should be best viewed as a proof of concept,” he added.

But the researchers hope that one day it might come in handy for computer security or for self-repair in damaged systems.

“Learning how to enhance or diminish the ability for AI programs to self-replicate is useful for computer security. For example, we might want an AI to be able to execute its source code without being able to read or reverse-engineer it, either through its own volition or interaction with an adversary,” he said.

Self-replication is used for self-repair in damaged physical systems, he noted. “The same may apply to AI, where a self-replication mechanism can serve as the last resort for detecting damage, or returning a damaged or out-of-control AI system back to normal,” Chang added. ®
