Sci-fi author Neal Stephenson wants AIs fighting AIs so those most fit to live with us survive

Fears surrendering to GenAI makes humans less competitive

Science fiction author Neal Stephenson has suggested AIs should be allowed to fight other AIs, because evolution brings balance to ecosystems, but also thinks humans should stop using AI before it dumbs down our species.

Stephenson coined the term “metaverse” in his 1992 classic “Snow Crash”, and his 1999 epic “Cryptonomicon” envisioned digital currencies and the encryption needed to make them possible. The prescience of those works, and the cracking yarns he spins, mean he is in demand as both a novelist and thinker.


In the latter capacity he recently participated in what he described as “a panel discussion on AI as part of a private event in New Zealand.”

He’s since posted his opening remarks from the event, which open by noting understandable anxiety about the sudden arrival of generative AI.

Stephenson suggests remembering that we already share Earth with many non-human intelligences from the animal kingdom and have learned to get along with them.

“We’re used to thinking of them as being less intelligent than we are, and that’s usually not wrong,” he wrote, “but it might be better to think of them as having different sorts of intelligence, because they’ve evolved to do different things.”

His remarks offer ways to categorize non-human intelligences, and he ends up suggesting that the most useful AIs will be like sheepdogs – creatures that can do specific tasks better than a human. He thinks other AIs will be like dragonflies – largely oblivious to humans but doing certain things brilliantly well – while others will be like ravens or crows in that they are aware of humans but don’t care about us.

Stephenson thinks ChatGPT is like a lapdog – a class of intelligence that can’t survive without people, and whose members are “acutely tuned in to humans and basically exist to make life easier for us.”

The author thinks AIs which pose a threat to humans will also emerge.

“I am hoping that even in the case of such dangerous AIs we can still derive some hope from the natural world, where competition prevents any one species from establishing complete dominance,” he wrote. “Even T. Rex had to worry about getting gored in the belly by Triceratops, and probably had to contend with all kinds of parasites, infections, and resource shortages.

“By training AIs to fight and defeat other AIs we can perhaps preserve a healthy balance in the new ecosystem. If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations.”

Stephenson doesn’t suggest that out of malice, instead articulating “a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.”

Augmentation and amputation

The novelist admitted that AI-on-AI combat is unlikely in the short term, so turned his attention to other ways of curbing the harms AI might create.

To do so he invoked Marshall McLuhan’s observation that “every augmentation is also an amputation”.

He illustrated that concept by recounting conversations he has had with educators “who all report … their students use ChatGPT for everything, and in consequence learn nothing.

“We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s “The Time Machine”, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down,” Stephenson wrote, before asking “who is really the lapdog in a world full of powerful AIs?”

He thinks it’s easy to avoid becoming Eloi: all we need is “simple interventions such as requiring students to take examinations in supervised classrooms, writing answers out by hand on blank paper.

“We know this is possible because it’s how all examinations used to be taken. No new technology is required, nothing stands in the way of implementation other than institutional inertia, and, I’m afraid, the unwillingness of parents to see their children seriously challenged,” he wrote.

“In the scenario I mentioned before, where humans become part of a stable but competitive ecosystem populated by intelligences of various kinds, one thing we humans must do is become fit competitors ourselves. And when the competition is in the realm of intelligence, that means preserving and advancing our own intelligence by holding at arm’s length seductive augmentations in order to avoid suffering the amputations that are their price.” ®
