Air Force colonel 'misspoke' when he said an AI-drone 'killed' its human operator

Plus: Around 3,900 jobs have been axed and replaced with AI, and Microsoft cosies up to a GPU provider to support OpenAI

AI In Brief The US Air Force denied testing software that led to an AI-powered drone "killing" its human operator in simulation after its chief of AI test and operations gave an eyebrow-raising presentation at a defence conference.

Colonel Tucker "Cinco" Hamilton reportedly described the possibility of the AI system turning against its human handler – attacking and killing the operator in a computational simulation – as a cautionary tale. The Royal Aeronautical Society, which hosted the Future Combat Air & Space Capabilities Summit, quoted Hamilton in a report that was quickly picked up by the press (including El Reg).

Ann Stefanek, a spokesperson from the US Air Force, however, told The Register: "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. This was a hypothetical thought experiment, not a simulation."

"It appears the colonel's comments were taken out of context and were meant to be anecdotal," she said. A representative from The Royal Aeronautical Society, on the other hand, said it was Hamilton who made the errors.

In a statement to the Society's magazine Aerospace, the representative said the colonel admitted he "misspoke" and had been describing a hypothetical thought experiment, not an exercise the US Air Force had actually run.

"We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome," Hamilton is quoted as saying. "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI," he said.

Getty files second UK lawsuit against Stability AI

Getty has asked judges at the High Court of Justice in London for injunctive relief to stop Stability AI selling its text-to-image software, in its second lawsuit against the company.

The stock photo supplier first sued the startup in January this year, accusing it of violating copyright law by scraping Getty's images to train its Stable Diffusion models. In a second filing reported this week by Reuters, Getty has gone a step further and asked judges to block Stability from selling its AI tools altogether.

Getty has reportedly asked the High Court to order Stability to destroy all Stable Diffusion models allegedly trained unlawfully on its images, or to hand over the software, and is also suing for damages. The amount has not been finalized.

Meanwhile, Stability AI has yet to file a defence to Getty's claims, and faces a separate lawsuit filed by Getty in US courts.

You can blame AI for some layoffs, says report

A report from outplacement firm Challenger, Gray & Christmas suggests that about 3,900 jobs were cut and replaced by AI in May alone.

The latest Challenger report showed just how frosty the labor market is right now. US-based employers axed 80,089 jobs in May – a 20 percent increase compared to the previous month and a 287 percent jump from the same month last year.

"Consumer confidence is down to a six-month low and job openings are flattening," Andrew Challenger, the company's senior veep, explained in a statement. "Companies appear to be putting the brakes on hiring in anticipation of a slowdown."

Table four of the report shows that the top reasons for layoffs are the tough global economic climate, still reeling from the COVID-19 pandemic, and supply chains disrupted by the ongoing Russia-Ukraine war. AI sits in tenth place: Challenger, Gray & Christmas reckon nearly 4,000 jobs have been replaced by the automated technology.

Lured by the idea of increased productivity and efficiency, businesses from all sorts of industries are rushing to incorporate AI. Some, like IBM and BT, have previously said that generative AI will replace thousands of workers in HR and customer service roles.

It may not be all doom and gloom, however. JPMorgan, for one, advertised 3,651 AI-related jobs from February through April – more than any other bank, according to Bloomberg.

Microsoft to work with third-party cloud provider for extra GPUs

Microsoft has reportedly signed a deal with cloud computing startup CoreWeave worth potentially billions of dollars for access to its GPUs, in a bid to help ChatGPT keep pace with demand.

ChatGPT sometimes fails to process user requests because the service is overwhelmed by demand. Training and running such AI models is computationally intensive and requires huge amounts of hardware, and even Microsoft's own Azure cloud platform cannot always provide enough capacity for OpenAI.

That's why Redmond has turned to CoreWeave – a startup that amassed tons of GPUs previously used to mine cryptocurrencies, according to CNBC. CoreWeave now operates as a cloud provider focused on renting out its hardware.

The startup said last week it had secured $200 million in its latest funding round, led by asset management firm Magnetar Capital, just after raising $221 million in its Series B round.

The little cloud provider is reportedly valued at $2 billion, and recently revealed it had managed to get one of its customers up and running on Nvidia's latest H100 chips. ®
