US military takes aim at 2024 for human-versus-AI aircraft dogfights. Have we lost that loving feeling for Top Gun?
Software fighter pilots could bring a whole new meaning to 'crash and burn'
In brief US Secretary of Defense Mark Esper said a “full-scale tactical aircraft” controlled by an AI system will go up against a human fighter pilot in a real-world dogfight in 2024.
Although the experiment is a battle between brains and binary, it’s not aimed at replacing military personnel altogether: computers will be used for low-priority work.
“To be clear, AI’s role in our lethality is to support human decision-makers not replace them. We see AI as a tool to free up resources, time, and manpower so our people can focus on higher priority tasks and arrive at the decision point whether in a lab or on the battlefield faster and more precise than the competition,” he said last week at a virtual symposium held by the Pentagon’s Joint Artificial Intelligence Center.
DARPA announced its Air Combat Evolution (ACE) program last year, and last month held a virtual competition pitting an AI system against a real pilot in a flight simulator. The machine thrashed the human 5-0 in those limited settings.
Don’t believe the GPT-3 hype
The Guardian newspaper was criticized for publishing an opinion piece constructed solely from cherry-picked sentences from reams of text generated by OpenAI’s GPT-3 model.
Headlined “A robot wrote this entire article. Are you scared yet, human?” the article is written from the perspective of a machine trying to explain to humans that AI isn't going to stamp out humanity. On the face of it, the piece is impressive: the grammar is good, and it comfortably glides through history, philosophy, and technology. Are our days as vultures bashing out prose for the masses to tight deadlines numbered?
Well, read the note at the end of the article. You’ll see that the whole thing was put together by a human editor, who copied and pasted sentences from eight 500-word essays composed by the GPT-3 software. The op-ed was not generated all in one go.
Critics were quick to pour cold water on the hype. Full Fact, a fact-checking org in the UK, found a factual error, too. GPT-3 claimed the word robot originated from the Greek language, when in fact it’s technically Czech. Tsk tsk, it's not like you'd catch one of us humans maeking any erros.
New European AI PhD programme
The European Laboratory for Learning and Intelligent Systems (ELLIS), set up to help Europe remain competitive in the field of AI and machine learning, is now funding PhDs at various universities.
“The program supports excellent PhDs across Europe by giving them access to leading research through boot camps, summer schools and workshops of the ELLIS programs,” it said. “Every PhD student is supervised by one ELLIS fellow/scholar and one ELLIS member from a different country and conducts a one year exchange at the other location.”
You can read more about it here.
Portland goes nuclear on facial tech
The latest facial-recognition system ban, passed in Portland, Oregon, is the strictest in any US city yet. Facial-recognition cameras will not be allowed in any public or private places, such as stores, banks, or rail stations.
The new rules, described in two separate ordinances, were passed by Portland City Council last week. The first one bars public entities from acquiring facial-recognition services or using them in spaces managed and owned by the city, and goes into effect immediately. The second one prevents private spaces that are open for public use from being surveilled, from January 2021.
There are some exceptions, however. For example, city council staff are allowed to use the technology to unlock their own smartphones or tablets at work.
“We must protect the privacy of Portland’s residents and visitors, first and foremost. These ordinances are necessary until we see more responsible development of technologies that do not discriminate against Black, Indigenous and other people of color,” said Mayor Ted Wheeler. ®