A panel of AI experts were grilled on the impact and importance of artificial general intelligence by the US House of Representatives on Tuesday.
The hearing was ominously named “Artificial Intelligence – With Great Power Comes Great Responsibility.” Narrow AI for specific tasks has been advancing rapidly, and the committee wanted to know how far off artificial general intelligence (AGI), where a system can learn multiple tasks and perform them better than humans, would be.
Greg Brockman, cofounder and CTO at OpenAI, defined AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Progress, he said, is driven by three factors: data, computation, and algorithms.
A recent OpenAI study estimated there had been a 300,000-times increase in the amount of compute used to train AI systems since 2012. Brockman said it was a trend that he expected to continue over the next five years.
“Now to put that into perspective, that’s like your phone battery, which today lasts for a day, started to last for 800 years, and then five years later it lasts for 100 million years.”
Current AI systems only excel in narrow domains like playing certain games, translating text, or recognizing objects. It’s unknown if throwing more GPUs at a problem will magically result in AGI but it’s something that shouldn’t be ruled out, according to Brockman.
There are more pressing problems, however, as the technology improves. Fei-Fei Li, co-founder of AI4ALL, a nonprofit organization interested in mentoring students from underrepresented backgrounds in AI, warned of a lack of transparency and bias in systems that stems from a lack of diversity in the people building these algorithms.
“There’s nothing artificial about artificial intelligence. It’s inspired by people, it’s created by people, and most importantly it has an impact on people,” she said. “It’s a powerful tool that we’re only beginning to understand and that’s a profound responsibility.”
Tim Persons, chief scientist at the Government Accountability Office (GAO), said: “Special attention will be needed for our education and training systems, regulatory structures, frameworks for privacy and civil liberties, and our understanding of risk management in general.”
The experts unanimously agreed that the US had to stay at the forefront of AI and AGI development in order to remain competitive with other countries. China has pledged a whopping $7bn to R&D through to 2030, the European Union has promised $24bn by 2020, while the US spent a measly $600m in 2016.
They were also quizzed on potential ‘doomsday’ scenarios. Brockman compared thinking about AGI today to thinking about the internet in the late fifties.
“If someone was to describe to you what the internet was going to be, how it’d affect the world, and the fact that all these weird things were going to start happening...You’d be very confused. It’d be very hard to understand what these things will look like...
“Now imagine that that whole story – which played out over the course of the past 60, almost 70 years now – was going to play out over a much more compressed time scale. And so that’s the perspective that I have towards AGI.
“It’s the fact that it can cause this rapid change and it’s already hard for us to cope with what technology brings. So is it going to be malicious actors, or if the technology wasn’t built in a safe way, or the deployment and values it’s given is something we’re not happy with. All of those I think are real risks, and those are things we want to start thinking about today,” he concluded.
Persons agreed, and placed a big emphasis on the need to evaluate the risks: “I think the key thing is being clear-eyed about what the risks actually are, and not necessarily being driven by the entertaining yet science fiction-type narratives on these things – or projecting or going to extremes, assuming far more than where we actually are in the technology.” ®