Analysis At a US Senate Committee on Commerce, Science, and Transportation hearing this week chaired by Senator Ted Cruz (R-TX), artificial intelligence experts were grilled on how to keep the US ahead of its competitors.
AI is advancing at an increasing pace. Although research in the field dates back to the 1940s, recent advances in computational power and the availability of big data have propelled AI to the prominence it enjoys today.
As the biggest technology companies continue to pump money into AI research, promising to revolutionize everything from the automotive to the healthcare industries, governments around the world are starting to take notice.
Following the White House’s report, Preparing for the Future of Artificial Intelligence, Wednesday's meeting was the first Senate hearing on AI, and was given the title "The Dawn of Artificial Intelligence." Written statements from the experts, submitted ahead of their appearance before the committee yesterday, can be read right here.
The technology is still in its early days, but the idea is that if the US government can get a grip on the current trends now, it’ll be able to capitalize on AI in a way that is beneficial to society.
AI is “available to the bad guys too,” said Dr Andrew Moore, who was a witness at the hearing and Dean of the School of Computer Science at Carnegie Mellon University. "It’s important that the US takes control, maintains competitiveness, and becomes a leader in such a powerful area of technology."
The best way to create a new future is to invent it, and the best way to do that is through research, said Greg Brockman, a witness at the hearing and cofounder and CTO of non-profit research company OpenAI.
“AI has the potential to be our biggest advance yet. We have a lead but we don’t have the monopoly,” said Brockman. Countries like China, Korea, and Canada are all investing in AI.
Compete but remain open
The US should “compete on applications, but remain open and collaborative in research,” senators were told. Publishing research papers allows companies to “pool resources to make faster breakthroughs” and attracts the best talent, Brockman added.
Some companies are better at that than others. Research from Google and Facebook is often public, but there aren’t any papers from Apple or any that show how IBM Watson or Microsoft’s Cortana work.
All the witnesses on the panel agreed that the US government had to be willing to spend more money on AI research. Moore said some of his colleagues working on using AI to build better prosthetic hands were struggling to secure funding for their research, while industry offers “two- to three-million-dollar start-up packages” to lure researchers out of academia.
Fei-Fei Li is the latest major researcher to be lured away from academia. Li, director of AI research at Stanford University and a prominent expert in computer vision, has just left her position to head up Google’s Cloud platform.
Brockman warned that the government has a role to play in democratizing AI, to stop knowledge being locked away in the hands of a few major companies, and it should continue funding universities.
AI is also slowly creeping into the space industry, an area where the US prides itself on its leadership. “We need AI to explore the nooks and crannies of the Solar System,” but there is “no clear financial motive,” said Steve Chien, Senior Research Scientist and Group Supervisor at the Artificial Intelligence Group at NASA’s Jet Propulsion Laboratory.
Pushing AI doesn’t come without risks, however. The committee questioned the panel on imminent problems and long-term risks, referencing Elon Musk, who compared building AI to “summoning the demon.”
Speaking at the hearing as a witness, Eric Horvitz, managing director of Microsoft Research Lab and interim co-chair of the Partnership on Artificial Intelligence, said there is plenty of hype surrounding general AI, but people need to “reflect and review what’s possible.”
Moore compared the current systems to “idiot savants,” saying “they are only able to search the space we prescribe really well.” Brockman said he believed general AI could be “ten to a hundred years away,” and that it was more important to answer the questions that can be answered now, since doing so will help address medium- and long-term questions, such as job losses, in the future.
Questions such as liability were considered more pressing. New kinds of frameworks are needed for dealing with liability when intelligent computers make mistakes – who do you blame: the programmer, the user, or the software itself? “Who is responsible for what? Things are unsettled in this space,” Horvitz said.
Another problem highlighted was the fact that implicit biases can leak into machines through data. It doesn’t help that current systems are black boxes, and the decision-making process is “opaque to human beings.”
“You don’t want high-stakes applications to have cultural biases,” Horvitz added. There are signs that the use of machine learning is open to abuse. A controversial paper released by Chinese researchers claimed that machines could be trained to detect criminals by facial recognition alone.
Although the paper was controversial, having it published in the open means that it can be scrutinized by the wider community of AI researchers.
Right now, AI is only at “the tip of the iceberg.”
“The biggest risk is that we lose the openness we have. Today we can plan for the future by keeping it open,” Brockman said. ®