US gov sets up the National Artificial Intelligence Initiative Office at the last minute before Trump's presidency ends
Plus: Google trains 1.6-trillion-parameter AI model, popular software used for job screenings scraps facial-recognition feature
In brief Just days before President Trump is due to leave office, the White House created its National Artificial Intelligence Initiative Office, which will sit under the Office of Science and Technology Policy.
The org will be in charge of getting industry and academia to work together so that the development of AI falls in line with US strategy. It will also help shape new policies and set up R&D funding initiatives. The move is one of Trump’s last-ditch efforts to leave behind something potentially long-lasting in the field from his time in government.
“Through the Trump Administration’s historic efforts and the unparalleled enthusiasm and activity of the private sector and academia, the United States remains the world leader in artificial intelligence,” according to a statement.
“The White House’s new National AI Initiative Office will be integral to the Federal Government’s efforts to maintain this leadership position for many years to come.”
Facial-recognition tool used to screen job candidates has been scrapped
HireVue, a software biz that developed one of the most widely used job screening algorithms, has dropped its controversial facial-recognition feature.
The software analyzed candidates’ facial expressions as they spoke during interviews via webcam and, together with other signals such as tone of voice and choice of words, scored the job hopefuls on qualities including friendliness and trustworthiness.
Experts have argued such software suffers from biases and shouldn’t be employed when people’s livelihoods are at stake. In 2019, the Electronic Privacy Information Center, a non-profit, filed a complaint with America's Federal Trade Commission alleging HireVue “committed unfair and deceptive practices in violation of the FTC Act.”
Now, the company has killed off its most controversial feature. “It was adding some value for customers, but it wasn’t worth the concern,” HireVue’s CEO told Wired.
Customize Amazon Alexa with your own wake word, eg: oi!
Amazon has launched Alexa Custom Assistant, a platform that allows organizations to tailor the voice-controlled smart assistant for their own use.
“The Alexa Custom Assistant is built directly on Alexa technology, providing companies access to world class, always-improving voice AI technology, customized with unique wake word, voice, skills, and capabilities,” the internet giant said.
For example, automakers including Fiat Chrysler Automobiles have already signed on as customers, using Alexa’s software to power the voice assistants in their cars. Developers aren’t required to deploy Amazon’s own smart-speaker hardware; the assistant is software they can run on their own devices. Amazon hopes that by opening the service up to companies, its machine-learning-powered system will find its way into mobile apps, gadgets, and even video games.
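Amazon hasn’t spelled out the Custom Assistant’s developer workflow in its announcement, but for a rough flavor of what a bespoke voice capability looks like in code, here is a minimal sketch using the standard Alexa Skills Kit SDK for Python (ask-sdk-core); the StartEngineIntent name and its spoken reply are invented for this example.

# Minimal sketch using the standard Alexa Skills Kit SDK for Python (ask-sdk-core).
# "StartEngineIntent" and the spoken reply are hypothetical, for illustration only.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response

class StartEngineIntentHandler(AbstractRequestHandler):
    # Fires when the driver asks the in-car assistant to start the engine.
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("StartEngineIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        return handler_input.response_builder.speak("Starting the engine now.").response

sb = SkillBuilder()
sb.add_request_handler(StartEngineIntentHandler())
lambda_handler = sb.lambda_handler()  # deployable as an AWS Lambda entry point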
Largest neural network yet, with over a trillion parameters
Googlers have built a humongous language model with an eye-popping 1.6 trillion parameters, more than nine times that of OpenAI’s GPT-3 system. Language models are all the rage right now, and this one's special trick appears to be the efficient way in which it was taught.
The goliath model was trained on 750GB of text scraped from the internet, which would normally demand an enormous amount of computational resources to crunch through. The Googlers managed it with just 32 TPU cores, we're told, thanks to a new technique they call Switch Transformers. The paper describing the method is mind-boggling. Essentially, the 1.6-trillion-parameter model is sparsely activated: only a fraction of its weights is used to process any given input, which makes it far cheaper to train than a dense model of the same size.
“We achieve this by designing a sparsely activated model that efficiently uses hardware designed for dense matrix multiplications such as GPUs and TPUs," the paper stated. "In our distributed training setup, our sparsely activated layers split unique weights on different devices. Therefore, the weights of the model increase with the number of devices, all while maintaining a manageable memory and computational footprint on each device.”
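For a feel of how that sparse activation works, here is a toy, self-contained sketch of Switch-style top-1 expert routing in Python and NumPy. It is not Google's implementation: the dimensions are invented, and the real system adds load-balancing losses, expert-capacity limits, and places each expert on a separate accelerator.

# Toy Switch-style routing layer (NumPy). Not Google's code: dimensions are
# invented, and the real Switch Transformer adds load-balancing losses,
# capacity limits, and puts each expert on a different device.
import numpy as np

rng = np.random.default_rng(0)
num_tokens, d_model, d_ff, num_experts = 8, 16, 32, 4

tokens = rng.standard_normal((num_tokens, d_model))
router_w = rng.standard_normal((d_model, num_experts))   # router scores each token against each expert
experts = [(rng.standard_normal((d_model, d_ff)),
            rng.standard_normal((d_ff, d_model))) for _ in range(num_experts)]

def switch_layer(x):
    logits = x @ router_w
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)            # softmax over experts
    choice = probs.argmax(axis=-1)                        # top-1: one expert per token
    gate = probs[np.arange(len(x)), choice]               # gate value scales the output
    out = np.zeros_like(x)
    for e, (w_in, w_out) in enumerate(experts):
        mask = choice == e
        if mask.any():                                    # each expert only sees its own tokens
            hidden = np.maximum(x[mask] @ w_in, 0.0)      # ordinary ReLU feed-forward block
            out[mask] = gate[mask][:, None] * (hidden @ w_out)
    return out

print(switch_layer(tokens).shape)                         # (8, 16): only one expert's weights used per token

Adding experts grows the parameter count without increasing the work done per token, and in the full-scale model each expert sits on its own accelerator, which is how the weights balloon to 1.6 trillion while the memory and compute footprint per device stays manageable. ®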