Roundup Let's get you up to speed on the latest AI news, beyond what we've already covered lately.
Hey, our CPUs can do AI, too: One of Intel’s processors, the Xeon Platinum 9282, can run inference on the popular computer vision model ResNet-50 faster than Nvidia’s Tesla V100 GPU.
Chipzilla gushed about its 14nm CPU being able to crunch through 7,878 images per second on the ResNet-50 architecture, compared to Nvidia’s 7,844 images per second on its Tesla V100 and 4,944 images per second for its newer T4 chip. Essentially, that means Intel’s top-end data-center processor can push images through ResNet-50 neural nets slightly faster than those two Nvidia GPUs can.
But there’s a catch (there always is).
Firstly, it won’t work for all configurations of the model. It requires ResNet-50 to be written in Caffe so that it can be optimized by Intel’s Optimization for Caffe software package.
Secondly, it was demonstrated by spinning up 28 virtual instances, each one assigned four CPU cores, and using a batch size of 11. That’s a total of 112 cores, and each Xeon Platinum 9282 processor has 56 CPU cores, so that’s two Xeon Platinum 9282 chips for every single Tesla V100 or T4 GPU. Or potentially one 9282 if you rely on Hyper-Threading, which is, uh... awkward.
Thirdly, Intel is using INT8 (8-bit integer) precision, whereas Nvidia's using a mixture of FP32 and FP16. Plus, the 9282 has only just arrived, and isn't available to most customers, whereas the V100's been out for a few years. It's kinda like comparing apples and pears, as industry expert Scott Le Grand observed:
2800 mm^2 of unavailable silicon with specialized int8 hardware going toe to toe with 815 mm^2 of GPU silicon from 2017 running FP16 doesn't float my boat, but I'm not writing press releases for @intel. https://t.co/2BUTbodt4i
— Scott Le Grand (@scottlegrand), May 14, 2019
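For the curious, the back-of-the-envelope arithmetic behind the comparison can be sketched in a few lines of Python. The figures are the ones quoted above; the hardware names are just dictionary keys for illustration:

```python
# Intel's ResNet-50 demo setup: 28 virtual instances, four CPU cores apiece
instances = 28
cores_per_instance = 4
total_cores = instances * cores_per_instance  # 112 cores in play

# Each Xeon Platinum 9282 has 56 physical cores, so the demo needs two chips
# (or one, if you count Hyper-Threaded logical cores as real ones)
cores_per_chip = 56
chips_needed = total_cores // cores_per_chip

# Throughput figures quoted in the story, in images per second
images_per_sec = {"2x Xeon 9282": 7878, "Tesla V100": 7844, "Tesla T4": 4944}
lead_over_v100 = images_per_sec["2x Xeon 9282"] / images_per_sec["Tesla V100"]

print(chips_needed)              # 2
print(round(lead_over_v100, 4))  # 1.0043
```

A lead of less than half a per cent, from twice the chip count, which goes some way to explaining the raised eyebrows.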
Speech-to-speech language translation: Google engineers have managed to build a system that can translate speech between different languages without having to map it to text first.
The little microphone-and-speaker button on Google Translate is a lifesaver. It lets users record speech in one language and then plays back a spoken translation in another, which is particularly handy when you're dealing with languages you can't really read or write.
In order for speech-to-speech translation to work, however, it requires a middle step of translating what’s being said into text first. Now, Google has developed a system that can directly work with audio without needing to encode the speech samples into text at all.
The Translatotron, introduced this week, converts input speech into a spectrogram, and generates as output another spectrogram representing that speech translated into the target language.
“During training, the sequence-to-sequence model uses a multitask objective to predict source and target transcripts at the same time as generating target spectrograms. However, no transcripts or other intermediate text representations are used during inference,” Google explained.
So, Translatotron does learn to translate speech using text during the training phase, but doesn’t need it during inference. You can listen to some short examples of speech being translated between Spanish and English here.
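Google hasn't published Translatotron's internals here, but the spectrogram representation it consumes and emits is standard signal processing. Below is a minimal, NumPy-only sketch of turning a waveform into a magnitude spectrogram; the frame and hop sizes are arbitrary choices for illustration, not Translatotron's actual parameters:

```python
import numpy as np

def magnitude_spectrogram(signal, frame_len=256, hop=128):
    """Short-time Fourier transform magnitudes: one row per audio frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    window = np.hanning(frame_len)  # taper each frame to reduce spectral leakage
    return np.abs(np.fft.rfft(np.array(frames) * window, axis=1))

# One second of a 440 Hz tone sampled at 8 kHz stands in for recorded speech
t = np.linspace(0, 1, 8000, endpoint=False)
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (61, 129): 61 frames by 129 frequency bins
```

A model like Translatotron maps one such spectrogram directly to another in the target language, rather than detouring through text.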
US Senate bill to introduce ‘digital engineering’ for the military: Two US Senators leading the Senate Artificial Intelligence Caucus have introduced a bill to help the Department of Defense recruit more computer scientists into the military.
The Armed Forces Digital Advantage Act is backed by Senators Martin Heinrich (D-NM) and Rob Portman (R-OH). “Much like how the military recruits for and provides incentives to individuals with foreign language skills, senior military leaders and civilian leadership have repeatedly emphasized the need for a workforce with a digital engineering skillset,” said Heinrich, co-founder of the Senate Artificial Intelligence Caucus and a member of the Senate Armed Services Committee.
“Whether it is Artificial Intelligence, 5G telecommunications services, or cloud computing, transformational digital technologies will present new opportunities and challenges for the Department of Defense. That means we must prepare the Department with a proficient and capable workforce by recruiting in the near term and training for the long term. This bipartisan bill does exactly that.”
The bill hopes to ramp up recruitment by creating a “Chief Digital Engineering Recruitment and Management Officer of the Department of Defense” role. Whoever fills the position will serve for ten years, recruiting computer scientists at places like tech conferences as well as developing clear career tracks to train developers in machine learning, data science, and software product management by 2022.
You can read the bill in more detail here.
Microsoft wants to boost the AI talent pool: Microsoft has partnered up with General Assembly, a tech education company, to train 15,000 workers with AI-related skills by 2022.
The goal is to focus on three areas: defining standards for AI skills, training workforces across different industries to use AI tools, and building an AI Talent Network for recruiting. Microsoft will found General Assembly’s AI Standards Board to guide the company on which skills it should be teaching. The pair hope to help 2,000 developers transition into AI and machine learning jobs in the first year of the collaboration, and to train 13,000 more people over the following three years.
“As a technology company committed to driving innovation, we have a responsibility to help workers access the AI training they need to ensure they thrive in the workplace of today and tomorrow,” said Jean-Philippe Courtois, executive vice president and president of Global Sales, Marketing and Operations at Microsoft.
“We are thrilled to combine our industry and technical expertise with General Assembly to help close the skills gap and ensure businesses can maximize their potential in our AI-driven economy.”
AI whiskey: Reskilling techies isn't the only thing Microsoft wants to help with; it'd also like to lend a hand in creating, erm, AI whiskey.
Fourkind, a Finnish AI consultancy biz, and Mackmyra, a Swedish whisky maker, are developing new whiskey flavours using AI technology. Machine learning models running on Microsoft's Azure cloud, and built with Azure Cognitive Services, can churn out more than 70 million recipes based on ones concocted by Mackmyra. The system also takes into account sales data and people's preferences to craft a whiskey people will like.
"The work of a Master Blender is not at risk,” Angela D’Orazio, master blender at Mackmyra said.
“While the whisky recipe is created by AI, we still benefit from a person’s expertise and knowledge, especially the human sensory part, that can never be replaced by any program. We believe that the whisky is AI-generated, but human-curated. Ultimately, the decision is made by a person.”
Mackmyra hopes to release the first AI-generated whiskey later this year, sometime in autumn. From that dataset, the software picks out the recipes it predicts will be most popular and of the highest quality, based on the cask types available in the warehouse. ®