Facebook AI guru alt-tabs out, Nvidia EULA audits, Baidu changes, chip tricks, and more

Machine-learning news and code to pore over


Roundup Welcome to El Reg's January roundup of AI-related news beyond all the wonderful and terrible things we've covered separately. Drop us a line if you have any machine-learning news or gossip to share.

Facebook AI chief LeCun steps aside – Yann LeCun, considered to be a pioneer of neural networks for computer vision, has stepped back as Facebook's AI supremo. Jérôme Pesenti, ex-CEO of medical startup BenevolentAI and former IBM Watson vice president, will take the reins, according to Quartz journo Dave Gershgorn.

But LeCun isn’t completely legging it. He will remain at Facebook and continue to lead the social network's machine-learning boffinry nerve-center FAIR in New York. Last year, The Register heard rumors that LeCun was tired of menial managerial tasks. Now, that boring management stuff has been offloaded to other people.

“There was a need for someone to basically oversee all the AI at Facebook, across research, development, and have a connection with product,” LeCun confirmed to Gershgorn on Tuesday this week.

Pesenti, as veep of AI, and Joaquin Candela – head of Facebook’s Applied Machine Learning team in San Francisco – will both report to CTO Mike Schroepfer.

Schroepfer said in a Facebook post that LeCun was now the social network's Chief AI Scientist, effectively allowing Pesenti to oversee the website's machine-learning-powered products so LeCun can focus on research.

Squeeze more out of CPUs – Amazon has published a tutorial on how to use neural-network acceleration engine NNPACK with Apache's deep-learning library MXNet. NNPACK is optimized for performing inference on CPUs, and is useful when your hardware lacks a suitable GPU for AI tasks.

"NNPACK is available for Linux and macOS X platforms. It’s optimized for the Intel x86-64 processor with the AVX2 instruction set, as well as the ARMv7 processor with the NEON instruction set and the ARMv8," explained AWS technical evangelist Julien Simon.

New Baidu AI lab hires – Chinese web juggernaut Baidu has announced the addition of new labs and research scientists in an attempt to reshuffle research efforts since Andrew Ng ejected from the biz.

Kenneth Church – who served as president of the Association for Computational Linguistics, an international society for people working on natural language processing, and has previously worked at IBM Watson, Microsoft, and AT&T Labs – has joined Baidu.

The Chinese internet monster has also lured Jun Luke Huan and Hui Xiong away from their academic posts at the University of Kansas and Rutgers University, respectively, in the US. It has also created two new internal research factions: the Business Intelligence Lab and the Robotics and Autonomous Driving Lab. That brings the total to five labs, including its Institute of Deep Learning, Big Data Lab, and Silicon Valley Artificial Intelligence Lab.

It’s not entirely clear what happened at Baidu to prompt this internal shakeup. But The Register has heard whispers of internal politics and a culture clash between the teams in China and America that led to the departure of several research staff including its previous chief scientist, Andrew Ng, and AI lab director, Adam Coates.

Squeeze more for less on your GPU – OpenAI published TensorFlow code for gradient checkpointing, a technique that reduces the memory needed on graphics processor chips to train large neural networks.

It’s a tricky concept to grasp, but the gist is that the software reduces the memory needed to carry out gradient descent, an algorithm often used to train models.

Feed-forward neural networks are a little clumsy to train because backpropagation processes the layers in the reverse order of the forward pass. That means the results obtained from running through all the earlier layers have to be kept in memory. So the deeper your network, the more memory it takes to train it.

Here’s where gradient checkpointing comes in. Selected nodes in the network are marked as checkpoints. “These checkpoint nodes are kept in memory after the forward pass, while the remaining nodes are recomputed at most once. After being recomputed, the non-checkpoint nodes are kept in memory until they are no longer required,” according to OpenAI.

OpenAI researchers Tim Salimans and Yaroslav Bulatov said they could fit models more than ten times larger onto a GPU, at the cost of a 20 per cent increase in computation time. You can find out more here.
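To make the trade-off concrete, here's a toy, self-contained Python sketch of the idea – not OpenAI's TensorFlow implementation – for a chain of n scalar tanh "layers", storing every k-th activation and recomputing the rest on the backward pass:

    import math

    f  = lambda x: math.tanh(x)             # one toy "layer"
    df = lambda x: 1.0 - math.tanh(x) ** 2  # its derivative

    def backprop_checkpointed(x0, n, k):
        # Backprop through n chained layers, storing only every k-th activation.
        assert n % k == 0
        # Forward pass: keep checkpoints only, so memory is O(n/k), not O(n).
        ckpt = {0: x0}
        x = x0
        for i in range(1, n + 1):
            x = f(x)
            if i % k == 0:
                ckpt[i] = x
        # Backward pass: recompute each segment's activations, at most once,
        # from the checkpoint at its start, then apply the chain rule.
        grad = 1.0
        for end in range(n, 0, -k):
            seg = [ckpt[end - k]]            # inputs to layers end-k+1 .. end
            for _ in range(k - 1):
                seg.append(f(seg[-1]))
            for inp in reversed(seg):
                grad *= df(inp)
        return x, grad

    out, grad = backprop_checkpointed(0.5, n=16, k=4)

Choosing k near the square root of n keeps roughly n/k checkpoints plus k recomputed activations in memory at once – the O(√n) footprint described in OpenAI's write-up – for the price of approximately one extra forward pass.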

A new AI computer vision challenge – Google researchers have launched a contest to improve image compression techniques using neural networks as well as more traditional methods.

The announcement is linked to a workshop at the upcoming Computer Vision and Pattern Recognition conference (CVPR), happening in Utah, USA, in June. The goal is to come up with novel methods to compress images.

A training dataset containing thousands of pictures has been released, and consists of two parts: a professional dataset (2GB) and a mobile dataset (4GB).

“The datasets are collected to be representative for images commonly used in the wild, containing thousands of images. While the challenge will allow participants to train neural networks or other methods on any amount of data (but we expect participants to have access to additional data, such as ImageNet and the Open Images Dataset), it should be possible to train on the datasets provided,” wrote Michele Covell, a scientist at Google Research, in a blog post.

The validation part of the dataset will be released this month, and the test dataset will be made public on April 15, before the competition closes on April 22. The results will be announced on May 29, and participants can submit a paper to the Workshop and Challenge on Learned Image Compression (CLIC) at CVPR by June 4. Previous research has shown image compression is possible with recurrent neural networks and generative adversarial networks. The CLIC workshop is being sponsored by Google, Twitter, and ETH Zurich, a Swiss university.
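For a flavor of how entries in this sort of contest are typically judged, here's a short, hypothetical Python sketch of two standard measures – bits per pixel of the encoded file, and peak signal-to-noise ratio of the reconstruction. The function names are ours, and the challenge defines its own official evaluation:

    import numpy as np

    def bits_per_pixel(compressed_size_bytes, height, width):
        # Compression rate: bits in the encoded file per image pixel.
        return 8.0 * compressed_size_bytes / (height * width)

    def psnr(original, reconstructed, peak=255.0):
        # Peak signal-to-noise ratio in dB; higher means the decoded
        # image is a closer match to the original.
        mse = np.mean((original.astype(np.float64) -
                       reconstructed.astype(np.float64)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

Broadly speaking, a submission that shaves bits per pixel while holding PSNR steady – or lifts PSNR at a fixed bitrate – is what the organizers are after.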

Nvidia can now audit CUDA Toolkit users – Nvidia has updated the software licensing agreement for its CUDA Toolkit to allow it to audit organizations, startling individual developers and academics.

It allows Nvidia to audit CUDA Toolkit users to check whether they are using the toolchain in an appropriate manner – by showing up at your door if necessary. Enterprise-grade software licenses tend to include these auditing requirements, but attaching them to software development tools that can be used by anyone – from individuals to corporations – has been described as extreme by Reg readers who've been in touch about this developing situation.

“During the term of the AGREEMENT and for three (3) years thereafter, you will maintain all usual and proper books and records of account relating to the CUDA Licensed Software provided under the AGREEMENT. During such period and upon written notice to you, NVIDIA or its authorized third party auditors subject to confidentiality obligations will have the right to inspect and audit your Enterprise books and records for the purpose of confirming compliance with the terms of the AGREEMENT,” the end-user license agreement (EULA) reads.

We asked Nvidia to clarify what exactly counts as a breach of agreement. A spokesperson told us: “Anyone can develop applications on CUDA or use CUDA-based applications for free. What we want to protect against is a person or entity taking CUDA, re-naming ('rebranding') it or charging for it. That said, we have no current plans to audit anyone under our CUDA license, we haven’t done so in the past, and we hope that we’ll not have to do so in the future.”

The EULA goes on to say that if Nvidia finds users are breaching the agreement's terms, they will be required to pay Nvidia the cost of conducting “the inspection and audit.”

The audit clause was added in September, and spotted at the turn of 2018. It comes at a time when Nvidia also announced it had updated its end-user licensing agreement to ban vendors from selling GeForce and Titan GPUs for datacenters, except for processing blockchain-related activities.

Look out for more on this issue this week at El Reg.

Nvidia’s Xavier chip touted again – Let’s just keep talking about Nvidia. Earlier this month it had another go at unveiling Xavier, a processor tailored for self-driving cars.

Xavier was previously teased by Nv CEO Jensen Huang this time last year. Now it seems the thing is inching closer to production. Huang said the SoC will be used as part of the company’s Drive PX Pegasus system, a computer for powering fully autonomous level-five Total Recall-style Johnny Cabs.

Level five refers to a vehicle that needs only a destination and handles the entire journey itself; at the other end of the scale, levels one and two are varying degrees of intelligent cruise control.

“The computational requirements of robotaxis are enormous – perceiving the world through high-resolution, 360-degree surround cameras and lidars, localizing the vehicle within centimeter accuracy, tracking vehicles and people around the car, and planning a safe and comfortable path to the destination. All this processing must be done with multiple levels of redundancy to ensure the highest level of safety. The computing demands of driverless vehicles are easily 50 to 100 times more intensive than the most advanced cars today,” the biz wrote in a blog post.

Level five? We'll believe it when we see it.

TensorFlow 1.5.0 – Version 1.5.0 of the popular open-source AI framework TensorFlow has been released. According to its GitHub page, a few bugs have been patched, and the major changes include:

  • Prebuilt binaries are now compiled against CUDA 9 and cuDNN 7.
  • Linux binaries are built using Ubuntu 16 containers, potentially introducing glibc incompatibility issues with Ubuntu 14.
  • Starting with the 1.6 release, prebuilt binaries will use AVX instructions. This may break TensorFlow on older CPUs. (A quick post-upgrade sanity check is sketched below.)
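After upgrading, a quick, hypothetical sanity check in Python confirms the version and whether the prebuilt binary was compiled against CUDA:

    import tensorflow as tf

    # Confirm the upgrade took, and whether this binary was built with
    # GPU support (the 1.5.0 prebuilt GPU packages target CUDA 9 / cuDNN 7).
    print(tf.__version__)                # expect '1.5.0'
    print(tf.test.is_built_with_cuda())  # True for the GPU builds

If the second line prints False on a machine with an Nvidia card, you've probably installed the CPU-only package. ®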
