Behold the Megatron: Microsoft and Nvidia build massive language processor

MT-NLG is a beast that fed on over 4,000 GPUs


Nvidia and Microsoft have announced their largest monolithic transformer-based language model to date: the Megatron-Turing Natural Language Generation model (MT-NLG), a jointly developed AI system with a whopping 530 billion parameters.

MT-NLG is more powerful than the previous transformer-based systems trained by the two companies, namely Microsoft’s Turing-NLG model and Nvidia’s Megatron-LM, and with its 530 billion parameters spread across 105 layers it is much larger and more complex. For comparison, OpenAI’s GPT-3 model has 175 billion parameters, roughly a third as many, while Google’s Switch Transformer demo, a sparse mixture-of-experts design, has 1.6 trillion.

Bigger is generally better when it comes to neural networks, though larger models also require more training data to ingest. MT-NLG is better than its predecessors at a wide variety of natural language tasks, such as auto-completing sentences, question answering, and reading comprehension and reasoning. It can also perform these tasks with little to no fine-tuning, something referred to as few-shot or zero-shot learning.
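To illustrate the distinction, a minimal sketch: in zero-shot use the model is given only an instruction, while a few-shot prompt bundles in a handful of worked examples; in neither case are the model's weights updated. The prompts below are illustrative, not taken from MT-NLG's evaluation suite:

```python
# Zero-shot: the model gets an instruction and no worked examples.
zero_shot_prompt = "Translate English to French: 'The server is down.'"

# Few-shot: a handful of examples in the prompt steer the model's output;
# no gradient updates happen, so this is prompting, not fine-tuning.
few_shot_prompt = (
    "English: cheese -> French: fromage\n"
    "English: bread -> French: pain\n"
    "English: the server is down -> French:"
)

# Either string would be sent to the model's text-completion endpoint.
print(zero_shot_prompt)
print(few_shot_prompt)
```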

As these language models grow larger, AI researchers and engineers need to come up with all sorts of techniques and tricks to train them. Training demands careful coordination: the model and its training data have to be stored and processed across numerous chips at the same time.

MT-NLG was trained using Nvidia’s Selene machine-learning supercomputer, a system made up of 560 DGX A100 servers, each containing eight A100 80GB GPUs. Selene is also powered by AMD’s EPYC 7742 processors and is estimated to cost over $85m, according to The Next Platform.

All 4,480 GPUs are connected to one another via NVLink and NVSwitch, and each sustained over 113 teraFLOPS during training. Training these models is incredibly expensive, and even on top-of-the-range hardware, software tricks are needed to cut training times.
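For a sense of scale, those per-GPU figures multiply out to roughly half an exaFLOPS of sustained throughput across the whole cluster. A quick back-of-the-envelope check (our arithmetic, not a figure from the companies):

```python
gpus = 560 * 8               # 560 DGX A100 servers x 8 GPUs each
per_gpu_tflops = 113         # sustained throughput per GPU, per the blog post

total_pflops = gpus * per_gpu_tflops / 1_000   # tera -> peta
print(f"{gpus} GPUs x {per_gpu_tflops} TFLOPS ~ {total_pflops:.0f} petaFLOPS")
# 4480 GPUs x 113 TFLOPS ~ 506 petaFLOPS, i.e. about half an exaFLOPS
```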

Nvidia and Microsoft used DeepSpeed, a deep-learning optimization library for PyTorch, which allowed engineers to push more data through numerous pipeline stages in parallel to scale up Megatron-LM. In all, 1.5TB of data was processed to train the model, in a process that took a little over a month.
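For a flavor of how pipeline parallelism looks in code, here is a minimal sketch using DeepSpeed's PipelineModule; the layer stack, stage count, and config file named below are placeholders, not MT-NLG's actual setup:

```python
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

# Stand-in layer stack; MT-NLG's 105 transformer layers would go here.
layers = [nn.Linear(1024, 1024) for _ in range(8)]

# Split the layers into sequential stages, one per GPU, so different GPUs
# can process different micro-batches at the same time.
model = PipelineModule(layers=layers, num_stages=4, loss_fn=nn.MSELoss())

# The JSON config (placeholder name) sets batch sizes, precision, optimizer.
engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",
)

# Each call streams micro-batches through all pipeline stages in parallel:
# loss = engine.train_batch(data_iter=training_data_iterator)
```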

“By combining tensor-slicing and pipeline parallelism, we can operate them within the regime where they are most effective,” Paresh Kharya, senior director of product management and marketing for accelerated computing at Nvidia, and Ali Alvi, group program manager for the Microsoft Turing team, explained in a blog post.

“More specifically, the system uses tensor-slicing from Megatron-LM to scale the model within a node and uses pipeline parallelism from DeepSpeed to scale the model across nodes.

“For example, for the 530 billion model, each model replica spans 280 Nvidia A100 GPUs, with 8-way tensor-slicing within a node and 35-way pipeline parallelism across nodes. We then use data parallelism from DeepSpeed to scale out further to thousands of GPUs.”
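Those figures multiply out neatly: 8-way tensor-slicing times 35-way pipeline parallelism gives the 280 GPUs per replica quoted above, and dividing the full 4,480-GPU cluster by that implies 16 data-parallel replicas. Checking the arithmetic:

```python
tensor_parallel = 8       # tensor-slicing ways within a DGX A100 node
pipeline_parallel = 35    # pipeline stages spanning nodes
total_gpus = 4480         # 560 servers x 8 GPUs

gpus_per_replica = tensor_parallel * pipeline_parallel  # 8 * 35 = 280
data_parallel = total_gpus // gpus_per_replica          # 4480 / 280 = 16

print(f"{gpus_per_replica} GPUs per model replica, "
      f"{data_parallel} replicas training in data parallel")
```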

MT-NLG was trained on a giant dataset known as The Pile. Compiled by EleutherAI, a group of AI researchers and engineers leading a grassroots effort to open-source large language models, it is made up of multiple smaller datasets totaling 825GB of text scraped from the internet, drawn from sources like Wikipedia, academic journal repositories, and news clippings.
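The Pile is distributed as zstandard-compressed JSON-lines shards, with each record carrying a document's text plus metadata naming the sub-dataset it came from. A minimal sketch of peeking into one shard, assuming a locally downloaded file (the filename is a placeholder):

```python
import io
import json
import zstandard  # pip install zstandard

# "00.jsonl.zst" stands in for any locally downloaded Pile shard.
with open("00.jsonl.zst", "rb") as fh:
    reader = io.TextIOWrapper(
        zstandard.ZstdDecompressor().stream_reader(fh), encoding="utf-8"
    )
    for i, line in enumerate(reader):
        record = json.loads(line)
        # Each record notes which of the smaller datasets it came from.
        print(record["meta"]["pile_set_name"], record["text"][:80])
        if i == 4:  # peek at the first five documents only
            break
```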

Dealing with such large volumes of text means the dataset can't be fully scrubbed of toxic language. Unfortunately, this means MT-NLG can generate offensive outputs that might be racist or sexist.

“Our observations with MT-NLG are that the model picks up stereotypes and biases from the data on which it is trained,” Kharya and Alvi said.

“Microsoft and NVIDIA are committed to working on addressing this problem. We encourage continued research to help in quantifying the bias of the model...In addition, any use of MT-NLG in production scenarios must ensure that proper measures are put in place to mitigate and minimize potential harm to users.”

For those keen to try out MT-NLG, there's bad news: we're told it's not going to be commercially available any time soon. ®

