Huawei claims it’s halved the time needed to build a 1,000-rack datacenter

Promises modular kit gets you up and running in six to nine months, with AI-powered ops to make it efficient


Huawei has entered the datacenter construction business with an offering that it claims can be built in half the time required by competing methods, then run more efficiently.

The prosaically named “Next-Generation Datacenter Facility”, as depicted in a video posted to Chinese social media, employs suspiciously-shipping-container-sized modules stacked into a larger building.

In the video, a pre-school girl and her father use Lego to assemble a cube-shaped building. The scene cuts to film of a very similar building under construction in the real world, before the director makes sure the metaphor can’t be missed by morphing the Lego and actual buildings, as depicted below.

Huawei Next-Generation Data Center Facility


But Huawei’s banking on speed and low cost, not cinematic subtlety. The Chinese giant asserts that its approach to big barn building means a thousand-rack facility can be up and running within six to nine months, compared to 18 months for bespoke jobs.

The company has also created a power supply it asserts is simpler than rivals' offerings, and which can be delivered in two weeks instead of two months.

“Simplified cooling maximizes heat exchange efficiency by changing multiple heat exchanges to one heat exchange, and shortening the cooling link,” Huawei’s enthused.

The Next-Generation Datacenter Facility also includes Huawei’s very own automation and AI-infused optimization tools, said to be capable of fine-tuning cooling to make it more efficient. Power is a major cost for datacenter operations, so if Huawei has done this well it will be appreciated.

The Chinese mega-corp has not said if the datacenters require the presence of Huawei products to realize the promised benefits.

China has introduced a plan to relocate five million datacenter racks from the crowded east of the country to ten designated datacenter precincts in the nation's sparsely populated west. That's one obvious market where rapid datacenter builds will be in demand. If Huawei wins even 20 percent of that business, it'll be building 1,000 of these datacenters!

Quick builds will also be appreciated elsewhere around the world, though whether Huawei's design is welcomed by governments remains to be seen. The company's well-documented troubles have centered on its role as a network equipment provider, and the risk that information about network layouts, or actual data, could reach Beijing. The contents of a datacenter also represent very useful information to a nation credibly accused of using state-backed groups to conduct offensive cyber-ops.

Huawei denies it would ever act against clients' interests, or in ways that violate local laws, or would willingly be used as an agent of China's intelligence services.

Modular datacenters based on shipping containers are not new, and one facility that used such a design – OVH’s Strasbourg facility – infamously went up in flames.

Huawei’s stated that this design is resilient and environmentally responsible, in addition to offering rapid builds. ®

