Smartmobe brain maker Qualcomm teases 64-bit ARM server chip secrets

Prototype has 24 cores, in the hands of techies to test drive


Qualcomm, the maker of processors for Nexus smartphones and other mobes and tablets, has revealed early specifications for its upcoming server chips.

The California company is best known for designing the brains in handheld devices, networking kit, and other embedded gear.

Now, in the past few minutes, it's unveiled a pre-production 24-core 64-bit ARMv8 processor for servers in data centers. The system-on-chip is manufactured using FinFET gates, although Qualcomm wouldn't reveal the process size, nor a timeline for when these parts will reach the market. We reckon these puppies will be available in volume in about a year.

The Qualcomm ARMv8 server-class chip ... with and without its lid

The final production processors will have more cores. Today's silicon is being shipped to Qualcomm's customers so they can evaluate it and port their software to it. In short, if you're pals with the biz, you can get your hands on one. Qualcomm expects to spend years working on this technology, and thinks it can turn its mobile expertise to enterprise IT.

Qualcomm showed off a prototype server powered by the new chip, running GNU/Linux and OpenStack with guests running on the KVM hypervisor. The box was shown running a web server, video streaming, and the usual stuff you do on a Linux machine.
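Qualcomm hasn't published details of the demo box's software stack beyond the above, but the basics of such a machine can be sanity-checked from any Linux shell. A minimal sketch, assuming a standard aarch64 Linux install (the commands run anywhere, though the architecture string will differ on non-ARM kit):

```shell
# Identify the CPU architecture; a 64-bit ARMv8 system reports "aarch64"
uname -m

# Count the cores the kernel exposes (Qualcomm's prototype would show 24)
nproc

# KVM guests require the kernel's /dev/kvm device node to be present
[ -e /dev/kvm ] && echo "KVM device present" || echo "KVM device not found"
```

On an x86 workstation the first command prints `x86_64` instead, which is a quick way to confirm you're actually on the ARM box before porting software to it.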

Qualcomm-powered server ... It's just another box running Linux, right?

Anand Chandrasekher, senior veep of Qualcomm's data center group, said this is not his company's mobile processor in a new package – the server-grade chip uses a completely different core design, and has features that make it suitable for data center workloads, apparently.

When we pushed for details on these features, Chandrasekher told The Register: "This is a server-class CPU. I hate to disappoint you but revealing the features now will help advance my competition. We wish to keep our secrets safe for now."

The chip design biz hopes its tech will find its way into the budgets of cloud infrastructure and platform-as-a-service companies, and organizations crunching big data and tinkering with machine learning. ARM-powered server chips are expected to offer high-density computing power – think of machines with hundreds of cores doing lots of relatively light workloads in parallel – but the concept is struggling to take off.
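That fan-out model – many cores each handling a small, independent job – is easy to illustrate from a shell. A rough sketch, not anything from Qualcomm's demo, using GNU xargs to spread short tasks across every available core:

```shell
# Spread 24 short, independent jobs across all available cores.
# -P "$(nproc)" runs one worker process per core in parallel;
# wc -l confirms every task produced its line of output.
seq 1 24 | xargs -P "$(nproc)" -I{} sh -c 'echo "task {} done"' | wc -l
# prints 24: all tasks completed
```

The more cores the box has, the more of those jobs run genuinely simultaneously – which is the pitch for hundred-core ARM servers handling swarms of light requests rather than a few heavyweight threads.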

Will Qualcomm's server-class silicon stick to the low-power, high-density route, or go for broke and gobble tons of watts to compete with beefier processors?

"Our customers care about performance, acceptable power-compute density, and cost. And we intend to address all three. We'll address compute density and compute efficiency in several ways without getting specific at this point," Chandrasekher told us.

"I've been watching the data center market for a long time, and there is more change taking place now than I've ever seen in the past. The root cause of all of that is the cloud," Chandrasekher added to hacks today in a press conference in San Francisco.

"It's a worldwide phenomenon, and it's across all workloads as well. It's very clear that enterprise is moving to the cloud, and it's doing so very rapidly. The reason why is simple: economics."
