Intel prepping Atom bombs to drop on ARM microservers

Roadmap is top secret, dual-core 'Centerton' Atom looms

To hear Intel Fellow Matt Adiletta tell it, Chipzilla not only invented the term microserver but saw the trend towards wimpy computing coming well ahead of all this fawning over the ARM architecture and the half-dozen upstarts wanting to take big bites out of the Xeon server processor cash cow.

When El Reg says "fawn", that's an intentional pun that harkens back to FAWN: A Fast Array of Wimpy Nodes, a paper published in May 2008 by a bunch of server geeks at Carnegie Mellon University.

That paper compared the energy profile and performance of x86 and ARM architectures, specifically for server nodes equipped with flash storage. It demonstrated how pairing low-powered processors (modest in both performance and electrical consumption) with flash could yield a 50X improvement over then-current x86-and-hard-disk clusters fielding requests from a key-value store, and on the order of 4X compared to low-power x86 chips mated to flash.
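The FAWN work measured efficiency in queries per joule rather than raw throughput, which is what makes a slow-but-frugal node win. Here is a minimal sketch of that arithmetic – the node figures below are invented for illustration, not taken from the paper:

```python
# Illustrative "wimpy vs brawny" comparison using FAWN's
# queries-per-joule efficiency metric. The throughput and power
# figures are made up for this sketch.

def queries_per_joule(queries_per_sec: float, watts: float) -> float:
    # A joule is one watt-second, so dividing throughput by power
    # gives useful work done per unit of energy.
    return queries_per_sec / watts

# Hypothetical low-power CPU with flash storage
wimpy = queries_per_joule(queries_per_sec=1300, watts=4)
# Hypothetical big server CPU with spinning disks
brawny = queries_per_joule(queries_per_sec=20000, watts=250)

print(f"wimpy:  {wimpy:.0f} queries/J")   # far more work per joule
print(f"brawny: {brawny:.0f} queries/J")  # faster, but energy-hungry
```

The brawny node serves many more queries per second, but the wimpy node does several times more work per joule – which is the whole microserver pitch in one division.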

Subsequent papers published by the CMU researchers were done in conjunction with Intel Research, as you can see at the FAWN project.

Adiletta, as it turns out, caught the microserver bug back in 2006, when it wasn't even called that yet. In 2007 his team at Intel created what he calls a "CPU DIMM" – about the size of a folded wallet, as he explained in a conference call with the press today – that carried either an Atom or a dual-core Core desktop/laptop processor. It had a lot of pin and signal connectors and plugged into a memory slot, and Adiletta explained that in 2008 he showed it to none other than Sun cofounder and serial capitalist Andy Bechtolsheim to get his opinion.

Bechtolsheim asked a lot of questions about thermals, performance, reliability, and other feeds and speeds, then was quiet for a bit, holding his head in his hands and rocking a little bit. "Geez, it just hurts my head to think about all of the opportunities this could provide if we can realize it," Adiletta recalls Bechtolsheim finally saying as he came out of his trance.

Intel's point in hosting Thursday's meeting with journos and in telling this story about the meeting with Bechtolsheim is that the company wants to demonstrate that it has not been caught by surprise by either the advent of microservers or the movement of the ARM architecture from the smartphone and embedded spaces into the data center. In fact, the second generation of FAWN research at CMU compared x86 to ARM because Intel knew where the real competition would come from.

El Reg notes that at this time Bechtolsheim was the CTO for servers at Sun Microsystems, which was nearly three years away from being thrown into the arms of Oracle after a catch-and-release by Big Blue. What Bechtolsheim did not do was launch microservers at Sun, but rather he invested his money in Arista Networks, where he became chairman and chief development officer in late 2008.

Bechtolsheim likes to flip back and forth between systems and networking, and has his own kind of ticking and tocking going on.

Adiletta, as the godfather of microservers at Intel, was trotted out to establish this creation myth in our psyches and also to remind everyone that Intel is expected to launch its dual-core "Centerton" processor – the first server-class Atom – before the end of the year, which means soon, obviously.

"Having more chefs in the kitchen helps, up to a certain point, depending on what is being served," Adiletta explained by way of metaphor to illustrate why Intel was enthusiastic about its impending Atom S Series of server chips and the possibilities they present. "We're very bullish on this segment."

Maybe Intel's researchers were indeed enthusiastic about microservers, but their business managers were not so sure and absolutely did not want to upset the Xeon cash cow, particularly during the Great Recession.

As Jason Waxman, general manager of the Cloud Computing Group at Intel, put it when microserver upstart SeaMicro launched a Xeon-based SM10000 cluster in January of this year: "SeaMicro is pretty modest. They were really the first company to push us hard on the Atom, and they are the first to develop a system that supports both Atom and Xeon."

It's the interconnect, stupid genius

A little more than a month later, floundering AMD, looking for some kind of salvation after having big handfuls of server market share ripped from it by Chipzilla, snapped up SeaMicro for $334m and is now working on an Opteron ARM processor due in 2014 with SeaMicro's interconnect fabric gluing them together into what amounts to a data center in a box.

The assembled hacks pressed Adiletta for more details about the Atom S Series roadmap, and this hack in particular asked about the way that Intel would embed interconnects onto the chip as Calxeda has done with its EnergyCore ECX-1000 processors or Applied Micro Circuits has done with its X-Gene chips.

As far as Calxeda is concerned, combining ARM cores on the same die with a distributed Layer 2 switch – one that scales to 4,096 nodes today and to over 100,000 nodes in a few years – is the real engineering task with microservers, not welding an Ethernet NIC to an Atom processor. Having bought Ethernet chip maker Fulcrum Microsystems a few years back, Intel certainly could respond with something similar, but Adiletta was not there to provide an actual roadmap – rather, he came to establish Intel's cred in microservers and ramp up excitement for the Atom S Series.

"This has been a classic question from the communications space for a long time: do you go distributed or do you do centralized," explained Adiletta when asked about integrated networking on the future Atoms.

"Quite frankly," he said, "if you talk to comms folks, it is a religious argument. If you do centralized, then one of the nice things is how you can manage it and hop counts. Latency is interesting. There are pluses and minuses to both approaches. I wish I could go into some real technical details on this, but frankly I am quite bullish on what our approach is going to be."

That wasn't a real answer, and Adiletta would have been taken out behind the Intel PR woodshed if he had actually answered the question. But what seems clear is that Intel is going to put Ethernet ports onto the future "Avoton" Atom S Series chips due in 2013. It will rely on its foundry advantages and on tweaks to the Atom core (moving from in-order to out-of-order processing was one such change Adiletta hinted at) to drive thermals down and performance up, and then haul out the old x86 compatibility saw that it is also using to promote its Xeon Phi parallel x86 coprocessors in supercomputing.

What Intel should probably do is embed an Atom S chip on a Xeon Phi, use the PCI slot for power only, slap an InfiniBand port on it, stick a boatload of SATA or SAS ports on it for hard disks or SSDs, and throw away the Xeon node for all but the most serious single-threaded work where a brawny core is required. (We are only half joking here.)

"There is a lot of performance that we could gain by adding sophistication to our Atom cores," Adiletta said. "I like where we are. We have the right tools in the toolbox and the management support to go out and do this."

Later in the Q&A session, Adiletta said that the future Atoms would "show very, very good low-power idle power metrics," which of course is something that ARM chips can already boast. And looking at the low-power states in Intel's future "Haswell" microarchitecture, you can be forgiven for thinking that Intel might be tempted to do away with Atoms altogether and just stick with Xeons in the next few years.

For all we know, the lack of serious attention to Atom-based servers up until fairly recently could be another reason why CEO Paul Otellini is retiring next May. Perhaps Otellini wants some young blood to handle the shift that will turn the Atom into the new Xeon and the Xeon into the new Itanium, at least in hyperscale data centers and for more workloads as more and more software gets parallelized.

Remember, after all, that it took the Core architecture to be spun up into a Xeon in Opteron drag to vanquish AMD, and it might take an Atom phone and netbook processor wrapped in Xeon drag to repel the onslaught of the ARMed rebels. ®
