Coming Xeon 'scalable' family will run SAP HANA '1.6 times faster'

Optane persistent memory support from Cascade Lake Xeon SPs in 2018


Intel says its coming Xeon SPs (scalable processors) will run in-memory SAP HANA workloads 1.59 times faster than a Xeon E7 v4 system*, and has demonstrated Optane DIMMs.

The Xeon SP family of processors will be available in the middle of this year. El Reg thinks these are Skylake mills. There are four Xeon SP brand variants, listed here in descending order of performance: Platinum, Gold, Silver and Bronze.

[Image: Platinum version of Xeon SP]

They offer a new core, cache, on-die interconnects and memory controller, plus optimised features for compute (duh!), storage and networking. Intel reckons they are suited to, amongst other things, the demands of big-data and in-memory workloads.

At the SAP Sapphire NOW event, Intel said SAP had certified HANA to support up to 6x greater system memory for OLAP processing on the new Intel platform in 4- or 8-socket configurations, compared with the representative installed base of systems available four years ago. That’s up to 3TB of memory for 4-socket systems and 6TB for 8-socket ones.

The "representative installed base" means Xeon E7 CPUs with support for 0.5TB to 1TB (8-socket) of DRAM. It’s a whacking great jump in maximum memory.

Optane

Intel also demonstrated 3D XPoint Optane persistent memory in a DIMM form factor at the SAP event. Use of this memory in SAP HANA systems should lift performance further by moving data out of the longer-latency NAND drive storage tier and into the low-latency Optane access arena.

Intel said its Optane memory will be available in 2018 as part of a Cascade Lake refresh of the Xeon SP line. We had understood that Kaby Lake server CPUs with 200 Series chipsets would support Optane memory. Now we know Cascade Lake is a Xeon supporting Optane NVDIMMs, and we think the refresh could refer to a Skylake SP to Kaby Lake upgrade.

We’re told software developers can accelerate their readiness for Intel persistent memory today with the libraries and tools at www.pmem.io. ®
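To give a flavour of what that readiness work looks like, here is a minimal sketch using the libpmem C library from pmem.io. The file path, region size and the assumption of a DAX-mounted persistent-memory filesystem are illustrative choices for the example, not anything Intel or SAP have specified.

/*
 * Minimal sketch: write data through memory-mapped persistent memory
 * using libpmem from pmem.io. Assumes libpmem is installed and that
 * /mnt/pmem is a DAX-mounted persistent-memory filesystem (both are
 * assumptions for this example). Build with: cc pmem_hello.c -lpmem
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE (4 * 1024 * 1024)   /* 4 MiB example region */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or open) a file and map it into the address space. */
    char *addr = pmem_map_file("/mnt/pmem/hello", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Store data with ordinary loads/stores, then flush it to media. */
    const char msg[] = "hello, persistent memory";
    memcpy(addr, msg, sizeof(msg));

    if (is_pmem)
        pmem_persist(addr, sizeof(msg));  /* CPU cache flush + fence */
    else
        pmem_msync(addr, sizeof(msg));    /* fallback for non-pmem files */

    printf("wrote %zu bytes into a %zu-byte mapping (is_pmem=%d)\n",
           sizeof(msg), mapped_len, is_pmem);

    pmem_unmap(addr, mapped_len);
    return 0;
}

The same program runs against an ordinary file on a system with no persistent memory, falling back to msync-style flushing, which is how developers can start testing today ahead of Optane DIMM hardware.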

* Based on SAP HANA internal S-OLTP workload (internal testing) with the baseline config being: one-node, 4S Intel Xeon processor E7-8890 v4 with 1,024 GB total memory on SUSE Linux Enterprise Server (SLES) 12 SP1 vs estimates based on SAP internal testing on one-node, 4S Intel Xeon Processor Scalable family system.
