Intel R&D spending surges after years of neglect as Gelsinger pledges to make Chipzilla great again

A timeline of the x86 giant's stumbles – and commitments for the future

Analysis Intel is cranking up its research spending to fix past mistakes, catch up with and overtake the competition, and build a foundation for growth in the future.

The US giant spent $15.19bn on research and development in fiscal 2021 – more than 20 per cent of the company's $74.7bn revenue. That was roughly a 12 per cent increase over its R&D spending in 2020, and the largest year-over-year jump since 2012, when R&D spending rose 20 per cent.
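As a back-of-envelope check, the figures above hang together: the spending-to-revenue ratio and the implied 2020 spend can be recovered from the reported 2021 numbers (illustrative arithmetic only, not Intel's accounting).

```python
# Back-of-envelope check of the reported figures (illustrative only).
rnd_2021 = 15.19e9      # R&D spend, fiscal 2021 (USD)
revenue_2021 = 74.7e9   # revenue, fiscal 2021 (USD)
growth = 0.12           # reported ~12% year-over-year increase

rnd_share = rnd_2021 / revenue_2021   # share of revenue spent on R&D
rnd_2020 = rnd_2021 / (1 + growth)    # implied fiscal-2020 R&D spend

print(f"R&D as share of 2021 revenue: {rnd_share:.1%}")   # ~20.3%
print(f"Implied 2020 R&D spend: ${rnd_2020 / 1e9:.2f}bn") # ~$13.56bn
```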

Compare that to recent years, when research and development spending was stagnant or barely increased and Chipzilla spent billions on stock price support instead. Enter CEO Pat Gelsinger, who took the reins last year and hit the reset button on Intel's priorities to focus on engineering.

"Intel's investing heavily in research and development to improve its semiconductor process roadmap and also in new chip designs," Kevin Krewell, an analyst at Tirias Research, told The Register.

"Gelsinger is willing to spend more to regain the lead in both process and chip designs."

The chief exec wants to rekindle the old magic of Intel as a technology leader, a status it lost in the past decade. He identified manufacturing as a priority, and cranked up investments in fab technology, materials and packaging to support future growth.

Intel recently announced new fabrication plants near Columbus, Ohio, that will start up in 2025 and make chips down to 2nm and beyond. It is also expanding manufacturing in Arizona, Oregon, and New Mexico, and overseas in Ireland.

Intel was for years ahead of rivals at bringing significant manufacturing innovations to market in a cost-effective way, Jesús del Alamo, director of the Microsystems Technology Laboratories at MIT, told The Register.

"They lost that edge maybe six, eight years ago or so. That technical leadership has moved on to Asia – to Samsung and TSMC," del Alamo said.

Loss of focus

Intel's research investment growth virtually dried up starting in 2013, when former COO Brian Krzanich took over as CEO, with spending creeping up by just single-digit percentages a year. He was fired in 2018 after a past consensual relationship with an employee came to light, in breach of company policy. Under his successor, former chief financial officer Robert Swan, research investment dipped in 2019 for the first time in close to a decade.

"There was not the kind of decisive investment in the future of the company," said David Kanter, principal analyst and chip researcher at Real World Technologies.

Swan was an accountant, and not a technology visionary needed to make the huge research and development investments to drive long-term product roadmaps and revenue, Kanter said.

Intel at the time buckled under an incoherent product roadmap, poor manufacturing execution, and a desire for short-term gains in markets such as mobile devices. That created an opening for Arm, AMD, Qualcomm, and Apple to become more competitive.

"So when Pat came in, he basically said, 'my goal is to restore Intel to its greatness. We will be leaders in semiconductor manufacturing, and packaging, and in the markets in which we play.' That requires cutting a check for research and development," Kanter said.

The road ahead

The next inflection point for Intel's research and development strategy is around 2024 to 2025, when it will start printing chips with next-gen transistor and packaging technologies, which are today's top research priorities.

The new technologies include gate-all-around (GAA) transistors, which promise better transistor density and performance. Intel calls its GAA implementation RibbonFET: the transistor's channels are stacked as thin ribbons, and the gate material wraps entirely around each of them, increasing the contact area between gate and channel. Intel is also moving power delivery to the back side of the wafer through what it calls PowerVia technology, which could deliver better performance and power efficiency than today's front-side layouts.

Research papers presented at the International Electron Devices Meeting (IEDM) last month outlined "a long-term path toward more than 10x density improvement in packaging and a 30 per cent to 50 per cent area improvement in transistor scaling," Gelsinger said on an earnings call last week.
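To put the transistor-scaling figure in perspective, an area improvement maps to a density gain of 1/(1 - improvement) – our arithmetic for illustration, not Intel's:

```python
# Rough arithmetic (ours, not Intel's): if each transistor occupies
# X per cent less area, density rises by a factor of 1 / (1 - X).
def density_gain(area_improvement: float) -> float:
    """Density multiplier from a fractional area reduction."""
    return 1.0 / (1.0 - area_improvement)

print(f"30% area improvement -> {density_gain(0.30):.2f}x density")  # 1.43x
print(f"50% area improvement -> {density_gain(0.50):.2f}x density")  # 2.00x
```

So the quoted 30 to 50 per cent area improvement corresponds to roughly a 1.4x to 2x jump in transistor density per generation.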

Intel is also investing in advanced packaging to tie together multiple process technologies and chip designs. A processor family code-named Lakefield, which featured Foveros packaging, was quickly canned.

Kanter said Lakefield was an indicator of Gelsinger's strategy: a willingness to quickly test moonshot projects in real-world environments so they can be commercialized more quickly.

Intel is already sampling Ponte Vecchio, a GPU accelerator viewed as another moonshot project, packing together the company's most cutting-edge technologies. Ponte Vecchio has 47 tiles and 100 billion transistors. Rival products such as Nvidia's A100 and AMD's MI250 accelerators also use dense packaging.

"Intel, in terms of publicly disclosed roadmap, is doing more aggressive stuff than TSMC. The thing about semiconductors – it's not just the basic concept that counts, you're going to be running countless trials. All of that experimentation is not cheap, it needs to get done," Kanter said.

Gelsinger's boost in research spending is also a response to shifts in the industry: chip shortages and the rising cost of manufacturing advances. Intel, which used to make chips mostly for itself, will now fabricate chips for Qualcomm, while TSMC fabs some components for Intel. Technologies such as extreme ultraviolet (EUV) and high-NA EUV lithography also demand more research spending.

The Moore the merrier

Time will tell if Gelsinger's research and development gambit pays off. But industry observers said Gelsinger's reliance on Moore's Law to advance technology and drive business strategy is fundamentally sound.

During the earnings call, Gelsinger reflected on the 50th anniversary of the Intel 4004, calling it "the chip that changed the world."

"We are committed to accelerate this impact for the next 50 years as the insatiable need for compute that started with the 4004 continues to drive the value of Moore's Law," Gelsinger said.

MIT's del Alamo said talk of Moore's Law ending is overblown, and stems from people narrowing its definition. Moore's Law means different things to different people, but it remains a measuring stick that drives chip innovation.

"There are tremendous opportunities ahead. But it's becoming really costly to seize those opportunities, because it's no longer a geometrical scaling. Every generation of technology involves very significant new innovations to make it happen, and you have to be willing to invest tremendous resources on research and development," del Alamo said.

Intel is the last major corporate steward of academic chip research in the US, with companies including IBM committing fewer resources. Intel is well organized and methodical about supporting and funding research at US universities, del Alamo said, adding that he has worked with the company on projects.

"It's going to require fundamental research involving new material systems, new physical principles, the approaches of storing information and managing information. It's as exciting as ever," del Alamo said.

Intel isn't the only one promising record-high financial commitments to factories and chip research. TSMC spent [PDF] roughly $4.7bn on research and development in fiscal 2021, and its capital spending is increasing by a third to $44bn this year to boost production.

Gelsinger will ultimately have to answer to stockholders for the billions spent on research and development, but no semiconductor company can have a bright future without it, said Pat Moorhead, principal analyst at Moor Insights and Strategy.

"All things equal, in the beginning, increasing research and development at a higher rate than revenue expansion means a lower net income. Research and development pays dividends – figuratively – in the future, and if Intel can convince investors it's money well spent, it doesn't become a drag on the stock," Moorhead said. ®
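Moorhead's point can be sketched with made-up numbers: if R&D spending grows faster than revenue, the R&D line consumes a steadily larger slice of each year's income until the investment starts paying off.

```python
# Illustration of the R&D-vs-revenue argument with hypothetical
# figures (ours, not Intel's or Moorhead's).
revenue, rnd = 100.0, 20.0            # hypothetical year-zero figures
rev_growth, rnd_growth = 0.05, 0.12   # revenue grows 5%/yr, R&D 12%/yr

for year in range(1, 4):
    revenue *= 1 + rev_growth
    rnd *= 1 + rnd_growth
    # R&D's share of revenue rises each year, squeezing net income
    print(f"year {year}: R&D is {rnd / revenue:.1%} of revenue")
```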
