Feature In 1965, Gordon Moore published a short informal paper, "Cramming more components onto integrated circuits".
In it, he noted that in three years the optimal cost per component on a chip had dropped by a factor of 10, while the optimal number of components had increased by the same factor, from 10 to 100. Based on not much more than these few data points and his knowledge of silicon chip development – he was head of R&D at Fairchild Semiconductor, the company that was to seed Silicon Valley – he predicted that for the next decade, component counts per unit area could double every year. By 1975, as far ahead as he was prepared to look, up to 65,000 components such as transistors could fit on a single chip costing no more than the 100-component chips of the day.
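Moore's extrapolation is simple compound doubling. A minimal sketch (the function name and the round starting figure of 100 are illustrative, not from the paper) shows how it lands in the same tens-of-thousands range as his 65,000 figure for 1975:

```python
def projected_components(start, years, doubling_period=1.0):
    """Compound doubling: the count doubles every doubling_period years."""
    return start * 2 ** (years / doubling_period)

# Doubling every year from ~100 components in 1965, per the original paper:
print(int(projected_components(100, 10)))     # 102400, same order as 65,000
# Moore's later revision slows the cadence to a doubling every two years:
print(int(projected_components(100, 10, 2)))  # 3200
```

The gap between 102,400 and 65,000 is just the choice of starting point; Moore began from a slightly smaller 1965 count.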
He was right. Furthermore, as transistors shrank they used less power and switched faster, leading to stupendous sustained cost/performance improvements. In 1975, seven years after leaving Fairchild to co-found Intel, Moore revised his "law" – really just an observation – to a doubling every two years. By then the other predictions in his original paper, of revolutions in computing, communication, and general electronics, had taken hold. The chip industry had the perfect metric to aim for: a rolling, virtuous milestone like no other.
Since then, according to Professor Erica Fuchs of Carnegie Mellon University, "half of economic growth in the US and worldwide has also been attributed to this trend and the innovations it enabled throughout the economy." Virtually all of industry, science, medicine, and every aspect of daily life now depends on computers that are ever faster, cheaper, and more widespread.
Professor Fuchs has an additional point to make: Moore's Law is dead.
Many disagree, especially chip makers. But even if it's not dead, Moore's Law looks unwell, with Intel taking five years, rather than two, to make its latest process node transition. And it looks to be on increasingly expensive life support: a 2018 study from researchers at MIT and Stanford concluded that research and development spending on keeping the rate of semiconductor growth up had increased some 18-fold since the early 1970s, with ever-decreasing effectiveness. Yet with Intel publishing a new roadmap going into 2025 and promising three new iterations of chip technology, and TSMC and Samsung also promising quick-fire movement towards the 1nm range and beyond, what's actually happening?
The size of the problem
Modern chip manufacturers specify their processes in nanometres, which for a long time was a convenient way to describe the length of a particular feature in the standard metal-oxide semiconductor field effect transistor (MOSFET) at the heart of integrated logic. These planar devices have a simple layered construction. A switching area, called the gate, lies on top of a switched area called the source-drain channel, and a voltage on the first switches current in the second. The feature size – say, 22nm – referred to the smallest gate length, and hence determined how many transistors could fit in a given area.
Around the mid-1990s, though, the physics started to get unhelpful. MOSFETs are configured in complementary pairs (CMOS) in logic chips, where one transistor conducts while its partner blocks to make a logic one, and vice-versa for a zero. This means they only draw power when switching, not when holding a state, so many millions of transistors can be put on a chip without it burning up. But past a certain point, as transistors get smaller they become worse at isolating voltage and leakage current goes up, much as many materials become transparent when made thin enough. Noise becomes a problem too, as does gate delay – the speed at which a voltage on the gate switches the channel, and thus how fast the transistor operates. Various engineering fixes, such as high-k dielectrics – insulating materials that improve the gate's control of the channel without extra leakage – prolonged the life of planar transistors into the upper 20nm ranges, but new non-planar structures were needed.
The first big change was generically called the FinFET, where the channel no longer lies flat but sticks up like a fin from the surface of the chip. This lets the gate cover more than one surface of the channel, increasing the coupling between them without needing thinner insulation, increasing density and reducing gate delay. First demonstrated in a 28nm process in 2002 by TSMC, various FinFET architectures have been adopted by all high-end chip manufacturers. Intel, for example, introduced its gate-on-three-sides FinFET in 2012's 22nm Ivy Bridge architecture.
With no simple gate length metric, though, feature size lost whatever physical meaning it had and became just a name for each new process. This made comparisons of different manufacturers' 14, 10, 7, and 5nm processes difficult, with Intel putting itself at a particular disadvantage by accurately but unhelpfully labelling successive iterations that didn't involve a step change as 10+, 10++ and so on, even though performance-wise the designs were equivalent or superior to its competitors' 7nm. The company recently realigned itself with the industry, with a roadmap going down to 2nm or, as it now calls it, 20A – an angstrom being one-tenth of a nanometre.
By way of comparison, Samsung's 5nm process, 5LPE, was introduced in 2018 and has a transistor pitch of some 57nm and a density of 127 million transistors per square millimetre. TSMC's 2019 5nm process, N5, has a 48nm pitch and 178 million transistors/mm². Intel's new roadmap puts its equivalent process, Intel 4, into 2022 with approximately 200 million transistors/mm².
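Those density figures can be turned into a rough silicon budget per transistor. A quick sketch, using only the densities quoted above (the naive reciprocal ignores the mix of logic and SRAM and design-dependent utilisation, so treat it as an order-of-magnitude guide):

```python
# 1 mm^2 = 1e12 nm^2; area per transistor is just the reciprocal of density.
NM2_PER_MM2 = 1e12

for name, density_per_mm2 in [("Samsung 5LPE", 127e6),
                              ("TSMC N5", 178e6),
                              ("Intel 4", 200e6)]:
    area_nm2 = NM2_PER_MM2 / density_per_mm2
    print(f"{name}: ~{area_nm2:.0f} nm^2 per transistor")  # e.g. ~5000 for Intel 4
```

Even the roomiest of these allows each transistor, wiring included, less than a 90nm × 90nm patch of silicon.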
FinFET physics won't scale once fin width drops below around 5nm, which will happen at most companies' 3nm nodes. The next move, by Intel and others on their own paths to 2nm by 2026 or before, is to extend the FinFET concept to Gate All Around – GAA – which, as the name suggests, wraps the gate almost completely around the channel. With the channels now looking like stacks of tiny sheets or wires, the same technology is also referred to as nanoribbon or nanosheet.
Work is also being done with GAA to partially merge the two transistors of the standard CMOS switch into a single combined structure with shared layers, called a Complementary FET, or CFET. At the theoretical limit this could double useful transistor density, and it may be a way from 2nm to 1nm, but nobody has committed to it or similar designs yet.
In a further attempt to keep Moore's nose above the surface, advanced packaging techniques such as face-to-face, where two chip dies are stacked top to top, double the number of transistors in a single package – although not per square millimetre of silicon.
The industry has decided on what it wants to do. The problem is how: physics is even harsher on production lines than architectures at these scales.
The big challenges
Chips are made in stages. The raw silicon wafer goes through a complex path of lithography, coating, etching, deposition, and testing, all under different conditions but with the proviso that no completed step can be damaged by subsequent ones – so processes that need high temperatures have to happen first, and can't be repeated after more sensitive stages. To put things in perspective, 2nm is the width of just 10 silicon atoms, and many things have to work at that scale.
Lithography – how silicon chips are printed – is perhaps the biggest problem. A thin film of photosensitive resist lacquer is applied to the surface of the wafer, and a pattern of light is then shone on it through a mask. A developer then washes away the exposed resist (or the unexposed resist, depending on its type), laying bare the parts of the silicon defined by the mask. These exposed areas are then treated to make them the right sort of compound for their part in the finished circuit.
At low-nanometre feature sizes, every part of this is challenging. The spun films may be as thin as 5nm, or around 50 atoms thick, but if they don't form a perfectly smooth layer without bumps or dips, the exposure process will be flawed. The light used to expose the layers has to be extreme ultraviolet, EUV, whose 13.5nm wavelength is short enough to create the tiny features. Most fabs to date have used mid-UV light at 193nm, which through a variety of optical and process tweaks can create features with a pitch of around 40nm.
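The resolution limit behind those numbers is usually expressed with the Rayleigh criterion: the smallest printable half-pitch is roughly k1 × wavelength / numerical aperture. A hedged sketch – the NA and k1 values below are typical published figures for these tool classes, not figures from this article:

```python
def rayleigh_half_pitch(wavelength_nm, na, k1):
    """Rayleigh criterion: smallest printable half-pitch ~= k1 * lambda / NA."""
    return k1 * wavelength_nm / na

# 193nm immersion lithography, NA ~1.35, k1 near its single-exposure floor:
print(round(rayleigh_half_pitch(193, 1.35, 0.25)))  # ~36nm half-pitch
# EUV at 13.5nm, NA ~0.33, with a more comfortable k1:
print(round(rayleigh_half_pitch(13.5, 0.33, 0.4)))  # ~16nm half-pitch
```

Splitting one layer across several exposures – multiple patterning – is how 193nm tools get below their single-exposure limit to the pitches mentioned above, at the cost of extra steps.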
Both TSMC and Samsung have started using EUV on their 5nm lines for some but not all processes. TSMC has said publicly that it's using EUV for inter-layer connections, contacts, and the metal patterns that connect components, as well as marking the places where other features need to be cut out.
To go further, though, EUV may need to move to multiple exposures per layer, as sufficient light at sufficient resolution can't be delivered across an entire wafer at once. Less light means longer exposure times, which cuts into throughput and thus profit, and double-patterning also involves extra steps with the same effect. All this has to happen at much higher precision than hitherto, creating severe engineering challenges.
To alleviate this, the industry is looking at various ways of treating areas of a chip more precisely, using etching and deposition at the atomic or molecular level, instead of repeatedly exposing the entire wafer at every stage. With the ability to target specific areas comes the possibility of identifying and remedying surface defects, improving yield and throughput. All these processes are experimental at the moment.
Inspection and testing are also problematic. Optical inspection lacks the resolution to find all problems, while another technique in use, scanning a circuit with an electron beam as in electron microscopy, has the resolution but lacks the speed needed for a fab line. A third technique, X-ray diffraction, is used in labs; it is the same idea at heart as was used to determine the structure of DNA in the early 1950s. A tight beam of X-rays, which have a tenth the wavelength of EUV, is shone at various angles through the wafer; the diffraction patterns formed as it passes through areas of different electron density can be analysed to reveal the structure. Although capable of great precision, of probing 3D systems with buried features, and of particular application to highly regular structures such as memory, it is currently infeasible for fab lines because of cost, size, and lack of speed. As with e-beam inspection, where efforts are being made to create multi-beam tools, work is ongoing.
The end of it all
Despite the bullishness of the industry, the path to 2nm in 2025 is not guaranteed. And the economics of chip making have already changed dramatically – a report by the Center for Security and Emerging Technology (CSET) estimates that across TSMC's last three nodes – 10, 7, and 5nm – the cost of an equivalent chip has remained largely stable at $274, $233, and $238. The cost of a wafer went up from around $6k to $17k, but the greater number of chips per wafer balanced that out. During the earlier three-node transition from 65nm through 40nm to 28nm, by contrast, cost per wafer went up by only half, from $2k to $3k, while cost per chip dropped by over two-thirds, from $1,428 to $453. Those days have gone, and are not coming back.
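A back-of-envelope check on those CSET figures makes the shift visible: dividing wafer cost by per-chip cost gives the implied number of equivalent chips per wafer (a simplification that ignores yield detail, margins, and rounding in the source numbers):

```python
# (wafer cost $, per-equivalent-chip cost $) as quoted in the text above.
nodes = {"65nm": (2_000, 1_428), "28nm": (3_000, 453),
         "10nm": (6_000, 274),   "5nm": (17_000, 238)}

for node, (wafer_cost, chip_cost) in nodes.items():
    chips = wafer_cost / chip_cost
    print(f"{node}: ~{round(chips)} equivalent chips per wafer")
```

On the recent nodes, wafer costs nearly tripled while the chip count per wafer grew by a similar factor – which is exactly why the per-chip price flatlined instead of falling.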
Even if there are two or three more cycles left in transistor, packaging, and architectural changes, the drivers that Gordon Moore saw for silicon have been replaced by force of habit. What alternatives are there?
- The hour grows late, the enemy are at the gates... but could Intel's exiled heir apparent ride to the rescue?
- Intel talks up its 10nm Tiger Lake laptop system-on-chips as though everything is going according to plan
- Moore's Law is deader than corduroy bell bottoms. But with a bit of smart coding it's not the end of the road
The Institute of Electrical and Electronics Engineers (IEEE) tracks promising technologies through its International Roadmap for Devices and Systems (IRDS) "Beyond CMOS" initiative. It reports on separate technologies for storage and logic, categorising them as production, prototype, or emerging. Of the five prototype storage technologies, most, like Phase Change RAM (PCRAM), have been in that state for decades, with just one making it into the "More Moore" category: Spin Transfer Torque RAM (STT-RAM), a fairly complex device of interest because it promises to be robust and fast, but it is neither immediately competitive with DRAM nor intrinsically more scalable. Of the seven emerging technologies, none is close to production, let alone to catching up with silicon by 2025.
For logic, the situation is no more promising. Alongside a number of non-CMOS but still-silicon FET designs of fairly conventional mien, the list of emerging technologies includes transistor lasers; Domain Wall Logic, a transistor-less network of minute magnetic wires; excitonics; spin wave devices; topological insulators; and some 11 technologies that are generally the latest new idea in long-established research domains, such as spintronics and optronics.
It's not just that there's no clear leader for continuing Moore's Law once CMOS runs out, it's that there's not even a pack of hopefuls. Moore's Law has induced more than half a century of intensive investment in making CMOS better, the end results of which are production lines finely tuned to creating billion-transistor chips at an atomic scale, with an armoury of tools and expertise around them. No technology still in the lab is going to leapfrog that by 2025.
New developments will continue, especially in non-general purpose computing such as AI and numeric analysis as architectures are fine-tuned to particular tasks. But on every front, the economics and physics of Moore's Law no longer apply. It's been a wild ride, as significant as the Industrial Revolution, and there's lots to sort out for generations to come. But the great engine that set us on the new course is falling silent, and the time is coming to say – no Moore. ®