Future of computing crystal-balled by top chip boffins

Bad news: It's going to be tough. Good news: You won't be replaced

Going extreme – maybe

But no matter what materials are used to increase electron mobility and reduce leakage, there remains the problem of using photolithography to etch the transistors into the silicon – whether it's coated with a III-V material or not.

As Borkar explained, "As far as the lithography is concerned, the limit is 193nm light, right?" The 193nm light he was referring to is in the deep ultraviolet range. The next step, however, is extreme ultraviolet lithography, known in the trade as EUV – which is around 13-13.5nm.

"The next is 13, and there is nothing else in sight," Borkar said.

But EUV – the next Holy Grail of chipbaking technology – is proving elusive. "EUV has several challenges," Bohr told us. One of those challenges is to create the reflective masks needed for EUV. "The other challenge," Bohr said, "is coming up with a high-intensity light source that has enough photons to expose the photoresist that you want – enough photons at the appropriate wavelengths, at the EUV wavelengths."

When we asked Bohr about the high power-consumption levels of EUV technology – a difficulty noted by others – he said, "I'm not sure that's such a big deal. The industry is trying to develop a high-intensity light source, and you might measure that in power – how many watts of output does it have – and trying to achieve higher power levels is what we're actually trying to do. Just because the machine consumes 100 watts or 200 watts or whatever, I don't think that's a major problem."

There does remain one other big problem, he said. "It's getting enough intensity so you can expose the wafer quickly and get good throughput from the machine" – which is what Simon Segars of ARM's Physical IP Division was talking about at this year's Hot Chips conference when he said, "To have a fab running economically, you need to build about two to three hundred wafers an hour. EUV machines today can do about five."
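For a rough sense of the gap Segars was describing, here's a back-of-the-envelope sketch in Python (the 250-wafers-an-hour figure is simply the midpoint of his "two to three hundred"; the arithmetic is ours, not his):

# Back-of-the-envelope: how far EUV throughput falls short of what a fab
# needs to run economically, using the figures Segars quoted at Hot Chips.
ECONOMICAL_WAFERS_PER_HOUR = 250   # midpoint of "two to three hundred"
EUV_WAFERS_PER_HOUR = 5            # what an EUV tool managed at the time

shortfall = ECONOMICAL_WAFERS_PER_HOUR / EUV_WAFERS_PER_HOUR
print(f"EUV throughput shortfall: roughly {shortfall:.0f}x")

In other words, you'd need on the order of fifty EUV tools to hit that kind of throughput – which is why intensity, rather than the electricity bill, is the real worry.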

As process dimensions drop to 20nm and below, 193nm lithography is resorting to doubling up the light-guiding masks. When we asked Borkar if it would be possible to etch even smaller features with multiple masks, he said it would be possible, but expensive. "Hey, it's an act of desperation, but you gotta continue this going," he said. "We'll try hard – there is no stone that's not unturned. You've got to dig them up."

But should EUV become an affordable reality – and that remains a big "but" – it'll be clear sailing for a while, at least in terms of lithography. "Just imagine," Borkar said, "now with 193-nanometer light, I'm going down to 32 nanometers. In a breeze. So with 13 nanometers I can go down to a couple of nanometers, right?"

When we reminded him that at a "couple of nanometers" process size, he would be, as Bohr likes to say, "running out of atoms," Borkar just smiled. "I'll let the next generation figure that one out," he said.
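Borkar's arithmetic is essentially a ratio of wavelengths. Here's a minimal sketch of that reasoning – it assumes the simple Rayleigh-style scaling, with the optics' k1 factor and numerical aperture held constant, which is our simplification rather than anything Borkar spelled out:

# Illustrating Borkar's ratio argument: minimum printable feature size scales
# roughly with wavelength (feature ~ k1 * wavelength / NA). Holding k1 and NA
# fixed – a simplification – and taking 32nm as what 193nm light reaches:
DUV_WAVELENGTH_NM = 193.0   # ArF deep-ultraviolet light
EUV_WAVELENGTH_NM = 13.5    # extreme ultraviolet
DUV_FEATURE_NM = 32.0       # feature size reached with 193nm light, per Borkar

euv_feature_nm = DUV_FEATURE_NM * (EUV_WAVELENGTH_NM / DUV_WAVELENGTH_NM)
print(f"Same-optics EUV feature size: about {euv_feature_nm:.1f} nm")

The result is about 2.2nm – Borkar's "couple of nanometers".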

Exascale efficiency

As might be guessed, microprocessor architect Pawlowski was more interested in having the "process guys" hand him highly efficient chips than in exactly what materials wizardry they might use to stay on that ever-shrinking ramp.

Steve Pawlowski

Computational efficiency is on Pawlowski's mind, and it's core to the exascale initiative that Intel is conducting with the US and foreign governments. He even suggested that we do a bit of background reading on that topic: an article in the July/September issue of the IEEE Annals of the History of Computing by Stanford University's Jonathan Koomey and others entitled "Implications of Historical Trends in the Electrical Efficiency of Computing".

"What they're looking at is computational efficiency and how that's evolved since 1946 up to 2009," he said about the article. "And they've basically shown that it's followed a Moore's law, in that the improvement of flops-per-watt has basically doubled every 1.5 to 1.6 years. We've actually seen that before, but these guys formulated it and put it in a paper."

Following that trendline to its logical conclusion, Pawlowski said, "We went to the US government and said, 'By the time you get to exascale, these are going to be 150-megawatt machines – is that what you want?' And they said, 'No, we need to have something at about 20 megawatts'."

Problem. To reach that low-power requirement would require architectures that use only about three picojoules per flop (floating-point operation). A picojoule, by the way, is one million-millionth of a joule (10⁻¹² joules); if you're sitting quietly at rest, you emanate about 100 joules of heat energy every second.

Translation: a picojoule is a very, very small amount of energy.

Even if those aforementioned process guys can continue their scaling and materials-development successes for the foreseeable future, that alone won't get to exascale, Pawlowski says: "Our current Core architecture is maybe at about 16 picojoules per flop – and now we've gotta get about 5 or 6X better than that."
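The arithmetic behind those targets is simply power equals energy-per-operation times operation rate. A small sketch, assuming an exaflop means 10^18 floating-point operations per second (the 16- and 3-picojoule figures are the ones discussed above; treating them as covering the floating-point work alone, with memory, interconnect, and cooling on top, is our reading):

# Power = energy per flop * flops per second.
EXAFLOPS = 1e18          # floating-point operations per second
PICOJOULE = 1e-12        # joules

for pj_per_flop in (16, 3):   # today's Core architecture vs. the exascale target
    megawatts = pj_per_flop * PICOJOULE * EXAFLOPS / 1e6
    print(f"{pj_per_flop} pJ/flop at one exaflop/s -> about {megawatts:.0f} MW "
          "for the floating-point work alone")

At 16 picojoules per flop the arithmetic alone would burn about 16 megawatts; at 3 picojoules it drops to about 3 megawatts, leaving headroom in a machine held to roughly 20 megawatts for everything else.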

When we asked Pawlowski what technologies and techniques he was looking at to get that large an increase in computational efficiency, he had a few ideas – but few he could share with us. "There's not really too much I should tell you," he chuckled. "But the bottom line is we're tearing apart the applications. So instead of just building the machine and then saying, 'Okay, here, programmer, go write it', we're actually looking at the applications for the different workloads."
