HPC

PCI recast for supercomputing future

Double speed or more by 2015


The next generation of the PCI interconnect standard will be aimed squarely at high-performance computing, and it will be developed using a different scheme than were previous generations.

"The solution space that we're targeting for 'gen-four', if you will, is going to be directly focused to service the needs of HPC applications," the PCI-SIG's Serial Communications Workgroup chair Ramin Neshati told The Reg during this week's Intel Developers Forum.

"Gen-three" – aka PCIe 3.0, which was released last November after years of work – runs at a healthy 8GT/s (gigatransfers per second). The target for gen-four is 16GT/s over copper, a transfer rate snappy enough to have few applications outside of HPC.

"By and large, we believe that gen-one, gen-two, and even gen-three will be good enough for the broad spectrum of applications for a long, long time," Neshati said. When asked what "a long, long time" means, his answer was simple and straightforward. "Forever."

"Gen-four will be more of a boutique-type application for very few topologies," he said. "Gen-three will be good enough for the world."

Initial studies for gen-four, aka PCIe 4.0, have begun, and the same low-cost, high-volume, and compatibility goals underpin those studies and the discussions they involve. Although the goal is 16GT/s over copper, Neshati says that higher transfer rates might be possible.

The development process for PCIe 4.0 will be different from previous generations. "In gen-one, gen-two, gen-three," he said, "we identified a worst-case scenario – say, for example, a server channel of 20-inch [with] two connectors. A very tough topology to solve."

The advantage of building a standard around a worst-case scenario is easy to understand: if it can handle a worst case, less-demanding cases should be a walk in the park.

For PCIe 4.0, however, the PCI-SIG is taking a different tack – what Neshati described as "a more optimistic channel" – using as its design base a short channel of eight to 10 inches with one connector.

"If you solve it for that topology, then any worse topology – longer channel – will have to pay to get there," Neshati said. This "pay as you go" scheme, as he called it, would for example require the extra expense of a repeater if an implementation required a longer channel.

"So there's a mental shift here," he said, "from a 'solve it for the worst case' to 'solve it for the best case', and then add costs to solve it for the worst cases."

The reasoning behind the shift is simple: at these performance levels, solving for the worst case would introduce costs that would burden implementers of less-demanding applications.
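One crude way to picture the "pay as you go" trade-off is as a channel loss budget: spec the link for the short, single-connector base topology, and let anything lossier buy back margin with a repeater. The sketch below uses purely hypothetical placeholder figures – they are not PCI-SIG numbers – just to show the shape of the decision:

```python
# Hypothetical loss-budget sketch of the "pay as you go" idea.
# All dB figures are illustrative placeholders, not PCI-SIG numbers.

BUDGET_DB = 25.0            # assumed total channel loss the receiver can tolerate
LOSS_PER_INCH_DB = 1.5      # assumed board-trace loss at the signalling rate
LOSS_PER_CONNECTOR_DB = 2.0 # assumed loss per connector

def needs_repeater(trace_inches, connectors):
    """True if the topology exceeds the assumed loss budget and must pay for a repeater."""
    loss = trace_inches * LOSS_PER_INCH_DB + connectors * LOSS_PER_CONNECTOR_DB
    return loss > BUDGET_DB

# The "optimistic" base topology: an 8-to-10-inch channel with one connector.
print(needs_repeater(10, 1))   # False -- fits the budget as specced
# The old worst-case server topology: 20 inches with two connectors.
print(needs_repeater(20, 2))   # True  -- this implementer pays for a repeater
```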

So, how will 16GT/s over copper be accomplished? "We're looking at connector improvements, keeping it mechanically the same but electrically improving the connector," Neshati said. Other improvements to be investigated might include changes in silicon design, channel improvements to mitigate crosstalk and discontinuity, and using different materials in the channel.

"With these knobs," Neshati said, "we think we have line of sight to get to 16 gig on copper – maybe even higher."

But the HPC world will need to wait a bit before incorporating PCIe 4.0 into its installations. Neshati thinks that the bit rate will be set late this year or early next, which will then become the basis of further studies leading to a specification, and then to silicon to test the spec.

"We're targeting – based on member feedback – a 2015, 2016 adoption cycle for gen-four," he said. To get products into the field by then, he believe that the spec will need to be finalized by 2013 – an aggressive timeline, to say the least.

The question naturally arises: if PCIe 3.0 will last "forever" and PCIe 4.0 will start in HPC and only gradually trickle down into servers, how will the PCI-SIG itself remain relevant?

"As long as there are two pieces of silicon that need to talk to each other, and they need to talk to each other through a standard interface," Neshati told us, "then PCI-SIG will be relevant." ®
