
Top CompSci boffins name the architectures we'll need in 2030

Number one: Make designing special-purpose hardware as easy as writing software

The International Symposium on Computer Architecture (ISCA) has revealed the five architectural challenges it thinks computer science needs to solve to meet the demands of the year 2030.

The recommendations, distilled from the Architecture 2030 Workshop at June's ISCA in Korea and available here, draw on contributions from speakers at several universities, the IEEE's Rebooting Computing Initiative and International Roadmap for Devices and Systems, and an industry survey.

The resulting document, Arch2030: A Vision of Computer Architecture Research over the Next 15 Years, starts by identifying a “specialization gap”: computing has improved in recent decades largely because we've coasted on Moore's Law, the authors say, but as that scaling slows, special-purpose hardware will have to pick up the slack – and designing it remains slow and expensive. To keep up with the demands of future workloads, “Developing hardware must become as easy, inexpensive, and agile as developing software.”
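To make that concrete, here's a minimal sketch of what software-style hardware development can look like: a parameterised Python function that spits out Verilog for an adder of any width, in the spirit of (though vastly simpler than) generator frameworks such as Chisel or Amaranth. Everything below is our own illustration; none of the names come from the report.

```python
# Minimal sketch: generating hardware descriptions programmatically,
# the way hardware-construction frameworks do at much larger scale.
# All names here are illustrative; nothing below comes from the report.

def make_adder(width: int) -> str:
    """Emit Verilog for an unsigned adder of the given bit width."""
    return f"""\
module adder_{width} (
    input  [{width - 1}:0] a,
    input  [{width - 1}:0] b,
    output [{width}:0]     sum
);
    assign sum = a + b;
endmodule
"""

if __name__ == "__main__":
    # Changing one parameter regenerates the hardware -- the software-like
    # "agility" the Arch2030 authors are asking for, in toy form.
    print(make_adder(8))
    print(make_adder(32))
```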

Next comes a call for “The Cloud as an Abstraction for Architecture Innovation”. Translated, this means researchers should go to town using all the cloud providers' best bits – machine-learning-optimised CPUs, FPGAs, GPUs in large numbers – to create otherwise unimaginable architectures. The authors also say researchers must redouble efforts to virtualise those architectures so they can span different clouds.
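What might that abstraction look like in practice? A hypothetical sketch, in Python for brevity: workloads target an abstract accelerator interface, and a resolver maps them onto whatever FPGA or GPU the current cloud happens to offer. All class and backend names below are invented for illustration and appear in no real cloud API.

```python
# Hypothetical sketch of an accelerator-abstraction layer; none of these
# names come from the Arch2030 report or any real cloud service.
from abc import ABC, abstractmethod

class Accelerator(ABC):
    """Abstract device: research code targets this, not a specific chip."""
    @abstractmethod
    def run(self, kernel: str, data: list[float]) -> list[float]: ...

class GPUBackend(Accelerator):
    def run(self, kernel, data):
        print(f"dispatching {kernel} to GPU")
        return data  # placeholder for real execution

class FPGABackend(Accelerator):
    def run(self, kernel, data):
        print(f"dispatching {kernel} to FPGA bitstream")
        return data

def pick_backend(available: dict[str, Accelerator],
                 preferred: list[str]) -> Accelerator:
    """Virtualisation in miniature: the same workload lands on whatever
    hardware the current cloud happens to offer."""
    for name in preferred:
        if name in available:
            return available[name]
    raise RuntimeError("no suitable accelerator available")

cloud_a = {"gpu": GPUBackend()}
cloud_b = {"fpga": FPGABackend()}
for cloud in (cloud_a, cloud_b):
    pick_backend(cloud, ["fpga", "gpu"]).run("matmul", [1.0, 2.0])
```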

3D integration in silicon, “shortening interconnects by routing in three dimensions, and facilitating the tight integration of heterogeneous manufacturing technologies” is also recommended. If it can be pulled off, we'll get “greater energy efficiency, higher bandwidth, and lower latency between system components inside the 3D structure.” Which sounds lovely.
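A back-of-envelope illustration of why shorter wires matter, using textbook first-order models: the delay of an unrepeated RC interconnect grows roughly with the square of its length, while switching energy grows roughly linearly with it. The baseline figures below are invented purely for illustration.

```python
# Back-of-envelope illustration of why shorter interconnects help.
# First-order models: unrepeated RC wire delay ~ length^2,
# switching energy ~ length (capacitance scales with wire length).
# The baseline constants below are invented for illustration only.

BASE_LEN_MM = 10.0       # hypothetical cross-die wire in a 2D layout
STACKED_LEN_MM = 0.5     # hypothetical vertical path in a 3D stack
DELAY_AT_BASE_NS = 1.0   # assumed delay of the 10 mm wire
ENERGY_AT_BASE_PJ = 2.0  # assumed energy per bit on the 10 mm wire

ratio = STACKED_LEN_MM / BASE_LEN_MM
delay_3d = DELAY_AT_BASE_NS * ratio ** 2   # quadratic in length
energy_3d = ENERGY_AT_BASE_PJ * ratio      # linear in length

print(f"delay:  {DELAY_AT_BASE_NS:.3f} ns -> {delay_3d:.5f} ns")
print(f"energy: {ENERGY_AT_BASE_PJ:.3f} pJ -> {energy_3d:.3f} pJ per bit")
```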

As does a call for architectures “Closer to Physics”, shorthand for devices that exploit new materials, or techniques like quantum computing, and that emphasise analog processing of data instead of today's approach of forcing computation through digital abstractions.
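To see the trade being offered, here's an illustrative simulation: a dot product computed “in physics” picks up a little device noise on every multiply-accumulate, in exchange for (in real hardware) far cheaper operations. The noise level is an assumption we've picked for illustration, not a figure from the workshop.

```python
# Illustrative only: modelling an "analog" dot product as the exact result
# plus Gaussian device noise on each term. The noise magnitude is an
# assumption, not a measured figure from the Arch2030 report.
import random

def digital_dot(a, b):
    """Exact digital reference result."""
    return sum(x * y for x, y in zip(a, b))

def analog_dot(a, b, noise_frac=0.02):
    """Each multiply-accumulate happens 'in physics', so every term
    carries a small random error proportional to its magnitude."""
    return sum(x * y * (1 + random.gauss(0, noise_frac))
               for x, y in zip(a, b))

random.seed(0)
a = [random.uniform(-1, 1) for _ in range(256)]
b = [random.uniform(-1, 1) for _ in range(256)]
exact = digital_dot(a, b)
approx = analog_dot(a, b)
print(f"exact {exact:.4f}, analog {approx:.4f}, "
      f"error {abs(exact - approx):.4f}")
```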

Processors assembled from carbon nanotubes, the document says, “promise greater density and lower power and can also be used in 3D substrates.”

Lastly, the document identifies machine learning (ML) as 2030's most in-demand workload and offers the following observation about how to deliver it:

“While the current focus is on supporting ML in the Cloud, significant opportunities exist to support ML applications in low-power devices, such as smartphones or ultralow power sensor nodes. Luckily, many ML kernels have relatively regular structures and are amenable to accuracy-resource trade-offs; hence, they lend themselves to hardware specialization, reconfiguration, and approximation techniques, opening up a significant space for architectural innovation.”
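Here's a minimal sketch of one such approximation technique: symmetric 8-bit quantisation, the kind of accuracy-resource trade-off that lets specialised silicon replace 32-bit floating-point multiplies with cheap integer ones. The scheme below is the textbook version, not anything prescribed by the report.

```python
# Minimal sketch of symmetric 8-bit quantisation, one of the
# accuracy-resource trade-offs the report says ML kernels tolerate well.
import numpy as np

def quantize(x: np.ndarray):
    """Map floats onto int8 with a single per-tensor scale factor."""
    scale = np.max(np.abs(x)) / 127.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.standard_normal(512).astype(np.float32)  # stand-in "weights"
a = rng.standard_normal(512).astype(np.float32)  # stand-in "activations"

wq, w_scale = quantize(w)
aq, a_scale = quantize(a)

exact = float(w @ a)
# Integer multiply-accumulate in int32, rescaled back to float at the
# end -- roughly what an int8 ML accelerator does.
approx = int(wq.astype(np.int32) @ aq.astype(np.int32)) * w_scale * a_scale

print(f"fp32 {exact:.4f}  int8 {approx:.4f}  "
      f"rel err {abs(exact - approx) / abs(exact):.2%}")
```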

The Register looks forward to the day when we can see all this in action in Dell HPEMC's new ultra-hyper-converged meta-infrastructure running VMware's XenSphere. ®

 
