
Supercomputers get their own software stack – dev tools, libraries etc

OpenHPC group to take programmers to the highest heights


SC15 Supercomputers are going to get their own common software stack, courtesy of a new group of elite computer users.

The OpenHPC Collaborative Project was launched just before this week's Supercomputing Conference 2015 (SC15) in Austin, Texas, and features among its members the Barcelona Supercomputing Center, the Center for Research in Extreme Scale Technologies, Cray, Dell, Fujitsu, HP, Intel, Lawrence Berkeley, Lenovo, Los Alamos, Sandia and SUSE – in other words, the owners and builders of the world's biggest and fastest machines.

The project describes itself as "a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries."

It comes with the backing of the Linux Foundation – hardly surprising, since the open-source OS runs virtually every supercomputer in the world.

Just six of the top 500 supercomputers don't run GNU/Linux, and even those six run some flavor of Unix, so there's no look-in for Windows or OS X.

Supercomputers also pose their own unique problems – so much so that the software issue for the monster machines got its own mention in a US Presidential Executive Order in July:

Current HPC [high-performance computing] systems are very difficult to program, requiring careful measurement and tuning to get maximum performance on the targeted machine. Shifting a program to a new machine can require repeating much of this process, and it also requires making sure the new code gets the same results as the old code. The level of expertise and effort required to develop HPC applications poses a major barrier to their widespread use.
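
To see what that tuning effort actually looks like, here is a minimal, hypothetical sketch (it comes from neither the order nor OpenHPC): a cache-blocked matrix multiply whose tile size has been measured and hard-coded for one machine's memory hierarchy. Move the code to a different supercomputer and that number – and the measurements behind it – generally have to be redone.

```c
/* Hypothetical illustration of machine-specific tuning: the TILE constant
 * below is chosen to fit one particular CPU's cache sizes. Porting to a
 * new machine typically means re-measuring, changing this value and
 * re-validating the results – the effort the executive order describes.
 * The function computes C += A * B for n x n row-major matrices. */
#include <stddef.h>

#define TILE 64  /* assumed value, tuned for one specific machine */

void matmul_blocked(size_t n, const double *a, const double *b, double *c)
{
    for (size_t ii = 0; ii < n; ii += TILE)
        for (size_t kk = 0; kk < n; kk += TILE)
            for (size_t jj = 0; jj < n; jj += TILE)
                for (size_t i = ii; i < ii + TILE && i < n; i++)
                    for (size_t k = kk; k < kk + TILE && k < n; k++)
                        for (size_t j = jj; j < jj + TILE && j < n; j++)
                            c[i * n + j] += a[i * n + k] * b[k * n + j];
}
```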

This new group hopes to solve at least some of those problems with pre-built packages that will include "re-usable building blocks." In other words, programmers can get up to speed faster and write code that is portable across more than one supercomputer, regardless of architecture, creating a bigger pool of people able to program the beasts.
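
For a flavor of what "portable code" means in practice, here is a minimal sketch (illustrative only, not an OpenHPC deliverable) written against MPI, the kind of standard library such a stack ships: the same source builds and runs unchanged on any cluster whose software stack provides an MPI compiler wrapper such as mpicc, whichever implementation sits underneath.

```c
/* Minimal sketch of portable HPC code: it depends only on the standard MPI
 * interface, so it compiles and runs on any cluster that ships an MPI
 * library, regardless of the machine's architecture or vendor. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Which compiler and MPI library actually do the work is exactly the sort of detail a common, pre-tested stack is meant to take off the programmer's plate.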

In addition, there are "plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability." And if you are a budding supercomputer programmer, all the code will be made freely available.
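
The announcement doesn't spell out what those interfaces will look like, but the principle is familiar from existing HPC libraries, and a hypothetical illustration makes the point: code written against the standard CBLAS interface doesn't care whether OpenBLAS, the reference BLAS or a vendor's library is linked in underneath, so the component can be swapped without touching the application.

```c
/* Hypothetical illustration of interchangeability through a stable
 * interface: this program uses only the standard CBLAS API, so the BLAS
 * implementation behind it can be swapped at link time (OpenBLAS, the
 * reference BLAS, a vendor library) without changing the source. */
#include <cblas.h>
#include <stdio.h>

int main(void)
{
    /* Two 2x2 matrices in row-major order */
    double a[4] = {1.0, 2.0,
                   3.0, 4.0};
    double b[4] = {5.0, 6.0,
                   7.0, 8.0};
    double c[4] = {0.0, 0.0,
                   0.0, 0.0};

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0, a, 2, b, 2, 0.0, c, 2);

    printf("%g %g\n%g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```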

According to the announcement, the project has four main goals:

  • Create a stable environment for testing and validation: The community will benefit from a shared, continuous integration environment, which will feature a build environment and source control; bug tracking; user and developer forums; collaboration tools; and a validation environment.
  • Reduce Costs: By providing an open source framework for HPC environments, the overall expense of implementing and operating HPC installations will be reduced.
  • Provide a robust and diverse open source software stack: OpenHPC members will work together on the stability of the software stack, allowing for ongoing testing and validation across a diverse range of use cases.
  • Develop a flexible framework for configuration: The OpenHPC stack will provide a group of stable and compatible software components that are continually tested for optimal performance. Developers and end users will be able to use any or all of these components depending on their performance needs, and may substitute their own preferred components to fit their own use cases.

The uniqueness of the big machines has caused "duplication of effort and has increased the barrier to entry," according to the Linux Foundation's Jim Zemlin. "OpenHPC will provide a neutral forum to develop an open source framework that satisfies a diverse set of cluster environment use-cases." ®

