Amazon sounds death knell for rocket-science grids

Clustered instances semi-standard

Comment Amazon's Cluster Compute Instances officially sounded the death knell for grid computing efforts that once held promise as the "next big thing".

Cluster Compute Instances take multiple x64 servers and link them together using 10 Gigabit Ethernet interfaces and switches. The EC2 virtual server slices function just like any other sold by Amazon, except that the HPC variants have 10 Gigabit Ethernet links and a specific hardware profile that allows for fine-tuning of applications.
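
For illustration only, here is a minimal sketch of how such a cluster might be requested programmatically. It uses Python and the boto3 SDK, which post-dates the tooling available at launch; the AMI ID, placement group name and instance count are placeholders rather than anything Amazon specifies, and cc1.4xlarge is the Cluster Compute instance type announced with the offering.

# Illustrative sketch only: requesting a tightly coupled cluster with boto3.
# AMI ID, group name and instance count are placeholders, not values from
# the article; boto3 itself post-dates the 2010 launch tooling.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups co-locate instances on the low-latency
# 10 Gigabit Ethernet fabric described above.
ec2.create_placement_group(GroupName="hpc-grid", Strategy="cluster")

# Launch a handful of cluster compute instances into that group.
response = ec2.run_instances(
    ImageId="ami-00000000",       # placeholder HPC-ready AMI
    InstanceType="cc1.4xlarge",   # Cluster Compute instance type at launch
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-grid"},
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]
print("Launched:", instance_ids)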

This semi-standardization of clustered instances reduces not only the cost of running a grid or high-performance computing (HPC) application, but also the vast complexity associated with building grids and their applications.

And it's all thanks to the cloud. No, really: grid applications are one of the best use cases for cloud services yet. Not only does the cloud have scale, it also offers simple deployment methods and far fewer operational concerns. And the cloud has market momentum, where grid remains tied to its scientific and academic roots.

Much in the same way that Linux usurped the marketing crown from Unix - as well as eventual market share - cloud computing took away all the glory from grid computing, which circa 2004/2005 was the term used to describe large-scale distributed computing systems - unless of course you listened to pundit Nicholas Carr and called it Utility Computing. Either way, cloud won.

And while the technological approaches underlying grid and cloud are a bit different - an oversimplified explanation is that most clouds run stacks atop virtual machines whereas grids tend to use whole machines for processing - the underlying notion of elasticity and pay-as-you-go consumption is roughly the same, although the implementation and operations require different approaches and skillsets.

So why cloud and not grid? Grid computing has tended to focus on computationally intensive operations, whereas cloud is more oriented toward scale and ease of deployment. Most HPC applications are designed to perform one specific set of functions on a specific set of hardware, whereas new-school data processing tools like Hadoop were developed to run on distributed systems that care far less about the underlying infrastructure.
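
To illustrate the point, here is a minimal, generic Hadoop Streaming-style word count in Python: the mapper and reducer deal only with lines on stdin and stdout, with no knowledge of which or how many machines run them. The file names, paths and invocation in the comment are assumptions, not anything from the article.

#!/usr/bin/env python
# Minimal Hadoop Streaming-style word count: the mapper and reducer read
# stdin and write stdout, and are indifferent to the machines they run on.
# Illustrative invocation (paths are placeholders):
#   hadoop jar hadoop-streaming.jar \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#     -input /data/in -output /data/out
import sys

def mapper():
    # Emit one "word<TAB>1" record per word seen on stdin.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word.lower())

def reducer():
    # Hadoop sorts mapper output by key, so counts for a word arrive together.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()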

I'm not suggesting that new-school applications would or should only run in the cloud. What I am saying is that these new architectural patterns mean developers can mimic a distributed environment much more easily, and that data can cross enterprise and data center boundaries in new ways. There are also many more deployment options when you are targeting clouds rather than your own data center.

With the exception of very specific privacy and security issues - which can arguably be addressed anyway - there are fewer and fewer reasons why any organization would want or need to run its own massive server farm.

This is not to suggest that grid and HPC will become completely obsolete, but rather that, going forward, they will exist in the context of the cloud and will be prime candidates to parcel out to providers that can supply vast amounts of on-demand compute capacity.

In place of large numbers of servers that have to be procured and managed, cloud-based grid application deployments will look a lot more like XML and a lot less like rocket science.

Perhaps what matters most is the way developers and system administrators interact with large amounts of computing resources. It's not so much the specific code or application infrastructure that makes the cloud more appealing as the methods and capabilities that make it significantly easier to use and manage.

To be clear, the new AWS offering is not a "complete" solution. Just as AWS provides no tooling for standard AMIs, you will need to bring your own tooling to manage HPC applications on the new cluster instances. But it doesn't matter. You no longer have to own, deploy and manage hundreds of boxes to run an HPC application. You simply deploy a bunch of AMIs and kill them when the job is done.
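
A minimal sketch of that teardown step, again using boto3 for illustration; the instance IDs are placeholders that would come from the earlier launch call.

# Illustrative sketch only: killing the cluster once the job is done.
# Instance IDs are placeholders standing in for those returned at launch.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_ids = ["i-00000000"]  # placeholder IDs from the launch step

# Terminate the instances and wait for them to disappear; billing stops
# once they are gone, and there is nothing left to rack, patch or power.
ec2.terminate_instances(InstanceIds=instance_ids)
ec2.get_waiter("instance_terminated").wait(InstanceIds=instance_ids)
print("Cluster gone")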

The last iteration of grid computing required too much hardware, too much software and way too much money to reach its true potential. Clouds, both public and private, are a giant step on the data processing evolutionary scale. ®
