Concerned about cloud costs? Have you tried using newer virtual machines?

Study confirms AWS prices go down as you upgrade host CPU family, not up – and not so fast, GPU users


Better, faster, and more efficient chips are driving down cloud operating costs and pushing prices lower, according to research from the Uptime Institute, an IT infrastructure standards and advisory group.

With each generation of processor family, cloud pricing has trended downward, with one notable exception, Owen Rogers, research director for cloud computing at the Uptime Institute, explained in a write-up this week.

The research tracked Amazon Web Services (AWS) pricing across six generations of AMD and Intel CPUs and three generations of Nvidia GPUs, using data obtained from the cloud provider’s price list API. Rogers acknowledged AWS’ Graviton series of Arm-compatible CPUs, but they weren’t included in the analysis.

All tests were conducted in AWS’ US-East-1 region; however, Rogers noted his findings should be similar across all AWS regions.
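
For the curious, pulling those numbers yourself is straightforward. The sketch below uses the boto3 Python SDK against the AWS Price List API to compare on-demand Linux prices across successive m-family generations; the instance sizes and filter values are illustrative assumptions, not necessarily the exact query Uptime ran.

    # Sketch: compare on-demand prices across m-family generations via the
    # AWS Price List API. Instance types and filter values are illustrative,
    # not necessarily the exact query the Uptime Institute ran.
    import json
    import boto3

    # The Price List API is served from a limited set of regions; us-east-1 works.
    pricing = boto3.client("pricing", region_name="us-east-1")

    GENERATIONS = ["m1.large", "m3.large", "m4.large", "m5.large", "m6i.large"]

    def on_demand_price(instance_type):
        """Hourly on-demand USD price for a Linux, shared-tenancy instance."""
        resp = pricing.get_products(
            ServiceCode="AmazonEC2",
            Filters=[
                {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
                {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
                {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
                {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
                {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
                {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
            ],
            MaxResults=1,
        )
        if not resp["PriceList"]:
            return float("nan")                       # type absent from the price list
        product = json.loads(resp["PriceList"][0])    # each entry is a JSON string
        term = next(iter(product["terms"]["OnDemand"].values()))
        dim = next(iter(term["priceDimensions"].values()))
        return float(dim["pricePerUnit"]["USD"])

    for itype in GENERATIONS:
        print(f"{itype}: ${on_demand_price(itype):.4f}/hr")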

Of the eight AWS instance families Rogers tracked, the majority saw a steady decline in customer pricing with each subsequent CPU generation. Pricing for the AWS m-family of general-purpose instances, for example, dropped 50 percent from the first generation to the present.

Some instances, AWS’ storage-optimized instances in particular, saw even more precipitous price drops, which he attributed to other factors, including memory and storage.

It comes as no surprise that CPU performance in these instances tends to improve with each generation, Rogers noted, citing the performance and efficiency gains delivered by architectural and process improvements.

For example, AMD’s third-gen Epyc Milan processor family and Intel’s Ice Lake family of Xeon Scalable processors claim a 19-20 percent performance advantage over previous-generation chips. Both families are now available in a variety of AWS instances, including a storage-optimized instance announced last week.

“Users can expect greater processing speed with newer generations compared with older versions while paying less. The efficiency gap is more substantial than simply pricing suggests,” he wrote, adding that the downward trend is plain to see in AWS’ pricing.

In other words, while you might intuitively expect instances based on older processor tech to be less expensive, more modern, more power-efficient instances are often priced lower to incentivize their adoption.
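
A bit of back-of-the-envelope arithmetic shows why the efficiency gap outruns the sticker price. Assuming the roughly 19 percent generational performance uplift cited above, even a modest price cut compounds into a much larger drop in price per unit of work; the figures below are hypothetical, chosen purely to illustrate the calculation.

    # Hypothetical figures to illustrate price-per-performance; not real AWS prices.
    old_price, new_price = 0.10, 0.096   # $/hr: newer generation priced ~4% lower
    old_perf, new_perf = 1.00, 1.19      # relative throughput: ~19% generational uplift

    old_ppp = old_price / old_perf       # $ per unit of work, old generation
    new_ppp = new_price / new_perf       # $ per unit of work, new generation

    print(f"old: ${old_ppp:.4f}/unit, new: ${new_ppp:.4f}/unit")
    print(f"effective price-performance gain: {1 - new_ppp / old_ppp:.1%}")
    # A 4% price cut plus a 19% speedup works out to roughly 19% less per unit of work.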

"However, how much of the cost savings AWS is passing on to its customers versus adding to its gross margin remains hidden from view,” he wrote.

Some of this can be attributed to customer buying habits, specifically those that favor cost over performance. “Because of this price pressure, cloud virtual instances are coming down in price,” he wrote.

The GPU pricing anomaly

The exception to this rule is GPU instances, which have actually become more expensive with each generation, Rogers found.

His research tracked AWS’ g- and p-series GPU-accelerated instances over three and four generations, respectively, and found that the rapid growth in total performance, alongside the rise of demanding AI/ML workloads, has allowed cloud providers, and Nvidia, to raise prices.

“Customers are willing to pay more for newer GPU instances if they deliver value in being able to solve complex problems quicker,” he wrote.

Some of this can be chalked up to the fact that, until recently, customers looking to deploy workloads on these instances had to rent dedicated GPUs rather than smaller virtual processing units. And while Rogers noted that customers largely prefer to run their workloads this way, that may be changing.

Over the past few years, Nvidia, which dominates the cloud GPU market, has introduced features that allow customers to split GPUs into multiple independent virtual processing units, a technology called Multi-Instance GPU, or MIG for short. Debuted alongside Nvidia’s Ampere architecture in 2020, the technology enables customers to split each physical GPU into up to seven individually addressable instances.

And with the chipmaker’s Hopper architecture and H100 GPUs, announced at GTC this spring, MIG gained per-instance isolation, I/O virtualization, and multi-tenancy, which opens the door to its use in confidential computing environments.
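
To get a feel for what MIG partitioning looks like from software, here is a short sketch using Nvidia’s NVML Python bindings (the nvidia-ml-py package) to check whether MIG is enabled and walk a GPU’s MIG devices. It assumes an Ampere-or-later card that an administrator has already partitioned; the memory readout is just one property you might inspect.

    # Sketch: enumerate MIG partitions with Nvidia's NVML bindings
    # (pip install nvidia-ml-py). Assumes an Ampere-or-later GPU with
    # MIG mode already enabled and configured by an administrator.
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            gpu = pynvml.nvmlDeviceGetHandleByIndex(i)
            try:
                current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
            except pynvml.NVMLError:
                continue                  # GPU does not support MIG at all
            if current != pynvml.NVML_DEVICE_MIG_ENABLE:
                continue                  # MIG supported but switched off
            # An A100 can be carved into up to seven MIG instances.
            for m in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
                try:
                    mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, m)
                except pynvml.NVMLError:
                    continue              # no MIG device at this slot
                mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
                print(f"GPU {i} / MIG slot {m}: {mem.total // 2**20} MiB")
    finally:
        pynvml.nvmlShutdown()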

Migration migraines persist

Unfortunately for customers, taking advantage of these performance gains and cost savings isn’t without risk. In most cases, workloads aren’t automatically migrated to newer, cheaper infrastructure, Rogers noted, so cloud subscribers ought to test their applications on newer virtual machine types before diving into a mass migration.

“There may be unexpected issues of interoperability or downtime while the migration takes place,” Rogers wrote, adding: “Just as users plan server refreshes, they need to make virtual instance refreshes a part of their ongoing maintenance.”
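
Mechanically, the refresh itself is the easy part; it’s the testing that takes the time. As a rough illustration, here is the stop-modify-start cycle with the boto3 Python SDK, with a placeholder instance ID and target type.

    # Sketch: move an existing EC2 instance to a newer generation with boto3.
    # INSTANCE_ID and NEW_TYPE are placeholders; as the article advises, test
    # the workload on the new type before doing this in bulk.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
    NEW_TYPE = "m6i.large"                # newer-generation target

    # The instance type can only be changed while the instance is stopped.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    ec2.modify_instance_attribute(
        InstanceId=INSTANCE_ID,
        InstanceType={"Value": NEW_TYPE},
    )

    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
    print(f"{INSTANCE_ID} is now running as {NEW_TYPE}")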

By supporting older generations, cloud providers allow customers to upgrade at their own pace, Rogers said. “The provider doesn’t want to appear to be forcing the user into migrating applications that might not be compatible with the new server platforms.” ®

