Amazon finally opens doors to its serverless analytics

Still managing app servers by hand? What is this, 2012?


If you want to run analytics in a serverless cloud environment, Amazon Web Services reckons it can help you out, all while reducing your operating costs and simplifying deployments.

As is typical for Amazon, the cloud giant previewed its EMR Serverless platform – EMR once stood for Elastic MapReduce – at its Re:Invent conference in December, and only opened the service to the public this week.

AWS is no stranger to serverless with products like Lambda. However, its EMR offering specifically targets analytics workloads, such as those using Apache Spark, Hive, and Presto.

Amazon’s existing EMR platform already supports deployments on VPC clusters running in EC2, Kubernetes clusters in EKS, and on-prem deployments running on Outposts. And while this provides greater control over the application and compute resources, it also requires the user to manually configure and manage the cluster.

What’s more, the compute and memory resources needed for many data analytics workloads are subject to change depending on the complexity and volume of the data being processed, according to Amazon.

EMR Serverless promises to eliminate this complexity by automatically provisioning and scaling compute resources to meet the demands of open-source workloads. As more or fewer resources are required to accommodate changing data volumes, the platform automatically adds or removes workers. This, Amazon says, ensures that compute resources aren’t underutilized or over-committed. And customers are only charged for the time and number of workers required to complete the job.

Customers can further control costs by specifying a minimum and maximum number of workers and the virtual CPUs and memory allocated to each worker. Each application is fully isolated and runs within a secure instance.
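Setting those bounds happens when the application is created. Here's a rough sketch using the AWS CLI's `emr-serverless create-application` command; the application name, release label, worker counts, and vCPU/memory figures are all illustrative, and running it requires configured AWS credentials:

```shell
# Create an EMR Serverless Spark application with capacity bounds.
# Worker counts and vCPU/memory sizes below are illustrative only.
aws emr-serverless create-application \
    --name my-spark-app \
    --type "SPARK" \
    --release-label emr-6.6.0 \
    --initial-capacity '{
        "DRIVER":   {"workerCount": 1,  "workerConfiguration": {"cpu": "2 vCPU", "memory": "4 GB"}},
        "EXECUTOR": {"workerCount": 10, "workerConfiguration": {"cpu": "4 vCPU", "memory": "8 GB"}}
    }' \
    --maximum-capacity '{"cpu": "200 vCPU", "memory": "400 GB"}'
```

The `--maximum-capacity` ceiling caps how far autoscaling can grow the bill, regardless of how large the job gets.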

According to Amazon, these capabilities make the platform ideal for a number of data pipeline, shared cluster, and interactive data workloads.

By default, EMR Serverless workloads are configured to start when jobs are submitted and stop after the application has been idle for more than 15 minutes. However, customers can also pre-initialize workers to reduce startup time.
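The start/stop behavior can be tuned on an existing application. A hedged sketch with the AWS CLI's `update-application` command; the application ID is a placeholder:

```shell
# Adjust auto-start and auto-stop behavior on an existing application.
# The application ID below is a placeholder.
aws emr-serverless update-application \
    --application-id 00f1abcdexample \
    --auto-start-configuration '{"enabled": true}' \
    --auto-stop-configuration '{"enabled": true, "idleTimeoutMinutes": 15}'
```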

EMR Serverless also supports shared applications using Amazon’s identity and access management roles. This enables multiple tenants to submit jobs using a common pool of workers, the company explained in a release.

At launch, EMR Serverless supports applications built using the Apache Spark and Hive frameworks.

Regardless of how the application is deployed, workloads are managed centrally from Amazon’s EMR Studio. The control plane also allows customers to spin up new workloads, submit jobs, and review diagnostics data. The service also integrates with AWS S3 object storage, enabling Spark and Hive logs to be saved for review.
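Job submission and log routing can also be done from the CLI rather than EMR Studio. A sketch using `start-job-run`; the application ID, IAM role ARN, bucket, and script path are illustrative placeholders:

```shell
# Submit a Spark job to an EMR Serverless application and route its
# logs to S3. All IDs, ARNs, and S3 paths below are placeholders.
aws emr-serverless start-job-run \
    --application-id 00f1abcdexample \
    --execution-role-arn arn:aws:iam::123456789012:role/EMRServerlessJobRole \
    --job-driver '{"sparkSubmit": {"entryPoint": "s3://example-bucket/scripts/job.py"}}' \
    --configuration-overrides '{"monitoringConfiguration":
        {"s3MonitoringConfiguration": {"logUri": "s3://example-bucket/logs/"}}}'
```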

EMR Serverless is available now in Amazon’s North Virginia, Oregon, Ireland, and Tokyo regions. ®

