Serverless neither magically faster nor cheaper, dev laments

Seems there is some work involved, as one AWS punter discovers

Adopting the latest hip technology – like "going serverless" – does not always work out as well as we'd hope.

Take AWS customer Einar Egilsson, who decided to migrate his .NET Core web API application from a classic setup using Linux VMs and Elastic Beanstalk (which scales resources up and down as required) to serverless with AWS Lambda – but found it both slower and hugely more expensive.

Egilsson said he was attracted by faster deployment as well as wanting to experiment with the latest tech. On Elastic Beanstalk, .NET Core apps are only supported in Docker containers, which take several minutes to deploy, whereas Lambda supports them natively.

So he set up a parallel deployment on Lambda, also using API Gateway to publish and manage his API application.

Deployment time went down by around 20 per cent, but he was disappointed to find performance was around 15 per cent slower than before.

The shocker, though, was the cost. His app receives around 10 million requests per day. On Elastic Beanstalk, the cost was around $164 per month. On Lambda and API Gateway, it was going to be around $1,350 per month, had he not spotted the mounting bill and reverted.

Much of the cost is for API Gateway, around $1,000 per month for this usage, and until recently API Gateway was the only way to publish an HTTP endpoint for a Lambda function.
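A back-of-envelope check shows how the numbers get to that order of magnitude. The list prices below and the assumed memory allocation and average duration are illustrative assumptions, not Egilsson's actual figures:

```python
# Rough monthly cost estimate for a Lambda + API Gateway setup.
# All prices and workload parameters here are assumptions for illustration.
REQUESTS_PER_DAY = 10_000_000
DAYS_PER_MONTH = 30
req_month = REQUESTS_PER_DAY * DAYS_PER_MONTH  # 300M requests/month

API_GW_PER_MILLION = 3.50        # assumed REST API request price
LAMBDA_PER_MILLION = 0.20        # assumed Lambda invocation price
LAMBDA_GB_SECOND = 0.0000166667  # assumed Lambda compute price
MEMORY_GB = 0.5                  # assumed 512 MB function
AVG_DURATION_S = 0.1             # assumed 100 ms average duration

api_gw = req_month / 1e6 * API_GW_PER_MILLION
lambda_req = req_month / 1e6 * LAMBDA_PER_MILLION
lambda_compute = req_month * AVG_DURATION_S * MEMORY_GB * LAMBDA_GB_SECOND

total = api_gw + lambda_req + lambda_compute
print(f"API Gateway: ${api_gw:,.0f}  Lambda: ${lambda_req + lambda_compute:,.0f}  Total: ${total:,.0f}/month")
```

On these assumed figures the per-request API Gateway charge alone comes to roughly $1,050 a month, with Lambda adding a few hundred more, which is broadly consistent with the bill Egilsson saw.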

After Egilsson posted about his experience, the tech community was quick to dive in with suggestions. For example, it is no longer necessary to use API Gateway if you do not need its features: since November 2018, Lambda functions can sit behind a standard Application Load Balancer, which should be much cheaper. Others noted that reserved instances would also save money.
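An Application Load Balancer is billed per hour plus per capacity unit (LCU) rather than per request, which is why it can work out far cheaper at this volume. A hedged sketch, using assumed list prices and a guessed connection profile rather than anything from Egilsson's setup:

```python
# Rough ALB monthly cost for ~10M requests/day.
# Prices and the traffic/connection profile are assumptions for illustration.
HOURS_PER_MONTH = 730
ALB_PER_HOUR = 0.0225   # assumed fixed hourly charge
LCU_PER_HOUR = 0.008    # assumed charge per LCU-hour

avg_rps = 10_000_000 / 86_400   # ~116 requests/second on average
lcus = avg_rps / 25             # assume new connections/sec is the dominant LCU dimension

fixed = HOURS_PER_MONTH * ALB_PER_HOUR
usage = HOURS_PER_MONTH * lcus * LCU_PER_HOUR
print(f"ALB: ~${fixed + usage:,.0f}/month")
```

On these assumptions the load balancer comes in at tens of dollars a month rather than around a thousand, though real LCU consumption depends on connection counts, bandwidth and rule evaluations, so the figure is only indicative.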

Then there is this insight from Hacker News:

Serverless is not a replacement for cloud VMs/containers. Migrating your Rails/Express/Flask/.Net/whatever stack over to Lambda/API Gateway is not going to improve performance or costs.

You really have to architect your app from the ground up for serverless by designing single-responsibility microservices that run in separate lambdas, building a heavy javascript front-end in your favorite framework (React/Ember/Amber/etc), and taking advantage of every service you can (Cognito, AppSync, S3, Cloudfront, API Gateway, etc) to eliminate the need for a web framework.
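The "separate lambdas" model the commenter describes means each endpoint becomes its own small handler rather than a route inside a web framework. A minimal sketch of one such single-responsibility function (the names and fields are illustrative, not from Egilsson's app):

```python
import json

# One single-responsibility Lambda handler: it only fetches an item.
# In a serverless-first design, listing, creating and deleting items
# would each live in their own separate function.
def get_item_handler(event, context):
    item_id = (event.get("pathParameters") or {}).get("id")
    if item_id is None:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    # A real function would read from a data store here; this stub echoes the id.
    return {"statusCode": 200, "body": json.dumps({"id": item_id})}
```

Because the handler is a plain function taking an event dict, it can be invoked and tested locally without any framework or server process, which is part of the appeal of the model.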

Egilsson's experience is worth noting, though. First, there is always a trade-off: the scalability and low maintenance of a serverless solution do not come for free. The presumption is that you benefit from not having to think about server or VM maintenance.

Second, it is essential to do the sums. Services that seem to perform similar functions can be very different in cost. The mitigating factor is that cloud services, unlike misguided hardware purchases, are relatively easy to turn off. ®
