AWS chops data transfer fees with a massive extension of its free tier – two months after rival Cloudflare previewed R2 Storage

Customer: 'I should send Cloudflare a Christmas card to say thanks'


AWS has expanded its free tier for data transfer: the monthly allowance for transfer out to the internet rises from 1GB to 100GB, and for CloudFront, its content delivery network, from 50GB to 1TB.

According to AWS chief evangelist Jeff Barr, "as a result of this change millions of AWS customers worldwide will no longer see a charge for these two categories of data transfer." The change is effective from 1 December.

The free allocation will apply to services such as S3 (Simple Storage Service) and to web applications running on EC2 (Elastic Compute Cloud) VMs. The AWS GovCloud and China regions are excluded.

Other details: the number of free HTTP and HTTPS requests to CloudFront is to be increased from 2 million to 10 million, and the offer of two million free CloudFront Functions invocations per month is no longer limited to the first year.

While Barr attributes the change to a "tradition of AWS price reductions," many industry watchers link it to competition – not least from Cloudflare, which in September previewed R2 Storage, an object store that implements Amazon's S3 API but charges no egress fees.
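That S3 compatibility is the hook: in principle, existing S3 tooling can be pointed at R2 simply by swapping the endpoint and credentials. The following is a minimal sketch, and only a sketch – R2 was in preview at the time, and the endpoint format and placeholder keys are assumptions based on Cloudflare's stated S3 compatibility rather than tested details:

```python
import boto3

# A stock S3 client aimed at an S3-compatible store. The endpoint URL
# format and the placeholder credentials are assumptions for illustration;
# R2 was still in preview when this was written.
r2 = boto3.client(
    "s3",
    endpoint_url="https://<account_id>.r2.cloudflarestorage.com",
    aws_access_key_id="<r2_access_key_id>",
    aws_secret_access_key="<r2_secret_access_key>",
)

# The point of API compatibility: application code is unchanged, only
# the endpoint and keys differ from a plain S3 setup.
r2.upload_file("report.pdf", "my-bucket", "report.pdf")
print(r2.list_objects_v2(Bucket="my-bucket")["KeyCount"])
```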

In July, Cloudflare accused AWS of excessive rates for data egress, stating that customers in North America and Europe pay 80 times what the service costs AWS to operate.

Cloudflare CEO Matthew Prince said on Twitter, in response to the AWS news, "Well, that was fast!! I'm doing the dance of joy! Great news for our mutual customers. And the next step toward the inevitable end of cloud egress [fees]."

A customer commented on Hacker News: "I use 500GB–1TB per month on CloudFront, costing about $50–100 per month, and I was going to move this over to Cloudflare to take advantage of their savings. However, this AWS change will basically wipe out my entire CloudFront bill. I should send Cloudflare a Christmas card to say thanks."

The move does look designed to stem the bleed of customers to Cloudflare. There are caveats, though. One is that this is a free tier: customers transferring more than these amounts will still pay AWS's steep fees on the excess, so for heavy users the free tier amounts to a discount rather than an exit. Enterprises for whom content distribution is core to the business may still do better looking elsewhere – Netflix, for example, built its own CDN, connecting directly to ISPs around the world.
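To put rough numbers on that, here is a minimal sketch of the allowance arithmetic. The per-GB rate is an assumption for illustration – around $0.085/GB, roughly CloudFront's published first-tier US rate at the time – and actual pricing varies by region and volume:

```python
def monthly_bill(usage_gb: float, free_gb: float, rate_per_gb: float) -> float:
    """Cost of data transfer after the free allowance is deducted."""
    return max(0.0, usage_gb - free_gb) * rate_per_gb

CLOUDFRONT_RATE = 0.085  # $/GB, assumed first-tier US rate

# The Hacker News reader quoted above, at ~800GB/month through CloudFront:
print(monthly_bill(800, 50, CLOUDFRONT_RATE))    # old 50GB tier: ~$63.75
print(monthly_bill(800, 1024, CLOUDFRONT_RATE))  # new 1TB tier: $0.00

# A heavier user at 5TB/month still pays, just less - the free tier
# behaves as a discount rather than an exit:
print(monthly_bill(5 * 1024, 50, CLOUDFRONT_RATE))    # old: ~$430.95
print(monthly_bill(5 * 1024, 1024, CLOUDFRONT_RATE))  # new: ~$348.16
```

On those assumed rates, the reader's quoted $50–100 range checks out, and disappears entirely under the new allowance.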

Second, AWS has plenty of other fees to fall back on. S3, for example, charges fees for storage and for API operations such as PUT and GET requests. There are further charges for analytics, Lambda integration, and so on.
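Those per-operation fees are tiny individually but compound at scale, which is exactly the strategy Wasabi describes below. A back-of-the-envelope sketch, using assumed rates of about $0.005 per 1,000 PUTs and $0.0004 per 1,000 GETs (roughly S3's published standard-tier request pricing at the time; check the current price list before relying on them):

```python
# Assumed S3 standard-tier request rates; verify against current pricing.
PUT_PER_1K = 0.005   # $ per 1,000 PUT/COPY/POST/LIST requests
GET_PER_1K = 0.0004  # $ per 1,000 GET/SELECT requests

def request_cost(puts: int, gets: int) -> float:
    """Monthly API-operation bill for a given request mix."""
    return puts / 1000 * PUT_PER_1K + gets / 1000 * GET_PER_1K

# One busy application, 10 million PUTs and 100 million GETs a month,
# barely notices $90 - multiplied across hundreds of thousands of
# customers, it is a steady revenue stream.
print(f"${request_cost(10_000_000, 100_000_000):,.2f}")  # $90.00
```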

Rival storage service Wasabi, which this week announced a new storage region in London, claimed the S3 charges follow a strategy of "make the transaction charges so ridiculously small that customers don't notice, don't care, or can't figure out how to calculate them. Then do this for all of your hundreds of thousands of customers and trillions of objects, and kick back and watch your coffers overflow as time goes by." (Wasabi charges only for storage and claims to be 80 per cent cheaper than S3.)

Cloudflare has been less forthcoming about operation fees for R2, saying only that "R2 will zero-rate infrequent storage operations under a threshold — currently planned to be in the single digit requests per second range. Above this range, R2 will charge significantly less per-operation than the major providers."
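Read at face value, that describes a model where operations below a sustained requests-per-second threshold are free and the remainder are billed per operation. The sketch below is only one interpretation of that sentence: Cloudflare had published no R2 price list, so the threshold and rate here are placeholders.

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def r2_ops_cost(ops_per_month: int, free_rps: float, rate_per_million: float) -> float:
    """Hypothetical R2 operations bill: free up to a sustained
    requests-per-second threshold, a per-operation rate beyond it."""
    free_ops = free_rps * SECONDS_PER_MONTH
    return max(0, ops_per_month - free_ops) / 1_000_000 * rate_per_million

# A "single digit" threshold of, say, 5 req/s would zero-rate roughly
# 13 million operations a month before any charge applied.
print(f"{5 * SECONDS_PER_MONTH:,}")     # 12,960,000
print(r2_ops_cost(20_000_000, 5, 2.0))  # placeholder rate: ~$14.08
```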

Is Prince really celebrating, or will Cloudflare worry about R2 losing some of its appeal? Questioned on the point in an earnings call earlier this month, Prince said that if AWS were to take egress fees to zero, "it would force us to continue to innovate in that space, just like it would force everyone else to innovate in the space."

Prince also said that interest in R2 was "off the charts". He hopes for Cloudflare to integrate with all the hyperscale public cloud providers and to be "the fabric that connects all of that together." ®


