Microsoft adds silicon muscle to latest Azure SQL database configs
Intel's 'Ice Lake' and AMD's 'Milan' chips bump up speeds and feeds
Microsoft is bulking up its Azure SQL database services on the back of Intel's "Ice Lake" Xeon and AMD's "Milan" Epyc server chipsets.
The company this week started a public preview of its standard-series – previously known as Gen 5 – provisioned databases and elastic pools that can now scale up to 128 vCores and 625GB of memory, a jump over the previous maximum of 80 vCores and 415GB of memory.
The expansion is in line with user demand for greater scalability in the service, says Scott Kim, principal product manager for Azure SQL Database.
"More cores improve workload throughput, and newer chipsets improve single core performance," Kim wrote in a blog post.
The 128-vCore compute size runs on Intel's Xeon Platinum 8370C and AMD's Epyc 7763v chipsets. With the new size, the databases and elastic pools deliver maximum input/output operations per second (IOPS) of 327,680 and 409,600 respectively. Microsoft says this is the highest of any Azure SQL compute size.
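Those headline figures divide evenly by the new core count – a quick back-of-the-envelope check (plain arithmetic on the numbers above, not an official Azure quota):

```python
# Per-vCore IOPS implied by the maximum figures Microsoft quotes
# for the new 128-vCore compute size.
vcores = 128
max_iops = {"databases": 327_680, "elastic pools": 409_600}

for offering, iops in max_iops.items():
    per_core = iops // vcores
    print(f"{offering}: {iops:,} total -> {per_core:,} IOPS per vCore")
# databases work out to 2,560 IOPS per vCore; elastic pools to 3,200
```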
Elastic pools let enterprises manage and scale multiple databases with unpredictable usage demands. All the databases in a pool sit on a single server and share resources, in theory letting SaaS developers find the cheapest way to squeeze out the most performance.
At the same time, the number of concurrent workers for general-purpose and business-critical databases and pools also increases to 12,800 (for the databases) and 13,440 (for the pools).
Right now, the 128-vCore public preview is only supported in 10 regions, including four in the US, two in Europe, and one each in Canada, Australia, Japan, and the UK. Kim also wrote that zone redundancy for 128-vCore compute sizes will be supported early next year.
In addition, the renaming of the Gen 5 hardware to standard-series only applies to the Azure portal and related documentation. For developers using the REST API or an ARM template to create a SQL database, existing scripts still work, according to Microsoft.
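As an illustration of that backward compatibility, the `sku` block of an ARM template can keep referring to the old `Gen5` family name – a sketch based on the public `Microsoft.Sql/servers/databases` resource schema, with a hypothetical server and database name:

```json
{
  "type": "Microsoft.Sql/servers/databases",
  "apiVersion": "2021-11-01",
  "name": "myserver/mydb",
  "location": "eastus",
  "sku": {
    "name": "GP_Gen5",
    "tier": "GeneralPurpose",
    "family": "Gen5",
    "capacity": 128
  }
}
```

The `family` value stays `Gen5` even though the portal now labels the same hardware standard-series; `capacity` is the vCore count.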
Microsoft is also bringing more processors to the new premium-series hardware for Azure SQL Hyperscale databases, which are designed for compute- and data-intensive workloads. The configurations are in preview.
Like the 128-vCore standard-series option, the premium configurations – both the hyperscale and memory-optimized series, the latter also in preview – are based on Intel's Xeon 8370C and AMD's Epyc 7763v chips but offer "significantly improved performance and scalability over the [current] standard-series… hardware offerings," Kim wrote in a companion blog post.
A key difference with the memory-optimized premium-series is its 10.2GB of memory per core – twice that of the other premium-series hyperscale offering – and 830GB per instance, along with a 40 percent higher price tag.
"Due to the added memory and lower price, [the] premium-series memory optimized option is a great alternative to M-series hardware in Azure SQL Database, which is to be retired in September 2023," Kim wrote.
Microsoft is also trying to make the premium-series hyperscale configuration more attractive to move to by setting its per-vCore price close to the standard-series', with Kim writing that "it absolutely does make sense to move to premium."
A sample demo of a HammerDB benchmark running TPC-C-like workloads showed the premium-series hyperscale hardware delivering about a 20 percent performance improvement for a similar price, he wrote.
Both premium-series preview configurations are supported in three US regions and one each in Canada and the UK. Like the standard-series public preview, zone redundancy will come next year, as will support for the Azure SQL Database maintenance window.
®