
Database consolidation is a server gain. Storage vendors should butt out

Cost benefits are oversold – and may be completely wrong

Register Debate Welcome to The Register Debate in which we pitch our writers against each other on contentious topics in IT and enterprise tech, and you – the reader – decide the winning side. The format is simple: a motion is proposed, for and against arguments are published today, then another round of arguments on Wednesday, and a concluding piece on Friday summarizing the brouhaha and the best reader comments.

During the week you can cast your vote using the embedded poll, choosing whether you're in favor or against the motion. The final score will be announced on Friday, revealing whether the for or against argument was most popular. It's up to our writers to convince you to vote for their side.

For this debate, the motion is: Consolidating databases has significant storage benefits, therefore everyone should be doing it.

And so, kicking off this week's debate and arguing AGAINST the motion, is CHRIS MELLOR...

The idea that consolidating databases has significant storage benefits and therefore everyone should be doing it is missing the point.

A database consolidation exercise could involve a migration from a disk array to an all-flash array. This could well deliver storage benefits, such as significantly lower data centre power and cooling needs, reduced rack space take-up and faster performance. But this should not be confused with actual database consolidation: it is database storage acceleration. All-flash array suppliers may well say it is worth doing. DRAM-caching disk array vendors could have a different view. But this is a database storage issue, quite separate from database consolidation.

Database consolidation is fundamentally about database server consolidation. And it’s made possible by server CPUs having many more cores than before.

Look at the progression of HPE ProLiant DL360 core counts: Gen 7 DL360s supported just six cores, Gen 8 increased this to 12, Gen 9 took the total to 22, and the latest Gen 10 DL360s support up to 28 cores.

That means, in general, that a single later-generation DL360 can do the same processing work as several earlier DL360s.
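A back-of-the-envelope comparison using the core counts above makes the point. This is a sketch on cores alone: it ignores per-core performance gains between generations, so it actually understates the consolidation ratio.

```python
# Per-server core counts from the DL360 progression above
cores_per_server = {"Gen7": 6, "Gen8": 12, "Gen9": 22, "Gen10": 28}

gen10 = cores_per_server["Gen10"]
for gen, cores in cores_per_server.items():
    # How many boxes of each older generation one Gen10 box matches, on cores alone
    print(f"One Gen10 DL360 has the cores of {gen10 / cores:.1f} x {gen} DL360s")
```

On this crude measure a single Gen 10 box carries the core count of more than four Gen 7 boxes.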

It’s not just cores that are multiplying. Storage devices are getting bigger. Disk drives are bulking up with 10, 12, 14, 16 and 18TB drives, and 20TB models are getting ready for prime time. SSDs are growing too, as 3D NAND adds layers: 64 moving to 96, with 128 layers now appearing, while the number of bits per cell increases from three (TLC) to four (QLC).

The PCIe bus which hooks storage devices up to memory is doubling in speed from PCIe Gen 3 to Gen 4. Memory speed and capacity are increasing with the DDR4 to DDR5 transition. Individual servers are turning into far faster and more capable data processing powerhouses than those in use five years ago, with 10-year-old servers possibly having less CPU power than the latest mobile phone CPUs.

Imagine a business’s relational database system running on a group of servers which access a shared storage area network (SAN) array across an 8Gbit/s Fibre Channel network. It could be an iSCSI network, but the ideas we’re bringing out stay the same.

The servers run the database application code and read and write database records fetched from or sent to the SAN.

Let’s suppose you have 20 four-core database servers accessing a 200TB SAN and consolidate them onto eight 10-core servers. You have got rid of 12 servers, along with their power draw, cooling needs and rack space. The eight that remain do all the work. Great. But they still access the 200TB SAN. The database storage resource has not changed capacity at all, so there is no saving there. In fact there could well be added cost, because the business may well need to change its storage networking infrastructure.

If our consolidation exercise results in just eight servers doing the same work as the original 20, then each will need 2.5 times the Fibre Channel bandwidth of a single original server. That means upgrading the SAN and server Fibre Channel HBAs and the Fibre Channel switches in the storage network from 8Gbit/s to faster 16Gbit/s. In this aspect of database consolidation there is likely to be increased storage networking cost.
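The sums behind that hypothetical scenario are simple enough to sketch out. The figures below are the article's worked example, not real deployment data:

```python
# Scenario from above: 20 four-core servers consolidated onto
# eight 10-core servers, all fronting the same 200 TB SAN.
old_servers, old_cores_each = 20, 4
new_servers, new_cores_each = 8, 10

# Total compute capacity is unchanged: 80 cores either way
assert old_servers * old_cores_each == new_servers * new_cores_each

servers_removed = old_servers - new_servers        # 12 boxes gone: power, cooling, rack space saved
bandwidth_multiple = old_servers / new_servers     # each survivor carries 2.5x the FC traffic
san_capacity_tb = 200                              # storage capacity: completely unchanged

print(f"Servers removed: {servers_removed}")
print(f"Per-server bandwidth needed: {bandwidth_multiple}x")
print(f"SAN capacity before and after: {san_capacity_tb}TB")
```

The 2.5x per-server traffic figure is what pushes the 8Gbit/s Fibre Channel kit towards a 16Gbit/s upgrade.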

Consolidation benefits

So database consolidation does not affect storage capacity and may well increase storage costs. The benefit is concentrated on the servers. This has been well known for years.

A July 2020 “Best Practices for Database Consolidation” Oracle paper [PDF] states: “The primary purpose of database consolidation is to reduce the cost of database infrastructure by taking advantage of more powerful servers that have come onto the market that include dozens of processor cores in a single server. … Database consolidation allows more databases to run on fewer servers, which reduces infrastructure, but also reduces operational costs.”

The paper asserts: “Cost reduction is the primary business driver of database consolidation,” followed by simplified administration, improved security and isolation.

What the Oracle paper does not say is that consolidating databases such as Oracle’s or Microsoft’s SQL Server on fewer servers also saves licensing costs where database instances are licensed per server.

Database consolidation onto fewer servers saves server cost because you need fewer servers, and also saves database instance licensing expense as you need fewer per-server instance licenses.
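To see the scale of the licensing effect, here is an illustrative-only sketch. The per-server licence figure is an assumption for arithmetic's sake, not a vendor price, and real database licensing (per-core, per-socket, per-named-user) varies by vendor and contract:

```python
# Hypothetical per-server licence cost -- an assumed round number, not a real price
licence_per_server = 10_000

# Server counts from the earlier consolidation scenario
old_servers, new_servers = 20, 8

# Where instances are licensed per server, 12 fewer servers means 12 fewer licences
saving = (old_servers - new_servers) * licence_per_server
print(f"Licence saving: ${saving:,}")  # prints: Licence saving: $120,000
```

Even at modest assumed prices, per-server licence savings stack up quickly alongside the hardware savings.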

There is no storage benefit here, but the potentially significant server-based benefits make database consolidation an attractive idea that can serve you right. ®

Cast your vote below. You can track the progress of the debate right here.

