The software-defined data centre concept has attracted considerable attention and hype, with its promise of reducing hardware costs and automating control of infrastructure.
Backers of the idea say the SDDC will enable policy-driven management of resources, allowing applications to be deployed across commodity hardware to suit the demands of particular workloads.
The underlying infrastructure will essentially be invisible to those managing it, delivering IT “as a service” to the rest of the business.
But while it may offer a neat vision for the future, reaping the benefits is likely to be a huge task for most firms.
It will require new technologies and different approaches to data centre design and management. Despite the flurry of vendor activity, many IT leaders are waiting to see whether the SDDC is mere marketing buzz, or a glimpse of the future.
The problem is that the term SDDC is as much a general ideal as a specific set of technologies. Currently, it’s described as having several key elements: software-defined compute, storage and networking, overseen by a management layer.
The SDDC term was coined in 2012 by Steve Herrod, then CTO of virtualisation giant VMware, who at the time described the typical data centre as “a history museum”. SDDC has since been front and centre of the firm’s product marketing message.
VMware arguably has the most riding on its uptake and, more importantly, the adoption of the underlying technology. Having led the way with server virtualisation, VMware has been forced to seek new avenues of growth as the market continues to mature.
That push has involved buying network virtualisation startup Nicira for $1.3bn in 2012, and addressing storage virtualisation through the acquisition of Virsto and the launch of its vSAN product last year. It has also built out its cloud management services with the purchase of DynamicOps, adding to its existing vCloud product set.