Hyper-modular data centre start-up Vapor IO recently exited stealth mode to publicly push its vision of the hyper-collapsed data centre.
The start-up’s mission is to improve management of data centres while delivering greater intelligence at the network edge.
So what is involved in Vapor IO’s vision of the hyper-collapsed data centre?
First, there’s Vapor IO CORE (Core Operating Runtime Environment). This is designed to provide an open interface that applications and operating systems can query to make decisions about scale, efficiency and power consumption. Vapor IO has also announced the Open Data Center Runtime Environment (Open DCRE), an open-source infrastructure management and analytics platform that it has contributed to the Open Compute Project (OCP). Open DCRE includes sensors and firmware to track metrics such as power usage effectiveness (PUE) and environmental data such as humidity and airflow. This data can then be used to help allocate workloads.
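To make the workload-allocation idea concrete, here is a minimal illustrative sketch. The field names, temperature threshold, and selection logic below are assumptions for illustration only, not the actual Open DCRE API: a scheduler consuming DCRE-style telemetry might, for instance, place a workload in the most power-efficient chamber that still has thermal headroom.

```python
# Hypothetical sketch: picking a placement target from DCRE-style
# sensor readings. Field names ('pue', 'temp_c') and the threshold
# are illustrative assumptions, not part of the real Open DCRE API.

def pick_chamber(readings, max_temp_c=27.0):
    """Return the id of the coolest-running eligible chamber by PUE.

    `readings` maps a chamber id to a dict with 'pue' and 'temp_c'.
    Chambers running hotter than `max_temp_c` are skipped entirely.
    """
    eligible = {
        cid: r for cid, r in readings.items() if r["temp_c"] <= max_temp_c
    }
    if not eligible:
        return None  # nowhere safe to place the workload
    # Among eligible chambers, prefer the lowest (best) PUE.
    return min(eligible, key=lambda cid: eligible[cid]["pue"])

readings = {
    "chamber-a": {"pue": 1.4, "temp_c": 24.0},
    "chamber-b": {"pue": 1.2, "temp_c": 30.0},  # too hot, skipped
    "chamber-c": {"pue": 1.3, "temp_c": 25.5},
}
print(pick_chamber(readings))  # chamber-c
```

The point of the sketch is the one Crawford makes below: placement decisions can use efficiency and environmental data together, rather than power delivery figures alone.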
The second part of Vapor IO's vision is the Vapor Chamber. Instead of a traditional row of server racks with a hot aisle and a cold aisle, the Vapor Chamber arranges server blades in a nine-foot-diameter cylinder, so that a single fan system can manage airflow to control temperature as required.
Vapor IO claims the design can lower both data centre capex and opex, and suggests the hardware could be tied more closely to increasingly complex workloads.
"Hyper-collapsed data centre" appears to be Vapor IO’s own term: another way of describing this formation of server architecture might be to call it a metropolitan or industrial micro-embedded data centre, i.e. one that is located in a city or engineering-related location very possibly serving Internet of Things-related sensors.
Cole Crawford, chief executive and founder of Vapor IO, says there’s a role for this set-up in complex, hybrid clouds, where the data centre runs different workloads that lack knowledge of the IT equipment underneath. "This becomes progressively more problematic as we move towards the Internet of Everything,” said Crawford.
The underlying problem is workload complexity: the Internet of Things demands real-time transactional processing alongside Big Data analytics from cloud data centres.
Matt Trifiro, vice president of marketing at data centre operating system start-up Mesosphere, echoes this view: “Complexity at scale will kill the data centre. In today’s world, all applications are becoming highly available, distributed systems that require operators to orchestrate thousands of containers across a giant pool of resources; managing individual machines no longer works.”
Crawford contends that while PUE is a common standard for measuring data centre power efficiency, it doesn’t really tell us anything beyond the power delivery process.
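For context on Crawford's criticism: PUE, as defined by The Green Grid, is simply the ratio of total facility power to the power delivered to IT equipment, which is why it describes power delivery and nothing about the workloads themselves. A minimal sketch of the standard calculation:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. 1.0 is the theoretical ideal (every watt
    goes to IT gear); real facilities run higher because of
    cooling, power conversion and other overheads."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,500 kW overall to run 1,000 kW of IT load:
print(pue(1500.0, 1000.0))  # 1.5
```

Note that nothing in this ratio reflects what the IT load is actually doing, which is exactly the gap Vapor IO says it wants to close.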
“We are striving to move the industry in a direction that includes the workload as a critical data point for efficiency measurement,” he said. ®