The container-cloud myth: We're not in Legoland anymore

Why interconnectivity in the cloud is tougher than just stacking bricks

Everything is being decoupled, disaggregated and deconstructed. Cloud computing is breaking apart our notions of desktop and server, mobile is decoupling the accepted concept of wired technology, and virtualisation is deconstructing our understanding of what a network was supposed to be.

Inside this maelstrom of disconnection, we find this thing we are supposed to call cloud migration: methods, tools and protocols that promise to carry us into the new world of virtualisation, delivering "seamless" migration and robust results.

It turns out that taking traditional on-premises application structures into hosted virtualised worlds is way more complex than was first imagined.

Questions of application memory and storage allocation are fundamentally different in cloud environments. Attention must be paid to application Input/Output (I/O) and transactional throughput. The location of your compute engine matters a lot more when your data can live on-premises, in a private, hybrid or public cloud – or, heaven forbid, in some combination of the three.
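A back-of-envelope model makes the data-location point concrete. The sketch below estimates how long moving the same dataset takes from three hypothetical locations; the latency and bandwidth figures are illustrative assumptions, not measurements from any real deployment:

```python
def transfer_time(size_gb, latency_ms, bandwidth_gbps):
    """Rough estimate of time to move a dataset (seconds):
    round-trip latency plus size divided by link bandwidth."""
    return latency_ms / 1000.0 + (size_gb * 8) / bandwidth_gbps

# Illustrative figures only: the same 100 GB dataset, three assumed locations.
local_san    = transfer_time(100, 0.5, 16)  # on-premises storage network
private_link = transfer_time(100, 5, 10)    # dedicated link to a private cloud
public_wan   = transfer_time(100, 40, 1)    # public cloud over the internet

print(f"on-prem: {local_san:.0f}s, private: {private_link:.0f}s, public: {public_wan:.0f}s")
```

Even with generous assumptions, the gap is dominated by bandwidth, not latency – which is why putting the compute engine next to the data matters more than any single network tweak.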

Essentially, the parameters that govern every level and layer of IT can take on a different shape. The primordial spirit of IT has changed: decoupling creates a new beast altogether.

But, as you may have heard, the new world of containers, software-defined infrastructure and microservices is supposed to have come to our rescue. If you believe all the hype, it’s like the power of Lego building blocks has arrived – a new way to interlock and assemble our decoupled component elements of computing.

In Legoland (the concept, not the theme park), objects can be built, disassembled and then rebuilt into other things or even combined into other objects. The concept of meshed interlocking connectivity in Lego is near perfect. Or at least it is in the basic bricks and blocks model until the accoutrements come along.

Clicking & sticking slickness

The danger comes about when people start talking about interconnectivity in the cloud (and the Big Data that passes through it) and likening new "solutions" to the click-and-stick ease we enjoy with Lego. It's just not that simple.

For all the advantages of microservices, they bring with them a greater level of operational complexity.

Samir Ghosh, chief executive of Docker-friendly platform-as-a-service provider WaveMaker, reckons that compared with a "monolithic" application (one built and deployed as a single unit), a microservices-based application may have dozens, hundreds, or even thousands of services, all of which must be managed through to production – and each of those services requires its own API.
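Ghosh's point about scale can be made concrete with a trivial counting sketch. The formula and the three-environment assumption below are illustrative, not taken from WaveMaker or any real estate, but they show how the operational surface balloons as the service count grows:

```python
def operational_surface(n_services, environments=3):
    """Rough counts of what must be managed for a microservices estate:
    one API per service, one deployment per service per environment
    (dev/test/prod assumed), and up to n*(n-1) possible directed
    service-to-service call paths to secure and monitor."""
    return {
        "apis": n_services,
        "deployments": n_services * environments,
        "call_paths": n_services * (n_services - 1),
    }

print(operational_surface(1))    # a monolith: one API, one deployable per environment
print(operational_surface(200))  # 200 services: 200 APIs, 600 deployments, 39,800 paths
```

The quadratic term is the killer: doubling the service count roughly quadruples the number of inter-service paths that can fail.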

Chris Stolte, chief development officer and co-founder of Tableau Software, said the upcoming 9.1 release of his firm’s data visualisation product is specifically engineered with new connection intelligence.

He claims the release includes "significant investments" in enterprise features and a web data connector that can reach a "limitless number" of sources, including Facebook, Twitter, Google Sheets, SAP, Google Cloud SQL, Amazon Aurora and Microsoft Azure.
