Ten years on: How did that cloud strategy pan out?

How to avoid vendor lock-in


So the CEO is hearing all about clouds now and the financial director is looking at his pile of beans and as usual wants you to do more with less. And both think it is time for you to build or buy a cloud. Where do you start?

The answer is by being brutally honest with yourself and your bosses about everything around you.

A service provider building a greenfield cloud to peddle infrastructure or platform cloud services to augment its carrier and hosting services has it easy. It is simply a matter of examining what type of cloud it wants to supply to customers.

It picks a cloud controller fabric – VMware vCloud, the open source OpenStack or CloudStack, or maybe Windows Server 2012 and Hyper-V with System Center. This cloud doesn't have to integrate with anything but the provider’s billing systems: it just has to create a self-service portal for customers and a more sophisticated management console for the provider’s own admins.
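The provider's position can be sketched in a few lines of code. This is an illustrative toy, not any real vendor's API: all of the class and method names below are made up. The point it demonstrates is the one above: a greenfield provider's controller only has two integration surfaces, a self-service request path for customers and a hook into billing.

```python
# Toy sketch of a provider-side cloud controller (hypothetical names,
# not a real API): customers provision through a self-service call,
# every provisioning event feeds billing, and the provider keeps a
# more privileged all-tenants view for its own admins.
import uuid
from dataclasses import dataclass, field

@dataclass
class BillingSystem:
    """Stand-in for the provider's billing backend: records chargeable events."""
    charges: list = field(default_factory=list)

    def record(self, customer: str, item: str, cost: float) -> None:
        self.charges.append((customer, item, cost))

@dataclass
class CloudController:
    billing: BillingSystem
    instances: dict = field(default_factory=dict)
    PRICE_PER_VCPU = 0.05  # assumed hourly rate, purely for illustration

    def provision(self, customer: str, vcpus: int, ram_gb: int) -> str:
        """Self-service portal entry point: the customer asks, the cloud supplies."""
        vm_id = str(uuid.uuid4())
        self.instances[vm_id] = {"owner": customer, "vcpus": vcpus, "ram_gb": ram_gb}
        # The only back-office integration a greenfield provider needs:
        self.billing.record(customer, vm_id, vcpus * self.PRICE_PER_VCPU)
        return vm_id

    def admin_inventory(self) -> dict:
        """The provider's own management console view: every tenant's VMs."""
        return dict(self.instances)

billing = BillingSystem()
cloud = CloudController(billing)
cloud.provision("acme-corp", vcpus=4, ram_gb=16)
print(len(cloud.admin_inventory()), len(billing.charges))  # prints: 1 1
```

Contrast this with the enterprise case that follows: there is no legacy estate here to integrate with, which is exactly why the provider has it easy.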

Not so for you. You are sitting there with mission-critical systems – physical boxes running siloed workloads or at best virtualized machines that have a few workloads sharing capacity atop a hypervisor.

A fine mess

You probably have a mix of Risc/Unix boxes and maybe some proprietary mid-range and mainframe systems running legacy code.

You have Windows systems running Exchange Servers for email and groupware and any number of SQL Server databases and home-grown apps and third-party apps, and probably Linux systems running other infrastructure workloads such as data warehouses or analytics and maybe Java applications.

Exactly what the mess consists of hardly matters. You have a mix of apps and platforms and developers and admins with their own set of preferences and prejudices. And now the top brass wants you to turn this hodge-podge of hardware and software into a cloud.

It is understandable if you are jealous of Amazon Web Services and other clouds, says Bryan Che, general manager of the cloud business unit at Red Hat, the commercial Linux and Java platform distributor.

"The biggest motivation for CIOs is when they take a look at the complexity and inefficiencies of their own operations," he says.

"And then they take a look at the public cloud providers such as Amazon, Rackspace and IBM and on any measure they can think of – how quickly they can provision, how much it costs to get that infrastructure, how many administrators they need to manage it and so on – it is orders of magnitude different from what CIOs experience in their own data centers."

Toe-dipping

The odds are you have a lot of Windows systems in your shop, and therefore have VMware's ESXi hypervisor, inside its vSphere server virtualization toolset, virtualizing some of your Windows and Linux operating systems on x86 servers.

You could be dabbling with Red Hat's KVM-based Enterprise Virtualization hypervisor or Microsoft's Hyper-V, and where Oracle databases, middleware and applications are involved, you might even be virtualizing atop Oracle's own rendition of the open-source Xen hypervisor.

But again, based on market stats, you may have started out with VMware GSX Server and ESX Server a decade ago in your test and development environment when you first started virtualizing servers. Then you took five or six years to gradually start virtualizing more of your IT infrastructure.

It will come as no surprise that VMware wants you to do the same thing all over again with its vCloud Director tools.

"In the US five years ago, or in emerging countries such as Peru today, companies didn't start out with their first virtualized workload being Exchange Server," says Neela Jacques, director of product marketing for the cloud infrastructure suite at VMware.

"Not because Exchange Server couldn't be virtualized – it is by almost every VMware customer – but because if you start there, you need to think about how to tune storage and do backup and disaster recovery.

“By starting with test and dev with virtualization, you could ensure that you had a high degree of success, gain your skills and then move on to infrastructure and finally tier-two apps. Then maybe three years later you got to business-critical apps.

“Just as it was a big mistake to try to start virtualization with the most complex workloads, it is true for clouds too."

Jacques adds that if you have not built a cloud yet, you should start with the now-virtualized test and dev environment, adding vCloud Director and gaining experience with the self-service portal.

Then you move on to the more sophisticated cloud management tools and high-availability portions of the vCloud Suite, then maybe look at cloud-bursting and disaster-recovery features.

Pastures new

The one thing you do not want to do, says Jacques, is give in to the temptation of implementing a greenfield application – such as an electronic medical records system – on a full-on, all-singing, all-dancing cloud.

"This is where you can fall right into the trap," Jacques tells El Reg.

"It is not that you can't build a cloud for a business critical app – you absolutely can. But if you start there, you can make decisions that can hurt you in the long run, such as creating a highly scripted, management-heavy environment to meet the needs of one project.

“It makes sense not to over-complicate your first cloud. With VMware, start with vSphere and vCloud Director. If you want your cloud to do everything, we have the technology, but I don't know if you will be able to get up to speed on day one."

