Looking for simplicity in the cloud? The future is going to be open and hybrid

And here's where developers need to start

Sponsored Feature

Hybrid cloud used to be seen as a waypoint on the transition to an entirely cloud-based future. But it's becoming increasingly clear that it's likely to be the default destination for many organizations, leaving them facing tough choices about how they manage their tech budgets and their tech workforces.

Research conducted by analyst firm IDC and commissioned by Red Hat points out that enterprises are "under tremendous pressure when it comes to modern application deployment." While businesses demand that tech teams develop complex applications, the entire industry faces a shortage of developers.

So it's all the more frustrating when those skilled developers and other specialists are forced to spend their time maintaining infrastructure and legacy tech, rather than focusing on delivering modern applications, or introducing key new technologies such as data science and machine learning.

Quick cloud path to new technologies

The cloud ostensibly offers a quick path to new technologies and more easily manageable infrastructure. But the reality is more nuanced. Organizations will inevitably have legacy applications which are unsuitable for lifting and shifting onto the cloud. And data sovereignty and resiliency issues can also force companies to keep data and infrastructure confined to particular locations or regions.

Working with multiple cloud providers may address some of the geographic coverage or sovereignty concerns, but can be more complex, with different providers having different tooling or technology offerings, for example.

While going all in with a single cloud provider might provide more consistency in terms of tooling and services, concerns about vendor lock-in mean many organizations remain wary of trusting their entire operation to a single vendor and its associated software stack.

All of which has led to something of an industry consensus that hybrid cloud will be the preferred operating model for most organizations for the foreseeable future.

As IDC points out, "for developers to create applications for multi-cloud environments, the major challenge lies in inconsistency across complex technology ecosystems. The opportunity therein lies in abstraction of those complex technology ecosystems to reduce friction for developers and enable high availability of applications in production."

That generally leads to a broader reappraisal of how applications are managed and developed. If an organization needs to retain some capacity on-prem or maintain legacy code or datastores, it should still be able to take advantage of the cloud or at least enjoy a cloud-like experience. At the same time, when applications are developed for the cloud, they should be designed from the beginning to be portable between on-prem environments and the customer's choice of clouds without the need for refactoring, or for developers to retrain on new tools or platforms.

This creates the need for an "open" hybrid cloud, an IT architecture that offers workload portability, orchestration, and management across environments, including on-prem and one or more clouds. This means development teams and their businesses can utilize the optimal solution for a given workload or task, to the point of choosing a specific cloud provider for an AI workload, for example.

Container yourself

In practice, this has meant applications have become containerized, with an orchestration layer taking care of container management and deployment. Together with the use of APIs to connect containers and services, and modern development pipelines developed around continuous integration and deployment, this makes it easier to update and modernize applications – certainly compared to traditional "monoliths".

Kubernetes may have become the default for open-source container orchestration, but it can be a challenge to implement. Developers will still need other tooling, and must take care of security and authentication issues, as well as the underlying infrastructure.
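To make the orchestration idea concrete: Kubernetes, and platforms built on it, are driven by declarative manifests that describe a desired state, with the orchestration layer keeping the running system in line with it. The sketch below is a minimal, hypothetical Deployment manifest; the application name, image, and replica count are illustrative placeholders, not anything from the article:

```yaml
# Hypothetical minimal Deployment: asks the orchestrator to keep
# three replicas of a containerized web service running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
      - name: storefront
        image: registry.example.com/storefront:1.2.0  # placeholder image
        ports:
        - containerPort: 8080
```

Because the manifest describes what should run rather than how, the same definition can in principle be applied to any conformant cluster, on-prem or in a public cloud, which is what underpins the portability discussed here.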

Of course, major cloud platforms offer their own services and native tooling. Sometimes these are clearly proprietary, and sometimes they do appear in sync with wider trends in open source. But this can mask divergences between cloud platforms' offerings and the upstream projects, meaning features and tooling differ between providers. In some instances, license changes within open source projects have resulted in cloud providers offering commercial services based on an earlier version.

Both scenarios will be a concern for development teams who want to keep their applications as open, and therefore as portable, as possible.

It's worth noting that in the latest version of Red Hat's State of Enterprise Open Source report, 80 percent of IT leaders said they expected to increase their use of enterprise open source software, while over three quarters considered it "instrumental" in enabling them to take advantage of hybrid cloud architectures. 

But when it comes to adopting containers, almost half of those same IT leaders worried that they lack the necessary skills (43 percent), and almost as many were concerned that a shortage of development staff or resources would hold them back.

Favoring open source software should mean that companies have a broader talent pool to recruit from because so many teams, or individual developers, already have a strong bias towards open source software. Those preferences often influence the tools and platforms developers want to work with, the projects they want to work on, and even the employers they will consider joining.

Time to open up

The key question, then, is how developers and development teams can get access to a common application and service development and deployment experience no matter where they are working, without the configuration and management headaches that eat up precious developer bandwidth.

IDC identifies the cloud services model as the best approach to enabling organizations to "shift those valuable resources to making software that competitively differentiates, brings in revenue, and improves business operations – empowering developers to do more of what they want to be doing."

That's also the approach Red Hat has taken by putting OpenShift, its enterprise container platform, at the heart of a broad portfolio of managed cloud services. OpenShift provides container orchestration across on-prem, private, and public clouds. While most of OpenShift is self-managed (including versions for the public cloud), there are also managed versions available on AWS, Azure, GCP, and IBM Cloud. This delivers extended support, as well as tested and verified fixes for upstream container platforms like Kubernetes. It also means validated integrations, for storage and third-party plug-ins, for example, and software-defined networking.

It also provides a full range of additional integrated services that are essential for developers building cloud native applications. These include OpenShift API management, which allows developers to configure, publish and monitor APIs for their cloud-native applications.

Similarly, OpenShift Streams for Apache Kafka lets developers exploit real-time data streams while offloading the management of the underlying infrastructure. That allows them to build the real-time, analytics-driven, and scalable applications needed to power modern ecommerce, or the sort of instant decision making and fraud detection that businesses now expect.

In addition, OpenShift Database Access offers on-demand data access, sharing, storage, synchronization and analysis. OpenShift Service Registry allows teams to publish, discover and reuse artifacts built on these services, which further accelerates the development process. And OpenShift Data Science helps machine learning and AI specialists to build their models, and ease the deployment of AI and ML applications to production.

Customers using the managed versions of OpenShift on the public cloud also get access to Red Hat's global Site Reliability Engineering team, which provides the proactive management and automated scaling that underpins resilient cloud native applications.

Trinity of tactical benefits

There are other benefits too. Red Hat's dynamic approach to the underlying infrastructure ensures that customers are only using the capacity they need when they need it, for example. So scaling up an application around a major event or key business period can be automated, with resource provisioning levels returning to their previous state immediately once the surge in demand has passed.
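As a rough sketch of what that automated scaling can look like in Kubernetes terms, a horizontal autoscaler watches a resource metric, grows the replica count during a demand surge, and shrinks it back afterwards. The Deployment name, bounds, and CPU target below are hypothetical placeholders, not figures from the article:

```yaml
# Hypothetical autoscaler: grows a Deployment named "storefront" from
# 3 up to 20 replicas when average CPU utilization exceeds 70 percent,
# and shrinks it back once the surge in demand has passed.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```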

Because these are all managed services, enterprises and other organizations don't need to assign responsibility for the day-to-day management of the platform to their dev or ops teams. Developers can get resources up and running quickly, without waiting for infrastructure, or indeed for the experts to manage it.

That's important when it comes to addressing some of the key challenges organizations typically face in forging ahead with their digital transformation: overcoming the in-house skills gap, and the technical debt that sometimes arises when overworked developers rush to finalize an application only to spend more precious time refactoring it later.

A managed service also means organizations get to enjoy the three key tactical benefits for development teams identified by IDC. Firstly, it allows development teams to "get out of the business of infrastructure administration", and to focus on developing features that deliver value for the business, and for end users.

Multi-cloud and hybrid environments already account for the bulk of the market, with OpenShift designed to provide the common platform for flexible applications and services that work seamlessly across both on- and off-prem infrastructure. That consistency abstracts away complexity, making developers more productive, while their applications are more likely to be resilient and fault tolerant as a result.

Last, but definitely not least, it provides a consistent experience that simply makes for happier developers.

If resources – including Dev and Ops team members – are not being squandered and budgeting becomes more transparent, surely this keeps the CEO and CFO happy too?

Sponsored by Red Hat.
