Devs face a choice: Go all in with a single cloud giant's toolset – or bring together best tools from a bunch of vendors

Here are some compelling reasons why you should take that latter route

Sponsored DevOps has never been more important, but the dominance of a few public cloud platforms and code repositories has presented developers with a choice. Is it better to adopt a single-vendor DevOps toolset in the hope of better integration, or to choose best-of-breed tools from multiple vendors?

There are compelling reasons to take the latter approach, not only because of the excellence of specialist tools, but also to avoid the risk of lock-in to a specific platform. History tells us that lock-in, where it succeeds, results in higher prices and reduced innovation. Companies that achieve effective lock-in tend to keep prices high and licensing complex and expensive, knowing that their customers have no easy way to move elsewhere.

This is a risk businesses cannot afford at a time when getting DevOps right is critical to achieving the agility demanded by today’s competitive environment. Being able to deploy frequently, sometimes many times a day, and to recover quickly when a service fails or a fault is discovered, is a key business advantage. Deploying safely and smoothly at that velocity depends on having the right DevOps toolchain, which makes this a critical choice for any business.

Consensus-free zone

Today there is something close to a consensus about best practice in software development. The argument for CI/CD (Continuous Integration and Continuous Delivery) has largely been won. CI is based on the idea that it is easier for development teams to integrate small pieces of code by merging frequently with a central code base than to delay and risk more painful merge conflicts. Continuous Delivery is about frequent build, test and release cycles.
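To make that loop concrete, here is a minimal sketch of a CI gate, written in Python. It is an illustration only: the use of Git, pytest, a "main" branch and a packaging step are assumptions made for the example, and real pipelines express these stages in their CI tool's own configuration.

  # Minimal sketch of the CI/CD loop. The tool choices (pytest, python -m
  # build) and the branch name "main" are illustrative assumptions.
  import subprocess
  import sys

  def run(cmd):
      """Echo a command, run it, and stop the pipeline on a non-zero exit."""
      print("$", " ".join(cmd))
      if subprocess.run(cmd).returncode != 0:
          sys.exit(1)

  # 1. Integrate: merge the latest mainline so conflicts surface immediately.
  run(["git", "fetch", "origin"])
  run(["git", "merge", "origin/main"])

  # 2. Build and test: every small change faces the full test suite.
  run(["python", "-m", "pytest", "--quiet"])

  # 3. Deliver: produce an artifact that is always ready to release.
  run(["python", "-m", "build"])

The details matter less than the shape: integrate often, test everything, and keep an artifact permanently ready to ship.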

These concepts work because of the high degree of automation that today’s DevOps tools enable. They fit well with other development trends: containerization, which packages an application together with its dependencies for deployment, and microservices in place of monolithic applications, so that each microservice can be scaled and maintained separately. Orchestrating those containers in large-scale applications is increasingly the job of Kubernetes (K8s).

There is no such consensus, though, when it comes to DevOps tools. There is a lot to think about, including:

  • Planning and requirements gathering
  • Collaboration and communication
  • Source code management
  • Build and test automation (Continuous Integration)
  • Issue tracking
  • Configuration management
  • Application monitoring
  • Security and DevSecOps
  • Deployment automation (Continuous Delivery)

DevOps is not just about individual tools, but about forming a toolchain; all these tools must work together with a high level of automation and traceability. This requirement, together with the use of public cloud platforms, makes integrated DevOps suites from a single vendor, as opposed to best-of-breed tools selected for each function, superficially attractive. There are tool suites that promise a wide range of DevOps functions, including code repositories, planning, continuous integration, performance and usability testing, release orchestration, configuration management and application monitoring, all from a single vendor.

There are, however, a number of issues with this approach. Look more deeply at the tools, and you find that they lack the maturity and depth of features of specialist offerings. Once you have committed to the notion of a single-vendor suite, it becomes hard to back out and adopt best-of-breed tools from others, especially if you have promised the accounts department that there will be only one bill to pay.

Adopting a DevOps suite from a big cloud provider carries even more risk. Why do infrastructure-as-a-service vendors offer DevOps tools? The answer is not that they are profitable in themselves, but that they hope to nudge developers and administrators towards deploying to their own cloud, by adding integrations that make this the easiest and best-integrated option. There is a tendency for every feature to involve consuming, and paying for, more services: for logging, for analytics, for testing, and so on.

This is the same playbook used by Microsoft in the heyday of Windows. The software giant invested in Visual Studio, which ran only on Windows and developed applications only for Windows, in order to keep developers wedded to – guess what – the Windows platform. The outcome was good for Microsoft, but not so much for their customers. It was the open-source community, including the Linux operating system and the Apache web server, that gradually turned the tide. Innovations like containers, which have done much to improve the DevOps process, would probably not have come about if Microsoft had succeeded in keeping developers hooked on Visual Studio and Windows Server.

The history of the web browser is another example of the same process. Microsoft achieved dominance with Internet Explorer in the nineties and early 2000s, and then reduced its development effort, with the result that de facto web standards barely advanced. It took the efforts of Mozilla Firefox and, later, Google Chrome to move users away from IE and allow web standards to progress. Had the browser remained in the hands of a single vendor, HTML5 and innovations like WebAssembly would likely never have arrived.

Road to lock-in

In DevOps, what is the downside of integrated suites versus best-of-breed choices? Perhaps the most obvious issue is that the tools in the suites tend not to be as comprehensive and feature-rich as individual products. In configuration management, for example, you can find several mature and well-understood products that are designed from the outset to be vendor-neutral and to integrate well with other tools. It is important not to rely only on blobs or ticks on vendor-supplied feature lists to see what is supported and to compare tools. Developers working day to day know that the details of how something is implemented are what count.

If you build a dependency on DevOps tools offered by a single vendor, what happens if you decide in future to move to a different toolchain? It is easy to move source code from one repository to another, but some categories of DevOps data and metadata are not easy to port. This is the reality of lock-in: there is so much friction and cost in moving that the single vendor retains a customer’s business even when it is apparent that the customer would be better off elsewhere.

A further issue is the existing expertise of a DevOps team. Unwillingness to learn new technology is a bad reason in itself for sticking with existing tools, but equally there is friction in moving to a new toolchain, and there have to be solid reasons for doing so. In CI/CD, there are tools from specialist companies which are well established and flexible in dealing with different environments and integrating with other DevOps tools. For these products the ability to integrate is not just a feature but fundamental to their effectiveness, so it will never be neglected.

Even when a single vendor tries to create the best possible tools, it is unlikely that they will be able to innovate as effectively as multiple vendors, dedicated to DevOps tooling, who form an ecosystem of both open source and commercial products. These specialist companies know that their success depends on keeping their products competitive with the best available and keeping pace with technology trends, examples today being containers, Kubernetes, and cloud deployment.

The primary benefit of an all-in-one DevOps suite is that it reduces the burden of integration; but perhaps this is not as big an issue as some vendors claim. The truth is that best-of-breed DevOps offerings have always had to integrate with other tools and are designed to do so. There is also a wealth of experience in doing just that, so while it is of course possible to get the integration wrong, it is not hard to find the scripts, the extension points and, more importantly, the skills to achieve it. For example, Git is a de facto standard today for source code management, and every tool that deals with source code has to be designed to integrate well with Git. The result is that you can use Git from one provider, and an IDE or a CI/CD tool from another vendor, and know that it will be easy to get them working together.
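As a small illustration of how little glue this takes, the following Python sketch performs a provider-neutral checkout. The repository URL is a placeholder, and the point is simply that the same few Git commands work whichever vendor hosts the code.

  # Provider-neutral checkout: because every host speaks the same Git
  # protocol, nothing here changes if the repository moves between
  # GitHub, GitLab, Bitbucket, or a self-hosted server.
  # The URL below is a placeholder, not a real repository.
  import subprocess

  def checkout(repo_url, ref, dest):
      """Clone a repository and check out a branch, tag, or commit."""
      subprocess.run(["git", "clone", repo_url, dest], check=True)
      subprocess.run(["git", "-C", dest, "checkout", ref], check=True)

  checkout("https://example.com/acme/widget.git", "main", "widget")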

The IT industry is a pragmatic place defined by compromise and trade-off, and there is more to agility and velocity than adopting one tool suite in the belief it will meet all future needs. The reality in most enterprises is one of multi-cloud and hybrid cloud, for a variety of reasons.

Agility, then, means keeping in mind that the IT landscape can – and will – change, and that a technology which seems a safe bet today may not look so good five or ten years on. It is specialist and vendor-neutral tools that are most likely to adapt and evolve with those changes.

Sponsored by CircleCI.
