DevOps writer Gene Kim spoke at the Dynatrace Perform event last week, saying not a word about Dynatrace but focusing on technical debt and developer productivity.
Kim is the author of the "novels" The Phoenix Project (2013) and The Unicorn Project (2019), which describe DevOps principles as seen through the story of a fictional company called Parts Unlimited.
The Unicorn Project is about software development according to Kim's "Five Ideals". Normally we are impatient with such things, but we warmed to Kim when he started talking about the bus factor: "How many people need to be hit by a bus before the project is in grave jeopardy?"; and the lunch factor: "In order to get something done, how many people do you need to take out to lunch?" In The Phoenix Project, Parts Unlimited suffered from a bus factor of one: a guy called Brent, who held in his head all the secrets of how stuff worked. He was able and friendly, but a disaster for the team.
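The bus factor can actually be computed, at least crudely. Here is a minimal sketch under a toy model of our own devising (not Kim's definition): each file is mapped to the set of people who understand it, and the bus factor is the fewest departures that would leave some file with no one who knows it. The character names follow Kim's novels; the ownership data is invented for illustration.

```python
def bus_factor(ownership: dict[str, set[str]]) -> int:
    """Fewest people whose loss leaves some file with no one who understands it.

    Under this toy model, the cheapest file to 'orphan' is the one with the
    fewest owners, so the bus factor is simply that minimum.
    """
    return min(len(owners) for owners in ownership.values())

# Parts Unlimited before and after spreading Brent's knowledge around:
before = {"billing.py": {"brent"}, "orders.py": {"brent", "maxine"}}
after = {"billing.py": {"brent", "maxine"},
         "orders.py": {"brent", "maxine", "kurt"}}

print(bus_factor(before))  # 1 – lose Brent and billing.py has no owner
print(bus_factor(after))   # 2
```

Real tools estimate ownership from version-control history rather than a hand-written dict, but the arithmetic is the same.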
Neither the bus factor nor the lunch factor is a technical concept, and this is a common thread – that tools and technology matter less than the extent to which organisations and teams are dysfunctional. That said, advances in technology have made a difference, and Kim argued at Perform that the DevOps movement is disruptive in the same way that manufacturing was in the 1980s, "when it was revolutionised through the application of the lean principles." The key thing is automation that enables multiple deployments per day while preserving "reliability, security and stability." From Kim's perspective, this is the essence of the often-abused concept of digital transformation.
Rapid deployment is not only about automation but also architectural and coding practices. Ideally, said Kim, "anyone can implement what they need by looking at one file or module, and making the needed change." Not ideal is having "to understand and change all the files, all the modules, all the applications, all the containers, because the functionality is smeared across that entire surface area." Ideally, changes can be tested in isolation from other components.
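The ideal Kim describes can be sketched in a few lines. In this illustration (the module and function names are our own, not Kim's), all shipping-price logic lives behind one narrow function, so a pricing change means editing one file, and the change can be verified in isolation with no containers or databases involved:

```python
def shipping_cost(weight_kg: float, rate_per_kg: float = 4.50) -> float:
    """All shipping-price logic lives here: changing the rate means
    editing this one function, not hunting through every service."""
    return round(weight_kg * rate_per_kg, 2)

# The change is testable in isolation from every other component:
assert shipping_cost(2.0) == 9.00
assert shipping_cost(2.0, rate_per_kg=5.00) == 10.00
```

The anti-pattern Kim warns about is the opposite: the rate calculation copy-pasted into every service, so the same change is "smeared across that entire surface area".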
Another DevOps benefit, he said, is that it can relieve the developer of worrying about things they probably dislike, which he described as "everything outside of my application" – dependencies, secrets management, YML files, patching, building Kubernetes deployment files, and even "why my cloud costs are so high". Perhaps he read this. These things are important, he said, but "we want them not in people's heads but in the tools that developers use in their daily work." He said of DevOps professionals that "our job is to liberate developers from having to care about these things." Later in his talk, he argued that companies should spend 3-5 per cent of their developer effort on improving developer productivity.
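One way to read "in the tools, not in people's heads" is tooling that generates the deployment boilerplate for you. A hedged sketch: the helper below is our invention, but the field names follow the real Kubernetes `apps/v1` Deployment schema, so the developer supplies only what they actually know (name, image, replica count) and never hand-writes the rest.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a Kubernetes apps/v1 Deployment manifest from three inputs,
    filling in the selector/label plumbing developers would rather not memorise."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("parts-unlimited-api", "registry.example/parts:1.4")
print(json.dumps(manifest, indent=2))  # ready to feed to `kubectl apply -f -`
```

The image name and registry here are placeholders; the point is that the selector must match the pod template's labels, and a generator gets that right every time so no one has to remember it.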
The DevOps insight – that once a strong DevOps workflow is in place, developers can focus on improving code without being anxious about breaking things – is similar to that of test-driven development (TDD), though TDD had a narrower focus. The thing to measure, Kim said, is "to what degree do we fear doing deployments?" The idea seems to be that if you get all this right, the focus, flow and joy follow.
Kim also talked about "paying down technical debt." This, he said, is where shortcuts are taken to get features into production, leading to lower quality and more defects. In time, this means that "defect fixing dominates work", so that feature delivery declines and, at worst, customers leave and morale plunges.
In a section on corporate examples, Kim referenced Nokia in 2010, with a hopelessly slow 48-hour build process for Symbian, its mobile operating system at the time. Even Windows Mobile "was actually a better bet than staying on Symbian OS", though it "did not treat them so well either." As for Microsoft, Kim pointed to its own "near-death" experience, which he identified as the 2002 security stand-down and feature freeze, and the Bill Gates memo on trustworthy computing – "when we face a choice between adding features and resolving security issues, we need to choose security." That was paying down technical debt, he said.
Why did Dynatrace invite Gene Kim to speak? Kim's ideas are not tool-specific, but the company sees itself as meeting the need for automated observability – that is, observability that uses AI to make sense of the mountains of logs and telemetry data generated by modern applications. The product is 15 years old but was reinvented in 2012, leading to "a completely new Dynatrace" in 2014, CTO and founder Bernd Greifeneder told us.
"It discovers what's running and it discovers the dependencies, building a topological graph horizontally and vertically, which service is running on which technology," he said. Next comes AI, which interprets the data and identifies anomalies. The company, said Greifeneder, is in the business of "BizDevSecOps because we learned that security as it is in typical enterprises falls short entirely with cloud native environments... You can't protect applications any more through a simple perimeter firewall so you need to move security into each and every service."
At Perform the company presented new features including native log support for Kubernetes and multi-cloud; a software intelligence hub which forms a catalogue of Dynatrace extensions; session replay for mobile applications which lets developers see "every click, swipe and tap from the user's perspective" – with the claim that it also respects data privacy – and cloud automation which embeds Keptn, an automation tool for observability and remediation.
Improved tooling is good but can it fix the bus factor or the lunch factor? The answer is no, but it might help with paying down that technical debt. ®