Great, we're going to get DevOps-ed. So, 15 years of planning processes – for the bin?

Architecting for change

In large organisations, the question is rarely “what are these newfangled practices and technologies?” but rather “how could we actually do them here?”

DevOps* has been around for nigh on 10 years, and in the past three or so of those, large, normal organisations like Allstate and Duke Energy have been learning its mysteries.

“I think that for the IT staff, once they try it, they will never do it another way,” Allstate agile transformation manager Matt Curry said when I asked him about applying DevOps. That’s something you hear over and over again when it comes to putting DevOps in place.

While putting such improvements and changes in place often seems like something that can’t happen at your organisation, the results are too enticing to ignore, and the business side of the house is expecting big things from IT, like, yesterday. “Based on our business feedback,” Duke Energy’s director of digital strategy and delivery John Mitchell told me, “it’s 10x better.”

Less analysis paralysis, more continuous planning

A focus on improving software with DevOps techniques requires an organisational mind-shift. In the traditional mindset, even through the past 20 years of supposedly doing agile, software was seen as a lengthy project, executed to fulfil a tome of requirements and targeted at a specific launch date. Slow, careful release trains and planning also limited the number of releases each year, putting a damper on the feedback loops that improve software in a small-batch approach.

Most organisations, then, have taken a project-oriented approach to software. This means IT staff and contractors are forced into huge, up-front analysis and commitments that are then used to manage them to a schedule.

Mark Schwartz, former CIO of US Citizenship and Immigration Services (now at AWS), says: “To demonstrate that [IT staff and contractors were] performing that type of work responsibly and for the business to verify that it was doing so, the scope of each task had to be defined precisely, bounded, and agreed upon in advance. The work had to be organized into projects, which are units of work with a defined set of deliverables, a beginning, and an end.”

Now, as any overly clever agile-cum-DevOps fan-girl will quickly point out: “Yes, but where’s the part that ensures the software is actually useful?” Of course, such a thing is the goal of all those controls described by Schwartz.

A more contemporary view of software, though, is angling to discover exactly what the software should be by systematically understanding users, discovering what works (and what doesn’t!), building the software, observing how people use it, and then starting the process over again. This reorientation changes the organisation’s planning process: “it’s only when we started shifting the focus on ‘outcomes’,” John Mitchell told me, “[that] we start to see that there is a new approach we can take in front of the planning process.”

In general, people have only the foggiest notion of what their software should actually do until they start trying. Thinking that you can deeply understand the problem you’re solving up front, Allstate divisional chief information officer and vice president of technology and strategic ventures Opal Perry told me in an interview last May, “[is] a traditional pitfall where we thought we knew with absolute certainty where we were going and it turned out we thought we were going south. But we need to go north.”

This means there’s not only much less time spent on up-front planning, but much less time along the way spent verifying that developers have been following the plan. Instead of verifying the status of projects, you verify that actual business value is delivered in the form of software that’s useful.

Project Management

With all the talk of “products, not projects,” you’d expect all those PMP-types in the Project Management Office to freak out. Which, to a certain extent, is always a good idea for those who enjoy paycheques. However, as many noted, PMO capability is still needed, especially for more complex applications.

Recently, after I’d given a long DevOps soliloquy at a large enterprise, an astute project manager beset with modernising a rats’ nest of mission-critical but aged services soliloquised right back at me. They made odd poetry out of a long list of cross-service dependencies, regulations, COTS usage, data concerns, and integrations. “Yeah. Sounds like you need some project management,” I recall saying in my snarkiest character. “Good luck - next question!”

Less glib, Matt Curry outlined a heuristic for getting enterprise-grade project management involved. “PMO is super helpful when my batch sizes are large and my feedback loops are long,” he said. “When batches become significantly smaller and the feedback loops are shorter the need for that [PMO] is lessened. The second place that project management is useful is when you have a lot of external coordination.”

Finance

Handling financing in a DevOps-oriented organisation takes some care. Previously, because IT purchased its own kit, development, QA, and staging labs required a capital expense (capex) approach. The number of servers those labs needed was, of course, a drop in the bucket compared to the hardware needed for production, an even larger capital expense. With a DevOps approach, which typically depends on using public cloud, these expenses shift to operating expenses (opex).

The application teams, of course, love operating in an opex model because it speeds up finance planning and lab-building: they can get to the value of actually creating and releasing software quicker. However, if the accountants don’t pay close attention, they’re gonna have a bad time.

Namely, while the opex of the pre-production environments may seem small next to up-front capex, once the application moves to production the opex can blossom like algae in a stagnant creek. This is especially true if the application is cursed with success, chewing up opex capacity at an unpredicted rate. If you can effectively manage 10,000 machines in production, Israel Gat, a renowned independent software and IT consultant, points out, you might be financially better off running in your own data centre. The exact cut-over point will always be debated - with server vendors tossing endless FUD into the debate - but it’s worth finance keeping a close eye on where compute should be done and how it’ll affect planning.
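The underlying sums are simple enough to sketch. Below is a minimal, back-of-the-envelope break-even calculation in Python; every figure in it is an assumption invented for illustration (server vendors will gleefully dispute each one), not a number from Gat or anyone else quoted here. The shape of the model is the point: cloud cost scales per server, while on-prem carries a fixed monthly overhead plus a lower per-server rate, so somewhere there is a crossover fleet size.

# A toy capex-vs-opex break-even model. All numbers are invented
# for illustration; substitute your own before drawing conclusions.

CLOUD_PER_SERVER_MONTH = 250.0    # assumed cloud opex per server-month
ONPREM_FIXED_MONTH = 400_000.0    # assumed DC lease, network, ops staff per month
ONPREM_PER_SERVER_MONTH = 170.0   # assumed amortised hardware + power per server-month

def monthly_cost(servers):
    """Return (cloud, on_prem) monthly cost for a given fleet size."""
    cloud = servers * CLOUD_PER_SERVER_MONTH
    on_prem = ONPREM_FIXED_MONTH + servers * ONPREM_PER_SERVER_MONTH
    return cloud, on_prem

# Crossover: the fixed on-prem overhead divided by the per-server saving.
break_even = ONPREM_FIXED_MONTH / (CLOUD_PER_SERVER_MONTH - ONPREM_PER_SERVER_MONTH)
print(f"break-even at ~{break_even:,.0f} servers")

for fleet in (500, 5_000, 10_000):
    cloud, on_prem = monthly_cost(fleet)
    winner = "cloud" if cloud < on_prem else "on-prem"
    print(f"{fleet:>6} servers: cloud ${cloud:>9,.0f}/mo vs on-prem ${on_prem:>9,.0f}/mo -> {winner}")

With these made-up inputs the crossover lands at 5,000 servers: below that, the fixed on-prem overhead dominates and cloud wins; at Gat’s 10,000-machine mark, on-prem comes out ahead.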

Tickets no more

With the promise of de-crudding IT asset acquisition and release management, it’s little wonder that traditional IT service management changes as well - ticket-driven IT, in particular, shrinks. Duke Energy’s John Mitchell notes: “It’s so nice to not have to ask, plan and wait for infrastructure. Also, with our cloud engineering team co-located with the software engineers, they solve problems in real-time instead of [waiting on] tickets. It’s so cool watching one of our hipster mobile devs walking and talking like best buds with a big burly ops engineer.”

This measurable, in-your-face metric is also a good way to motivate those BOFHs. “It wasn’t easy to win them over at first,” Brian Silles said, “but once they saw 35-40 backup tickets per week go down to mostly zero, they got on board.”

But think of the poor CABs!

Then, there are the basics of trying to put 15 pounds of tickets in a five-pound bag: “if I’m doing 8 or 15 releases a week,” HCSC’s Mark Ardito asked, “how am I going to get through all those CABs?” The Change Advisory Boards - which hardly ever “advise” so much as stick you in a box of pain until you confess to your enterprise architecture policy subversions - need to speed up whatever benefit they’re bringing. Most organisations I talk to are baking much of their policy enforcement into their automation, build pipelines, and platforms. It’s also clear that the usual 9-to-5 of enterprise architects needs to change (exactly how is still fuzzy).

Something like Chef’s InSpec is finding early success here, enforcing policy in the pipeline and monitoring drift in production, while cloud-native platforms and add-ons like the various Cloud Foundry distros, Red Hat OpenShift, and Istio all have components that seek to make robots out of those CABs.
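To make “policy in the pipeline” concrete, here’s a minimal sketch in Python of the general idea: a gate that runs as a build stage, checks a deployment manifest against a handful of rules, and exits non-zero so the pipeline fails on any violation. InSpec expresses this sort of thing in its own Ruby-flavoured DSL, so take this as the shape of the technique rather than any particular tool; the rule names and manifest fields below are entirely hypothetical.

# A toy policy gate for a CI pipeline. Rule names and manifest
# fields are hypothetical, for illustration only.
import json
import sys

POLICIES = [
    ("no-root-containers", lambda m: not m.get("run_as_root", False)),
    ("tls-required",       lambda m: m.get("tls_enabled", False)),
    ("approved-registry",  lambda m: m.get("image", "").startswith("registry.corp.example/")),
]

def check(manifest):
    """Return the names of every violated policy."""
    return [name for name, rule in POLICIES if not rule(manifest)]

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        manifest = json.load(f)
    violations = check(manifest)
    for v in violations:
        print(f"POLICY VIOLATION: {v}")
    sys.exit(1 if violations else 0)  # non-zero exit blocks the release

Run something of that shape against every release candidate and you get a reviewer that never sleeps, never schedules a meeting, and never asks you to fill in a change request form.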

Starting

Finally, after all that ironic up-front planning and contemplation, there’s the matter of choosing and sequencing the first applications to rub DevOps all over. The resounding advice from those who’ve done it - or realised they should have - is to start small. “We started small,” John Mitchell said when reminiscing about starting up, “Then [as] we started getting noticed, more business [came] pouring in.”

The likes of Home Depot have spoken extensively about the process of starting with small projects, then building up to larger ones. These initial projects aren’t “science projects” - they have actual business value, like running the paint and tool-rental desks in Home Depot’s case. Success means creating actual business value (read: less suck, more cash). At the same time, as you learn how to do the DevOps, mistakes along the way have less negative impact than, say, bringing down the .com site.

Sometimes, though, you have to go big or go home, as the wide-toothed, neck-vein popping set like to say. “Ultimately, it is a matter of the cash flow situation of the company,” says Israel Gat, “Starting small is less risky, but operational/financial parameters might force you to adopt an ‘all In!’ strategy.”

Once you select software to work on, the process of good design-think kicks in. But instead of doing - you guessed it! - up-front analysis and specification, designers stay involved throughout the whole process. This means expectation and organisational changes for your design people and departments: they’re now in the soup every day, not just contemplating chamfering in their tidy work-spaces.

The only easy day was yesterday

Once the engine starts, it has to be maintained, which typically means a change in mentality and motions for “leadership.” The organisation needs to continually crank down on waste - time spent sitting in ticket and review-board queues, say - relentlessly squeezing out efficiencies where possible. The most vital, helpful part of DevOps is something it stole outright from Lean manufacturing: continuous improvement. DevOps itself has been undergoing changes as technologies automate some of the more manual steps and these large organisations bring more learning to the practice, perhaps even “killing off” DevOps as it evolves into whatever’s next.

At the leadership layer this emphasis on continuous learning implies creating and maintaining an organisation that’s always eager to get better and, even, change dramatically. The MBA-wonks call this “a sense of urgency,” and as documented long ago, if the organisation doesn’t have that urge to change, little will happen. What I’ve seen in recent years is that, sadly, unless there’s an external threat to the organisation - cough, cough, Amazon, cough - not much will change, despite whatever decrees an executive or an eager young DevOps expert spews into the organisation. There’s relief, though, if this sounds exhausting. As my more macabre thoughtlords and ladies are fond of (mis-)quoting: “It is not necessary to change. Survival is not mandatory.” ®

* “Yes, but what is DevOps?!” you may be screaming or typing. Let us just assume, for now, that it means: “Improving the quality of your software by speeding up release cycles with cloud automation and practices, with the added benefit of software that actually stays up in production.”

We'll be covering DevOps at our Continuous Lifecycle London 2018 event. Full details right here.
