CloudBees CEO says customers are slowing down on 'black box' code from AIs
Learning from the lessons of the past
interview Anuj Kapur, CEO of DevOps darling CloudBees, reckons that AI could retest the founding assumptions of DevOps as a whole, but warns against the risk of creating black-boxed code in the pursuit of greater efficiency. He also says that some customers who rushed into AI-generated code for fear of missing out (FOMO) are starting to slow down and be more considered.
CloudBees has been around for a while. It was founded in 2010 and has long been associated with the Jenkins automation server. In 2022, Cisco and SAP veteran Anuj Kapur succeeded CloudBees co-founder Sacha Labourey as president and CEO, shortly before the generative AI revolution grabbed the imagination of the tech industry.
The biz has remained relevant over the years, most recently pivoting toward generative AI in DevOps. In June, it launched an early access version of its CloudBees Unify Model Context Protocol (MCP) Server to act as a bridge between its Unify DevOps tooling and an expanding ecosystem of LLM-powered agents.
"It's been exciting," understates Kapur. "The turn that we have made, in some ways, it's been a reflection of what our customers have wanted us to do as well as the problem statements."
CloudBees followed much of the rest of the tech industry in jumping aboard the AI train. At the start of 2024, Labourey forecast that AI would "take a pretty big place on the platform" during an interview with The Register.
Kapur echoed those comments in our interview this week, but with a small twist of commercial reality. Yes, customers are very excited about the productivity possibilities, but some are realizing those possibilities need to be tempered by experience. After all, it's all very well to move fast, but what happens when too many things get broken?
"We are effectively about to outsource a whole generation of software to prompt engineering and effectively create a black box of code that's generated not by human beings," says Kapur, "but based on foundational models, which may be fantastic from an efficiency perspective, but it sort of begs the question to say, what are the downstream implications of that volume and velocity on the quality that comes out at the back end and our ability to actually diagnose root causes of failure when those things do happen?
"Because that code will fail."
Kapur calls the black-box implications of machine-generated code "profound."
"What is the quality?" he asks. "What is the test coverage? What is the recoverability? What are the vulnerabilities associated with it? And the code that's generated is only as good as the prompts that are input, which, in some cases, are largely input by humans.
"So in some ways, we are at that point where we need to be thoughtful with regards to how quickly we embrace [AI], and what are the downstream checks and balances that need to happen in the pipelines to ensure that the code that's been generated ... can actually yield outcomes that are appropriate."
He adds, "You should be focused on the second-order implications of the trend, not the first-order implications," quoting the Collison brothers of payments giant Stripe. "And that, for us, was that we needed to focus on testing because we felt that in the current generation, test coverages are pretty minimal." Sure, the volume and velocity of code might go up by a factor of 10 with AI tooling, but Kapur observes, "what happens to those downstream implications?"
Kapur also suggested many customers initially suffered from FOMO, worrying that failing to jump on the bandwagon might hand an edge to faster-moving competitors, but says they are getting over it. "I would say that that fear has receded," he told us. "That's caused them to be a lot more thoughtful about the velocity that they need to prosecute this trend at, and the safeguards they need to create."
This could mean companies will take a breath after a headlong charge into shipping code spat out by AI, Kapur says. "I think we're at the point where you are going to see some public walk-backs."
That is, unless a more conservative and cautious approach takes hold first. "Our customers," he says, "certainly ones that have the regulatory burden or the reputational burden ... I think those customers think through that. They think through the negative implications of going too fast.
"I think [that] forces a level of conservatism with regards to what your customers truly will be able to adopt.
"Customers," he says, "have learned the lesson of that from past cycles of basically going too fast and speeding into a corner." ®