AI in the enterprise: Prepare to be disappointed – oversold but underappreciated, it can help... just not too much
Today we launch our Register Debates in which we spar over hot topics and YOU decide which side is right – by reader vote
Register Debate Welcome to the inaugural Register Debate in which we pitch our writers against each other on contentious topics in IT and enterprise tech, and you – the reader – decide the winning side. The format is simple: a motion is proposed, arguments for and against are published today, another round of arguments follows on Wednesday, and on Friday we publish a concluding piece summarizing the brouhaha and the best reader comments.
During the week you can cast your vote using the embedded poll below, choosing whether you're for or against the motion. The final score will be announced on Friday, revealing whether the for or against argument proved most popular. It's up to our writers to convince you to vote for their side.
For our first debate, the motion is: Artificial intelligence in the enterprise is just yesterday's dumb algorithms rebranded as AI
Machine learning is the new encryption: every product, it seems, must be seen to have it. Is this artificial intelligence genuinely useful? Does the technology as it stands today have a place in the enterprise, or is it simply a cynical marketing turn that dresses up yesterday's algorithms as intelligent?
And now, arguing FOR the motion, is THOMAS CLABURN...
Artificial intelligence is a terrible term that obscures both the field's shortcomings and its benefits. In the context of enterprise IT systems, it often amounts to yesterday's algorithms rebranded, or automation in fancy dress. But it can also be a source of real promise and innovation, if applied with some measure of human smarts.
This dichotomy follows from the misapplication of both "artificial" and "intelligence."
The term "artificial" implies a contrast with "natural" human intelligence. Yet AI systems are just delayed expressions of human thought encoded in programming code. They do what they're told based on rules people define. And that's as it should be – what bank, for example, would want to entrust its funds to unpredictable AI rules?
The term "intelligence" is even more fraught with vagueness. We humans, who ostensibly possess intelligence, can't agree on a definition because we don't fully understand the human mind.
In Raymond Kurzweil's 1990 book, The Age of Intelligent Machines: Thoughts About Artificial Intelligence, an essay [PDF] by AI pioneer Marvin Minsky says that AI is basically machines doing what we do: "Even though we don't yet understand how brains perform many mental skills, we can still work toward making machines that do the same or similar things. 'Artificial intelligence' is simply the name we give to that research."
The problem with Minsky's definition is that when machine-human interchangeability is the standard, you may get people passed off as machines doing the work of people. That scenario played out at ScaleFactor, a startup that claimed to use AI to automate bill payments but was found to be relying on human employees. Caveat emptor.
To complicate matters further, this thing we call AI, so slippery to define, may be based on algorithms that suffer from explainability or reproducibility gaps and research that perhaps isn't as novel as has been claimed. Hence the allegation that machine learning – one of many fields within what we call AI – amounts to alchemy.
A computer scientist interviewed for this article who asked not to be named suggested there are several buckets you can use to categorize AI, one of which is the BS bucket. Within, you'll find simple statistical algorithms people have been using forever. But there's another bucket of things that actually weren't possible a decade ago.
There's enough room in the way Minsky describes AI to encompass both its pitfalls and its potential. As a result, IT practitioners can't just create business value with a one-click install or a purchase order. They have to understand how to solve the problem before they can set their machines to the task.
"I don't think there's one 'AI in the enterprise,' said Jeffrey Bigham, associate professor at the Human-Computer Interaction Institute at Carnegie Mellon University, in an email to The Register. "The vast majority of businesses are still in the early phases of collecting and using data. Most companies looking for data scientists are looking for people to collect, manage, and calculate basic statistics over normal business processes."
For a lot of companies, he suggests, the operative tool is Excel rather than GPUs for model training.
"On the one hand, there's nothing new about that; on the other hand, even this basic quantification and (maybe even) prediction of business processes is pretty new to a lot of companies," he said.
Bigham said businesses have used data processing to improve their operations for as long as those capabilities have been available. Walmart, he said, "has famously had a strong machine learning group for at least a couple of decades. They quantified their business processes super early, and at their scale they can show real impact with fancy statistics that eke out a few fractional percentage points of benefit."
Small companies, he said, can take advantage of the improved ease of data collection and processing with fairly basic tools. That's where he believes they'll get the most return on investment. "It makes no sense for the neighborhood bakery to train a giant neural net to predict its need for flour – they wouldn't have the data, and probably any fractional benefit wouldn't show up in their profit," he said.
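To make that concrete, here's a minimal sketch of the kind of "fairly basic tool" Bigham has in mind – a plain moving-average forecast of the bakery's flour needs, run over invented weekly figures. The same arithmetic fits comfortably in an Excel column:

```python
# A hedged illustration of Bigham's point: plain statistics, no neural net.
# The weekly flour usage figures below are invented for illustration.
weekly_flour_kg = [42, 45, 39, 48, 44, 47, 43, 46]

def moving_average_forecast(history, window=4):
    """Estimate next week's usage as the mean of the last `window` weeks."""
    recent = history[-window:]
    return sum(recent) / len(recent)

forecast = moving_average_forecast(weekly_flour_kg)
print(f"Next week's flour estimate: {forecast:.1f} kg")  # ~45.0 kg
```

The return on investment here comes from collecting the numbers at all, not from the sophistication of the model.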
'The question is where their attention is'
Svetlana Sicular, research VP at Gartner, told The Register in a phone interview that the hype surrounding enterprise AI has largely subsided. "Enterprises are doing interesting things," she said. "The question is where they are [in their adoption of the technology] and where their attention is."
It can be difficult to get companies that have had success with AI to talk openly about it, she said, because innovative firms don't want others to know what they're doing. Nonetheless, she insists it has real value, even if some of it is rehashed technology.
"AI continues a wave of automation," she said. "There's a lot of confusion around what AI and automation does. AI can allow you to do new things in new ways."
She pointed to generative algorithms used in life sciences applications, recounting discussions with IBM about how their customers are using generative techniques to suggest new compounds with desired characteristics. She also cited parametric design – where furniture designers, for example, give CAD tools a set of parameters and the software generates varied product designs that include specified features.
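Parametric design of the sort Sicular describes can be sketched in a few lines: the designer fixes constraints, and the program enumerates candidate designs inside them. This is a toy illustration with invented parameters and constraints, not how commercial CAD tools actually work:

```python
import itertools

# Toy sketch of parametric design with invented parameters: enumerate
# candidate table designs within designer-specified constraints.
heights_cm = [70, 74, 78]
widths_cm = [60, 80, 100]
leg_counts = [3, 4]

def is_valid(height_cm, width_cm, legs):
    # Invented stability rule: tables a meter wide or more need four legs.
    return not (width_cm >= 100 and legs < 4)

candidates = [
    {"height_cm": h, "width_cm": w, "legs": l}
    for h, w, l in itertools.product(heights_cm, widths_cm, leg_counts)
    if is_valid(h, w, l)
]
print(f"{len(candidates)} candidate designs generated")  # 15 of 18 pass
```

Real generative-design systems swap the brute-force enumeration for search or learned generators, but the contract is the same: constraints in, candidate designs out.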
Sicular disagreed with the suggestion that AI is mainly the warmed-over algorithms of yore, though she allowed that older algorithms do get used when newer alternatives prove too opaque.
Explainability and trust are big issues for companies adopting AI. "It's like meeting a new person," she said. "You don't know if they're qualified to answer a question or how truthful they are before you get to know them."
The issue she sees is there's still a major gap between the science of AI and the engineering of AI. "If science is about finding new things, engineering is about making known things stable, repeatable, performant, and resilient at scale."
It's the engineering side of things that's missing in enterprise AI at the moment, she said.
She recounted a conversation with an academic machine learning expert about validation, which she described as similar to QA. "This person asked, 'What does QA stand for?'" she said, referring to a term anyone familiar with software engineering and DevOps would recognize as "quality assurance."
AI, for all its possibilities, remains bound by persistent human shortcomings. Sicular made clear that AI has limits, noting that the COVID-19 pandemic broke many AI solutions for fraud detection and supply chain management because the incoming data suddenly changed.
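What broke is what practitioners call data drift, and a crude check for it needs nothing exotic: compare the distribution a model was trained on with what it sees now. Here is a minimal sketch using SciPy's two-sample Kolmogorov-Smirnov test on invented transaction amounts:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Invented data: transaction amounts before and after a sudden regime
# change of the kind the pandemic caused.
training_amounts = rng.lognormal(mean=3.0, sigma=0.5, size=5000)
live_amounts = rng.lognormal(mean=3.6, sigma=0.9, size=5000)

# A tiny p-value means the live data no longer resembles the training
# data, so the model's predictions should be treated as suspect.
stat, p_value = ks_2samp(training_amounts, live_amounts)
if p_value < 0.01:
    print(f"Drift detected (KS statistic = {stat:.3f}); retrain or fall back.")
```

Production systems run checks like this continuously; the point is that the failure mode Sicular describes is detectable with decades-old statistics.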
So here's a functional definition of AI: a system that's intelligent until it isn't. ®
Cast your vote below, though you may want to wait until you see the against argument later today. You can track the debate's progress here.