Artificial intelligence has crested the peak of the Gartner hype cycle and is on the lips of every technical marketing exec. Companies are doing things with it, but many projects are still proving out the concepts. You can’t talk about the weather these days without a gimmicky, gee-whiz weather chatbot trying to impress you with its opinion on the rain – and sometimes getting it wrong. And let’s just pretend Microsoft Tay never happened.
There’s a chasm between many current AI deployments and a mature, grown-up approach with sensible business benefits. What do companies need to know to get from here to there?
When artificial intelligence first emerged as a discipline, scientists had great hopes for it. They wanted to create ‘hard’ AI systems that mimicked humans: HAL 9000-style machines that would do everything that people could. The problem is that we don’t really understand how human intelligence works.
After AI failed to deliver on its initial promises, scientists scaled back their expectations, instead focusing on specific tasks that they could tackle with statistical, data-driven algorithms. This is ‘narrow’ AI, and even though it’s a step back from general AI, it still gets important jobs done. Today’s software won’t argue with you about the world economy while fixing you a cup of tea, or make you feel better when you’re depressed. It can still recognize your face, though, and understand you when you tell it to turn your living room lights on.
These technologies work differently to traditional programming, which uses explicit, sequential instructions. Machine learning software looks at data and uses statistical modelling to classify the data points therein. The ‘learning’ process involves finding the statistical patterns that distinguish one category of data from another. Training it to find pictures of horses involves showing it lots of horse pictures tagged as such, along with another set of pictures of other things. The machine then learns what data points are common to horse images, and can use them to identify new pictures.
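The horse-spotting workflow above can be sketched in a few lines. This is a deliberately toy classifier – it averages labelled examples into per-class ‘centroids’ and assigns new examples to the nearest one. Real image classifiers learn far richer features, and the two-dimensional features here are invented for illustration, but the shape of the process is the same: labelled examples in, a decision rule out.

```python
# Toy supervised classification: "training" computes the average feature
# vector (centroid) of each labelled class; new examples are assigned to
# whichever centroid they sit closest to.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(centroids, features):
    """Assign features to the label whose centroid is nearest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Hypothetical 2-D features for each tagged picture, purely for illustration.
training_set = [
    ([0.9, 0.8], "horse"), ([0.85, 0.9], "horse"),
    ([0.1, 0.2], "not-horse"), ([0.2, 0.1], "not-horse"),
]
centroids = train(training_set)
print(classify(centroids, [0.8, 0.75]))  # a new, horse-like picture: prints "horse"
```

The point of the sketch is that nobody wrote an explicit rule for what a horse looks like; the rule falls out of the labelled data.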
Machine learning would be boring if limited purely to equine ambitions. In the Harvard Business Review, machine learning researcher Andrew Ng says that we can apply it to any mental task that a typical person can do with less than a second of thought. That takes us beyond photo matching into loan approvals, language translation, preventative maintenance and beyond.
Nick Patience, founder and research vice president of software at analyst firm 451 Research, highlights two broad kinds of application for machine learning. “It’s either a known use case and already done by people, or it’s something that people didn’t know was possible,” he says. Tasks may not have been possible because humans simply couldn’t get hold of the data to start with, or because they had the data but couldn’t process it.
“Something like IoT just would not be possible without machine learning because there is just too much data coming from all sorts of devices,” he explains.
Patience sees great implications for machine learning in healthcare, which has been an early adopter of the technology. “A lot of this stuff is around image recognition – spotting what a tumour looks like,” he says, adding that algorithms analysing radiology scans can apply the same rigour across thousands of images. “These algorithms don’t get tired on a Friday afternoon, that’s the beauty of them.”
That’s an example of Patience’s first use case – computers doing something that humans are already good at, but doing it more quickly. But AI can also teach healthcare professionals what they didn’t already know. “There’s so much inefficiency in healthcare, and so much fragmentation of data,” he says. Structured healthcare records may have useful historical data about a patient’s care, but discharge summaries and nurses’ notes often have useful information that goes unanalysed. Married with other data, that information can often provide early indicators of further problems, or might help to inform decisions about how to provide home care that could lighten the burden on the clinical system.
Brian McCarthy, a partner in McKinsey’s Atlanta practice, posits insurance as another area where AI can boost performance. “[AI gives you] the ability to take photographs of the damage in the vehicle and then use deep learning methods to assess the degree of damage,” he says. The AI might find no damage, mild damage, or a complete write-off. “Because we can use visual analytics on cars, we can better assess the claim being factual and the degree to which you should automatically pay it or send out an adjuster. That’s a big deal for an insurance company.”
These use cases highlight an important point about machine learning and deep learning: they become more valuable as you apply them to processes that either carry real costs for the business, or which provide a significant competitive edge.
Debbie Landers, IBM Canada’s vice president of Cognitive Solutions, provides a real-world use case in the oil and gas sector: Woodside, an operator that explores, develops, produces and supplies assets in that industry.
The company has been using IBM’s Watson cognitive AI system to help with engineering questions. “It is trained by a team of engineers who provided it with 30 years of documented expertise that encompasses thousands of documents per project—engineering studies, environmental reports, risk analyses, developmental concepts and so on,” she says.
Watson is an amalgam of different narrow AI techniques. It uses machine learning and its beefier cousin, deep learning, which uses many layers of neural networks to compute more accurate statistical models. It also uses information clustering techniques to read and analyze vast amounts of unstructured information, creating a knowledge base that it can draw on when answering questions.
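The ‘many layers’ idea behind deep learning can be shown structurally in a few lines. This sketch is not Watson’s architecture – the weights below are fixed, invented numbers, whereas a real network learns them from data by gradient descent – but it shows what stacking layers means: each layer transforms the previous layer’s output, and depth is simply how many such transformations are chained.

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums passed through a sigmoid nonlinearity."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

def forward(inputs, layers):
    """Feed inputs through a stack of layers; the stacking is the 'deep' part."""
    activations = inputs
    for weights, biases in layers:
        activations = layer(activations, weights, biases)
    return activations

# Two stacked layers turning three raw features into a single score.
# These weights are illustrative placeholders, not learned values.
network = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),  # hidden layer: 3 -> 2
    ([[1.2, -0.7]], [0.0]),                               # output layer: 2 -> 1
]
score = forward([0.9, 0.4, 0.6], network)
print(score)  # a single score between 0 and 1
```

Adding more hidden layers lets the network build increasingly abstract representations of its input, which is what makes the ‘more accurate statistical models’ mentioned above possible.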
“Now it is helping them answer different questions as they go to the next drilling area or look at their next set of operations,” explains Landers. “The Watson system combines its back-end information processing with deep learning on the interface side, enabling the engineers to ask it questions using natural language and get back evidence-based answers leveraging vast amounts of historical insight. Watson not only enables the historical insight today but is capturing it to ensure it is not lost as the work force transitions.”
IBM is taking a similar approach in the medical space, with Watson for Oncology, for example, being primed to analyze notes and reports, as well as the vast amount of research literature practitioners may struggle to keep on top of.
The market for AI is difficult to quantify, because it touches so many sectors, argues Patience. His list rapidly expanded from retail optimization and sales funnel forecasting to eDiscovery and beyond.
“Machine learning isn't the end in itself but it's absolutely as big as mobility and cloud computing have been,” he says. “It's clearly not a fad.”
All of which is very encouraging, but only for companies who know how to use it. What’s involved in getting started with this stuff? Data scientists are expensive, as are programmers well-versed in machine learning algorithms and frameworks like TensorFlow and MXNet.
The alternative is to take the most common 80% of tasks and use an API-based service to handle them. IBM has been working hard at this, rolling machine learning into its Bluemix cloud-based service set. The idea is to avoid the classification grunt work altogether and just get to the good stuff, says Dan Kara, a research director at ABI Research who focuses on robotics innovation.
“If you look at IBM, these APIs into their cognitive structures are pretty well structured,” he says. “If you were an IT person in the banking industry, do you want to drop down into more difficult coding to take advantage of these capabilities, or do you just basically want to have a drag and drop interface or a visual programming interface? It’s just a little bit simpler.”
Being able to wrangle cloud-based APIs alone won’t get you there, though, warns Patience. He adds that companies must prepare their data architectures, too.
“They need to have some sort of big data approach already,” he tells companies interested in taking machine learning further. Companies should break down data silos and create ‘data lakes’ (he calls them ‘data water processing plants’) so that machine learning algorithms can manipulate and analyse large data sets.
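The value of breaking down silos is easiest to see with a toy join. In this sketch, two hypothetical ‘silos’ – billing records and support tickets – only become a useful training set once merged on a shared customer key; all the field names here are invented for illustration, and a real data lake would do this at vastly larger scale.

```python
# Two isolated "silos" of records, keyed by a shared customer_id.
billing = [
    {"customer_id": 1, "monthly_spend": 120},
    {"customer_id": 2, "monthly_spend": 45},
]
support = [
    {"customer_id": 1, "open_tickets": 0},
    {"customer_id": 2, "open_tickets": 3},
]

def join_on_customer(*silos):
    """Merge records from every silo into one combined row per customer."""
    merged = {}
    for silo in silos:
        for record in silo:
            merged.setdefault(record["customer_id"], {}).update(record)
    return list(merged.values())

rows = join_on_customer(billing, support)
print(rows)
# Each row now carries features from both systems, which is what an
# algorithm needs to learn, say, how spend and ticket volume relate.
```

Neither silo on its own could answer a question that spans both; the integrated rows can, which is the point Patience is making.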
“Unless you have that sort of approach to start with, you won’t have a lot of joy with machine learning,” he concludes. “Once you have that integrated data approach, then you can use algorithms that are available from these various cloud providers to try things.”
The initial enthusiasm for AI has run its course. The sci-fi scenarios are mostly over, too. We’re already seeing the backlash articles as it slides into Gartner’s trough of disillusionment. It’ll emerge from the other side of that, too, and be met by a sensible base of programmers who understand its capabilities and how to use them. At that point, it’ll be making its way into business tools without anyone having to slap a label on it, and delivering significant benefits. Then, the real magic will happen.