
Cognitive computing: What can and can’t we do, and should lipreading be banned?

Daisy, Daisy, give me your answer do

Digital decisions

Decisions are a crucial component of the cognitive computing process. Cognitive systems take existing bodies of evidence, which could be anything from actuarial data through to patient trials depending on the industry, and use them to make the best decisions in response to questions posed by users.

Currently, cognitive systems still advise people rather than prescribing a final option, according to Big Blue. They may present a variety of options to users, who can then pick from the results. That’s an important point, because when dealing with human-like, complex problems, there may be no ‘right’ answer: there may only be an optimal one.

Confidence scoring and traceability are important factors here. A cognitive system can usually present users with a value representing its confidence in a decision. If the human user needs to understand how that decision was reached, a cognitive computing system may be able to present them with a trail of ‘reasoning’.
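The options-plus-confidence-plus-trail pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API; the treatment names, scores, and evidence strings are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    """One candidate answer: scored and traceable, rather than declared 'right'."""
    answer: str
    confidence: float                              # 0.0-1.0, the system's belief in this option
    reasoning: list = field(default_factory=list)  # the trail of evidence behind the score

def rank_options(options):
    """Present every candidate, best-scored first, so the human makes the final call."""
    return sorted(options, key=lambda o: o.confidence, reverse=True)

# Hypothetical output from a medical decision-support query.
options = [
    Option("Treatment A", 0.72, ["matches 3 patient trials", "fits age profile"]),
    Option("Treatment B", 0.55, ["matches 1 patient trial"]),
]

for opt in rank_options(options):
    print(f"{opt.answer}: {opt.confidence:.0%} confident")
    for step in opt.reasoning:
        print(f"  - {step}")
```

The key design point is that the system never returns a single answer: it returns all candidates with their scores and evidence, leaving the choice to the user.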

Presentation is everything in cognitive computing. The point is to create systems that people can interact with easily when dealing with complex tasks. Today, people with questions plough through whichever automated system they have available to them, but have to work hard to interpret the results.

Type “Will my employer increase my matched pension contribution if I participate in the group health insurance scheme?” and you might find yourself struggling to interpret dozens of different search results, none of which really answers your question.

Computer says 'no'

Natural language processing is an important thread that runs through the entire cognitive computing story, explains James Haight, analyst at Blue Hill Research, a boutique research firm with a focus on emerging technology.

“There has been an amazing acceleration of natural language processing. You can take content and understand what it means, which is the major breakthrough,” said Haight. This applies to content discovery, but also to user interaction, he added, “whether you’re speaking to it or typing to it”.

This is why a cognitive system that has pre-read all of the relevant documents may be able to listen to you if you ask it a question and then give you a concrete, understandable answer.

These interactions should also span machines rather than just people, say experts. Computers may talk to each other, and to cloud-based services, to complete jobs that humans may ask them to do.

That can be particularly useful in fulfilling another requirement of cognitive computing: that machines be iterative and stateful. A cognitive system should remember interactions with a user, using the history as a basis for future queries.

Today, we see this in simple personal assistant systems.

“Who is the President of the US?” you may ask Google Now.

“Barack Obama is the President of the United States of America”, comes the reply.

“How old is he?” you continue. “He is 54 years old,” the computer replies. It knows who you asked it about before, and fills in the blanks.
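The exchange above hinges on the assistant keeping state between turns so the pronoun “he” can be resolved. A toy sketch of that mechanism, with a hard-coded one-entry knowledge base standing in for a real back-end service, might look like this:

```python
class Assistant:
    """Minimal stateful assistant: remembers who the last answer was about."""

    def __init__(self):
        # Toy knowledge base (facts as reported at the time of writing).
        self.facts = {"Barack Obama": {"title": "President of the United States", "age": 54}}
        self.last_entity = None  # conversation state carried between turns

    def ask(self, question):
        q = question.lower()
        if "president" in q:
            # Direct lookup; remember who we answered about for follow-ups.
            name = next(n for n, f in self.facts.items()
                        if "president" in f["title"].lower())
            self.last_entity = name
            return f"{name} is the {self.facts[name]['title']}"
        if "how old" in q and self.last_entity:
            # "he" resolves to the entity remembered from the previous turn.
            return f"He is {self.facts[self.last_entity]['age']} years old"
        return "I don't know"

bot = Assistant()
print(bot.ask("Who is the President of the US?"))
# The follow-up only works because the bot kept state from turn one.
print(bot.ask("How old is he?"))
```

A stateless system would have to answer the second question with “I don't know”, which is exactly the gap the ‘iterative and stateful’ requirement closes.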

Tomorrow, cognitive computers may extend those iterative interactions into far more complex conversations.

Where can it be used?

Personal assistants such as Google Now and others are one area where these cognitive systems can be easily applied, because the heavy lifting is done via back-end, cloud-based services.

“Where we will see quick adoption is on the low-end incremental improvements in the consumer space or frontline business productivity stuff,” said Haight, adding that he’s looking forward to having an Office-integrated Cortana schedule appointments and handle other tasks automatically behind the scenes.

Barking “Calculate the average daily sale this quarter” at Excel and having it work out the task itself would be a step in the right direction.
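At its simplest, that kind of spoken-command feature is a mapping from recognised phrases to computations. The sketch below is purely illustrative: the command phrase, the sales figures, and the lookup-table approach are all invented, and a real system would parse far more flexibly than an exact-match dictionary:

```python
import statistics

# Toy stand-in for a spreadsheet column of daily sales this quarter.
daily_sales = [1200, 950, 1100, 1300]

# Minimal command table mapping a normalised utterance to a computation.
commands = {
    "calculate the average daily sale this quarter":
        lambda: statistics.mean(daily_sales),
}

def handle(utterance):
    """Look up the normalised utterance; return its result, or None if unrecognised."""
    action = commands.get(utterance.strip().lower())
    return action() if action else None

print(handle("Calculate the average daily sale this quarter"))  # 1137.5
```

The hard part a cognitive system adds is everything the dictionary lookup elides: understanding paraphrases, resolving which cells count as “this quarter”, and asking a clarifying question when it is unsure.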

He also sees opportunities in large, high-end projects where the payoffs could be huge, such as in a hospital, where you could tie patient outcomes to quantifiable savings, for example.

Sarris looks for applications where the threshold for accuracy is relatively low and the opportunity for benefit is relatively high. “Those may be applications where you can be 80 per cent correct and still produce results that are valuable in the sense of saving humans time or effort, or augmenting their skills,” he said.
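Sarris's 80 per cent threshold can be made concrete with a back-of-the-envelope expected-value calculation. The per-case minutes below are invented purely for illustration; the point is only that an imperfect system wins whenever checking its suggestions is much cheaper than working unaided:

```python
# Invented numbers: a human takes 10 minutes per case unaided; with the system,
# a correct suggestion takes 2 minutes to confirm, a wrong one costs 12 minutes
# (review plus redo from scratch).
accuracy = 0.80
unaided_minutes = 10
confirm_minutes = 2
redo_minutes = 12

# Expected time per case with the system's help.
expected_with_system = accuracy * confirm_minutes + (1 - accuracy) * redo_minutes

print(round(expected_with_system, 1))          # 4.0 minutes per case, versus 10 unaided
print(expected_with_system < unaided_minutes)  # the 80-per-cent-correct system still wins
```

On these assumptions the system saves six minutes per case despite being wrong one time in five, which is the shape of opportunity Sarris describes.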

“In general, cognitive computing systems can be deployed in any area that historically involves the deployment of sophisticated, context sensitive, human reasoning,” said Bishop. “This potentially opens up lots of new jobs to computational automation.”

The automation part might be a hot button for many, though, warns Haight. He sees resistance in mid-range projects for just such reasons. “In the middle ground, there is huge resistance,” he said. “People are afraid of it.”

It’s easy to see the fears. Creepy, soft-spoken HAL-like AI bots coming to steal our jobs? No, thank you very much. But then, the same dialogues have sprung up around most information science developments in history, from robots to PCs.

At least one recent study has questioned that rhetoric, arguing that technology has created more jobs than it destroyed in the last 140 years.

