Cognitive computing: What can and can’t we do, and should lipreading be banned?

Daisy, Daisy, give me your answer do

Making it work

Assuming we can overcome those fears, there remain considerable challenges around deployment. You don’t just buy one of these things and plug it in. IBM points to a four-step journey towards cognitive computing, which starts with charting the course: identifying potential use cases in your organization.

Experimentation is the next step, in which prototypes are tested against those use cases with real users. This part of the process may itself be incremental, argues Sarris. It’ll call for the same cycle of action, discovery, and learning that cognitive systems themselves try to master.

“What we need to put in place is a culture that encourages commercial use and is tolerant of more of a startup-like MVP (Minimum Viable Product) approach,” said Sarris. “These technologies often need some burn-in and iteration, including feedback and refinement by human users, and in some cases by the application itself.”

When a viable use case has been tested, the system must be developed. That involves feeding it data – lots of it, ideally, and perhaps not just from your own systems. The data will have to be massaged by professionals well versed in such things, and with a high level of domain-specific knowledge. As IBM puts it, these systems must be trained, rather than programmed.
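To make "trained, rather than programmed" concrete, here is a minimal sketch assuming a toy ticket-routing use case. The scikit-learn pipeline, the tickets, and the labels are all illustrative inventions, not anything IBM prescribes:

```python
# A sketch of "trained, not programmed": instead of hand-coding rules,
# we show the system labelled examples and let it infer the rules.
# The tickets and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: support tickets labelled by a domain expert.
tickets = [
    "cannot log in to my account",
    "password reset link never arrives",
    "invoice shows the wrong amount",
    "billed twice for the same order",
]
labels = ["access", "access", "billing", "billing"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(tickets, labels)  # training: the rules are learned...

print(model.predict(["why was I charged twice?"]))  # ...not written by hand
```

In a real deployment the expert's effort goes into curating and labelling the data, not into writing the classification logic itself.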

Finally, you get to deploy the thing, starting with the developed solution as a baseline, and then embarking on a continuous improvement cycle, in which the machine itself learns, and the data feeds become more sophisticated.
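Continuing the toy classifier sketched above, that improvement cycle might amount to little more than collecting human corrections and periodically folding them back into the training set. The review/retrain helpers and the corrections list here are hypothetical placeholders, not a vendor API:

```python
# A sketch of the continuous improvement cycle: deploy a baseline model,
# gather human corrections on its mistakes, and periodically retrain.
corrections = []  # (text, correct_label) pairs supplied by human reviewers

def review(model, text, reviewer_label):
    """Record a human correction whenever the model gets a ticket wrong."""
    if model.predict([text])[0] != reviewer_label:
        corrections.append((text, reviewer_label))

def retrain(model, tickets, labels):
    """Fold accumulated corrections back into the training set and refit."""
    texts = tickets + [t for t, _ in corrections]
    targets = labels + [lbl for _, lbl in corrections]
    model.fit(texts, targets)  # the baseline improves with each cycle
    corrections.clear()
```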

Let’s not get ahead of ourselves, though. It’ll take a considerable investment for organizations to get on board with cognitive computing, and they should also manage their expectations.

Bishop believes there are three things humans do which are simply incomputable. The first is true creativity.

The second is understanding. Computers will never truly understand concepts, he argues, basing this on John Searle’s Chinese Room Argument, in which a person mechanically following rules to manipulate Chinese symbols can produce convincing answers without understanding a word of Chinese. Instead, he suggests, computers display a kind of computational quasi-understanding.

Finally, he doesn’t believe that computers will ever be truly conscious.

These three things together form what he calls the humanity gap. Unless a computer can achieve all three, it will always be behind the curve, compared with us. “It seems to me that in these areas at least there will always be spaces where humanity can do more than mere computational systems,” he said.

They might at least do a good job of diagnosing your sciatica in the future, though. And their handwriting might be a bit better than your doctor’s, too. ®
