AI in the enterprise: AI may as well stand for automatic idiot – but that doesn't mean all machine learning is bad

Is AI just a rebrand of yesterday's dumb algorithms? We present the argument against this motion – and don't forget to vote

Register Debate Welcome to the inaugural Register Debate in which we pitch our writers against each other on contentious topics in IT and enterprise tech, and you – the reader – decide the winning side. The format is simple: a motion is proposed, for and against arguments are published today, then another round of arguments on Wednesday, and a concluding piece on Friday summarizing the brouhaha and the best reader comments.

During the week you can cast your vote using the embedded poll, choosing whether you're in favor or against the motion. The final score will be announced on Friday, revealing whether the for or against argument was most popular. It's up to our writers to convince you to vote for their side.

For our first debate, the motion is: Artificial intelligence in the enterprise is just yesterday's dumb algorithms rebranded as AI.

This morning, Thomas Claburn argued for the motion.

And now, arguing AGAINST the motion, is KATYANNA QUACH...

After years of hype, it's easy to become cynical about AI. Elon Musk's declarations that fully autonomous cars - ones that can operate with no human intervention - are just around the corner are laughable, especially when numerous Tesla drivers have died in crashes while Autopilot was engaged.

If semi-autonomous cars aren't completely safe, what makes you think vehicles that drive entirely by themselves are going to be any better? AI algorithms? Surely not. Experiments have shown that the computer vision systems in self-driving cars can be tricked into swerving across lanes by placing stickers on the road. And let's not forget that a piece of black electrical tape stuck on a 35mph (56kph) speed limit sign fooled Tesla vehicles into accelerating to 85mph (137kph).

These so-called adversarial examples don't just affect Tesla; they can trip up all types of deep learning models. The vulnerabilities exist because machine learning algorithms aren't very smart and can only perform the specific tasks they have been trained to do. Feed them an input that deviates even slightly from those encountered during training, and they'll make mistakes.
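To make that concrete, here's a minimal sketch of one common recipe for crafting an adversarial example, the fast gradient sign method. The toy model and random "image" below are stand-ins invented for illustration – this is not the actual attack used against Tesla's systems – but the trick is the same: nudge each pixel a tiny step in whichever direction increases the model's error.

```python
# Minimal fast gradient sign method (FGSM) sketch. The classifier and input
# are toy placeholders, not any real self-driving model.
import torch
import torch.nn as nn

# Toy classifier: flattens a 3x32x32 "road sign" image into 10 class scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input
label = torch.tensor([3])                             # its "true" class

# Forward pass, then backprop to get the loss gradient w.r.t. the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# Step every pixel slightly in the direction that *increases* the loss.
epsilon = 0.03  # perturbation budget: small enough to be hard to spot
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The two predictions may now disagree, despite near-identical inputs.
print(model(image).argmax(), model(adversarial).argmax())
```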

There are all sorts of issues with modern machine learning, so it's no wonder you're highly suspicious of the rising number of companies claiming to use AI in their products or services. And so you should be. But does that mean no company out there is really using machine learning at all? Of course not.

The smart assistant wars

Let's start with the most well-known examples from Big Tech: Amazon, Apple, Alphabet, Facebook, and Microsoft. Each biz has invested in a voice-activated AI assistant for smart speakers in one form or another. Amazon has Alexa, Apple has Siri, Alphabet has the Google Assistant in its Home speakers, Facebook has Portal, and, well, Microsoft just decided to stop rolling out Cortana for the Harman Kardon Invoke device.

Sure, there’s a good argument to be made that these gizmos aren’t all that useful and are terrible with less common accents. Yes, abilities like reading from Wikipedia or setting a timer for the oven are hard-coded instructions rather than fancy AI algorithms. But there is still a touch of machine learning in the works.

Speech recognition and speech-to-text models are needed to kickstart these smart speakers into gear, after all. A simple wake phrase like "Hey, Siri!" or "OK, Google" triggers a neural network to process the audio sample and ready the device to transcribe the spoken instructions that follow. Natural language understanding algorithms then help the assistant work out which function to perform based on the user's speech. Big Tech has profited handsomely from these AI-powered speakers, selling hundreds of millions of units.
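As a rough illustration, the always-on loop inside a smart speaker looks something like the sketch below. Every function here – the wake-word scorer, the transcriber, the intent parser – is a hypothetical placeholder rather than any vendor's real API. The point is the shape of the pipeline: a small, cheap network listens constantly, and only a detected wake word spins up the expensive speech-to-text and language understanding machinery.

```python
# Hypothetical wake-word -> transcribe -> intent pipeline; all model calls
# are dummy stand-ins, not a real vendor API.
import numpy as np

FRAME = 16000  # one second of 16kHz audio per window

def wake_word_score(frame: np.ndarray) -> float:
    """Stand-in for the small, always-on neural network that scores how
    likely this audio frame is to contain the wake word."""
    return float(np.clip(frame.mean() + 0.5, 0.0, 1.0))  # dummy score

def transcribe(audio: np.ndarray) -> str:
    """Stand-in for the heavyweight speech-to-text model, typically run
    in the cloud rather than on the speaker itself."""
    return "set a timer for ten minutes"  # dummy transcript

def parse_intent(text: str) -> str:
    """Stand-in for the natural language understanding step that maps a
    transcript onto one of the assistant's hard-coded skills."""
    return "timer.start" if "timer" in text else "unknown"

def listen(stream):
    for frame in stream:                  # low-power, always-on loop
        if wake_word_score(frame) > 0.9:  # "Hey, Siri!" / "OK, Google"
            command = next(stream)        # capture the follow-up speech
            print(parse_intent(transcribe(command)))

listen(iter([np.full(FRAME, 0.6), np.zeros(FRAME)]))  # prints: timer.start
```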

Deep learning isn't just limited to massive corporations with big bucks and large teams of engineers, either. Smaller, well-established companies and brand-spanking-new startups have successfully commercialised other areas of new-fangled machine learning.

Reinforcement learning is notoriously difficult to get working in the real world. Although the algorithms have produced some of the flashiest results – think DeepMind's AlphaGo bot that bested the world's leading Go player – they're hard to commercialise. But that's exactly what Covariant AI, a robotics upstart, has managed to do.
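For the curious, the trial-and-error loop at the heart of reinforcement learning can be boiled down to a few lines. The sketch below is textbook tabular Q-learning on a made-up "walk down a corridor" task – nothing like the deep networks Covariant runs on real robot arms, but the learn-from-reward structure is the same.

```python
# Tabular Q-learning on a toy corridor: start at position 0, reward for
# reaching the far end. Purely illustrative, not Covariant's system.
import random

N_STATES, ACTIONS = 5, [-1, +1]  # corridor positions; step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:  # episode ends at the goal position
        # Mostly exploit the best-known action, occasionally explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # Bellman update: nudge Q toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned policy should be "step right" from every position.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)])
       for s in range(N_STATES - 1)})
```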

Earlier this year it announced that one of its robot arms was helping sort through equipment like light sockets at a warehouse owned by Obeta, a German company that supplies electrical parts. "Even though we are just getting started, the systems we have deployed in Europe and North America are already learning from one another and improving every day," Pieter Abbeel, founder and chief scientist at Covariant, previously said in a statement.

Sometimes market breakthroughs don’t come from flashy startups

The first AI medical device approved by the FDA to diagnose eye disease caused by diabetes didn't come from the likes of Google. The Chocolate Factory may have teams working on computer vision models to spot signs of diabetic retinopathy, a leading cause of blindness, in retinal scans, but it was beaten to market by IDx LLC, a lesser-known company based in Iowa, which now sells a product known as IDx-DR that automatically detects that damage in the back of the eyeball from medical images.

AI has now progressed far enough that cloud companies even offer off-the-shelf models to help businesses perform tasks like translating languages or recognizing objects and faces. The tricky part is working out how to integrate these pre-built systems with your own data pipeline, so that data flows seamlessly between the cloud and a particular product or service.
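Here's a sketch of what that integration work can look like in practice, using Google Cloud's Translation API as the rented model. The client call is a real one from the google-cloud-translate library, though project setup and credentials are assumed to already exist, and the product records being batched through it are invented for illustration.

```python
# Wiring an off-the-shelf cloud model into your own data pipeline.
# Assumes a configured Google Cloud project and the GOOGLE_APPLICATION_CREDENTIALS
# environment variable pointing at a service account key.
from google.cloud import translate_v2 as translate

client = translate.Client()

def localise(records):
    """The 'good ol' fashioned engineering' part: batch your own data
    through the rented model, then merge the results back in."""
    texts = [r["description"] for r in records]  # hypothetical record schema
    results = client.translate(texts, target_language="de")
    for record, result in zip(records, results):
        record["description_de"] = result["translatedText"]
    return records

print(localise([{"sku": 1, "description": "light socket"}]))
```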

Although this might count as cheating, since companies are offloading the heavy lifting to cloud providers, you can't deny that some form of machine learning is still being offered, even if it is outsourced to Google Cloud, AWS, or Microsoft Azure. In these cases, machine learning is often just a small part of the solution, and most of the product or service being sold is still based on good ol' fashioned engineering.

There is no magic trick when it comes to AI; it's merely another software tool – albeit a very powerful one – that developers can use. Deploying machine learning models still requires traditional hard-coded techniques. Modern AI is used in commercial settings today; it's just that these algorithms run quietly in the background, cranking away at mundane tasks that are rarely as exciting as the hype. And as they say, if it works, it's not AI. ®

Cast your vote below. You can track the debate's progress here.
