Truth and consequences for enterprise AI as EU know who goes legal: GDPR of everything from chatbots to machine learning

Column One of the Brexit bonuses we’ve been enjoying since January 1st is that we have abandoned our influence within the world’s regulatory superpower.

America and China may have industrial and military dominance, but by placing a decent proportion of global economic activity under the world’s strongest regulatory regime, the EU forces the pace for everyone else. GDPR commands respect around the world.

So when the draft "Regulation On A European Approach For Artificial Intelligence" leaked earlier this week, it made quite the splash - and not just because it’s the size of a novella. It goes to town on AI just as fiercely as GDPR did on data: proposing chains of responsibility, defining "high-risk AI" that gets the full force of the regs, threatening multi-million-euro fines for non-compliance, and setting out a whole catalogue of harmful behaviours and limits on what AI can do to individuals and in general.

What it does not do is define AI, saying that the technology is changing so rapidly it makes sense only to regulate what it does, not what it is. So yes, chatbots are included, even though you can write a simple one in a few lines of ZX Spectrum BASIC. In general, if it’s sold as AI, it’s going to get treated like AI. That’ll make marketing think twice.
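To make the point concrete, here is a minimal sketch of the sort of toy meant - an ELIZA-style echo bot in Spectrum BASIC. The listing is illustrative, not anything from the draft regulation:

10 PRINT "HELLO. HOW DO YOU FEEL TODAY?"
20 INPUT A$
30 IF A$="BYE" THEN STOP
40 PRINT "WHY DO YOU SAY ";A$;"?"
50 GO TO 20

Sell that as an "AI companion" and, on the draft's sold-as-AI logic, it gets treated as AI; sell it as a toy and the regulator shrugs.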

What it'll mean for you

For businesses that implement, buy in or plan to use AI, this will sound like the worst sort of bureaucratic overreach, imposing all sorts of brakes and costs on the latest and greatest tools. Imagine having to implement GDPR all over again, only this time for something the regulator won’t even bother to define and which the salespeople say will touch every aspect of line-of-business operations.

But while this may not be what business wants, it may well be what it needs - if not in the full-fat form that leaked, then in its essentials: the harm-reduction approach.

AI is a brash, frontier world right now, and people are getting hurt. When AI goes wrong - facial recognition, job applicant screening, crime profiling - the consequences can be swift to hit and very hard to put right. And AI that works may be even worse.

A regulated market puts responsibilities on your suppliers that will limit your own liabilities: a well-regulated market can enable as much as it moderates. And if AI doesn’t go wrong, well, the regulator leaves you alone. Your toy Spectrum chatbot sold as an entertainment won’t hurt anyone: chatbots let loose on social media to learn via AI what humans do and then amplify hate speech? Doubtless there are "free speech for hatebots" groups out there: not on my continent, thanks.

It also means that countries with less well-regulated markets can’t take advantage. China has a history of aggressive AI development to monitor and control its population, and there are certainly ways to turn a buck or yuan by tightly controlling your consumers. But nobody could make a euro at it, as it wouldn’t be allowed to exist within, or offer services to, the EU. Regulations that are primarily protectionist for economic reasons are problematic, but ones that say you can’t sell cut-price poison in a medicine bottle tend to do good.

These regulations aren’t in place yet, nor will they be for a while, nor are they likely to go through unamended. If your enterprise tech strategy includes public-facing AI, which of course it does, you have the chance to consider it in the context of these regulations and, if you don’t like them and you’re an EU company, set about preparing your arguments. If you’re not in an EU country, but you might want to do business, then you don’t get that option - sorry. But at least you can start the process of planning for compliance.

But wherever you are, and whether you develop, use, experiment with or merely follow AI tech, be clear: there will be regulation, and it will look somewhat like this. The EU foresees much of the business of managing compliance being done by companies themselves, formed into industry groups which will inevitably acquire power and heft of their own.

You may already be in one of the industry organisations for AI ethics or assessment; if not, then consider them the seeds from which influence will grow.

Above all, don’t worry. A lot of early AI innovation, with the sort of mix of success and failure you’d expect in a healthy developing market, has been in medicine, which along with aviation is one of the most heavily regulated areas of business.

And aviation? You can now buy a red button for your Cessna that will aviate, navigate and communicate for you if the pilot becomes incapacitated - selecting a nearby airfield and landing without any further input. That product might have come out sooner and cheaper if it hadn’t had to meet aviation regs, but would you want to push it?

There will be regulation. There will be costs. There will be things you can’t do then that you can now. But there will be things you can do that you couldn’t do otherwise, and while the level playing field of the regulators’ dreams is never quite as smooth for the small company as the big, there’ll be much less snake oil to slip on.

It may be an artificial approach to running a market, but it is intelligent. ®
