Pound falling, Marmite off the shelves – what the UK needs right now is ... an AI ethics board

Call for probe into 'social, ethical and legal implications'

Analysis The UK government has been urged to establish an AI ethics board to tackle the creeping influence of machine learning on society.

The call comes from a Robotics and Artificial Intelligence report published yesterday by the House of Commons science and technology select committee. It quotes experts who warned the panel that AI “raises a host of ethical and legal issues”.

“We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI,” the report said.

It highlighted that methods are needed to verify that AI systems operate transparently, that their behaviour is predictable, and that any decisions they make can be explained.

Innovate UK – an agency of UK.gov's Department of Business – said that “no clear paths exist for the verification and validation of autonomous systems whose behaviour changes with time.”

Stan Boland, CEO of FiveAI, a British company on a mission to build “the world’s most reliable autonomous vehicle software stack,” told The Register that he agreed validation and verification of autonomous systems are challenging, but said solutions exist.

(Fun fact: Boland was also previously the CEO of Acorn Computers, creators of the BBC Micro – common in UK classrooms in the 1980s. The company shut down in 2000, but its legend lives on in the form of the hugely successful company ARM – which was created as a subsidiary of Acorn Computers in 1990.)

To validate such systems, Boland said, companies will have to rely on the “best practices from the semiconductor chip and embedded software industries”.

Companies will need to be diligent about generating a wide range of test cases that mimic random changes in the environment, and avoid rolling out software updates that could change the system’s behaviour without testing them first.
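A toy sketch of what that sort of randomised scenario testing might look like, in Python. The planner function, parameter ranges and safety check below are invented for illustration; they are not FiveAI's code or anything described in the report.

import random

def plan_speed(visibility_m, road_friction):
    # Hypothetical planner: choose a speed (metres per second) for the conditions
    return min(30.0, 0.25 * visibility_m) * road_friction

def test_planner_across_random_environments(trials=10000, seed=42):
    rng = random.Random(seed)  # fixed seed so any failure is reproducible
    for _ in range(trials):
        visibility = rng.uniform(5.0, 200.0)  # fog, rain, darkness...
        friction = rng.uniform(0.2, 1.0)      # sheet ice through to dry tarmac
        speed = plan_speed(visibility, friction)
        # Safety invariant every software build must satisfy before it ships
        assert 0.0 <= speed <= 0.3 * visibility, (visibility, friction, speed)

if __name__ == "__main__":
    test_planner_across_random_environments()
    print("all randomised scenarios passed")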

Rodolfo Rosini, cofounder of Weave.ai, an AI startup, agreed that algorithms have to be made more transparent – especially where a higher level of trust is needed, such as in military or healthcare use.

Being unable to understand how an AI system makes decisions has been called “the black box” problem.

In deep learning, input data passes through layers of a neural network; at each layer it is multiplied by learned weights that determine what is passed on towards the output. The system then makes decisions based on the outputs it calculates – whether that’s classifying an image as containing a cat or a dog, or spotting a cancerous tumour.
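For the curious, here is a purely illustrative forward pass through a tiny two-layer network in Python, with random stand-in weights rather than anything learned from real data. The point is only to show weights turning inputs into class scores, with the largest score becoming the decision.

import numpy as np

def softmax(scores):
    # Turn raw class scores into probabilities that sum to one
    exps = np.exp(scores - scores.max())
    return exps / exps.sum()

rng = np.random.default_rng(0)
x = rng.random(4)                  # stand-in input features (pixels, say)

W1 = rng.standard_normal((4, 8))   # layer weights would normally be learned
W2 = rng.standard_normal((8, 2))   # during training; here they are random

hidden = np.maximum(0, x @ W1)     # layer 1: weighted sum plus ReLU activation
scores = hidden @ W2               # layer 2: weighted sum to two class scores
probs = softmax(scores)

classes = ["cat", "dog"]
print(dict(zip(classes, probs.round(3))), "->", classes[int(probs.argmax())])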

Rosini said that “debugging software” is out there that could make the learning process more transparent – such as the Local Interpretable Model-Agnostic Explanations (LIME) technique. LIME aims to make it easier for developers to understand how a system makes predictions.
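As a rough idea of how LIME is used in practice, here is a minimal sketch using the open-source lime and scikit-learn Python packages. The iris dataset and random forest are stand-ins chosen for brevity, not anything discussed in the report; the output lists which features pushed the model towards its prediction.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)

# Build an explainer over the training data distribution
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    discretize_continuous=True,
)

# Explain one prediction: which features pushed the model towards its answer?
explanation = explainer.explain_instance(iris.data[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(feature, round(weight, 3))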

Forcing companies to implement such software could be a possible future strategy, but this was not explicitly explored in the report. The committee did, however, recognise that “suitable government frameworks were needed,” but it was warned that regulation should not be too restrictive, as that could hamper progress in AI.

Who's done what?

Human agency is another issue. As AI and robots begin to take over human tasks, it becomes difficult to decide who is accountable and liable if the system misbehaves.

This is a particular problem with autonomous cars, and the government is currently debating the Modern Transport Bill.

The government plans to “extend compulsory motor insurance to cover product liability to give motorists cover when they have handed full control over to the vehicle (ie, they are out of the loop). And, that motorists (or their insurers) rely on courts to apply the existing rules of product liability, under the Consumer Protection Act, and negligence, under the common law, to determine who should be responsible.”

The government has a history of being slow to adapt to technology. The EU’s General Data Protection Regulation (GDPR) aims to protect personal data and will come into effect in 2018.

It will replace the Data Protection Directive of 1995, and has been updated to reflect the rise of the internet, covering data held by social networks and cloud providers.

Chris Holder, a partner at Bristows, a law firm with a strong interest in technology, previously told The Register that the GDPR fails to cover data generated or processed machine-to-machine, which poses problems for robotics, AI and the Internet of Things – all areas of emerging technology.

The report also suggests the government hasn’t exactly been committed to solving these issues: it has made no progress on establishing a Robotics and Autonomous Systems Leadership Council, which it promised to do in March 2015 at the committee’s suggestion.

AI ethics is also a priority for industry. Last month, the (somewhat sinisterly named) Partnership on Artificial Intelligence to Benefit People and Society was announced by Google, Facebook, Amazon, IBM and Microsoft.

The goal, according to Google's blog post, is to “create a forum for open discussion around the benefits and challenges of developing and applying cutting-edge AI.” But no one really knows what they are up to either. ®
