EU lawmakers seek coordinated hand-wringing over AI ethics

Rules created in isolation will drive AI makers to operate in areas without restraint

European policymakers have asked for help unravelling the “patchwork” of ethical and societal challenges as the use of artificial intelligence increases.

The European Commission’s group on ethics in science and new technologies on Friday issued a statement (PDF) warning that existing efforts to develop solutions to the ethical, societal and legal challenges AI presents are a “patchwork of disparate initiatives.”

It added that “uncoordinated, unbalanced approaches in the regulation of AI” risked “ethics shopping,” resulting in the “relocation of AI development and use to regions with lower ethical standards.”

Instead, the group wants to start a process that will “pave the way towards a common, internationally recognized ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems.”

The Commission said in a separate statement that it wanted to kick off a “wide, open and inclusive discussion on how to use and develop artificial intelligence both successfully and ethically sound.”

As part of this, the institution has launched a call for applications for an expert AI group that will produce a set of draft guidelines for the ethical development and use of AI, based on fundamental EU rights.

The group will also advise the Commission on developing a “European AI Alliance” and support the implementation of a European initiative on AI that is due next month.

“Artificial intelligence has developed rapidly from a digital technology for insiders to a very dynamic key enabling technology with market creating potential,” said research commissioner Carlos Moedas.

“And yet, how do we back these technological changes with a firm ethical position? It bears down to the question, what society we want to live in?”

The group is expected to consider various ethical and social conundrums, including the explainability and transparency of AI systems, how to calculate the risks posed by interconnected AI devices, and who is morally responsible for them.

The deadline for applications to the high-level group is April 9. ®
