Transparent algorithms? Here's why that's a bad idea, Google tells MPs

Look, people use microwaves and cars, and they effectively operate as black boxes

Opening up the processes that underpin algorithms may well magnify the risk of hacking, widen privacy concerns and stifle innovation, Google has told MPs in the UK.

The comments came in Google's response to the House of Commons Science and Technology Committee's inquiry into algorithmic decision-making, which is questioning whether organisations should be more open about how machines influence such choices.

Google, whose business model has relied heavily on secretive algorithms, was careful to emphasise that it supported responsible use of the technology, even as it played down concerns over algorithms' "black box" nature.

"In the broadest sense, an algorithm is simply a set of instructions," Google whispered soothingly in the committee members' ears. "Cooking recipes, instructions on how to play a game, and walking directions are all everyday illustrations of what could be called algorithms."

Google argued some approaches to increased transparency, like disclosing code or data in its raw form, have their downsides. "A flood of technical detail [might] fail to provide adequate notice or understanding about the critical characteristics of a technology.

"Even when transparency is warranted, it is worth noting that there are many ways that it could be implemented."

For instance, the algorithmic model can be selectively tested with different types of input to "provide an indicator of whether it might be producing negative or unfair effects".

Or it could maybe give people a "visual indication of key metrics" that relate to the way the model works, without going into the "full complexity".
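For readers wondering what that first suggestion looks like in practice, here is a minimal sketch in Python. The scoring model, features and threshold are invented purely for illustration and are not drawn from anything Google has disclosed:

```python
# Minimal, hypothetical sketch of "selective input" testing of a black-box model.
# The scoring function, features and threshold are invented for illustration only.

def score(applicant: dict) -> float:
    # Stand-in for the opaque model under review, reachable only via input/output.
    base = 0.5 * applicant["income"] / 100_000
    # Deliberate quirk so the probe has something to find: one postcode area is penalised.
    penalty = 0.2 if applicant["postcode"].startswith("E") else 0.0
    return base - penalty

def approval_rate(applicants: list, threshold: float = 0.4) -> float:
    # Share of applicants whose score clears the decision threshold.
    return sum(score(a) >= threshold for a in applicants) / len(applicants)

# Two probe groups identical in every respect except the attribute being tested.
group_n = [{"income": 90_000, "postcode": "N1 9GU"} for _ in range(100)]
group_e = [{"income": 90_000, "postcode": "E1 6AN"} for _ in range(100)]

gap = approval_rate(group_n) - approval_rate(group_e)
print(f"Approval rate gap between otherwise-identical groups: {gap:.0%}")
```

The point is that an auditor never needs to see the code, only to watch how the outputs move when the inputs do.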

The real issue here is trust – and increased transparency "is not the only way" of achieving this, the Chocolate Factory went on.

"Many technologies in society operate as 'black boxes' to the user – microwaves, automobiles, and lights – and are largely trusted and relied upon without the need for the intricacies of these devices to be understood by the user."

The megacorp also offered up some legitimate concerns businesses have about letting people get too good a look under the hood – namely protecting trade secrets.

Data protection laws do allow companies to keep some of their cards close to their chest, Google said, and watering these down could impact innovation.

"This would be especially damaging for new entrants, who are likely to find it harder to compete with established players with the resources to swiftly imitate and create a competing service."

This view was echoed by industry body TechUK, which pointed out that such information is likely to be commercially sensitive.

"Creating a situation where UK firms wanting to develop or use algorithmic decision making technologies are held back by requirements that prevent innovation, creativity and competition in the market, while global competitors are not, would be detrimental to the government's post-Brexit vision of Global Britain," the group said.

Elsewhere in its submission, Google suggested that opening up the tech behind decision-making algorithms – even in a controlled environment – "magnifies the risk of gaming and hacking", and might even pose a problem for personal data protection.

"Typically, in order to review the output of an algorithm, you need to know the data that is used as input. However, in some instances this data will consist of personal information. While it's possible to de-identify and aggregate data and limit privacy exposure, that in turn will limit the extent to which you can analyse individual applications of the algorithm, or check for bias or incomplete data."

The committee's inquiry, which is also looking at the risk of errors and bias arising from increased reliance on algorithms in decision-making, is still accepting written submissions on the subject. ®
