Google DeepMind's use of 1.6m Brits' medical records to test app was 'legally inappropriate'
Privacy watchdog scolds London hospital
Updated Google's use of Brits' real medical records to test a software algorithm that diagnoses kidney disease was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health.
In April 2016 it was revealed the web giant had signed a deal with the Royal Free Hospital in London to build an application called Streams, which can analyze patients' details and identify those who have acute kidney damage. The app uses a fixed algorithm, developed with the help of doctors, rather than anything fancy like AI. It essentially takes things like your blood pressure, age, symptoms, and other personal information, and computes the chance you've suffered acute kidney damage. This is then used to alert docs to provide treatment.
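For readers wondering what a fixed, rule-based check of that sort looks like in code, here's a minimal, hypothetical sketch in Python. The field names, weights, and thresholds below are invented purely for illustration; they are not DeepMind's or the NHS's actual acute kidney injury algorithm, which is more involved than the article's summary suggests.

```python
# Hypothetical sketch of a fixed (non-AI) risk-scoring rule of the kind the
# article describes. All fields, weights, and thresholds are invented for
# illustration -- this is NOT the actual Streams or NHS AKI algorithm.
from dataclasses import dataclass


@dataclass
class PatientRecord:
    age: int
    systolic_bp: int            # mmHg
    creatinine: float           # current serum creatinine, umol/L
    baseline_creatinine: float  # patient's historical baseline, umol/L


def aki_alert(p: PatientRecord) -> bool:
    """Return True if this record should trigger an alert to clinicians."""
    score = 0
    if p.baseline_creatinine > 0 and p.creatinine / p.baseline_creatinine >= 1.5:
        score += 2              # sharp rise over the patient's own baseline
    if p.systolic_bp < 90:
        score += 1              # hypotension
    if p.age >= 65:
        score += 1              # age-related risk
    return score >= 2           # fixed threshold -- no learned parameters


if __name__ == "__main__":
    record = PatientRecord(age=72, systolic_bp=85,
                           creatinine=180.0, baseline_creatinine=100.0)
    print(aki_alert(record))    # True: flag for clinical review
```

The point of the sketch is simply that every rule and threshold is written down by hand, with nothing learned from training data – which is what distinguishes a fixed algorithm like Streams from the machine-learning systems DeepMind is better known for.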
The software – developed by DeepMind, Google's highly secretive machine-learning nerve center – was first tested with simulated data. Then it was tested using 1.6 million sets of real NHS medical files provided by the London hospital. However, not every patient was aware that their data was being given to Google to test the Streams software. Streams has been deployed in wards, and thus now handles real people's details, but during development, it also used live medical records – as well as simulated inputs – and that's what's got the Royal Free into trouble.
Dame Fiona has told the hospital's medical director Professor Stephen Powis that he overstepped the mark, and that there was no consent given by people to have their information used in this way pre-deployment.
"It is my view, and that of my panel, that purpose for the transfer of 1.6 million identifiable patient records to Google DeepMind was for the testing of the Streams application, and not for the provision of direct care to patients," she wrote in a letter dated February, which was leaked to Sky News on Monday.
The pact between the hospital and Google raised eyebrows at the time, but was sold as a legal way to develop apps using sensitive data. It now appears that the US tech goliath and the Royal Free blew it.
"This letter shows that Google DeepMind must know it had to delete the 1.6 million patient medical records it should never have had in the first place," said Phil Booth, coordinator for medical privacy pressure group medConfidential.
"There were legitimate ways for DeepMind to develop the app they wanted to sell. Instead they broke the law, and then lied to the public about it. Such gross disregard of medical ethics by commercial interests – whose vision of 'patient care' reaches little further than their business plan – must never be repeated."
The UK Information Commissioner's Office is investigating the matter and is expected to rule shortly on the Google-Royal Free agreement after academic outcry over the software project.
"We have been very grateful to Dame Fiona for her support [and] advice during this process, and we would absolutely welcome further guidance on this issue," Professor Powis told Sky News.
DeepMind denies any wrongdoing. Its spokespeople stressed to The Register on Monday night that Streams has real benefits for patients.
"Nurses and doctors have told us that Streams is already speeding up urgent care at the Royal Free and saving hours every day. The data used to provide the app has always been strictly controlled by the Royal Free and has never been used for commercial purposes or combined with Google products, services or ads – and never will be," said a DeepMind spokesman.
"Clinicians at the Royal Free put patient safety first by testing Streams with the full set of data before using it to treat patients. Safety testing is essential across the NHS, and no hospital would turn a new service live without testing it first." ®
Editor's note: This article was updated after publication after we learned that Streams is not an AI program, but a fixed algorithm developed with the help of medical professionals. We're happy to clarify this point.