Google-stablemate DeepMind is creating a blockchain-like system to show how sensitive medical data passing through its processors will be used, allowing healthcare professionals to check if data has been tampered with.
Its healthcare arm, DeepMind Health, is working to improve medical diagnoses with machine learning tools. Large amounts of confidential data are required to develop these tools – something DeepMind hasn’t always been trusted to handle.
Last year, DeepMind was criticized for gaining access to current and historical patient records for 1.6 million individuals across three hospitals run by London's Royal Free NHS Trust – access that extended well beyond the scope of the research it had publicly disclosed.
The announcement of Verifiable Data Audit this week is an attempt to gain back some of the lost trust. Data processed by DeepMind’s servers will be logged, and a “special digital ledger” will also be added to explain how and why the data has been used.
For example, if results from a blood test are uploaded, the ledger will show they were used by an algorithm checking for possible acute kidney injury.
The ledger and the entries within it will work in a similar way to a blockchain (a distributed database), the company explained in a blog post by Mustafa Suleyman, cofounder and head of applied AI, and Ben Laurie, head of security and transparency. Like a blockchain, the ledger is append-only: entries can't be erased, and the list of entries only ever grows. A value known as a “cryptographic hash” will be generated every time an entry is added to the ledger.
The hash value covers not just the latest entry but all the previous values in the ledger as well, so the data log can be tracked over time.
It’s impossible to change a specific entry secretly, as this alters the hash value and disrupts the data trail.
DeepMind compares the system to Jenga, a game of physical skill. “You might try to gently take or move one of the pieces – but due to the overall structure, that’s going to end up making a big noise!”
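DeepMind hasn't published implementation details, but the mechanism described above – each new hash covering the entry plus everything before it – can be sketched as a simple hash chain. The class and entry strings below are illustrative assumptions, not DeepMind's actual design:

```python
import hashlib

GENESIS = "0" * 64  # assumed starting value for an empty ledger


def chain_hash(prev_hash: str, entry: str) -> str:
    """Hash the new entry together with the previous hash,
    so each value implicitly covers the whole history."""
    return hashlib.sha256((prev_hash + entry).encode()).hexdigest()


class AuditLedger:
    """Append-only log: entries are added, never erased."""

    def __init__(self):
        self.entries = []
        self.hashes = [GENESIS]

    def append(self, entry: str) -> None:
        self.hashes.append(chain_hash(self.hashes[-1], entry))
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain from scratch; any edited
        entry changes its hash and breaks the trail."""
        h = GENESIS
        for entry, expected in zip(self.entries, self.hashes[1:]):
            h = chain_hash(h, entry)
            if h != expected:
                return False
        return True


ledger = AuditLedger()
ledger.append("blood test result used by kidney-injury algorithm")
ledger.append("scan accessed for model validation")
print(ledger.verify())        # chain intact

ledger.entries[0] = "edited"  # a covert change to an old entry...
print(ledger.verify())        # ...is detected: the trail no longer matches
```

Quietly rewriting an old entry is the Jenga move: the recomputed hashes downstream of the edit all change, so the tampering is immediately visible.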
The system's online interface will be accessible to authorized hospital staff, so data entries can be scrutinized in real time and alerts raised if anyone spots unusual or suspicious activity.
It seems that DeepMind has been interested in this idea for a while, as it published a paper [PDF] in 2015 detailing how data logs could be secured. ®