Microsoft researchers smash homomorphic encryption speed barrier

Artificial intelligence CryptoNets chew data fast but keep it safe


Exclusive Microsoft researchers, in partnership with academia, have published a paper detailing how they have dramatically increased the speed of homomorphic encryption systems.

With a standard encryption system, data is scrambled and then decrypted when it needs to be processed, leaving it vulnerable to theft. Homomorphic encryption, first proposed in 1978 but only really refined in the last decade thanks to increasing computing power, allows software to analyze and modify encrypted data without decrypting it into plaintext first. The information stays encrypted while operations are performed on it.
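To see the principle in miniature, here is a minimal sketch using a toy, textbook Paillier scheme with tiny hard-coded primes – not the lattice-based schemes practical systems rely on – showing a party adding two numbers it can never read. The prime sizes and the specific values are illustrative only.

```python
# Toy additively homomorphic encryption (textbook Paillier, tiny primes).
# This only illustrates the idea described above: computing on ciphertexts
# without ever decrypting them. It is not the scheme Microsoft uses.
import random
from math import gcd

p, q = 293, 433               # toy primes; real keys use primes hundreds of digits long
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard Paillier generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: multiplying ciphertexts adds the hidden plaintexts,
# so whoever holds only the ciphertexts can compute a sum without decrypting.
a, b = 17, 25
c = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c))             # 42, computed without ever seeing 17 or 25
```

Paillier only supports addition on encrypted values; fully homomorphic schemes of the kind discussed here support both addition and multiplication, which is what makes evaluating a neural network possible.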

This has major advantages from a security standpoint. Hospital records can be examined without compromising patient privacy, financial data can be analyzed without opening it up to theft, and it's perfect for a computing environment where so much data is cloud-based on someone else's servers.

There is, of course, a problem. The first fully homomorphic encryption system, built by Craig Gentry (now an IBM Research cryptographer), was incredibly slow, taking 100 trillion times as long to perform calculations on encrypted data as on plaintext.

IBM has sped things up considerably, making calculations on a 16-core server over two million times faster than past systems, and has open-sourced part of the technology. But, in a new paper, Microsoft says it's made a huge leap forward in speed – and applied the encryption system to deep-learning neural networks.

Professor Kristin Lauter, principal research manager at Microsoft, told The Register that the team has developed artificial intelligence CryptoNets that can process encrypted data without having to decrypt the information thrown at the AI. The team claims that its CryptoNet-based optical recognition system is capable of making 51,000 predictions per hour with 99 per cent accuracy while studying a stream of encrypted input images.

The paper's abstract explains how this technology could be used in the cloud to process encrypted data without needing the decryption keys:

The encryption ensures that the data remains confidential since the cloud does not have access to the keys needed to decrypt it. Nevertheless, we will show that the cloud service is capable of applying the neural network to the encrypted data to make encrypted predictions, and also return them in encrypted form.

The key to Redmond's approach lies in the pre-processing work. The researchers need to know in advance the complexity of the math that will be applied to the data, so they can structure the neural network appropriately and keep the data loads small enough that the computer handling them isn't overworked.
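That restructuring amounts to keeping every layer expressible with additions and multiplications – for instance, swapping the usual non-polynomial activation functions for simple squaring, as the paper describes. The plaintext NumPy sketch below illustrates the shape of such a network; the layer sizes and random weights are invented for illustration and are not the paper's architecture.

```python
# A plaintext sketch of a "polynomial-friendly" network: only additions and
# multiplications, so every step could in principle be evaluated under
# homomorphic encryption. Sizes and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def square_activation(x):
    # Squaring replaces ReLU/sigmoid: it is a polynomial, so a scheme that
    # supports addition and multiplication can compute it on ciphertexts.
    return x * x

def dense(x, w, b):
    # A fully connected layer is just multiply-adds.
    return x @ w + b

# Toy model: 784-pixel (28 x 28) input -> 100 hidden units -> 10 class scores.
w1, b1 = rng.normal(size=(784, 100)) * 0.01, np.zeros(100)
w2, b2 = rng.normal(size=(100, 10)) * 0.01, np.zeros(10)

def forward(x):
    h = square_activation(dense(x, w1, b1))
    return dense(h, w2, b2)          # no softmax: argmax of the scores suffices

x = rng.random(784)                  # stand-in for a flattened 28 x 28 image
print(forward(x).argmax())           # predicted class
```

Under homomorphic encryption each of those multiply-adds and squarings would be carried out on ciphertexts rather than NumPy arrays, which is exactly why the shape of the computation has to be fixed before any encrypted data arrives.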

To make this possible, the team developed the Simple Encrypted Arithmetic Library (SEAL) – code it released last November. The encryption parameters have to be set up before the data run is attempted, and sized to keep the number of multiplication levels low.
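The quantity being budgeted for is multiplicative depth: the longest chain of multiplications the computation performs, which additions do not increase. The hypothetical helper below – an illustration, not SEAL's own API – simply counts those levels in a tiny expression tree to make the idea concrete.

```python
# Hypothetical illustration of "multiplication levels": the longest chain of
# multiplications through a computation. HE parameters must be chosen large
# enough for this depth, which is why the math must be known up front.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: str                          # "add", "mul", or "input"
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def mult_depth(node: Node) -> int:
    if node.op == "input":
        return 0
    child = max(mult_depth(node.left), mult_depth(node.right))
    return child + 1 if node.op == "mul" else child

# Depth of (x * x) + (x * x * x): the addition is "free", the x*x*x chain costs 2.
x = Node("input")
expr = Node("add", Node("mul", x, x), Node("mul", Node("mul", x, x), x))
print(mult_depth(expr))              # -> 2
```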

In testing, the team used 28 x 28-pixel images of handwritten digits taken from the Modified National Institute of Standards and Technology (MNIST) database and ran 50,000 samples through the network to train the system. They then tried a full run on an additional 10,000 digits to test its accuracy. The test rig was a PC with a single Intel Xeon E5-1620 CPU running at 3.5GHz, with 16GB of RAM, running Windows 10.

There's still a lot of work to be done, Lauter said, but the initial results look very promising and could be used for a kind of secure machine learning-as-a-service concept, or on specialist devices for medical or financial predictions: information sent over to the neural network remains encrypted all the way through the processing.

"I'm not in that part of the company's decision-making process, so can't guarantee when Microsoft will have a product using this technology," she said. "But from a research point of view, we are definitely going towards making it available to customers and the community." ®
