Complexity has broken computer security, says academic who helped spot Meltdown and Spectre flaws
Graz University of Tech's Daniel Gruss thinks natural sciences can save us
Complexity has broken cybersecurity, but a reappraisal of computer science can keep us safe.
So says Daniel Gruss, assistant professor in the Secure Systems group at Austria's Graz University of Technology. Gruss and his colleagues discovered some of the biggest recent security snafus, including the Meltdown and Spectre microprocessor design flaws, a working Rowhammer exploit, attacks on Intel SGX including Plundervolt, and many more besides.
Speaking at the Black Hat Asia conference, held virtually on Friday in the Singapore time zone, Gruss outlined his belief that while it is possible to make a system provably secure – with great effort – this is seldom done in production. In any case, the world has become accustomed to using mazes of interlinked, unproven, and often not-publicly-documented systems, he argued.
The resulting complexity makes it hard to say all parts of a system are secure, because the many subsystems and their interactions cannot all be checked or secured. Even when the design faults buried in that thicket of systems can only be found through the kind of deft investigation Gruss and his colleagues have used to hunt subtle side-channel data leaks, the insecurity remains.
The assistant professor also advanced his theory that as Moore's Law runs out, we'll use more and more systems with more and more processor and accelerator cores all interacting with each other, which means even more security risk. Building simpler systems is not an option, he believes, because humanity now has an expectation of pervasive, high-performance computing.
All of which lands us in a world where individual systems are flawed, interactions among them can't be secured, we build and link more computers every day even though we know that just increases risk, and we can't or won't change.
Happily, Gruss thinks there's a way to stop that mess of contradictions proving catastrophic.
His first suggestion is that computer science needs to rethink itself. Today, he said, the subject is considered a formal science. Gruss said that needs to change, for two reasons.
For starters, he said, the complexity of computers and networks now approaches that of structures, organisms, and populations seen in biology. His other reason is that adopting empirical methods is a better way of testing systems and their interactions.
"Our systems are getting more and more complex so we have to invest more and more time into studying them like nature," he said.
Gruss thinks that may be good news for security pros because the world will clearly need more of them, and plenty will have new skills to learn. "In 30 years I would expect we have more people studying and analysing systems, and more variety of security jobs," he told the virtual event.
He also hedged a little, saying it's not yet possible to predict the perspective from which we will need to assess security in the future.
"Without the internet, ransomware would not flourish as well as it does," he said, illustrating how a change in usage patterns can bring about completely unforeseen problems.
He also suggested that insurers will therefore have more of a role to play, because the empirical method may reveal more risks. And insurers do love designing products that ameliorate risk. Insurance cannot, of course, reduce the need for vigilance, and there is the argument that it encourages criminals.
"We'll have job security for everybody working in security analysis," Gruss said. "I suppose that is a good thing." ®