Security is an architectural issue: Why the principles of zero trust and least privilege matter so much right now

We need to solve underlying problems, not increase complexity with point fixes


Systems Approach I’ve been interested in architecture – of the physical building variety, as distinct from computer or network architecture – for as long as I can remember. So I was pretty excited when I got to work in a Frank-Gehry-designed building at MIT in the late 2000s.

As it turns out, the building is something of a case study in the perils of high-profile architecture, with a litany of defects including mold, ice falling on passersby from the roof, and a conference room that made people (including both me and Frank Gehry) sea-sick.

While MIT eventually settled a lawsuit against Gehry and the builders, it was never entirely clear how many of the issues were a matter of design versus implementation. But it was pretty clear that architectural decisions have significant implications for those who have to live with them.

The Stata Center at MIT – no ice falling off the Gehry-designed building on this occasion

Which brings us to the Internet and its architectural shortcomings. While the Internet has been hugely successful in almost every dimension, even those most closely associated with it have pointed out that it lacked a solid architectural foundation on the matter of security.

Vint Cerf, for example, argued that the Internet’s original architecture had two basic flaws: too little address space, and no security. David Clark, the “architect of the Internet”, suggested [PDF] that how we apply the principle known as the “end-to-end argument” [PDF] to the Internet should be rethought in the light of what we now know about security and trust (among other things).

To paraphrase the concerns raised by Internet pioneers, the Internet has done really well at connecting billions of people and devices (now that the address space issues are dealt with in various ways), but it remains quite flawed in terms of security.

The original design goal of making it easy for a distributed set of researchers to share access to a modest number of computers didn’t require much security. The users mostly trusted each other, and security could be managed on end-systems rather than being a feature of the network.

In 1988, the Morris Worm famously illustrated the limitations of depending on end-system security alone. So today we have an architecture where the default is that every device can talk to every other device, and any time we want to enforce some other behavior, we need to take some specific action – such as inserting a firewall and explicitly blocking all traffic except some specified subset.
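To make that inversion concrete, here is a minimal sketch – plain Python rather than any real firewall's configuration language, with made-up addresses and rules – of the two defaults: the Internet's "allow everything" stance, and a firewall's "deny everything except an enumerated subset".

```python
# Hypothetical flows and rules, purely for illustration.
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str       # source IP address
    dst: str       # destination IP address
    dst_port: int  # destination port

def internet_default(flow: Flow) -> bool:
    """The architectural default: every device may talk to every other device."""
    return True

# Inserting a firewall inverts the default: block everything,
# then allow only a specified subset of traffic.
ALLOWED = [
    ("10.0.1.0/24", "10.0.2.10", 443),   # e.g. branch clients -> web server, HTTPS only
]

def firewall_default_deny(flow: Flow) -> bool:
    """Explicitly deny all traffic except the enumerated exceptions."""
    for src_net, dst, port in ALLOWED:
        if (ipaddress.ip_address(flow.src) in ipaddress.ip_network(src_net)
                and flow.dst == dst and flow.dst_port == port):
            return True
    return False
```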

And that approach of adding point fixes, like firewalls, has led to a proliferation of security devices and technologies, none of which really changes the architecture, but all of which increase the overall complexity of managing networks.

A few significant developments in the past decade give me reason to think there may be cause for optimism. One is the emergence of “zero trust” approaches to security, which pretty much invert the Internet’s original security model. The term was coined at Forrester in 2009 and can be thought of as a corollary to the principle of least privilege laid out by Saltzer and Schroeder in 1975:

Every program and every user of the system should operate using the least set of privileges necessary to complete the job

Rather than letting every device talk to every other device, zero trust starts from the assumption that no device should be trusted a priori, but only after some amount of authentication does it get access to a precisely scoped set of resources – just the ones necessary to complete the job.
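In code, the decision looks roughly like the following sketch. The identities, credentials, and stub authenticate() are hypothetical; this is the shape of a zero trust check, not any particular product’s API. Nothing is trusted until it authenticates, and even then it only sees its own narrowly scoped grants.

```python
# Hypothetical grants: identity -> the least set of privileges necessary to complete the job.
GRANTS = {
    "build-agent-7":     {"artifact-store:write"},
    "alice@example.com": {"wiki:read", "git:push"},
}

def authenticate(credential: str) -> str | None:
    """Verify a presented credential (certificate, token, ...) and return an identity."""
    return {"token-abc": "alice@example.com"}.get(credential)  # stand-in for real verification

def authorize(credential: str, resource: str) -> bool:
    identity = authenticate(credential)      # no device or user is trusted a priori
    if identity is None:
        return False                         # unauthenticated: no access at all
    return resource in GRANTS.get(identity, set())   # only the precisely scoped grants

assert authorize("token-abc", "git:push") is True
assert authorize("token-abc", "artifact-store:write") is False   # outside the scoped set
```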

Zero trust implies that you can no longer establish a perimeter firewall and let everything inside that perimeter have unfettered access to everything else. This idea has been adopted by approaches such as Google’s BeyondCorp in which there is no concept of a perimeter, but every system access is controlled by strict authentication and authorization procedures. From my perspective, the ability to enforce zero trust has also been one of the major benefits of software-defined networking (SDN) and network virtualization.

In the early days of network virtualization, my Nicira colleagues had a vision that everything in networking could eventually be virtualized. At the time I joined the team, the Nicira product had just virtualized layer 2 switching, and layer 3 routing was about to ship. It took a little while after the VMware acquisition of Nicira for us to make our way up to layer 4 with the distributed firewall, and in my mind that was the critical step to making a meaningful impact on security.

Now, rather than putting a firewall at some choke point and forcing traffic to pass through it, we could specify a precise set of policies about which devices (typically virtual machines in those days) could communicate with each other and how. Rather than operate with “zones” in which lots of devices that didn’t need access to each other nevertheless could communicate, it was now a relatively simple matter to specify precise and fine-grained security policies regarding how devices should communicate.
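A rough sketch of the difference, with invented VM names, zones, and ports: the zone model admits any app-tier VM to any database VM, while the distributed-firewall model states exactly which pair may talk, and on which port, with the rule enforced at each VM’s virtual NIC.

```python
# Zone-based model: coarse rules between groups of VMs (all names hypothetical).
ZONE_OF = {"web-1": "web", "app-1": "app", "app-2": "app", "db-1": "db"}
ZONE_RULES = {("app", "db")}            # broad: every app VM may reach every db VM

def zone_allows(src_vm: str, dst_vm: str) -> bool:
    return (ZONE_OF[src_vm], ZONE_OF[dst_vm]) in ZONE_RULES

# Distributed-firewall model: a precise rule per pair and port.
PAIR_RULES = {("app-1", "db-1", 5432)}  # only app-1 may reach db-1, and only on the DB port

def pairwise_allows(src_vm: str, dst_vm: str, dst_port: int) -> bool:
    return (src_vm, dst_vm, dst_port) in PAIR_RULES

print(zone_allows("app-2", "db-1"))            # True  - the zone model lets app-2 through
print(pairwise_allows("app-2", "db-1", 5432))  # False - the fine-grained policy does not
```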

A similar story played out with SD-WAN. There are lots of reasons SD-WAN found a market, but one of them was that you no longer had to backhaul traffic from branch offices to some central firewall to apply your security policy. Instead you could specify the security policy centrally but implement it out at the branches – a significant win as more and more traffic headed for cloud services rather than centralized servers in a corporate data center.
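The pattern is easy to caricature in a few lines (the application names and actions below are invented, not any vendor’s SD-WAN policy model): the policy is written once, centrally, then compiled into tables that each branch device enforces locally, so cloud-bound traffic no longer has to be hauled back to a central firewall just to have policy applied.

```python
# One centrally specified intent (hypothetical applications and actions).
CENTRAL_POLICY = [
    ("saas-crm", "breakout-direct-to-cloud"),   # no backhaul for cloud-bound traffic
    ("payroll",  "tunnel-to-datacenter"),       # sensitive app still goes via HQ
    ("*",        "drop"),                       # default for everything else
]

def compile_for_branch(branch_id: str) -> list[tuple[str, str]]:
    """Render the central intent into a table that one branch enforces locally."""
    return list(CENTRAL_POLICY)   # in practice, per-branch details would be filled in here

branch_tables = {b: compile_for_branch(b) for b in ("branch-nyc", "branch-lon")}
```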

This paradigm of specifying policy centrally and having software systems that implement it in a distributed manner also applies to securing modern, distributed applications. Service meshes are an emerging technology that applies this paradigm, and a topic that we’ll go deeper on in future.

So while it is too early to declare success on the security front, I do think there are reasons for optimism. We don’t just have an ever-expanding set of point fixes to an architectural issue. We actually have some solid architectural principles (least privilege, zero trust) and significant technological advances (SDN, intent-based networking, etc.) that are helping to reshape the landscape of security. ®

Larry Peterson and Bruce Davie are the authors of Computer Networks: A Systems Approach and the related Systems Approach series of books. All their content is open source and available on GitHub. You can find them on Twitter and their writings on Substack.
