
Security is an architectural issue: Why the principles of zero trust and least privilege matter so much right now

We need to solve underlying problems, not increase complexity with point fixes

Systems Approach I’ve been interested in architecture – of the physical building variety, as distinct from computer or network architecture – for as long as I can remember. So I was pretty excited when I got to work in a Frank-Gehry-designed building at MIT in the late 2000s.

As it turns out, the building is something of a case study in the perils of high-profile architecture, with a litany of defects including mold, ice falling on passersby from the roof, and a conference room that made people (including both me and Frank Gehry) sea-sick.

While MIT eventually settled a lawsuit against Gehry and the builders, it was never entirely clear how many of the issues were a matter of design versus implementation. But it was pretty clear that architectural decisions have significant implications for those who have to live with them.

The Stata Center at MIT: no ice falling off the Gehry-designed building on this occasion

Which brings us to the Internet and its architectural shortcomings. While the Internet has been hugely successful in almost every dimension, even those most closely associated with it have pointed out that it lacked a solid architectural foundation on the matter of security.

Vint Cerf, for example, argued that the Internet’s original architecture had two basic flaws: too little address space, and no security. David Clark, the “architect of the Internet”, suggested [PDF] that how we apply the principle known as the “end-to-end argument” [PDF] to the Internet should be rethought in the light of what we now know about security and trust (among other things).

To paraphrase the concerns raised by Internet pioneers, the Internet has done really well at connecting billions of people and devices (now that the address space issues are dealt with in various ways), but it remains quite flawed in terms of security.

The original design goal of making it easy for a distributed set of researchers to share access to a modest number of computers didn’t require much security. The users mostly trusted each other, and security could be managed on end-systems rather than being a feature of the network.

In 1988, the Morris Worm famously illustrated the limitations of depending on end-system security alone. Yet today we still have an architecture in which the default is that every device can talk to every other device, and any time we want to enforce some other behavior we need to take specific action – such as inserting a firewall and explicitly blocking all traffic except some specified subset.
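To make that contrast concrete, here is a minimal sketch in Python – not any real firewall's API, and with entirely hypothetical addresses – of the default-allow stance versus the explicit allowlist a firewall imposes:

```python
# Sketch of the two stances described above; all addresses are
# hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str   # source address
    dst: str   # destination address
    port: int  # destination port

# The Internet's original stance: no policy at all, everything is allowed.
def default_allow(flow: Flow) -> bool:
    return True

# The firewall-style point fix: block everything except an explicit subset.
ALLOWED = {
    ("10.0.1.5", "10.0.2.8", 443),   # e.g. web client -> web server, HTTPS
    ("10.0.1.5", "10.0.2.9", 5432),  # e.g. app server -> database
}

def default_deny(flow: Flow) -> bool:
    return (flow.src, flow.dst, flow.port) in ALLOWED

if __name__ == "__main__":
    f = Flow("10.0.3.7", "10.0.2.9", 5432)
    print(default_allow(f))  # True: anything can talk to anything
    print(default_deny(f))   # False: not on the allowlist
```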

And that approach of adding point fixes, like firewalls, has led to a proliferation of security devices and technologies, none of which really changes the architecture, but which does increase the overall complexity of managing networks.

A few significant developments of the past decade give me cause for optimism. One is the emergence of “zero trust” approaches to security, which pretty much invert the original security stance of the Internet. The term was coined at Forrester in 2009, and zero trust can be thought of as a corollary to the principle of least privilege laid out by Saltzer and Schroeder in 1975:

Every program and every user of the system should operate using the least set of privileges necessary to complete the job

Rather than letting every device talk to every other device, zero trust starts from the assumption that no device should be trusted a priori, but only after some amount of authentication does it get access to a precisely scoped set of resources – just the ones necessary to complete the job.
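As a rough illustration of that flow – a sketch only, in which the credentials, identities, and resource names are invented and stand in for whatever authentication and policy machinery a real deployment would use:

```python
# Sketch of a zero-trust access check: nothing is trusted a priori, and
# successful authentication yields only a narrowly scoped set of resources.
# Credentials, identities, and resource names here are hypothetical.

VALID_CREDENTIALS = {"build-server": "s3cret-token"}

# Least privilege: each identity is granted only what its job requires.
GRANTS = {
    "build-server": {"artifact-repo:write", "source-repo:read"},
}

def authenticate(identity: str, credential: str) -> bool:
    return VALID_CREDENTIALS.get(identity) == credential

def authorize(identity: str, resource: str) -> bool:
    # Default deny: access is allowed only if explicitly granted.
    return resource in GRANTS.get(identity, set())

def request_access(identity: str, credential: str, resource: str) -> bool:
    return authenticate(identity, credential) and authorize(identity, resource)

if __name__ == "__main__":
    print(request_access("build-server", "s3cret-token", "source-repo:read"))  # True
    print(request_access("build-server", "s3cret-token", "payroll-db:read"))   # False
    print(request_access("build-server", "wrong-token", "source-repo:read"))   # False
```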

Zero trust implies that you can no longer establish a perimeter firewall and let everything inside that perimeter have unfettered access to everything else. This idea has been adopted by approaches such as Google’s BeyondCorp in which there is no concept of a perimeter, but every system access is controlled by strict authentication and authorization procedures. From my perspective, the ability to enforce zero trust has also been one of the major benefits of software-defined networking (SDN) and network virtualization.

In the early days of network virtualization, my Nicira colleagues had a vision that everything in networking could eventually be virtualized. At the time I joined the team, the Nicira product had just virtualized layer 2 switching and layer 3 routing was about to ship. It took a little while after the VMware acquisition of Nicira for us to make our way up to layer 4 with the distributed firewall, and in my mind that was the critical step to making a meaningful impact on security.

Now, rather than putting a firewall at some choke point and forcing traffic to pass through it, we could specify a precise set of policies about which devices (typically virtual machines in those days) could communicate with each other and how. Rather than operating with “zones” in which lots of devices that didn’t need access to each other could nevertheless communicate, it was now a relatively simple matter to specify precise, fine-grained security policies governing which devices could talk to which, and how.
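Here is a sketch of what such a policy might look like, written as plain Python rather than any particular product's policy language; the workload tags and ports are invented for illustration:

```python
# Sketch of fine-grained, distributed-firewall-style rules keyed on
# workload tags rather than network location. A real system would push
# such rules down to every hypervisor or host, enforcing them right
# next to each workload instead of at a single choke point.

RULES = [
    # (source tag, destination tag, destination port)
    ("web", "app", 8443),
    ("app", "db",  5432),
]

def allowed(src_tag: str, dst_tag: str, port: int) -> bool:
    # Default deny: only explicitly listed flows are permitted.
    return (src_tag, dst_tag, port) in RULES

if __name__ == "__main__":
    print(allowed("web", "app", 8443))  # True: web tier may reach the app tier
    print(allowed("web", "db", 5432))   # False: web tier never talks to the database
```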

A similar story played out with SD-WAN. There are lots of reasons SD-WAN found a market, but one of them was that you no longer had to backhaul traffic from branch offices to some central firewall to apply your security policy. Instead you could specify the security policy centrally but implement it out at the branches – a significant win as more and more traffic headed for cloud services rather than centralized servers in a corporate data center.
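A toy sketch of that split between central specification and distributed enforcement – again in Python, with invented site names and policy entries:

```python
# Sketch of central policy specification with distributed enforcement.
# A controller holds one policy; each branch applies a local copy, so
# traffic bound for cloud services need not be hauled back to a central
# firewall. Site names and rules are invented for illustration.

CENTRAL_POLICY = {
    # destination category -> action
    "saas-crm":     "allow-direct",  # break out locally to the cloud service
    "corporate-dc": "allow-tunnel",  # send over the WAN to the data center
    "unknown":      "deny",
}

class BranchEdge:
    def __init__(self, site: str, policy: dict):
        self.site = site
        self.policy = dict(policy)  # local copy pushed from the controller

    def handle(self, destination_category: str) -> str:
        return self.policy.get(destination_category, self.policy["unknown"])

if __name__ == "__main__":
    branches = [BranchEdge(site, CENTRAL_POLICY) for site in ("london", "austin")]
    for branch in branches:
        print(branch.site, branch.handle("saas-crm"), branch.handle("file-sharing"))
```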

This paradigm of specifying policy centrally and having software systems implement it in a distributed manner also applies to securing modern, distributed applications. Service meshes are an emerging technology that applies this paradigm, and a topic we’ll go deeper on in a future article.

So while it is too early to declare success on the security front, I do think there are reasons for optimism. We don’t just have an ever-expanding set of point fixes to an architectural issue. We actually have some solid architectural principles (least privilege, zero trust) and significant technological advances (SDN, intent-based networking, etc.) that are helping to reshape the landscape of security. ®

Larry Peterson and Bruce Davie are the authors of Computer Networks: A Systems Approach and the related Systems Approach series of books. All their content is open source and available on GitHub. You can find them on Twitter and their writings on Substack.
