Among his many responsibilities, Chris Young is the Cisco executive charged with leading its security challenge. Last week at Cisco Live! Australia, Vulture South talked to Young about securing the Internet of Everything.
El Reg: Cisco has put a $19 trillion value on the Internet of Things: how do we stop it becoming a $19 trillion honeypot?
Young: What's important to know is that, at the end of the day, most of the behaviour that we're seeing – most of the risks and challenges of the Internet of Everything – mirror the risks and challenges of society. When I think about cyber-attacks and cyber-security issues, at the end of the day, what are people doing?
They're stealing money, they're stealing information, or they're trying to disrupt someone's operations. Those are all problems that we see in the physical world. It's just magnified and scaled in a way that we can't contemplate in our own physical world. But all the motivations are human, at the end of the day.
It's certainly reasonable to expect, as monetary opportunities are created, [that] there will be criminals that will seek to exploit those opportunities for their own gain.
That's going to happen as we see the evolution of IoE.
El Reg: Canonical's Mark Shuttleworth [recently] wrote that proprietary firmware is an incurable problem ... no matter how invisible the firmware looks when somebody writes it, blows it onto silicon, sticks it in a home router, and ships the router out to people.
It's only a matter of the attacker's determination: if somebody decides "I'm going to see if there is an embedded factory password in that particular device", they'll find it.
How feasible is it to say "lift all the value-add one step away from the firmware, make the firmware a visible and open set of API handles, and stop trying to believe that the firmware is the foundation of our value-add"?
Young: The reality is that, in security, criminals are going to go after wherever they're going to get the most return for whatever they have in mind. I focus less on the vulnerabilities of any given element of the stack, if you will – hardware, software, etcetera. What's important is that there are always going to be vulnerabilities in any product that exists.
A good example: some vulnerabilities in products that could be exploited are things that were put there so the product could be tested.
The point is that the context becomes very important in thinking about the security model. If we follow the context, we have to follow the value if we want to understand what we need to protect. That's really important.
Most of the movement in the industry right now is that we're moving to more software-based models, more value in software. Hardware's still important because you have to get performance and scalability, but I do expect that we can get the right level of security, whether it be on consumer devices or others, by thinking about where attacks might happen, and being in a position to mitigate that.
Do we need updates?
El Reg: If we look at the kinds of devices that are expected to prevail in the Internet of Things model, a lot of them are going to be small and not very smart, and a lot of the stuff hasn't been created yet. Now that we have an opportunity to do the architecture before we do the product: if we're looking at a sensor that we expect to last two years on battery, talking over RPL and doing a very limited set of functions – doesn't it make sense to say "we only need a tiny bit of execution on that device"? Everybody would like to have updateability, but that's not a big value point.
Can't we just say the best way to stop someone hijacking the device is to freeze it? If we then want to change it, it's got to be a compelling enough reason to go out and put a new one there. Doesn't that make more sense than the security vulnerability that says "if we can push software out to those things, someone else can, too"?
Young: This is the conversation I have with customers all the time. All context is not the same – you have to optimise your security model for the business context, and the environment in which you operate.
If you've got a device that's 100 miles offshore on an oil platform, then yes, you're going to need to be able to remotely manage and update that, because you can't physically go and swap something out. But if you've got virtual machines in the data centre, maybe you want to kill those every night, and reboot them in the morning with a “golden image”, so you've got really good certainty that nothing bad has happened to that machine.
The context in which those two different environments operate is very deterministic around what the security model will and won't allow.
It's going to depend on the context – on what kinds of use-cases you're talking about. You talked about very small devices. For small consumer, wearable tech, if you've got a compromised device, maybe the best thing to do is treat it like a compromised credit card number.
As soon as the bank sees you've got a compromised credit card, what do they do? They send you a new one.
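Young's "golden image" idea for data-centre virtual machines can be sketched as a nightly job. This is a minimal illustration only: it assumes libvirt's virsh CLI with a snapshot named "golden" on each guest, and the guest names are hypothetical – nothing here is a Cisco tool.

```python
# Hedged sketch: nightly "golden image" reset for a small VM fleet.
# Assumes libvirt's `virsh snapshot-revert <domain> <snapshot>` command
# and a known-good snapshot called "golden" on every guest.
import subprocess

VMS = ["web-01", "web-02", "db-01"]   # hypothetical guest names
SNAPSHOT = "golden"                   # known-good image to revert to

def reset_commands(vms, snapshot=SNAPSHOT):
    """Build the virsh command for each guest without running anything."""
    return [["virsh", "snapshot-revert", vm, snapshot] for vm in vms]

def reset_fleet(vms, dry_run=True):
    """Kill each day's state by reverting every guest to the golden image."""
    cmds = reset_commands(vms)
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))          # show what a cron job would run
        else:
            subprocess.run(cmd, check=True)  # actually revert the guest
    return cmds

if __name__ == "__main__":
    reset_fleet(VMS)  # dry run: prints the commands, reverts nothing
```

A cron entry running this with `dry_run=False` at midnight would complete the pattern: discard whatever happened to the machine during the day and reboot it from a known-good image in the morning.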
The Internet of i-Things
El Reg: The smart watch will get bought because it is cool. It will get connected to the smart phone, because that's the model that Google wants, that Samsung wants, that Apple wants, and so on.
Is the industry setting itself up for a really big backlash, on the basis that in six months' time what you have is a lot more vulnerability than is balanced by the utility of the device? For a toy, my health records have popped up in Bulgaria, and most of the use-case was "wow, cool toy". That's too much vulnerability for too little utility, isn't it?
Young: That decision – this is where I do think the user has to take some responsibility for their own interests. Security is still everyone's responsibility, whether you're talking about your personal security, your home, your business – the individual still has a role.
You can't assume that there's some way to outsource all security care-abouts. A good example: in my own home, I spend a lot of time worrying about the things you just talked about – what vulnerabilities exist. I worry a lot, when people come to my house and do work, about who gets what alarm code. Those are important considerations.
At the end of the day, the individual is still responsible for their awareness around security. And it goes even further than that. Children – you raise your kids, you teach them who they should be suspicious of. "Don't talk to strangers", "Make sure when you go on a class field trip, you hold hands with somebody in your class, so you don't get lost."
As human beings, we start to learn these principles very early in life, and there's no reason we shouldn't start thinking about how that extends to a world of connected devices and information about ourselves.
It's inevitable that we have to do that. We can't just assume that because there's this technical world that exists in the ether, that we don't see every day, that we're not responsible for our security in that context.
El Reg: And here, we get an easy illustration, or at least some scope of that boundary. One of the solar equipment vendors is in every respect held up as the gold standard in the industry, [yet] every piece of their equipment ships with an unchangeable factory default password – well, we can never connect that to the Internet, can we?
Young: In that world you put the consumer in the position where they have to choose not to use that product, or live with that problem.
This goes back to my point that every company is a technology company, and every company is a security company. Here's the thing: in Cisco, I'm responsible for the security business, security is my vertical. We sell security products that help a customer solve a security problem.
But I also am responsible for security as a horizontal, so I have teams of guys that work with other groups – the data centre, enterprise networking, service provider networks – to ensure that we have some basic security practices about how we develop our products.
[This is] the Cisco Secure Development Life Cycle. We train people – security ninjas – we certify people, and we teach our developers to build our software and hardware products with basic security tenets in mind, like not hard-coding passwords.
That's the kind of thing that every company is going to have to do, particularly as they start to think about connecting their users and devices to some broader ecosystem, connected to the Internet.
Everybody's going to have to follow a secure development life cycle. Everybody's going to need basic, foundational security. Identity is going to become important in all of this – and that's not too much to ask of any vendor, of any product.
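The "no hard-coding passwords" tenet Young mentions can be illustrated in a few lines: credentials are fetched at runtime rather than baked into the firmware image, and their absence is a hard failure instead of a silent factory default. This is a generic sketch – the environment-variable name DEVICE_ADMIN_PASSWORD is hypothetical, not anything from Cisco's Secure Development Life Cycle.

```python
# Hedged sketch: fetch a credential at runtime instead of shipping a
# hard-coded factory default. DEVICE_ADMIN_PASSWORD is an illustrative
# name, not a real product's variable.
import os

def get_admin_password(env=os.environ):
    """Return the admin credential from the environment.

    Refuses to fall back to a baked-in default: a missing credential is
    a loud failure, not a quietly exploitable factory password.
    """
    password = env.get("DEVICE_ADMIN_PASSWORD")
    if not password:
        raise RuntimeError(
            "DEVICE_ADMIN_PASSWORD is not set; "
            "no factory default is shipped in the firmware")
    return password
```

The same pattern applies whether the source is an environment variable, a secrets manager, or a per-device credential provisioned at manufacture – the point is that no universal password survives in the shipped code.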