Trust, not tech, is holding back a safer internet

Excuse me, citizen, did you packet this data yourself?

Opinion The tech sector is failing at cybersecurity. Global spending on the stuff runs to some $190 billion a year, a quarter of the US defense budget. That hasn't stemmed what's estimated to be trillions of dollars in annual cybercriminal damage. People are fond of saying that the Wild West days of the internet are over, but on those numbers an 1875 Dodge City bank vault looks like Fort Knox.

So where's the sheriff? There are plenty of posses: no end of companies, small and large, selling security by the bushel. Firewalls, scanners, heuristic, intrinsic, behavioral, managed, managerial, in-cloud, on-prem: you can mix and match the buzzwords and buy into every new idea. What you can't do is make your systems safe.

If you do want a safe bet in cybersecurity, it's that things aren't going to change any time soon without some fundamental shift in how the market works – if 40 years of constant failure can be called working.

We have so little reason to trust what's on offer or those offering it. Several stories last week showed as much. Apple, which makes a big play of intrinsic platform security, is heading to court for ignoring user consent and silently gathering app data anyway. Microsoft, even as it announces the extension of its security platform into Linux, reveals it fumbled switches on its service infrastructure and took business-critical access away from its customers. These are the big shots in town, but they can't shoot straight.

It's almost as if we can't rely on the private sector to protect us against crime. Guess what: we never could and we never will. The state has to take on that role – usually late, usually badly, and usually against the wishes of those who like their crimes kept in the private sector, but usually to better effect than the alternatives.

Public governance and policing of cybercrime is a mixed bag. After a decade or so of mischief, most legislatures got around to defining and outlawing computer misuse by unauthorized parties in the 1990s. If you get caught, there's at least a book to throw at you. It's the catching that's the problem.

State agencies concentrate on areas where IT is used to further more traditional crimes – drugs, extortion, organized theft and international money laundering, all those fun things. Less so the cybercrime that depends on the characteristic ability of the internet to let small groups operate at scale to commit data-centric badness and move on quickly from target to target. Effective policing here needs to replicate what works in the physical world: inhabit the places where the crimes take place, work with the consent of the general population, and become proficient with the tools, thought processes, and human networks of the criminals.

Would you trust the police – by extension, the state – with your data, personal or corporate? Bit of a problem there, especially with so many governments constantly banging on about forcing encryption standards open whether you like it or not. Yet that's the accommodation we've reached with the state over hundreds of years of postal services and old-school telecommunications. We even consent to the massive increase in our legal vulnerability surface that comes when we buy a car.

And there are points in our virtual lives where trust just has to be given, if not in the inherent goodness of organizations then at least in our ability to take any misdemeanors to task. Even with end-to-end encryption and without active malicious attacks, your ISP and mobile providers know a great deal about you. Run services in the cloud as an organization, or use a VPN as an individual, and that's a lot more implicit trust.

With attention to transparency, responsibility, and accountability, the state's approach to controlling cybercrime would be a lot more effective. Like all sleuthing, controlling cybercrime is at heart a problem of data acquisition and pattern recognition: the more you can do of both, the better at it you can be – and the greater the risks of abuse.

What sort of automated data gathering would you consent to, if you knew and trusted its purpose, nature, and limits? If there were a national endpoint security system, would you opt in? How would you decide? These are very hard questions that go to the heart of the social contract, but that's a conversation we'll have to have with ourselves and with the politicians.

Criminality didn't end when the Wild West got its rule of law, and we never get the police we really want, just those we can put up with. We know we can't put up with cybersecurity that demands a defense-budget-sized investment in return for a global crime wave. We need a better sheriff: let's draw up the job description. ®
