Sysadmin blog

The security defenses available to us are clumsy and inadequate. Anti-malware applications are grand at dealing with well-known threats, but pathetic and worthless at dealing with emerging ones. Software vendors are too entrenched in politics, feasibility studies and bad attempts at public relations to bother patching their software properly and promptly.
Meanwhile our economy becomes ever more dependent on the interconnectivity of computer systems: we have come too far to go back. Governments know this and see the failure of academia, corporations and private citizens to mitigate the threats. If we, as corporations and individuals, want the internet to remain as free and open as it is today, then we have to solve these problems before the governments of the world try to do it for us.
The internet was built on the presumption of innocence. Basic protocols such as email don’t inherently contain a way to verify that the sender is legitimate. We all know how well that has worked out. Peer-to-peer protocols have many legitimate uses, but their nature lends them to illegal uses and so the vast majority of peer-to-peer traffic infringes copyright. Even the venerable Domain Name System is under attack: most new domain registrations are malicious.
It could be that the only way to preserve the freedom of the internet is to do away with the presumption of innocence. I believe that, if we do not do this in the next ten years, we will lose control of the internet to government and we will never get it back. Look at email. Currently we rely on blacklists (such as Spamhaus) to tell us which email domains exist only to send spam. As noble as these projects are, this is completely backwards. A series of central registries with whom operators of legitimate email servers can (freely) register is the only way to make spam go away. If you are caught spamming, you fall off the planetary whitelist, and getting back on should not be easy.
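The registry idea above amounts to a simple policy check at delivery time: presume guilt, and accept mail only from senders whose domain appears on a subscribed whitelist. A minimal sketch, assuming a local copy of such a registry feed (the domains and the feed contents here are placeholders, not any real registry):

```python
# Whitelist-first mail acceptance: deliver only if the sender's domain
# is on the subscribed registry. Domain names below are hypothetical.

WHITELIST = {"example.com", "example.org"}  # stand-in for a registry feed


def sender_domain(address: str) -> str:
    """Extract the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()


def accept(address: str, whitelist=WHITELIST) -> bool:
    """Presume guilt: accept only registered sender domains."""
    return sender_domain(address) in whitelist
```

In practice the feed would be refreshed from the registry rather than hard-coded, and falling off the list would cut a spammer off everywhere at once.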
Similarly, peer-to-peer technologies could benefit from exactly the same concept. I rely on peer-to-peer to get access to things like Linux ISOs that are vital for my work. At the same time, however, I do not want to allow peer-to-peer traffic on my corporate network, in case copyright infringement is traced to my corporate IP. The ability to tell my firewall “deny all peer-to-peer traffic except that which has been registered with this whitelist as legitimate” would solve the problem. But short of assembling that list myself, there currently exists no such beast.
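The firewall rule described above ("deny all peer-to-peer traffic except that which has been registered as legitimate") could be sketched as a policy check like the following; the tracker hostnames and the whitelist feed are hypothetical, not a service that exists today:

```python
# "Deny all P2P except whitelisted" as a policy check a firewall or
# proxy could apply per connection. Tracker names are hypothetical.
from typing import Optional

P2P_WHITELIST = {"torrent.ubuntu.com", "linuxtracker.org"}  # imagined feed


def allow_connection(is_p2p: bool, tracker_host: Optional[str] = None) -> bool:
    """Pass non-P2P traffic; pass P2P only via a whitelisted tracker."""
    if not is_p2p:
        return True
    return tracker_host is not None and tracker_host.lower() in P2P_WHITELIST
```

This is the Linux-ISO case exactly: the distribution's tracker is registered, everything else on the protocol is dropped by default.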
The same is becoming true of the DNS system itself. DNS blacklists are a fantastic first step, but they don’t go far enough. The day has come to start building confidence ranking into the DNS system itself. This is starting to take shape now with the controversial concept of DNS reputation.
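A confidence ranking for DNS could, very roughly, combine signals such as domain age, whitelist registration and blacklist listings into a single score. The signals, weights and thresholds below are illustrative assumptions only, not any real reputation scheme:

```python
# Toy confidence score for a domain, in the spirit of DNS reputation.
# Signals and weights are illustrative assumptions, not a real scheme.

def confidence(domain_age_days: int, on_whitelist: bool,
               blacklist_hits: int) -> float:
    """Return a trust score in [0, 1]; higher means more trusted."""
    score = 0.5                                    # neutral starting point
    score += min(domain_age_days / 3650.0, 0.3)    # older domains earn trust
    score += 0.2 if on_whitelist else 0.0          # registered = bonus
    score -= 0.2 * blacklist_hits                  # each listing costs trust
    return max(0.0, min(1.0, score))
```

A resolver or mail server could then refuse, or flag, anything scoring below a chosen threshold, which would catch most freshly registered malicious domains.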
If I had the time and capital to start a tech company out of my basement, I would be pursuing all of these ideas. Assembling blacklists is a losing battle, but there is money to be made in assembling whitelists. Individuals and corporations who prefer to experience the web in its raw form should have the option to do so, but as someone who has several networks under my care, I know that I would prefer a whitelisting approach.
We are rapidly approaching the point where due diligence means presuming all traffic to be malicious unless it can be proven otherwise. It makes no sense for each company and individual in the world to independently build and maintain their own whitelists of legitimate sources of traffic. The market is wide open for the creation of a handful of whitelists to which we could subscribe. Building protocol whitelists certainly won't solve all our problems, but it would be more secure than what we are doing now. Human nature is what it is, so securing the internet means the end of presumed innocence. ®