For those of us approaching what I'd like to think of as early maturity (or even late youth), it's inspiring to see some things come to pass that we'd dreamed of during our formative years, such as complete smoking bans and low cost air travel within Europe. It's also good to see some of the promises turning into reality in the world of communications, like the prospect of a wide choice of broadband service providers and low cost bandwidth, which seemed such a long way off back at the start of telecom deregulation.
So, while I am still recovering from being able to fly to and from CeBIT for £155, maybe it's a good moment to consider the temptations of cheaper network bandwidth.
The IT industry has always been good at throwing resources at a problem, whether it's memory, disk space or CPU cycles. The trouble is that the new capacity or horsepower soon gets consumed, usually putting us back where we started. I recently upgraded my CRM application, for example, which had ‘progressed’ from a proprietary format to .NET running on a SQL database. Not only had it ballooned in size, but my brand new 3 GHz machine with a gig of RAM is practically on its knees trying to run it.
When you look at some of the developments in the communications arena, are things really any different?
WAN bandwidth has historically been expensive, thanks largely to BT's near-monopoly on leased line services here in the UK, which made congestion a problem worth serious attention. Workarounds such as EPS 8 and 9 circuits had their uses, but have fallen increasingly out of favour as they were not designed to support the higher speeds of today's digital modems. Now that Ofcom has really got stuck into deregulation, though, the cost per bit has plummeted and this kind of workaround is no longer necessary.
As an IT manager running between fire drills, it's easy to simply throw this newly available bandwidth at keeping enterprise applications running (a 10Mbps LES circuit compared to a 64K leased line is an embarrassment of riches). But other problems like Patch Tuesday and spam can still bring the network to its knees without warning if the use of that bandwidth is not under control.
Fortunately there are vendors that take the idea of managing your network bandwidth seriously, and it's important for IT managers to take a look at them, despite the temptation to continue to feed badly behaved applications with ever bigger pipes.
One company that has a strong philosophical starting point is Blue Coat Systems of Sunnyvale, CA. The company, which began life as CacheFlow, now asserts that there are three components to a connection: performance (compression, caching, bandwidth management and protocol optimisation), security (encryption and threat management), and choice.
Of the three, choice is critical, as it means not only prioritisation, but also identifying the unwanted applications that may be on the network, such as uncontrolled personal IM and Skype, and doing something about them.
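To make the idea of prioritisation concrete, the sketch below shows one classic mechanism behind this kind of bandwidth management: a token bucket per application class, so that business traffic gets a generous allowance while recreational traffic is throttled to a trickle. This is a toy illustration, not Blue Coat's implementation; the application names and rate figures are invented for the example.

```python
import time

class TokenBucket:
    """Token bucket: admits traffic up to `rate` bytes/sec, with bursts up to `capacity` bytes."""
    def __init__(self, rate, capacity):
        self.rate = rate            # sustained rate, bytes per second
        self.capacity = capacity    # maximum burst size, bytes
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket's capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False            # over budget: drop, queue or mark the packet

# Hypothetical policy for a 10Mbps link: the CRM application gets most of
# the pipe, personal Skype traffic is squeezed down hard.
policies = {
    "crm":   TokenBucket(rate=1_000_000, capacity=64_000),
    "skype": TokenBucket(rate=10_000,    capacity=4_000),
}

def admit(app, nbytes):
    """Admit a burst of `nbytes` for `app`; unknown applications are refused outright."""
    bucket = policies.get(app)
    return bucket.allow(nbytes) if bucket else False
```

Real appliances pair this rate-limiting step with the classification that identifies which application a flow belongs to in the first place; here that decision is simply passed in as a label.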
The idea is expressed very well in Blue Coat's literature: "At the core, Blue Coat appliances use a proxy/cache architecture, which enables two key functions—visibility of user/application interaction, and control of that interaction. With visibility comes context and understanding, and with control comes the ability to accelerate desirable applications, while limiting the impact of undesirable applications."
The latest announcement from the company adds the protocols CIFS and MAPI to the fairly common industry offering of SSL and HTTP acceleration. It's good to see a company with a mantra of application acceleration looking to move beyond the low hanging fruit of TCP optimisation.
As I recently commented in another review, no single box can address all the key network issues. By combining extra protocol optimisation with a sound fundamental approach, however, Blue Coat has its customers' backs covered, making it a strong contender to provide one of the key components of a genuinely well managed distributed application network.
Copyright © 2006, IT-Analysis.com