How IT will evolve to photonics

Professor Rod Tucker charts a course to the all-optical, low-energy future

Replacing electronics with photonics will one day be an important way to run IT while consuming far less power than is the case today. But while that idea looks great on paper, the research is still young.

The Internet’s voracious appetite for electricity needs some near-term solutions, so as a follow-up to our piece on photonics, The Register also spoke to Professor Rod Tucker of the University of Melbourne, director of both the Institute for a Broadband-Enabled Society and the Centre for Energy-Efficient Telecommunications.

The Register: Is it a fair start to state that the Internet and telecommunications industries need to find an inflection point, or electricity will become a problem?

Prof Tucker: Yes.

We’ve done quite a lot of modelling at the University of Melbourne – a detailed energy model of the global Internet. There’s good data out there about how many new users each year are connecting in developing countries. The current traffic load is growing at about forty percent per annum.

At the moment, the network uses between one and two percent* of the world’s electricity, and if we don’t do anything, it could become ten percent between 2020 and 2025.
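Those two figures compound quickly. A back-of-envelope sketch of the projection, assuming energy use tracks traffic one-for-one from a 1.5 percent baseline (a worst case that ignores efficiency gains; the baseline and one-for-one scaling are our assumptions, not Tucker's model):

```python
import math

baseline_share = 1.5   # assumed starting share of world electricity, percent
growth = 0.40          # traffic growth per annum, from the interview
target_share = 10.0    # the "do nothing" scenario's end point, percent

# Years until consumption reaches the target, if energy use grew
# in lockstep with traffic at 40 percent per annum.
years = math.log(target_share / baseline_share) / math.log(1 + growth)
print(f"{years:.1f} years")  # prints "5.6 years"
```

Real networks get more efficient per bit over time, which is why the crossover lands in the 2020–2025 window rather than sooner.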

The question then becomes ‘could the growth of the Internet be stymied by the availability of electricity?’ Cost will become a real issue. Already, telcos are discovering that the cost of energy for their networks is becoming a significant part of their opex, when only a few years ago, they didn’t even record what they spent and would have had trouble telling you their electricity bills.

It’s also become an engineering issue. In switching centres and data centres, the engineering challenge of getting energy in and getting heat out is becoming significant. It’s becoming a bottleneck: can you actually build the facilities you need?

Getting the heat out is the hardest part. If you use air-conditioning to cool the equipment, it might already be running at full load. There’s a limit to how much heat you can generate, all in one place.

Finally, real estate becomes difficult. The heat load means that a telco has to leave more space between racks, just to get the air through.

The Register: OK. If I look a long way into the future, low-power optics such as they’re working on at CUDOS aim to change the energy landscape. In the near-term, how do we start bridging the gap between where we are and a low-power all-optical future?

Prof Tucker: Photons are big and don’t interact with matter very well. Electrons are small and interact very well – which means for many things, electrons are still more efficient. So it’s a challenge of finding the best mix of technologies. Optics will always be the best technology to transmit data from one place to another, whereas new electronic technologies are the best way to improve the switching centres.

There’s still plenty of work to be done in new electronic technologies – not just the current CMOS but there are newer alternatives in the future, such as organic electronics.

We’ve done work at the Centre for Energy-Efficient Telecommunications, to look at what the fundamental limits are. What’s the best you can do? We believe the world is still four orders of magnitude away from the limits of energy efficiency in electronics. That is, our systems are currently ten thousand times less efficient than the physics will allow.

The Register: My apologies for interrupting, but there’s a lot of hand-wringing in the world of microprocessors that we’re getting close to the fundamental limits in feature sizes, and that we’ve got to find new ways to keep giving Moore’s Law a kick along. How could we be several orders of magnitude away from our best efficiency?

Prof Tucker: When you’re moving data around a network, there’s more to it than the simple process of transmitting the data. A lot of energy is used by the switching and intelligence that routes the data.

The energy required to get the data from A to B in the network is swamped by what’s used within the routers.

So the challenge is not only in making better electronics – it’s in improving the architecture, for example figuring out ways to use the routers less, because Layer 2 Ethernet switching uses far less energy than IP routing. If you can revise the architecture of the network to keep the transmissions out of the routers, it’s better. Reducing the number of router hops is a great advantage, in terms of energy efficiency.

The Register: …

Prof Tucker: That’s because the access network is the most inefficient part of the entire Internet. Most of the energy consumed in the Internet is in the home modem and access network connection.

In ADSL you have to amplify the signal and do lots of signal processing to make it work – and both of these use lots of power. Fibre-to-the-premises is much more efficient for access networks.

But there’s also tremendous scope for improvement in FTTP. G-PON is a very good standard, and the most energy-efficient of the current access networks, but there’s still plenty of room to do better.

In G-PON, the 2.5 Gbps of downstream data is split among the users, so if there are 32 modems connected to the one splitter, each is receiving the whole 2.5 Gbps, identifying which packets are “mine”, and discarding the others. We’re associated with a Bell Labs project called GreenTouch, and one of the aims is to allow the user modem to operate at a lower bit rate.
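The broadcast-and-filter behaviour Tucker describes can be sketched in a few lines. This is a toy model: the frame dictionary and the ONU numbering below are invented for illustration (real G-PON frames identify the destination modem with a Port-ID in the GEM header), but the energy point survives — every modem's receiver processes the full downstream, then throws most of it away:

```python
import random

ONU_COUNT = 32   # modems sharing one splitter, as in the interview
MY_ONU_ID = 7    # this modem's identifier (arbitrary choice)

# Downstream G-PON is a broadcast: every frame reaches every modem,
# and each modem discards the frames addressed to the other 31.
frames = [{"onu_id": random.randrange(ONU_COUNT), "payload": f"pkt{i}"}
          for i in range(1000)]

received = len(frames)                                      # frames the receiver handled
kept = sum(1 for f in frames if f["onu_id"] == MY_ONU_ID)   # frames actually wanted

print(f"processed {received} frames, kept {kept} ({kept / received:.0%} useful)")
```

With 32 modems on the splitter, each modem keeps roughly one frame in 32 — the receiver electronics do 32 times more work than the traffic it delivers, which is why running the modem at a lower bit rate saves so much.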

The fibre modem could become ten times more efficient.

And because in architectures like G-PON the fibre is passive all the way from the exchange to the home, things like this would be an easy upgrade.

The Register: And there are the inefficiencies in wireless …

Prof Tucker: The wireless network is terribly inefficient. A base station consumes a kilowatt or more, depending on the number of antennas installed at the base station. They shout all day and all night, and most of the energy is going nowhere.

So there’s important work going on looking at how to sleep the base stations more efficiently. Also, the network will change to use fewer big base stations and more little ones. If you make the cells smaller, you can have smaller, lower power transmitters.

The Register: OK. So how do we get there from here? What’s the basic research that needs to happen to move us along?

Prof Tucker: The exciting things for the near future aren’t so much new basic technologies, but new architectures for the network – architectures that enable more efficient passing of packets, with less processing. I’m a technologist, but it’s architectures and protocols that offer the bigger gains in efficiency.

The key is to create something that you could build seamlessly onto the top of IPv4 or IPv6, wrap the routed protocol into another protocol that gets your packet across the world more efficiently.

It’s like MPLS: something the user doesn’t have to know about, but it gets the packet through the network more efficiently without the user noticing.

It’s worth mentioning that IPv6 is a slight step backwards in energy efficiency – it’s a small hit, but the bigger address header means that for small packets like VoIP or gaming traffic, the header is a significant part of the whole. That extra overhead makes it less efficient. It’s a good example of how the protocol can have an impact on energy efficiency.
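The arithmetic behind that overhead point is easy to check. The base IPv4 header is 20 bytes and the fixed IPv6 header is 40; for a small VoIP packet — here assumed to be a 20-byte voice frame carried over RTP and UDP, with no IP options or extension headers — the header share of the packet looks like this:

```python
# Header sizes in bytes (base headers only, no options or extensions)
IPV4_HEADER = 20
IPV6_HEADER = 40
UDP_HEADER = 8
RTP_HEADER = 12
VOIP_PAYLOAD = 20  # one small voice frame; the payload size is an assumption

def overhead_fraction(ip_header: int) -> float:
    """Fraction of transmitted bytes that are headers rather than voice."""
    total = ip_header + UDP_HEADER + RTP_HEADER + VOIP_PAYLOAD
    return (total - VOIP_PAYLOAD) / total

print(f"IPv4: {overhead_fraction(IPV4_HEADER):.0%} overhead")  # prints "IPv4: 67% overhead"
print(f"IPv6: {overhead_fraction(IPV6_HEADER):.0%} overhead")  # prints "IPv6: 75% overhead"
```

For a bulk transfer with 1,400-byte payloads the extra 20 bytes vanish into the noise; it’s only the small-packet traffic – VoIP, gaming – where the larger header takes a measurable energy toll.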

The Register: So what would you imagine beyond things like MPLS?

Prof Tucker: We need to enhance the network to avoid routing. For example, if you upload something to Facebook, most of it ends up in a single data centre – so you want the most direct connection to that data centre.

The Register: As you mentioned before, moving things down to Layer 2…

Prof Tucker: Or Layer 1, if you can, wherever you can.

We’re also researching content distribution networks – CDNs. IPTV and video-on-demand are becoming more prevalent, and free-to-air television will become part of the network. Simply using existing IP networks for IPTV, as is happening at the moment, is inefficient. CDNs adapted for IPTV can greatly improve the network.

All together, this is the study of information logistics: how we process, transport, store and deliver information. If you’re distributing the latest James Bond movie, there’s a lot that can be done by being careful about how you store it and transport it. Storing the content near the user is more energy-efficient than centralising it. On the other hand, if the content is an old Hitchcock that’s only watched by buffs, it’s better to put it in a central server.
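That storage-versus-transport trade-off can be put into a toy model. Every number below is invented for illustration – the shape of the result, not the figures, is the point: edge caching pays a fixed storage cost per replica but cuts the router hops per delivery, so it wins only once a title is watched enough times:

```python
# Invented energy figures, chosen only to show the crossover
STORAGE_J_PER_REPLICA = 5_000.0  # joules to hold one copy at an edge cache
HOP_J_PER_VIEW = 100.0           # joules per router hop per delivery
CENTRAL_HOPS, EDGE_HOPS = 10, 2  # hops from a central server vs a nearby cache
EDGE_REPLICAS = 50               # caches the title is replicated to

def central_energy(views: int) -> float:
    """Serve every view from one central server, paying full hop count."""
    return views * CENTRAL_HOPS * HOP_J_PER_VIEW

def edge_energy(views: int) -> float:
    """Pay up-front to replicate everywhere, then serve views cheaply."""
    return EDGE_REPLICAS * STORAGE_J_PER_REPLICA + views * EDGE_HOPS * HOP_J_PER_VIEW

for views in (100, 100_000):  # a niche Hitchcock vs the latest Bond
    winner = "edge" if edge_energy(views) < central_energy(views) else "central"
    print(f"{views} views -> {winner} wins")
# prints "100 views -> central wins" and "100000 views -> edge wins"
```

The blockbuster amortises its replicas over millions of views; the niche title never earns back its storage energy, so it belongs on the central server – exactly the logistics question Tucker describes.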

We’re also working on a project inspired by the concept of energy star ratings on whitegoods – something like an energy star rating for Internet services – so that users will be able to make informed judgements about which provider to use. If we can enable members of the public to compare different providers, we not only motivate users to be selective, we also help to drive good behaviour in the industry.

We’re collaborating with service providers on this, working up techniques whereby we can get a calculation of the energy consumption of different services. ®

*The Register notes the apparent discrepancy between the 4-5 percent of the world’s electricity mentioned in the previous article and the 1-2 percent cited by Professor Tucker. His figure is referring to the network; the higher estimate is inclusive of all end user systems and computers. ®
