Data speed record crushed
Streaming across The Pond
U.S. and European scientists have set a new data transfer speed record, shattering the previous mark using nothing but good old-fashioned Ethernet.
The researchers sent one terabyte of data from Sunnyvale, California to Geneva in less than an hour. Their sustained rate of 2.38 Gb/s for a single TCP/IP stream beat the old top mark by a factor of 2.5. At this rate, users could send a full CD in 2.3 seconds or 200 full-length DVD movies in an hour. Wouldn't that make Hollywood mad?
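The arithmetic behind those figures is easy to check. A quick sketch, assuming typical media sizes the article does not state (1 TB = 10^12 bytes, a 700 MB CD, a 4.7 GB DVD):

```python
# Sanity-check the quoted figures at a sustained 2.38 Gb/s.
# Assumed media sizes (not given in the article): 1 TB = 1e12 bytes,
# CD = 700 MB, DVD = 4.7 GB.
RATE_BPS = 2.38e9  # bits per second

def transfer_seconds(size_bytes: float) -> float:
    """Time to move size_bytes at the sustained rate."""
    return size_bytes * 8 / RATE_BPS

tb_minutes = transfer_seconds(1e12) / 60      # one terabyte
cd_seconds = transfer_seconds(700e6)          # one full CD
dvds_per_hour = 3600 * RATE_BPS / 8 / 4.7e9   # full-length DVDs per hour

print(f"1 TB in {tb_minutes:.0f} minutes")    # ~56 minutes: under an hour
print(f"CD in {cd_seconds:.1f} seconds")      # ~2.4 seconds
print(f"{dvds_per_hour:.0f} DVDs per hour")   # ~228 movies
```

The numbers land close to the article's round figures: roughly 56 minutes per terabyte, about 2.4 seconds per CD, and well over 200 DVDs per hour.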
"To put the numbers into perspective, at a transfer rate of 2.38 Gb/s, we could easily transfer the printed text in the entire Library of Congress in less than a day between Sunnyvale, California and Geneva, Switzerland," said Dr. Wu-chun Feng, leader of the RADIANT network research team at Los Alamos National Laboratory.
It was Feng's networking team at Los Alamos that caught the attention of fellow researchers and ultimately inspired an attempt on the Internet2 Land Speed Record. RADIANT demonstrated a 4 Gb/s single TCP/IP data stream at Supercomputing 2002, which prompted calls from the California Institute of Technology (Caltech), the European Organization for Nuclear Research (CERN), and the Stanford Linear Accelerator Center (SLAC).
Instead of using specialized interconnects such as Quadrics or Myrinet, the scientists achieved their record with 10 Gigabit Ethernet NICs from Intel along with a standard Linux TCP implementation. This could be a sign of good things to come for end users, as commonly used networking gear appears ready to satisfy the bandwidth-hungry.
The scientists, of course, will get to play with the speedy kit first and are hoping the performance boost could help accelerate collaborative research efforts. Projects in areas such as grid computing where data is spread among a number of institutions should benefit from the extra network pace.
Some of the researchers are also hoping this proof point will encourage software developers to begin writing some applications with high bandwidth in mind.
For those in search of the fine details, the scientists used the optical networking of LHCnet, DataTAG, TeraGrid, StarLight, and a Chicago-Sunnyvale link loaned by Level(3) Communications. They also used Intel 10-Gigabit Ethernet (10GbE) PCI-X network adapters (PRO/10GbE LR), a Cisco 12400 Series router with 10GbE and OC-192 optical modules at Sunnyvale, and Cisco 7600 Series routers with four 10GbE and OC-48 optical modules at Chicago and at Geneva. ®