Bridgeworks sidesteps latency with pipelining and AI
Overcomes the small problem of the speed of light
Network latency is a fact of life. There is nothing you can do about it, except join the network queue and wait. But Bridgeworks thinks it has solved the latency problem with pipelining and artificial intelligence. Can it be true?
SANSlide is the product that does this, and it aims to speed up storage backup, replication and SAN linking across TCP/IP wide area network links. The thinking behind it starts with network latency. The speed of light means that the time for data to cross a link increases with distance. For example, data travelling over a WAN link from the UK to Germany could take 6.6ms to traverse the wire. Add in the communications gear at each end of the line, which processes the signal, and the recorded latency could be 32ms.
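For a rough feel for where a figure like that 6.6ms comes from, here is a back-of-the-envelope sketch. The fibre route length and the two-thirds-of-c speed of light in glass are our assumptions, not Bridgeworks' numbers.

```python
# Back-of-the-envelope one-way propagation delay (illustrative figures only)
SPEED_OF_LIGHT_KM_S = 300_000      # speed of light in a vacuum
FIBRE_FACTOR = 0.67                # light in fibre travels at roughly 2/3 of c
route_km = 1_320                   # assumed fibre route length, UK to Germany

one_way_ms = route_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR) * 1000
print(f"one-way wire delay: {one_way_ms:.1f} ms")   # ~6.6 ms
```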
It is a characteristic of TCP/IP that a stream of data to be transmitted, such as a file, is broken up into component packets, which are sent until the receive window is full; the sender then waits for an acknowledgement of receipt (ack) from the remote site. The ack triggers the transmission of the next packet series. A missing ack triggers a packet retransmission on the assumption that the original transmission has failed.
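In outline, that send-and-wait behaviour looks something like the sketch below. This is a simplified illustration rather than real TCP: the send_packet and wait_for_ack helpers are hypothetical stand-ins for what the protocol stack actually does.

```python
# Simplified sketch of windowed send-and-ack transmission (not real TCP code).
def transmit(data, window_size, packet_size, send_packet, wait_for_ack, timeout=1.0):
    packets = [data[i:i + packet_size] for i in range(0, len(data), packet_size)]
    per_window = max(1, window_size // packet_size)
    i = 0
    while i < len(packets):
        for p in packets[i:i + per_window]:
            send_packet(p)                 # send until the receive window is full
        if wait_for_ack(timeout):          # ack received: move on to the next series
            i += per_window
        # no ack within the timeout: assume failure and resend the same series
```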
With a perfect link and streamed data, the link speed and the data transmission speed would be the same. With packetisation and the send-ack sequence, the data transmission speed slows. Add in latency and the link's efficiency drops significantly. With the UK-Germany example above, a packet series is sent and the ack comes back 64ms later: every packet series transmission is followed by 64ms of waiting.
A Bridgeworks example of the effect: take a 1Gbit Ethernet link, a 32KB window size and a 100ms round trip latency, and you arrive at a 320KB/sec transmission speed.
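The arithmetic behind that figure is simple enough: a single connection can only have one window's worth of data in flight per round trip, so throughput is roughly the window size divided by the round trip time, whatever the raw link speed.

```python
# Effective throughput of a single connection: one window per round trip.
window_bytes = 32 * 1024     # 32KB window
rtt_seconds = 0.1            # 100ms round trip latency

throughput = window_bytes / rtt_seconds
print(f"{throughput / 1024:.0f} KB/sec")   # 320 KB/sec, regardless of the 1Gbit link
```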
Nothing can be done about latency directly. Instead, suppliers such as QLogic, Cisco, Brocade and others, which make iSCSI and FCIP storage data WAN transmission products, have tried making data packets larger, and compressing and deduping data to avoid repetitious bits in packets. But the latency time between the data packets is fixed and, it's assumed, immutable.
Well, yes, but you can have more than one logical TCP/IP connection on a link. Bridgeworks makes storage protocol conversion bridges - SAS to Fibre Channel, that sort of thing - and it knows about dealing with protocol-wrapped data packets and converting them to another protocol. Its idea is to send a packet across a link, then open another connection and send a packet on that, and to keep opening connections until the ack arrives on the first connection, at which point the next packet in the sequence is sent on it. That way you increase the number of packets in transit on the wire: there could be, say, a pipeline of ten packets in transit on ten connections, with an ack coming back on each connection 64ms after the packet on that connection was sent.
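A crude sketch of the pipelining idea follows. It is our illustration, not Bridgeworks' code: packets are dealt out round-robin across a pool of TCP connections, so each connection's ack wait overlaps with transmission on the others.

```python
import socket
import threading

def send_pipelined(packets, host, port, num_connections=10):
    # Open a pool of TCP connections to the same endpoint.
    conns = [socket.create_connection((host, port)) for _ in range(num_connections)]
    threads = []
    for idx, conn in enumerate(conns):
        # Deal the packets out round-robin, one worker thread per connection.
        t = threading.Thread(target=_send_all, args=(conn, packets[idx::num_connections]))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    for conn in conns:
        conn.close()

def _send_all(conn, packets):
    for p in packets:
        conn.sendall(p)   # each connection still waits on its own acks,
                          # but those waits now happen in parallel
```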
The number of open connections depends upon the round trip time (RTT) for the ack: as the RTT shortens or lengthens, the number of connections needed to keep the pipeline full falls or rises in step. Bridgeworks CEO David Trossell says: "The number of connections created will come to a steady state when the packet time on the network × the number of connections = the bandwidth of the connection."
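Applying that steady-state idea to the earlier figures gives a feel for the scale involved. This is our arithmetic using the article's example numbers, not a Bridgeworks claim.

```python
# How many 32KB-window connections would it take to fill a 1Gbit/sec link at 100ms RTT?
link_bytes_per_sec = 1_000_000_000 / 8      # 1Gbit/sec = 125MB/sec
rtt_seconds = 0.1                           # 100ms round trip
window_bytes = 32 * 1024                    # 32KB in flight per connection

bytes_in_flight_needed = link_bytes_per_sec * rtt_seconds   # 12.5MB on the wire
connections = bytes_in_flight_needed / window_bytes
print(f"roughly {connections:.0f} connections")             # ~381
```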