Fifty-microsecond storage access and 12 million IOPS give X-IO's Axellio server the edge at the edge.
Server innovation is now focused anywhere but the commodity CPU and memory, giving non-server vendors an opening to take on the incumbents. Storage company X-IO is a case in point with its Axellio product.
Its direct-access NVMe storage technology is set to revolutionise servers, giving them microsecond access to terabytes of flash-stored data and the ability to simultaneously ingest and process great gobs of data in realtime.
X-IO went through well-publicised troubles but is now back and free of debt, recapitalised and EBITDA-positive. Sales and support revenues from its ISE product line have fuelled the Axellio development and the product is now ready.
Prototype Axellio box
It is a converged server and storage system that scales out into huge clusters; X-IO says these have been tested scaling linearly to 256 nodes and can go on to 1,000 nodes and beyond.
X-IO envisages these servers being used for Internet of Things edge computing, where masses of sensor-derived data arrive as a torrent of bytes across the network links and need some processing as soon as they are ingested; realtime analytics with simultaneous ingest. There is no time to move the data to another server system for analytical processing. It has to be done on the spot, and these servers need plentiful CPU resources and fast-access storage.
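The ingest-and-analyse-in-place pattern described above can be sketched in a few lines (a minimal illustration only; the chunk source, window size and rolling-average statistic are invented for the example and are not part of X-IO's product):

```python
from collections import deque
from statistics import mean

def analyse_on_ingest(chunks, window=4):
    """Process each chunk as it arrives, instead of staging the data
    on another system: keep a sliding window and emit a rolling
    average per chunk (a stand-in for any realtime analytic)."""
    recent = deque(maxlen=window)
    results = []
    for chunk in chunks:                 # chunks arrive as a stream of bytes
        value = sum(chunk) / len(chunk)  # toy per-chunk metric
        recent.append(value)
        results.append(mean(recent))     # analytic runs at ingest time
    return results

# Simulated sensor stream: four small byte chunks
stream = [bytes([1, 2, 3]), bytes([4, 5, 6]),
          bytes([7, 8, 9]), bytes([10, 11, 12])]
print(analyse_on_ingest(stream))
```

The point of the pattern is that the analytic result is available the moment the last byte of a chunk lands, with no copy to a separate analytics cluster.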
Think DSSD without the million-dollar check and proprietary networking to an external block array.
So, to the hardware, which comes in a 2U rackmount enclosure with:
- 2 x dual-socket Xeon server motherboards
- 4 x Xeon E5-26xx v3/v4 CPUs
- 16 to 88 cores (current)
- 24 to 176 threads (current)
- Upgradable to new Intel CPUs
- 32 DIMM slots, 16GB to 2TB of memory
- Optional 2 x NVDIMMs for storage cache
- 12 to 72 x 2.5-inch dual port NVMe SSDs (8TB current) in 1 to 12 x 6-drive FlashPacks
- Up to 1PB of NVMe-accessed flash storage with 16TB NVMe SSDs later this year
- 12 million IOPS with 4K blocks
- Data ingest at >200Gbps (>25GBps)
- As low as 35µs latency, 60GBps sustained
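As a sanity check, the headline figures in the spec list hang together arithmetically. A back-of-envelope sketch, using only the numbers quoted above:

```python
# Back-of-envelope checks on the quoted spec-sheet numbers
iops = 12_000_000          # claimed 4K random IOPS
block = 4096               # 4K block size in bytes
bandwidth_gbps = iops * block / 1e9
# 12M x 4KiB is roughly 49 GB/s, the same order as the
# 60GBps sustained figure quoted above
print(f"{bandwidth_gbps:.1f} GB/s")

drives = 12 * 6            # 12 FlashPacks x 6 drives each
capacity_tb = drives * 16  # with the 16TB NVMe SSDs due later this year
# 72 drives at 16TB is 1,152TB, i.e. the "up to 1PB" claim
print(drives, "drives,", capacity_tb, "TB")
```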
Rear view of Axellio enclosure
X-IO claims simultaneous ingest and random access processing of stored data at 480Gbps (60GBps) full duplex at less than 50μs average access latency. A rack could hold 20 of these systems providing a dense compute and storage environment.
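Scaled to the 20-systems-per-rack density mentioned, the per-rack aggregates work out as follows (a back-of-envelope sketch using the maximum per-box figures quoted earlier, not vendor-supplied rack numbers):

```python
systems = 20               # 2U boxes per rack, per the article
rack_units = systems * 2   # 40U, fitting a standard 42U rack
petabytes = systems * 1    # up to 1PB of NVMe flash per box
cores = systems * 88       # top-end core count per box
iops_m = systems * 12      # millions of 4K IOPS per box
# per rack: 40U, 20PB of flash, 1,760 cores, 240M IOPS
print(rack_units, petabytes, cores, iops_m)
```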
It says the systems, which run Linux, are Optane-ready and offload processing modules can be used:
- 2 x Intel Phi for parallel computing
- 2 x Nvidia K2 GPUs for video processing
- 2 x Nvidia K80 Tesla for scientific computing and machine learning
- Solarflare Precision Timing Protocol (PTP) packet capture (PCAP) offload
But there is more. The two dual-socket server motherboards and the set of NVMe SSDs are connected by a patented FabricXpress internal communications system. Its non-transparent bridging gives the ODM hardware its fast storage access edge.
X-IO envisages selling through partners into markets such as defence and intelligence, complex data analytics, financial market data analytics, cybersecurity and the generic Internet of Things where edge processing boxes need the Axellio performance and scaling.
It has announced Ascolta, a VION company, as one such partner, offering realtime packet capture, data fusion and analysis for the defence and intelligence markets. A second is ISSAC, for end-to-end automated analytics. There are other partnerships with companies involved in time-series databases and both regulatory and cybersecurity analytic environments. Expect more to be announced.
The Axellio system is sold as a server+storage engine component to solution-selling partners. Although it is, in our view, basically a server, it is not being sold in direct competition against Cisco, Dell, HPE and Lenovo.
We're told X-IO's ISE software will be ported to it at some stage in the future.
Server innovation around the core Xeon+DRAM heart is now rampant as, it seems, everyone but the server vendors pile in. Symbolic IO is one case in point, Aparna Systems another, and now here is X-IO. Tomorrow there will be another. ®