Analysis: First impression: Oracle's version of the Vblock, the Exalogic Elastic Compute Cloud, takes a distinctly different approach to storage.
Oracle's Exalogic box is a standard 19-inch rack (42U) configuration containing an integrated set of hardware and software components for running, Oracle says (pdf), "applications of all types, from small-scale departmental applications to the largest and most demanding ERP and mainframe applications". At around a million dollars per box, though, this is not a product for small and medium businesses.
It is optimised for enterprise Java, Oracle Fusion Middleware and Fusion Applications and can also run "thousands of third-party and custom Linux and Solaris applications."
The rack contains servers, storage components and an InfiniBand fabric that interconnects the components within the rack, links up to eight Exalogic racks into a single system, and can bring in Exadata Database Machine racks up to the same eight-rack limit. If more racks need to be connected, data centre switches are required, and these can interconnect, Oracle says, hundreds of racks. There are also multiple 10Gbit Ethernet ports to connect accessing servers and 1Gbit Ethernet ports for management functions.
Exalogic is available in quarter-rack, half-rack and full rack configurations.
The servers, compute nodes, are hot-swappable and diskless, and there can be up to 30 of them per Exalogic rack, each one being a 1U box with two 6-core Xeon processors, meaning 360 cores in total. They share a clustered disk storage subsystem comprising 40TB of SAS disk. This amounts to a little over 1.3TB of disk capacity per server. This is not a serious amount of disk storage and the Exalogic racks are intended for compute work. Oracle itself says: "Each Exalogic configuration is a unit of elastic cloud capacity balanced for compute-intensive workloads."
The compute nodes have 96GB of RAM and two 16GB SSDs, branded FlashFire. A full Exalogic rack has 960GB of SSD capacity and 2.8TB of fast ECC DIMM RAM plus redundant InfiniBand HCAs (Host Channel Adapters). A quarter-rack has eight compute nodes (96 cores), 768GB of RAM, 256GB of FlashFire SSD and 40TB of disk storage. Intriguingly the 40TB disk storage amount is the same whether you buy a quarter, half or full Exalogic rack, suggesting that this storage has a purely local focus in some way. The 40TB of SAS disk comes, we understand, from a Sun 7000-based storage server and data is striped and mirrored.
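The numbers above hang together; here is a quick back-of-the-envelope check of the configuration arithmetic (the figures are from the specs quoted above, the variable names are our own):

```python
# Sanity-check of the Exalogic configuration arithmetic quoted in the article.
CORES_PER_NODE = 12    # two 6-core Xeon processors per 1U compute node
RAM_GB_PER_NODE = 96
SSD_GB_PER_NODE = 32   # two 16GB FlashFire SSDs
DISK_TB_PER_RACK = 40  # fixed, whatever the configuration

def config(nodes):
    """Totals for an Exalogic configuration with the given node count."""
    return {
        "cores": nodes * CORES_PER_NODE,
        "ram_gb": nodes * RAM_GB_PER_NODE,
        "ssd_gb": nodes * SSD_GB_PER_NODE,
        "disk_tb_per_node": round(DISK_TB_PER_RACK / nodes, 2),
    }

full = config(30)    # full rack: 360 cores, 2,880GB (~2.8TB) RAM, 960GB SSD
quarter = config(8)  # quarter rack: 96 cores, 768GB RAM, 256GB SSD
```

The full rack works out at a little over 1.3TB of shared disk per node, as noted above, while the quarter rack's eight nodes each get a 5TB slice of the same fixed 40TB pool.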
The compute nodes function, Oracle says, as a single processing resource, meaning we could characterise this storage as directly-attached storage (DAS).
The flash storage is not used as a cache layer between the SAS disks in the rack and the servers. It stores the operating system image for the compute node and also functions as local swap space for it, as well as storing diagnostic data generated during fault management procedures. We're told this SSD resource eliminates Java virtual machine heap limitations.
Oracle's Exalogic white paper does not mention any other storage in the rack, but Larry Ellison's Oracle Open World presentation did say that each rack has a 4TB read cache and a 72GB write cache, the implication being that this is flash storage too, with the emphasis on reading data rather than writing it. We don't know whether this pair of caches sits between the compute nodes and the local-to-the-rack storage server, or a connected Exadata machine, or, as we think most likely, inside the 40TB storage server itself.
The Sun 7000 can come with asymmetric amounts of read-optimised flash and write-optimised flash. There is no 40TB Sun 7000 configuration: the 7110 has up to 4.2TB of 2.5-inch, 10,000rpm SAS disk, while the 7310 is an entry-level 2-node cluster with up to 192TB of capacity, using 7,200rpm disks with 1TB or 2TB capacities, up to 600GB of read flash, and optional write flash acceleration.
But there is no 7210, and it looks as if Oracle has effectively produced one just for the Exalogic machine. An Exalogic rack essentially has a 2-tier storage structure: SAS disks to hold local data, with flash caches to speed read and write access.
The use of InfiniBand to link components in the Exalogic system, with its ability to have defined virtual lanes and priorities, is reminiscent of a mainframe's channel architecture.
Each Exalogic rack can scale up from the quarter-rack starter (eight 1U servers) through the half-rack stage to a full rack of 30 servers. Each Exalogic system can scale out to eight Exalogic or Exadata racks. Oracle does not use the word "cluster" to describe multiple connected Exalogic racks, but the full configuration certainly looks like an 8-node cluster.
SAN? What SAN?
One thing missing from Exalogic is a block-access storage array in EMC VMAX, HDS USP-V or IBM DS8000 terms. There is no SAN; no concept, it appears, of networked storage in Oracle's integrated stack lexicon. Storage and processing are closely and deeply intertwined in order to get the fastest possible processing of data in a machine with monster compute capacity compared to its storage capacity.
Pushing this message will be a way for server-and-storage vendors to fend off external storage suppliers and lower their attach rate in customer accounts. We can well imagine Oracle telling its customers that they don't need VMAX arrays, or USP-V ones or DS8000s. You want a SAN? Whatever for? ®