Oh, 3PAR. One moment you're gliding along. The next, you're in the rain as HPE woos Nimble
It's been a good 20 years. Time to move on?
Comment This week, HPE offered to acquire Nimble Storage for around $1.09bn, plus another $200m in share options.
Nimble sells all-flash and hybrid storage solutions, with a lot of intellectual property focused on storage analytics in the form of its “InfoSight” SaaS platform. Commentators are seeing this as a good deal for both Nimble and HPE, but is it really as good as it seems?
HPE’s storage portfolio is currently focused heavily on the 3PAR platform, acquired by the then-HP in 2010. 3PAR was founded in 1999, and its success owed much both to a new architecture and to features such as the embedded system ASIC, which allowed functions like zero detect to be implemented at “wire speed.” HPE acquired 3PAR for $2.35bn after a bidding war with Dell, which today, ironically, has more storage systems than it knows what to do with.
There’s no doubt that the 3PAR asset was sweated, replacing the ageing EVA and being extended upwards to the high end and down to the low end. At the outer edges of the HPE storage portfolio, we have MSA for entry-level systems and XP for high-end availability and mainframe.
Since 2010, HPE has continued to add features to the 3PAR platform (disclosure: I have done work for HPE documenting the new features as they have been released). Most recently, the space-saving technology was enhanced with compression as part of the “Adaptive Data Reduction” feature set. Throughout the evolution of the 3PAR platform, the system has continued to use a dedicated hardware ASIC for some performance-sensitive processes, including zero detect and some of the deduplication tasks.
Having dedicated hardware is both a blessing and a curse. In 1999 the ASIC was a game changer; some 18 years later, however, processor speeds are much higher, and one does wonder how much of the ASIC’s functionality is still required and how much could be handled by modern general-purpose processors.
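To see why the ASIC is less compelling today, consider zero detect itself: spotting an incoming block that is entirely zeroes so the array can record metadata instead of writing data. The sketch below is purely illustrative (the function and block size are invented for the example, and bear no relation to HPE's actual implementation); the point is that a modern CPU can do this comparison at memory bandwidth.

```python
# Illustrative sketch of "zero detect": checking whether an incoming
# block is entirely zeroes, so the array can store a metadata pointer
# rather than the data itself. Names and sizes here are invented for
# the example and are not HPE's implementation.

BLOCK_SIZE = 16 * 1024  # assume a 16 KiB page, purely for illustration

def is_zero_block(block: bytes) -> bool:
    """Return True if every byte in the block is zero."""
    # A modern general-purpose CPU performs this comparison at many
    # gigabytes per second, which is why a dedicated ASIC is a harder
    # sell than it was in 1999.
    return block == bytes(len(block))

# An all-zero write can be recorded as metadata only:
print(is_zero_block(bytes(BLOCK_SIZE)))                 # all zeroes
print(is_zero_block(b"\x00" * (BLOCK_SIZE - 1) + b"\x01"))  # one set byte
```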
Let’s move over and look at Nimble. The company was founded in 2008 and launched its first product in 2010 at a Tech Field Day event in Seattle. I had my first briefing in 2011 at the EMEA launch. Looking back at the launch deck, it’s interesting to see that the platform was promoted as converged storage, meeting primary, backup and DR needs.
The company’s founders have backgrounds at NetApp and Data Domain, so it’s not really surprising that there are some technical similarities to NetApp’s Data ONTAP, with data staged in NVRAM before being committed to disk, and kept in flash for fast subsequent reads. The legal claims that arose between the two companies were settled in 2015.
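That write path can be sketched roughly as follows. This is a deliberate simplification, not Nimble's or NetApp's actual code: the class and method names are invented, and real arrays batch, compress and checksum data in ways not shown here.

```python
# Rough sketch of an NVRAM-first hybrid write path, as described above.
# Purely illustrative: all names are invented, and real implementations
# batch, compress and checksum data in ways omitted here.

class HybridArraySketch:
    def __init__(self):
        self.nvram = {}        # battery/flash-backed staging area
        self.disk = {}         # persistent backing store
        self.flash_cache = {}  # read cache for hot blocks

    def write(self, addr, data):
        # Acknowledge as soon as the data is safe in NVRAM - this is
        # what makes writes fast despite the disk backing store.
        self.nvram[addr] = data

    def flush(self):
        # Periodically destage NVRAM contents to disk.
        self.disk.update(self.nvram)
        self.nvram.clear()

    def read(self, addr):
        # Serve from NVRAM, then flash cache, then disk; promote disk
        # reads into flash for fast subsequent reads.
        if addr in self.nvram:
            return self.nvram[addr]
        if addr in self.flash_cache:
            return self.flash_cache[addr]
        data = self.disk[addr]
        self.flash_cache[addr] = data
        return data
```

The design choice worth noting is that flash sits on the read side only; writes are protected by NVRAM and destaged to disk, which is one way a hybrid array avoids treating flash as just another tier.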
The nice feature of the Nimble architecture is its ability to use flash differently, rather than simply as a standard tier of storage or a write cache. Over time the platform has developed to be highly scalable, and of course the company has introduced InfoSight, a SaaS service that provides deep analytics on the activity of applications and data on the Nimble platform. Many people see this as one of, if not the, key pieces of technology the company has developed.
Nimble had an initial public offering in December 2013; priced at $21, the shares opened at around $31. After rising further, however, they tanked towards the end of 2015, and exactly a year ago today were reported at $7.67, well below the IPO price. Prior to the HPE announcement, shares were trading at around $8.50, roughly an 11 per cent gain over the year.
Unfortunately the storage array market isn’t a growth business (as I discussed last year), so competition is tough, with vendors fighting to effectively buy business from each other.
On many occasions I’ve started putting fingers to keyboard to write about 20-year architectures, based on the premise that, at some point, it’s more practical to start again than to rewrite or amend existing storage array technology. The reason is simple: software and hardware designs are based on the technology available at the time. Starting with a clean slate means not carrying the baggage of an existing design, so more effective solutions can be developed.
Looking at the 3PAR architecture, it could be argued that this point is being reached. 3PAR is not truly scale-out (although it can scale to eight nodes) and was designed in the age of disk. The technology has adapted well to flash; however, deduplication and compression weren’t native features, so there is (without giving away trade secrets) some degree of compromise in how these features are designed into the platform. At this stage, those compromises aren’t enough to stop the platform being competitive.
Incidentally, many other storage vendors are in the 20-year scenario. NetApp has (or had) the issue with the original Data ONTAP, prompting the company to acquire SolidFire and Engenio. Dell EMC acquired XtremIO as an all-flash solution in preference to an all-flash VMAX; however, that strategy seems to have been reversed, at least in the short term, with all-flash VMAX. Dell EMC Unity (ex-CLARiiON, ex-VNX) was an eventual rewrite of the original single-threaded code to take advantage of multi-threaded processors.
The interesting dynamic in the market is that start-ups can create new solutions without any baggage to worry about. In contrast, incumbent vendors need to manage the transition for customers from one platform to another, which creates its own set of issues. Conversely, new startup vendors somehow need to grab market share from an entrenched set of traditional hardware products that may have more maturity and a bigger customer ecosystem. That is a hard fight for any company.
So vendors want to get the most out of their hardware solutions and be as non-disruptive to customers as possible, while knowing that those hardware storage solutions will at some stage be superseded. Twenty years seems to be about the right lifespan in today’s market.