Backup, cloud, convergence, GDPR, malware and server SANs are the flavours of the week in storage as we approach midwinter. These market areas are growing strongly while traditional SANs and hybrid arrays fall back.
Here's a snapshot of what happened, finishing up with a server SAN boost from Wikibon. But we start with a secondary storage convergence story.
Secondary data silo converger Cohesity entered the European region a year ago. It has stepped up its commitment to the region and appointed Klaus Seidl as VP Sales for EMEA, with the main objective of expanding Cohesity's activity there with particular focus on the DACH market (Germany, Austria and Switzerland).
Seidl comes from Riverbed Technology, NetApp and, most recently, SimpliVity, where EMEA revenues matched US ones, a rare event. Presumably Cohesity was a better berth than the HPE-acquired SimpliVity, which can now use HPE's existing sales infrastructure.
Swedish Compuverde's Hybrid Cloud v1.0 combines software-defined storage (SDS) with the ability to synchronise two-way data traffic among multiple data centres.
It supports disaster recovery scenarios where snapshots from one data centre can be mounted to up to 16 different locations connected over the internet.
Partial file updates use minimal network bandwidth while maintaining complete synchronisation between file shares at the various data sites.
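Compuverde hasn't published its sync algorithm, but block-level delta detection of this general shape is how file syncs keep bandwidth down: hash fixed-size blocks, send only the ones that changed. Everything here (block size, hashing scheme, function names) is illustrative, not Compuverde's actual implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; Compuverde's real chunking is not public

def block_hashes(data):
    """Hash each fixed-size block of a file's contents."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old, new):
    """Indices of blocks that differ, i.e. the only data sent over the wire."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# A one-block edit to a four-block file transfers one block, not four:
old = b"a" * (4 * BLOCK_SIZE)
new = old[:BLOCK_SIZE] + b"b" * BLOCK_SIZE + old[2 * BLOCK_SIZE:]
print(changed_blocks(old, new))  # -> [1]
```

The same comparison run across 16 sites is what keeps a many-way sync from re-shipping whole files.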
Compuverde's storage SW provides vNAS file systems, hyperconverged virtualisation and hybrid cloud support.
Some players in the storage industry are just booming. Druva, which offers data-management-as-a-service (DMaaS), says it has enjoyed record growth of 500 per cent in annual recurring revenue for its data protection service. This makes 10 consecutive quarters of double-digit company growth.
It has plans to increase its global workforce by 14 per cent over the next six months.
The company has achieved FedRAMP certification (a prerequisite for doing business with federal organisations), and claims this makes it the only vendor among its competitors officially able to do business with US government agencies transitioning to the cloud.
Druva was ranked #175 in the 2017 Deloitte Technology Fast 500 with 616 per cent growth, placing it as the fastest-growing cloud data protection provider. Did you hear that, Rubrik and Veeam?
It gained 300 new customers over the past six months, including AIG, ANDRITZ, General Electric, Hulu, Intuit, Marriott, PwC, ServiceNow and Xerox. The customer base totals more than 4,000, and includes four of the nine top consulting firms and four of the top 10 US-based pharmaceutical companies.
This company is becoming a major player in its market and progress has been helped by general malware and ransomware attacks. That rancid rising tide is generally lifting all backup boats, some more than others.
iguazio has announced nuclio, an open-source serverless platform which can work standalone or as an integral/managed part of the iguazio data platform.
The company's on-premises product delivers file, object, NoSQL and streaming services, and supports application microservices/containers, through integrated Kubernetes, and nuclio serverless functions.
We're told that, because of its real-time function OS architecture, it's up to ~100x faster than Amazon Lambda, IBM OpenWhisk and Oracle Fn.
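For flavour, a nuclio function in Python is essentially a handler taking a context and an event. The sketch below mimics that shape only; the `Event` class is a stub standing in for the object the nuclio runtime would supply, and the handler body is purely illustrative:

```python
# The general shape of a nuclio-style Python function: the platform invokes
# handler(context, event). Event here is a local stub, not nuclio's own class.

class Event:
    """Minimal stand-in for a serverless platform's event object."""
    def __init__(self, body):
        self.body = body

def handler(context, event):
    # Echo the request body upper-cased.
    return event.body.decode().upper()

# Locally we can exercise the handler with a stub event:
print(handler(None, Event(b"hello serverless")))  # -> HELLO SERVERLESS
```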
Open-source in-memory data gridder Hazelcast has joined the Eclipse Foundation. Its primary focus will be on JCache, the Eclipse MicroProfile and EE4J.
Hazelcast will be collaborating with members to popularise JCache, a Java Specification Request (JSR-107) which specifies API and semantics for temporary, in-memory caching of Java objects, including object creation, shared access, spooling, invalidation, and consistency across JVMs. These operations help scale out applications and manage their high-speed access to frequently used data. In the Java Community Process (JCP), Hazelcast's CEO, Greg Luck, has been the co-spec lead and then maintenance lead on "JCache – Java Temporary Caching API" since 2007.
Snowflake has snagged Overstock as a user of its data-warehouse-as-a-service. Overstock is a home goods and furnishings retailer and will use Snowflake for its data science initiatives.
It says packaging complex features for new data science models can now be done in hours or days, compared to the weeks it could take before. The company has 20 years of retail data for its data wonks to sift for buying patterns.
Storage Made Easy
Storage Made Easy announced File Fabric support for compliance-enabled vaults, an immutable storage capability for the on-premises IBM Cloud Object Storage System.
It protects data in-place from deletion or modification as required for regulations like SEC Rule 17a-4(f) and FINRA (Rule 4511).
Users can take a policy-based approach to data security, access, compliance and the data lifecycle without copying data between systems to tape or optical media, or moving it offsite.
The File Fabric has two relevant components: ForeverFile, an archive and ransomware protection feature that continuously archives data in real time, and the Cloud File Server. With these, users see compliant storage as a hierarchical, permissioned file system where documents and other electronic records are automatically retained for predetermined periods.
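As a rough illustration of how WORM-style retention works, a compliance vault refuses deletes and rewrites until an object's retention clock expires. The class and method names below are hypothetical, not Storage Made Easy's or IBM's actual API:

```python
import datetime

# A minimal sketch of WORM-style retention: once written, an object cannot be
# deleted until its retention period expires. Names are illustrative only.

class RetainedObject:
    def __init__(self, data, retention_days, now=None):
        self._data = data
        start = now or datetime.date.today()
        self.retain_until = start + datetime.timedelta(days=retention_days)

    def delete(self, now):
        """Refuse deletion inside the retention window."""
        if now < self.retain_until:
            raise PermissionError("immutable until %s" % self.retain_until)
        self._data = b""

doc = RetainedObject(b"trade record", retention_days=7,
                     now=datetime.date(2017, 12, 1))
try:
    doc.delete(now=datetime.date(2017, 12, 4))  # inside the window: refused
except PermissionError as err:
    print(err)  # -> immutable until 2017-12-08
```

Rules like SEC 17a-4(f) effectively mandate that the refusal branch cannot be bypassed, even by an administrator.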
Talena and Imanis Data
Talena is a competitor of Datos IO and backs up distributed databases. It changed its name to Imanis Data in July-August. There were no exec changes or funding events that might have driven such a change. Why did it happen?
Chief marketing guy Sanjay Sarathy told us: "The name change from Talena to Imanis Data was in the making for a while, driven by the fact we occasionally got confused for doing 'talent management' or sometimes people heard the word 'Talend' and assumed that.
"Since Imanis stems from the Latin root 'immense' or 'vast' we thought it far more relevant to the space we focused on, namely management of very large data sets."
Veritas has added data classification to its eDiscovery platform to help organisations speed up response time to GDPR subject access requests (SARs) and ensure compliance.
The product has pre-designed classifications for faster scanning and tagging of data, redaction tools for smarter data review processes, and annotation capabilities to simplify how case handlers mark up review documents and share notes with each other.
It is available as both software and an appliance.
Server SAN booster Wikibon has issued a projections document, which starts out by saying the field continues to grow fast and is projected to replace most traditional storage arrays by 2026.
Server SAN is cheaper to install, maintain and upgrade than traditional SAN arrays. It offers much higher performance and enables point-to-point communication, with consistent low-latency, high-bandwidth connections between any application and any data source.
The performance potential is the single most important strategic reason for migrating to Server SAN, and will be a prerequisite for real-time analytic systems and AI-supporting systems of record.
The Wikibon crew think traditional storage (mainly SAN and NAS) will shrink at a -18 per cent CAGR out to 2026. In contrast, the combination of enterprise Server SAN and Cloud Hyperscale Server SAN is projected to grow at 18 per cent. Overall storage growth is projected at about 3 per cent.
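Those CAGR figures compound dramatically over the decade. A two-line calculation shows what -18 and +18 per cent a year do to a notional index of 100 between 2016 and 2026:

```python
def project(base, cagr, years):
    """Compound a starting value at a constant annual growth rate."""
    return base * (1 + cagr) ** years

# What the quoted rates do to a notional index of 100 over ten years:
print(round(project(100, -0.18, 10), 1))  # traditional SAN/NAS: -> 13.7
print(round(project(100, 0.18, 10), 1))   # Server SAN:          -> 523.4
```

In other words, a -18 per cent CAGR leaves barely a seventh of the market standing, while +18 per cent more than quintuples it.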
They discuss UniGrid, a new system architecture where the storage and network layers are separated out from the compute layer and operate independently of compute (and, in future, of compute and DRAM). UniGrid opens the door, they declare, to ultra-low-latency systems with multiple different architectures across hundreds or thousands of heterogeneous processor nodes. UniGrid can be used as an architectural foundation for True Private Cloud, Enterprise Hyperscale and Cloud Hyperscale.
This sounds like composable infrastructure big time.
For more about the Server SAN projections, assumptions, classifications and so forth, have a look at Wikibon's document, Server SAN Projections 2016-2026. ®