And now, in alphabetical order, all the storage news you may have missed
From Actian to WANdisco, we've got it all
Another week, another load of storage news laid out on the groaning stalls of our farmer’s market, full of free-range and organic produce. Walk around and check it out.
Actian says the newest version of its NoSQL database combines object technology and enterprise-class database features to provide complex data models to business. It claims to have reinvented the traditional object-oriented database to deliver the first NoSQL database with enterprise-class performance and scale.
Actian claims up to a 300 per cent throughput improvement, and the release features:
- ACID and distributed transaction support,
- 2-phase commit,
- Online schema evolution,
- 2-level cache,
- Multi-session/multi-threaded architecture.
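Two-phase commit, listed above, is the classic protocol for making a distributed transaction atomic: every participant first votes on whether it can commit, and the transaction commits only if all vote yes. A minimal coordinator sketch, for illustration only (this is the generic protocol, not Actian's implementation; the `Participant` class is hypothetical):

```python
# Generic two-phase commit sketch -- not Actian's implementation.

class Participant:
    """Hypothetical resource manager taking part in a distributed transaction."""
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: vote yes only if the local work can be made durable.
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1 (voting): every participant must vote yes.
    if all(p.prepare() for p in participants):
        # Phase 2 (completion): unanimous yes -> commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote aborts the whole transaction.
    for p in participants:
        p.rollback()
    return "aborted"
```

A single "no" vote rolls back all participants, which is what gives the transaction its all-or-nothing ACID property across nodes.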
It can be extended with Actian DataConnect to other Actian data management and analytics products and to other applications on-premises or in the cloud.
This Actian NoSQL database version 9.3, previously known as Versant, is immediately available.
EMC criticises Pure’s FlashArray systems for not really being tier 1 enterprise capable, identifying, for example, the lack of replication, snapshotting, unproven six ‘nines’ reliability, and a lack of file services.
Pure’s subsequent provision of replication, snapshotting and file services, among other things, makes some of Sam Grocott’s points moot, and shows Pure is aware of what it needs to provide.
Here is EMC squaring up to Pure in the tier 1 enterprise arena, and saying it has a proven advantage. True, but the size of the advantage is shrinking.
Druva, a cloud data protection and information management supplier, has announced extended data protection and governance support for Microsoft SharePoint Online and extended Office 365 support:
- Centralised information management across Office 365 (OneDrive, Exchange Online and SharePoint) and end-users devices,
- Automatic backup and archiving of content, cloud to cloud, to prevent data loss, with IT admins able to recover directly back to their Office 365 environment, from any snapshot point,
- Data governance with data collected and preserved from dispersed enterprise data - including endpoints and cloud apps - with a single dashboard, and support for customer compliance and legal workflows.
Druva inSync support for SharePoint Online is available in Summer 2017.
In-memory data grid supplier Hazelcast has announced the 0.4 release of Hazelcast Jet – an application-embeddable, distributed processing engine for big data stream and batch processing.
Hazelcast Jet is an Apache 2 licensed open source project that performs parallel execution to enable data-intensive applications to operate in near real-time. It includes event-time processing with tumbling, sliding and session windowing.
Hazelcast claims Jet is appropriate for applications such as sensor updates in IoT architectures (house thermostats, lighting systems), in-store e-commerce systems and social media platforms.
It says stream processing has overtaken batch processing as the preferred method of processing big data sets for companies that require immediate insight into data. However, the data must be partitioned – that is, a fragment of the stream is taken and analysed on its own.
To classify data windows during processing, each data element in the stream needs to be associated with a timestamp. In Jet 0.4 this is achieved via event-time processing (a logical, data-dependent timestamp, embedded in the event itself). A drawback of event-time processing is that events may arrive out of order or late, so you can never be sure if you see all events in a given time window.
To alleviate this issue, the latest release of Jet includes windowing functionality which enables users to evaluate stream processing jobs at regular time intervals, regardless of how many incoming messages the job is processing. Jet offers three types of windows:
- Fixed/tumbling – time is partitioned into same-length, non-overlapping chunks. Each event belongs to exactly one window.
- Sliding – windows have fixed length, but are separated by a time interval (step) which can be smaller than the window length. Typically the window length is a multiple of the step.
- Session – windows have various sizes and are defined based on the data, which should carry some session identifiers.
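The three window types differ in how an event's timestamp maps onto window boundaries. A rough sketch of that mapping (an illustration of the concepts only, not the Hazelcast Jet API; the window parameters are arbitrary):

```python
# Sketch of how event timestamps map onto windows.
# Conceptual illustration only -- not the Hazelcast Jet API.

def tumbling_window(ts, length):
    # Fixed/tumbling: time is cut into same-length, non-overlapping
    # chunks, so each event belongs to exactly one window.
    start = (ts // length) * length
    return [(start, start + length)]

def sliding_windows(ts, length, step):
    # Sliding: window starts are `step` apart, so when step < length
    # an event falls into length/step overlapping windows.
    windows = []
    start = (ts // step) * step          # latest window start at or before ts
    while start > ts - length:           # window [start, start+length) covers ts
        windows.append((start, start + length))
        start -= step
    return list(reversed(windows))

def session_windows(timestamps, gap):
    # Session: events closer together than `gap` merge into one window,
    # so windows have data-dependent sizes.
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][1] <= gap:
            sessions[-1][1] = ts          # extend the current session
        else:
            sessions.append([ts, ts])     # start a new session
    return [(s, e + gap) for s, e in sessions]
```

Note that a tumbling window is just a sliding window whose step equals its length, which is why the two are often implemented by the same code path.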
In a latency benchmark study Jet outperformed its competitors with a 40ms average latency for stream processing computations which remained flat as messages increased. Flink and Spark’s execution latencies were hundreds of ms rising to seconds at the higher message throughputs.
Results in milliseconds
The study compares the average latencies of Hazelcast Jet, Flink and Spark Streaming under various different criteria such as message rate and window size. The full benchmark is available here.
Hedvig and HPE
There is a pre-tested, pre-validated, rack-scale integrated Hedvig and HPE system, containing Hedvig software-defined storage - its Distributed Storage Platform - with HPE Apollo 4200 servers. It’s available in 48TB and 96TB configurations.
- Multi-protocol support to collapse disparate tiers of storage: Unify block, file, and object interfaces in a single platform. Eliminate the need for disparate SAN, NAS, and object storage and collapse traditional tiers by selecting workload-optimised servers.
- App-specific data services for virtual machines and containers: Fine-tune data services such as replication, deduplication, and compression on a per-application basis. Seamlessly provision storage from any hypervisor or container system.
- Native, multi-site replication for active data across any cloud: Replicate data across any private or public cloud site. Build highly available infrastructures that allow seamless failover of applications, even among different public cloud providers.
This is the first joint system since Hedvig announced its strategic investment from Hewlett Packard Pathfinder, HPE’s venture investment and partnership program.
Through HPE Complete, customers can purchase perpetual and subscription licenses to Hedvig in addition to receiving support for the system directly from HPE and the HPE partner ecosystem.
The HPE Complete and Hedvig joint system became generally available on June 5, 2017. Pricing starts at $115/TB/year plus associated HPE hardware costs.
Regarding IBM’s ending of DeepFlash 150 product sales, Eric Herzog, IBM’s VP for Product Marketing and Management and VP of worldwide Storage Channels, said: “We have another product that supports all flash and Spectrum Scale - the IBM Elastic Storage Server, also known as the IBM ESS. On April 11 we announced an enhanced version with a new high-density JBOD for IBM ESS that increased bandwidth, which shipped in May.”
“At the same time, we also made a statement of direction as part of this announcement about ‘new Flash Enhanced Elastic Storage Server models’ without a shipment date. Those will be coming in the 2nd half of 2017.
“As you know, from an application, workload and use case perspective it is our software that matters. For the older product you had in the article yesterday and the IBM ESS that is Spectrum Scale. The current IBM ESS with all flash has Spectrum Scale, giving end users all the same applications, workloads, and use cases from the older product you mentioned. The current IBM Elastic Storage all flash is selling well.
“Our storage growth, per IDC, has shown it is our investment in leading edge solutions that continues to be our focus and driver.”
Intel announced its DC P4501 low-power data centre SSD at the end of May and we received some tech specs from the company. Capacities are 500GB, 1TB, 2TB and 4TB in 2.5-inch and M.2 form factors. The drives use Intel 3bits/cell (TLC) 3D NAND.
Random read/write IOPS are up to 360,000/46,000 and sequential read/write bandwidth is 3,200MB/sec and 900MB/sec - these things are heavily read-optimised.
We’re told the Intel-designed controller has new firmware and supports 128 queues. Intel describes its endurance carefully: random/JEDEC up to 1 DWPD (drive write per day) or 5 PB written, sequential up to 3 DWPD or 20 PB written.
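DWPD and total bytes written are two views of the same endurance budget: capacity × DWPD × days in the endurance period gives total writes, and the per-drive rating is whichever limit is reached first. A quick sanity check of the quoted figures (the five-year endurance period here is an assumption, a common industry convention; Intel's spec sheet is authoritative):

```python
# Relate DWPD (drive writes per day) to total terabytes written.
# The 5-year endurance period is an assumption, not from Intel's spec.

def total_writes_tb(capacity_tb, dwpd, years=5):
    """Total TB written if the drive absorbs `dwpd` full writes per day."""
    return capacity_tb * dwpd * 365 * years

# A 2TB P4501 at the random-workload rating of 1 DWPD:
random_tb = total_writes_tb(2, 1)    # 3,650 TB, about 3.65 PB
# The same drive at the sequential rating of 3 DWPD:
seq_tb = total_writes_tb(2, 3)       # 10,950 TB, about 10.95 PB
```

On these assumptions a 4TB drive at 1 DWPD would work out to roughly 7.3 PB, which suggests the quoted "or 5 PB written" is the cap that kicks in first on the larger capacities.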
Compared to its faster sibling, the DC P4500, with random read/write IOPS of up to 710,000/68,000, we get a sense of how far the P4501’s random read/write performance has been cut back.
Here are some notes on HCI hardware and software vendor Nutanix from William Blair analyst Jason Ader, following William Blair's 37th Annual Growth Stock Conference:
- Nutanix is positioning its hyper-converged infrastructure (HCI) solutions in the white space between public clouds (which lack control) and traditional, siloed on-premises data centres (which lack automation and agility).
- Nutanix believes customers will continue to see the value in owning versus renting infrastructure, as long as that infrastructure offers cloud-like consumption and agility, high levels of automation, continuous innovation, and one-click simplicity.
- As the hyper-converged appliance market becomes more commoditised, Nutanix's main differentiation going forward will be its enterprise cloud operating system software, which continues to move up the stack and add more features in security, networking, automation, and orchestration.
- Nutanix is the only HCI vendor that operates across multiple hardware platforms (e.g. Dell, Lenovo, IBM, HPE and Cisco).
- Nutanix’s next big engineering initiative will be to offer a frictionless experience between public and private clouds, a major technical challenge that nobody has tackled yet.
A major technical challenge for enterprise vendors is to offer customers a seamless experience between public and private clouds. Key challenges here include:
- Physical networking (when a workload moves to the cloud, there is currently no way to maintain IP configurations/addresses),
- Identity management,
- Storage management functions like replication.
Nutanix currently has about 150 engineers working specifically on these hybrid cloud challenges with the goal of enabling a frictionless experience across public and private clouds.
Management called out VMware and Microsoft as its main competitors in the long term, as both companies have been devoting substantial resources to the development of hybrid cloud technologies (e.g., VMware on AWS, Microsoft Azure Stack).
Sheppard Robson, an architecture practice in London with round-the-world operations, has replaced an ageing storage infrastructure with a combination of Panzura and Amazon Web Services (AWS) S3 for unstructured data, and HPE-owned Nimble Storage for high-performance block data.
This eliminated the need for its traditional NetApp NAS (network-attached storage) storage model, and also enabled Sheppard Robson to remove an entire data centre being used for disaster recovery.
Simon Johns, IT director at Sheppard Robson, said: "We've been able to move all our unstructured data to Panzura and all of our VMware clones and all the structured data to Nimble Storage, enabling us to decommission all the NetApp arrays that had been handling all that data. The combination of Panzura and Nimble Storage gives a complete, advanced hybrid cloud storage solution for both file and block data. We completely removed our disaster recovery sites – a direct consequence of implementing Panzura. As a result, we've been able to shut down an entire data centre and provided collaboration between remote offices."
Synology has launched new DiskStation DS1517 and DS1817 systems: scalable 5-bay and 8-bay desktop NAS products running DiskStation Manager (DSM) v6.1 software.
Both the DS1517 and DS1817 can be scaled up to a raw capacity of 150TB and 180TB respectively with two DX517 expansion units. They support Synology High Availability (SHA), providing redundancy in case of unexpected network failure or disasters, and ensuring seamless transition between clustered servers.
The DS1517 is powered by a quad-core 1.7GHz processor, 2GB RAM, and four Gigabit LAN ports featuring failover and Link Aggregation. It delivers sequential throughput performance over 449 MB/sec writing and 436 MB/sec reading when using RAID 5.
The DS1817 is powered by a quad-core 1.7GHz processor with RAM expandable up to 8GB. Thanks to built-in 10GbitE interfaces, it can achieve sequential throughput performance exceeding 1,577 MB/sec reading and 739 MB/sec writing when using RAID 5. Built-in 10GBASE-T and Gigabit LAN ports pave the way for an upgrade to a 10GbitE environment, and provide support for Link Aggregation and failover.
It provides storage for virtualisation environments, with VMware, Citrix and Hyper-V certifications.
Synology says these systems are for professionals and growing small/medium-sized businesses. Each can serve as a centralised data backup destination. They have a 3-year warranty, extendable to 5 years in some regions.
- Avere says full-service digital animation studio Jam Filled is using Avere Systems’ FXT Edge Filers to support over 360 artists across two locations, providing 24/7 rendering power and nearly 100 per cent uptime. An Avere cluster front-ends the studio’s 6,400-core render farm deployed at a co-location facility. Jam Filled’s 2D and 3D customers include Mattel Creations, NBCUniversal Universal Kids, Nickelodeon, and other major networks.
- Excelero has been assigned US patent 9,658,782, covering underlying technology in its NVMesh server SAN system. Nine additional US patents are pending.
- Hortonworks announced a Flex Support subscription scheme to provide seamless support to organisations as they transition from on-premises to the cloud. It also unveiled Hortonworks Dataflow (HDF) 3.0, the next generation of its open source data-in-motion platform, which enables customers to collect, curate, analyse and act on all data in real-time, across the data centre and cloud.
- Huawei and Tableau have integrated Huawei’s FusionInsight big data platform (Hadoop ecosystem, massively parallel processing database (MPPDB), and big data cloud services) and Tableau’s data visualisation software.
- Lenovo has announced a new global business partner program, with one globally tiered structure, designed to reward loyalty and success, specifically tailored to PC and Data Centre businesses. It says that, based on partner feedback, it has simplified the schemes by removing targets and clip levels and reducing the complexity of the incentives, while at the same time improving the tools and access to rewards.
- Pivot3’s vSTAC platform has received a key certification required for US federal government purchases, becoming the only HCI vendor with this Common Criteria certification for data protection products.
- Sphere 3D Corp. has announced new HVE Appliances supporting Non-Volatile Memory express (NVMe) technology. These NVMe-enabled appliances allow for over 3 times the drive read/write performance of SSD-only platforms, and are available as converged or hyper-converged appliances, or as "Datrium Ready" open converged nodes.
- Storage Made Easy (SME) has a new File Approvals feature within its File Fabric product, which provides better workflow approval processes across data stores, helping companies to scale up their productivity when dealing with file approvals.
- Replicator WANdisco has adopted the maximum availability architecture guidelines of Oracle Corp. for its Fusion product. Apparently this was pivotal in securing its $1.5m contract with an unidentified US financial services institution, announced in October 2016, with the client being a "large Oracle customer".