Why successful AI needs fast data access

Intel Optane builds a data pipeline for any type of AI workload

Sponsored Spending on artificial intelligence systems will grow from $37.5bn worldwide in 2019 to $97.9bn in 2023, according to IDC. And use cases cover everything from ERP, manufacturing software and content management to automated customer service agents, threat intelligence and fraud investigation.

The number of smaller data centres in operation is also likely to increase, IDC analysts say. This will enable enterprises to navigate local data protection regulation and host AI workloads in edge computing environments to optimise application performance.

However, respondents to Gartner’s 2019 CIO Agenda survey indicated that data scope and quality continue to be major challenges to AI adoption. They also expressed concern that organisations cannot store, process and analyse sufficiently large quantities of good quality information to build successful use cases. Gartner notes that many organisations struggle to scale their AI pilot projects into enterprise-wide production implementations, which inevitably limits the technology’s business value.

AI data pipeline critical to success

The process intensive nature of AI means that access to high performance, scalable IT infrastructure is critical to its success in real world deployments. This entails the ability to analyse large data sets quickly and accurately enough to produce actionable insight or provide the basis for automated responses within particular applications or services.

For example, chatbots need to ingest and process language and provide a sufficiently fast and accurate response to the human participant to avoid conversational disruption caused by undue latency. Speed is also of the essence for fraud analysis and investigation applications, which must feed back risk scores almost immediately to enable financial institutions to conduct safe transactions in real time.

But building, testing, optimising, training, inferencing and maintaining the accuracy of AI models depends on having large quantities of data with which to train them in the first place. This in turn is sourced from a diverse array of high quality and dynamic data inputs. Gathering all of that information and storing it ready to feed into adjacent processing engines is a big undertaking in itself. And doing it at optimum speed requires an AI data pipeline that provides ready access to reliable and well-structured data sets for analytics purposes. However, most legacy network and storage architectures are ill-equipped to provide this.

Storage hit with high speed data volume and diversity

The storage architecture also has to handle extreme fluctuations in the volume and type of data being processed and analysed by the AI workload. This can range from petabytes at the ingestion stage to gigabytes of structured and semi-structured data during training, shrinking to just kilobytes once the data is subsumed into the final trained model.

Workloads vary in terms of read/write operations too, beginning with 100 per cent writes at the ingest stage, progressing to a 50/50 read/write mix during preparation, then shifting to 100 per cent reads during training and inference. And high throughput and low latency are a must to support fast execution of compute intensive training and inference processes, irrespective of the IO demands or the type of data being processed.
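
Those shifting profiles can be summarised in a short sketch. The minimal Python snippet below simply encodes the stage-by-stage volumes and read/write mixes described above as a data structure; the figures are this article’s approximations, not measured values.

```python
# Approximate I/O profile of each AI pipeline stage, as described above.
# The scales and read/write mixes are the article's rough figures, not
# benchmark results.
PIPELINE_STAGES = [
    {"stage": "ingest",    "scale": "petabytes", "reads": 0.0, "writes": 1.0},
    {"stage": "prepare",   "scale": "gigabytes", "reads": 0.5, "writes": 0.5},
    {"stage": "train",     "scale": "gigabytes", "reads": 1.0, "writes": 0.0},
    {"stage": "inference", "scale": "kilobytes", "reads": 1.0, "writes": 0.0},
]

for s in PIPELINE_STAGES:
    print(f"{s['stage']:>9}: ~{s['scale']:<9} "
          f"{s['reads']:.0%} reads / {s['writes']:.0%} writes")
```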

Due to the performance limitations of spinning disk technology, simply adding more hard disk drives (HDDs) to existing storage infrastructure is unlikely to deliver the foundation needed to build an AI data pipeline. But simply upgrading to SATA-based flash memory solid-state drives (SSDs) probably won’t solve the problem either, even when latency is improved using faster interfaces based on the non-volatile memory express (NVMe) protocol.

Intel Optane customised for AI

The improved performance and low latencies offered by Intel® Optane™ SSDs can reduce the time it takes to build and train those AI models. The longer it takes for different system components to get the data, the slower the whole computing process goes. Yet in some cases the fastest storage technology can be up to 10,000 times slower than the fastest CPU memory, which creates a crippling bottleneck each time data is accessed from storage. By getting data as close to the CPU as possible, the processor spends less time waiting to receive information.
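
As a rough illustration of that gap, the hypothetical Python micro-benchmark below compares touching data already resident in RAM with re-reading the same bytes through the filesystem. The file name is invented, the absolute numbers will vary by machine, and a serious test would bypass the operating system’s page cache with direct I/O.

```python
import os
import time

SIZE = 64 * 1024 * 1024   # 64 MiB of test data
PATH = "scratch.bin"      # hypothetical scratch file, for illustration only

buf = os.urandom(SIZE)
with open(PATH, "wb") as f:
    f.write(buf)

# Touch one byte per 4 KiB page of the in-memory buffer.
t0 = time.perf_counter()
checksum = sum(memoryview(buf)[::4096])
t1 = time.perf_counter()

# Re-read the same data through the filesystem. Note: the page cache may
# still hold it, so a real benchmark would use direct I/O or drop caches.
with open(PATH, "rb", buffering=0) as f:
    data = f.read()
t2 = time.perf_counter()

print(f"RAM access: {t1 - t0:.4f}s")
print(f"File read : {t2 - t1:.4f}s")
os.remove(PATH)
```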

Intel® Optane™ technology is a caching device for flash storage components that supplements existing random access memory (RAM) to speed up data access for the compute intensive applications and complicated data sets common to AI workloads, thereby helping to move information along the AI pipeline faster and more smoothly. It is based on a non-volatile memory (NVM) technology called 3D XPoint™, which is up to 1,000 times faster than NAND flash memory thanks to a 3D cell and array architecture that is, like DRAM, bit addressable.

The Intel® Optane™ technology stack combines a portfolio of Intel system memory controllers, interface hardware and software with 3D XPoint™ memory, integrating memory and storage into a single device that is cheaper than DRAM, much faster than NAND and non-volatile. That means it retains stored information even after being powered off, whereas volatile DRAM is wiped when the system is powered down.

The technology is incorporated into Intel® Optane™ solid state drives with capacities ranging from 375GB to 1.5TB, which are purposely designed for latency-sensitive workloads such as AI. The performance of the Intel® Optane™ storage infrastructure can be boosted further when used in tandem with Intel® QLC 3D NAND SSDs. When paired with NVMe, this combination delivers performance up to four times better than equivalent SSDs with SATA interfaces.

Baidu TCO gains

The advantages of using Intel® Optane™ technology for end user organisations include improved workload performance, faster boot and recovery times, and lower total cost of ownership (TCO).

One company already enjoying these benefits is Baidu. The Chinese internet giant has worked with Intel to develop a private cloud storage solution that tackles the challenges posed by massive volumes of small, unstructured files, as a key component in its ABC (AI, Big Data, Cloud) strategy.

Baidu AI Cloud’s ABC object storage engine provides an integrated interface for application scenarios such as AI training and high performance computing, and Intel® technologies help to ensure stable performance output even with a rapid increase in file quantities, while drastically reducing TCO.

Baidu AI Cloud ABC Storage employs a combination of SSDs with Intel® Optane™ technology and Intel® QLC 3D NAND technology. The Intel® Optane™ SSD is used as a cache to optimise read efficiency and synchronisation latency, boosting metadata processing speed. Four Intel® SSD D5-P4320 drives are included with each storage server, providing the large capacity storage. The Baidu AI Cloud ABC Storage team said Intel’s storage technologies have “helped our solution yield optimum results in terms of stability and input/output operations per second (IOPS)”.
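
The pattern described here, a small fast tier absorbing hot reads in front of a large capacity tier, can be sketched generically. The class below is an illustrative assumption, not Baidu’s implementation: the cache dictionary stands in for the Optane SSD and the backing dictionary for the QLC drives.

```python
from collections import OrderedDict

class TieredStore:
    """Read-through cache sketch: a small fast tier in front of a large
    capacity tier. Purely illustrative; not Baidu's implementation."""

    def __init__(self, cache_capacity: int):
        self.cache = OrderedDict()   # stands in for the Optane SSD tier
        self.bulk = {}               # stands in for the QLC NAND tier
        self.cache_capacity = cache_capacity

    def put(self, key: str, value: bytes) -> None:
        self.bulk[key] = value       # writes land on the capacity tier

    def get(self, key: str) -> bytes:
        if key in self.cache:        # hot path: served from the fast tier
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.bulk[key]       # miss: fetch from the capacity tier
        self.cache[key] = value      # promote, evicting least recently used
        if len(self.cache) > self.cache_capacity:
            self.cache.popitem(last=False)
        return value
```

In a real deployment the two tiers would sit on separate devices, with metadata pinned to the fast tier, but the read-through promotion logic is the same in spirit.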

iFLYTEK and Dell show improvements

iFLYTEK, an AI-powered voice recognition and natural language processing (NLP) specialist, is combining second generation Intel® Xeon® Scalable processors with Intel® Deep Learning Boost and Intel® Optane™ SSDs to improve the TCO of its AI cloud computing platform. The Chinese company has built a hot data cache using Intel® Optane™ SSDs to provide fast access to its AI models in order to improve average response times. The second generation Intel® Xeon® Scalable processors enable iFLYTEK to balance and optimise system performance by removing storage bottlenecks.
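
A hot data cache of this kind can be approximated with ordinary memoization. The sketch below is a hypothetical stand-in for iFLYTEK’s setup; the loader function, path layout and cache size are invented for illustration.

```python
from functools import lru_cache

def load_model_from_storage(model_id: str) -> bytes:
    """Hypothetical loader: a real system would deserialise the model
    from the SSD tier rather than read raw bytes."""
    with open(f"/models/{model_id}.bin", "rb") as f:
        return f.read()

@lru_cache(maxsize=32)  # cache size would be tuned to the fast tier
def get_model(model_id: str) -> bytes:
    # Repeat requests for a recently used model skip the storage read
    # entirely, cutting average response time.
    return load_model_from_storage(model_id)
```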

Dell EMC has teamed up with Intel to build a high performance storage solution that can cater for the full AI life cycle. This includes Dell PowerEdge servers with Dell EMC network switches, Isilon storage, and an optimised software stack. Intel® Optane™ SSDs provide lower latency and higher throughput than standard NAND PCIe SSDs.

Those use cases represent just a few of the many ways that Intel® Optane™ technology is already being used to optimise storage resources across multiple AI workloads. More innovation is on its way as industry verticals as diverse as healthcare, transportation, retail and manufacturing wake up to AI’s potential. But they will need to make similarly smart choices around fast, scalable storage infrastructure to derive maximum benefit.

Sponsored by Intel®
