
Avere screeches straight through VDI storms

Striping and replicating across front-end accelerators with pedal to the metal

Avere, which makes clustered filer accelerator nodes, is striping data across caches and even caching multiple copies to handle burst read requests.

When servers deluge a filer with Virtual Desktop Infrastructure (VDI) boot requests, the filer can struggle. A thousand or more boot requests can arrive at once in a VDI storm, and configuring filers to cope with that extreme spike in demand can be expensive. One way to solve the problem would be to cluster filers and have them use the fastest disks, or even solid-state drives (SSDs).

But Avere says it has a better idea: put its clustered FXT accelerator nodes in front of bulk data filers that use commodity SATA drives. The FXTs have a tiered storage design – RAM, then NVRAM, then NAND flash, and finally SAS disk – and cache read and write I/Os according to their characteristics. Avere has now upgraded its operating system, AOS, to v1.4, which does array-level things such as striping and replication within and between FXT cluster nodes to deal with request bursts.
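
Avere hasn't published AOS's placement logic, but the tier-by-I/O-characteristics idea can be sketched roughly like this – the function name and thresholds below are ours, purely for illustration:

```python
# Illustrative sketch only: AOS's real placement rules are not public.
# Tier names follow the FXT design described above; thresholds are invented.
def place_io(size_bytes: int, is_write: bool, is_hot: bool) -> str:
    """Pick a cache tier for an I/O based on its characteristics."""
    if is_write:
        # Writes land in NVRAM so they survive a power cut before destaging.
        return "NVRAM"
    if is_hot:
        # Frequently read data is served straight from RAM.
        return "RAM"
    if size_bytes <= 64 * 1024:
        # Small, cooler reads sit in flash.
        return "NAND"
    # Large, cold reads stream well enough from SAS disk.
    return "SAS"
```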

It works like this: the FXT cluster monitors the access rate to files. As soon as one client server starts requesting a file at a high rate, that file is pulled into an FXT node's RAM. So far, so straightforward. As the request rate to that file increases, with more client servers joining in, additional FXT nodes respond to requests using the first node's RAM-cached copy.

If the request rate continues to climb, the cached file in the first FXT node is striped across the FXT nodes handling the requests, increasing the cluster's ability to respond still further. Step four is to replicate the cached file so that each of the FXT nodes involved holds its own RAM-cached copy.
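
A minimal sketch of that four-stage escalation, assuming a per-file request-rate counter; the state names and thresholds are invented, not Avere's:

```python
import enum

class CacheState(enum.Enum):
    BACKEND = 0      # file lives only on the backend filer's SATA drives
    RAM_ONE = 1      # step one: cached in a single FXT node's RAM
    PEER_SERVE = 2   # step two: other nodes answer from that node's copy
    STRIPED = 3      # step three: file striped across participating nodes
    REPLICATED = 4   # step four: every participating node holds a full copy

# Invented promotion thresholds, in requests per second.
THRESHOLDS = [10, 100, 1_000, 10_000]

def escalate(state: CacheState, req_rate: float) -> CacheState:
    """Promote a hot file one stage at a time as its request rate climbs."""
    if state.value < len(THRESHOLDS) and req_rate > THRESHOLDS[state.value]:
        return CacheState(state.value + 1)
    return state
```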

As the request rate eventually decreases, the caching scheme reverses through the stages, with the file ending up on the first FXT node's SAS disks before, finally, disappearing from the FXT nodes altogether and reverting to the copy on the backend filer's SATA drives.
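
The de-escalation can reuse the same stages in reverse; a hysteresis margin – here half the promotion threshold, again invented – stops a file flapping between stages:

```python
def decay(state: CacheState, req_rate: float) -> CacheState:
    """Demote a file one stage at a time as its request rate falls.
    Reuses CacheState and THRESHOLDS from the escalation sketch above.
    Per the article, the final demotion parks the file on the node's SAS
    disks before it reverts to the backend filer entirely."""
    if state.value > 0 and req_rate < THRESHOLDS[state.value - 1] / 2:
        return CacheState(state.value - 1)
    return state
```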

Because AOS 1.4 can support more requests from servers, Avere has also increased the number of backend filers supported to 24. It has added an optional second connection to the backend filers too, with data access by NFSv3 over the first connection and access control by either CIFS or NFSv4 across the second.
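
How a split like that might look is sketched below; the endpoints and operation names are hypothetical, chosen only to show bulk data riding NFSv3 on one link while ACL traffic uses a protocol with richer access-control semantics on the other:

```python
# Hypothetical endpoints: NFSv3 for data on one connection, NFSv4 (or CIFS)
# for access control on the other, as AOS 1.4's optional second link allows.
DATA_LINK = "nfs3://backend-filer:2049"
CONTROL_LINK = "nfs4://backend-filer:2049"

def route(op: str) -> str:
    """Send bulk I/O down the data link, ACL operations down the control link."""
    if op in ("read", "write", "readdir"):
        return DATA_LINK
    if op in ("get_acl", "set_acl"):
        return CONTROL_LINK
    raise ValueError(f"unrecognised operation: {op}")
```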

Avere reckons the storage industry is entering a new era, with a transition from disk drives to solid-state storage under way. El Reg thinks we're going to see multiple NAND flash tiers in the FXTs next year, single-level cell followed by multi-level cell. We might also see the FXTs front-ending a massive combined primary-data and archive file store, providing fast access to the primary data and perhaps migrating older data to the archive section of the backend filer. This is sheer speculation, by the way. ®
