ROBO HCIAs erupt from Atlantis. Thankfully it's not Rise of the Machines

It's hyper-converged all-flash appliances. Phew!

Software biz Atlantis has announced a pint-size hyper-converged appliance for remote and branch offices.

Atlantis’ HyperScale CX-4 is a two-node design integrating compute, all-flash storage, networking and virtualisation, built on the company's software.

It is available on Dell’s PowerEdge FX2 servers, which feature blade servers and integrated 10GbE switching.

The idea is that ROBOs run what are, in effect, micro-data centres, and integrated systems suit them because they need fewer local management resources.

It’s not a new idea, but what is new is basing the ROBO system on a hyper-converged infrastructure appliance (HCIA). Typically these are four node (server) systems.

Atlantis has cut that in half to produce a more affordable ROBO HCIA box set. Its product has 48 compute cores (FC630 servers with two Intel Xeon E5-2680 v3 CPUs and 24 cores per node) and 4TB of effective storage capacity* inside a 2U enclosure**. That comes from either three or four 800GB SSDs.

Atlantis claims this product “provides the lowest entry point cost for any hyper-converged appliance.” There is “a simplified deployment process that pre-configures networking for each remote appliance to connect to [a] central Atlantis Manager virtual appliance.”

There are 4-node HyperScale CX-12 and CX-24 models which have 12TB and 24TB of effective capacity respectively. Atlantis launched these in May last year, using SuperMicro servers. Cisco, HP and Lenovo servers were also supported via an architecture reference design.


Dell FX2 hyper-converged platform

Management is a big problem with ROBO kit, and here Atlantis is providing central management and 24/7 support.

Reducing cost is a focus of Atlantis’ marketing, with the company claiming customers will see “drastic savings in ROBO infrastructure and operations costs.” It claims this is “the most affordable hyper-converged appliance on the market and includes data protection, high-availability and disaster recovery capabilities at no additional cost.”

CEO Chetan Venkatesh said: “Prior to this offering, it would be unheard of for a ROBO environment to be equipped with an all-flash hyper-converged appliance because of the cost.”

The cost is still relatively substantial unless you are a mid-market or larger business; these are all-flash systems, after all. And there is a potential problem: 4TB is not a lot of capacity, which means a data transfer pipe to the central data centre will be needed for overflow data.
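To put that pipe in perspective, here is a back-of-envelope sketch of how long moving a full 4TB off a ROBO box would take. The link speed is an assumption for illustration, not a figure from Atlantis or Dell:

```python
# Back-of-envelope only: the 100Mbit/s WAN link is an assumed figure,
# not anything quoted by Atlantis or Dell.

def transfer_hours(data_tb: float, link_mbps: float) -> float:
    """Hours to move data_tb terabytes over a link_mbps megabit/s pipe."""
    data_bits = data_tb * 1e12 * 8          # TB -> bits (decimal units)
    seconds = data_bits / (link_mbps * 1e6)  # bits / (bits per second)
    return seconds / 3600

print(f"{transfer_hours(4, 100):.1f} hours at 100Mbit/s")  # ~88.9 hours
```

Even ignoring protocol overhead, shifting the full effective capacity takes the better part of four days on a typical branch-office link, which is why incremental replication to the central site matters here.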

Pick up a Dell/CX-4/CX-12/CX-24 PDF marketing brief here and a 10-page spec-sheet here.

The HyperScale CX-4, CX-12 and CX-24 appliances are available now on Dell FX2 servers through Dell’s channel in the USA, Europe and the Middle East; not globally, and they ship direct to customers. You can also choose to have Cisco, HP, Lenovo and SuperMicro hardware. ®

*This is effective capacity after data reduction and Atlantis says this is guaranteed.
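Since the 4TB figure is effective rather than raw capacity, it is worth seeing what data-reduction ratio that implies. The arithmetic below is illustrative: the raw flash comes from the three-SSD configuration mentioned above, but the implied ratio is our back-of-envelope inference, not a vendor-stated number:

```python
# Illustrative only: Atlantis quotes 4TB *effective* capacity; the
# implied data-reduction ratio below is inferred, not a vendor figure.

def effective_capacity(raw_tb: float, reduction_ratio: float) -> float:
    """Effective capacity = raw capacity x data-reduction ratio."""
    return raw_tb * reduction_ratio

raw_tb = 3 * 0.8               # three 800GB SSDs = 2.4TB raw
implied_ratio = 4.0 / raw_tb   # ratio needed to reach the quoted 4TB
print(f"Implied reduction ratio: {implied_ratio:.2f}:1")  # ~1.67:1
```

Note this ignores any redundancy or metadata overhead; accounting for replication across the two nodes would push the real reduction ratio required of the deduplication and compression engine higher still.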

**It seems odd that a micro datacentre consists of a 2U system needing a rack frame to house it. Is the rest of the rack empty? If not, what else is in it? Assume it’s a half-size rack and you still have some 18U of space going to waste, unless there is other equipment you need to house.
