
Playing nice with a host of tech-pushers pushed OpenStack close to edge

A cross-vendor framework for edge functionality is no small task

If one thing stood out at OpenStack's Vancouver summit in May, it's that the open-source project isn't just about data centre-based cloud computing any more.

When Rackspace and NASA founded OpenStack eight years back, they wanted it to drive more efficient computing in the data centre by delivering cloud computing resources on standard hardware.

Since then, OpenStack has become commonplace for homegrown, on-premises cloud infrastructure: 72 per cent of respondents to the OpenStack Foundation's October 2017 survey used it that way, up from 62 per cent in 2015.

Today, the OpenStack Foundation sees hardware architectures diversifying beyond commodity x86 platforms into GPUs, FPGAs and Arm-based systems. It also sees approaches to software becoming more complex as containers, microservices and serverless computing take hold, and it sees computing happening increasingly at the edge, outside the data centre.

Alan Clark, chair of the OpenStack Foundation's board and CTO for SUSE, tells us the project will need help to cover all that ground.

"OpenStack is key to that open infrastructure, but we recognised two years ago that not all technologies are going to be developed within this community, and we shouldn't try to push for that," he says. Instead, it must play well with others and tap into complementary projects from other industry associations and open-source groups. He calls those "adjacent communities".

"There are good examples around storage and networking," he says, highlighting OpenNFV as one of the first groups that it worked with.

But these working relationships aren't always smooth. "Every community has a personality. Every community works differently, and has a different terminology." The early days of collaboration with OpenNFV were rocky. "They were frustrated, because their blueprints – their requests for features – were getting a high rate of rejections and they didn't understand why. It was mostly down to the differences in how communities work."

OpenStack and OpenNFV had to learn how to communicate, and it took time to get rejection rates down and align the two groups.

Close to the edge

As OpenStack's community tackles more technologies, it'll have to build and navigate more relationships. One destination demands perhaps more cross-group collaboration than any other: edge computing. OpenStack's advocates want the project to power a devolution of computing power to the edge – peripheral data centres and devices, away from the central hub of the mega data centre. The challenge there is defining just what the edge is.

Beth Cohen, Verizon cloud technology strategist, opened this year's edge sessions by pitching a case for her company's virtual network services product – effectively OpenStack in a tiny box. This was the product of much discussion. "We spent two days arguing about what is edge computing," Cohen says. An edge computing committee spent months writing about it and eventually came up with a whitepaper definition.

In summary, OpenStack's concept of the edge involves distributed nodes, often with intermittent connectivity and latency concerns. But Cohen thinks that the small, low-powered, sensor-type devices that we often think about as part of the IoT might be too small to be included. "We need computing capability," she says.

Part of the complexity comes in the broad set of applications for edge computing. OpenStack's edge computing committee sees a range of use cases spanning retail to manufacturing.

"There are a lot of open-source groups focused on edge, because there are a whole bunch of use cases," says Clark. "That's where edge is struggling a bit, because there are so many use cases and you need to get focused and figure out which ones you're trying to target."

Telcos such as Verizon and AT&T have primarily driven these edge discussions, so it's no surprise that the focus is on mobile networking and 5G rollouts. 5G base stations will use very high frequencies over very short ranges, meaning operators will need far more of them. Efficient equipment built on network function virtualisation will be a key tool in that rollout, and it will be important to move functionality close to cellular users, because low-latency, high-bandwidth applications such as augmented reality are likely to feature early on.

The telcos don't want to reinvent edge technologies for different use cases, so they're using a building block approach for delivering edge-based systems. According to Cohen: "A composable structure of modules was top of mind for us."

No matter what sits at the edge, it's probably going to be a long way from your engineers, so automation for remote provisioning and configuration becomes important. That automation must span IoT, networking, cloud infrastructure and application software provisioning.

A critical part of OpenStack's edge story is Akraino, the edge stack project launched by AT&T, Intel and Wind River under the auspices of the Linux Foundation in February.

Akraino seeks to pull together technologies from different open-source initiatives to build a common stack. That stack includes tooling for continuous integration and deployment, SDKs for edge application development, and middleware and APIs for integration with third-party edge providers.

Declarative provisioning and management of these systems will be a big part of the automation process. That means stating in a pre-written file exactly how these systems will spin up and which resources each can access. With thousands of devices ranging from base stations to drones all running various bits of these edge stacks, and often disconnected for periods of time, having an admin do it from a centralised console won't always be an option.
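To make the declarative idea concrete, here is a minimal sketch in Python. It assumes a hypothetical node client and state format rather than Akraino's or any vendor's actual schema: the desired state is written down ahead of time, and a reconciliation loop applies it whenever the edge node happens to be reachable.

```python
# Illustrative sketch of declarative provisioning for an edge node.
# The desired state is written down ahead of time; a reconciliation loop
# applies it whenever the node is reachable. All names and fields here are
# hypothetical, not Akraino's or any vendor's actual schema.

DESIRED_STATE = {
    "node": "edge-site-042",
    "services": ["metrics-agent", "video-analytics"],
    "network": {"vlan": 210, "allowed_endpoints": ["core.example.net"]},
}


def diff(desired, observed):
    """Work out which services need to be started or stopped."""
    want = set(desired["services"])
    have = set(observed.get("services", []))
    return {"start": sorted(want - have), "stop": sorted(have - want)}


def reconcile(node_client, desired):
    """Push the node toward its declared state; safe to retry after an outage."""
    try:
        observed = node_client.fetch_state()   # remote call; may fail at the edge
    except ConnectionError:
        return "node unreachable, will retry"  # intermittent links are expected
    actions = diff(desired, observed)
    for svc in actions["start"]:
        node_client.start_service(svc)
    for svc in actions["stop"]:
        node_client.stop_service(svc)
    return actions


class FakeNodeClient:
    """Stand-in for a real transport to the edge node, purely for illustration."""

    def __init__(self):
        self.state = {"services": ["metrics-agent", "legacy-agent"]}

    def fetch_state(self):
        return self.state

    def start_service(self, name):
        self.state["services"].append(name)

    def stop_service(self, name):
        self.state["services"].remove(name)


if __name__ == "__main__":
    print(reconcile(FakeNodeClient(), DESIRED_STATE))
    # -> {'start': ['video-analytics'], 'stop': ['legacy-agent']}
```

The point of the pattern is that the file, not an operator at a console, is the source of truth; a node that has been offline simply converges on its declared state the next time it checks in.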

Airship

Declarative provisioning happens to be a key feature of another project, also from AT&T, called Airship, which was announced at the conference. Airship automates the creation of clouds on bare-metal systems out of the box, using Kubernetes-based containers. The idea is to spin up a vanilla container-based machine from nothing, using pre-baked instructions. The promise of Airship is that it will also offer a single workflow for managing the lifecycle of the cloud infrastructure and its applications.
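As a rough illustration of what those pre-baked instructions can mean once Kubernetes is running, the sketch below uses the standard Kubernetes Python client to apply a set of pre-written manifests to a newly bootstrapped cluster. It is an assumption-laden sketch rather than Airship's actual workflow or document schema: the kubeconfig path and manifest filenames are invented for the example.

```python
# Rough illustration only: applying pre-written, declarative manifests to a
# freshly bootstrapped Kubernetes cluster with the standard Python client.
# This is not Airship's actual workflow or document schema; the kubeconfig
# path and manifest filenames below are placeholders.
from kubernetes import client, config, utils


def apply_manifests(manifest_paths, kubeconfig="admin.conf"):
    """Create every object declared in the given manifest files."""
    config.load_kube_config(config_file=kubeconfig)  # assumed admin credentials
    api_client = client.ApiClient()
    for path in manifest_paths:
        # create_from_yaml reads the file and creates each declared resource
        utils.create_from_yaml(api_client, path)


if __name__ == "__main__":
    # Hypothetical manifests: one for the control plane, one for a workload
    apply_manifests(["openstack-control-plane.yaml", "edge-workload.yaml"])
```

Because the manifests, rather than the operator, describe the end state, the same files can in principle be replayed across thousands of sites.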

Airship will form part of the Akraino stack, managing software provisioning as just one part of the automation process. It draws on OpenStack-Helm, another project for deploying OpenStack and its services on Kubernetes, and on Helm itself, the package manager that has just unhooked itself from Kubernetes' apron strings and been accepted as a standalone project by the Cloud Native Computing Foundation.

Intel and Wind River have also submitted an edge-related project upstream to OpenStack. Called StarlingX, this project is a hardened cloud infrastructure software stack for managing low-latency edge applications, with a focus on high availability. It also plugs into Akraino.

Play nicely

If nothing else, these developments show that vendors and operators are serious about building a cross-vendor framework for edge functionality within OpenStack. Clark reckons Verizon, AT&T, Intel and Wind River are driving these developments, and open source along with them, out of commercial need.

"There is an organics in open-source software and it has a lot to do with projects living and dying based on interest," he says. "There's a lot of socialization that goes on."

When it comes to edge computing, this socialisation will have to extend into other industry associations if it is to work properly. There are just too many moving parts to make it a single-stack project.

"Lot of tools must be developed, because you're adding that complexity of latency and intermittent connections," says Cohen. Expect a variety of organisations to participate in the edge initiative alongside the Linux Foundation, such as OpenNFV, the Open Networking Foundation, the Metro Ethernet Forum, and others.

As OpenStack grows beyond its original mandate, managing complexity will be a key factor.

"From the beginning, they architected OpenStack so that you could break it down into small pieces," says Clark. "Even with something like [OpenStack's core compute project] Nova, which is a big chunk, they keep trying to break pieces out so that you can manage them separately."

As OpenStack grows, its Foundation will create more top-level projects independent of the original OpenStack project. So far it has two. The first is Kata Containers, its secure VM-based container project, which announced version 1.0 during the recent Vancouver summit. It merges Intel's Clear Containers with the open-source hypervisor runtime runV, supporting the Linux Foundation's Open Container Initiative (OCI).

The other is Zuul, the CI gating and testing tool that has underpinned OpenStack's own development for years, and which became a full-fledged top-level project in its own right at the summit.

As OpenStack continues to tackle more technologies and bring them into the fold, it will no doubt encounter some speed bumps along the way. Sometimes the cultural differences are painfully obvious. Foundation execs painted a picture of a diverse, value-filled open infrastructure on stage when opening the summit. Directly afterwards, Canonical CEO Mark Shuttleworth argued that the cloud was commoditising and spent most of his keynote slot ripping into VMware and Red Hat with direct price comparisons. Everyone has their own agenda in open source, and sometimes it shows in embarrassing ways.

The next version of OpenStack after Queens is Rocky, which will drop at the end of August. Expect a range of improvements including support for functions as a service in the form of Qinling, as OpenStack continues to build in support for new approaches to cloud-based computing.

The Foundation wants to embrace new initiatives outside of the core project. It has its work cut out. Not in starting projects, but in combining them. ®
