At the edge, nobody can hear your IoT devices scream …

Red Hat’s approach to locking down remote industrial networks and data processing facilities

Sponsored Feature If you've ever wondered what edge computing looks like in action, you could do worse than study the orbiting multi-dimensional challenge that is the multi-agency International Space Station (ISS).

It's not exactly news that communication and computing are difficult in a physically isolated environment circling 400km above the earth, but every year scientists give it new and more complex scientific tasks to justify its existence. That quickly becomes a challenge: latency is always high, and data from sensors can take minutes to reach earth, slowing decision making on any task to a crawl.

It's why the ISS has been designed with enough computing power onboard to survive these time lags and operate in isolation, complete with the processing and machine learning power to crunch data onboard. This is edge computing at its most daring, dangerous, and scientifically important. Although the ISS might sound like an extreme example, it is by no means alone. The problem of having enough computing power in the right place is becoming fundamental to a growing number of organizations, affecting everything from manufacturing to utilities and cities.

The idea that the edge matters is based on a simple observation: the only way to maintain performance, management and security in modern networking is to move applications and services closer to the problem, away from a notional datacenter. Where traditional networks concentrate computing power in centralized datacenters, edge computing moves processing and applications to multiple locations close to users and to where data is generated. The datacenter still exists but becomes only one part of a much larger distributed system working as a single entity.

The model sounds simple enough, but it comes with a catch – moving processing power to the edge must be achieved without losing the centralized management and control on which security and compliance depends.

"Whatever organizations are doing, they want the data and service to be closer to the customer or problem that needs solving," says Ian Hood, chief strategist at Red Hat. Red Hat's Enterprise Linux and Red Hat's OpenShift ContainerPlatform local container platform is used by the ISS to support the small, highly portable cross-platform applications running on the onboard HP Spaceborne Computer-2.

"It's about creating a better service by processing the data at the edge rather than waiting for it to be centralized in the datacenter or public cloud.," continues Hood. The edge is being promoted as the solution for service providers and enterprises, but he believes that it's in industrial applications that the concept is having the biggest immediate impact.

"This sector has a lot of proprietary IoT and industrial automation at the edge but it's not very easy for them to manage. Now they're evolving the application they got from equipment makers such as ABB, Bosch, or Siemens to run on a mainstream compute platform."

Hood calls this the industrial 'device edge', an incarnation of edge computing in which large numbers of devices are connected directly to local computing resources rather than having to backhaul traffic to distant datacenters. In Red Hat's OpenShift architecture, this is accommodated in three configurations depending on the amount of compute power and resilience needed:

- A three-node RHEL 'compact' cluster comprising three servers that act as both control plane and worker nodes. Designed for high availability and for sites that might have intermittent or low-bandwidth connectivity.

- A single-node edge server: the same technology scaled down to one server, which can keep running even if connectivity fails.

- A remote worker topology featuring a control plane at a regional datacenter with worker nodes across edge sites. Best suited to environments with stable connectivity; three-node clusters can also be deployed as the control plane in this configuration.

The common thread in all of these is that customers end up with a Kubernetes infrastructure that distributes application clusters to as many edge environments as they desire.
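Whichever topology is chosen, day-to-day operations come down to standard Kubernetes tooling. As a minimal sketch, assuming only a valid kubeconfig for such a cluster, the following Python snippet shows how an operator might inventory which nodes are acting as the control plane and which are edge workers; the site label used to group edge locations is a hypothetical example, not an OpenShift convention.

```python
# Sketch: list control-plane vs. worker nodes in an OpenShift/Kubernetes cluster
# using the official Kubernetes Python client. Assumes a valid kubeconfig; the
# "topology.example.com/site" label is a hypothetical way to group edge sites.
from kubernetes import client, config

def list_nodes_by_role():
    config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        # Kubernetes marks control-plane nodes with these well-known role labels.
        if ("node-role.kubernetes.io/control-plane" in labels
                or "node-role.kubernetes.io/master" in labels):
            role = "control-plane"
        else:
            role = "worker"
        site = labels.get("topology.example.com/site", "unknown")  # hypothetical label
        print(f"{node.metadata.name:30} role={role:14} site={site}")

if __name__ == "__main__":
    list_nodes_by_role()
```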

Beyond the datacenter

Hood says the challenge of edge computing begins with the fact that the devices themselves are exposed on several levels. Because they are located remotely, they are physically vulnerable to tampering and unauthorized access at the deployment site, for example, which could lead to a loss of control or downtime.

"Let's say the customer deploys the edge compute in a public area where someone can access it. That means if someone walks away with it, the system must shut itself down and erase itself. These servers are not in a secured datacenter."

Hitherto, system makers have rarely had to think about this dimension beyond the specialized realm of kiosks, point-of-sale terminals, and bank ATMs. With edge computing and industrial applications, however, it suddenly becomes a mainstream worry. If something goes wrong, the server is on its own.

Because these devices do their job out of sight in remote locations, it's also easy to lose track of their software state. Industrial operational technology teams must be able to verify that servers and devices are receiving the correct, signed system images and updates, while ensuring that communication between the devices and the management center is fully encrypted.
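As a minimal sketch of the signed-update idea, and not Red Hat's actual update mechanism, the snippet below verifies a downloaded image against a detached RSA signature using a vendor public key before it is applied; the file names and key are hypothetical.

```python
# Sketch: check a system image against a detached RSA signature before applying
# it at the edge. File names and the vendor public key are hypothetical; real
# OSTree/OpenShift update flows handle signature verification automatically.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_image(image_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the detached signature matches the image bytes."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        # Assumes an RSA key and a signature over the raw image bytes,
        # using PKCS#1 v1.5 padding and SHA-256.
        public_key.verify(signature, image_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    ok = verify_image("edge-image.raw", "edge-image.raw.sig", "vendor-pub.pem")
    print("image signature valid" if ok else "REJECT: signature check failed")
```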


Other potential security risks associated with edge computing are harder to quantify, given that the vulnerability extends to every element of the system. You could call this edge computing's mental block: admins find themselves migrating from managing a single big problem to a myriad of smaller ones they can't always keep an eye on.

"The risks start in the hardware platform itself. Then you need to consider the operating system and ask whether it's properly secured. Finally, you must make sure the application code you are using has come from a secure registry where it has been vetted or from a secure third party using the same process."

The biggest worry is simply that the proliferation of devices makes it more likely that an edge device will be misconfigured or left unpatched, punching small holes in the network. An employee could configure containers with excessive privileges or root access, or allow unrestricted communication between them, for example.
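A minimal sketch of how such misconfigurations can be caught, assuming cluster access via a standard kubeconfig: the snippet below uses the Kubernetes Python client to flag pods whose containers run privileged or allow privilege escalation. Which namespaces to scan, and what to do with the findings, is left to the operator.

```python
# Sketch: flag pods that run privileged containers or allow privilege escalation,
# one of the misconfigurations described above. Assumes a valid kubeconfig.
from kubernetes import client, config

def find_risky_pods():
    """Print pods whose containers run privileged or allow privilege escalation."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            sc = container.security_context
            if sc and (sc.privileged or sc.allow_privilege_escalation):
                print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                      f"container '{container.name}' runs with elevated privileges")

if __name__ == "__main__":
    find_risky_pods()
```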

"Today, most customers still rely on multiple management platforms and proprietary systems. This forces them to use multiple tools and automation to set up edge servers."

Red Hat's answer to this issue is the Ansible Automation Platform, which makes it possible to build repeatable processes across all environments, from the central cloud or datacenter to edge devices. This unified approach benefits every aspect of the way edge servers and devices are managed, from setup and provisioning of the OS to patching, compliance routines, and security policies. It's hard to imagine how industrial edge computing could work without such a platform, but Hood says that organizations today often take a DIY approach.

"If they're not using a tool like Ansible, they'll revert to scripts, hands on keyboards, and multiple OS management systems. And different departments within an organization own different parts of this, for example the division between the IT side and the operations people looking after the industrial systems."

For Hood, migrating to an edge computing model is about choosing a single, consistent application development and deployment platform that ticks every box, from the software and firmware stack managed by the OS to the applications, communication, and deployment systems built on top of it.

"The approach organizations need to take whether they use Red Hat OpenShift or not is that the deployment of infrastructure needs to be a software-driven process that doesn't require a person to configure it. If it's not OpenShift you'll likely find that it's a proprietary solution to this problem."

The Swiss Federal Railways IoT Network

Another Red Hat implementation Hood cites involves a partnership with Swiss Federal Railways (SBB), a transport company deploying a growing family of digital services for its 1.25 million daily passengers and its world-famous timetable, on which no train must ever run late. Connected components include onboard technology such as LED information displays, seat booking technology, Wi-Fi access points, and CCTV and collision detection systems for safety monitoring.

This large, complex network of devices comprises multiple proprietary interfaces and management routines. Latency quickly became an issue, as did the manual workload of looking after numerous sensors and devices for a workforce that already has its hands full with trains, signaling, and tracks.

SBB turned to Red Hat's Ansible automation, which has allowed the operator to manage IoT devices and edge servers centrally without having to send technicians to visit each train and edge server one at a time. Through Ansible, SBB was also able to get on top of the problem of exposing too many SSH keys and passwords to employees by centralizing those credentials for automated use. What SBB couldn't contemplate, says Hood, was lowering its management overhead at the expense of making the security infrastructure more cumbersome and potentially less secure.

In Hood's view, SBB demonstrates that it's possible for a company with a complex device base to embrace edge computing without inadvertently creating a new level of vulnerability on top of the problems of everyday cybersecurity defense. As Hood observes:

"Edge computing is just another place for attackers to go. If you leave the door open someone is guaranteed to walk through it eventually."

Learn more about Red Hat's approach to edge computing and security here.

Sponsored by Red Hat.
