WTF is 'Computing First Networking'? Think load balancers for the age of edge
The 'pre-process data on the edge' idea turns out not to be that simple
A new buzzphrase crossed your correspondent's desk: "Computing First Networking."
WTF is it?
According to a brief from analyst firm Gartner, it's "an emerging networking solution that provides services for computational tasks by sharing computing resources from multiple edge sites" and "a new type of decentralized computing solution that optimizes the efficiency of edge computing."
But why does edge computing need greater efficiency? For the last handful of years, we've been told that edge solves whatever ails you by moving compute closer to where data is created so you can work on that data - then act on the outcomes of that work at the edge - without having to schlep data into a cloud or data centre for processing. Or pay for that schlepping.
Reality check: Gartner reckons compute resources at the edge might not have enough capacity to handle all the work they're asked to perform, so you need to find other resources to do the job. Perhaps even other resources on the edge.
At this point cunning Reg readers will probably have thought to themselves that assigning workloads to available resources is just the sort of job that load balancers can perform.
Sorry. Gartner reckons load balancers weren't built to work with edgey resources, because those resources often run containers and serverless workloads, meaning utilisation rates change very quickly in the "MECs" – the Multi-access Edge Computing sites that house a router, servers, power, and cooling – out on the edge.
Enter Computing First Networking (CFN), which Gartner's Owen Chen describes as "a new type of decentralized computing solution [that] leverages both computing and networking status to help determine the optimal edge among multiple edge sites with different geographic locations to serve a specific edge computing request."
CFN does this using dynamic anycast (dyncast), which Gartner describes as "a distributed technique that follows the resource pooling idea to dispatch service requests to the optimal [edge] site in a dynamic manner."
"Instead of measuring the overall metrics such as CPU/GPU/memory usage of an MEC site, dyncast leverages a compute-aware module called a station daemon to acquire compute status at application granularity. This helps calculate a compute metric that reflects the workload of a certain application deployed on the MEC site."
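Gartner's description amounts to a scheduler that weighs a per-application compute metric against network distance when picking a site, rather than choosing the nearest site (plain anycast) or the least-loaded box (a classic load balancer). A minimal Python sketch of that idea – all names, fields, and weightings here are illustrative assumptions, not dyncast's actual design:

```python
from dataclasses import dataclass


@dataclass
class MecSite:
    name: str
    # Per-application compute metric, as a "station daemon" might
    # report it: 0.0 (idle) to 1.0 (saturated) for this workload.
    compute_load: float
    # Network cost to reach the site from the requester, e.g. a
    # normalised round-trip latency.
    network_cost: float


def choose_site(sites, compute_weight=0.6, network_weight=0.4):
    """Pick the site with the lowest combined compute-plus-network
    score for a given application's request."""
    return min(
        sites,
        key=lambda s: compute_weight * s.compute_load
        + network_weight * s.network_cost,
    )


sites = [
    MecSite("highway-mec", compute_load=0.9, network_cost=0.1),
    MecSite("cbd-mec", compute_load=0.2, network_cost=0.3),
]
best = choose_site(sites)
# Picks cbd-mec: the link is costlier, but the site has far more headroom.
```

The point of the sketch is the trade-off: a pure-anycast dispatcher would always pick the closer, saturated highway site, while a compute-aware one is willing to pay a little network cost for a lot of spare capacity.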
Gartner offers the example of enabling autonomous cars as one application for CFN.
The firm imagines MEC sites taking on the job of collecting and processing traffic information so that driverless cars know what to expect on the road, but they could also stream video into cars to entertain their occupants. Analysing traffic is clearly the more important of those jobs, so Gartner imagines CFN finding other, less busy MECs that can be handed the video-streaming work.
And that resource will probably be there for the taking, because in a well-wired city there'll be MECs on plenty of 5G base stations. At the end of the working day, when CBDs and industrial parks empty out, the MECs there should be ready to take on some work while the MECs next to highways are maxed out.
CFN is popping up in papers at wonkish conferences but didn't figure on Gartner's 2021 Hype Cycle for edge computing. So it's probably not something you'll need to deploy in a hurry. It is, however, a sign that doing edge well won't be as simple as deploying the edge-centric servers and software overlays announced every other week in recent times. ®