5G is not just a radio: Welcome to the fibre-tastic new mobile world

To succeed, operators need fibre, NFV, legacy kit and radio

Analysis When an executive from Nokia, of all companies, said 5G was as much about fibre as wireless, it was clear this was going to be different from previous mobile standards generations. 5G will not be driven by mobile broadband speeds as 4G was.

If higher data rates and larger numbers of broadband devices are a mobile network operator's only goals, it will be better to stick with LTE for many years to come, especially with Gigabit LTE becoming a reality (and the operator can always label it "5G" anyway). The superior efficiencies of the 5G New Radio will not outweigh the cost of deploying a brand new network, except for a greenfield company.

The full benefits of 5G will only be achieved with a full architecture change, not just a radio upgrade – and many of those changes, such as virtualization, can be phased in gradually while the LTE radio stays in place. Adding the 5G New Radio in at some point will certainly create a powerful combination – the radio and the architecture will bring out the best in one another in performance and cost-efficiency terms.

But that will only be worth the expense and disruption if there are new revenues, because mobile broadband – as Finnish operator DNA made clear in an interview with Mobile Europe – will not deliver sufficient revenue upside to justify 5G. All that means that operators should not fixate on New Radio – LTE enhancements, combined with virtualization and strong investment in fibre, can get them a long way to their goals.

That shift of emphasis away from the radio access network (RAN) standard, and towards backhaul/fronthaul and virtualization, means the balance of power is changing too, and those suppliers which have dominated the value chain thanks to their radio expertise will have to ensure they can also assert control in other areas, from convergence to orchestration.

SK Telecom joins the battle to drive NFV interoperability and control

For the first time, the radio is not the heart and soul of the mobile network. Once, it was the radio where interoperability was essential, where vendors gained their competitive edge and customer lock-in, where operators differentiated their services.

Now, as telco networks become increasingly software-driven, that centre of control and industry power has shifted to the software which manages and coordinates all the applications and virtualized network functions (VNFs).

SK Telecom of South Korea is the latest in a series of operators seeking to establish their influence over this vital element, by creating in-house technologies to ensure multi-vendor interoperability and future-proofing, and by extending their overall ecosystem power by opening their developments to other carriers. The company’s T-MANO joins AT&T’s ECOMP and China Mobile’s OPEN-O (now merged to form the Open Network Automation Platform or ONAP), and developments from Telefonica, NTT Docomo and others, as a potential de facto standard.

Open MANO is a key enabler of 5G business case

MANO (management and orchestration) is the brain of the virtualized network, and operators are battling to seize leadership in this area away from the major vendors, and even to establish themselves as standards setters for the entire industry. For the full vision of 5G to be realized, it will be essential for individual operators’ virtualized networks to interoperate and to be able to be orchestrated in a unified way. Network slicing will only deliver on its potential to drive new economics and services if those slices can be cut from a huge, flexible, ever-shifting pool of capacity; implemented solely within individual operators’ networks, its impact will be far lower. An open, standard orchestrator is essential to make the vision into a reality.
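The economic argument for slicing a single shared pool rather than fragmented per-operator pools can be illustrated with a toy sketch. Everything here (class names, capacity figures, slice names) is hypothetical, not drawn from any MANO specification:

```python
# Hypothetical sketch: network slices carved from a shared capacity pool.
# All names and numbers are illustrative, not from any MANO standard.

class CapacityPool:
    """A flexible pool of capacity from which slices are allocated."""

    def __init__(self, total_gbps):
        self.total_gbps = total_gbps
        self.allocated = {}  # slice name -> Gbps reserved

    @property
    def free_gbps(self):
        return self.total_gbps - sum(self.allocated.values())

    def carve_slice(self, name, gbps):
        """Reserve capacity for a slice; refuse if the pool is exhausted."""
        if gbps > self.free_gbps:
            return False
        self.allocated[name] = gbps
        return True

    def release_slice(self, name):
        """Return a slice's capacity to the pool for reuse."""
        self.allocated.pop(name, None)


# One large shared pool can absorb a demand that two fragmented
# half-size pools cannot: a 60 Gbps slice fits in a single 100 Gbps
# pool, but in neither of two isolated 50 Gbps pools.
shared = CapacityPool(100)
assert shared.carve_slice("massive-iot", 60)

fragmented = [CapacityPool(50), CapacityPool(50)]
assert not any(p.carve_slice("massive-iot", 60) for p in fragmented)
```

The same arithmetic is why the article argues slicing pays off only across a unified, orchestrated pool.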

That orchestrator may evolve from a conventional standards effort like ETSI’s Open Source MANO (OSM) or an open source initiative like the Linux Foundation’s ONAP. The two approaches may converge, and will certainly feed into one another, but in political terms, they are at loggerheads. The type of organization which ends up in the leadership role for this vital piece of software will help decide whether 5G will be largely an open source platform or a familiar standards body-driven one.

The economics of that matter to operators. Open source can drive down capital expenditure, accelerate adoption and broaden innovation, but it will almost certainly drive up operating expenses, as mobile network operators will have to devote considerable in-house effort to developing and deploying optimized solutions based on the raw open source foundation – or outsource that effort to vendors, old or new. A standards body solution comes with greater harmonization but will tend to be supplied and deployed more heavily by established vendors, swinging the pendulum towards capital expenditure and threatening vendor lock-in again.

That lock-in issue is key to the economics of 5G. Operators are keen to reassert their own control of their networks by ensuring that they define the vital underpinnings of virtual platforms – such as MANO, or the interface between physical cells and virtual basebands – rather than the vendors. That will enable them to swap kit and software from multiple vendors in and out of their networks simply, and to use open source or startup offerings without the risk that usually works against those solutions.

The investments by Orange, BT, AT&T and others, in startups which could challenge the power of the major equipment providers, show how keen operators are to have a broader, more competitive supply chain. In many cases – as epitomized by AT&T’s Domain 2.0 effort to build a new supply chain around software-defined networking (SDN) – virtualization and SDN will be the triggers for this.

It will also help avoid the situation in which the industry found itself with CPRI – officially a standard interface, but one that was driven by vendors. Each supplier has implemented CPRI specifications slightly differently, so that the basebands and remote radio heads which the interface connects, cannot be mixed and matched easily. But will operator-driven interfaces be equally prone to fragmentation?

SKT’s T-MANO follows a path plotted by Docomo

Initially, then, SK Telecom’s T-MANO was an in-house effort to establish its own APIs (application programming interfaces) so that it could use a common MANO approach for all its virtualized network elements, and introduce multiple vendors to the mix. This echoes what NTT Docomo described back in March 2016 when it announced the first commercial deployment of a multi-vendor, interoperable virtualized EPC (evolved packet core).
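The pattern behind such efforts is to hide each vendor's proprietary interface behind one operator-defined API, so a single orchestrator can drive any supplier's kit. A minimal sketch of that idea follows; all class and method names are hypothetical, not SKT's actual T-MANO interfaces:

```python
# Hypothetical sketch of an operator-defined MANO API: one common
# lifecycle interface, with a thin adapter per vendor. Names are
# illustrative only.
from abc import ABC, abstractmethod


class VnfLifecycle(ABC):
    """The operator's common API every vendor adapter must implement."""

    @abstractmethod
    def instantiate(self, vnf_name: str) -> str: ...

    @abstractmethod
    def terminate(self, instance_id: str) -> None: ...


class VendorAAdapter(VnfLifecycle):
    """Translates the common API into vendor A's proprietary calls."""

    def __init__(self):
        self.instances = {}

    def instantiate(self, vnf_name):
        instance_id = f"a-{len(self.instances)}"
        self.instances[instance_id] = vnf_name  # stand-in for vendor A's API
        return instance_id

    def terminate(self, instance_id):
        del self.instances[instance_id]


class Orchestrator:
    """One MANO platform drives any vendor through the same interface."""

    def __init__(self, adapter: VnfLifecycle):
        self.adapter = adapter

    def deploy(self, vnf_name):
        return self.adapter.instantiate(vnf_name)


orchestrator = Orchestrator(VendorAAdapter())
vepc = orchestrator.deploy("vEPC")
```

Before T-MANO, SKT effectively needed one orchestrator per vendor; the adapter pattern above is what lets a single platform absorb new suppliers.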

The operator’s CTO, Seizo Onoe, had said just months before that multi-vendor NFV technology was “regarded as pie in the sky”. So it was seen as a major breakthrough when he provided details of Docomo’s multi-vendor NFV plans and initial suppliers.

“Many NFV technologies already deployed still rely on single vendor, so we expect this truly multi-vendor NFV technology will be a long-awaited game changer in the mobile industry’s ecosystem,” Onoe said at the time. Docomo’s approach is to choose vendors in different areas on the condition that they ensure interoperability with their rivals’ systems – an approach, now emulated by SKT, which could hasten the development of interfaces and models that might be replicated round the world or included in future standards.

In Docomo’s deployment, Ericsson’s Cloud Execution Environment (CEE) – based on the OpenStack open source technology for orchestrating virtual functions – is the integration and cloud management platform for all the NFV functions from Ericsson and other vendors. The Swedish vendor claims it can interwork with any carrier-class virtualized network function and SDN on the market.

The other vendors involved in the first stages were Cisco and NEC, the former supporting SDN-based automation of the VNFs with its Application Centric Infrastructure (ACI); the latter providing the first system to be virtualized, the vEPC, along with the VNF Manager (from its Netcracker subsidiary).

Docomo is initially deploying a vEPC in order to be able to increase capacity and maintain connectivity during spikes in activity, or in the event of disasters or outages, though other use cases, and a complete transition to a virtualized approach, will follow in future. It is virtualizing LTE EPC functions including the Mobility Management Entity (MME), the Serving Gateway (S-GW), and the Packet Data Network-Gateway (PGW). Some vEPC providers, like Affirmed Networks, can disaggregate their VNFs so operators can select different components of a vEPC from different suppliers. In Docomo’s case, NEC will provide the whole platform.
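The disaggregation option mentioned above amounts to treating the EPC as a parts list: each function (MME, S-GW, P-GW) can be sourced independently. A small sketch makes the distinction between Docomo's single-supplier platform and a mixed one concrete; the supplier names and helper function are hypothetical:

```python
# Hypothetical sketch of a disaggregated vEPC: each EPC function may
# come from a different supplier. Supplier names are illustrative.

EPC_FUNCTIONS = ("MME", "S-GW", "P-GW")


def build_vepc(supplier_choices):
    """Validate a per-function supplier selection and return the bill
    of materials; raise if any EPC function is left unassigned."""
    missing = [f for f in EPC_FUNCTIONS if f not in supplier_choices]
    if missing:
        raise ValueError(f"unassigned EPC functions: {missing}")
    return {f: supplier_choices[f] for f in EPC_FUNCTIONS}


# A single-supplier deployment, as in Docomo's NEC-provided platform...
single = build_vepc({f: "NEC" for f in EPC_FUNCTIONS})

# ...versus a mixed selection, as disaggregated vEPC suppliers allow.
mixed = build_vepc({"MME": "supplier-x", "S-GW": "supplier-y",
                    "P-GW": "supplier-x"})
```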

T-MANO aims to define open APIs

Like Docomo, SKT had struggled to make a multi-vendor environment viable. It said that, before it developed T-MANO, it had to build and operate a separate management and orchestration platform for each vendor of NFV equipment. Despite vendor assurances that they would support open interfaces, in practice these have been immature or poorly specified, and each supplier’s specs have differed, forcing operators to incur additional cost and complexity by supporting multiple MANO systems.

Clearly, this cancels out some of the biggest supposed benefits of virtualization – flexibility, multi-vendor interoperability, dramatically lower operating costs. SKT, like NTT Docomo and others, refuses to see the potential of the NFV technologies – which these operators have helped to pioneer – squandered so early in the game.

Now, its suppliers will have to support its APIs if they want to be included in its aggressive deployments of virtualized and 5G networks over the coming few years.

SK Telecom said it will apply T-MANO first to its virtualized VoLTE (voice over LTE) routers and then expand to the virtualized LTE EPC and MMS server. From 2019, it will only deploy virtualized EPC, and like NTT Docomo, will insist that any supplier supports its APIs. It is also extending its NFV efforts to other areas of the network, and presumably T-MANO will follow over time. Unlike most operators, including AT&T, SKT does not see the RAN as the last and most challenging network function to be virtualized. It started introducing NFV to some base stations in 2016 as part of its Cloud-RAN project.

“With the commercialization of T-MANO, SK Telecom secures the basis for accelerating the application of NFV technologies to provide better services for customers,” said Choi Seungwon, head of the operator’s Infrastructure Strategy Office, in a statement. “We will continue to develop NFV technologies and accumulate operational know-how for virtualized networks to thoroughly prepare for the upcoming era of 5G.”

Now the Korean company is going a step further and opening up its APIs for other operators to use, a move which echoes AT&T’s with ECOMP. SK has not yet said anything about a potential standards effort, but it did emphasize that T-MANO is based on the specifications set by ETSI, so it seems closer to the OSM approach than to the Linux Foundation and ONAP.

Operators compete to drive standards

Other operators, notably Telefonica, already have their technology in the heart of OSM, but the commercial readiness of T-MANO will count for a lot. It was the fact that AT&T had deployed ECOMP itself, with measurable results, that ensured it the leading role in ONAP, while China Mobile’s less proven OPEN-O took a subordinate role. The same could happen if T-MANO were embraced by ETSI OSM, reducing the influence of Telefonica.

Already, two elements of the technology are included in the ETSI specs, said SKT. The operator had already commercialized an NFV system orchestrator based on ETSI standards, named T-OVEN, back in 2015.

This is where the fragmentation risk is most obvious – if a converged approach cannot be found between the very different approaches and industry cultures of ETSI and ONAP (not to mention other vendor-specific or open source initiatives now under development in MANO).

ONAP has an impressive list of mobile network operator supporters already – Orange is the most advanced deployer, and there is also support from Bell Canada, China Mobile, China Telecom, China Unicom, Veon and Reliance Jio. Amdocs, which helped develop ECOMP, will hope that it will be in pole position to help such operators with the challenges of deploying solutions based on open source technology, with a range of ECOMP-related services.

On the ETSI OSM side, operator backers include SKT itself, Telefonica, BT, Telenor and Sprint.

Other major carriers are still on the fence. Verizon’s VP of global technology and supplier strategy, Srinivasa Kalapala, told Light Reading in March that his firm was doing due diligence on both MANO options, but has concerns about both. On the OSM side, he questions whether the technology is more than a VNF orchestrator, rather than a full service management platform; while “the concern we have with ONAP is whether it is truly open. How many groups are contributing?”.

ETSI vs ONAP could split the platform

OpenStack, which is central to ONAP, is at the heart of the telco dilemma over open source. The platform provides a simpler approach to orchestration than the full ETSI MANO framework, and should enable operators to embark on virtualization with lower cost and faster time to market. However, some operators believe it is too flimsy for the heavy demands of a telco network and requires too much in-house development.

But the former arguments are winning out, at least for the first wave of implementations. Carriers may want to add more functionality later, but the interest in OpenStack reflects the overall drive to accelerate progress. This brings with it a new attitude to standards organizations, with OpenStack, ONF and OpenDaylight rising in their influence over carriers, while the power of traditional bodies like ATIS, TIA, ITU-T and the TM Forum is waning.

AT&T argues that ECOMP has gone beyond what ETSI offers, with its model-based approach that can be adapted to any set of capabilities according to the operator’s need. But those advances could be fed back into ETSI, especially if that body becomes convinced of the need for a standard model-based approach too, something it has resisted so far.

The model approach is designed to simplify the process of virtualization and orchestration. Network engineers design services and set policies – using tools which AT&T has also open sourced, in the Service Design and Create portion of ECOMP – and then those services and policies are attached to the model so that operations can be automated.
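The separation the article describes – engineers design the model and attach policies, then operations run automatically – can be sketched in miniature. This is an illustration of the general model-driven idea, not ECOMP's actual schema; all names and thresholds are hypothetical:

```python
# Hypothetical sketch of model-driven automation: policies are attached
# to a service model, and a generic operations loop enforces them
# without per-service code. Names are illustrative, not ECOMP's schema.

def scale_up_policy(service):
    """If load exceeds the threshold in the model, add an instance."""
    if service["load"] > service["model"]["max_load"]:
        service["instances"] += 1


def build_service(model):
    """Instantiate a service from its designed model."""
    return {"model": model, "instances": model["min_instances"], "load": 0.0}


def automate(service, policies):
    """Generic operations loop: apply every policy attached to the model."""
    for policy in policies:
        policy(service)


# Engineers design the model and attach policies; operations run hands-off.
volte = build_service({"min_instances": 2, "max_load": 0.8})
volte["load"] = 0.9          # simulated traffic spike
automate(volte, [scale_up_policy])
```

The point of the pattern is that `automate` never changes: new services and behaviours come from new models and policies, which is the simplification AT&T claims for its model-based approach.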

Copyright © 2017, Wireless Watch

Wireless Watch is published by Rethink Research, a London-based IT publishing and consulting firm. This weekly newsletter delivers in-depth analysis and market research of mobile and wireless for business. Subscription details are here.

