NetworldEurope Strategic Research and Innovation Agenda (SRIA)

NetworldEurope is the new incarnation of the European Technology Platform (ETP) for communications networks and services. It succeeds NetWorld2020, following the changing European policy landscape set out in Horizon Europe.

In December 2022, the NetworldEurope Steering Board approved the Strategic Research and Innovation Agenda (SRIA) 2022 for publication. This was the result of a major effort by the Expert Group, involving around 200 experts from almost 100 institutions. Their announcement said:

This SRIA is divided into two large, related parts. The main body (referred to, for simplicity, as the “whitepaper”) presents a simplified, higher-level vision of our technological roadmap, with an additional in-depth section providing simplified metrics tables that identify reference specifications for the technology at different points in time (referred to as nodes) and the technological features expected at those nodes. The second part, the technical annex, provides a deeper discussion of the technologies we envisage as key for the future, under the overall scope of ICT. The diversity of the technological domains represented there, all required for future communication infrastructures, underlines the relevance of these chapters for many initiatives in European research, from optics to satellite, and NetworldEurope will actively discuss these views with all interested stakeholders in the future. For simplicity, the whitepaper includes a summarized version of the technical annex.

The following is an excerpt from the SRIA whitepaper:

With more and more intelligence and computing power available per resource, the resources of these future systems, dynamically configurable and orchestratable (i.e., reprogrammable at runtime), need not be limited to particular predefined roles: they can be used both to deploy and support new services (network as well as end-user services) and to better match the requirements of services running over the infrastructure. Consequently, unlike 5G, 6G will not only be more flexible in both its services and its realization, but will also exhibit much higher dynamics, in service types and loads as well as in its own topology. With these higher dynamics and the seamless co-existence of virtual and physical entities, the currently physically separate islands of 5G and prior systems will often overlap in resources in 6G. This applies to the different domains of a single network (terminal/RAN/core), to several networks (e.g., run by different MNOs), and to entirely different systems (mobile networks and clouds, mobile networks and NTN systems, etc.).

Offered this large variety of novel, challenging ICT services, a massive number of devices will be served by these systems, generating, exchanging and processing very large quantities of data. The infrastructure that supports society (IoT, cyber-physical systems) will be integrated with the Internet, improving the effectiveness and efficiency of both. Useful insights can be generated from the automatic analysis of all that data, e.g., using machine learning (ML) and artificial intelligence (AI) methods. Beyond analysis, AI/ML can also be used to optimize deployment, adaptation, reconfiguration and other decisions, or to create better-suited system modularizations and novel entities better matched to the overall required processing. Hence, it is paramount to approach AI/ML systemically and correctly assess the relevant trade-offs: AI/ML instrumentations per se require massive data transfers, are computation-intensive and, ultimately, consume massive amounts of energy. Relying on siloed solutions and dedicated implementations limits the usefulness of AI/ML while increasing both its costs (resources) and the cybersecurity risks (attack surface).

The postulates above imply that future network technology will have to support the general Internet economy and the particular needs of cyber-physical infrastructure, such as that of the production industry, alike. It will have to work with virtual and remote objects whose density, distribution, longevity and interconnection can vary greatly in any area, including remote areas, the sea, the skies and space orbits. It will have to integrate local and remote objects and different connectivity modes seamlessly, across a diversity of connectivity technologies. It will have to handle its own constituent nodes and services of transient nature, which can disappear and reappear, possibly at a different location and in zero time, or be multiplied and shrunk without notice. At the same time, this future network will be expected to operate as a facility: it will be relied upon by private users, businesses, critical sectors and governments. Therefore, it will have to be resilient to failures, operational errors and security threats alike, in a world where autonomic operations for both services and infrastructures, and in particular AI/ML techniques, will be widely used. Open standards will be required, while governments will want to impose limits and regulations on operations over all the data required to drive these new systems. In this context, overcoming the digital divide will be a key driver for technology evolution, and personal freedom and rights will need to be assured across all media.

Here, reliable and trusted flexible provisioning and elastic execution on a dynamic, changing resource pool emerge as key challenges for the future system architecture. Flexible provisioning refers to the generality of the infrastructure and its capability to onboard and execute essentially any ICT service. This generality, as opposed to reliance on service-dedicated components, is important to increase the infrastructure's sustainability over time and its degrees of freedom for multiplexing gains. Execution elasticity refers to efficient adaptation before, during and after execution, i.e., in particular at runtime, and supports the selection of the best-suited links, modules and more complex components to preserve the expected service properties while limiting overprovisioning. In particular, elasticity, as the capability of adjusting the resources used in service execution, is key to enabling truly green networking, as it allows requests to be redirected to resources with better ecological sustainability and limits the overall resource footprint while preserving service throughput. Given the resource mix, we must assume that elasticity and flexibility also apply to infrastructure resources, which can be as varied as data centers, edge nodes, flying platforms or satellite computers. Hence, working with individual resources is limiting and not sustainable; rather, allocations and executions should refer to the resource pool as a whole. This in turn requires pervasive, resilient resource control, since without trust and reliability the whole infrastructure cannot fulfill the expectations of either individual stakeholders (providers and consumers) or society at large.
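The green-networking aspect of elasticity described above can be illustrated with a minimal sketch. All names and metrics below (`Resource`, `carbon_intensity`, the 0.7 weighting) are hypothetical illustrations, not part of the SRIA: each incoming request is placed on the pool member with the lowest weighted mix of carbon intensity and relative load, so that low-carbon resources absorb traffic first while saturated nodes are skipped.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity: int            # max concurrent requests
    load: int                # requests currently running
    carbon_intensity: float  # assumed normalized gCO2-per-work metric

def place_request(pool, carbon_weight=0.7):
    """Pick the resource minimizing a weighted mix of carbon intensity
    and relative load; returns None if the whole pool is saturated."""
    candidates = [r for r in pool if r.load < r.capacity]
    if not candidates:
        return None
    def cost(r):
        return (carbon_weight * r.carbon_intensity
                + (1 - carbon_weight) * r.load / r.capacity)
    best = min(candidates, key=cost)
    best.load += 1  # elastic adjustment: the pool state changes per request
    return best

# Toy pool mixing edge, data-center and satellite resources
pool = [
    Resource("edge-node", capacity=4, load=3, carbon_intensity=0.9),
    Resource("green-dc", capacity=100, load=10, carbon_intensity=0.2),
    Resource("satellite", capacity=2, load=2, carbon_intensity=0.5),  # saturated
]
chosen = place_request(pool)
print(chosen.name)  # the low-carbon data center wins here
```

In a real system the scalar cost would be replaced by measured energy and QoS telemetry, but the pool-as-a-whole allocation principle is the same.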

Overall, we envision a Smart Green Network as a programmable system based on a unifying controllability framework spanning all resources a service/tenant is authorized to control, including resources from previously separate and heterogeneous domains (e.g., enterprise and telecom networks, virtual and physical, data centers and routers, satellites and terrestrial nodes), covering a global network of networks, including the space domain. The unifying controllability framework will glue the disparate resource islands into one system of the tenant, supporting smart, flexible instantiation and adaptive, elastic and correct execution of any service on those resources (Figure 6-1). For 6G in particular, the resources will stem from all system players: typically from mobile network operators, but also from cloud providers and non-public network providers, and they might include terminals where suitable. Interestingly, 6G will have to architecturally embrace the fact that the system resources used for service execution might themselves be provided as services, i.e., that a service and its control cannot in general be limited to the strict boundaries of the authority domain of the service operator, nor to any particular layer. Rather, in 6G all system participants are potentially both resource providers and service consumers at once. In this situation, the properties of the service must, in general, be enforced regardless of (or even in spite of lacking) assurances at the resource layer.
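One way to read the "glue the disparate resource islands into one system of the tenant" idea is as a thin common control surface over per-domain adapters. The sketch below is purely illustrative (the class names `DomainAdapter`, `UnifiedController` and the domain labels are invented for this example): each island exposes the same `allocate` call, and the controller instantiates a service only across the domains the tenant is authorized for.

```python
from abc import ABC, abstractmethod

class DomainAdapter(ABC):
    """Common control surface over one resource island (hypothetical API)."""
    def __init__(self, domain: str):
        self.domain = domain

    @abstractmethod
    def allocate(self, service: str) -> str:
        """Deploy the service on this island; returns an allocation handle."""

class TelecomAdapter(DomainAdapter):
    def allocate(self, service):
        return f"{self.domain}:slice-for-{service}"   # e.g., a network slice

class CloudAdapter(DomainAdapter):
    def allocate(self, service):
        return f"{self.domain}:vm-for-{service}"      # e.g., a VM/container

class UnifiedController:
    """Spans every domain a given tenant is authorized to control."""
    def __init__(self, adapters, authorized):
        self.adapters = [a for a in adapters if a.domain in authorized]

    def instantiate(self, service):
        # One instantiation call fans out across all authorized islands.
        return [a.allocate(service) for a in self.adapters]

ctl = UnifiedController(
    [TelecomAdapter("mno-ran"), CloudAdapter("edge-cloud"), CloudAdapter("satellite")],
    authorized={"mno-ran", "edge-cloud"},  # the satellite island is out of scope
)
print(ctl.instantiate("xr-stream"))
```

The point of the design is that service logic never talks to a domain-specific API directly, so new islands (satellite, non-public networks, terminals) can be added by writing one adapter.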

Hence, the key challenges that the Unified Controllability layer must solve are:

  • control over multiple general-purpose, distributed, network control operating systems;
  • the availability of powerful abstractions from resources to services;
  • new naming schemes for virtualised resources;
  • dynamic and automated discovery;
  • structurally adaptive logical interconnection;
  • multi-criteria routing in networks of different densities;
  • (potentially intent-based) open APIs and highly configurable policies to control resource and service access as well as dynamics;
  • isolation of applications' execution environments and performance;
  • efficient scheduling of requests to resources;
  • a high degree of automation and support for self-* principles (self-driving networks);
  • secure and human-auditable methods to provide reliable and trusted infrastructures;
  • distributed yet trustworthy AI/ML instrumentations.
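Of the challenges above, multi-criteria routing is the most directly sketchable. A common simple approach (an assumption of this example, not a method prescribed by the SRIA) is to scalarize several per-link metrics, here latency and energy, into one cost and run an ordinary shortest-path search over it; shifting the weights then shifts the selected path.

```python
import heapq

def multi_criteria_route(graph, src, dst, weights=(0.5, 0.5)):
    """Dijkstra over a scalarized cost: `weights` mixes the per-link
    (latency, energy) metrics into a single objective."""
    w_lat, w_en = weights
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nxt, latency, energy in graph.get(node, []):
            cost = d + w_lat * latency + w_en * energy
            if cost < dist.get(nxt, float("inf")):
                dist[nxt] = cost
                prev[nxt] = node
                heapq.heappush(heap, (cost, nxt))
    if dst not in dist:
        return None  # no path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy topology: each edge is (neighbor, latency_ms, energy_units)
graph = {
    "A": [("B", 10, 1), ("C", 5, 8)],
    "B": [("D", 10, 1)],
    "C": [("D", 5, 8)],
}
print(multi_criteria_route(graph, "A", "D", weights=(1.0, 0.0)))  # latency-only
print(multi_criteria_route(graph, "A", "D", weights=(0.0, 1.0)))  # energy-only
```

With latency-only weights the low-latency path via C is chosen; with energy-only weights the frugal path via B wins, illustrating how a policy knob steers routing across criteria.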

Overall, it is imperative to address different challenges to reach this vision:

  • Sustainability – the infrastructure will need to be driven by sustainability considerations, both in its design, and in its applications, striving for implementations that will minimize the total number of units, protocols and interfaces, potentially allowing dynamic pooling of resources from diverse participating systems, devices and objects.
  • Specialization – the infrastructure will need to be able to implement tailored features, while remaining flexible in terms of scalability, onboarding and function placement, offering programmable analytics and cooperative machine learning.
  • High programmability – the infrastructure should offer programmability to the service layer through open interfaces, in technology-agnostic nodes potentially with cloud agnostic and micro-services approaches. Overall, it should integrate autonomics to enable self-organized, resilient programmability and elastic, correct service execution.
  • Extreme connectivity – flexibly incorporating different radio technologies, including non-terrestrial networks, guaranteeing the interconnection of all types of sensing, communication and computing nodes, from low power to high-speed communications.
  • Trustworthiness – embedding security and reliability into the whole infrastructure, for all stakeholders, with full coverage, delivering trusted and privacy-aware services.

Although this is a mid-term vision, it builds on trends already visible in the industry. Many of the aforementioned aspects are already being pursued in simpler forms by telecom operators, which are driving network consolidation at the core (integrating non-standalone/standalone architectures and future B5G networks into a single seamless solution, with fully unified control, rating and billing functions) alongside their 5G deployments, given the expected reduction in operating costs and improved network flexibility.

These directions must also be accompanied by appropriate economic and policy work in future research to pave the way for the envisioned new services that go beyond current 5G.

You can download the whitepaper here, and the more detailed technical annex here.
