Testing the Building Blocks of Intelligent 6G Networks

The latest one6G Open Lecture, number 12 in the series, titled 'Experimental Technologies for 6G: Toward Network Intelligence', provided a useful reminder that 6G is not just about defining new use cases, architectures and KPIs. It is also about proving that the proposed technologies can work in realistic conditions.

As we move closer to the 6G standardisation phase, the discussion is shifting from vision papers to experimentation, benchmarking, testbeds, datasets and validation. This is especially important for technologies such as AI-native networking, Integrated Sensing and Communication (ISAC), localisation, semantic sensing and digital twins. These are all attractive concepts on paper, but they will only become useful if they can be measured, compared, reproduced and trusted.

The lecture was moderated by Josef Eichinger, Chair of one6G Working Group 4, which focuses on testing, proof of concepts and experimental validation. The session brought together four technical presentations, each looking at a different part of the 6G experimentation puzzle.

The first presentation, by Andreas Kassler from Karlstad University, looked at AI time series model benchmarking as a service for 6G. This was a good starting point because many 6G visions assume the presence of AI everywhere in the network. We often hear about AI-native RAN, AI-native telco cloud, AI agents, digital twins and autonomous network control. The practical question is much harder: how do we know which AI model should be used, where it should be deployed, when it should be retrained, and whether it is still performing well?

In today’s networks, many control loops are still based on rules, thresholds and reactive optimisation. In a 6G network, the expectation is that hundreds or even thousands of AI models may be used across cloud, edge and RAN domains. These models may forecast traffic demand, predict resource availability, support anomaly detection, estimate mobility patterns or optimise energy consumption. Some may run as rApps, some as xApps, and some closer to the radio as dApps. Each location has different timing, compute, energy and reliability constraints.

This is where benchmarking becomes essential. Choosing a model based only on accuracy is not enough. A model that gives marginally better prediction accuracy may be unsuitable if it requires excessive GPU memory, consumes too much energy, has unacceptable inference latency or cannot be updated safely. The presentation highlighted the need to evaluate models using a broader set of KPIs, including training cost, inference time, robustness, stability, interpretability, energy consumption and hardware requirements.
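
As a toy illustration of what that broader evaluation could look like, the sketch below scores candidate forecasting models against several KPIs at once, with hard constraints for the placement they would run in. The KPI names, weights, thresholds and numbers are entirely hypothetical and are not taken from the presented framework.

```python
from dataclasses import dataclass

@dataclass
class ModelKPIs:
    """Hypothetical per-model benchmark results (units purely illustrative)."""
    name: str
    mae: float              # forecasting error, lower is better
    inference_ms: float     # inference latency per sample
    energy_j: float         # energy per training epoch
    gpu_mem_gb: float       # peak GPU memory during inference

def score(m: ModelKPIs, max_latency_ms: float = 10.0, max_mem_gb: float = 8.0) -> float:
    """Composite score: accuracy traded off against cost, under hard placement constraints."""
    if m.inference_ms > max_latency_ms or m.gpu_mem_gb > max_mem_gb:
        return float("-inf")  # unusable at this placement, no matter how accurate
    return -(m.mae + 0.01 * m.inference_ms + 0.001 * m.energy_j)

candidates = [
    ModelKPIs("transformer", mae=0.12, inference_ms=14.0, energy_j=900.0, gpu_mem_gb=11.0),
    ModelKPIs("lstm",        mae=0.15, inference_ms=3.0,  energy_j=250.0, gpu_mem_gb=2.0),
    ModelKPIs("linear",      mae=0.21, inference_ms=0.4,  energy_j=5.0,   gpu_mem_gb=0.1),
]

best = max(candidates, key=score)
print(f"Selected model: {best.name}")  # the transformer is ruled out despite the best MAE
```

In this toy case the most accurate model is excluded by the latency and memory constraints, which is exactly the point made in the talk: accuracy alone does not determine which model is deployable.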

The proposed framework automates much of the model evaluation pipeline. Rather than each researcher manually preparing data, training models, tuning hyperparameters and comparing results in an ad hoc way, the framework uses a Kubernetes-native approach to launch training and inference jobs, collect metrics and compare models at scale. It can support different model families, including transformers, recurrent neural networks, diffusion models, regression models and spatio-temporal models. It also considers split learning and federated learning, both of which are relevant when data and compute are distributed across edge and cloud locations.
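
To make the 'Kubernetes-native' idea concrete, here is a minimal sketch of how such a framework might template one benchmark run as a Kubernetes Job manifest. The container image, resource figures, labels and dataset URI are invented for illustration and do not reflect the presented framework's actual configuration.

```python
import json

def make_training_job(model_name: str, dataset_uri: str, gpu: bool = False) -> dict:
    """Build a Kubernetes Job manifest (as a plain dict) for one training/benchmark run."""
    resources = {"limits": {"cpu": "4", "memory": "8Gi"}}
    if gpu:
        resources["limits"]["nvidia.com/gpu"] = "1"
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"train-{model_name}", "labels": {"benchmark": "ts-forecast"}},
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "name": "trainer",
                        "image": "registry.example.org/ts-trainer:latest",  # hypothetical image
                        "args": ["--model", model_name, "--data", dataset_uri,
                                 "--metrics-out", "/results/metrics.json"],
                        "resources": resources,
                    }],
                    "restartPolicy": "Never",
                }
            },
            "backoffLimit": 1,
        },
    }

# One manifest per candidate model; a real framework would submit these through the
# Kubernetes API and harvest the resulting metrics for comparison.
for model in ["transformer", "lstm", "linear"]:
    print(json.dumps(make_training_job(model, "s3://datasets/cell-traffic", gpu=(model != "linear"))))
```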

One particularly interesting point was the move from static model deployment to continuous model lifecycle management. In a 6G environment, the challenge is not simply to find a good model once. The harder challenge is to update the correct model, at the right time, in the right place, without disrupting the network. A traffic steering xApp, for example, may need recalibration when mobility patterns or traffic mix change. An energy-saving dApp may need to update its model when demand changes across cells. This requires drift detection, retraining triggers, model versioning, rollout management and energy-aware placement.
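
A minimal sketch of what such a retraining trigger could look like is shown below, assuming the deployed model's rolling prediction error is compared against a reference window captured just after deployment. The window sizes, threshold ratio and the simulated error source are arbitrary and purely illustrative.

```python
import random
from collections import deque

class DriftMonitor:
    """Flags drift when recent prediction error grows well beyond a post-deployment baseline."""
    def __init__(self, window: int = 200, ratio: float = 1.5):
        self.reference = deque(maxlen=window)   # errors observed right after deployment
        self.recent = deque(maxlen=window)      # most recent errors
        self.ratio = ratio

    def update(self, abs_error: float) -> bool:
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(abs_error)    # still filling the baseline window
            return False
        self.recent.append(abs_error)
        if len(self.recent) < self.recent.maxlen:
            return False
        baseline = sum(self.reference) / len(self.reference)
        current = sum(self.recent) / len(self.recent)
        return current > self.ratio * baseline  # True -> trigger retraining and staged rollout

def error_stream():
    """Toy error source: stable at first, then the traffic mix shifts and errors grow."""
    for i in range(1000):
        yield abs(random.gauss(0.1 if i < 600 else 0.25, 0.03))

monitor = DriftMonitor()
for err in error_stream():
    if monitor.update(err):
        print("Drift detected: schedule retraining, versioned rollout and validation")
        break
```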

The second presentation, by Andrea Conti from the University of Ferrara, focused on AI-based localisation and sensing. This topic is increasingly important because 6G is expected to provide much richer situational awareness than previous generations. Location information will not only be used by external applications; it can also improve the network itself, for example by supporting beam management, resource allocation, digital twins and automation.

Traditional localisation often relies on extracting a single estimate from a measurement, such as time of arrival, time difference of arrival or angle of arrival, and then feeding those estimates into a positioning algorithm. This can work in favourable conditions, but it becomes fragile in complex radio environments with non-line-of-sight propagation, rich multipath and clutter.

The alternative described in the lecture is a probabilistic, soft-information-based approach. Instead of reducing the received waveform to a single estimated value, the system tries to preserve richer information about the relationship between measurements and possible positions. This can be combined with AI and machine learning techniques to learn propagation behaviour in complex environments. In simple terms, the system does not just ask where the user is most likely to be; it maintains a richer picture of where the user could be and how confident it is.
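
As a stylised illustration of the difference between a hard estimate and soft information, the sketch below keeps a posterior over a grid of candidate positions and updates it with a range likelihood from each anchor, rather than collapsing each measurement to a single point. The geometry, noise level and grid resolution are invented for illustration and do not represent the method presented in the lecture.

```python
import numpy as np

# Grid of candidate positions in a 20 m x 20 m area; soft information is kept over all of them.
xs, ys = np.meshgrid(np.linspace(0, 20, 200), np.linspace(0, 20, 200))
posterior = np.ones_like(xs)                      # flat prior
anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0)]  # hypothetical anchor positions (m)
true_pos = np.array([12.0, 7.0])
sigma = 1.0                                       # ranging noise standard deviation (m)

rng = np.random.default_rng(0)
for ax, ay in anchors:
    true_range = np.hypot(*(true_pos - (ax, ay)))
    measured = true_range + rng.normal(0.0, sigma)       # noisy range measurement
    grid_range = np.hypot(xs - ax, ys - ay)
    likelihood = np.exp(-0.5 * ((grid_range - measured) / sigma) ** 2)
    posterior *= likelihood                              # Bayesian update, no hard decision yet
    posterior /= posterior.sum()

# The full posterior can be handed to the next processing stage as-is, or summarised if needed.
i, j = np.unravel_index(posterior.argmax(), posterior.shape)
print(f"MAP position ~ ({xs[i, j]:.1f}, {ys[i, j]:.1f}) m, peak probability mass {posterior.max():.3f}")
```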

The presentation also covered semantic sensing, where the goal is not only to detect or localise an object, but also to classify what kind of object it is. In industrial environments, this could mean distinguishing between a person and an automated guided vehicle. The demonstration used radar-based sensing to track and classify targets, with the system using features such as Doppler signatures rather than relying only on radar cross-section. This is an important distinction because in real environments, simple target characteristics may not be reliable enough.
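
A very rough sketch of why Doppler structure, rather than echo strength alone, carries class information: a walking person produces a spread micro-Doppler signature from limb motion, while a rigid vehicle concentrates its energy around a single Doppler frequency. The sampling rate, simulated velocities and the spread threshold below are synthetic and illustrative only.

```python
import numpy as np

fs, T = 1_000.0, 2.0                          # slow-time sampling rate (Hz) and dwell time (s)
t = np.arange(0, T, 1 / fs)

def echo(body_hz: float, limb_hz: float = 0.0) -> np.ndarray:
    """Toy slow-time return: constant body Doppler plus sinusoidal micro-Doppler from limbs."""
    phase = 2 * np.pi * body_hz * t
    if limb_hz:
        # limbs swing at ~1.5 Hz, sweeping the instantaneous Doppler by +/- limb_hz
        phase += (limb_hz / 1.5) * np.sin(2 * np.pi * 1.5 * t)
    return np.exp(1j * phase)

def doppler_spread(x: np.ndarray) -> float:
    """RMS width of the Doppler spectrum around its centroid, in Hz."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(x))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(len(x), 1 / fs))
    p = spec / spec.sum()
    centroid = np.sum(f * p)
    return float(np.sqrt(np.sum(((f - centroid) ** 2) * p)))

for label, limb in [("AGV (rigid)", 0.0), ("person (walking)", 60.0)]:
    spread = doppler_spread(echo(body_hz=40.0, limb_hz=limb))
    print(f"{label}: Doppler spread ~ {spread:.1f} Hz ->",
          "person" if spread > 10.0 else "vehicle")
```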

The Q&A brought out some useful practical points. For example, semantic sensing systems need training data for different classes of targets, and performance will depend on the environment, number of targets and quality of clutter mitigation. The speaker also noted that the system can continuously update its classification rather than making a single one-off decision. This is the kind of detail that matters when moving from a lab demonstration to a deployable sensing system.

The third presentation, by Narcís Cardona from Universitat Politècnica de València, looked at game-engine-based fast ray tracing. This was one of the more visually striking parts of the lecture, but the key message was not just that game engines can make nice demonstrations. The real point is that high-performance graphics tools can support faster and more detailed radio environment modelling.

Ray tracing is useful when deterministic channel modelling is required, especially for wideband channels, large antenna arrays, localisation, ISAC and sensing. In these cases, simple statistical models may not capture the necessary spatial, angular and delay-domain details. Game engines can accelerate ray launching and ray tracing using GPU capabilities, allowing very large numbers of rays to be processed more efficiently than traditional CPU-based approaches.

However, the presentation was careful not to oversell the approach. Ray tracing is not magic. It depends heavily on the physics assumptions, environmental data and material modelling. The validity of geometrical optics and uniform theory of diffraction depends on the frequency, object dimensions and environment. Diffuse scattering, diffraction, near-field effects, large antenna arrays, material layers and internal reflections all complicate the picture.

One useful point was that attractive visual renderings are not the same as accurate radio models. For realistic 6G simulation, the environment needs to include not only building shapes but also material properties, object classification, surface behaviour and calibration against measurements. A simplified 3D map may be good enough for visualisation, but it may not be good enough for channel modelling, sensing or digital twin applications.

The talk also highlighted the importance of post-processing. Raw ray tracing outputs may include huge numbers of rays with information about path length, angles, material interactions and received power. These need to be cleaned, clustered and transformed into channel representations such as channel tensors, power delay profiles, angles of arrival and channel state information. In other words, the game engine can help generate the data, but a lot of signal processing and radio expertise is still required to turn that data into something meaningful.
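
To give a flavour of that post-processing step, the sketch below collapses a list of ray-tracer outputs into a discretised power delay profile. The ray records, field names and delay resolution are made up for illustration and do not reflect any particular tool's output format.

```python
import numpy as np

# Hypothetical ray-tracer output: one record per ray that reached the receiver.
rays = [
    {"delay_ns": 103.2, "power_dbm": -72.0},   # line-of-sight-like path
    {"delay_ns": 118.7, "power_dbm": -81.5},   # single wall reflection
    {"delay_ns": 119.1, "power_dbm": -84.0},   # nearby scatterer, falls into the same bin
    {"delay_ns": 241.6, "power_dbm": -95.2},   # long multipath component
]

bin_ns = 5.0                                          # delay resolution of the profile
n_bins = int(np.ceil(max(r["delay_ns"] for r in rays) / bin_ns)) + 1
pdp_lin = np.zeros(n_bins)

for r in rays:
    b = int(r["delay_ns"] // bin_ns)
    pdp_lin[b] += 10 ** (r["power_dbm"] / 10.0)       # sum ray powers per bin in linear scale

with np.errstate(divide="ignore"):
    pdp_db = 10 * np.log10(pdp_lin)                   # back to dB; empty bins become -inf

for b, p in enumerate(pdp_db):
    if np.isfinite(p):
        print(f"delay {b * bin_ns:6.1f} ns : {p:6.1f} dBm")
```

The same kind of aggregation, extended with angles and polarisation, is what eventually feeds channel tensors and channel state information into link- and system-level simulators.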

The final presentation, by Mohamed Gharba from Huawei Technologies, focused on 6G ISAC-based environmental sensing for mapping, monitoring and reconstruction. This brought the session back to one of the central themes of 6G: the network as a sensing platform.

ISAC aims to use communication infrastructure not only to transmit data but also to perceive the environment. In practical terms, radio signals can be used to detect objects, estimate their position, reconstruct environments, monitor changes and potentially classify activities. The presentation showed field trial results for indoor and outdoor environmental reconstruction, including the generation of point clouds and the use of AI for classification and interpretation.
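
As a simple illustration of how such a point cloud can be assembled, the sketch below converts per-detection range and angle estimates at a base station into Cartesian points in a common frame. The detection values and site position are invented for illustration and are not taken from the field trials shown.

```python
import numpy as np

site = np.array([0.0, 0.0, 10.0])   # hypothetical base-station position (x, y, z) in metres

# Hypothetical ISAC detections: (range m, azimuth deg, elevation deg) relative to the site.
detections = np.array([
    [35.0,  12.0, -8.0],
    [35.4,  13.5, -8.2],
    [80.2, -40.0, -5.5],
])

r = detections[:, 0]
az = np.deg2rad(detections[:, 1])
el = np.deg2rad(detections[:, 2])

# Spherical-to-Cartesian conversion, then shift into the global frame of the site.
points = site + np.stack([
    r * np.cos(el) * np.cos(az),
    r * np.cos(el) * np.sin(az),
    r * np.sin(el),
], axis=1)

for p in points:
    print(f"point at x={p[0]:6.1f} m, y={p[1]:6.1f} m, z={p[2]:5.1f} m")
```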

A key message was that ISAC could become a remote, all-weather monitoring capability. Unlike cameras, radio-based sensing may work in poor visibility or where lighting conditions are not ideal. Unlike dedicated sensors, ISAC has the potential to use infrastructure that is already part of the communications system. Of course, this does not remove the technical challenges. The system still needs high-quality measurements, robust algorithms, good calibration and careful interpretation of multipath and reflections.

The Q&A also highlighted the value of shared datasets. Several speakers referred to the importance of making measurement data available, either for localisation, ray tracing validation or ISAC reconstruction. This is a crucial point for the wider 6G research community. Without shared datasets and reproducible test environments, it becomes difficult to compare algorithms, validate claims or understand how well a solution generalises beyond a specific laboratory setup.

Taken together, the four talks showed that network intelligence in 6G is not a single technology. It is a combination of AI models, sensing capabilities, radio environment understanding, digital representations, orchestration, benchmarking and lifecycle management. It also requires a stronger connection between communications theory, machine learning, signal processing, software infrastructure and practical measurement campaigns.

For me, the most important takeaway was that 6G experimentation needs to become more systematic. We need testbeds, but also benchmarking frameworks. We need AI models, but also model lifecycle management. We need ray tracing, but also calibrated environments and material data. We need ISAC demonstrations, but also datasets and reproducibility. We need digital twins, but also a clear understanding of what is being twinned, at what fidelity, and for which decision-making purpose.

This is why one6G Working Group 4’s focus on testing, proof of concepts and verification is timely. As 6G moves from research vision to technical specification, the community will need more than attractive concepts. It will need evidence that these technologies can work together, scale, remain energy efficient and deliver reliable performance in realistic environments.

The lecture is well worth watching, especially for anyone interested in the practical side of AI-native 6G, localisation, sensing, ray tracing and ISAC. It shows that the journey toward network intelligence is not just about making networks smarter. It is also about making the methods used to evaluate that intelligence smarter, more transparent and more repeatable.
