Networking for Distributed AI and Why It Matters for 6G

On 11 February 2026, theNetworkingChannel hosted an online panel on Networking for Distributed AI. The brief made it clear that artificial intelligence is reshaping not only computing platforms but also the networks that interconnect them. The real question is no longer whether AI will influence networking, but how deeply the two will converge and what this means for 6G.

The panel brought together complementary perspectives from academia and industry. The speakers were Tommaso Melodia from Northeastern University, Anastasios Giovanidis from Ericsson R&D, and Mario Baldi from NVIDIA. The session was organised by Marco Ajmone Marsan of IMDEA Networks Institute and Politecnico di Torino. What made the discussion particularly interesting was how naturally the three talks flowed from one layer of the stack to another, starting from the 6G vision, moving into operator networks, and then diving deep into the AI data centre.

Tommaso Melodia set the tone with a strong argument that networks are no longer simple data pipes. As AI systems move beyond chatbots and search engines into the physical world of robots, autonomous vehicles, drones and cyber physical systems, connectivity becomes central to how intelligence is sensed, distributed and executed. These systems must sense, decide and act in real time, and that demands more than best effort transport.

He described how the radio access network is evolving from a relatively closed, hardware centric system into a programmable and software defined platform. Open and virtualised architectures allow compute resources to sit closer to the edge, enabling AI workloads to run within the network itself. The idea is not only AI for the network, improving performance and efficiency, but AI in and on the network, sharing edge compute resources and supporting distributed intelligence.

In this vision, the access network becomes more than connectivity infrastructure. It becomes a distributed sensing fabric. It can collect data from the physical world, feed digital twins in the cloud, and support real time inference at the edge. If future world models require data grounded in physical reality rather than only text and abstract representations, then the RAN, with its geographical reach and pervasive presence, could effectively become a global data factory for physical AI. That naturally leads to a strategic question for operators. Do they remain transport providers, or do they evolve into intelligence platforms?

Anastasios Giovanidis then brought the operator and vendor perspective, outlining Ericsson’s vision of AI native 6G. In this model, AI is embedded across the full lifecycle of the network, from design and deployment to optimisation and maintenance. Intelligence should run wherever it is needed in the system, supported by distributed data infrastructure and accelerators such as GPUs. The aim is intrinsic and trustworthy AI capabilities that are part of the network’s natural functionality rather than an add on.

A key concept here is intent driven networking. Instead of manually configuring detailed parameters, operators express high level requirements, such as throughput, latency and energy efficiency targets. The system then decomposes this intent into specific control actions. Multi agent reinforcement learning is one approach, where distributed agents at different sites take local decisions based on local KPIs while collectively working towards global objectives.
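
The decomposition step can be pictured with a toy sketch. Everything below — the `Intent` fields, the even split across sites, the function names — is hypothetical and serves only to illustrate the idea of turning one high level target into per site control actions:

```python
# Illustrative sketch of intent decomposition. All names and the naive
# even-split policy are invented; a real controller would use topology
# and traffic models.
from dataclasses import dataclass

@dataclass
class Intent:
    min_throughput_mbps: float
    max_latency_ms: float
    max_energy_kwh_per_day: float

def decompose(intent: Intent, n_sites: int) -> list[dict]:
    """Split a network-level intent into per-site control targets."""
    return [
        {
            "site": i,
            # Divide the throughput and energy budgets evenly across sites.
            "throughput_target_mbps": intent.min_throughput_mbps / n_sites,
            # The end-to-end latency bound applies at every site.
            "latency_budget_ms": intent.max_latency_ms,
            "energy_budget_kwh": intent.max_energy_kwh_per_day / n_sites,
        }
        for i in range(n_sites)
    ]

actions = decompose(Intent(1000.0, 20.0, 240.0), n_sites=4)
print(actions[0])
```

In practice the decomposition is the hard part: it must account for interference between sites and for KPIs that do not split linearly, which is one reason learning based approaches are attractive.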

A practical example discussed was antenna tilt and transmission power optimisation. Reinforcement learning based solutions have already delivered measurable gains in user throughput and reductions in energy consumption in live networks. Energy efficiency was a strong theme throughout, particularly because the RAN accounts for the majority of network power consumption. AI based prediction and control can dynamically adapt site behaviour to traffic conditions, reducing energy usage without degrading user experience.
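
To illustrate the flavour of such a solution (not any production system), a bandit style reinforcement learning loop can learn a good downtilt purely from an observed KPI. The tilt values, reward curve and hyperparameters below are all invented:

```python
# Toy bandit-style RL loop for antenna downtilt selection.
# The reward function stands in for a measured KPI such as cell
# throughput; every number here is hypothetical.
import random

random.seed(0)  # deterministic toy run

TILTS = [0, 2, 4, 6, 8]  # candidate downtilt angles in degrees

def reward(tilt: int) -> float:
    # Invented throughput proxy that peaks at 4 degrees of downtilt.
    return 1.0 - 0.1 * abs(tilt - 4)

q = {t: 0.0 for t in TILTS}   # running value estimate per tilt
alpha, epsilon = 0.1, 0.2     # learning rate, exploration probability

for _ in range(2000):
    if random.random() < epsilon:
        tilt = random.choice(TILTS)   # explore a random tilt
    else:
        tilt = max(q, key=q.get)      # exploit the best estimate so far
    q[tilt] += alpha * (reward(tilt) - q[tilt])  # incremental update

print(max(q, key=q.get))  # prints 4, the tilt with the highest reward
```

Live networks add state (traffic, neighbour cells) and safety constraints on exploration, which is why the deployed systems are considerably more elaborate than this single cell sketch.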

He also touched on the concept of agentic networks, where AI agents observe, decide and act, and can even interact with each other using defined protocols. Over time, the ambition is zero touch, explainable and intent aware networks. Large language models may not be suited to strict millisecond control loops, but they can help with explainability and human interaction, translating complex AI decisions into understandable reasoning for operators.

Mario Baldi shifted the focus to the AI data centre, reminding us that there is no distributed AI without high performance networking. Modern AI platforms interconnect thousands or even hundreds of thousands of GPUs and CPUs. The architecture is hierarchical, with scale up networking within racks or pods, scale out networking across racks, and a front end network connecting storage and users.

AI traffic patterns are very different from traditional internet traffic. They are highly synchronised and repetitive, often involving collective communications where groups of processors must exchange data in tightly coordinated phases. Computation frequently cannot proceed until communication is complete, which makes networking performance directly visible in overall training or inference time.
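
A classic example of such a collective is ring all-reduce, where every worker ends up with the element wise sum of all workers' vectors after 2(N-1) tightly coordinated neighbour exchanges. A minimal pure Python simulation of the pattern (no real network involved):

```python
# Simulate ring all-reduce with N workers, each holding a vector split
# into N chunks (one scalar per chunk, for simplicity).
def ring_allreduce(vectors: list[list[float]]) -> list[list[float]]:
    n = len(vectors)
    data = [list(v) for v in vectors]
    # Phase 1: reduce-scatter. After n-1 steps, worker r holds the fully
    # reduced chunk (r + 1) % n.
    for step in range(n - 1):
        for rank in range(n):
            c = (rank - step) % n
            data[(rank + 1) % n][c] += data[rank][c]
    # Phase 2: all-gather. Each worker forwards its reduced chunk around
    # the ring; after n-1 more steps everyone holds the full result.
    for step in range(n - 1):
        for rank in range(n):
            c = (rank + 1 - step) % n
            data[(rank + 1) % n][c] = data[rank][c]
    return data

workers = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
print(ring_allreduce(workers))  # every worker ends with [12.0, 15.0, 18.0]
```

Every one of those exchanges is a synchronisation point: a single slow or congested link stalls the whole ring, which is exactly why network performance shows up directly in training time.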

This creates stringent requirements for extreme bandwidth and very low latency, along with efficient congestion control and effective use of multiple equal cost paths in fat tree topologies. Traditional networking approaches are not always suitable. There are open challenges around congestion control tailored to AI workloads, multipathing strategies, and whether scale up and scale out networks can eventually converge into a unified architecture. Power consumption is another concern. While GPUs consume significant energy, network power scales roughly linearly with transmission capacity, meaning that as clusters grow, networking could become an even larger share of total energy consumption.
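
That last point can be made concrete with a back of the envelope model. Assuming, purely hypothetically, a fixed power draw per GPU, network power proportional to capacity per switching tier, and a fat tree whose tier count grows with cluster size, the network's share of total power rises as the cluster scales:

```python
# Back-of-the-envelope model of the point that network power scales
# roughly linearly with transmission capacity. Every constant here is
# hypothetical, chosen only to show the trend, not to match real hardware.
import math

GPU_POWER_W = 1000             # assumed power draw per GPU
NET_W_PER_GBPS_PER_TIER = 0.05  # assumed network power per Gb/s per tier

def fabric_tiers(n_gpus: int, radix: int = 64) -> int:
    # Rough tier count for a fat tree built from `radix`-port switches.
    return max(1, math.ceil(math.log(n_gpus) / math.log(radix // 2)))

def network_share(n_gpus: int, gbps_per_gpu: float = 800.0) -> float:
    gpu_w = n_gpus * GPU_POWER_W
    net_w = (n_gpus * gbps_per_gpu
             * NET_W_PER_GBPS_PER_TIER * fabric_tiers(n_gpus))
    return net_w / (gpu_w + net_w)

for n in (1_000, 32_000, 1_000_000):
    print(f"{n:>9} GPUs: network ~ {network_share(n):.0%} of total power")
```

The absolute percentages are meaningless, but the direction is the panel's point: larger clusters need deeper fabrics and faster links, so the network's slice of the energy budget grows even if per-bit efficiency improves.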

Taken together, the panel highlighted a clear convergence. AI data centres are pushing networking to its limits in scale and performance, while telecom networks are becoming programmable platforms capable of hosting distributed intelligence and acting as sensing infrastructures for physical AI. For those following 6G developments, networking for distributed AI is not a side topic. It sits at the intersection of cloud, edge, radio and data centre, and challenges long standing architectural and business assumptions.

The full discussion is well worth watching, and I have embedded the video of the panel below for those who would like to explore the details and nuances directly from the speakers.
