Making Devices, Machines and Things Smarter with Federated Learning

Federated Learning (FL) is a branch of Machine Learning (ML) that enables devices, machines and things to collaboratively learn a shared prediction model while keeping all the training data local. This respects confidentiality and privacy, and decouples the ability to do ML from the need to store the data in the cloud.

I found this simple video from Google to be a good explanation, and you can also read the details on the Google AI blog here.

A recently published paper titled 'Opportunities of Federated Learning in Connected, Cooperative and Automated Industrial Systems' is also a fantastic place to start looking at advanced challenges with FL. Here is an extract.

Networked and cooperative intelligent machines have recently opened new research opportunities that target the integration of distributed ML tools with sensing, communication and decision operations. Cross-fertilization of these components is crucial to enable challenging collaborative tasks in terms of safety, reliability, scalability and latency.

Among distributed ML techniques, federated learning (FL) has been emerging for model training in decentralized wireless systems. Model parameters, namely the weights and biases in deep neural network (DNN) layers, are optimized collectively through cooperation of interconnected devices acting as distributed learners. In contrast to conventional edge-cloud ML, FL does not require sending local training data to the server, which may be infeasible in mission-critical settings with extremely low latency and data privacy constraints.

The most popular FL implementation, namely federated averaging, alternates between the computation of a local model at each device and a round of communication with the server to learn a global model. Local models are typically obtained by minimizing a local loss function via Stochastic Gradient Descent (SGD) steps, using local training examples and target values.
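To make that loop concrete, here is a minimal sketch of federated averaging in Python. It is an illustration rather than the paper's implementation: the linear model, the learning rate, and the function names (`local_sgd`, `federated_averaging`) are all assumptions made for the example.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """A few SGD steps on one device's private data (squared loss)."""
    w = w.copy()
    for _ in range(epochs):
        for i in np.random.permutation(len(X)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x.w - y)^2
            w -= lr * grad
    return w

def federated_averaging(devices, w, rounds=20):
    """Alternate local training with server-side weighted averaging."""
    for _ in range(rounds):
        # Each device trains locally; only parameters leave the device
        local = [(local_sgd(w, X, y), len(X)) for X, y in devices]
        total = sum(n for _, n in local)
        w = sum(n * wk for wk, n in local) / total  # fuse, weighted by data size
    return w

# Toy example: three devices each hold private samples of y = 2*x1 - x2
rng = np.random.default_rng(0)
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ np.array([2.0, -1.0])))

print(federated_averaging(devices, w=np.zeros(2)))  # approaches [2, -1]
```

The key property the sketch preserves is that raw samples never leave a device; the server only ever sees model parameters.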

Federated averaging is privacy-preserving by design, as it keeps the training data on-device. However, it still relies on a server-client architecture, which may not be robust to data poisoning attacks and may not scale well. Overcoming this issue mandates moving towards fully decentralized FL solutions that rely solely on local processing and cooperation among end machines. As shown in Fig. 1, each device sends its local ML model parameters to its neighbors and receives the corresponding updates in return. It then improves its local parameters by fusing the received contributions, and this procedure continues until convergence.
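The send-to-neighbors-and-fuse step can also be sketched in a few lines. Again, this is a hedged illustration, not the paper's algorithm: the fixed ring topology, uniform mixing weights, and simple linear model are assumptions chosen to keep the example short.

```python
import numpy as np

def decentralized_round(weights, neighbors, devices, lr=0.05):
    """One serverless FL round: local gradient step, then neighbor fusion."""
    # 1. Each device refines its model on its own private data (squared loss)
    updated = [w - lr * X.T @ (X @ w - y) / len(X)
               for w, (X, y) in zip(weights, devices)]
    # 2. Each device fuses its parameters with its neighbors' (uniform weights)
    return [sum([updated[k]] + [updated[j] for j in neighbors[k]])
            / (1 + len(neighbors[k]))
            for k in range(len(updated))]

# Four devices in a ring, each holding private samples of y = 2*x1 - x2
rng = np.random.default_rng(1)
devices = []
for _ in range(4):
    X = rng.normal(size=(40, 2))
    devices.append((X, X @ np.array([2.0, -1.0])))
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring topology

weights = [np.zeros(2) for _ in devices]
for _ in range(200):  # repeat until convergence
    weights = decentralized_round(weights, neighbors, devices)
print(weights[0])  # every device ends up near [2, -1], with no server involved
```

Uniform averaging over neighbors is the simplest gossip-style fusion rule; practical systems would weight contributions, for instance by data size or trust, and would have to cope with unreliable links.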

The article addresses the opportunities of emerging distributed FL tools specifically tailored for systems characterized by autonomous industrial components (vehicles, robots). FL is first proposed as an integral part of the sensing-decision-action loop. Next, novel decentralized FL tools and emerging research challenges are highlighted. The potential of FL is further elaborated with considerations primarily given to mission-critical control operations in the field of cooperative automated vehicles and densely interconnected robots. Analysis with real data on a practical usage scenario reveals FL as a promising tool underpinned by ultra-reliable low-latency (URLLC) communications.

Last year, at the International Workshop on Fundamentals of Machine Learning over Networks organised by KTH Royal Institute of Technology, Stockholm, Sweden, Dr. H. Vincent Poor gave a talk on Learning at the Wireless Edge, covering both FL and decentralized learning. The talk is embedded below.
