2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: MLSP-33.5
Paper Title: Decentralized Deep Learning using Momentum-Accelerated Consensus
Authors: Aditya Balu, Iowa State University, United States; Zhanhong Jiang, Johnson Controls, United States; Sin Yong Tan, Iowa State University, United States; Chinmay Hegde, New York University, United States; Young M. Lee, Johnson Controls, United States; Soumik Sarkar, Iowa State University, United States
Session: MLSP-33: Optimization Methods
Location: Gather.Town
Session Time: Thursday, 10 June, 15:30 - 16:15
Presentation Time: Thursday, 10 June, 15:30 - 16:15
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-DFED] Distributed/Federated Learning
IEEE Xplore Open Preview: Available in IEEE Xplore
Abstract: We consider the problem of decentralized deep learning, where multiple agents collaborate to learn from a distributed dataset. While several decentralized deep learning approaches exist, the majority consider a central parameter-server topology for aggregating the model parameters from the agents. However, such a topology may be inapplicable in networked systems such as ad-hoc mobile networks, field robotics, and power network systems, where direct communication with the central parameter server may be inefficient. In this context, we propose and analyze a novel decentralized deep learning algorithm where the agents interact over a fixed communication topology (without a central server). Our algorithm is based on the heavy-ball acceleration method used in gradient-based optimization. We propose a novel consensus protocol where each agent shares with its neighbors its model parameters and gradient-momentum values during the optimization process. We consider nonconvex objective functions and theoretically analyze our algorithm's performance. We present several empirical comparisons with competing decentralized learning methods to demonstrate the efficacy of our approach under different communication topologies.
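The abstract describes the protocol only at a high level: each agent mixes both its model parameters and its gradient-momentum buffer with its neighbors over a fixed topology, then takes a local heavy-ball step. The NumPy sketch below illustrates one plausible reading of such a momentum-accelerated consensus update. The ring topology, the doubly-stochastic mixing matrix W, the quadratic local losses, and the exact ordering of the mixing and gradient steps are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch of a momentum-accelerated consensus step.
# Assumptions (not taken from the paper): n agents on a ring topology,
# a doubly-stochastic mixing matrix W, quadratic local losses
# f_i(x) = 0.5 * ||x - t_i||^2, a fixed step size alpha, and a
# heavy-ball momentum coefficient beta.

n, d = 5, 10                      # number of agents, parameter dimension
alpha, beta = 0.05, 0.9           # step size and momentum coefficient
rng = np.random.default_rng(0)

# Ring-topology mixing matrix: each agent averages itself and its
# two neighbors with equal weight (symmetric, rows sum to 1).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

# Each agent i holds a local quadratic loss centered at a private target t_i;
# the global objective sum_i f_i is minimized at the mean of the targets.
targets = rng.normal(size=(n, d))

def local_grad(i, x):
    return x - targets[i]

x = rng.normal(size=(n, d))       # per-agent model parameters
v = np.zeros((n, d))              # per-agent gradient-momentum buffers

for k in range(200):
    grads = np.stack([local_grad(i, x[i]) for i in range(n)])
    # One communication round: each agent mixes its neighbors' momentum
    # buffers and parameters, then takes a local heavy-ball step.
    v = beta * (W @ v) + grads
    x = W @ x - alpha * v

print("consensus spread:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to global optimum:", np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

On this toy problem, the per-agent iterates both reach consensus and approach the minimizer of the sum of the local losses, which is the qualitative behavior the abstract's analysis concerns.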