PowerGossip

Practical low-rank communication compression in decentralized deep learning

Inspired by the PowerSGD algorithm for centralized deep learning, PowerGossip compresses the model differences exchanged between neighboring workers using low-rank approximations computed by power iteration, maximizing the useful information transferred per bit. The authors prove that the method requires no additional hyperparameters, converges faster than prior methods, and that its convergence rate is asymptotically independent of both the network topology and the compression rate.
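
To make the idea concrete, here is a minimal NumPy sketch of one such compressed exchange, simulated in a single process for clarity (the function name, matrix sizes, and step size are illustrative; this is not the authors' PyTorch implementation). Each worker multiplies its own parameter matrix by a shared vector and sends only the resulting small vector; the rank-1 approximation of their difference then pulls both workers toward consensus, and reusing q across steps turns repeated exchanges into power iteration:

```python
import numpy as np

def power_gossip_step(w_i, w_j, q, lr=0.5):
    # One simulated PowerGossip exchange between two neighboring workers.
    # In a real decentralized run, each worker only sends/receives the
    # small vectors p and q; both sides are computed here in one process.
    p = w_i @ q - w_j @ q              # = (w_i - w_j) @ q; workers exchange w @ q
    p /= np.linalg.norm(p) + 1e-12     # normalize, as in power iteration
    q = w_i.T @ p - w_j.T @ p          # = (w_i - w_j).T @ p; second small exchange
    approx = np.outer(p, q)            # rank-1 approximation of the difference
    # Both workers move toward each other along the approximation;
    # q is returned and reused as a warm start for the next step.
    return w_i - lr * approx, w_j + lr * approx, q

# Usage: repeated steps drive the two workers' parameters to consensus
# while only small vectors ever cross the (simulated) network.
rng = np.random.default_rng(0)
w_i, w_j = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
q = rng.normal(size=4)                 # shared initial vector (e.g. common seed)
for _ in range(50):
    w_i, w_j, q = power_gossip_step(w_i, w_j, q)
print(np.linalg.norm(w_i - w_j))       # small: the parameters have nearly agreed
```

With the step size 0.5 used above, each exchange removes the component of the difference along the current power-iteration direction, so the residual difference shrinks at every step.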

Deep Neural Networks
Key facts
  • Maturity: Technical, Research papers
  • Support: C4DT: Inactive; Lab: Unknown

Machine Learning and Optimization Laboratory

Prof. Martin Jaggi

The Machine Learning and Optimization Laboratory is interested in machine learning, optimization algorithms and text understanding, as well as several application domains.
