Is network the bottleneck of distributed training?

Zhen Zhang, Chaokun Chang, Haibin Lin, Yida Wang, Raman Arora, Xin Jin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently there has been a surge of research on improving the communication efficiency of distributed training. However, little work has been done to systematically understand whether the network is the bottleneck and, if so, to what extent. In this paper, we take a first-principles approach to measure and analyze the network performance of distributed training. As expected, our measurement confirms that communication is the component that blocks distributed training from linear scale-out. However, contrary to the common belief, we find that the network runs at low utilization and that, if the network could be fully utilized, distributed training would achieve a scaling factor close to one. Moreover, while many recent proposals on gradient compression advocate compression ratios of over 100x, we show that under full network utilization there is no need for gradient compression on a 100 Gbps network. On a lower-speed network such as 10 Gbps, a gradient compression ratio of only 2x-5x suffices to achieve almost linear scale-out. Compared to application-level techniques like gradient compression, network-level optimizations do not require changes to applications and do not hurt the performance of trained models. We therefore advocate that the real challenge of distributed training is for the network community to develop high-performance network transport that fully utilizes network capacity and achieves linear scale-out.
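The bandwidth arithmetic behind these claims can be sketched in a few lines. The following Python snippet is a back-of-the-envelope model, not taken from the paper: the model size (GRAD_BYTES), per-iteration compute time (COMPUTE_S), and the idealized assumption that gradient exchange fully overlaps with compute at full link utilization are all illustrative assumptions.

```python
# A minimal back-of-the-envelope sketch, not taken from the paper.
# Assumptions (hypothetical, for illustration only):
#   - a 100M-parameter model with fp32 gradients,
#   - 100 ms of compute per training iteration,
#   - gradient exchange fully overlapped with compute at full link utilization.

GRAD_BYTES = 100e6 * 4   # assumed gradient volume per iteration (bytes)
COMPUTE_S = 0.1          # assumed per-iteration compute time (seconds)

def scaling_factor(bandwidth_gbps: float, compression: float = 1.0) -> float:
    """Fraction of linear scale-out retained when gradient traffic is
    overlapped with compute and the link runs at full utilization."""
    comm_s = (GRAD_BYTES * 8 / compression) / (bandwidth_gbps * 1e9)
    return COMPUTE_S / max(COMPUTE_S, comm_s)

for gbps in (10, 100):
    print(f"{gbps:>3} Gbps, no compression: {scaling_factor(gbps):.2f}")
    # -> 10 Gbps: 0.31, 100 Gbps: 1.00 with the numbers assumed above

# Smallest integer compression ratio that hides all traffic at 10 Gbps
ratio = next(r for r in range(1, 100) if scaling_factor(10, r) >= 1.0)
print(f"10 Gbps needs ~{ratio}x compression for linear scale-out")  # -> 4x
```

With these assumed numbers, a fully utilized 100 Gbps link already hides all gradient traffic behind compute, while a 10 Gbps link needs roughly 4x compression, in line with the 2x-5x range stated in the abstract.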

Original language: English (US)
Title of host publication: NetAI 2020 - Proceedings of the 2020 Workshop on Network Meets AI and ML
Publisher: Association for Computing Machinery
Pages: 8-13
Number of pages: 6
ISBN (Electronic): 9781450380430
DOIs
State: Published - Aug 14 2020
Event: 2020 ACM Workshop on Network Meets AI and ML, NetAI 2020 - Virtual, Online, United States
Duration: Aug 14 2020 → …

Publication series

Name: NetAI 2020 - Proceedings of the 2020 Workshop on Network Meets AI and ML

Conference

Conference: 2020 ACM Workshop on Network Meets AI and ML, NetAI 2020
Country/Territory: United States
City: Virtual, Online
Period: 8/14/20 → …

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
