TY - JOUR
T1 - MiCS: Near-linear Scaling for Training Gigantic Model on Public Cloud
T2 - 49th International Conference on Very Large Data Bases, VLDB 2023
AU - Zhang, Zhen
AU - Zheng, Shuai
AU - Wang, Yida
AU - Chiu, Justin
AU - Karypis, George
AU - Chilimbi, Trishul
AU - Li, Mu
AU - Jin, Xin
N1 - Funding Information:
We sincerely thank the anonymous reviewers for their valuable feedback. We thank the Amazon Search M5 team for providing large clusters. Xin Jin and Shuai Zheng are the corresponding authors. Xin Jin is with the Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education. Zhen Zhang is supported in part by NSF grants CNS-1813487 and CCF-1918757. Xin Jin is supported in part by National Natural Science Foundation of China under the grant number 62172008 and National Natural Science Fund for the Excellent Young Scientists Fund Program (Overseas).
Publisher Copyright:
© 2022 VLDB Endowment.
PY - 2022
Y1 - 2022
N2 - Existing general-purpose frameworks for gigantic model training, i.e., dense models with billions of parameters, cannot scale efficiently in cloud environments with varying networking conditions due to large communication overheads. In this paper, we propose MiCS, which Minimizes the Communication Scale to bring down communication overhead. Specifically, by decreasing the number of participants in a communication collective, MiCS can utilize heterogeneous network bandwidth, reduce network traffic over slower links, reduce communication latency to maintain high network bandwidth utilization, and amortize the expensive global gradient synchronization overhead. Our evaluation on AWS shows that the system throughput of MiCS is up to 2.89× that of state-of-the-art large-model training systems. MiCS achieves near-linear scaling efficiency, which is up to 1.27× that of DeepSpeed. MiCS allows us to train a proprietary model with 100 billion parameters on 512 GPUs with 99.4% weak-scaling efficiency, and it is able to saturate over 54.5% of the theoretical computation power of each GPU on a public cloud with less GPU memory and more restricted networks than DGX-A100 clusters.
AB - Existing general-purpose frameworks for gigantic model training, i.e., dense models with billions of parameters, cannot scale efficiently in cloud environments with varying networking conditions due to large communication overheads. In this paper, we propose MiCS, which Minimizes the Communication Scale to bring down communication overhead. Specifically, by decreasing the number of participants in a communication collective, MiCS can utilize heterogeneous network bandwidth, reduce network traffic over slower links, reduce communication latency to maintain high network bandwidth utilization, and amortize the expensive global gradient synchronization overhead. Our evaluation on AWS shows that the system throughput of MiCS is up to 2.89× that of state-of-the-art large-model training systems. MiCS achieves near-linear scaling efficiency, which is up to 1.27× that of DeepSpeed. MiCS allows us to train a proprietary model with 100 billion parameters on 512 GPUs with 99.4% weak-scaling efficiency, and it is able to saturate over 54.5% of the theoretical computation power of each GPU on a public cloud with less GPU memory and more restricted networks than DGX-A100 clusters.
UR - http://www.scopus.com/inward/record.url?scp=85140411871&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140411871&partnerID=8YFLogxK
U2 - 10.14778/3561261.3561265
DO - 10.14778/3561261.3561265
M3 - Conference article
AN - SCOPUS:85140411871
SN - 2150-8097
VL - 16
SP - 37
EP - 50
JO - Proceedings of the VLDB Endowment
JF - Proceedings of the VLDB Endowment
IS - 1
Y2 - 28 August 2023 through 1 September 2023
ER -