| Field | Value |
|---|---|
| Conference | 2018 Forum on Information Technology (FIT) |
| Conference code | F |
| Year | 2018 |
| Issue date | 2018-09-12 |
| Session number | 2b |
| Session name | Software (2) |
| Presentation date | 2018-09-19 |
| Venue (room) | Building C, Room C33 |
| Paper number | IB-002 |
| Title | Involving CPUs into Multi-GPU Deep Learning |
| Authors | Tung D. Le, 関山太朗, 根岸康, 今井晴基, 河内谷清久仁 |
| Keywords | Deep learning, data parallelism, speedup, GPU |
| Abstract | Data parallelism is widely used for training deep neural networks on multiple GPUs in a single machine thanks to its simplicity. However, its scalability is bound by the number of data transfers, mainly for exchanging and accumulating gradients among the GPUs. In this paper, we present a novel approach to data parallel training, called CPU-GPU data parallel (CGDP) training, that utilizes free CPU time on the host to speed up training on the GPUs. We also present a cost model for analyzing and comparing the performance of both typical data parallel training and CPU-GPU data parallel training. |
| Full-text PDF | PDF download (138.2 KB) |
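The core idea in the abstract — letting otherwise idle host CPU time handle the gradient accumulation that normally serializes multi-GPU data-parallel training — can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the worker/accumulator functions, shapes, and learning rate are all assumptions, and Python threads stand in for per-GPU streams.

```python
# Hedged sketch of data-parallel training where the host CPU
# averages per-worker gradients (the CGDP idea from the abstract).
# All names here are illustrative, not from the paper.
from concurrent.futures import ThreadPoolExecutor

def compute_gradients(worker_id, params):
    # Stand-in for a per-GPU backward pass on one data shard:
    # each worker returns a gradient the same shape as params.
    return [p * 0.1 + worker_id for p in params]

def accumulate_on_cpu(grad_lists):
    # The host CPU averages the per-worker gradients, freeing
    # the GPUs to start the next forward pass sooner.
    n = len(grad_lists)
    return [sum(gs) / n for gs in zip(*grad_lists)]

def train_step(params, num_workers=4, lr=0.01):
    # Run the per-worker gradient computations concurrently,
    # then apply one averaged SGD update on the host.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        grads = list(pool.map(
            lambda w: compute_gradients(w, params),
            range(num_workers)))
    avg = accumulate_on_cpu(grads)
    return [p - lr * g for p, g in zip(params, avg)]

print(train_step([1.0, 2.0, 3.0]))
```

In a real CGDP setup the accumulation would overlap with GPU compute rather than follow it, which is exactly the scheduling question the paper's cost model analyzes.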