Presentation | 2022-10-05 [Invited Lecture] Reducing Device Processing Load and Communication Overhead by Distillation in Federated Learning Hiromichi Yajima, Takumi Miyoshi, Taku Yamazaki, Shota Ono |
---|---|
Abstract(in Japanese) | (See Japanese page) |
Abstract(in English) | In recent years, machine learning has been widely used to discover rules in, and predict future outcomes from, large amounts of data. Current machine learning commonly aggregates data centrally on a server, but the drastic increase in training data makes computation on a single server difficult. Distributed approaches such as federated learning have therefore attracted attention as a way to avoid concentrating the load on the server. Nevertheless, since machine learning requires a huge amount of computation, it is still difficult to run the federated learning process on small devices. This paper proposes a method that reduces device processing load and communication overhead by applying distillation to federated learning. |
Keyword(in Japanese) | (See Japanese page) |
Keyword(in English) | Machine learning / Federated learning / Distillation / Processing load / Communication overhead |
Paper # | NS2022-87 |
Date of Issue | 2022-09-28 (NS) |
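The abstract describes combining federated learning with distillation to cut device load and communication. The paper's exact scheme is not given here, but a common federated-distillation pattern has each client upload only its soft predictions on a shared public dataset, rather than full model weights, which the server then averages into teacher targets. A minimal sketch of that pattern follows; all sizes (`NUM_CLIENTS`, `SHARED_SAMPLES`, `WEIGHTS_PER_MODEL`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
NUM_CLIENTS = 3
NUM_CLASSES = 10
SHARED_SAMPLES = 100         # public distillation set known to all clients
WEIGHTS_PER_MODEL = 100_000  # parameters a client would upload under FedAvg

def softmax(z, axis=-1):
    """Numerically stable softmax over the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Each client uploads only its soft labels on the shared set
# (here random logits stand in for a locally trained small model).
client_logits = [rng.normal(size=(SHARED_SAMPLES, NUM_CLASSES))
                 for _ in range(NUM_CLIENTS)]
client_probs = [softmax(logits) for logits in client_logits]

# The server aggregates the soft labels (a simple mean here) and
# broadcasts the ensemble "teacher" targets for local distillation.
teacher_targets = np.mean(client_probs, axis=0)

# Per-round upload per client: soft labels vs. full model weights.
floats_distillation = SHARED_SAMPLES * NUM_CLASSES
floats_fedavg = WEIGHTS_PER_MODEL
print(floats_distillation, floats_fedavg)
```

With these illustrative sizes, a client uploads 1,000 floats of soft labels instead of 100,000 weights, which is the kind of communication saving the abstract points to; the actual reduction depends on the model and shared-set sizes in the paper.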
Conference Information | |
Committee | NS |
---|---|
Conference Date | 2022/10/5 (3 days) |
Place (in Japanese) | (See Japanese page) |
Place (in English) | Hokkaido University + Online |
Topics (in Japanese) | (See Japanese page) |
Topics (in English) | Network architecture (5G, Local 5G, Beyond 5G, Mobile networks, Ad-hoc and sensor networks, Overlay and P2P networks, Programmable networks, SDN/NFV, IoT, Network slicing), Next generation packet transport (High speed Ethernet, IP over WDM, Multi-service packet technology, MPLS), Grid, etc. |
Chair | Tetsuya Oishi (NTT) |
Vice Chair | Takumi Miyoshi (Shibaura Inst. of Tech.) |
Secretary | Takumi Miyoshi (NTT) |
Assistant | Kotaro Mihara (NTT) |
Paper Information | |
Registration To | Technical Committee on Network Systems |
---|---|
Language | JPN |
Title (in Japanese) | (See Japanese page) |
Sub Title (in Japanese) | (See Japanese page) |
Title (in English) | [Invited Lecture] Reducing Device Processing Load and Communication Overhead by Distillation in Federated Learning |
Sub Title (in English) | |
Keyword(1) | Machine learning |
Keyword(2) | Federated learning |
Keyword(3) | Distillation |
Keyword(4) | Processing load |
Keyword(5) | Communication overhead |
1st Author's Name | Hiromichi Yajima |
1st Author's Affiliation | Shibaura Institute of Technology (Shibaura Inst. of Tech.) |
2nd Author's Name | Takumi Miyoshi |
2nd Author's Affiliation | Shibaura Institute of Technology (Shibaura Inst. of Tech.) |
3rd Author's Name | Taku Yamazaki |
3rd Author's Affiliation | Shibaura Institute of Technology (Shibaura Inst. of Tech.) |
4th Author's Name | Shota Ono |
4th Author's Affiliation | The University of Tokyo (The Univ. of Tokyo) |
Date | 2022-10-05 |
Paper # | NS2022-87 |
Volume (vol) | vol.122 |
Number (no) | NS-198 |
Page | pp.29-32 (NS) |
#Pages | 4 |
Date of Issue | 2022-09-28 (NS) |