Asia-Pacific Network Operations and Management Symposium

2020

Session Number: TS7

Session:

Number: TS7-3

Optimized Quantization for Convolutional Deep Neural Networks in Federated Learning

YouJun Kim, Choong Seon Hong

pp. 150-154

Publication Date: 2020/9/22

Online ISSN: 2188-5079

DOI: 10.34385/proc.62.TS7-3


Summary:
Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when the central server cannot collect data. However, the absence of data on the central server means that data-driven deep network compression is not possible. Deep network compression is important because it enables inference even on devices with low capacity. In this paper, we propose a new quantization method that significantly reduces FLOPS (floating-point operations per second) in deep networks without leaking user data in federated learning. The quantization parameters are trained by the ordinary learning loss and updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL learns deep networks and quantization while maintaining security in a distributed network environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks. We show that OQFL is feasible on the most representative convolutional deep neural networks. Surprisingly, OQFL (4 bits) can preserve the accuracy of conventional federated learning (32 bits) on the test dataset.
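The abstract states that the quantization parameters are trained by the ordinary learning loss and updated together with the weights. The sketch below is not the authors' code; it illustrates one common way to realize such a trainable quantizer in PyTorch, using a learnable step size and a straight-through estimator. The class name LearnedQuant, the initial scale value, and the straight-through trick are assumptions for illustration only.

import torch
import torch.nn as nn

class LearnedQuant(nn.Module):
    """Symmetric uniform quantizer whose step size (scale) is a trainable
    parameter, so it can be updated by the task loss alongside the weights."""
    def __init__(self, bits: int = 4):
        super().__init__()
        self.qmax = 2 ** (bits - 1) - 1            # e.g. 7 levels each side for 4-bit signed
        self.scale = nn.Parameter(torch.tensor(0.1))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # Quantize-dequantize. round() has zero gradient almost everywhere,
        # so a straight-through estimator passes gradients to both the
        # weights and the scale parameter.
        q = torch.clamp(w / self.scale, -self.qmax - 1, self.qmax)
        q_rounded = q + (torch.round(q) - q).detach()
        return q_rounded * self.scale

# Usage sketch: gradients from an ordinary loss reach both w and quant.scale.
quant = LearnedQuant(bits=4)
w = torch.randn(3, 3, requires_grad=True)
loss = (quant(w) ** 2).sum()
loss.backward()

In a federated setting, each client would train its weights and quantization parameters jointly on local data and upload only these parameters; the server could then aggregate both in a FedAvg-style average, so no raw user data ever leaves the device.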