
The 2018 International Symposium on Information Theory and Its Applications (ISITA2018)

Year: 2018

Session Number: We-AM-1-1

Number: We-AM-1-1.1

Reducing the Average Delay in Gradient Coding

Ming Hui Jovan Lee, Ivan Tjuawinata, Han Mao Kiah

pp. 523-527

Publication Date: 2018/10/18

Online ISSN: 2188-5079

DOI: 10.34385/proc.55.We-AM-1-1.1


Summary:
Tandon et al. (2017) introduced a coding-theoretic framework to alleviate the problem of stragglers in distributed learning. Following Tandon et al., many authors have provided explicit schemes that compute a certain function from any n - s replies out of n workers in the worst case. In this work, we focus on reducing the expected delay: we modify existing schemes so that fewer than n - s replies suffice in most cases. In particular, we provide a simple modification of existing optimal schemes and demonstrate that, with this modification, the expected delay converges to the fundamental delay. Additionally, for specific parameters, we reduce the required number of replies further, so that the expected delay converges to the fundamental delay more quickly.
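
As a concrete illustration of the underlying framework (a sketch of the fractional-repetition scheme of Tandon et al. (2017), not the modified scheme proposed in this paper), the following Python simulation shows why fewer than n - s replies often suffice: the master can stop as soon as it has heard from one worker in every replication group. The parameter choices, toy gradients, and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n, s = 12, 3            # n workers, tolerance for up to s stragglers
g = s + 1               # replication factor; assumes (s + 1) divides n
d = 4                   # gradient dimension
n_groups = n // g

# Toy partial gradients, one per data partition (n partitions).
partial_grads = rng.normal(size=(n, d))
full_grad = partial_grads.sum(axis=0)

# Fractional-repetition layout: group k consists of workers
# {k*g, ..., k*g + g - 1}, all assigned the same g partitions;
# every worker in a group replies with the sum of its group's
# partial gradients.
def reply(w):
    k = w // g
    return partial_grads[k * g:(k + 1) * g].sum(axis=0)

# Replies arrive in a random order; the master decodes as soon as
# it has one reply from each of the n_groups groups. With at most
# s stragglers this always happens by reply n - s, but in a random
# arrival order it usually happens much earlier.
order = rng.permutation(n)
decoded = np.zeros(d)
covered = set()
for t, w in enumerate(order, start=1):
    k = w // g
    if k not in covered:
        covered.add(k)
        decoded += reply(w)
    if len(covered) == n_groups:
        break

assert np.allclose(decoded, full_grad)
print(f"decoded after {t} replies (worst case: {n - s})")

Averaged over many arrival orders, this stopping time sits well below the worst-case n - s replies, which is the kind of expected-delay reduction the paper targets; the paper's actual contribution is a modification of existing optimal schemes that makes the expected delay converge to the fundamental delay.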