International Workshop on Smart Info-Media Systems in Asia

2022

Session Number: RS2

Number: RS2-1

An Iterative Method of LAD Regression using Gradient Boosting and Its Application to Image Coding

Tomoki Okuno, Shinji Fukuma, Shin-ichiro Mori

pp.96-100

Publication Date: 2022/9/15

Online ISSN: 2188-5079

DOI: 10.34385/proc.69.RS2-1

Summary:
When regression coefficients are estimated, a norm of the residual is generally used as the objective function. When the objective is the residual L2 norm, the coefficients are obtained in closed form as the solution of the normal equations. When the L1 norm is used (LAD regression), the problem can be solved by linear programming. However, linear programming can take a long time depending on the size of the problem, and in many cases an approximate solution cannot be obtained by stopping the computation partway through. This paper therefore examines an iterative method for L1-norm regression based on boosting, which repeats simple regressions. LS-boosting is an iterative method based on gradient boosting that performs a simple L2-norm regression at each boosting stage. The proposed method (L1-boosting) replaces this with an L1 simple regression, whose coefficient is a weighted median and can be computed easily. In our experiments, we design a lossless predictive coder for image signals using LAD regression, evaluated by first-order entropy (bpp) and the average residual norm. The first experiment shows that LAD regression yields a higher compression ratio than L2 regression, but that L1-boosting takes longer to execute than linear programming. In the second experiment, the iteration is therefore stopped as soon as the first-order entropy reaches the value obtained by L1-cvx (the linear-programming solution) or below. Under this stopping condition, the execution time was shorter than that of linear programming. This result is expected to reduce the runtime of data compression based on linear prediction with L1-norm minimization.
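
The summary does not spell out the algorithm, but the following is a minimal sketch in Python of componentwise boosting in which each stage fits an L1 simple regression whose coefficient is a weighted median, as described above. The shrinkage factor, the feature-selection rule (largest drop in the L1 residual), and the fixed stage count are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: a point m minimizing sum(weights * |values - m|)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    # first index where the cumulative weight reaches half the total weight
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return v[idx]

def lad_simple_coef(x, r):
    """L1 simple regression through the origin: argmin_b sum |r - b*x|.
    Rewriting the objective as sum |x| * |r/x - b| shows that b is the
    weighted median of r/x with weights |x|."""
    mask = x != 0
    if not np.any(mask):
        return 0.0
    return weighted_median(r[mask] / x[mask], np.abs(x[mask]))

def l1_boosting(X, y, n_stages=200, shrinkage=0.1):
    """Boosting with an L1 simple regression at each stage (sketch).
    Each stage selects the feature giving the largest drop in the L1
    residual and updates its coefficient by a shrunken step."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()
    for _ in range(n_stages):
        best_j, best_b, best_loss = None, 0.0, np.sum(np.abs(r))
        for j in range(p):
            b = lad_simple_coef(X[:, j], r)
            loss = np.sum(np.abs(r - b * X[:, j]))
            if loss < best_loss:
                best_j, best_b, best_loss = j, b, loss
        if best_j is None:  # no feature improves the L1 residual
            break
        beta[best_j] += shrinkage * best_b
        r -= shrinkage * best_b * X[:, best_j]
    return beta
```

In a predictive-coding setting, the loop above would instead be terminated once a target first-order entropy is reached, which is the stopping condition examined in the second experiment.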