Presentation 2022-01-21
On the Study of Second-Order Training Algorithm using Matrix Diagonalization based on Hutchinson estimation
Ryo Yamatomi, Shahrzad Mahboubi, Hiroshi Ninomiya,
Abstract(in Japanese) (See Japanese page)
Abstract(in English) In this study, we propose a new training algorithm based on a second-order approximated gradient method that aims to reduce the computational cost of the Newton method. In neural network (NN) training, first-order gradient methods are commonly used because of their low computational cost. However, when applied to highly nonlinear problems, first-order methods converge slowly, and the optimization error cannot be reduced effectively within finite time despite these advantages. On the other hand, training algorithms based on the Newton method, which are considered effective for such problems, are computationally expensive because they use Hessian matrices, making them difficult to apply to large-scale nonlinear problems. In this study, we focus on reducing the computational cost of the Newton method and propose a new training algorithm, the Hutchinson diagonalized Newton method (HdN), which replaces the full Hessian with a diagonal approximation obtained from the Hutchinson estimator. We apply the proposed method to NN training, and its performance is demonstrated through computer simulations.
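The core idea named in the abstract — estimating the diagonal of the Hessian with the Hutchinson estimator, diag(H) ≈ E[z ⊙ (Hz)] for Rademacher vectors z, and then taking a Newton-like step with that diagonal — can be sketched as follows. This is a minimal illustration of the general estimator, not the paper's implementation; the function names, the damping constant, and the toy quadratic problem are assumptions for the example.

```python
import numpy as np

def hutchinson_diag(hvp, dim, num_samples=1000, rng=None):
    """Estimate diag(H) via Hutchinson's method: diag(H) ~ E[z * (H @ z)],
    where z has i.i.d. Rademacher (+1/-1) entries and hvp(z) returns H @ z.
    Only Hessian-vector products are needed, never the full Hessian."""
    rng = np.random.default_rng(rng)
    est = np.zeros(dim)
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        est += z * hvp(z)          # z * Hz is an unbiased estimate of diag(H)
    return est / num_samples

def diag_newton_step(w, grad, diag_h, damping=1e-4):
    """Newton-like update using only the estimated diagonal of the Hessian:
    w <- w - grad / (diag(H) + damping). Damping guards against tiny or
    negative diagonal entries (an assumed safeguard, not from the paper)."""
    return w - grad / (np.abs(diag_h) + damping)

# Toy check on a 2-D quadratic with a known Hessian.
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])
d_est = hutchinson_diag(lambda z: H @ z, dim=2, num_samples=5000, rng=0)
# d_est should be close to the true diagonal [3.0, 2.0].
```

The key cost saving is that `hutchinson_diag` touches the Hessian only through products `H @ z`, which for an NN loss can be computed by automatic differentiation at roughly the cost of a few gradient evaluations, avoiding the O(n^2) storage of the full Hessian.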
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Neural network / training algorithm / Newton method / Hessian approximation scheme / Hutchinson estimator
Paper # NLP2021-89,MICT2021-64,MBE2021-50
Date of Issue 2022-01-14 (NLP, MICT, MBE)

Conference Information
Committee NLP / MICT / MBE / NC
Conference Date 2022/1/21 (3 days)
Place (in Japanese) (See Japanese page)
Place (in English) Online
Topics (in Japanese) (See Japanese page)
Topics (in English)
Chair Takuji Kosaka(Chukyo Univ.) / Eisuke Hanada(Saga Univ.) / Ryuhei Okuno(Setsunan Univ.) / Rieko Osu(Waseda Univ.)
Vice Chair Akio Tsuneda(Kumamoto Univ.) / Hirokazu Tanaka(Hiroshima City Univ.) / Daisuke Anzai(Nagoya Inst. of Tech.) / Junichi Hori(Niigata Univ.) / Hiroshi Yamakawa(Univ of Tokyo)
Secretary Akio Tsuneda(Kagawa Univ.) / Hirokazu Tanaka(Sojo Univ.) / Daisuke Anzai(Yokohama National Univ.) / Junichi Hori(KISTEC) / Hiroshi Yamakawa(Osaka Electro-Communication Univ)
Assistant Hideyuki Kato(Oita Univ.) / Yuichi Yokoi(Nagasaki Univ.) / Takahiro Ito(Hiroshima City Univ) / Kento Takabayashi(Okayama Pref. Univ.) / Takuya Nishikawa(National Cerebral and Cardiovascular Center Hospital) / Jun Akazawa(Meiji Univ. of Integrative Medicine) / Emi Yuda(Tohoku Univ) / Nobuhiko Wagatsuma(Toho Univ.) / Tomoki Kurikawa(KMU)

Paper Information
Registration To Technical Committee on Nonlinear Problems / Technical Committee on Healthcare and Medical Information Communication Technology / Technical Committee on ME and Bio Cybernetics / Technical Committee on Neurocomputing
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) On the Study of Second-Order Training Algorithm using Matrix Diagonalization based on Hutchinson estimation
Sub Title (in English)
Keyword(1) Neural network
Keyword(2) training algorithm
Keyword(3) Newton method
Keyword(4) Hessian approximation scheme
Keyword(5) Hutchinson estimator
1st Author's Name Ryo Yamatomi
1st Author's Affiliation Shonan Institute of Technology(Shonan Inst. Tec.)
2nd Author's Name Shahrzad Mahboubi
2nd Author's Affiliation Shonan Institute of Technology(Shonan Inst. Tec.)
3rd Author's Name Hiroshi Ninomiya
3rd Author's Affiliation Shonan Institute of Technology(Shonan Inst. Tec.)
Date 2022-01-21
Paper # NLP2021-89,MICT2021-64,MBE2021-50
Volume (vol) vol.121
Number (no) NLP-335,MICT-336,MBE-337
Page pp.67-70 (NLP), pp.67-70 (MICT), pp.67-70 (MBE)
#Pages 4
Date of Issue 2022-01-14 (NLP, MICT, MBE)