Presentation | 2017-07-13 On the Efficiency of Limited-Memory quasi-Newton Training using Second-Order Approximation Gradient Model with Inertial Term Shahrzad Mahboubi, Hiroshi Ninomiya |
---|---|
Abstract(in Japanese) | (See Japanese page) |
Abstract(in English) | In recent years, as data have grown in scale, neural networks are also expected to grow in scale, and the memory required for training becomes enormous as the number of learnable parameters increases. To deal with this problem, quasi-Newton algorithms incorporating the limited-memory scheme are known to be effective for large-scale optimization problems. In this paper, we focus on the second-order approximation gradient model with an inertial term incorporating the limited-memory scheme. We previously proposed a quasi-Newton method based on the second-order approximation gradient model with an inertial term, namely Nesterov's accelerated quasi-Newton method, which improves the convergence speed of training. This research studies the effectiveness of the limited-memory scheme for Nesterov's accelerated quasi-Newton method. We apply the proposed method to neural network training and demonstrate its effectiveness through computer simulations. |
Keyword(in Japanese) | (See Japanese page) |
Keyword(in English) | Limited-memory quasi-Newton method / Second-order approximation gradient model with inertial term / Nesterov's accelerated quasi-Newton method / neural network / training algorithm |
Paper # | NLP2017-32 |
Date of Issue | 2017-07-06 (NLP) |
Conference Information | |
Committee | NLP |
---|---|
Conference Date | 2017/7/13(2days) |
Place (in Japanese) | (See Japanese page) |
Place (in English) | Miyako Island Marine Terminal |
Topics (in Japanese) | (See Japanese page) |
Topics (in English) | etc. |
Chair | Masaharu Adachi(Tokyo Denki Univ.) |
Vice Chair | Norikazu Takahashi(Okayama Univ.) |
Secretary | Norikazu Takahashi(Nagaoka Univ. of Tech.) |
Assistant | Toshihiro Tachibana(Shonan Inst. of Tech.) / Masayuki Kimura(Kyoto Univ.) |
Paper Information | |
Registration To | Technical Committee on Nonlinear Problems |
---|---|
Language | JPN |
Title (in Japanese) | (See Japanese page) |
Sub Title (in Japanese) | (See Japanese page) |
Title (in English) | On the Efficiency of Limited-Memory quasi-Newton Training using Second-Order Approximation Gradient Model with Inertial Term |
Sub Title (in English) | |
Keyword(1) | Limited-memory quasi-Newton method |
Keyword(2) | Second-order approximation gradient model with inertial term |
Keyword(3) | Nesterov's accelerated quasi-Newton method |
Keyword(4) | neural network |
Keyword(5) | training algorithm |
1st Author's Name | Shahrzad Mahboubi |
1st Author's Affiliation | Shonan Institute of Technology (SIT) |
2nd Author's Name | Hiroshi Ninomiya |
2nd Author's Affiliation | Shonan Institute of Technology (SIT) |
Date | 2017-07-13 |
Paper # | NLP2017-32 |
Volume (vol) | vol.117 |
Number (no) | NLP-121 |
Page | pp.23-28 (NLP) |
#Pages | 6 |
Date of Issue | 2017-07-06 (NLP) |
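The abstract describes combining a limited-memory quasi-Newton update with Nesterov's acceleration (a gradient evaluated at a momentum "lookahead" point). As a rough illustration only — this is not the authors' implementation, and all function names and parameter values below (`two_loop`, `lnaq`, `mu`, `lr`, `m`) are hypothetical — the general idea of pairing the standard L-BFGS two-loop recursion with a Nesterov-style lookahead gradient can be sketched as:

```python
import numpy as np

def two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns an approximation of
    H^{-1} @ grad built from the last stored curvature pairs (s, y)."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    if s_list:  # initial Hessian scaling gamma = (s'y)/(y'y) on the newest pair
        s, y = s_list[-1], y_list[-1]
        q = q * ((s @ y) / (y @ y))
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q = q + (a - b) * s
    return q

def lnaq(f_grad, w, mu=0.8, lr=1.0, m=8, iters=100):
    """Sketch of a limited-memory Nesterov-accelerated quasi-Newton loop
    (illustrative only; no line search, fixed momentum coefficient mu)."""
    v = np.zeros_like(w)
    s_list, y_list = [], []
    for _ in range(iters):
        w_look = w + mu * v                # Nesterov lookahead point
        g = f_grad(w_look)                 # gradient at the lookahead point
        d = -two_loop(g, s_list, y_list)   # quasi-Newton search direction
        v = mu * v + lr * d                # inertial (momentum) update
        w = w + v
        s = w - w_look                     # curvature pair measured from lookahead
        y = f_grad(w) - g
        if y @ s > 1e-10:                  # curvature condition before storing
            s_list.append(s)
            y_list.append(y)
            if len(s_list) > m:            # limited-memory: keep only m pairs
                s_list.pop(0)
                y_list.pop(0)
    return w
```

The limited-memory aspect is the fixed-length history of `(s, y)` pairs: memory grows with `m` times the parameter dimension rather than quadratically, which is the property the abstract highlights for large-scale networks.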