Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
CCS, NLP |
2022-06-09 16:50 |
Osaka |
(Primary: On-site, Secondary: Online) |
A Study on Accelerating Stochastic Weight Difference Propagation with Momentum Term Shahrzad Mahboubi, Hiroshi Ninomiya (Shonan Inst. of Tech.) NLP2022-9 CCS2022-9 |
With the rapid development of the IoT, there has been an increasing need to process the data on microcomputers equipped ... [more] |
NLP2022-9 CCS2022-9 pp.40-45 |
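The entry above concerns accelerating stochastic training with a momentum term. As a generic illustration only (a classical momentum update on a noisy gradient, not the authors' weight-difference-propagation scheme; the toy objective and step sizes are arbitrary assumptions):

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One classical momentum update: v accumulates past gradients, w moves by v."""
    v = mu * v - lr * grad
    w = w + v
    return w, v

# Toy demo on the quadratic loss 0.5*||w||^2, whose gradient is simply w.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
v = np.zeros_like(w)
for _ in range(100):
    grad = w + 0.01 * rng.normal(size=3)  # noisy ("stochastic") gradient
    w, v = sgd_momentum_step(w, v, grad)
print(w)  # should end up close to the minimizer at the origin
```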
NLP, MICT, MBE, NC (Joint) [detail] |
2022-01-21 16:00 |
Online |
Online |
On the Study of Second-Order Training Algorithm using Matrix Diagonalization based on Hutchinson estimation Ryo Yamatomi, Shahrzad Mahboubi, Hiroshi Ninomiya (Shonan Inst. of Tech.) NLP2021-89 MICT2021-64 MBE2021-50 |
In this study, we propose a new training algorithm based on the second-order approximated gradient method, which aims to... [more] |
NLP2021-89 MICT2021-64 MBE2021-50 pp.67-70 |
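The abstract above mentions Hutchinson-style estimation of a matrix diagonal. A minimal, generic sketch of that estimator only (not the proposed second-order training algorithm): for Rademacher vectors z, the expectation of z * (A z) equals diag(A), so averaging over samples approximates the diagonal. The explicit matrix below stands in for Hessian-vector products.

```python
import numpy as np

def hutchinson_diag(matvec, dim, num_samples=1000, rng=None):
    """Estimate diag(A) using only matrix-vector products with A.

    Relies on E[z * (A z)] = diag(A) when z has i.i.d. Rademacher entries."""
    rng = rng or np.random.default_rng(0)
    est = np.zeros(dim)
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        est += z * matvec(z)
    return est / num_samples

# Demo with an explicit symmetric matrix standing in for a Hessian.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = (A + A.T) / 2
approx = hutchinson_diag(lambda v: A @ v, dim=5, rng=rng)
print(np.round(approx, 2))
print(np.round(np.diag(A), 2))  # the estimate approximates the true diagonal
```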
PRMU, IPSJ-CVIM |
2021-03-05 09:45 |
Online |
Online |
Improved Speech Separation Performance from Monaural Mixed Speech Based on Deep Embedding Network Shaoxiang Dang, Tetsuya Matsumoto, Hiroaki Kudo (Nagoya Univ.), Yoshinori Takeuchi (Daido Univ.) PRMU2020-85 |
Speech separation refers to the separation of utterances in which multiple people are speaking simultaneously. The idea ... [more] |
PRMU2020-85 pp.91-96 |
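The entry above separates monaural mixed speech with a deep embedding network. As a hedged sketch of the generic deep-clustering idea such methods build on (the embedding network itself is omitted and the embeddings below are random placeholders): each time-frequency bin is mapped to an embedding, the embeddings are clustered with k-means, and the cluster assignments become per-speaker binary masks.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder embeddings: T*F time-frequency bins, each mapped to a D-dim vector
# by some (omitted) embedding network.
T, F, D, num_speakers = 100, 129, 20, 2
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(T * F, D))

# Cluster the bins and turn cluster assignments into one binary mask per speaker.
labels = KMeans(n_clusters=num_speakers, n_init=10, random_state=0).fit_predict(embeddings)
masks = [(labels == k).reshape(T, F).astype(float) for k in range(num_speakers)]
print(masks[0].shape)  # each mask covers the full time-frequency plane
```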
NLP, NC (Joint) |
2020-01-24 13:50 |
Okinawa |
Miyakojima Marine Terminal |
Ternarized Backpropagation for Edge AI and its FPGA Implementation Tatsuya Kaneko, Yoshiharu Yamagishi, Hiroshi Momose, Tetsuya Asai (Hokkaido Univ.) NLP2019-95 |
In recent years, there has been growing interest in machine/deep learning. Following this movement, many types ... [more] |
NLP2019-95 pp.53-58 |
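The entry above presents ternarized backpropagation for edge AI and its FPGA implementation. A generic, hedged sketch of weight ternarization only (one common heuristic, not necessarily the authors' scheme): map each weight to {-1, 0, +1} using a threshold proportional to the mean absolute weight.

```python
import numpy as np

def ternarize(w, threshold_scale=0.7):
    """Quantize weights to {-1, 0, +1}; the 0.7 * mean(|w|) threshold is one
    common heuristic, assumed here purely for illustration."""
    delta = threshold_scale * np.mean(np.abs(w))
    return np.where(np.abs(w) > delta, np.sign(w), 0.0)

w = np.array([0.9, -0.05, 0.1, -0.8, 0.02])
print(ternarize(w))  # [ 1.  0.  0. -1.  0.]
```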
NLP, MSS (Joint) |
2019-03-15 14:55 |
Fukui |
Bunkyo Camp., Univ. of Fukui |
On the Influence of Momentum term in quasi-Newton method Shahrzad Mahboubi (SIT), Indrapriyadarsini S (Shizuoka Univ.), Hiroshi Ninomiya (SIT), Hideki Asai (Shizuoka Univ.) NLP2018-137 |
Nesterov's Accelerated quasi-Newton (NAQ) method was derived from the quadratic approximation of the error function ... [more] |
NLP2018-137 pp.69-74 |
SIP, EA, SP, MI (Joint) [detail] |
2018-03-19 13:00 |
Okinawa |
|
[Poster Presentation]
An Experimental Study on Segmental and Prosodic Comparison of Utterances for Automatic Assessment of Dubbing Speech Takuya Ozuru, Nobuaki Minematsu, Daisuke Saito (Univ. of Tokyo) EA2017-114 SIP2017-123 SP2017-97 |
In Japanese language education, especially in its speech training, dubbing-based training has gained huge popularity.... [more] |
EA2017-114 SIP2017-123 SP2017-97 pp.75-80 |
IN |
2018-01-23 11:15 |
Aichi |
WINC AICHI |
Retraining anomaly detection model using Autoencoder Yasuhiro Ikeda, Keisuke Ishibashi, Yusuke Nakano, Keishiro Watanabe, Ryoichi Kawahara (NTT) IN2017-84 |
An autoencoder has been attracting much attention as an anomaly detection algorithm. The autoencoder enables unsupervis... [more] |
IN2017-84 pp.77-82 |
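The entry above concerns retraining an autoencoder-based anomaly detector. A minimal, generic sketch of the underlying detection idea only (not NTT's retraining method; the data, network size, and threshold are placeholder assumptions): train an autoencoder on normal data and flag samples whose reconstruction error exceeds a threshold.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 10))            # "normal" training data
test = np.vstack([rng.normal(0.0, 1.0, size=(20, 10)),   # normal test samples
                  rng.normal(5.0, 1.0, size=(5, 10))])    # injected anomalies

# A small autoencoder: the network is trained to reproduce its own input
# through a narrow bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

# Per-sample reconstruction error; threshold chosen from the training data.
train_err = np.mean((ae.predict(normal) - normal) ** 2, axis=1)
threshold = np.percentile(train_err, 99)
test_err = np.mean((ae.predict(test) - test) ** 2, axis=1)
print(test_err > threshold)  # the injected anomalies should come out True
```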
NLP |
2017-07-13 13:25 |
Okinawa |
Miyako Island Marine Terminal |
On the Efficiency of Limited-Memory quasi-Newton Training using Second-Order Approximation Gradient Model with Inertial Term Shahrzad Mahboubi, Hiroshi Ninomiya (SIT) NLP2017-32 |
In recent years, along with large-scale data, the scale of neural networks is expected to grow as well. Therefo... [more] |
NLP2017-32 pp.23-28 |
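The entry above studies limited-memory quasi-Newton training with an inertial term. For background only, here is the standard L-BFGS two-loop recursion, i.e. the generic limited-memory machinery without the inertial modification the paper adds; the toy quadratic and fixed step are assumptions made in place of a line search.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: returns -H*grad, where H is the implicit L-BFGS
    inverse-Hessian approximation built from pairs s_k = w_{k+1}-w_k and
    y_k = g_{k+1}-g_k."""
    q = grad.copy()
    alphas = []
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        q -= a * y
        alphas.append(a)
    if s_list:  # usual scaling gamma = s.y / y.y from the most recent pair
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r

# Tiny demo on f(w) = 0.5 * w'Aw with gradient A @ w.
A = np.diag([1.0, 2.0])
w = np.array([1.0, 1.0]); g = A @ w
s_list, y_list = [], []
for _ in range(5):
    d = lbfgs_direction(g, s_list, y_list)
    w_new = w + d                  # unit step assumed instead of a line search
    g_new = A @ w_new
    s_list.append(w_new - w); y_list.append(g_new - g)
    w, g = w_new, g_new
print(w)  # should approach the minimizer at the origin
```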
MBE, NC (Joint) |
2017-03-13 13:10 |
Tokyo |
Kikai-Shinko-Kaikan Bldg. |
Application of Forward-Propagation Learning Rule to Three-Layer Auto-Encoder Tadamasa Kurosawa, Naohiro Fukumura (Toyohashi Univ. of Tech.) NC2016-82 |
With the development of Deep Learning, interest in the three-layer Auto-Encoder for pre-training has risen. On the othe... [more] |
NC2016-82 pp.109-114 |
EA, SP, SIP |
2016-03-28 13:15 |
Oita |
Beppu International Convention Center B-ConPlaza |
[Poster Presentation]
An evaluation of acoustic-to-articulatory inversion mapping with latent trajectory Gaussian mixture model Patrick Lumban Tobing (NAIST), Tomoki Toda (Nagoya Univ./NAIST), Hirokazu Kameoka (NTT), Satoshi Nakamura (NAIST) EA2015-85 SIP2015-134 SP2015-113 |
In this report, we present an evaluation of acoustic-to-articulatory inversion mapping based on latent trajectory Gauss... [more] |
EA2015-85 SIP2015-134 SP2015-113 pp.111-116 |
NC, NLP (Joint) |
2016-01-29 12:10 |
Fukuoka |
Kyushu Institute of Technology |
Accelerated quasi-Newton Training using Nesterov's Gradient Method Hiroshi Ninomiya (SIT) NLP2015-141 |
This paper describes a new quasi-Newton based accelerated technique for training of neural networks. Recently, Nesterov’... [more] |
NLP2015-141 pp.87-92 |
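The entry above introduces quasi-Newton training accelerated with Nesterov's gradient. For background only, this is the plain Nesterov accelerated gradient update it builds on, which evaluates the gradient at the momentum-shifted point w + mu*v rather than at w; combining this with a quasi-Newton direction is the paper's contribution and is not reproduced here. The toy quadratic and step sizes are assumptions.

```python
import numpy as np

def nesterov_step(w, v, grad_fn, lr=0.05, mu=0.9):
    """Nesterov's accelerated gradient: look ahead to w + mu*v, take the
    gradient there, then update the velocity and the weights."""
    g = grad_fn(w + mu * v)
    v = mu * v - lr * g
    return w + v, v

# Toy quadratic f(w) = 0.5 * w'Aw with gradient A @ w.
A = np.diag([1.0, 5.0])
w, v = np.array([2.0, 2.0]), np.zeros(2)
for _ in range(200):
    w, v = nesterov_step(w, v, lambda x: A @ x)
print(w)  # should be close to the origin
```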
SP |
2015-08-21 16:15 |
Iwate |
Iwate Prefectural Univ. |
Training Data Selection for Acoustic Modeling Based on Submodular Optimization of Joint KL Divergence Taichi Asami, Ryo Masumura, Hirokazu Masataki, Manabu Okamoto, Sumitaka Sakauchi (NTT) SP2015-58 |
This paper provides a novel training data selection method to construct acoustic models for automatic speech recogniti... [more] |
SP2015-58 pp.45-50 |
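The entry above selects acoustic-model training data by submodular optimization of a joint KL divergence. The paper's objective is not reproduced; below is only the generic greedy selection loop such methods rely on, with a hypothetical coverage-style gain function standing in for the joint-KL criterion and random per-utterance statistics as placeholders.

```python
import numpy as np

def greedy_select(features, budget, gain_fn):
    """Greedily add the item with the largest marginal gain until the budget is met.
    For monotone submodular gains this enjoys the usual (1 - 1/e)-type guarantee."""
    selected, covered = [], np.zeros(features.shape[1])
    for _ in range(budget):
        gains = [gain_fn(covered, features[i]) for i in range(len(features))]
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, features[best])
    return selected

# Hypothetical coverage-style gain: how much new "feature mass" an item adds.
def coverage_gain(covered, item):
    return float(np.sum(np.maximum(covered, item) - covered))

rng = np.random.default_rng(0)
utterance_features = rng.random((50, 8))  # placeholder per-utterance statistics
print(greedy_select(utterance_features, budget=5, gain_fn=coverage_gain))
```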
PRMU, IPSJ-CVIM, MVE [detail] |
2015-01-23 09:50 |
Nara |
|
Analysis of Minimum Classification Error Training using Bit-String-Based Genetic Algorithms Hiroto Togoe (Doshisha Univ.), Hideyuki Watanabe (NICT), Shigeru Katagiri (Doshisha Univ.), Xugang Lu, Chiori Hori (NICT), Miho Ohsaki (Doshisha Univ.) PRMU2014-100 MVE2014-62 |
Minimum Classification Error (MCE) training using gradient-descent-based loss minimization does not guarantee a global m... [more] |
PRMU2014-100 MVE2014-62 pp.171-176 |
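The entry above analyzes Minimum Classification Error (MCE) training with genetic algorithms. As a generic reminder of what MCE minimizes (the GA part is not shown, and the strongest-competitor form used below is one common special case of the misclassification measure): the loss is a sigmoid of the gap between the best rival score and the correct-class score.

```python
import numpy as np

def mce_loss(scores, label, xi=2.0):
    """Smoothed classification-error loss: d = -g_correct + max over rivals,
    passed through a sigmoid with smoothness xi."""
    g_correct = scores[label]
    g_rival = np.max(np.delete(scores, label))
    d = -g_correct + g_rival            # d > 0 means the sample is misclassified
    return 1.0 / (1.0 + np.exp(-xi * d))

print(mce_loss(np.array([2.0, 0.5, -1.0]), label=0))  # small loss, correct class wins
print(mce_loss(np.array([0.2, 1.5, -1.0]), label=0))  # large loss, a rival wins
```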
PRMU |
2014-12-12 13:00 |
Fukuoka |
|
A proposal for data selection in self-training based cross dataset action recognition Takafumi Suzuki, Yu Wang, Jien Kato, Kenji Mase (Nagoya Univ.) PRMU2014-80 |
In action recognition, in order to obtain high performance classifiers, it is necessary to feed the training algorithm e... [more] |
PRMU2014-80 pp.85-89 |
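The entry above proposes data selection for self-training in cross-dataset action recognition. The paper's selection criterion is not reproduced; below is only the generic confidence-thresholded self-training loop such methods refine, on placeholder features and a simple linear classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Labeled source data and unlabeled target data (placeholders for video features).
X_src = rng.normal(size=(200, 5)); y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(loc=0.3, size=(300, 5))

clf = LogisticRegression().fit(X_src, y_src)
for _ in range(3):  # a few self-training rounds
    proba = clf.predict_proba(X_tgt)
    conf = proba.max(axis=1)
    keep = conf > 0.9                      # select only confident pseudo-labels
    X_aug = np.vstack([X_src, X_tgt[keep]])
    y_aug = np.concatenate([y_src, proba.argmax(axis=1)[keep]])
    clf = LogisticRegression().fit(X_aug, y_aug)
print(int(keep.sum()), "target samples selected in the last round")
```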
IBISML |
2014-11-17 17:00 |
Aichi |
Nagoya Univ. |
[Poster Presentation]
Training Algorithm for Restricted Boltzmann Machines Using Auxiliary Function Approach Norihiro Takamune (Univ. of Tokyo), Hirokazu Kameoka (Univ. of Tokyo/NTT) IBISML2014-56 |
Layerwise pre-training is one of the important elements for deep learning, and Restricted Boltzmann Machines (RBMs) are popul... [more] |
IBISML2014-56 pp.161-168 |
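The entry above derives an auxiliary-function-based training algorithm for RBMs. For context only, a plain contrastive-divergence (CD-1) update, i.e. the standard baseline that the auxiliary-function approach is an alternative to, looks roughly like this for a binary RBM; sizes and data are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, rng, lr=0.1):
    """One CD-1 update for a binary RBM on a batch of visible vectors v0."""
    ph0 = sigmoid(v0 @ W + b_h)                 # p(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)               # reconstruction p(v=1 | h0)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)                 # p(h=1 | v1)
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n     # positive minus negative statistics
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h

rng = np.random.default_rng(1)
V, H = 6, 3
W = 0.01 * rng.normal(size=(V, H)); b_v = np.zeros(V); b_h = np.zeros(H)
data = (rng.random((20, V)) < 0.5).astype(float)
for _ in range(50):
    W, b_v, b_h = cd1_step(data, W, b_v, b_h, rng)
print(W.shape)
```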
ET |
2014-05-24 14:35 |
Hyogo |
Hyogo College of Medicine |
[Invited Talk]
A Learning Environment for Algorithm Understanding, Code Reading and Debugging Tatsuhiro Konishi, Satoru Kogure, Yasuhiro Noguchi (Shizuoka Univ.), Koichi Yamashita (Hamamatsu Univ.), Yukihiro Itoh (Shizuoka Univ.) ET2014-4 |
We have studied educational environments that support algorithm understanding, code reading, and debugging. In this man... [more] |
ET2014-4 pp.17-22 |
SP, IPSJ-MUS |
2014-05-24 11:30 |
Tokyo |
|
Discriminative training of acoustic models for system combination Yuuki Tachioka (Mitsubishi Electric), Shinji Watanabe, Jonathan Le Roux, John R. Hershey (MERL) SP2014-15 |
In discriminative training methods, the objective function is designed to improve the performance of automatic speech re... [more] |
SP2014-15 pp.147-152 |
RCS, SR, SRW (Joint) |
2014-03-05 10:35 |
Tokyo |
Waseda Univ. |
Performance Comparison between Training Sequence Inserted OFDM and Single-carrier Transmission under Doubly-selective Fading Channel Shinya Onuma, Kohei Abo, Ryo Nagaoka, Katsuhiro Temma, Fumiyuki Adachi (Tohoku Univ.) RCS2013-377 |
In training sequence (TS) inserted block transmission, since the TS can be utilized for channel estimation, no pilot bl... [more] |
RCS2013-377 pp.431-436 |
PRMU, IPSJ-CVIM, MVE [detail] |
2014-01-23 10:30 |
Osaka |
|
Multi-Class Support Vector Machine based on Minimum Classification Error Criterion Hisashi Uehara (Doshisha Univ.), Hideyuki Watanabe (NICT), Shigeru Katagiri, Miho Ohsaki (Doshisha Univ.), Shigeki Matsuda, Chiori Hori (NICT) PRMU2013-93 MVE2013-34 |
Gradient-descent-based optimization methods used in Minimum Classification Error (MCE) training are not necessarily easi... [more] |
PRMU2013-93 MVE2013-34 pp.13-18 |
SP, IPSJ-SLP (Joint) |
2013-07-26 10:30 |
Miyagi |
Soho (togatta spa) |
Grapheme-to-phoneme Conversion based on Adaptive Regularization of Weight Vectors Keigo Kubo, Sakriani Sakti, Graham Neubig, Tomoki Toda, Satoshi Nakamura (NAIST) SP2013-57 |
The current state-of-the-art approach in grapheme-to-phoneme (g2p) conversion is structured learning based on the Margin... [more] |
SP2013-57 pp.25-30 |
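The entry above applies Adaptive Regularization of Weight Vectors (AROW) to structured grapheme-to-phoneme conversion. The structured variant is not reproduced; below is only a sketch of the basic binary AROW update (a mean vector plus a full covariance), written from the standard formulation and with synthetic data as a placeholder.

```python
import numpy as np

def arow_update(w, Sigma, x, y, r=1.0):
    """Binary AROW step: update mean w and covariance Sigma when the hinge
    condition y * (w . x) >= 1 is violated (y in {-1, +1})."""
    margin = y * (w @ x)
    if margin >= 1.0:
        return w, Sigma
    Sx = Sigma @ x
    beta = 1.0 / (x @ Sx + r)
    alpha = (1.0 - margin) * beta
    w = w + alpha * y * Sx
    Sigma = Sigma - beta * np.outer(Sx, Sx)
    return w, Sigma

rng = np.random.default_rng(0)
dim = 5
w, Sigma = np.zeros(dim), np.eye(dim)
true_w = rng.normal(size=dim)
for _ in range(500):
    x = rng.normal(size=dim)
    y = 1 if true_w @ x > 0 else -1
    w, Sigma = arow_update(w, Sigma, x, y)
print(np.sign(w @ true_w))  # learned weights should align with the true direction
```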