Presentation 2018-10-12
Automatic Target Recognition based on Generative Adversarial Networks for Synthetic Aperture Radar Images
Yang-Lang Chang (NTUT), Bo-Yao Chen (NTUT), Chih-Yuan Chu (NTUT), Sina Hadipour (NTUT), Hirokazu Kobayashi (OIT)
Abstract (Japanese/English) Synthetic Aperture Radar (SAR) is a day-and-night, all-weather imaging technique widely used in national defense, remote sensing, disaster prevention, interferometry, and forest and urban footprint mapping. Recently, convolutional neural networks (CNNs) have been used for SAR automatic target recognition (SAR-ATR) and classification. The drawback, however, is the difficulty of obtaining sufficient and reliable data to train a high-accuracy classifier for automatic target recognition; as the number of training samples is reduced, the SAR-ATR accuracy decreases rapidly. Our study proposes a deep learning model based on the Generative Adversarial Network (GAN) to overcome the problem of insufficient training samples and improve target classification performance. A GAN is composed of two networks: a generator and a discriminator. The generator produces SAR images from a vector of random numbers. The discriminator is a classifier trained with supervised learning to distinguish real SAR images from generated ones. The generator and the discriminator compete with each other during training, which forces the network to learn robust and reliable target features in SAR images. However, the traditional GAN cannot be used directly to solve the classification problem in SAR-ATR. Our network is a GAN variant called the Auxiliary Classifier GAN (AC-GAN). The structure of AC-GAN allows a large dataset to be separated into subsets by class and a generator and discriminator to be trained for each subset. In this experiment, the SAR images in the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset were used to train the network. Using all the images in the dataset for training resulted in a classification accuracy of 98%. When less than one-fifth of the images were used, AC-GAN still reached an accuracy of 90%. This is a considerable improvement over traditional CNNs, whose accuracy rapidly decreased to 80% for the same number of training samples.
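The abstract above outlines the AC-GAN idea: a generator that synthesizes SAR chips from noise and a class label, and a discriminator with an auxiliary classification head that doubles as the SAR-ATR classifier. Below is a minimal, hypothetical PyTorch sketch of that architecture, not the authors' implementation; the chip size (64x64), number of MSTAR classes (10), noise dimension (100), and layer widths are all assumptions made for illustration.

# Illustrative AC-GAN sketch (assumed hyperparameters, not the paper's network).
import torch
import torch.nn as nn

NUM_CLASSES = 10   # assumed number of MSTAR target classes
LATENT_DIM = 100   # assumed noise-vector length
IMG_SIZE = 64      # assumed SAR chip size after cropping/resizing

class Generator(nn.Module):
    """Maps a noise vector plus a class label to a fake SAR chip."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128 * (IMG_SIZE // 4) ** 2),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (128, IMG_SIZE // 4, IMG_SIZE // 4)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
            nn.Tanh(),  # fake chips scaled to [-1, 1]
        )

    def forward(self, noise, labels):
        # Condition the noise on the class label via an embedding.
        return self.net(noise * self.label_emb(labels))

class Discriminator(nn.Module):
    """Outputs (real/fake score, class logits); the class head is the auxiliary classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Flatten(),
        )
        feat_dim = 128 * (IMG_SIZE // 4) ** 2
        self.adv_head = nn.Linear(feat_dim, 1)            # real vs. fake
        self.cls_head = nn.Linear(feat_dim, NUM_CLASSES)  # target class

    def forward(self, img):
        h = self.features(img)
        return self.adv_head(h), self.cls_head(h)

def train_step(G, D, real_imgs, real_labels, opt_g, opt_d):
    """One schematic AC-GAN update: adversarial loss plus auxiliary classification loss."""
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()
    bsz = real_imgs.size(0)
    noise = torch.randn(bsz, LATENT_DIM)
    fake_labels = torch.randint(0, NUM_CLASSES, (bsz,))
    fake_imgs = G(noise, fake_labels)

    # Discriminator: separate real from fake and classify the real chips.
    opt_d.zero_grad()
    adv_real, cls_real = D(real_imgs)
    adv_fake, _ = D(fake_imgs.detach())
    d_loss = (bce(adv_real, torch.ones_like(adv_real))
              + bce(adv_fake, torch.zeros_like(adv_fake))
              + ce(cls_real, real_labels))
    d_loss.backward()
    opt_d.step()

    # Generator: fool the adversarial head and match the assigned class.
    opt_g.zero_grad()
    adv_fake, cls_fake = D(fake_imgs)
    g_loss = bce(adv_fake, torch.ones_like(adv_fake)) + ce(cls_fake, fake_labels)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    # Dummy batch standing in for real MSTAR chips and their labels.
    imgs = torch.randn(8, 1, IMG_SIZE, IMG_SIZE)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    print(train_step(G, D, imgs, labels, opt_g, opt_d))

At test time only the discriminator's class head would be used to label real SAR chips, which is how the jointly trained AC-GAN serves as the SAR-ATR classifier described in the abstract.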
Keywords (Japanese/English) synthetic aperture radar / automatic target recognition / generative adversarial networks (GAN) / auxiliary classifier GAN
Report number SANE2018-51
Date of issue 2018-10-05 (SANE)

Technical committee information
Technical committee SANE
Dates 2018/10/12 (one-day meeting)
Venue (Japanese/English) The University of Electro-Communications
Theme (Japanese/English) Radar signal processing, remote sensing, and general topics
Chair Sonosuke Fukushima (ENRI)
Vice chairs Toshifumi Moriyama (Nagasaki Univ.) / Akitsugu Nadai (NICT)
Secretaries Atsushi Kezuka (ENRI) / Manabu Akita (Univ. of Electro-Comm.)
Assistant secretaries Ryo Natsuaki (Univ. of Tokyo) / Masato Yamanashi (Mitsubishi Space Software) / Takeshi Amishima (Mitsubishi Electric)

Paper information (details)
Technical committee (submitted to) Technical Committee on Space, Aeronautical and Navigational Electronics
Language of the paper English (ENG)
Title (Japanese)
Subtitle (Japanese)
Title (English) Automatic Target Recognition based on Generative Adversarial Networks for Synthetic Aperture Radar Images
Subtitle (English)
Keyword (1) (Japanese/English) synthetic aperture radar / synthetic aperture radar
Keyword (2) (Japanese/English) automatic target recognition / automatic target recognition
Keyword (3) (Japanese/English) generative adversarial networks (GAN) / generative adversarial networks (GAN)
Keyword (4) (Japanese/English) auxiliary classifier GAN / auxiliary classifier GAN
Author 1 name (Japanese/English) Yang-Lang Chang / Yang-Lang Chang
Author 1 affiliation (Japanese/English) National Taipei University of Technology (abbr.: NTUT)
Author 2 name (Japanese/English) Bo-Yao Chen / Bo-Yao Chen
Author 2 affiliation (Japanese/English) National Taipei University of Technology (abbr.: NTUT)
Author 3 name (Japanese/English) Chih-Yuan Chu / Chih-Yuan Chu
Author 3 affiliation (Japanese/English) National Taipei University of Technology (abbr.: NTUT)
Author 4 name (Japanese/English) Sina Hadipour / Sina Hadipour
Author 4 affiliation (Japanese/English) National Taipei University of Technology (abbr.: NTUT)
Author 5 name (Japanese/English) Hirokazu Kobayashi / Hirokazu Kobayashi
Author 5 affiliation (Japanese/English) Osaka Institute of Technology (abbr.: OIT)
Presentation date 2018-10-12
Report number SANE2018-51
Volume (vol) vol.118
Number (no) SANE-239
Page range pp.41-44 (SANE)
Number of pages 4
Date of issue 2018-10-05 (SANE)