Presentation 2021-05-21
A Study of Detecting Adversarial Examples Using Sensitivities to Multiple Auto Encoders
Yuma Yamasaki, Minoru Kuribayashi, Nobuo Funabiki, Huy Hong Nguyen, Isao Echizen
Abstract(in Japanese) (See Japanese page)
Abstract(in English) Removing the small perturbations contained in an adversarial example makes the image classifier return to the correct label, and the way the classification result changes as the strength of the denoising filter is gradually increased characterizes the input image. A previous study exploited this property and trained a neural network to identify adversarial examples, using as supervisory data the classification results obtained after denoising with filters of varying strength. However, because well-known operations such as JPEG compression and scaling are used as the denoising filters, an adversarial attack may be tuned against them to fool the detector. In this study, we enhance the security aspect by using autoencoders, unsupervised machine learning models trained on a specific dataset, as black-box filters. We designed several autoencoders with different characteristics by changing the number of images used for training, and evaluated the detection accuracy when each filter is used alone or in combination. As a result, we confirmed that combining several autoencoders improves the denoising effect and that adversarial examples can be identified with an accuracy of over 90%.
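The following is a minimal sketch, not the authors' code, of the sensitivity-based detection idea described in the abstract: an input image is passed through several trained autoencoders acting as black-box denoising filters, the classifier's outputs for the original and each denoised image are concatenated into a feature vector, and a small detector network is trained on these vectors with clean/adversarial labels. The names (classifier, autoencoders, Detector) and the PyTorch framing are assumptions for illustration only.

    # Sketch only: sensitivity features from multiple autoencoder denoisers.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def sensitivity_features(image, classifier, autoencoders):
        """Concatenate the classifier's softmax outputs for the original image
        and for its reconstruction by each autoencoder (denoising filter)."""
        with torch.no_grad():
            outputs = [F.softmax(classifier(image), dim=1)]
            for ae in autoencoders:
                denoised = ae(image)  # autoencoder used as a black-box filter
                outputs.append(F.softmax(classifier(denoised), dim=1))
        return torch.cat(outputs, dim=1)  # shape: (batch, (1 + K) * num_classes)

    class Detector(nn.Module):
        """Small MLP that decides clean vs. adversarial from the sensitivity pattern."""
        def __init__(self, in_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 128), nn.ReLU(),
                nn.Linear(128, 2),  # 0: clean, 1: adversarial
            )

        def forward(self, features):
            return self.net(features)

Under this reading, using several autoencoders "in combination" corresponds to concatenating their output blocks in the feature vector, so the detector sees the classifier's sensitivity to every filter at once.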
Keyword(in Japanese) (See Japanese page)
Keyword(in English) Adversarial Example / Image Classifier / Auto Encoder / Noise Removal Filter
Paper # IT2021-11,EMM2021-11
Date of Issue 2021-05-13 (IT, EMM)

Conference Information
Committee EMM / IT
Conference Date 2021/5/20(2days)
Place (in Japanese) (See Japanese page)
Place (in English) Online
Topics (in Japanese) (See Japanese page)
Topics (in English) Information Security, Information Theory, Information Hiding, etc.
Chair Masaki Kawamura(Yamaguchi Univ.) / Tadashi Wadayama(Nagoya Inst. of Tech.)
Vice Chair Motoi Iwata(Osaka Prefecture Univ.) / Masaaki Fujiyoshi(Tokyo Metropolitan Univ.) / Tetsuya Kojima(Tokyo Kosen)
Secretary Motoi Iwata(Tokyo Denki Univ.) / Masaaki Fujiyoshi(Kansai Univ.) / Tetsuya Kojima(Yamaguchi Univ.)
Assistant Madoka Hasegawa(Utsunomiya Univ.) / Maki Yoshida(NICT) / Takahiro Ohta(Senshu Univ.)

Paper Information
Registration To Technical Committee on Enriched MultiMedia / Technical Committee on Information Theory
Language JPN
Title (in Japanese) (See Japanese page)
Sub Title (in Japanese) (See Japanese page)
Title (in English) A Study of Detecting Adversarial Examples Using Sensitivities to Multiple Auto Encoders
Sub Title (in English)
Keyword(1) Adversarial Example
Keyword(2) Image Classifier
Keyword(3) Auto Encoder
Keyword(4) Noise Removal Filter
1st Author's Name Yuma Yamasaki
1st Author's Affiliation Okayama University(Okayama Univ.)
2nd Author's Name Minoru Kuribayashi
2nd Author's Affiliation Okayama University(Okayama Univ.)
3rd Author's Name Nobuo Funabiki
3rd Author's Affiliation Okayama University(Okayama Univ.)
4th Author's Name Huy Hong Nguyen
4th Author's Affiliation National Institute of Informatics(NII)
5th Author's Name Isao Echizen
5th Author's Affiliation National Institute of Informatics(NII)
Date 2021-05-21
Paper # IT2021-11,EMM2021-11
Volume (vol) vol.121
Number (no) IT-28,EMM-29
Page pp.60-65(IT), pp.60-65(EMM)
#Pages 6
Date of Issue 2021-05-13 (IT, EMM)