| Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper # |
|---|---|---|---|---|---|
| MIKA (3rd) | 2021-10-28 10:30 | Okinawa (Primary: On-site, Secondary: Online) | [Poster Presentation] Examination of Majority Decision Method for Network Intrusion Detection System Using Deep Learning / Koko Nishiura, Yuju Ogawa, Tomotaka Kimura, Jun Cheng (Doshisha Univ.) | In recent years, the importance of NIDS (Network Intrusion Detection Systems), which detect unauthorized access, has be... | |
| RCS | 2021-10-22 15:00 | Online | [Poster Presentation] Display-Camera Visible Light Communications Using Monocular Depth Estimation and Adversarial Example / Hiraku Okada, ChangSeok Lee (Nagoya Univ.), Tadahiro Wada (Shizuoka Univ.), Kentaro Kobayashi (Meijo Univ.), Chedlia Ben Naila, Masaaki Katayama (Nagoya Univ.) | In display-camera visible light communications, a display shows visual information on which data information is superimp... | RCS2021-140 pp.120-121 |
| PRMU | 2021-10-09 09:30 | Online | Explaining Adversarial Examples by the Embedding Structure of Data Manifold / Hajime Tasaki, Yuji Kaneko, Jinhui Chao (Chuo Univ.) | It is widely known that adversarial examples cause misclassification in classifiers using deep learning. In spite of nume... | PRMU2021-19 pp.17-21 |
| SIS, ITE-BCT | 2021-10-07 14:25 | Online | Block-wise Transformation with Secret Key for Adversary Robust Defence of SVM model / Ryota Iijima, MaungMaung AprilPyone, Hitoshi Kiya (TMU) | In this paper, we propose a method for implementing support vector machine (SVM) models that are robust against adversar... | SIS2021-13 pp.17-22 |
| CS | 2021-07-16 10:25 | Online | Countermeasures against Adversarial Examples using Majority Decision Discriminators for Deep learning-Based Phishing Detection Methods / Yuji Ogawa, Tomotaka Kimura, Jun Cheng (Doshisha Univ.) | In recent years, the number of phishing attacks has been increasing, and the detection of phishing URLs using deep learn... | CS2021-33 pp.78-79 |
| SP, IPSJ-SLP, IPSJ-MUS | 2021-06-18 15:00 | Online | Protection method with audio processing against Audio Adversarial Example / Taisei Yamamoto, Yuya Tarutani, Yukinobu Fukusima, Tokumi Yokohira (Okayama Univ) | Machine learning technology has improved the accuracy of voice recognition, and demand for voice recognition... | SP2021-4 pp.19-24 |
| EMM, IT | 2021-05-21 13:10 | Online | A Study of Detecting Adversarial Examples Using Sensitivities to Multiple Auto Encoders / Yuma Yamasaki, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.), Huy Hong Nguyen, Isao Echizen (NII) | By removing the small perturbations involved in adversarial examples, the image classification result returns to the cor... | IT2021-11 EMM2021-11 pp.60-65 |
| EMM | 2021-03-04 14:15 | Online | [Poster Presentation] Detection of Adversarial Examples in CNN Image Classifiers Using Features Extracted with Multiple Strengths of Filter / Akinori Higashi, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.), Huy Hong Nguyen, Isao Echizen (NII) | Deep learning has been used as a new method for machine learning, and its performance has been significantly improved. A... | EMM2020-70 pp.19-24 |
| ICSS, IPSJ-SPT | 2021-03-02 13:40 | Online | Research on the vulnerability of homoglyph attacks to online machine translation system / Takeshi Sakamoto, Tatsuya Mori (Waseda Univ) | It has been widely known that systems empowered by neural network algorithms are vulnerable to an intrinsic attack ... | ICSS2020-50 pp.144-149 |
| CQ, CBE (Joint) | 2021-01-21 16:00 | Online | [Poster Presentation] Vulnerability Assessment for Deep-Learning Based Phishing Detection System / Yuji Ogawa, Tomotaka Kimura, Jun Cheng (Doshisha Univ.) | | CQ2020-83 pp.84-85 |
| ITS, WBS, RCC | 2020-12-14 13:00 | Online | A Proposal of Information Embedding Method Using Image Classifier for Parallel Transmission Visible Light Communications / Keita Kinpara, Tadahiro Wada, Kaiji Mukumoto (Shizuoka Univ), Hiraku Okada (Nagoya Univ) | For visible light communications using a liquid crystal display and an image sensor, it would be desirable to embed trans... | WBS2020-14 ITS2020-10 RCC2020-17 pp.35-40 |
| BioX | 2020-11-25 11:10 | Online | GAN based feature-level supportive method for improved adversarial attacks on face recognition / Zhengwei Yin (USTC/Hosei Univ.), Kaoru Uchida (Hosei Univ.) | With the rapid development of deep neural networks (DNN), DNN-based face recognition technologies are also achieving gre... | BioX2020-35 pp.1-6 |
| SITE, ISEC, HWS, EMM, BioX, IPSJ-CSEC, IPSJ-SPT, ICSS | 2020-07-21 10:50 | Online | Adversarial scan attack against ICP algorithm for pose estimation on LiDAR-based SLAM / Kota Yoshida, Takeshi Fujino (Ritsumeikan Univ.) | An autonomous robot is controlled using physical information acquired by various sensors. Some physical attacks are propose... | ISEC2020-26 SITE2020-23 BioX2020-29 HWS2020-19 ICSS2020-13 EMM2020-23 pp.81-86 |
| EMM | 2020-03-05 16:45 | Okinawa (Cancelled, but the technical report was issued) | [Poster Presentation] Detecting Adversarial Examples Based on Sensitivities to Lossy Compression Algorithms / Akinori Higashi, Minoru Kuribayashi, Nobuo Funabiki (Okayama Univ.), Huy Hong Nguyen, Isao Echizen (NII) | Adversarial examples are created by adding small perturbations to an input image for misleading a CNN-based image c... | EMM2019-123 pp.113-116 |
| NC, MBE (Joint) | 2020-03-05 09:30 | Tokyo, University of Electro-Communications (Cancelled, but the technical report was issued) | Improving Adversarial Robustness Based on Adversarial Training Consideration / Ryota Komiyama, Motonobu Hattori (Univ. of Yamanashi) | Neural networks are used for various tasks because of their high performance. However, it is known that even a high-per... | NC2019-90 pp.83-88 |
| SP, EA, SIP | 2020-03-02 13:00 | Okinawa, Okinawa Industry Support Center (Cancelled, but the technical report was issued) | Vulnerability investigation of speaker verification against black-box adversarial attacks / Hiroto Kai, Sayaka Shiota, Hitoshi Kiya (TMU) | Recently, vulnerability to adversarial attacks has become a concern for machine learning-based systems. Adversarial attack... | EA2019-106 SIP2019-108 SP2019-55 pp.29-33 |
| ICSS, IPSJ-SPT | 2020-03-03 11:20 | Okinawa, Okinawa-Ken-Seinen-Kaikan (Cancelled, but the technical report was issued) | Adversarial Attack against Neural Machine Translation Systems / Takeshi Sakamoto, Tatsuya Mori (Waseda Univ.) | It has been widely known that systems empowered by neural network algorithms are vulnerable to an intrinsic attack ... | ICSS2019-89 pp.125-130 |
| ICSS, IPSJ-SPT | 2020-03-03 11:40 | Okinawa, Okinawa-Ken-Seinen-Kaikan (Cancelled, but the technical report was issued) | Adversarial Attacks against Electrocardiograms / Taiga Ono (Waseda Univ.), Takeshi Sugawara (UEC), Tatsuya Mori (Waseda Univ.) | Recent advancements in clinical services powered by deep learning have been met with the threat of adversarial examples.... | ICSS2019-90 pp.131-136 |
| ITE-HI, IE, ITS, ITE-MMS, ITE-ME, ITE-AIT | 2020-02-27 14:00 | Hokkaido, Hokkaido Univ. (Cancelled, but the technical report was issued) | An Image Transformation Network for Privacy-Preserving Deep Neural Networks / Hiroki Ito, Yuma Kinoshita, Hitoshi Kiya (Tokyo Metro. Univ.) | We propose an image transformation network to generate visually protected images for privacy-preserving deep neural netw... | ITS2019-37 IE2019-75 pp.195-200 |
| IE, CS, IPSJ-AVM, ITE-BCT | 2019-12-06 10:10 | Iwate, Aiina Center | Adversarial Examples for Monocular Depth Estimation / Koichiro Yamanaka, Ryutaroh Matsumoto, Keita Takahashi, Toshiaki Fujii (Nagoya Univ.) | Adversarial examples for classification and object recognition problems using convolutional neural networks (CNN) have... | CS2019-83 IE2019-63 pp.91-95 |