International Symposium on Nonlinear Theory and Its Applications

2022

Session Number: A3L-B

Session:

Number: A3L-B-02

Backdoor Poisoning Attacks on Meta-Learning-Based Few-Shot Classifiers

Ganma Kato, Chako Takahashi, Koutarou Suzuki

pp.57-60

Publication Date: 12/12/2022

Online ISSN: 2188-5079

DOI: 10.34385/proc.71.A3L-B-02


Summary:
Few-shot classification is classification performed from only a very small number of labeled samples, and meta-learning methods are often employed to accomplish it. Research on poisoning attacks against meta-learning-based few-shot classification has only recently begun. While poisoning that violates the classifier's availability has been investigated by Xu et al. and Oldewage et al., backdoor poisoning has only been briefly evaluated by Oldewage et al. under limited conditions. In this study, we formulate a backdoor poisoning attack on meta-learning-based few-shot classification and show through experiments that the proposed attack is effective against few-shot classification using model-agnostic meta-learning (MAML).
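
As context for the attack setting, the sketch below illustrates a generic trigger-patch backdoor poisoning of a few-shot support set: a small patch is stamped onto a fraction of the support images and their labels are flipped to an attacker-chosen target class. This is not the authors' formulation; the function names, trigger pattern, poison rate, and episode shape are all illustrative assumptions.

import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    # Stamp a small bright square in the bottom-right corner as the
    # backdoor trigger (location, size, and value are assumptions).
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_support_set(support_x, support_y, target_class,
                       poison_rate=0.2, rng=None):
    # Apply the trigger to a fraction of support samples and relabel
    # them as the attacker's target class. This is a generic backdoor
    # poisoning sketch, not the attack proposed in the paper.
    rng = rng or np.random.default_rng(0)
    poisoned_x = support_x.copy()
    poisoned_y = support_y.copy()
    n_poison = max(1, int(poison_rate * len(support_x)))
    idx = rng.choice(len(support_x), size=n_poison, replace=False)
    for i in idx:
        poisoned_x[i] = add_trigger(poisoned_x[i])
        poisoned_y[i] = target_class
    return poisoned_x, poisoned_y

# Example: a 5-way 5-shot episode of 28x28 grayscale images.
support_x = np.random.rand(25, 28, 28)
support_y = np.repeat(np.arange(5), 5)   # labels 0..4, 5 shots each
px, py = poison_support_set(support_x, support_y, target_class=0)

A classifier adapted on such a poisoned support set (for example, via MAML's inner-loop updates) may learn to associate the trigger with the target class, so that triggered query images are misclassified while accuracy on clean queries is largely preserved.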