Presentation 2024-03-22
Evaluation of Feature Inference Risk from Explainable AI metrics LIME and Shapley Values
Ryotaro Toma, Hiroaki Kikuchi
Abstract(in English) Explainability has gained attention as a way to ensure fairness and transparency in machine learning models and to give users a sense of understanding. Many services, such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure, run Machine Learning as a Service (MLaaS) platforms that provide several methods for explaining models. However, in 2022, Luo et al. demonstrated that Shapley value-based explanations can lead to the inference of private attributes, posing a privacy risk of information leakage from models. Nevertheless, it remains unclear whether alternative explainability methods carry the same attribute inference risk. Therefore, this study evaluates the attribute inference risk of LIME and compares its vulnerability with that of Shapley values.
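As a rough illustration of why Shapley-value explanations can leak private features (a minimal sketch under simplifying assumptions, not the attack studied in the paper): for a linear model, the exact Shapley value of feature i is w_i (x_i - E[x_i]), so an adversary who knows the weights and feature means can invert the explanation to recover the input.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([2.0, -1.0, 0.5])   # linear model weights (assumed known to the attacker)
X = rng.normal(size=(100, 3))    # public background data
mu = X.mean(axis=0)              # per-feature means over the background data

x = np.array([1.5, -0.3, 0.8])   # private input whose explanation is published

# Exact Shapley values for a linear model: phi_i = w_i * (x_i - mu_i)
phi = w * (x - mu)

# The attacker inverts the explanation to infer the private features
x_inferred = phi / w + mu
assert np.allclose(x_inferred, x)
```

Real MLaaS attacks face approximate Shapley estimates and unknown model internals, but the sketch shows that explanation vectors are a deterministic function of the private input.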
Keyword(in English) LIME / Shapley values / XAI / Explainability / Machine Learning / Feature Inference
Paper # ICSS2023-88
Date of Issue 2024-03-14 (ICSS)

Conference Information
Committee ICSS / IPSJ-SPT
Conference Date 2024/3/21 (2 days)
Place (in English) OIST
Topics (in English) Security, Trust, etc.
Chair Daisuke Inoue(NICT)
Vice Chair Akira Yamada(Kobe Univ.) / Toshihiro Yamauchi(Okayama Univ.)
Secretary Akira Yamada(Mitsubishi Electric) / Toshihiro Yamauchi(Univ. of Electro-Comm.)
Assistant Yo Kanemoto(NTT) / Masaya Sato(Okayama Prefectural Univ.)

Paper Information
Registration To Technical Committee on Information and Communication System Security / Special Interest Group on Security Psychology and Trust
Language JPN
Title (in English) Evaluation of Feature Inference Risk from Explainable AI metrics LIME and Shapley Values
Sub Title (in English)
Keyword(1) LIME
Keyword(2) Shapley values
Keyword(3) XAI
Keyword(4) Explainability
Keyword(5) Machine Learning
Keyword(6) Feature Inference
1st Author's Name Ryotaro Toma
1st Author's Affiliation Meiji University(Meiji Univ.)
2nd Author's Name Hiroaki Kikuchi
2nd Author's Affiliation Meiji University(Meiji Univ.)
Date 2024-03-22
Paper # ICSS2023-88
Volume (vol) vol.123
Number (no) ICSS-448
Page pp.137-144 (ICSS)
#Pages 8
Date of Issue 2024-03-14 (ICSS)