Explainable Data Poison Attacks on Human Emotion Evaluation Systems Based on EEG Signals / Z. Zhang, S. Umar, A.Y. Al Hammadi, S. Yoon, E. Damiani, C.A. Ardagna, N. Bena, C. Yeob Yeun. - In: IEEE ACCESS. - ISSN 2169-3536. - 11:(2023), pp. 18134-18147. [10.1109/ACCESS.2023.3245813]

Explainable Data Poison Attacks on Human Emotion Evaluation Systems Based on EEG Signals

E. Damiani; C.A. Ardagna; N. Bena
2023

Abstract

The major aim of this paper is to explain, from the attackers' perspective, data poisoning attacks that use label-flipping during the training stage of electroencephalogram (EEG) signal-based human emotion evaluation systems built on Machine Learning models. Human emotion evaluation using EEG signals has consistently attracted substantial research attention. Identifying human emotional states from EEG signals is effective in detecting potential internal threats posed by insider individuals. Nevertheless, EEG signal-based human emotion evaluation systems have shown several vulnerabilities to data poisoning attacks. Moreover, due to the instability and complexity of EEG signals, it is challenging to explain and analyze how data poisoning attacks influence the decision process of these systems. In this paper, from the attackers' side, data poisoning attacks are carried out during the training phases of six different Machine Learning models: Random Forest, Adaptive Boosting (AdaBoost), Extra Trees, XGBoost, Multilayer Perceptron (MLP), and K-Nearest Neighbors (KNN). The attacks seek to degrade the performance of these models on the task of classifying four different human emotions from EEG signals. The experimental results demonstrate that the proposed data poisoning attacks are effective independently of the model, although the models exhibit varying levels of resilience to the attacks. In addition, the data poisoning attacks on the EEG signal-based human emotion evaluation systems are explained with several Explainable Artificial Intelligence (XAI) methods, including Shapley Additive Explanation (SHAP) values, Local Interpretable Model-agnostic Explanations (LIME), and Generated Decision Trees. The code for this paper is publicly available on GitHub.
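The sketch below illustrates the kind of label-flipping poisoning the abstract describes: a fraction of training labels is reassigned to a different class before fitting, and test accuracy is compared across poison rates. It is written against scikit-learn with synthetic stand-in data rather than the paper's EEG features; the flip_labels helper, the poison rates, and the Random Forest settings are illustrative assumptions, not the authors' exact experimental setup.

```python
# Minimal sketch of a label-flipping data poisoning attack during training.
# Synthetic 4-class data stands in for the paper's EEG-derived features;
# Random Forest stands in for any of the six evaluated models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for EEG-derived features labeled with 4 emotion classes (assumption).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def flip_labels(labels, rate, n_classes=4):
    """Flip a fraction `rate` of training labels to a different random class."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    for i in idx:
        choices = [c for c in range(n_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(choices)
    return poisoned

# Train on increasingly poisoned labels and observe the accuracy degradation.
for rate in [0.0, 0.1, 0.2, 0.4]:
    y_poisoned = flip_labels(y_train, rate)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
```

For the XAI side of the analysis, clean and poisoned models trained in this way could then be handed to SHAP (for example, shap.TreeExplainer for the tree-based models) and LIME to compare how feature attributions shift under poisoning; the exact explanation pipeline used in the paper is described in the article itself.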
Cyber resilience; cyber security; data poisoning; EEG signals; explainable artificial intelligence; human emotion evaluation; label-flipping; machine learning
Settore INF/01 - Informatica
   MUSA - Multilayered Urban Sustainability Action
   MUSA
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
2023
https://ieeexplore.ieee.org/document/10045653/
Article (author)
Files in this record:
File: ZUAYDABY.ACCESS2023.pdf (open access)
Type: Publisher's version/PDF
Format: Adobe PDF
Size: 2.89 MB

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/962051