Publication details

Bibliographic description

Performance of explainable AI methods in asset failure prediction / Jakub JAKUBOWSKI, Przemysław STANISZ, Szymon Bobek, Grzegorz J. Nalepa // In: Computational Science – ICCS 2022 : 22nd international conference : London, UK, June 21–23, 2022 : proceedings, Pt. 4 / eds. Derek Groen, [et al.]. — Cham : Springer Nature Switzerland, cop. 2022. — (Lecture Notes in Computer Science ; ISSN 0302-9743 ; LNCS 13353). — ISBN: 978-3-031-08759-2; e-ISBN: 978-3-031-08760-8. — Pp. 472–485. — Bibliogr., Abstr. — Available online from: 2022-06-15


Authors (4)


Keywords

machine learning; predictive maintenance; explainable artificial intelligence

Bibliometric data

BaDAP ID: 140689
Date added to BaDAP: 2022-07-04
DOI: 10.1007/978-3-031-08760-8_40
Publication year: 2022
Publication type: conference proceedings (author)
Open access: yes
Publisher: Springer
Conference: 22nd International Conference on Computational Science
Journal/series: Lecture Notes in Computer Science

Abstract

Extensive research on machine learning models, the majority of which are black boxes, has created a great need for the development of Explainable Artificial Intelligence (XAI) methods. Complex machine learning (ML) models usually require an external explanation method to understand their decisions. The interpretation of model predictions is crucial in many fields, e.g., predictive maintenance, where it is required not only to evaluate the state of an asset but also to determine the root causes of a potential failure. In this work, we present a comparison of state-of-the-art ML models and XAI methods, which we used for the prediction of the remaining useful life (RUL) of aircraft turbofan engines. We trained five different models on the C-MAPSS dataset and used SHAP and LIME to assign numerical importance to the features. We compared the results of the explanations using stability and consistency metrics, and evaluated the explanations qualitatively by visual inspection. The obtained results indicate that the SHAP method outperforms the other methods in the fidelity of explanations. We observe that substantial differences exist in the explanations depending on the choice of model and XAI method; thus we see a need for further research in the XAI field.
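The abstract mentions using SHAP to assign numerical importance to features. As an illustration of the underlying idea (not the paper's actual pipeline), the following minimal sketch computes exact Shapley values for a hypothetical toy model standing in for an RUL predictor; absent features are replaced by a baseline value, as in common SHAP formulations. All names and the model itself are illustrative assumptions.

```python
from itertools import permutations
from math import factorial

# Hypothetical toy "RUL predictor": an additive function of three
# sensor-like features with one interaction term (illustrative only).
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley attributions by enumerating all feature orderings.
    Features not yet 'present' in a coalition keep their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]          # add feature i to the coalition
            cur = f(z)
            phi[i] += cur - prev  # marginal contribution of feature i
            prev = cur
    m = factorial(n)
    return [p / m for p in phi]  # average over all n! orderings

x = [3.0, 2.0, 1.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: attributions sum to f(x) - f(baseline)
print(phi, sum(phi), model(x) - model(baseline))
```

Exhaustive enumeration is exponential in the number of features; practical SHAP implementations approximate these values (e.g., by sampling coalitions), which is why the paper's stability and consistency metrics are relevant.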

Publications that may interest you

book chapter
Comparing explanations from glass-box and black-box machine-learning models / Michał KUK, Szymon Bobek, Grzegorz J. Nalepa // In: Computational Science – ICCS 2022 : 22nd international conference : London, UK, June 21–23, 2022 : proceedings, Pt. 3 / eds. Derek Groen, Clélia de Mulatier, Maciej Paszyński, Valeria V. Krzhizhanovskaya, Jack J. Dongarra, Peter M. A. Sloot. — Cham : Springer Nature Switzerland, cop. 2022. — (Lecture Notes in Computer Science ; ISSN 0302-9743 ; LNCS 13352). — ISBN: 978-3-031-08756-1; e-ISBN: 978-3-031-08757-8. — Pp. 668–675. — Bibliogr., Abstr. — Available online from: 2022-06-15
book chapter
Effect of feature discretization on classification performance of explainable scoring-based machine learning model / Arkadiusz Pajor, Jakub Żołnierek, Bartłomiej ŚNIEŻYŃSKI, Arkadiusz Sitek // In: Computational Science – ICCS 2022 : 22nd international conference : London, UK, June 21–23, 2022 : proceedings, Pt. 3 / eds. Derek Groen, Clélia de Mulatier, Maciej Paszyński, Valeria V. Krzhizhanovskaya, Jack J. Dongarra, Peter M. A. Sloot. — Cham : Springer Nature Switzerland, cop. 2022. — (Lecture Notes in Computer Science ; ISSN 0302-9743 ; LNCS 13352). — ISBN: 978-3-031-08756-1; e-ISBN: 978-3-031-08757-8. — Pp. 92–105. — Bibliogr., Abstr. — Available online from: 2022-06-15. — A. Pajor, first affiliation: Sano Centre for Computational Medicine, Cracow