Publication details
Bibliographic description
Performance of explainable AI methods in asset failure prediction / Jakub JAKUBOWSKI, Przemysław STANISZ, Szymon Bobek, Grzegorz J. Nalepa // In: Computational Science – ICCS 2022 : 22nd international conference : London, UK, June 21–23, 2022 : proceedings, Pt. 4 / eds. Derek Groen, [et al.]. — Cham : Springer Nature Switzerland, cop. 2022. — (Lecture Notes in Computer Science ; ISSN 0302-9743 ; LNCS 13353). — ISBN: 978-3-031-08759-2; e-ISBN: 978-3-031-08760-8. — pp. 472–485. — Bibliography, abstract. — Available online from: 2022-06-15
Authors (4)
- Jakubowski Jakub (AGH)
- Stanisz Przemysław (AGH)
- Bobek Szymon
- Nalepa Grzegorz
Keywords
Bibliometric data
| BaDAP ID | 140689 |
|---|---|
| Added to BaDAP | 2022-07-04 |
| DOI | 10.1007/978-3-031-08760-8_40 |
| Publication year | 2022 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Springer |
| Conference | 22nd International Conference on Computational Science |
| Journal/series | Lecture Notes in Computer Science |
Abstract
Extensive research on machine learning models, most of which are black boxes, has created a great need for the development of Explainable Artificial Intelligence (XAI) methods. Complex machine learning (ML) models usually require an external explanation method to understand their decisions. The interpretation of model predictions is crucial in many fields, e.g., predictive maintenance, where it is required not only to evaluate the state of an asset but also to determine the root causes of a potential failure. In this work, we present a comparison of state-of-the-art ML models and XAI methods, which we used to predict the Remaining Useful Life (RUL) of aircraft turbofan engines. We trained five different models on the C-MAPSS dataset and used SHAP and LIME to assign numerical importance to the features. We compared the explanations using stability and consistency metrics and evaluated them qualitatively by visual inspection. The obtained results indicate that the SHAP method outperforms the other methods in the fidelity of explanations. We observe substantial differences in the explanations depending on the choice of model and XAI method, and thus see a need for further research in the XAI field.
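To illustrate the kind of local feature attribution the abstract refers to, below is a minimal LIME-style local surrogate sketch (implemented directly with NumPy and scikit-learn rather than the `lime` library, and not the authors' actual pipeline). The `rul_model` function is a hypothetical stand-in for a trained black-box RUL regressor; the sketch perturbs an input, weights the samples by proximity, and reads local feature importances off a weighted linear surrogate.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box RUL model: a nonlinear function of 3 sensor features.
# In the paper's setting this would be a model trained on C-MAPSS data.
def rul_model(X):
    return 100.0 - 5.0 * X[:, 0] ** 2 - 2.0 * X[:, 1] + 0.1 * X[:, 2]

def lime_style_explanation(model, x, n_samples=2000, sigma=0.5, seed=0):
    """Explain model(x) locally: sample perturbations around x, weight them
    by an RBF proximity kernel, and fit a weighted linear surrogate whose
    coefficients serve as local feature importances."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))  # perturbations
    y = model(Z)                                               # black-box outputs
    # Closer perturbations get higher weight in the surrogate fit.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_

x0 = np.array([1.0, 2.0, 3.0])
importances = lime_style_explanation(rul_model, x0)
# Near x0, feature 0 dominates: the local gradient of the model is
# approximately (-10, -2, 0.1), which the surrogate coefficients recover.
```

The paper's stability and consistency metrics would then be computed by repeating such explanations across nearby inputs and across models and comparing the resulting importance vectors.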