Publication details
Bibliographic description
Deep neural network interpretability methods for supervised and unsupervised problems / Andrzej BRODZICKI, Dariusz KUCHARSKI, Michał PIEKARSKI, Aleksander KOSTUCH, Joanna JAWOREK-KORJAKOWSKA // In: PP-RAI'2022 [Electronic document] : proceedings of the 3rd Polish conference on Artificial intelligence : April 25–27, 2022, Gdynia, Poland. — Windows version. — Text data. — Gdynia : Gdynia Maritime University, 2022. — e-ISBN: 978-83-7421-401-8. — Pp. 25–28. — System requirements: Adobe Reader. — Access mode: https://wydawnictwo.umg.edu.pl/pp-rai2022/pdfs/ProceedingsPP-... [2022-04-27]. — Bibliogr. p. 28, Abstr. — M. Piekarski – additional affiliation: SOLARIS National Synchrotron Radiation Centre, UJ, Krakow
Authors (5)
Keywords
Bibliometric data
| ID BaDAP | 139993 |
|---|---|
| Date added to BaDAP | 2022-04-29 |
| Year of publication | 2022 |
| Publication type | conference materials (authored) |
| Open access | |
| Publisher | Gdynia Maritime University |
Abstract
In recent years, deep neural networks (DNNs) have seen a dynamic rise in applicability across many fields, from industry, through social media, to healthcare. In this paper we focus on model interpretability for image analysis, as it is crucial when deploying such methods in real life. We compare three visualisation algorithms (GradCAM, LIME and Occlusion) that increase model interpretability, and check whether the model's assessment is based on the correct parts of the image or on its surroundings. We compared the effectiveness of these methods in four different image processing research areas: 1) dermoscopic image classification, 2) lung nodule segmentation on CT scans, 3) classification of beam images for anomaly detection in a synchrotron, and 4) classification of seat occupancy. We briefly describe the model interpretability methods, compare the achieved results and draw conclusions.
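Of the three visualisation algorithms named in the abstract, occlusion is the simplest to illustrate: a masking patch is slid over the input image, and the drop in the model's class score at each position indicates how much the model relies on that region. Below is a minimal NumPy sketch of the general technique, not the paper's implementation; the `model` callable, patch size and fill value are illustrative assumptions.

```python
import numpy as np

def occlusion_map(model, image, patch=8, stride=8, fill=0.0):
    """Occlusion sensitivity sketch (not the paper's code).

    model : callable mapping an HxW array to a scalar class score (assumed)
    image : 2-D NumPy array (grayscale, for simplicity)
    Returns a heatmap whose high values mark regions the model relies on.
    """
    h, w = image.shape
    base = model(image)  # reference score on the unoccluded image
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # mask one patch
            heat[i, j] = base - model(occluded)        # score drop = importance
    return heat
```

With a toy scoring function that only looks at one quadrant of the image, the heatmap peaks exactly where occlusion removes the signal, which is the sanity check the paper's comparison performs at scale on real classifiers.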