Publication details
Bibliographic description
Evaluating uncertainty quantification in medical image segmentation: a multi-dataset, multi-algorithm study / Nyaz Jalal, Małgorzata Śliwińska, Wadim Wojciechowski, Iwona Kucybała, Miłosz Rozynek, Kamil Krupa, Patrycja Matusik, Jarosław Jarczewski, Zbisław Tabor // Applied Sciences (Basel) [electronic document]. — Electronic journal; ISSN 2076-3417. — 2024, vol. 14, iss. 21, art. no. 10020, pp. 1-25. — System requirements: Adobe Reader. — Bibliography pp. 24-25, abstract. — Available online since: 2024-11-02
Authors (9)
- Jalal Nyaz (AGH)
- Śliwińska Małgorzata (AGH)
- Wojciechowski Wadim
- Kucybała Iwona
- Rozynek Miłosz
- Krupa Kamil
- Matusik Patrycja
- Jarczewski Jarosław
- Tabor Zbisław (AGH)
Keywords
Bibliometric data
| BaDAP ID | 160754 |
|---|---|
| Date added to BaDAP | 2025-07-02 |
| Source text | URL |
| DOI | 10.3390/app142110020 |
| Year of publication | 2024 |
| Publication type | journal article |
| Open access | |
| Creative Commons | |
| Journal/series | Applied Sciences (Basel) |
Abstract
Deep learning is revolutionizing various scientific fields, with medical applications at the forefront. One key focus is automating image segmentation, a process crucial in many clinical services. However, medical images are often ambiguous and challenging even for experts. To address this, reliable models need to quantify their uncertainty, allowing physicians to understand the model’s confidence in its segmentation. This paper explores how the performance and uncertainty of a model are influenced by the number of annotations per input sample. We examine the effects of both single and multiple manual annotations on various deep learning architectures. To tackle this question, we employ three widely recognized deep learning architectures and evaluate them across four publicly available datasets. Furthermore, we explore the effect of dropout rate on Monte Carlo models by examining uncertainty models with dropout rates of 20%, 40%, 60%, and 80%. Subsequently, we evaluate the models using various measurement metrics. The findings reveal that the influence of multiple annotations varies significantly across datasets. Additionally, we observe that the dropout rate has minimal or no impact on the model’s performance unless there is a substantial loss of training signal, primarily evident in the 80% dropout rate scenario.
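The Monte Carlo dropout approach mentioned in the abstract can be illustrated with a minimal, framework-free sketch: dropout is kept active at inference time, the model is run many times with random dropout masks, and the spread of the predictions is taken as the uncertainty estimate. The toy single-layer "model", the weights, and the function name below are illustrative assumptions, not the paper's actual architectures.

```python
import random
import statistics

def mc_dropout_predict(weights, x, p, passes=200, seed=0):
    """Toy Monte Carlo dropout: run `passes` stochastic forward passes of a
    single linear unit, zeroing each weight with probability `p` (inverted
    dropout, so the expected output is unchanged), and return the mean
    prediction together with its variance as an uncertainty estimate."""
    rng = random.Random(seed)
    preds = []
    for _ in range(passes):
        # Dropout mask: keep each weight with prob. 1 - p, rescale by 1/(1-p).
        masked = [w * (0.0 if rng.random() < p else 1.0 / (1.0 - p))
                  for w in weights]
        preds.append(sum(w * xi for w, xi in zip(masked, x)))
    mean = statistics.fmean(preds)
    var = statistics.pvariance(preds)  # predictive variance = uncertainty
    return mean, var
```

Because inverted dropout rescales the surviving weights, the mean prediction stays close to the deterministic output at any dropout rate, while the predictive variance grows with `p`, which is the kind of behaviour the study probes at the 20%, 40%, 60%, and 80% settings.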