Publication details

Bibliographic description

Vulnerability to one-pixel attacks of neural network architectures in medical image classification / Wiktoria Tajak, Karolina Nurzyńska, Adam PIÓRKOWSKI // Bio-Algorithms and Med-Systems / Jagiellonian University. Medical College ; ISSN 1895-9091. — 2025 — vol. 21 no. 1, pp. 58–70. — Bibliography pp. 69–70, Abstract. — Available online since: 2025-10-28

Authors (3)

Keywords

one-pixel attacks; artificial intelligence; adversarial attacks; convolutional neural networks; medical imaging

Bibliometric data

BaDAP ID: 164241
Date added to BaDAP: 2026-01-12
Source text: URL
DOI: 10.5604/01.3001.0055.3261
Year of publication: 2025
Publication type: journal article
Open access: yes
Creative Commons
Journal/series: Bio-Algorithms and Med-Systems

Abstract

Objective: The use of neural networks for disease classification based on medical imaging is susceptible to variations in results caused by even a single-pixel change, a phenomenon known as a one-pixel attack, which should be examined qualitatively and quantitatively. Methods: For an extended dataset of brain MRI images representing four diagnoses, the networks VGG-16, ResNet-50, DenseNet-121, MobileNetV2, EfficientNet-B0, NASNetMobile, and ViT Base were implemented. Each model was trained three times on 96 × 96 inputs, with the best-performing trial selected for adversarial testing (Phase 1). The three most robust models from Phase 1 (VGG-16, MobileNetV2, EfficientNet-B0) were then retrained on 224 × 224 inputs to assess the effect of higher resolution on susceptibility (Phase 2). The susceptibility of the diagnosis to change under a single bright-pixel alteration of the input image was assessed, and the average number of vulnerable pixels (ANVP) per image was computed. Results: At 96 × 96 resolution, the least vulnerable model was MobileNetV2 (ANVP: 20.45, susceptibility: 0.22%). This was followed by ViT Base (22.20, 0.24%), EfficientNet-B0 (38.55, 0.42%), DenseNet-121 (43.52, 0.47%), ResNet-50 (69.11, 0.75%), and VGG-16 (78.66, 0.85%). The most vulnerable was NASNetMobile (119.52, 1.30%). At 224 × 224 resolution, robustness further improved for EfficientNet-B0 (37.53, 0.07%) and MobileNetV2 (49.51, 0.10%), while VGG-16 remained less stable (99.44, 0.20%). Conclusions: Implementing neural-network-based disease classification from medical imaging may pose a risk of misinterpretation due to changes in the data that are irrelevant to the diagnosis, even though they are clearly noticeable to a human.
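The scan described in the Methods section can be illustrated with a minimal sketch: every pixel of an image is set to maximum brightness in turn, a pixel counts as "vulnerable" if the predicted class flips, and ANVP is the mean vulnerable-pixel count over the test images, with susceptibility expressed as ANVP divided by the pixel count. This is an assumption-based illustration, not the authors' code; `predict` is a hypothetical stand-in for any trained classifier.

```python
def count_vulnerable_pixels(image, predict, bright=255):
    """Count single bright-pixel edits that change the predicted class.

    `image` is a 2-D list of pixel intensities; `predict` maps an image
    to a class label. Each pixel is perturbed in place and then restored.
    """
    baseline = predict(image)
    vulnerable = 0
    for row in image:
        for x in range(len(row)):
            original = row[x]
            row[x] = bright            # perturb one pixel to max brightness
            if predict(image) != baseline:
                vulnerable += 1        # this pixel position flips the label
            row[x] = original          # restore the image
    return vulnerable


def anvp_and_susceptibility(images, predict):
    """ANVP = mean vulnerable pixels per image; susceptibility = ANVP / pixels (%)."""
    counts = [count_vulnerable_pixels(img, predict) for img in images]
    anvp = sum(counts) / len(counts)
    n_pixels = len(images[0]) * len(images[0][0])
    return anvp, 100.0 * anvp / n_pixels
```

Under this definition the reported figures are internally consistent: e.g. MobileNetV2's ANVP of 20.45 on 96 × 96 inputs (9216 pixels) gives 20.45 / 9216 ≈ 0.22%, and EfficientNet-B0's 37.53 on 224 × 224 inputs (50176 pixels) gives ≈ 0.07%, matching the Results.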

Publications that may interest you

article
#65710 — Date added: 15.05.2012
Neural networks for medical image processing / Tomasz PIĘCIAK, Joanna JAWOREK, Marek GORGOŃ // Bio-Algorithms and Med-Systems / Jagiellonian University. Medical College ; ISSN 1895-9091. — 2011 — vol. 7 no. 4, pp. 91–100. — Bibliography pp. 99–100, Abstract.
book chapter
#164727 — Date added: 15.12.2025
Improved DeepFool: efficient adversarial attacks via optimisation and refinement / Łukasz MIKOŁAJCZYK, Piotr DUDA, Robert NOWICKI, Rafał SCHERER // In: ISD2025 [electronic document] : [33rd international conference on Information Systems Development] : September 3-5, 2025, Belgrade, Serbia : empowering the interdisciplinary role of ISD in addressing contemporary issues in digital transformation: how data science and generative AI contributes to ISD? : proceedings / eds. I. Luković, [et al.]. — Windows version. — Text data. — Gdańsk : University of Gdańsk ; Belgrade : University of Belgrade, 2025. — (Proceedings of the International Conference on Information Systems Development ; ISSN 2938-5202). — e-ISBN: 978-83-972632-1-5. — Pp. [1–11]. — System requirements: Adobe Reader. — Access mode: https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1741&con... [2025-12-04]. — Bibliography pp. [10–11], Abstract. — Ł. Mikołajczyk, R. Nowicki, R. Scherer — additional affiliation: Czestochowa University of Technology Faculty of Computer Science and Artificial Intelligence, Czestochowa, Poland ; Center of Excellence in Artificial Intelligence