Publication details
Bibliographic description
Color space channel evaluation for CLAHE-enhanced retinal vessel segmentation with Attention U-Net / Patrycja KWIEK, Małgorzata JAKUBOWSKA // In: BIBE 2025 [electronic document] : 2025 IEEE 25th International Conference on Bioinformatics and Bioengineering : Athens, Attiki, Greece, 6-8 November 2025 : proceedings. — Windows version. — Text data. — Piscataway : The Institute of Electrical and Electronics Engineers, cop. 2025. — (Proceedings - IEEE International Symposium on Bioinformatics and Bioengineering ; ISSN 2159-5410). — e-ISBN: 979-8-3315-5899-4. — Pp. 397–404. — System requirements: Adobe Reader. — Bibliography p. 404, abstract. — Available online since: 2025-12-11
Bibliometric data
| BaDAP ID | 164465 |
|---|---|
| Date added to BaDAP | 2025-12-12 |
| Source text | URL |
| DOI | 10.1109/BIBE66822.2025.00073 |
| Publication year | 2025 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Conference | IEEE Bioinformatics and Bioengineering 2025 |
| Journal/series | Proceedings - IEEE International Symposium on Bioinformatics and Bioengineering |
Abstract
Accurate segmentation of retinal blood vessels plays a crucial role in the early diagnosis and monitoring of systemic and ophthalmological diseases, such as diabetic retinopathy and hypertension. While many existing approaches rely on grayscale or RGB image representations, this work explores the impact of color space transformations on segmentation performance using an optimized Attention U-Net architecture. The study evaluates 19 input variations: grayscale and individual channels from six color space models—RGB, YUV, HSV, HLS, CIELab, and YCrCb. All images were preprocessed using the CLAHE algorithm to enhance local contrast and suppress background noise, which is particularly beneficial for medical images with low illumination and poor vessel visibility. A key innovation lies in identifying the most informative channel from each color space and comparing segmentation outcomes across them. The Attention U-Net was trained on 256×256 retinal image patches derived from the DRIVE dataset, using a composite loss function combining Binary Cross-Entropy (BCE) and Dice loss (weighted 0.2 and 0.8, respectively). Among all tested channels, the Y component from the YUV color space achieved the best performance, with a mean Dice coefficient of 0.879 on test patches and a maximum of 0.922. Full-resolution image evaluations further validated these results, reaching Dice scores up to 0.912 and pixel-level accuracy above 0.979. Statistical significance was rigorously assessed. A paired t-test confirmed that CLAHE preprocessing significantly improved segmentation accuracy (p < 0.0001). Additionally, a Wilcoxon signed-rank test demonstrated that models trained with the Y channel from YUV significantly outperformed grayscale-based models (p < 0.0001), with consistent improvement across all samples. The Y channel also exhibited the lowest variance among all tested channels, indicating strong robustness and generalization capability.
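Two ingredients of the pipeline above — Y-channel extraction from YUV and the 0.2·BCE + 0.8·Dice composite loss — can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the paper's training code presumably uses a deep-learning framework and a library CLAHE (e.g. OpenCV's), and all function names here are illustrative.

```python
import numpy as np

def yuv_y_channel(rgb):
    """Luma (Y) channel of an RGB image in [0, 1], via the standard
    BT.601 weights that underlie the YUV/YCrCb color spaces."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient; pred holds vessel probabilities in [0, 1]."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy, with clipping for numerical stability."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def composite_loss(pred, target, w_bce=0.2, w_dice=0.8):
    """Weighted BCE + Dice with the 0.2 / 0.8 weights given in the abstract."""
    return w_bce * bce_loss(pred, target) + w_dice * dice_loss(pred, target)
```

The heavy Dice weighting counteracts the class imbalance typical of vessel masks, where thin vessels occupy a small fraction of each 256×256 patch and a pure BCE objective would favor the background.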
This research underscores the importance of color channel selection in medical image segmentation and shows that combining optimized input representations with attention mechanisms and hybrid loss functions can yield clinically meaningful improvements. The findings provide a practical foundation for the development of more reliable, color-aware computer-aided diagnostic tools in ophthalmology and related fields.