Publication details
Bibliographic description
Explainable sparse associative self-optimizing neural networks for classification / Adrian HORZYK, Jakub Kosno, Daniel BULANDA, Janusz A. Starzyk // In: Neural Information Processing : 30th International Conference, ICONIP 2023 : Changsha, China, November 20–23, 2023 : proceedings, Pt. 9 / eds. Biao Luo, [et al.]. — Singapore : Springer Nature Singapore, cop. 2024. — (Communications in Computer and Information Science ; ISSN 1865-0929 ; vol. 1963). — ISBN: 978-981-99-8137-3; e-ISBN: 978-981-99-8138-0. — pp. 229–244. — Bibliogr., Abstr. — Available online from: 2023-11-26
Authors (4)
- Horzyk Adrian (AGH)
- Kosno Jakub (AGH)
- Bulanda Daniel (AGH)
- Starzyk Janusz A.
Keywords
Bibliometric data
| BaDAP ID | 150916 |
|---|---|
| Date added to BaDAP | 2023-12-21 |
| DOI | 10.1007/978-981-99-8138-0_19 |
| Publication year | 2024 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Springer |
| Conference | International Conference on Neural Information Processing |
| Journal/series | Communications in Computer and Information Science |
Abstract
Supervised models often suffer from a multitude of possible hyperparameter combinations, rigid non-adaptive architectures, underfitting, overfitting, the curse of dimensionality, etc. They consume many resources and slow down the optimization process. As real-world objects are related and similar, we should adapt not only network parameters but also the network structure to better represent patterns and relationships. When the network reproduces the most essential and frequent data patterns and relationships and aggregates similarities, it becomes not only efficient but also explainable and trustworthy. This paper presents a new approach that detects and represents similarities of numerical training examples to self-adapt a network structure and its parameters. Such a network facilitates classification by identifying hyperspace regions associated with the classes defined in a training dataset. Our approach demonstrates its ability to automatically reduce the input data dimension by removing features that produce distortions and do not support the classification process. The presented adaptation algorithm uses only a few optional hyperparameters and produces a sparse associative neural network structure that contextually fits any given dataset by detecting data similarities and constructing hypercuboids in data space. The explanation of these associative adaptive techniques is followed by a comparison of the classification results against other state-of-the-art models and methods.
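The abstract's central idea — identifying class-associated hypercuboid regions in the data space — can be illustrated with a minimal sketch. This is not the paper's adaptation algorithm; it is a simplified, assumption-based toy in which each class is summarized by one axis-aligned hypercuboid (per-feature min/max bounds over its training examples), and a query point is assigned to the class whose hypercuboid it lies inside or is closest to. The function names `fit_hypercuboids` and `classify` are hypothetical, introduced only for this illustration.

```python
# Illustrative sketch only (not the authors' method): per-class axis-aligned
# hypercuboids built from feature-wise min/max bounds of the training data.

def fit_hypercuboids(X, y):
    """Build one hypercuboid (per-feature min/max bounds) per class label."""
    cuboids = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        mins = [min(col) for col in zip(*rows)]
        maxs = [max(col) for col in zip(*rows)]
        cuboids[label] = (mins, maxs)
    return cuboids

def classify(cuboids, x):
    """Assign x to the class whose hypercuboid is nearest (L1 distance to the
    box surface; zero if x lies inside the hypercuboid)."""
    def distance(bounds):
        mins, maxs = bounds
        # For each coordinate: 0 inside [lo, hi], else how far outside.
        return sum(max(lo - v, 0.0, v - hi)
                   for v, lo, hi in zip(x, mins, maxs))
    return min(cuboids, key=lambda label: distance(cuboids[label]))
```

A real associative network of the kind the abstract describes would additionally adapt its structure, merge similar regions, and prune uninformative features; this sketch shows only the geometric intuition of class regions as hypercuboids.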