Publication details
Bibliographic description
Feature significance in wide neural networks / Janusz A. Starzyk, Rafał Niemiec, Adrian Horzyk // In: IEEE SSCI 2019 [electronic document] : 2019 IEEE Symposium Series on Computational Intelligence : December 6–9, 2019, Xiamen, China. — Windows version. — Text data. — [Piscataway] : IEEE, [2019]. — ISBN: 978-172812485-8; e-ISBN: 978-1-7281-2484-1. — Pp. 908–915. — System requirements: Adobe Reader. — Access mode: https://ieeexplore-1ieee-1org-1000047x200e6.wbg2.bg.agh.edu.p... [2020-03-25]. — Bibliography p. 915; abstract. — Also available on flash drive. — Page range in the Scopus database: 909–916
Authors (3)
- Starzyk Janusz A.
- Niemiec Rafał
- Horzyk Adrian (AGH)
Keywords
Bibliometric data

| ID BaDAP | 126619 |
|---|---|
| Date added to BaDAP | 2020-01-07 |
| DOI | 10.1109/SSCI44817.2019.9002711 |
| Publication year | 2019 |
| Publication type | conference proceedings (authored) |
| Open access | |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Conference | 2019 IEEE Symposium Series on Computational Intelligence |
Abstract
Wide neural networks were recently proposed as a less costly alternative to deep neural networks. In this paper, we analyze the properties of wide neural networks with respect to feature selection and feature significance. We compared the random selection of weights in the hidden layer to a selection based on radial basis functions. Wide neural networks were also compared with fully connected cascade networks. Feature significance was introduced as a measure to compare various feature selection techniques. Another performance measure introduced in this paper, incremental feature significance, determines the level of improvement that results from selecting only some features, which are added to the existing features, rather than replacing one set of features with another. In both cases, we can also estimate the number of features saved by replacing the original features with the selected ones for which recognition levels improve. This approach can be applied to wide networks that use feature selection methods other than those analyzed in this paper, such as a k-nearest neighbor or an autoencoder. © 2019 IEEE.
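The abstract's two ingredients, a wide network with randomly selected hidden-layer weights and an incremental measure of how much each added feature improves recognition, can be illustrated with a minimal sketch. The paper's exact definitions of feature significance are not reproduced here; this is a hypothetical NumPy illustration in which a random nonlinear hidden layer feeds a least-squares readout, and each candidate feature is scored by the accuracy gained when it is added to an existing feature set.

```python
# Hypothetical sketch: a "wide" network with random hidden-layer weights
# (an ELM-style construction) and an illustrative incremental score per
# hidden feature. Function names and the toy data are assumptions, not
# taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def make_toy_data(n=200):
    # Two Gaussian blobs with labels -1 / +1.
    X0 = rng.normal(-1.0, 1.0, size=(n // 2, 2))
    X1 = rng.normal(+1.0, 1.0, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])
    return X, y

def hidden_features(X, W, b):
    # Random nonlinear features of the wide hidden layer: h = tanh(XW + b).
    return np.tanh(X @ W + b)

def readout_accuracy(H, y):
    # Ridge-regularized least-squares readout, sign-thresholded.
    beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)
    return float(np.mean(np.sign(H @ beta) == y))

X, y = make_toy_data()
n_hidden = 20
W = rng.normal(size=(2, n_hidden))   # randomly selected hidden weights
b = rng.normal(size=n_hidden)
H = hidden_features(X, W, b)

# Accuracy with an initial set of 5 random features...
base = readout_accuracy(H[:, :5], y)
# ...and the accuracy gain from adding each remaining feature to that set
# (an *incremental* notion of feature usefulness).
incremental = [readout_accuracy(H[:, list(range(5)) + [j]], y) - base
               for j in range(5, n_hidden)]

print(f"base accuracy: {base:.2f}")
print(f"best incremental gain: {max(incremental):+.3f}")
```

Swapping `hidden_features` for radial basis functions centered on training samples, one of the alternatives the abstract compares against random weights, only changes how `H` is built; the incremental scoring loop stays the same.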