Publication details

Bibliographic description

Feature significance in wide neural networks / Janusz A. Starzyk, Rafał Niemiec, Adrian HORZYK // In: IEEE SSCI 2019 [electronic document] : 2019 IEEE Symposium Series on Computational Intelligence : December 6–9, 2019, Xiamen, China. — Windows version. — Text data. — [Piscataway] : IEEE, [2019]. — ISBN: 978-1-7281-2485-8; e-ISBN: 978-1-7281-2484-1. — Pp. 908–915. — System requirements: Adobe Reader. — Access mode: https://ieeexplore-1ieee-1org-1000047x200e6.wbg2.bg.agh.edu.p... [2020-03-25]. — Bibliography p. 915, Abstract. — Also available on a flash drive. — Page range in the Scopus database: 909–916


Authors (3)


Keywords

broad neural networks; incremental feature significance; feature significance

Bibliometric data

BaDAP ID: 126619
Added to BaDAP: 2020-01-07
DOI: 10.1109/SSCI44817.2019.9002711
Publication year: 2019
Publication type: conference proceedings (author)
Open access: yes
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Conference: 2019 IEEE Symposium Series on Computational Intelligence

Abstract

Wide neural networks were recently proposed as a less costly alternative to deep neural networks. In this paper, we analyze the properties of wide neural networks regarding feature selection and their significance. We compared the random selection of weights in the hidden layer to the selection based on radial basis functions. Wide neural networks were also compared with fully connected cascade networks. Feature significance was introduced as a measure to compare various feature selection techniques. Another performance measure introduced in this paper, incremental feature significance, determines the level of improvement that results from selecting only some features, which were added to the existing features, rather than replacing one set of features with another. In both cases, we can also estimate the number of features saved by replacing the original features with the selected ones for which recognition levels improve. This approach can be applied to wide networks that use feature selection methods other than those analyzed in this paper, such as a k-nearest neighbor or an autoencoder. © 2019 IEEE.
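
The abstract does not spell out how feature significance is computed, so the following is only a minimal illustrative sketch in Python of the kind of comparison it describes: a wide single-hidden-layer network whose hidden features come either from randomly selected weights or from radial basis functions, with a simple accuracy-based check of how much adding features to an existing set helps. All function names, the synthetic data, and the scoring used here are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a wide single-hidden-layer
# network whose hidden features come either from random weights or from radial
# basis functions, plus an illustrative added-vs-replaced feature comparison.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class problem (assumed data, only for demonstration).
n, d = 400, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def random_features(X, n_hidden, rng):
    """Hidden layer with randomly selected weights (wide, ELM-style network)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return np.tanh(X @ W + b)

def rbf_features(X, centers, gamma=1.0):
    """Hidden layer built from radial basis functions around given centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_readout(H, y, reg=1e-3):
    """Ridge-regularized linear output layer, solved in closed form."""
    A = H.T @ H + reg * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y)

def accuracy(H, w, y):
    return float((((H @ w) > 0.5) == y).mean())

# Compare random hidden weights with RBF-based hidden units.
H_rand = random_features(X, n_hidden=50, rng=rng)
centers = X[rng.choice(n, size=50, replace=False)]
H_rbf = rbf_features(X, centers, gamma=0.5)

for name, H in [("random", H_rand), ("rbf", H_rbf)]:
    w = fit_readout(H, y)
    print(name, "train accuracy:", accuracy(H, w, y))

# Illustrative "incremental" check: accuracy gained when extra hidden features
# are added to an existing set rather than replacing it (assumed proxy for the
# paper's incremental feature significance measure).
base = H_rand[:, :25]
extended = np.hstack([base, H_rbf[:, :25]])
for name, H in [("base", base), ("base + added", extended)]:
    w = fit_readout(H, y)
    print(name, "train accuracy:", accuracy(H, w, y))
```

In this sketch the ridge-regularized readout merely stands in for whatever output-layer training the paper uses; only the contrast between random and RBF hidden units and the added-versus-replaced feature comparison is meant to mirror the abstract.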

Publications that may interest you

Book chapter
Associative data model in search for nearest neighbors and similar patterns / Adrian HORZYK, Janusz A. Starzyk // In: IEEE SSCI 2019 [electronic document] : 2019 IEEE Symposium Series on Computational Intelligence : December 6–9, 2019, Xiamen, China. — Windows version. — Text data. — [Piscataway] : IEEE, [2019]. — ISBN: 978-1-7281-2485-8; e-ISBN: 978-1-7281-2484-1. — Pp. 932–939. — System requirements: Adobe Reader. — Access mode: https://ieeexplore-1ieee-1org-1000047x200e6.wbg2.bg.agh.edu.p... [2020-03-25]. — Bibliography p. 939, Abstract. — Also available on a flash drive. — Page range in the Scopus database: 933–940
Book chapter
Data driven detection of railway point machines failures / Iwo DOBOSZEWSKI, Simon Fossier, Christophe Marsala // In: IEEE SSCI 2019 [electronic document] : 2019 IEEE Symposium Series on Computational Intelligence : December 6–9, 2019, Xiamen, China. — Windows version. — Text data. — [Piscataway] : IEEE, [2019]. — ISBN: 978-1-7281-2485-8; e-ISBN: 978-1-7281-2484-1. — Pp. 1233–1240. — System requirements: Adobe Reader. — Access mode: https://ieeexplore-1ieee-1org-1000047x200e6.wbg2.bg.agh.edu.p... [2020-03-30]. — Bibliography p. 1240, Abstract. — I. Doboszewski, additional affiliation: Sorbonne Universite, France