Publication details

Bibliographic description

Training neural networks on high-dimensional data using random projection / Piotr Iwo WÓJCIK, Marcin KURDZIEL // Pattern Analysis and Applications ; ISSN 1433-7541. — 2019 — vol. 22 iss. 3, pp. 1221–1231. — Bibliography pp. 1230–1231, Abstract. — Published online: 2018-03-19


Authors (2)

Piotr Iwo WÓJCIK
Marcin KURDZIEL

Keywords

neural networks; sparse data; random projection; high dimensional data

Bibliometric data

BaDAP ID: 122861
Added to BaDAP: 2019-07-18
Source text: URL
DOI: 10.1007/s10044-018-0697-0
Year of publication: 2019
Publication type: journal article
Open access: yes
Creative Commons
Journal/series: Pattern Analysis and Applications

Abstract

Training deep neural networks (DNNs) on high-dimensional data with no spatial structure poses a major computational problem. It implies a network architecture with a huge input layer, which greatly increases the number of weights, often making the training infeasible. One solution to this problem is to reduce the dimensionality of the input space to a manageable size, and then train a deep network on a representation with fewer dimensions. Here, we focus on performing the dimensionality reduction step by randomly projecting the input data into a lower-dimensional space. Conceptually, this is equivalent to adding a random projection (RP) layer in front of the network. We study two variants of RP layers: one where the weights are fixed, and one where they are fine-tuned during network training. We evaluate the performance of DNNs with input layers constructed using several recently proposed RP schemes. These include: Gaussian, Achlioptas’, Li’s, subsampled randomized Hadamard transform (SRHT) and Count Sketch-based constructions. Our results demonstrate that DNNs with RP layer achieve competitive performance on high-dimensional real-world datasets. In particular, we show that SRHT and Count Sketch-based projections provide the best balance between the projection time and the network performance.
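The abstract's idea of an RP layer can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal NumPy illustration of two of the projection schemes the abstract names: a dense Gaussian projection and a Count Sketch-based projection (function names and parameters here are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_rp(X, k):
    """Dense Gaussian random projection from d to k dimensions.
    Entries drawn from N(0, 1/k) approximately preserve pairwise
    distances (Johnson-Lindenstrauss lemma)."""
    d = X.shape[1]
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R

def count_sketch_rp(X, k):
    """Count Sketch projection: each input dimension is hashed to a
    single output dimension with a random sign, so projection time is
    proportional to the number of nonzeros in X."""
    d = X.shape[1]
    h = rng.integers(0, k, size=d)       # target bucket per input dim
    s = rng.choice([-1.0, 1.0], size=d)  # random sign per input dim
    Y = np.zeros((X.shape[0], k))
    for j in range(d):
        Y[:, h[j]] += s[j] * X[:, j]
    return Y

X = rng.normal(size=(100, 10000))  # high-dimensional input, no spatial structure
Z = gaussian_rp(X, 256)            # reduced representation fed to the DNN
print(Z.shape)                     # (100, 256)
```

In the fixed-weight variant studied in the paper, the projection matrix stays constant; in the fine-tuned variant it is treated as an ordinary trainable weight matrix of the first network layer and updated during training.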
