Publication details
Bibliographic description
Memory-efficient graph convolutional networks for object classification and detection with event cameras / Kamil Jeziorek, Andrea Pinna, Tomasz KRYJAK // In: SPA 2023 : Signal Processing Algorithms, Architectures, Arrangements, and Applications : Poznan, 20th - 22nd September 2023 / IEEE The Institute of Electrical and Electronics Engineers Inc., [etc.]. — [Piscataway] : IEEE, [2023]. — (Signal Processing Algorithms, Architectures, Arrangements, and Applications Conference Proceedings ; ISSN 2326-0262). — ISBN: 979-8-3503-0498-5. — Pp. 160-165. — Bibliography p. 165, Abstract. — T. Kryjak - additional affiliation: Sorbonne Universite, France
Authors (3)
- Jeziorek Kamil (AGH)
- Pinna Andrea
- Kryjak Tomasz (AGH)
Keywords
Bibliometric data

| Field | Value |
|---|---|
| BaDAP ID | 150128 |
| Date added to BaDAP | 2023-12-18 |
| Source text | URL |
| DOI | 10.23919/SPA59660.2023.10274464 |
| Year of publication | 2023 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Journal/series | Signal Processing Algorithms, Architectures, Arrangements, and Applications Conference Proceedings |
Abstract
Recent advances in event camera research emphasize processing data in its original sparse form, which allows the exploitation of unique features such as high temporal resolution, high dynamic range, low latency, and resistance to image blur. One promising approach to analyzing event data is graph convolutional networks (GCNs). However, current research in this domain primarily focuses on optimizing computational costs, neglecting the associated memory costs. In this paper, we consider both factors together in order to achieve satisfactory results with relatively low model complexity. To this end, we performed a comparative analysis of different graph convolution operations, considering factors such as execution time, the number of trainable model parameters, data format requirements, and training outcomes. Our results show a 450-fold reduction in the number of parameters for the feature extraction module and a 4.5-fold reduction in the size of the data representation, while maintaining a classification accuracy of 52.3%, which is 6.3% higher than that of the operation used in state-of-the-art approaches. To further evaluate performance, we implemented an object detection architecture and evaluated it on the N-Caltech101 dataset. It achieved 53.7% mAP@0.5 at an execution rate of 82 graphs per second.
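To make the idea concrete: the approach described above treats an event stream as a graph, with events as nodes and edges between events that are close in space and time, and then applies graph convolutions over that structure. The following is a minimal, self-contained sketch of that pipeline, not the paper's actual method; the radius, time window, mixing weights, and the simple mean-aggregation update are all illustrative assumptions.

```python
import math

# Toy event stream: each event is (x, y, t, polarity).
# Real event cameras emit millions of such tuples per second.
events = [
    (10, 12, 0.001, 1),
    (11, 12, 0.002, 1),
    (10, 13, 0.003, -1),
    (40, 40, 0.004, 1),   # spatially distant event -> stays isolated
]

def build_graph(events, radius=3.0, dt=0.005):
    """Connect pairs of events that are close in both space and time.
    radius and dt are illustrative thresholds, not values from the paper."""
    edges = []
    for i, (xi, yi, ti, _) in enumerate(events):
        for j, (xj, yj, tj, _) in enumerate(events):
            if i != j \
               and math.hypot(xi - xj, yi - yj) <= radius \
               and abs(ti - tj) <= dt:
                edges.append((i, j))
    return edges

def graph_conv(events, edges, w_self=0.5, w_neigh=0.5):
    """One message-passing step: mix each node's polarity feature with
    the mean polarity of its neighbours (a stand-in for a learned layer)."""
    feats = [float(p) for (_, _, _, p) in events]
    out = []
    for i in range(len(events)):
        neigh = [feats[j] for (src, j) in edges if src == i]
        agg = sum(neigh) / len(neigh) if neigh else 0.0
        out.append(w_self * feats[i] + w_neigh * agg)
    return out

edges = build_graph(events)
features = graph_conv(events, edges)
```

In a trained GCN the fixed mixing weights would be replaced by learned parameter matrices, and the choice of convolution operation at this step is exactly what the paper compares in terms of execution time, parameter count, and memory footprint.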