Publication details
Bibliographic description
Traffic sign detection with event cameras and DCNN / Piotr WZOREK, Tomasz Kryjak // In: SPA 2022 [electronic document] : Signal Processing Algorithms, Architectures, Arrangements, and Applications : 25th IEEE SPA conference : Poznan, 21st - 22nd September 2022 : conference proceedings / IEEE The Institute of Electrical and Electronics Engineers Inc. — [Piscataway] : IEEE, cop. 2022. — (Signal Processing Algorithms, Architectures, Arrangements, and Applications Conference Proceedings ; ISSN 2326-0262). — Additional ISBN: 979-8-3503-2008-4. — e-ISBN: 978-8-3620-6542-4. — pp. 86-91. — Bibliography p. 91, Abstract. — Publication available online from: 2022-11-07. — T. Kryjak - affiliation: Silesian University of Technology, IEEE
Authors (2)
- Wzorek Piotr (AGH)
- Kryjak Tomasz
Keywords
Bibliometric data
| BaDAP ID | 143871 |
|---|---|
| Date added to BaDAP | 2022-11-29 |
| Source text | URL |
| DOI | 10.23919/SPA53010.2022.9927864 |
| Publication year | 2022 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Journal/series | Signal Processing Algorithms, Architectures, Arrangements, and Applications Conference Proceedings |
Abstract
In recent years, event cameras (DVS - Dynamic Vision Sensors) have been used in vision systems as an alternative or supplement to traditional cameras. They are characterised by high dynamic range, high temporal resolution, low latency, and reliable performance in limited lighting conditions - parameters that are particularly important in the context of advanced driver assistance systems (ADAS) and self-driving cars. In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection. To this end, we analyse different representations of the event data: event frame, event frequency, and the exponentially decaying time surface, and apply video frame reconstruction using a deep neural network called FireNet. We use the deep convolutional neural network YOLOv4 as a detector. For particular representations, we obtain a detection accuracy in the range of 86.9-88.9% mAP@0.5. The use of a fusion of the considered representations allows us to obtain a detector with higher accuracy of 89.9% mAP@0.5. In comparison, the detector for the frames reconstructed with FireNet is characterised by an accuracy of 72.67% mAP@0.5. The results obtained illustrate the potential of event cameras in automotive applications, either as standalone sensors or in close cooperation with typical frame-based cameras.
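The three event-data representations named in the abstract (event frame, event frequency, exponentially decaying time surface) can be sketched as simple per-pixel aggregations of a DVS event stream. The snippet below is an illustrative reconstruction, not code from the paper: the function name, the event tuple layout `(t, x, y, polarity)`, and the decay constant `tau` are assumptions chosen for clarity.

```python
import numpy as np

def events_to_representations(events, height, width, tau=50e3):
    """Aggregate DVS events into three 2D representations.

    events: iterable of (t, x, y, polarity) tuples, sorted by timestamp t
            (microseconds); polarity is 1 (brightness up) or 0 (down).
    Returns (event_frame, event_frequency, time_surface) arrays.
    Names and units are illustrative assumptions, not from the paper.
    """
    event_frame = np.zeros((height, width), dtype=np.int8)      # polarity of newest event per pixel
    event_freq = np.zeros((height, width), dtype=np.int32)      # number of events per pixel
    time_surface = np.zeros((height, width), dtype=np.float32)  # exponentially decayed recency
    t_ref = events[-1][0]  # reference time: timestamp of the newest event
    for t, x, y, p in events:
        event_frame[y, x] = 1 if p else -1
        event_freq[y, x] += 1
        # older events decay towards 0; the newest event at a pixel maps to exp(0) = 1
        time_surface[y, x] = np.exp(-(t_ref - t) / tau)
    return event_frame, event_freq, time_surface

# Tiny synthetic example: three events on a 4x4 sensor, tau = 100 us.
events = [(0, 1, 1, 1), (100, 1, 1, 0), (200, 2, 3, 1)]
ef, cnt, ts = events_to_representations(events, height=4, width=4, tau=100.0)
```

Stacking such maps as input channels is one plausible way to realise the representation fusion the abstract reports, since a DCNN detector such as YOLOv4 can consume them like image planes.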