Publication details
Bibliographic description
Hardware-accelerated event-graph neural networks for low-latency time-series classification on SoC FPGA / Hiroshi Nakano, Krzysztof BŁACHUT, Kamil JEZIOREK, Piotr WZOREK, Manon Dampfhoffer, Thomas Mesquida, Hiroaki Nishi, Tomasz KRYJAK, Thomas Dalgaty // In: Applied Reconfigurable Computing : architectures, tools, and applications : 21st international symposium, ARC 2025 : Seville, Spain, April 9–11, 2025 : proceedings / eds. Roberto Giorgi, [et al.]. — Cham : Springer, cop. 2025. — (Lecture Notes in Computer Science ; ISSN 0302-9743 ; LNCS 15594). — ISBN: 978-3-031-87994-4; e-ISBN: 978-3-031-87995-1. — pp. 51–68. — Bibliography, Abstract. — Available online from: 2025-04-04
Authors (9)
- Nakano Hiroshi
- Błachut Krzysztof (AGH)
- Jeziorek Kamil (AGH)
- Wzorek Piotr (AGH)
- Dampfhoffer Manon
- Mesquida Thomas
- Nishi Hiroaki
- Kryjak Tomasz (AGH)
- Dalgaty Thomas
Bibliometric data
| Field | Value |
|---|---|
| BaDAP ID | 160129 |
| Date added to BaDAP | 2025-06-02 |
| DOI | 10.1007/978-3-031-87995-1_4 |
| Year of publication | 2025 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Springer |
| Journal/series | Lecture Notes in Computer Science |
Abstract
As the quantities of data recorded by embedded edge sensors grow, so too does the need for intelligent local processing. Such data often comes in the form of time-series signals, based on which real-time predictions can be made locally using an AI model. However, a hardware-software approach capable of making low-latency predictions with low power consumption is required. In this paper, we present a hardware implementation of an event-graph neural network for time-series classification. We leverage an artificial cochlea model to convert the input time-series signals into a sparse event-data format that allows the event-graph to drastically reduce the number of calculations relative to other AI methods. We implemented the design on a SoC FPGA and applied it to the real-time processing of the Spiking Heidelberg Digits (SHD) dataset to benchmark our approach against competitive solutions. Our method achieves a floating-point accuracy of 92.7% on the SHD dataset for the base model, which is only 2.4% and 2% less than the state-of-the-art models with over 10× and 67× fewer model parameters, respectively. It also outperforms FPGA-based spiking neural network implementations by 19.3% and 4.5%, achieving 92.3% accuracy for the quantised model while using fewer computational resources and reducing latency.