Publication details

Bibliographic description

Energy efficient hardware acceleration of neural networks with power-of-two quantisation / Dominika PRZEWŁOCKA-RUS, Tomasz KRYJAK // In: Computer Vision and Graphics : proceedings of the International Conference on Computer Vision and Graphics ICCVG 2022 : [19-21 September 2022, Warsaw] / eds. Leszek J. Chmielewski, Arkadiusz Orłowski. — Cham : Springer Nature Switzerland AG, cop. 2023. — (Lecture Notes in Networks and Systems ; ISSN 2367-3370 ; LNNS 598). — ISBN: 978-3-031-22024-1; e-ISBN: 978-3-031-22025-8. — pp. 225–236. — Bibliography, abstract. — Available online from: 2023-02-11

Authors (2)

Dominika PRZEWŁOCKA-RUS, Tomasz KRYJAK

Keywords

hardware acceleration; neural networks; power of two quantization; energy efficient

Bibliometric data

BaDAP ID: 145407
Added to BaDAP: 2023-03-08
DOI: 10.1007/978-3-031-22025-8_16
Publication year: 2023
Publication type: conference proceedings (author)
Open access: yes
Publisher: Springer
Journal/series: Lecture Notes in Networks and Systems

Abstract

Deep neural networks dominate most modern vision systems, providing high performance at the cost of increased computational complexity. Since such systems are often required to operate both in real time and with minimal energy consumption (e.g., wearable devices, autonomous vehicles, edge Internet of Things (IoT) devices, sensor networks), various network optimisation techniques are used, e.g., quantisation, pruning, or dedicated lightweight architectures. Because the weights in neural network layers follow a roughly logarithmic distribution, Power-of-Two (PoT) quantisation, whose quantisation levels are likewise logarithmically distributed, provides high performance even at significantly reduced computational precision (4-bit weights and below). This method also makes it possible to replace the Multiply and ACcumulate (MAC) units typical of neural networks (performing, e.g., convolution operations) with more energy-efficient Bitshift and ACcumulate (BAC) units. In this paper, we show that a hardware neural network accelerator with PoT weights implemented on the Zynq UltraScale+ MPSoC ZCU104 SoC FPGA can be at least 1.4x more energy efficient than the uniform quantisation version. To further reduce the actual power requirement by omitting part of the computation for zero weights, we also propose a new pruning method adapted to logarithmic quantisation.
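The core idea summarised in the abstract can be illustrated in a few lines: rounding each weight to the nearest signed power of two, after which multiplication by a weight reduces to a bit shift of the (integer) activation. This is a minimal sketch, not the paper's actual quantiser; the function name `pot_quantize`, the exponent clipping range, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def pot_quantize(w, bits=4):
    """Round weights to the nearest signed power of two (illustrative sketch).

    w_q = sign(w) * 2**e, with the exponent e clipped to a range
    representable in the given number of bits (assumed range here).
    """
    sign = np.sign(w)
    # Small epsilon avoids log2(0) for exactly-zero weights.
    e = np.round(np.log2(np.abs(w) + 1e-12))
    e = np.clip(e, -(2 ** (bits - 1)), 0)
    return sign * (2.0 ** e)

# Example: 0.3 rounds to 2**-2 = 0.25.
w_q = pot_quantize(np.array([0.3]))

# With a PoT weight, the multiply in a MAC reduces to a bit shift:
# for an integer activation x, x * 2**(-3) equals x >> 3.
x = 64
shifted = x >> 3          # bitshift-and-accumulate path
multiplied = x * 2 ** -3  # conventional multiply path
assert shifted == multiplied
```

Accumulating such shifted activations over a convolution window is what the abstract calls a Bitshift and ACcumulate (BAC) unit; in hardware a barrel shifter is substantially cheaper in area and energy than a full multiplier.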

Publications that may interest you

book chapter
#145405, added: 2023-03-08
Traffic sign classification using deep and quantum neural networks / Sylwia Kuros, Tomasz KRYJAK // In: Computer Vision and Graphics : proceedings of the International Conference on Computer Vision and Graphics ICCVG 2022 : [19-21 September 2022, Warsaw] / eds. Leszek J. Chmielewski, Arkadiusz Orłowski. — Cham : Springer Nature Switzerland AG, cop. 2023. — (Lecture Notes in Networks and Systems ; ISSN 2367-3370 ; LNNS 598). — ISBN: 978-3-031-22024-1; e-ISBN: 978-3-031-22025-8. — pp. 43–55. — Bibliography, abstract. — Available online from: 2023-02-11
book chapter
#140733, added: 2022-07-01
Power-of-two quantization for low bitwidth and hardware compliant neural networks / Dominika PRZEWŁOCKA-RUS, Syed Shakib Sarwar, H. Ekin Sumbul, Yuecheng Li, Barbara De Salvo // In: tinyML research symposium 2022 [electronic document] : 28 March 2022, San Jose. — Windows version. — Text data. — [USA : tinyML Foundation], cop. 2022. — pp. [1–7]. — System requirements: Adobe Reader. — Access mode: https://cms.tinyml.org/wp-content/uploads/talks2022/2203.0502... [2022-06-23]. — Bibliography p. [7], abstract. — The page additionally links to a presentation: https://www.youtube.com/watch?v=doYRnyoGSvc [2022-06-23]