Publication details

Bibliographic description

Power-of-two quantization for low bitwidth and hardware compliant neural networks / Dominika PRZEWŁOCKA-RUS, Syed Shakib Sarwar, H. Ekin Sumbul, Yuecheng Li, Barbara De Salvo // In: tinyML research symposium 2022 [Electronic document] : 28 March 2022, San Jose. — Windows version. — Text data. — [USA : tinyML Foundation], cop. 2022. — Pp. [1–7]. — System requirements: Adobe Reader. — Access mode: https://cms.tinyml.org/wp-content/uploads/talks2022/2203.0502... [2022-06-23]. — Bibliogr. p. [7], Abstr. — The page additionally links to the presentation: https://www.youtube.com/watch?v=doYRnyoGSvc [2022-06-23]

Authors (5)

Dominika PRZEWŁOCKA-RUS, Syed Shakib Sarwar, H. Ekin Sumbul, Yuecheng Li, Barbara De Salvo

Keywords

non uniform quantization, hardware design, neural networks, logarithmic quantization

Bibliometric data

BaDAP ID: 140733
Date added to BaDAP: 2022-07-01
Year of publication: 2022
Publication type: conference materials (authored)
Open access: yes

Abstract

Deploying Deep Neural Networks in low-power embedded devices for real-time constrained applications requires optimization of the memory and computational complexity of the networks, usually by quantizing the weights. Most existing works employ linear quantization, which causes considerable accuracy degradation for weight bit widths lower than 8. Since the distribution of weights is usually non-uniform (with most weights concentrated around zero), other methods, such as logarithmic quantization, are more suitable, as they preserve the shape of the weight distribution more precisely. Moreover, using base-2 logarithmic representation allows optimizing the multiplication by replacing it with bit shifting. In this paper, we explore non-linear quantization techniques for exploiting lower bit precision and identify favorable hardware implementation options. We developed a Quantization Aware Training (QAT) algorithm that allowed training of low-bit-width Power-of-Two (PoT) networks and achieved accuracies on par with state-of-the-art floating point models for different tasks. We explored PoT weight encoding techniques and investigated hardware designs of MAC units for three different quantization schemes - uniform, PoT and Additive-PoT (APoT) - to show the increased efficiency of the proposed approach. Ultimately, the experiments showed that for low-bit-width precision, non-uniform quantization performs better than uniform quantization, and at the same time, PoT quantization vastly reduces the computational complexity of the neural network.
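
To make the bit-shifting claim concrete, the following is a minimal Python/NumPy sketch of power-of-two weight quantization. It is not the QAT algorithm from the paper; the function name, the symmetric bit layout, and the normalization assumption are all illustrative.

import numpy as np

def pot_quantize(w, bits=4):
    """Round each weight to the nearest power of two, sign preserved.

    Illustrative layout, not the paper's exact encoding: one sign bit and
    2**(bits - 1) - 1 exponent levels from 0 down to min_exp, assuming
    weights are normalized so that |w| <= 1.
    """
    sign = np.sign(w)
    exp = np.round(np.log2(np.abs(w) + 1e-12))  # nearest exponent in the log2 domain
    min_exp = -(2 ** (bits - 1) - 2)            # smallest representable exponent
    exp = np.clip(exp, min_exp, 0)
    return sign * 2.0 ** exp, exp.astype(int)

# With PoT weights, a multiplication reduces to a bit shift on integer
# activations: x * 2**e == x << e for e >= 0 and x >> -e for e < 0.
x = 96                                          # example integer activation
_, e = pot_quantize(np.array([0.26]), bits=4)   # 0.26 rounds to 2**-2 = 0.25
shifted = x >> -int(e[0]) if e[0] < 0 else x << int(e[0])
print(shifted)                                  # 24, i.e. 96 * 0.25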

Publications that may interest you

book chapter
#145407 (date added: 8.3.2023)
Energy efficient hardware acceleration of neural networks with power-of-two quantisation / Dominika PRZEWŁOCKA-RUS, Tomasz KRYJAK // In: Computer Vision and Graphics : proceedings of the International Conference on Computer Vision and Graphics ICCVG 2022 : [19–21 September 2022, Warsaw] / eds. Leszek J. Chmielewski, Arkadiusz Orłowski. — Cham : Springer Nature Switzerland AG, cop. 2023. — (Lecture Notes in Networks and Systems ; ISSN 2367-3370 ; LNNS 598). — ISBN: 978-3-031-22024-1; e-ISBN: 978-3-031-22025-8. — Pp. 225–236. — Bibliogr., Abstr. — Available online from: 2023-02-11
book chapter
#123312 (date added: 4.10.2019)
Unveiling the potential of Graph Neural Networks for network modeling and optimization in SDN / Krzysztof RUSEK, José Suárez-Varela, Albert Mestres, Pere Barlet-Ros, Albert Cabellos-Aparicio // In: SOSR'19 [Electronic document] : proceedings of the 2019 ACM Symposium On SDN Research : San Jose, USA, April 3–4, 2019. — Windows version. — Text data. — [USA : ACM], [2019]. — e-ISBN: 978-1-4503-6710-3. — Pp. 140–151. — System requirements: Adobe Reader. — Bibliogr. p. 151, Abstr.