Publication details

Bibliographic description

High-performance and programmable attentional graph neural networks with global tensor formulations / Maciej Besta, Paweł RENC, Robert Gerstenberger, Paolo Sylos Labini, Alexandros Ziogas, Tiancheng Chen, Lukas Gianinazzi, Florian Scheidl, Kalman Szenes, Armon Carigiet, Patrick Iff, Grzegorz Kwasniewski, Raghavendra Kanakagiri, Chio Ge, Sammy Jaeger, Jarosław WĄS, Flavio Vella, Torsten Hoefler // In: SC'23 [electronic document] : proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis : Denver, CO, USA, November 12–17, 2023. — Windows version. — Text data. — New York : Association for Computing Machinery, 2023. — e-ISBN: 979-8-4007-0109-2. — Pp. 1–14, [4], art. no. 66. — System requirements: Adobe Reader. — Access mode: https://dl.acm.org/doi/pdf/10.1145/3581784.3607067 [2023-11-15]. — Bibliography pp. 12–14. Abstract. — Available online from: 2023-11-11. — P. Renc, additional affiliation: Sano Centre for Computational Medicine, Kraków


Authors (18)

  • Besta Maciej
  • Renc Paweł (AGH)
  • Gerstenberger Robert
  • Labini Paolo Sylos
  • Ziogas Alexandros
  • Chen Tiancheng
  • Gianinazzi Lukas
  • Scheidl Florian
  • Szenes Kalman
  • Carigiet Armon
  • Iff Patrick
  • Kwasniewski Grzegorz
  • Kanakagiri Raghavendra
  • Ge Chio
  • Jaeger Sammy
  • Wąs Jarosław (AGH)
  • Vella Flavio
  • Hoefler Torsten

Keywords

Graph Neural Networks; sparse-dense tensor operations; graph attention models

Bibliometric data

BaDAP ID: 150154
Added to BaDAP: 2023-11-16
DOI: 10.1145/3581784.3607067
Publication year: 2023
Publication type: conference proceedings (author)
Open access: yes
License: Creative Commons
Publisher: Association for Computing Machinery (ACM)
Conference: International Conference for High Performance Computing, Networking, Storage and Analysis

Abstract

Graph attention models (A-GNNs), a type of Graph Neural Network (GNN), have been shown to be more powerful than simpler convolutional GNNs (C-GNNs). However, A-GNNs are more complex to program and harder to scale. To address this, we develop a novel mathematical formulation, based on tensors that group all the feature vectors, targeting both training and inference of A-GNNs. The formulation enables straightforward adoption of communication-minimizing routines, fosters optimizations such as vectorization, and allows seamless integration with established linear algebra DSLs or libraries such as GraphBLAS. Our implementation uses a data redistribution scheme explicitly developed for the sparse-dense tensor operations used heavily in GNNs, and fusion optimizations that further minimize memory usage and communication cost. We ensure theoretical asymptotic reductions in communicated data compared to the established message-passing GNN paradigm. Finally, we provide excellent scalability and speedups of up to 4–5x over modern libraries such as Deep Graph Library.
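The abstract's central idea, casting attention-based propagation as a combination of dense GEMM and sparse-dense (SpMM) operations over tensors that group all feature vectors, can be sketched in a few lines. This is an illustrative single-node sketch of a GAT-style attention layer, not the paper's actual distributed formulation; the toy graph, the projection matrix `W`, and the attention vectors `a_src`/`a_dst` are all assumptions made for the example.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Toy graph: 5 nodes, sparse adjacency in CSR format.
A = sp.csr_matrix(np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=np.float64))

N, F_in, F_out = 5, 4, 3
H = rng.standard_normal((N, F_in))       # dense tensor grouping all feature vectors
W = rng.standard_normal((F_in, F_out))   # hypothetical learned projection
a_src = rng.standard_normal(F_out)       # hypothetical attention vectors
a_dst = rng.standard_normal(F_out)

Z = H @ W        # one dense GEMM projects every node's features at once
s = Z @ a_src    # per-node source attention scores
t = Z @ a_dst    # per-node destination attention scores

# Attention logits exist only on the graph's edges: a sparse matrix with the
# same nonzero pattern as A, built from the edge-wise sum s_i + t_j.
rows, cols = A.nonzero()
logits = s[rows] + t[cols]
logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
E = sp.csr_matrix((np.exp(logits), (rows, cols)), shape=A.shape)

# Row-wise softmax via sparse row sums, then a single SpMM aggregates neighbors.
row_sums = np.asarray(E.sum(axis=1)).ravel()
Alpha = sp.diags(1.0 / row_sums) @ E  # normalized attention coefficients
H_out = Alpha @ Z                     # SpMM: sparse attention x dense features

assert H_out.shape == (N, F_out)
# Each row of Alpha sums to 1, i.e. a proper attention distribution.
assert np.allclose(np.asarray(Alpha.sum(axis=1)).ravel(), 1.0)
```

Because every step is a standard dense or sparse-dense tensor operation, such a layer maps naturally onto linear-algebra backends (e.g. GraphBLAS-style SpMM), which is the property the formulation exploits for communication-minimizing distributed execution.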

Publications that may interest you

book chapter
Atom: neural traffic compression with spatio-temporal graph neural networks / Paul Almasan, Krzysztof RUSEK, Shihan Xiao, Xiang Shi, Xiangle Cheng, Albert Cabellos-Aparicio, Pere Barlet-Ros // In: GNNet'23 [electronic document] : proceedings of the 2nd Graph Neural Networking workshop 2023 : December 8, 2023, Paris, France. — Windows version. — Text data. — New York : The Association for Computing Machinery, cop. 2023. — e-ISBN: 979-8-4007-0448-2. — Pp. 1–6. — System requirements: Adobe Reader. — Bibliography p. 6. Abstract. — Available online from: 2023-12-05
book chapter
Novel approaches toward scalable composable workflows in hyper-heterogeneous computing environments / Jonathan Bader, James Belak, Matthew Bement, Matthew Berry, Robert Carson, Daniela Cassol, Stephen Chan, John Coleman, Kastan Day, Alejandro Duque, Kjiersten Fagnan, Jeff Froula, Shantenu Jha, Daniel S. Katz, Piotr Kica, Volodymyr Kindratenko, Edward Kirton, Ramani Kothadia, Daniel Laney, Fabian Lehmann, Ulf Leser, Sabina Lichołai, Maciej MALAWSKI, Mario Melara, Elais Player, Matt Rolchigo, Setareh Sarrafan, Seung-Jin Sul, Abdullah Syed, Lauritz Thamsen, Mikhail Titov, Matteo Turilli, Silvina Caino-Lores, Anirban Mandal // In: SC23 Workshops [electronic document] : proceedings of 2023 SC23 Workshops of the international conference on High performance computing, network, storage, and analysis : Nov 12–17, 2023, Denver, CO, [USA]. — Windows version. — Text data. — New York : The Association for Computing Machinery, cop. 2023. — e-ISBN: 979-8-4007-0785-8. — Pp. 2097–2108. — System requirements: Adobe Reader. — Bibliography p. 2108. Abstract. — Available online from: 2023-11-12. — The paper is part of WORKS23: the 18th workshop on Workflows in Support of Large-Scale Science, held in conjunction with SC23