Publication details
Bibliographic description
Efficiently parallelized associative inference of associative graph data neural networks / Adam SZRETER, Adrian HORZYK // In: Artificial Intelligence and Soft Computing : 24th International Conference, ICAISC 2025 : Zakopane, Poland, June 22–26, 2025 : proceedings, Pt. 3 / eds. Leszek Rutkowski, Rafał Scherer, Marcin Korytkowski, Witold Pedrycz, Ryszard Tadeusiewicz, Jacek M. Zurada. — Cham : Springer Nature Switzerland, cop. 2026. — (Lecture Notes in Computer Science ; ISSN 0302-9743. Lecture Notes in Artificial Intelligence ; 15950). — ISBN: 978-3-032-03710-7; e-ISBN: 978-3-032-03711-4. — Pp. 339–352. — Bibliogr., abstr. — Available online from: 2025-11-01
Authors (2)
Keywords
Bibliometric data
| BaDAP ID | 164450 |
|---|---|
| Date added to BaDAP | 2026-01-22 |
| DOI | 10.1007/978-3-032-03711-4_28 |
| Year of publication | 2026 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Springer |
| Conference | International Conference on Artificial Intelligence and Soft Computing 2025 |
| Journal/series | Lecture Notes in Computer Science |
Abstract
Associative Graph Data Structures (AGDS) enable efficient data inference by directly encoding complex relationships, achieving access times that scale logarithmically or even remain constant. However, the inherently parallel nature of AGDS is undermined when implemented on sequential processors, particularly during associative inference operations. This paper introduces the Associative Graph Data Neural Network (AGDNN), a parallelized extension of AGDS. It explores multiple parallelization strategies, including GPU acceleration, the MPI programming model, and Erlang-based concurrency. We select Erlang as the most effective approach due to its lightweight process model and efficient message passing. A highly concurrent Erlang-based algorithm for associative inference is proposed, addressing key challenges such as efficiently determining inference termination. The reference implementation demonstrates near-linear speedup across multiple cores. Experimental results confirm that AGDNN significantly enhances inference efficiency, paving the way for scalable, high-performance associative reasoning in graph-based neural models.
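The paper's Erlang implementation is not reproduced here. As a minimal illustrative sketch of the general idea, the following Go program models actor-style spreading activation over a graph, with one goroutine per node exchanging messages over channels, and detects inference termination by counting in-flight messages (when the counter drops to zero, no activation remains to propagate). All names, the decay/threshold scheme, and the termination method are assumptions for illustration, not the authors' AGDNN algorithm; Go's goroutines stand in for Erlang's lightweight processes.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// msg carries an activation value delivered to a node's inbox.
type msg struct {
	act float64
}

// propagate runs actor-style spreading activation over an undirected
// graph given as an adjacency list. Each node is a goroutine; a shared
// counter of in-flight messages detects quiescence (termination).
func propagate(adj [][]int, seed int, decay, threshold float64) []float64 {
	n := len(adj)
	inbox := make([]chan msg, n)
	for i := range inbox {
		inbox[i] = make(chan msg, 1024)
	}
	acts := make([]float64, n)
	var mu sync.Mutex
	var pending int64 // messages sent but not yet fully processed
	done := make(chan struct{})

	send := func(to int, a float64) {
		atomic.AddInt64(&pending, 1) // count before enqueueing
		inbox[to] <- msg{a}
	}

	for i := 0; i < n; i++ {
		go func(id int) {
			for m := range inbox[id] {
				mu.Lock()
				acts[id] += m.act // accumulate received activation
				mu.Unlock()
				// Forward decayed activation while it is still significant.
				if next := m.act * decay; next >= threshold {
					for _, nb := range adj[id] {
						send(nb, next)
					}
				}
				// Decrement only after forwarding, so the counter
				// reaches zero exactly when no message is in flight.
				if atomic.AddInt64(&pending, -1) == 0 {
					close(done)
				}
			}
		}(i)
	}

	send(seed, 1.0) // inject the initial stimulus
	<-done          // quiescence: inference has terminated
	for i := range inbox {
		close(inbox[i])
	}
	return acts
}

func main() {
	adj := [][]int{{1}, {0, 2}, {1}} // path graph 0-1-2
	acts := propagate(adj, 0, 0.5, 0.3)
	fmt.Printf("%.2f\n", acts) // [1.00 0.50 0.00]
}
```

The message-counting scheme sketched here is one standard way to decide that a spreading-activation pass has finished without a global barrier; the thresholded decay guarantees that the message count is finite, so the counter must eventually reach zero.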