Publication details
Bibliographic description
Deep reinforcement learning for energy-efficient 6G V2X networks / Faysal MARZUK, Andres VEJAR, Piotr CHOŁDA // Electronics [Electronic document]. — Electronic journal; ISSN 2079-9292. — 2025 — vol. 14 iss. 6 art. no. 1148, pp. 1–23. — System requirements: Adobe Reader. — Bibliography pp. 21–23, abstract. — Available online since: 2025-03-14
Authors (3)
Keywords
Bibliometric data
| BaDAP ID | 159050 |
|---|---|
| Date added to BaDAP | 2025-05-05 |
| Source text | URL |
| DOI | 10.3390/electronics14061148 |
| Publication year | 2025 |
| Publication type | journal article |
| Open access | |
| Creative Commons | |
| Journal/series | Electronics |
Abstract
The deployment of 6G vehicle-to-everything (V2X) networks is a challenging task given the 6G requirements of ultra-high data rates along with ultra-low latency levels. For intelligent transportation systems (ITSs), V2X communications involve a high density of user equipment (UE), vehicles, and next-generation Node-Bs (gNBs). Therefore, optimal management of the current network infrastructure plays a key role in minimizing energy and latency. Optimal resource allocation based on linear programming does not scale to the scenarios required by 6G V2X communications and is not suitable for online allocation. To overcome these limitations, deep reinforcement learning (DRL) is a promising approach because it integrates directly with online allocation models. In this work, we investigate the problem of optimal resource allocation in 6G V2X networks, where ITSs are deployed to execute tasks offloaded by vehicles subject to data rate and latency requirements. We apply policy-optimization-based DRL to jointly reduce the number of active gNBs and the latency of the tasks offloaded by the vehicles. The model is analyzed for several ITS scenarios to investigate the observed performance and the advantages of the proposed policy-optimization allocation for 6G V2X networks. Our evaluation results show that the proposed DRL-based algorithm produces dynamic solutions that approximate the optimal ones at reasonable rates of energy consumption. Our numerical results also indicate that the DRL-based solutions yield comparably balanced energy consumption across different scenarios.
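The policy-optimization idea sketched in the abstract can be illustrated with a minimal toy example: a REINFORCE-style loop that learns a stochastic policy for assigning offloaded vehicle tasks to gNBs, with a reward that jointly penalizes the number of active gNBs (energy) and a load-dependent latency term. Everything below (the per-gNB latency factors, the quadratic congestion model, the reward weights, and all constants) is an illustrative assumption, not the paper's actual formulation.

```python
# Toy sketch of policy-optimization DRL for task-to-gNB assignment.
# All constants and the latency model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_GNB = 4                                         # candidate gNBs (assumed)
N_TASKS = 6                                       # offloaded tasks per episode (assumed)
BASE_LATENCY = rng.uniform(1.0, 3.0, size=N_GNB)  # assumed per-gNB latency factor

def softmax(z):
    z = z - z.max()                               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def episode_reward(assignment):
    """Negative cost: penalize active gNBs (energy) plus load-dependent latency."""
    loads = np.bincount(assignment, minlength=N_GNB)
    active = (loads > 0).sum()                    # each active gNB draws energy
    latency = (loads**2 * BASE_LATENCY).sum()     # congestion grows with load
    return -(1.0 * active + 0.1 * latency)        # assumed joint-objective weights

theta = np.zeros(N_GNB)   # logits of a simple task-independent softmax policy
baseline, lr = 0.0, 0.05
rewards = []
for _ in range(500):
    probs = softmax(theta)
    assignment = rng.choice(N_GNB, size=N_TASKS, p=probs)
    r = episode_reward(assignment)
    rewards.append(r)
    # REINFORCE update: advantage-weighted sum of grad log pi over chosen actions
    grad = np.zeros(N_GNB)
    for a in assignment:
        grad += np.eye(N_GNB)[a] - probs
    theta += lr * (r - baseline) * grad
    baseline += 0.05 * (r - baseline)             # running-mean baseline cuts variance

print("learned assignment policy:", np.round(softmax(theta), 3))
```

The single reward scalar encodes the joint objective from the abstract: shrinking the set of active gNBs trades off against the latency incurred by concentrating load, and the policy gradient balances the two without any explicit linear program, which is what makes this class of methods usable for online allocation.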