Publication details
Bibliographic description
Offloading in V2X with road side units: deep reinforcement learning / Widhi Yahya, Ying-Dar Lin, Faysal Marzuk, Piotr Chołda, Yuan-Cheng Lai // Vehicular Communications ; ISSN 2214-2096. — 2025 — vol. 51 art. no. 100862, pp. 1–15. — Bibliogr. pp. 14–15, Abstr. — Available online from: 2024-12-05
Authors (5)
- Yahya Widhi
- Lin Ying-Dar
- Marzuk Faysal (AGH)
- Chołda Piotr (AGH)
- Lai Yuan-Cheng
Bibliometric data
| BaDAP ID | 157329 |
|---|---|
| Date added to BaDAP | 2025-02-03 |
| Source text | URL |
| DOI | 10.1016/j.vehcom.2024.100862 |
| Publication year | 2025 |
| Publication type | journal article |
| Open access | |
| Journal/series | Vehicular Communications |
Abstract
Traffic offloading is crucial for reducing computing latency in distributed edge systems such as vehicle-to-everything (V2X) networks, which use roadside units (RSUs) and access network mobile edge computing (AN-MEC) with ML agents. Traffic offloading is part of the control plane problem, which requires fast decision-making in complex V2X systems. This study presents a novel ratio-based offloading strategy using the twin delayed deep deterministic policy gradient (TD3) algorithm to optimize offloading ratios in a two-tier V2X system, enabling computation at both RSUs and the edge. The offloading optimization covers both vertical and horizontal offloading, introducing a continuous search space that requires fast decision-making to accommodate fluctuating traffic in complex V2X systems. We developed a V2X environment to evaluate the performance of the offloading agent, incorporating latency models, state and action definitions, and reward structures. A comparative analysis with metaheuristic simulated annealing (SA) is conducted, and the impact of single versus multiple offloading agents with deployment options at a centralized central office (CO) is examined. Evaluation results indicate that TD3's decision time is five orders of magnitude faster than SA's. For 10 and 50 sites, SA takes 602 and 20,421 seconds, respectively, while single-agent TD3 requires 4 to 24 milliseconds and multi-agent TD3 takes 1 to 3 milliseconds. The average latency for SA ranges from 0.18 to 0.32 milliseconds, single-agent TD3 from 0.26 to 0.5 milliseconds, and multi-agent TD3 from 0.22 to 0.45 milliseconds, demonstrating that TD3 approximates SA performance after initial training.
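The ratio-based idea in the abstract — a continuous agent action mapped to vertical and horizontal offloading ratios that are scored by a latency objective — can be illustrated with a minimal sketch. This is not the paper's actual model: the three-way split (local RSU / vertical to AN-MEC / horizontal to a neighbor RSU), the softmax mapping, and the M/M/1-style latency terms are all assumptions made here for illustration.

```python
import math

def action_to_ratios(action):
    """Map a raw continuous action vector (e.g. the output of a TD3 actor)
    to offloading ratios that are non-negative and sum to 1, via a softmax.
    Assumed order: [keep at RSU, vertical to edge, horizontal to neighbor RSU]."""
    exps = [math.exp(a) for a in action]
    total = sum(exps)
    return [e / total for e in exps]

def expected_latency(ratios, arrival_rate, service_rates, net_delays):
    """Toy latency estimate: tier k receives ratios[k] * arrival_rate,
    is modeled as an M/M/1 queue with service rate service_rates[k],
    and adds a fixed network delay net_delays[k]. Purely illustrative;
    the paper's latency model is more detailed."""
    latency = 0.0
    for r, mu, d in zip(ratios, service_rates, net_delays):
        lam = r * arrival_rate
        if lam >= mu:           # unstable queue: infeasible split
            return float("inf")
        latency += r * (1.0 / (mu - lam) + d)  # weighted sojourn + transfer
    return latency

# Example: score one hypothetical action under made-up rates (per second).
ratios = action_to_ratios([0.2, 1.0, -0.5])
lat = expected_latency(ratios, arrival_rate=50.0,
                       service_rates=[40.0, 120.0, 60.0],
                       net_delays=[0.0, 0.002, 0.001])
```

In a TD3 loop, `-lat` (or a clipped variant) would serve as the reward, and the continuous ratio space is exactly what makes deterministic-policy-gradient methods a natural fit over discrete-action DQN-style agents.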