Publication details

Bibliographic description

Deep reinforcement and IL for autonomous driving: a review in the CARLA simulation environment / Piotr CZECHOWSKI, Bartosz Kawa, Mustafa SAKHAI, Maciej WIELGOSZ // Applied Sciences (Basel) [Electronic document]. — Electronic journal ; ISSN 2076-3417. — 2025 — vol. 15 iss. 16 art. no. 8972, pp. 1–25. — System requirements: Adobe Reader. — Bibliogr. pp. 22–25, Abstr. — Publication available online since: 2025-08-14. — M. Wielgosz, additional affiliation: Akademickie Centrum Komputerowe Cyfronet AGH

Authors (4)

Keywords

autonomous driving, imitation learning, reinforcement learning

Bibliometric data

BaDAP ID: 162003
Date added to BaDAP: 2025-09-29
Source text: URL
DOI: 10.3390/app15168972
Year of publication: 2025
Publication type: review
Open access: yes
License: Creative Commons
Journal/series: Applied Sciences (Basel)

Abstract

Autonomous driving is a complex and fast-evolving domain at the intersection of robotics, machine learning, and control systems. This paper provides a systematic review of recent developments in reinforcement learning (RL) and imitation learning (IL) approaches for autonomous vehicle control, with a dedicated focus on the CARLA simulator, an open-source, high-fidelity platform that has become a standard for learning-based autonomous vehicle (AV) research. We analyze RL-based and IL-based studies, extracting and comparing their formulations of state, action, and reward spaces. Special attention is given to the design of reward functions, control architectures, and integration pipelines. Comparative graphs and diagrams illustrate performance trade-offs. We further highlight gaps in generalization to real-world driving scenarios, robustness under dynamic environments, and scalability of agent architectures. Despite rapid progress, existing autonomous driving systems exhibit significant limitations. For instance, studies show that end-to-end reinforcement learning (RL) models can suffer from performance degradation of up to 35% when exposed to unseen weather or town conditions, and imitation learning (IL) agents trained solely on expert demonstrations exhibit up to 40% higher collision rates in novel environments. Furthermore, reward misspecification remains a critical issue: over 20% of reported failures in simulated environments stem from poorly calibrated reward signals. Generalization gaps, especially in RL, also manifest in task-specific overfitting, with agents failing up to 60% of the time when faced with dynamic obstacles not encountered during training. These persistent shortcomings underscore the need for more robust and sample-efficient learning strategies. Finally, we discuss hybrid paradigms that integrate IL and RL, such as Generative Adversarial IL, and propose future research directions.
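The reward-design problem the abstract highlights can be made concrete with a minimal, hypothetical sketch. The state fields, term names, and weights below are illustrative assumptions, not taken from any surveyed paper: a typical hand-crafted CARLA driving reward combines a speed-tracking term, a lane-keeping penalty, and a terminal collision penalty, and miscalibrating the relative weights of such terms is one way the "poorly calibrated reward signals" failure mode arises.

```python
from dataclasses import dataclass


@dataclass
class DrivingState:
    """Toy slice of an ego-vehicle observation (illustrative fields only)."""
    speed: float           # current speed in m/s
    lane_deviation: float  # lateral offset from lane center in m
    collided: bool         # collision flag from the simulator


def reward(state: DrivingState, target_speed: float = 8.0) -> float:
    """Weighted sum of hand-tuned terms; the weights are assumptions.

    A collision dominates everything else (large terminal penalty);
    otherwise the agent is rewarded for tracking the target speed and
    penalized for drifting from the lane center.
    """
    if state.collided:
        return -10.0  # terminal penalty ends the episode's return
    speed_term = 1.0 - abs(state.speed - target_speed) / target_speed
    lane_term = -0.5 * abs(state.lane_deviation)
    return speed_term + lane_term
```

For example, an agent at the target speed in the lane center earns the maximum step reward (`reward(DrivingState(8.0, 0.0, False))` is 1.0), while halving the speed and drifting 1 m off-center cancels out to 0.0. Shifting the `-0.5` lane weight or the collision penalty changes which behaviors the optimal policy prefers, which is exactly the calibration sensitivity the review discusses.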

Publications that may interest you

article
#146476, date added: 10.5.2023
High-level sensor models for the reinforcement learning driving policy training / Wojciech TURLEJ // Electronics [Electronic document]. — Electronic journal ; ISSN 2079-9292. — 2023 — vol. 12 iss. 1 art. no. 71, pp. 1–20. — System requirements: Adobe Reader. — Bibliogr. pp. 19–20, Abstr. — Publication available online since: 2022-12-25. — Author's additional affiliation: Aptiv Services Poland S. A.
book chapter
#153994, date added: 27.6.2024
Towards end-to-end escape in urban autonomous driving using reinforcement learning / Mustafa SAKHAI, Maciej WIELGOSZ // In: Intelligent systems and applications : proceedings of the 2023 Intelligent Systems Conference (IntelliSys) : [7-8 September 2023, Amsterdam, the Netherlands], Vol. 2 / ed. Kohei Arai. — Cham : Springer Nature Switzerland, cop. 2024. — (Lecture Notes in Networks and Systems ; ISSN 2367-3370 ; LNNS 823). — ISBN: 978-3-031-47723-2; e-ISBN: 978-3-031-47724-9. — pp. 21–40. — Bibliogr., Abstr. — Publication available online since: 2024-04-19