Publication details
Bibliographic description
Transformed-*: a domain-incremental lifelong learning scenario generation framework / Dominik ŻUREK, Roberto Corizzo, Michał KARWATOWSKI, Marcin PIETROŃ, Kamil FABER // In: IJCNN 2023 [electronic document] : International Joint Conference on Neural Networks : 18–23 June 2023, Queensland, Australia : conference proceedings. — Windows version. — Text data. — Piscataway : IEEE, cop. 2023. — (Proceedings of ... International Joint Conference on Neural Networks ; ISSN 2161-4393). — e-ISBN: 978-1-6654-8867-9. — Pp. [1–10]. — System requirements: Adobe Reader. — Bibliogr. pp. [9–10], Abstr.
Authors (5)
Keywords
Bibliometric data
| BaDAP ID | 148381 |
|---|---|
| Date added to BaDAP | 2023-10-13 |
| Source text | URL |
| DOI | 10.1109/IJCNN54540.2023.10191200 |
| Publication year | 2023 |
| Publication type | conference proceedings (author) |
| Open access | |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Conference | IEEE International Joint Conference on Neural Networks 2023 |
| Journal/series | Proceedings of ... International Joint Conference on Neural Networks |
Abstract
Lifelong learning is becoming a popular trend in modern machine learning research. Domain-incremental scenarios are particularly relevant since they closely reflect real-world characteristics. However, one open challenge is the ability to devise scenarios that capture the inherent unpredictability and complexities of domains still unexplored in lifelong learning. To tackle this issue, we propose a framework for domain-incremental scenario generation. The framework enables users to create lifelong learning scenarios from any image dataset, leveraging a fully customizable pool of transformation functions. We devise an algorithm and criteria that iteratively guide users in evaluating the inclusion of candidate transformation functions into the scenario and in making this decision based on desired outcomes. Experimental results with common lifelong learning strategies and benchmark datasets show that our framework is highly flexible, since it allows tweaking the complexities and challenges incorporated in generated scenarios. Furthermore, experimental results show that there is a gap between state-of-the-art learning strategies and a proposed upper bound, which can be exploited in the design of future learning strategies.
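The core idea of the abstract — deriving each domain of a lifelong learning scenario by applying a different transformation function to one base image dataset — can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's actual API: the transformation names, the flat-list image representation, and `generate_scenario` are all hypothetical.

```python
def invert(img):
    # Invert pixel intensities (assumes grayscale values in [0, 255]).
    return [255 - p for p in img]

def brighten(img, delta=40):
    # Shift intensities upward, clipping at 255.
    return [min(255, p + delta) for p in img]

def generate_scenario(dataset, transforms):
    """Build a domain-incremental scenario: domain 0 is the untransformed
    base dataset, and each subsequent domain is the same dataset rendered
    through one transformation from the pool (illustrative helper, not the
    framework's interface)."""
    domains = [list(dataset)]
    for t in transforms:
        domains.append([(t(img), label) for img, label in dataset])
    return domains

# Toy base dataset: (image, label) pairs with images as flat pixel lists.
base = [([0, 128, 255], 0), ([10, 20, 30], 1)]
scenario = generate_scenario(base, [invert, brighten])
print(len(scenario))          # 3 domains: base, inverted, brightened
print(scenario[1][0][0])      # [255, 127, 0]
```

The framework's contribution lies in the iterative criteria for deciding *which* candidate transformations to admit into the pool; the sketch above only shows the mechanical step of materializing the domains once that pool is fixed.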