Publication details
Bibliographic description
Automatic optimization of hyperparameters using associative self-adapting structures / Szymon Czaplak, Adrian HORZYK // In: IJCNN 2022 [electronic document] : International Joint Conference on Neural Networks : Padua, Italy, 18–23 July 2022 : proceedings / IEEE. — Windows version. — Text data. — Piscataway : IEEE, cop. 2022. — (Proceedings of ... International Joint Conference on Neural Networks ; ISSN 2161-4393). — Conference organized as part of the IEEE World Congress on Computational Intelligence (IEEE WCCI 2022). — e-ISBN: 978-1-7281-8671-9. — Pp. [1–8]. — System requirements: Adobe Reader. — Bibliography p. [8], abstract. — Publication available online from: 2022-09-30
Bibliometric data
| BaDAP ID | 142996 |
|---|---|
| Date added to BaDAP | 2022-10-29 |
| Source text | URL |
| DOI | 10.1109/IJCNN55064.2022.9892758 |
| Publication year | 2022 |
| Publication type | conference proceedings (authored paper) |
| Open access | |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Conference | IEEE International Joint Conference on Neural Networks 2022 |
| Journal/series | Proceedings of ... International Joint Conference on Neural Networks |
Abstract
In this work, we focus on hyperparameter optimization, one of the essential steps in developing machine learning solutions. We propose a novel approach to hyperparameter optimization using associative self-adapting structures, which allows us to explore the hyperparameter space efficiently. The core algorithm introduced in this paper combines the associative graph data structure (AGDS) with an evolutionary approach, inspired by processes used in drug discovery and by human behavior. The proposed algorithm was compared with two state-of-the-art algorithms: the Tree-structured Parzen Estimator and differential evolution. All experiments were carried out on the Penn Machine Learning Benchmarks and repeated ten times for each problem to provide objective comparisons. For every dataset in these benchmarks, we optimized Random Forest and deep neural network models. The results reveal a noticeable improvement over both optimization methods for both optimized models, showing that the presented approach can serve as a good alternative to current state-of-the-art solutions.
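The abstract names differential evolution as one of the baseline optimizers the proposed method is compared against. As a rough illustration of how such an evolutionary search explores a hyperparameter space, below is a minimal, self-contained sketch of the classic DE/rand/1/bin scheme minimizing a toy surrogate objective. This is not the paper's algorithm; the objective and all parameter names are illustrative assumptions.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=0):
    """Classic DE/rand/1/bin minimizer over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize the population uniformly inside the search box.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals other than the current one.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: a + F * (b - c), clipped back into the box.
            mutant = [min(max(pop[a][d] + f * (pop[b][d] - pop[c][d]), lo), hi)
                      for d, (lo, hi) in enumerate(bounds)]
            # Binomial crossover; j_rand guarantees at least one mutant gene.
            j_rand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < cr or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            # Greedy selection: keep the trial if it improves the score.
            s = objective(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Toy stand-in for a validation loss over two hyperparameters
# (e.g. a learning rate and a regularization strength); illustrative only.
def surrogate_loss(params):
    x, y = params
    return (x - 0.3) ** 2 + (y - 2.0) ** 2

best_params, best_loss = differential_evolution(
    surrogate_loss, bounds=[(-5.0, 5.0), (0.0, 10.0)])
print(best_params, best_loss)
```

In a real hyperparameter search, `surrogate_loss` would be replaced by training and validating a model (e.g. a Random Forest) at the given hyperparameter values, which is what makes each objective evaluation expensive and efficient exploration worthwhile.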