Publication details
Bibliographic description
Simulated annealing and bacterial foraging for probabilistic neural network parameters adjustment / Szymon KUCHARCZYK, Piotr A. KOWALSKI // In: Computational intelligence and mathematics for tackling complex problems 6 / eds. László T. Kóczy, Jesús Medina, Piotr A. Kowalski, Eloísa Ramírez-Poussa. — Cham : Springer Nature Switzerland, cop. 2026. — (Studies in Computational Intelligence ; ISSN 1860-949X ; SCI vol. 1222). — The publication contains materials from the conference: 15th European Symposium on Computational Intelligence and Mathematics : 12–15 May 2024, Krakow. — ISBN: 978-3-031-97878-4; e-ISBN: 978-3-031-97879-1. — Pp. 173–187. — Bibliography, Abstract. — P. A. Kowalski – additional affiliation: Systems Research Institute Polish Academy of Sciences, Warsaw, Poland
Authors (2)
Keywords
Bibliometric data
| BaDAP ID | 166192 |
|---|---|
| Date added to BaDAP | 2026-03-12 |
| DOI | 10.1007/978-3-031-97879-1_19 |
| Year of publication | 2026 |
| Publication type | conference materials (author) |
| Open access | |
| Publisher | Springer |
| Journal/series | Studies in Computational Intelligence |
Abstract
Probabilistic Neural Networks (PNNs), a category of Feedforward Neural Networks, leverage Kernel Density Estimators (KDEs) and the Bayesian conditional probability theorem for estimating conditional probabilities. Initially designed for classification, these networks exhibit commendable performance in both classification and regression tasks. The training process involves determining optimal or suboptimal values for the KDE smoothing parameter, commonly accomplished through analytical methods such as the Plug-in technique. Additionally, metaheuristic approaches like Particle Swarm Optimisation and Krill Herd Algorithm have been employed for smoothing parameter optimisation in PNNs due to the absence of gradient calculations. This contribution proposes the integration of Bacterial Foraging Optimisation (BFO) and Simulated Annealing (SA) for enhancing PNNs. The efficiency of these techniques in optimising PNNs is compared with the conventional Plug-in method, employing benchmark classification datasets sourced from UCI and Kaggle repositories. The results reveal that SA surpasses other methods in specific benchmarking tasks, suggesting its efficacy in training PNNs for specific problem domains.
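The abstract describes optimising a PNN's KDE smoothing parameter with gradient-free metaheuristics such as Simulated Annealing. The following is a minimal illustrative sketch of that idea, not the paper's implementation: a one-dimensional PNN classifier with a Gaussian Parzen kernel, whose smoothing parameter `h` is tuned by a basic SA loop using leave-one-out accuracy as the objective. The toy dataset, the cooling schedule, and all parameter values are assumptions for illustration only.

```python
import math
import random

# Hypothetical toy data: two well-separated 1-D classes (not from the paper).
random.seed(0)
train = [(random.gauss(0.0, 1.0), 0) for _ in range(30)] + \
        [(random.gauss(4.0, 1.0), 1) for _ in range(30)]

def pnn_predict(x, data, h):
    """Classify x with a PNN: a per-class Gaussian-kernel summation layer,
    then pick the class with the largest kernel-density score (Bayes rule
    with equal priors reduces to the largest class-conditional density)."""
    scores = {}
    for xi, ci in data:
        scores[ci] = scores.get(ci, 0.0) + math.exp(-((x - xi) ** 2) / (2 * h * h))
    return max(scores, key=scores.get)

def loo_accuracy(h, data):
    """Leave-one-out classification accuracy, used as the SA objective."""
    hits = 0
    for i, (x, c) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        hits += (pnn_predict(x, rest, h) == c)
    return hits / len(data)

def anneal_h(data, h0=1.0, t0=1.0, cooling=0.95, steps=100):
    """Simulated annealing over the smoothing parameter h: perturb h,
    always accept improving moves, accept worsening moves with
    probability exp(delta / T), and geometrically cool T."""
    h = best_h = h0
    f = best_f = loo_accuracy(h0, data)
    t = t0
    for _ in range(steps):
        cand = max(1e-3, h + random.gauss(0.0, 0.2))  # keep h positive
        fc = loo_accuracy(cand, data)
        # exp argument is <= 0 here because improving moves take the
        # first branch, so no overflow as T shrinks
        if fc >= f or random.random() < math.exp((fc - f) / t):
            h, f = cand, fc
            if f > best_f:
                best_h, best_f = h, f
        t *= cooling
    return best_h, best_f

best_h, best_f = anneal_h(train)
```

Because the objective is evaluated purely by classification accuracy, no gradient of the network output is ever required, which is precisely what makes SA (and similarly BFO) applicable to PNN training.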