Publication details
Bibliographic description
Averaging of motion capture recordings for movements’ templates generation / Tomasz Hachaj, Katarzyna KOPTYRA, Marek R. OGIELA // Multimedia Tools and Applications ; ISSN 1380-7501. — 2018 vol. 77 iss. 23, pp. 30353–30380. — Bibliography pp. 30376–30379, Abstract. — Available online since: 2018-05-24
Authors (3)
- Hachaj Tomasz
- Koptyra Katarzyna (AGH)
- Ogiela Marek (AGH)
Keywords
Bibliometric data
| ID BaDAP | 119179 |
|---|---|
| Date added to BaDAP | 2019-02-15 |
| Source text | URL |
| DOI | 10.1007/s11042-018-6137-8 |
| Year of publication | 2018 |
| Publication type | journal article |
| Open access | |
| Creative Commons | |
| Journal/series | Multimedia Tools and Applications |
Abstract
In this paper we propose, describe, and evaluate a novel motion capture (MoCap) data averaging framework. It incorporates a hierarchical kinematic model, angle-coordinate preprocessing methods that recalculate the original MoCap recording to make it suitable for further averaging, and finally the signal averaging itself. We have tested two signal averaging methods, namely Kalman Filter (KF) and Dynamic Time Warping barycenter averaging (DBA). The proposed methods have been tested on MoCap recordings of an elite karate athlete, a multiple champion of Oyama karate knockdown kumite, who performed 28 different karate techniques repeated 10 times each. The proposed methods proved not only highly effective as measured with root-mean-square deviation (4.04 ± 5.03 degrees for KF and 5.57 ± 6.27 for DBA) and normalized Dynamic Time Warping distance (0.90 ± 1.58 degrees for KF and 0.93 ± 1.23 for DBA), but the reconstruction and visualization of the averaged recordings also preserves all crucial aspects of these complicated actions. The proposed methodology has many important applications in classification, clustering, kinematic analysis, and coaching. Our approach generates an averaged full-body motion template that can be used in practice, for example for human action recognition. To demonstrate this, we have evaluated templates generated by our method in human action classification tasks using a DTW classifier. We have performed two experiments. In the first, a leave-one-out cross-validation, we obtained 100% correct recognitions. In the second, in which we classified recordings of one person using templates of another, a recognition rate of 94.2% was obtained.
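The DBA averaging named in the abstract can be illustrated with a minimal sketch: DTW aligns each recording of a joint-angle signal to the current template, and the template is updated as the per-frame mean of the aligned samples. The code below is an illustrative reimplementation under these assumptions, not the authors' published code; the function names `dtw_path` and `dba_average` and the synthetic elbow-angle example are hypothetical.

```python
import numpy as np

def dtw_path(a, b):
    """Return the DTW alignment path and distance between 1-D sequences a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path as index pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], float(np.sqrt(cost[n, m]))

def dba_average(series, n_iter=10):
    """Average a list of 1-D joint-angle signals with DTW barycenter averaging."""
    template = np.asarray(series[0], dtype=float).copy()  # seed with the first recording
    for _ in range(n_iter):
        sums = np.zeros_like(template)
        counts = np.zeros_like(template)
        for s in series:
            path, _ = dtw_path(template, np.asarray(s, dtype=float))
            for ti, si in path:
                sums[ti] += s[si]
                counts[ti] += 1
        template = sums / np.maximum(counts, 1)  # barycenter update per template frame
    return template

# Example: average ten noisy repetitions of a synthetic elbow-angle curve (degrees).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = np.sin(np.linspace(0, np.pi, 120)) * 90.0
    reps = [base + rng.normal(0.0, 2.0, base.shape) for _ in range(10)]
    avg = dba_average(reps)
    print(avg.shape)
```

In a full-body setting such averaging would be run per angle channel of the hierarchical kinematic model, and the resulting template can serve as a class prototype for nearest-template DTW classification of the kind evaluated in the paper.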