Publication details
Bibliographic description
User-generated content (UGC)/in-the-wild video content recognition / Mikołaj LESZCZUK, Lucjan JANOWSKI, Jakub NAWAŁA, Michał GREGA // In: Intelligent Information and Database Systems : 14th Asian Conference, ACIIDS 2022 : Ho Chi Minh City, Vietnam, November 28–30, 2022 : proceedings, Pt. 2 / eds. Ngoc Thanh Nguyen, [et al.]. — Cham : Springer Nature Switzerland, cop. 2022. — (Lecture Notes in Computer Science ; ISSN 0302-9743 ; LNCS 13758. Lecture Notes in Artificial Intelligence). — ISBN: 978-3-031-21966-5; e-ISBN: 978-3-031-21967-2. — Pp. 356–368. — Bibliogr., abstr. — Available online from: 2022-12-09
Authors (4)
Keywords
Bibliometric data
| BaDAP ID | 144163 |
|---|---|
| Date added to BaDAP | 2022-12-20 |
| DOI | 10.1007/978-3-031-21967-2_29 |
| Year of publication | 2022 |
| Publication type | conference materials (author) |
| Open access | |
| Publisher | Springer |
| Conference | Asian Conference on Intelligent Information and Database Systems 2022 |
| Journal/series | Lecture Notes in Computer Science |
Abstract
According to Cisco, IP traffic was projected to triple over the five years from 2017 to 2022. A large share of user-generated IP video traffic consists of user-generated content (UGC). Although early UGC was often characterised by amateur acquisition conditions and unprofessional processing, widely available knowledge and affordable equipment now allow users to create UGC of a quality practically indistinguishable from professional content. Since some UGC is indistinguishable from professional content, we are not interested in all UGC, but only in content whose quality clearly differs from professional material. For such content we use the term “in the wild”, a concept closely related to UGC and a special case of it. In this paper, we show that it is possible to deliver a new concept of an objective “in-the-wild” video content recognition model. Our model achieves an F-measure of 0.988. The model is trained and tested on video sequence databases containing professional and “in the wild” content. These modelling results are obtained with the random forest learning method. Notably, using the more explainable decision tree learning method does not cause a significant decrease in the F-measure (0.973).
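The comparison described in the abstract (random forest vs. the more explainable decision tree, evaluated by F-measure) can be sketched as follows. This is a minimal illustration only: the video databases, feature extraction, and hyperparameters used by the authors are not available here, so synthetic features stand in for the video quality indicators, and the resulting scores are not the paper's reported values.

```python
# Sketch of a random forest vs. decision tree comparison by F-measure.
# The synthetic features below are a stand-in (assumption), not the
# authors' video-derived indicators.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic two-class data: "professional" vs. "in the wild" labels.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Random forest: typically higher accuracy, less interpretable.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_tr, y_tr)
f_rf = f1_score(y_te, rf.predict(X_te))

# Single decision tree: more explainable, usually a modest drop in F-measure.
dt = DecisionTreeClassifier(random_state=0)
dt.fit(X_tr, y_tr)
f_dt = f1_score(y_te, dt.predict(X_te))

print(f"Random forest F-measure: {f_rf:.3f}")
print(f"Decision tree F-measure: {f_dt:.3f}")
```

The trade-off the abstract highlights is visible in this setup: the ensemble generally scores higher, while the single tree can be inspected rule by rule.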