synthetic video
Recently Published Documents


TOTAL DOCUMENTS: 35 (five years: 9)
H-INDEX: 3 (five years: 1)

Author(s): Imane Benraya, Nadjia Benblidia, Yasmine Amara

Background subtraction is the first and most basic stage in video analysis and smart surveillance, used to extract moving objects. The background subtraction library (BGSLibrary), created by Andrews Sobral in 2012, currently combines 43 of the most popular and widely used background subtraction algorithms in the field of video analysis. Each algorithm has its own characteristics, strengths, and weaknesses in extracting moving objects. Evaluation allows these characteristics to be identified and helps researchers design better methods. Unfortunately, the literature lacks a comprehensive evaluation of the algorithms included in the library. Accordingly, the present work evaluates the algorithms in the BGSLibrary in terms of segmentation performance, execution time, and processor load, so as to achieve a comprehensive, real-time evaluation of the system. A synthetic video with noise from the Background Modeling Challenge (BMC) dataset was selected for the evaluation. Results are presented as tables, charts, and foreground masks.
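A minimal sketch of the per-algorithm timing loop such an evaluation implies, using the pybgs Python bindings for BGSLibrary together with OpenCV; the clip filename and the two algorithms shown are illustrative assumptions, not the paper's exact configuration.

# Sketch: time two BGSLibrary algorithms on one clip (assumptions noted above).
import time

import cv2
import pybgs

algorithms = {
    "FrameDifference": pybgs.FrameDifference(),  # one of the simplest methods
    "SuBSENSE": pybgs.SuBSENSE(),                # a more elaborate method
}

for name, bgs in algorithms.items():
    capture = cv2.VideoCapture("bmc_synthetic_noise.avi")  # hypothetical BMC clip
    frames, elapsed = 0, 0.0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        start = time.perf_counter()
        foreground_mask = bgs.apply(frame)  # binary mask of moving pixels
        elapsed += time.perf_counter() - start
        frames += 1
    capture.release()
    if frames:
        print(f"{name}: {frames / elapsed:.1f} FPS over {frames} frames")

The foreground masks returned by apply() can then be compared against the BMC ground truth to score segmentation performance alongside the timing figures.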


2021, Vol. 13 (14), pp. 2780
Author(s): Shivang Shukla, Bernard Tiddeman, Helen C. Miles

Crowd size estimation is a challenging problem, especially when the crowd is spread over a significant geographical area. It has applications in monitoring rallies and demonstrations and in estimating assistance requirements in humanitarian disasters. Building a crowd surveillance system for large crowds is therefore a significant challenge. UAV-based techniques are an appealing choice for crowd estimation over a large region, but they present a variety of interesting challenges, such as integrating per-frame estimates across a video without counting individuals twice. Large quantities of annotated training data are required to design, train, and test such a system. In this paper, we first review several crowd estimation techniques, existing crowd simulators, and datasets available for crowd analysis. We then describe a simulation system that provides such data, avoiding the need for tedious and error-prone manual annotation, and we evaluate synthetic video from the simulator using various existing single-frame crowd estimation techniques. Our findings show that the simulated data can be used to train and test crowd estimation models, providing a suitable platform for developing such techniques. We also propose an automated UAV-based 3D crowd estimation system for approximately static or slow-moving crowds, such as public events, political rallies, and natural or man-made disasters. We evaluate the framework on a variety of scenarios with varying crowd sizes, and the proposed system gives promising results on widely accepted metrics, including MAE, RMSE, precision, recall, and F1 score.
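A minimal sketch of the count-level error metrics named above, MAE and RMSE, computed over per-frame crowd estimates; precision, recall, and F1 would instead be computed from matched per-head detections. The counts below are made-up illustration values, not data from the paper.

# Sketch: MAE and RMSE over per-frame crowd counts (illustrative values).
import math

def mae(true_counts, predicted_counts):
    """Mean absolute error between ground-truth and estimated crowd counts."""
    return sum(abs(t - p) for t, p in zip(true_counts, predicted_counts)) / len(true_counts)

def rmse(true_counts, predicted_counts):
    """Root mean squared error; penalizes large per-frame miscounts more heavily."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true_counts, predicted_counts)) / len(true_counts))

ground_truth = [120, 135, 128, 140]  # hypothetical per-frame head counts
estimates = [115, 140, 125, 150]
print(f"MAE = {mae(ground_truth, estimates):.2f}, RMSE = {rmse(ground_truth, estimates):.2f}")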


Author(s): Shoumik Majumdar, Shubhangi Jain, Isidora Chara Tourni, Arsenii Mustafin, Diala Lteif, ...

Deep learning models perform remarkably well on a given task under the assumption that the data always comes from the same distribution. In practice this assumption is generally violated, mainly due to differences in data acquisition techniques and a lack of information about the underlying source of new data. Domain generalization targets the ability to generalize to test data from an unseen domain. While this problem is well studied for images, such studies are significantly lacking for spatiotemporal visual content such as videos and GIFs. This is due to (1) the challenging nature of misaligned temporal features and the varying appearance and motion of actors and actions across domains, and (2) spatiotemporal datasets being laborious to collect and annotate for multiple domains. We collect and present Ani-GIFs, the first synthetic video dataset of animated GIFs for domain generalization, which we use to study the domain gap between videos and GIFs, and between animated and real GIFs, for the task of action recognition. We provide a training and testing setting for Ani-GIFs, and we extend two domain generalization baseline approaches, based on data augmentation and explainability, to the spatiotemporal domain to catalyze research in this direction.
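A minimal sketch of the kind of data-augmentation baseline mentioned above, adapted to spatiotemporal inputs: the random transform is drawn once per clip and applied identically to every frame, so motion cues stay consistent across time. This NumPy implementation is an illustrative assumption, not the authors' pipeline.

# Sketch: clip-consistent random flip and crop for video/GIF augmentation.
import numpy as np

def augment_clip(clip: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one randomly drawn flip/crop consistently across all frames.

    clip: array of shape (T, H, W, C) holding the frames of a video or GIF.
    """
    t, h, w, c = clip.shape
    # Draw the random parameters once per clip, not once per frame.
    flip = rng.random() < 0.5
    top = rng.integers(0, h // 8 + 1)
    left = rng.integers(0, w // 8 + 1)
    out = clip[:, top : top + (7 * h) // 8, left : left + (7 * w) // 8, :]
    if flip:
        out = out[:, :, ::-1, :]  # horizontal flip on the width axis
    return out

rng = np.random.default_rng(0)
dummy_clip = np.zeros((16, 64, 64, 3), dtype=np.uint8)  # hypothetical 16-frame clip
print(augment_clip(dummy_clip, rng).shape)  # (16, 56, 56, 3)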


Author(s): Chamara Kattadige, Shashika R Muramudalige, Kwon Nung Choi, Guillaume Jourjon, Haonan Wang, ...

Author(s): Anup Kadam, Sagar Rane, Arpit Mishra, Shailesh Sahu, Shubham Singh, ...

2021, Vol. 23, pp. 26-38
Author(s): Angeliki V. Katsenou, Goce Dimitrov, Di Ma, David R. Bull

Author(s): S. Palazzo, C. Spampinato, P. D'Oro, D. Giordano, M. Shah

Author(s): Tri Wrahatnolo, Setya Chendra Wibawa, Lilik Anifah, IGP Asto Buditjahjanto, Wiyli Yustanti
