video data
Recently Published Documents


TOTAL DOCUMENTS: 2451 (FIVE YEARS: 924)
H-INDEX: 39 (FIVE YEARS: 8)

2022, Vol. 24 (2), pp. 1-18
Author(s): Raya Basil Alothman, Imad Ibraheem Saada, Basma Salim Bazel Al-Brge

As data exchange through electronic systems advances, information security has become a necessity. Protecting images and videos is important in today's visual communication systems, and confidential image/video data must be shielded from unauthorized use. Detecting and identifying unauthorized users is a challenging task, and various researchers have suggested different techniques for securing the transfer of images. This research presents a comparative study of these current techniques and also addresses the types of images/videos and the different image/video processing techniques together with the steps used to process an image or video. It classifies encryption algorithms into two types, symmetric and asymmetric, and provides a comparative analysis of algorithms of each type, such as AES, MAES, RSA, DES, 3DES, and Blowfish.
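For illustration only, the sketch below shows the symmetric/asymmetric distinction the paper surveys, using Python's `cryptography` package (AES-256-GCM for symmetric, RSA-OAEP for asymmetric). The key sizes and the sample payload are assumed for the example and are not taken from the paper.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

frame = b"confidential image/video payload"  # placeholder data

# Symmetric: one shared secret key both encrypts and decrypts (AES-256 in GCM mode).
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, frame, None)
assert AESGCM(aes_key).decrypt(nonce, ciphertext, None) == frame

# Asymmetric: the public key encrypts, only the private key decrypts (RSA-OAEP).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = private_key.public_key().encrypt(aes_key, oaep)  # e.g. wrap the AES key
assert private_key.decrypt(wrapped, oaep) == aes_key
```

In practice the two are often combined as above: the bulk video data is encrypted symmetrically, and the symmetric key is exchanged under asymmetric encryption.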


2022, pp. 1-12
Author(s): Md Rajib M Hasan, Noor H. S. Alani

Moving or dynamic object analysis continues to be an increasingly active research field in computer vision, with many studies investigating different methods for motion tracking, object recognition, pose estimation, or motion evaluation (e.g. in sports sciences). Many techniques are available to measure the forces and motion of people, such as force plates that measure ground reaction forces for jumping or running sports. In training and commercial solutions, the detailed motion of an athlete can be captured with motion-capture devices that rely on optical markers on the athlete's body and multiple calibrated fixed cameras around the capture volume. In some situations, however, it is not practical to attach any kind of marker or transducer to the athletes or to the equipment being used, so a purely vision-based approach that relies on the natural appearance of the person or object is required. When a sporting event is taking place, computer vision can also help referees and other personnel keep track of incidents as they occur, and it can provide full coverage and detailed analysis of the event for sports viewers. This research aims at using computer vision methods designed specifically for monocular recordings to measure sports activities such as the high jump, long jump, or running. To indicate the complexity of the task: a single camera must estimate height at a particular distance using silhouette extraction. Moving object analysis benefits from silhouette extraction, which has been applied in many domains including sports activities. This paper comparatively discusses two significant techniques for extracting silhouettes of a moving object (a jumping person) from monocular video data in different scenarios. The results show that the performance of silhouette extraction depends on the quality of the video data used.
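As a concrete illustration of silhouette extraction from monocular footage (one common family of techniques in this area, not necessarily the two the paper compares), the sketch below uses OpenCV's MOG2 background subtractor. The file name, thresholds, and kernel size are assumptions made for the example.

```python
import cv2

# Hypothetical monocular recording of a jump; file name and parameters are placeholders.
cap = cv2.VideoCapture("jump_sequence.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                              # raw foreground mask
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (value 127)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        person = max(contours, key=cv2.contourArea)             # assume the largest blob is the jumper
        x, y, w, h = cv2.boundingRect(person)                   # silhouette extent in pixels
        print(f"silhouette bounding box: x={x}, y={y}, w={w}, h={h}")

cap.release()
```

The pixel height of the extracted silhouette is what a single-camera setup would then relate to real-world height at a known distance.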


ZDM, 2022
Author(s): Markku S. Hannula, Eeva Haataja, Erika Löfström, Enrique Garcia Moreno-Esteva, Jessica F. A. Salminen-Saari, ...

In this reflective methodological paper we focus on affordances and challenges of video data. We compare and analyze two research settings that use the latest video technology to capture classroom interactions in mathematics education, namely, The Social Unit of Learning (SUL) project of the University of Melbourne and the MathTrack project of the University of Helsinki. While using these two settings as examples, we have structured our reflections around themes pertinent to video research in general, namely, research methods, data management, and research ethics. SUL and MathTrack share an understanding of mathematics learning as social multimodal practice, and provide possibilities for zooming into the situational micro interactions that construct collaborative problem-solving learning. Both settings provide rich data for in-depth analyses of peer interactions and learning processes. The settings share special needs for technical support and data management, as well as attention to ethical aspects from the perspective of the participants' security and discretion. SUL data are especially suitable for investigating interactions on a broad scope, addressing how multiple interactional processes intertwine. MathTrack, on the other hand, enables exploration of participants' visual attention in detail and its role in learning. Both settings could provide tools for teachers' professional development by showing them aspects of classroom interactions that would otherwise remain hidden.


2022, Vol. 2022, pp. 1-9
Author(s): Long Hao, Li-Min Zhou

As the demand for education continues to increase, the relative lack of physical education resources has become, to a certain extent, a bottleneck hindering the development of school physical education. This research mainly discusses an evaluation index system for school sports resources based on artificial intelligence and edge computing. Human resources, financial resources, and material resources are the three major categories of school sports resources in resource science. University stadium information publicity uses Internet technology to establish a sports information management platform with mobile Internet terminals, optimizing university sports resources and stadium information management services. Artificial intelligence technology is used to improve venue information management: a comprehensive venue management information platform is established that collects multidimensional information, provides information resources and accurate information push, and links venue information with public fitness needs. Edge computing is used to process video data in a nearby cloud, which reduces black screens and stuttering during live broadcasts, improves data computing capability, and reduces users' dependence on the performance of terminal devices. On this basis, a smart sports resource platform is built that combines artificial intelligence (AI) to create smart communities and smart venues and to realize intelligent operations such as event services and safety prevention and control at important event venues. During the live broadcast of the student sports league, nearby cloud processing of video data is realized through edge computing, which improves data computing capability and reduces dependence on the performance of the users' terminal equipment. In an academic survey of college physical education teachers, those with bachelor's degrees accounted for 26.99%, master's degrees for 60.3%, and doctoral degrees for 12.8%. This research will help the reasonable allocation of school sports resources.


2022, Vol. 12
Author(s): Anna Bánki, Martina de Eccher, Lilith Falschlehner, Stefanie Hoehl, Gabriela Markova

Online data collection with infants presents special opportunities and challenges for developmental research. One of the most prevalent methods in infancy research is eye-tracking, which has been widely applied in laboratory settings to assess cognitive development. Technological advances now allow eye-tracking to be conducted online with various populations, including infants. However, the accuracy and reliability of online infant eye-tracking remain to be comprehensively evaluated. No research to date has directly compared webcam-based and in-lab eye-tracking data from infants, as has been done for data from adults. The present study provides a direct comparison of in-lab and webcam-based eye-tracking data from infants who completed an identical looking-time paradigm in two different settings (in the laboratory or online at home). We assessed 4- to 6-month-old infants (n = 38) in an eye-tracking task that measured the detection of audio-visual asynchrony. Webcam-based and in-lab eye-tracking data were compared on eye-tracking and video data quality, infants' viewing behavior, and experimental effects. Results revealed no differences between the in-lab and online settings in the frequency of technical issues or in participant attrition rates. Video data quality was comparable between settings in terms of completeness and brightness, despite lower frame rate and resolution online. Eye-tracking data quality was higher in the laboratory than online, except in the case of relative sample loss. The quantity of gaze data recorded by eye-tracking was significantly lower than that recorded by video in both settings. In valid trials, eye-tracking and video data captured infants' viewing behavior uniformly, irrespective of setting. Despite the common challenges of infant eye-tracking across experimental settings, our results point to the need to further improve the precision of online eye-tracking with infants. Taken together, online eye-tracking is a promising tool to assess infants' gaze behavior, but it requires careful data quality control. The demographic composition of both samples differed from the general population in caregiver education: our samples comprised caregivers with higher-than-average education levels, challenging the notion that online studies will per se reach more diverse populations.


2022, Vol. 2 (1)
Author(s): Yalong Pi, Nick Duffield, Amir H. Behzadan, Tim Lomax

Accurate and prompt traffic data are necessary for the successful management of major events. Computer vision techniques, such as convolutional neural networks (CNNs) applied to video monitoring data, can provide a cost-efficient and timely alternative to traditional data collection and analysis methods. This paper presents a framework designed to take videos as input and output traffic volume counts and intersection turning patterns. The framework first uses a CNN model and an object tracking algorithm to detect and track vehicles in the camera's pixel view. Homographic projection then maps vehicle spatio-temporal information (including unique ID, location, and timestamp) onto an orthogonal real-scale map, from which traffic counts and turns are computed. Several videos are manually labeled and compared with the framework output, showing a robust traffic volume count accuracy of up to 96.91%. Moreover, this work investigates performance-influencing factors including lighting condition (over a 24-hour period), pixel size, and camera angle. Based on the analysis, it is suggested that cameras be placed such that the detection pixel size is above 2343 and the view angle is below 22° for more accurate counts. Next, previous and current traffic reports after Texas A&M home football games are compared with the framework output; results suggest that the proposed framework is able to reproduce traffic volume change trends for different traffic directions. Lastly, this work also contributes a new intersection turning pattern, i.e., counts for each ingress-egress edge pair, together with an optimization technique that results in an accuracy between 43% and 72%.
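To illustrate the homographic projection step described above, here is a minimal sketch using OpenCV: pixel locations of tracked vehicles are mapped onto a metric, orthogonal map through a homography estimated from hand-picked reference points. The point correspondences and the example trajectory are placeholders, not data from the paper.

```python
import cv2
import numpy as np

# Four (or more) reference points matched by hand: pixel locations in the camera
# view and their metric coordinates on an orthogonal map (values are placeholders).
pixel_pts = np.array([[102, 540], [870, 512], [640, 215], [233, 230]], dtype=np.float32)
map_pts   = np.array([[0.0, 0.0], [30.0, 0.0], [30.0, 60.0], [0.0, 60.0]], dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, map_pts)

def to_map(track_xy):
    """Project tracked vehicle centroids (N x 2, pixels) onto the real-scale map."""
    pts = np.asarray(track_xy, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: one tracked vehicle's pixel trajectory (hypothetical values).
trajectory_px = [(410, 388), (455, 360), (512, 330)]
print(to_map(trajectory_px))  # metric positions used for counting and turn detection
```

Once trajectories live on the real-scale map, volume counts and ingress-egress turning movements can be derived by checking which map edges each trajectory crosses.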

