video feature
Recently Published Documents

TOTAL DOCUMENTS: 69 (five years: 29)
H-INDEX: 7 (five years: 1)

Author(s): Panayiota Mini

This article examines the first Greek film to give a central role to the concept of the mermaid: Georges Skalenakis’ 1987 direct-to-video feature Gorgona (‘Mermaid’). Although actually concerning an all-human female, Gorgona attaches to her many traits of both the internationally common half-fish/half-woman creature (known in Greek as γοργόνα/gorgona) and the mermaid sister (also known as γοργόνα) in the legend of Alexander the Great. The article identifies the video-film’s allusions to these fishtailed figures and argues that the film produced an updated mermaid image that responded to other national and foreign audiovisual conceptions of the mermaid of the 1980s and enriched the star persona of its female lead, Eleni Filini, with a mythic quality and national symbolism.


2021, Vol. 2021, pp. 1-8
Author(s): Zhang Min-qing, Li Wen-ping

Sports and other training videos come in many types, and categorizing them manually is difficult. This research therefore introduces an automatic video content classification system that simplifies the management of large amounts of video data. It provides a video feature extraction approach that combines a support vector machine (SVM) classification algorithm with dual-mode video and audio features, automating the classification of cartoons, advertisements, music, news, and sports videos as well as the detection of terrorist and violent scenes in films. First, based on an analysis of the shortcomings of existing video classification algorithms, a new feature expression scheme, the MPEG-7 visual descriptor subcombination, is proposed; it is built by analyzing the visual differences among the five video categories. The model extracts 9 descriptors covering the four characteristics of color, texture, shape, and motion, yielding a new overall visual feature with good results. The results suggest that the algorithm improves video segmentation by highlighting differences in feature selection between different categories of videos. Second, the SVM's multi-class video classification performance is improved by an enhanced secondary prediction method. Finally, a comparison experiment with current similar algorithms shows that the proposed method achieves higher classification accuracy on the five types of videos, as well as better recognition of terrorist and violent scenes.
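The SVM classification step described above can be sketched loosely as follows. The five class labels follow the abstract, but everything else is a synthetic stand-in: the 13-dimensional vectors below are random Gaussians, not real MPEG-7 visual descriptors or audio features, and no secondary prediction step is included.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for combined visual + audio feature vectors:
# 5 classes (cartoon, ad, music, news, sports), 13 dims per clip.
n_per_class, n_classes, n_dims = 40, 5, 13
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_dims))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# RBF-kernel SVM as the multi-class video classifier.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

In practice the feature vectors would come from the MPEG-7 descriptor extraction stage; only the classifier interface is illustrated here.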


2021, Vol. 2021, pp. 1-7
Author(s): Xiangli Xia, Yang Gao

Video abnormal event detection is a challenging problem in the field of pattern recognition. Existing methods usually design the two steps of video feature extraction and anomaly detection model establishment independently, which prevents them from reaching the optimal result. As a remedy, a method based on a one-class neural network (ONN) is designed for video anomaly detection. The proposed method combines the layer-by-layer data representation capability of the autoencoder with the good classification capability of the ONN. The features of the hidden layer are constructed for the specific task of anomaly detection, yielding a hyperplane that separates all normal samples from abnormal ones. Experimental results show that the proposed method achieves 94.9% and 94.5% frame-level AUC on the Ped1 and Ped2 subsets of the UCSD dataset, respectively. In addition, it achieves 80 correct event detections on the Subway dataset. The results confirm the wide applicability and good performance of the proposed method in industrial and urban environments.
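A minimal sketch of the two-stage idea, an autoencoder's hidden features feeding a one-class separator, assuming synthetic data throughout; scikit-learn's OneClassSVM stands in here for the paper's one-class neural network objective.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-frame video features: normal frames
# cluster near the origin, abnormal frames lie far away.
normal = rng.normal(0.0, 1.0, size=(300, 16))
abnormal = rng.normal(5.0, 1.0, size=(30, 16))

# Autoencoder learns a compressed representation of normal data only.
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                  random_state=0).fit(normal, normal)

def encode(X, ae):
    """Hidden-layer activations of the single-hidden-layer autoencoder."""
    h = X @ ae.coefs_[0] + ae.intercepts_[0]
    return np.maximum(h, 0.0)  # ReLU, MLPRegressor's default activation

# One-class separator on the learned features (OneClassSVM standing in
# for the one-class neural network).
occ = OneClassSVM(nu=0.05, gamma="scale").fit(encode(normal, ae))

pred_normal = occ.predict(encode(normal, ae))      # +1 = normal
pred_abnormal = occ.predict(encode(abnormal, ae))  # -1 = anomaly
print((pred_normal == 1).mean(), (pred_abnormal == -1).mean())
```

The paper trains the representation and the one-class objective jointly; this sketch keeps the two stages separate only for brevity.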


2021, Vol. 108 (Supplement_6)
Author(s): J Colemeadow, S K Pandian

Abstract Aim To establish the level of both patient and clinician satisfaction with the newly implemented telephone clinics and to determine ways in which the service can be improved. The telephone clinics were implemented during the COVID-19 pandemic within the urology department of a busy university hospital. Method An online and paper questionnaire was distributed to patients who received a telephone consultation between April and August 2020. A similar online questionnaire was distributed to urology staff undertaking telephone consultations. Results 44 patient responses were received, along with 8 clinician responses. 72% of patients were satisfied or very satisfied with the telephone clinic service provided, and the same proportion received their appointment on schedule. 98% of patients could hear and understand the information relayed to them, and 78% would opt for a telephone clinic in the future. Only 21% of patients would have preferred the addition of a video feature to their telephone consultation. 63% of clinicians felt that telephone consultations were a suitable alternative, and 89% reported that their consultations ran to time. With regard to improvements, 89% of clinicians felt that a language interpretation service should be readily available and that headsets might facilitate ease of consultation. Conclusions Telephone consultations are an effective and appropriate alternative during the restricted services resulting from the COVID-19 pandemic, and patients have received them well. The majority of clinicians felt that telephone clinics were a suitable alternative; however, several improvements to the service have been suggested.


2021, pp. 002224372110420
Author(s): Mi Zhou, Pedro Ferreira, Michael D. Smith, George H. Chen

Video is one of the fastest growing online services offered to consumers. The rapid growth of online video consumption brings new opportunities for marketing executives and researchers to analyze consumer behavior. However, video introduces new challenges: analyzing unstructured video data presents formidable methodological obstacles that limit the current use of multimedia data to generate marketing insights. To address this challenge, the authors propose a novel video feature framework based on machine learning and computer vision techniques, which helps marketers predict and understand the consumption of online video from a content-based perspective. The authors apply this framework to two unique datasets: one provided by Masterclass.com, consisting of 771 online videos and more than 2.6 million viewing records from 225,580 consumers, and another from Crash Course, consisting of 1,127 videos focusing on more traditional education disciplines. The analyses show that the framework proposed in this paper can be used to accurately predict both individual-level consumer behavior and aggregate video popularity in these two very different contexts. The authors discuss how their findings and methods can be used to advance management and marketing research with unstructured video data in other contexts such as video marketing and entertainment analytics.
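The content-based prediction idea can be illustrated with a small sketch. The feature names and data below are invented for illustration (they are not the authors' extracted features), and gradient boosting merely stands in for whatever predictive model the framework uses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical per-video content features (all names invented):
# brightness, motion level, speech rate, shot length, face time.
n = 400
X = rng.uniform(0, 1, size=(n, 5))
# Simulated popularity: driven by motion level and face time plus noise.
y = 2.0 * X[:, 1] + 1.5 * X[:, 4] + rng.normal(0, 0.2, size=n)

# Predict aggregate popularity from content features alone.
model = GradientBoostingRegressor(random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"mean cross-validated R^2: {r2:.2f}")
```

In the paper the inputs would be features extracted from the video frames by computer vision models; only the predict-from-content interface is shown.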


Sensors, 2021, Vol. 21 (9), pp. 3094
Author(s): Hanqing Chen, Chunyan Hu, Feifei Lee, Chaowei Lin, Wei Yao, ...

Recently, with the popularization of camera-equipped devices such as mobile phones and the rise of short-video platforms, huge numbers of videos are uploaded to the Internet at all times, so a video retrieval system with fast retrieval speed and high precision is very necessary. Content-based video retrieval (CBVR) has therefore aroused the interest of many researchers. A typical CBVR system contains two essential parts: video feature extraction and similarity comparison. Feature extraction from video is very challenging; previous video retrieval methods mostly extract features from single video frames, losing the temporal information in the videos. Hashing methods are extensively used in multimedia information retrieval because of their retrieval efficiency, but most are currently applied only to image retrieval. To solve these problems in video retrieval, we build an end-to-end framework called deep supervised video hashing (DSVH), which employs a 3D convolutional neural network (CNN) to obtain spatial-temporal features of videos, then trains a set of hash functions by supervised hashing to map the video features into binary space and obtain compact binary codes. Finally, we use a triplet loss for network training. Extensive experiments on three public video datasets, UCF-101, JHMDB, and HMDB-51, show that the proposed method has advantages over many state-of-the-art video retrieval methods. Compared with the DVH method, the mAP on the UCF-101 dataset improves by 9.3%, and even the smallest gain, on the JHMDB dataset, is 0.3%. We also demonstrate the stability of the algorithm on the HMDB-51 dataset.
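The triplet loss and hash binarization steps can be sketched as follows, on toy embeddings rather than real 3D-CNN outputs; the margin and code length are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on real-valued hash embeddings."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

def to_binary_code(embedding):
    """Threshold the network output at zero to get a compact hash code."""
    return (embedding > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

# Toy 16-bit embeddings standing in for 3D-CNN outputs: the positive
# is a slightly perturbed anchor (same class), the negative is random.
anchor = rng.normal(size=(4, 16))
positive = anchor + rng.normal(scale=0.1, size=(4, 16))
negative = rng.normal(size=(4, 16))

loss = triplet_loss(anchor, positive, negative)
codes_a = to_binary_code(anchor)
codes_p = to_binary_code(positive)
print(f"loss: {loss:.3f}, "
      f"Hamming(anchor, positive) = {hamming(codes_a[0], codes_p[0])}")
```

At retrieval time, similarity comparison reduces to Hamming distance between the stored binary codes, which is what makes hashing-based retrieval fast.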


2021, Vol. 11 (1)
Author(s): Peter Washington, Qandeel Tariq, Emilie Leblanc, Brianna Chrisman, Kaitlyn Dunlap, ...

Abstract Standard medical diagnosis of mental health conditions requires licensed experts who are increasingly outnumbered by those at risk, limiting reach. We test the hypothesis that a trustworthy crowd of non-experts can efficiently annotate behavioral features needed for accurate machine learning detection of the common childhood developmental disorder Autism Spectrum Disorder (ASD) in children under 8 years old. We implement a novel process for identifying and certifying a trustworthy distributed workforce for video feature extraction, selecting a workforce of 102 workers from a pool of 1,107. Two previously validated ASD logistic regression classifiers, evaluated against parent-reported diagnoses, were used to assess the accuracy of the trusted crowd's ratings of unstructured home videos. A representative, balanced sample of videos (N = 50) was evaluated with and without face-box and pitch-shift privacy alterations, with AUROC and AUPRC scores > 0.98. With both privacy-preserving modifications, sensitivity is preserved (96.0%) while maintaining specificity (80.0%) and accuracy (88.0%) at levels comparable to prior classification methods without alterations. We find that machine learning classification from features extracted by a certified non-expert crowd achieves high performance for ASD detection from natural home videos of the child at risk and maintains high sensitivity when privacy-preserving mechanisms are applied. These results suggest that privacy-safeguarded crowdsourced analysis of short home videos can help enable rapid and mobile machine-learning detection of developmental delays in children.
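The classifier side of this pipeline can be sketched as follows; the behavioral feature names, rating scale, and labels are invented stand-ins, and a plain scikit-learn logistic regression plays the role of the previously validated classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Hypothetical crowd-rated behavioral features per video (names
# invented): e.g. eye contact, responsiveness, vocalization, each
# rated on a 0-3 scale by a certified non-expert rater.
n = 200
X = rng.integers(0, 4, size=(n, 6)).astype(float)
# Simulated labels: risk rises as rated eye contact (col 0) and
# responsiveness (col 1) fall.
logits = 3.0 - X[:, 0] - X[:, 1] + rng.normal(0, 0.5, size=n)
y = (logits > 0).astype(int)

# Logistic regression over the crowd's ratings.
clf = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"in-sample AUROC: {auc:.2f}")
```

In the study the labels are parent-reported diagnoses and the ratings come from the certified crowd; this only illustrates the ratings-to-prediction step.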


Author(s): Daniel Danso Essel, Ben-Bright Benuwa, Benjamin Ghansah

Sparse Representation (SR) and Dictionary Learning (DL) based classifiers have shown promising results in classification tasks, with impressive recognition rates on image data. In Video Semantic Analysis (VSA), however, the local structure of video data contains significant discriminative information required for classification. To the best of our knowledge, this has not been fully explored by recent DL-based approaches. Further, similar coding results are not guaranteed for video features from the same video category. Based on the foregoing, a novel learning algorithm, Sparsity-based Locality-Sensitive Discriminative Dictionary Learning (SLSDDL), is proposed for VSA in this paper. In the proposed algorithm, a category-specific discriminant loss function based on the sparse coefficients is introduced into the structure of the Locality-Sensitive Dictionary Learning (LSDL) algorithm. Finally, the sparse coefficients of a test video feature sample are solved by the optimization method of SLSDDL, and the video semantic classification result is obtained by minimizing the error between the original and reconstructed samples. Experimental results show that the proposed SLSDDL significantly improves the performance of video semantic detection compared with state-of-the-art approaches. The proposed approach is also robust across diverse video environments, demonstrating its universality.
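The classify-by-reconstruction-error idea can be sketched with scikit-learn's SparseCoder. The per-class "dictionaries" below are toy samples drawn from two roughly orthogonal subspaces, standing in for dictionaries that SLSDDL would actually learn, and plain OMP sparse coding replaces the paper's optimization method.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(5)

# Toy per-class dictionaries: two hypothetical semantic categories
# living in (roughly) orthogonal subspaces of a 10-dim feature space.
c0 = np.concatenate([np.ones(5) * 2.0, np.zeros(5)])
c1 = np.concatenate([np.zeros(5), np.ones(5) * 2.0])

def make_atoms(center, n=20):
    """Noisy samples around a class center, normalized to unit atoms."""
    atoms = center + rng.normal(scale=0.3, size=(n, center.size))
    return atoms / np.linalg.norm(atoms, axis=1, keepdims=True)

D = {0: make_atoms(c0), 1: make_atoms(c1)}  # unit-norm atoms as rows

def classify(x):
    """Assign x to the class whose dictionary reconstructs it best."""
    errors = {}
    for label, atoms in D.items():
        coder = SparseCoder(dictionary=atoms,
                            transform_algorithm="omp",
                            transform_n_nonzero_coefs=3)
        code = coder.transform(x[None, :])  # sparse coefficients
        recon = code @ atoms                # sparse reconstruction
        errors[label] = float(np.sum((x - recon) ** 2))
    return min(errors, key=errors.get)

sample_1 = c1 + rng.normal(scale=0.3, size=10)
sample_0 = c0 + rng.normal(scale=0.3, size=10)
print("predicted classes:", classify(sample_0), classify(sample_1))
```

The discriminative and locality-sensitive terms of SLSDDL are omitted; only the final minimum-reconstruction-error decision rule is illustrated.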


2021, Vol. 5 (1), pp. 469
Author(s): Difiani Apriyanti, Hermawati Syarif, Syahrul Ramadhan

In Indonesia, English for Specific Purposes (ESP) is usually offered to students outside the English Department. It should also be taught to English Department students, especially those in vocational higher institutions: when the institution facilitates ESP learning, graduates will fit into many kinds of work later on. ESP learning can be integrated into various English Department subjects if the class uses the right learning material, such as the Digital Video Feature Project. This is a project assignment for a Public Speaking class, developed to improve students' ESP by applying the Gall and Borg small-scale model. The steps are needs analysis, planning objectives, developing preliminary forms of the product, preliminary field testing, and main field testing. This article discusses only the last phase: analysing the effect of the Digital Video Feature Project on a student's ESP. The sample was a female English Department student of Politeknik Negeri Padang. She enrolled in the Public Speaking class and chose English for Environmental Engineering as her ESP. She made a video feature about the Waste Bank Program established by the Padang Environmental Services Department. From the analysis of the student's self-assessment form, the results are that the student is more competent at writing in the context of Environmental Engineering, is more skilful as a public speaker on topics related to the Environmental Engineering field, has increased knowledge of Environmental Engineering, and has also built social connections with people in the Environmental Engineering field. In conclusion, the video feature project on the Waste Bank Program not only influences the student's ESP and public speaking skill but also affects her mindset and willingness to support the program.

