contrast feature
Recently Published Documents

TOTAL DOCUMENTS: 29 (five years: 10)
H-INDEX: 3 (five years: 1)

Author(s): Shreya Kumar, Swarnalaxmi Thiruvenkadam

Feature extraction is an integral part of speech emotion recognition. Some emotions become indistinguishable from others because their features are highly similar, which results in low prediction accuracy. This paper analyses the impact of the spectral contrast feature on the accuracy achieved for such emotions. The RAVDESS dataset was chosen for this study. The SAVEE, CREMA-D and JL corpus datasets were also used to test its performance across different English accents, and the EmoDB dataset was used to study its performance in German. Adding the spectral contrast feature increased the prediction accuracy of the speech emotion recognition system to a notable degree, since it performs well at distinguishing emotions with significant differences in arousal level; this is discussed in detail.
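As a minimal sketch of how a spectral contrast feature can be computed in practice (not the paper's exact pipeline), the following Python snippet uses librosa to extract spectral contrast alongside MFCCs and concatenate them into one feature vector; the file name, the number of MFCCs and the number of contrast bands are illustrative assumptions.

```python
# Sketch: spectral contrast + MFCC feature vector for an emotion classifier.
import numpy as np
import librosa

def extract_features(path):
    y, sr = librosa.load(path, sr=None)                               # audio at native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)                # (40, frames)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr, n_bands=6)  # (7, frames)
    # Summarise each coefficient over time and concatenate into one vector.
    return np.concatenate([mfcc.mean(axis=1), contrast.mean(axis=1)])

if __name__ == "__main__":
    features = extract_features("ravdess_sample.wav")  # hypothetical file name
    print(features.shape)                              # (47,)
```

The resulting vector can be fed to any standard classifier; the point of the sketch is only that the contrast coefficients are appended to, rather than replacing, the usual spectral features.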


2020, Vol 11, pp. 1432-1438
Author(s): Linda Laflör, Michael Reichling, Philipp Rahe

A distinct dumbbell shape is observed as the dominant contrast feature in experimental data when imaging 1,1’-ferrocene dicarboxylic acid (FDCA) molecules on bulk and thin-film CaF2(111) surfaces with non-contact atomic force microscopy (NC-AFM). We use NC-AFM image calculations with the probe particle model to attribute this distinct shape to repulsive interactions between the NC-AFM tip and the topmost hydrogen atoms of the cyclopentadienyl (Cp) rings. Simulated NC-AFM images show excellent agreement with experimental constant-height NC-AFM data of FDCA molecules at several tip–sample distances. Based on this distinct dumbbell shape measured together with the molecular orientation, a strategy is proposed to determine the conformation of the ferrocene moiety, here on CaF2(111) surfaces, using the protruding hydrogen atoms as markers.


2020, Vol 23 (4), pp. 313-318
Author(s): Xiaobo Zhang, Weiyang Chen, Gang Li, Weiwei Li

Background: The analysis of retinal images can help to detect retinal abnormalities caused by cardiovascular and retinal disorders. Objective: In this paper, we propose methods based on texture features for mining and analyzing information from retinal images. Methods: The recognition of the retinal mask region is a prerequisite for retinal image processing, yet no method has been available to recognize this region automatically. By quantifying and analyzing texture features, we propose a method that identifies the retinal region automatically. The boundary of the circular retinal region is detected from the image texture contrast feature, the enclosed circular area is then filled, and the circular retinal mask region is obtained. Results: The experimental results show that the method based on the image contrast feature detects the retinal region automatically; the average accuracy of retinal mask region detection on images from the Digital Retinal Images for Vessel Extraction (DRIVE) database was 99.34%. Conclusion: To our knowledge, this is the first time these texture features of retinal images have been analyzed and used to recognize the circular retinal region automatically.
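A minimal sketch of this kind of contrast-based mask detection (assuming scikit-image and SciPy, and not the authors' exact algorithm) is shown below: a local-contrast map highlights the boundary between the dark background and the bright circular retinal region, which is then thresholded and filled. The DRIVE file path and the window and object-size parameters are illustrative assumptions.

```python
# Sketch: circular retinal mask detection from a local-contrast map.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color, filters, morphology

def retinal_mask(path, window=15):
    gray = color.rgb2gray(io.imread(path))
    # Local contrast as the sliding-window standard deviation:
    # sqrt(E[x^2] - E[x]^2) over a window x window neighbourhood.
    mean = ndi.uniform_filter(gray, size=window)
    mean_sq = ndi.uniform_filter(gray ** 2, size=window)
    contrast = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    # The contrast is highest along the circular boundary; threshold it,
    # fill the enclosed disc, and discard small spurious regions.
    edge = contrast > filters.threshold_otsu(contrast)
    mask = ndi.binary_fill_holes(edge)
    return morphology.remove_small_objects(mask, min_size=1000)

mask = retinal_mask("DRIVE/test/images/01_test.tif")  # hypothetical path
print(mask.sum(), "pixels inside the detected retinal region")
```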


2020, Vol 7 (5), pp. 191487
Author(s): Fintan Nagle, Nilli Lavie

Perceptual load is a well-established determinant of attentional engagement in a task. So far, perceptual load has typically been manipulated by increasing either the number of task-relevant items or the perceptual processing demand (e.g. conjunction versus feature tasks). The tasks used often involved rather simple visual displays (e.g. letters or single objects). How can perceptual load be operationalized for richer, real-world images? A promising proxy is the visual complexity of an image. However, current predictive models of visual complexity have limited applicability to diverse real-world images. Here we modelled visual complexity using a deep convolutional neural network (CNN) trained to learn perceived ratings of visual complexity. We presented 53 observers with 4000 images from the PASCAL VOC dataset, obtaining 75,020 two-alternative forced-choice paired comparisons across observers. Image visual complexity scores were derived using the TrueSkill algorithm. A CNN with weights pre-trained on an object recognition task predicted complexity ratings with r = 0.83. By contrast, feature-based models from the literature, operating on image statistics such as entropy, edge density and JPEG compression ratio, only achieved r = 0.70. Our model thus offers a promising way to quantify the perceptual load of real-world scenes through visual complexity.
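To make the modelling idea concrete, the sketch below shows one way to repurpose an object-recognition CNN as a complexity regressor; it assumes PyTorch/torchvision and a ResNet-18 backbone rather than the authors' exact architecture, and the image file name is illustrative.

```python
# Sketch: pre-trained CNN backbone with a scalar regression head for
# predicting a visual-complexity score.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # scalar complexity output

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Training would minimise a regression loss (e.g. MSE) against the
# TrueSkill-derived complexity scores; inference is a single forward pass.
backbone.eval()
with torch.no_grad():
    img = preprocess(Image.open("voc_image.jpg").convert("RGB")).unsqueeze(0)  # hypothetical file
    score = backbone(img).item()
print("predicted visual complexity:", score)
```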


2020, pp. 368-395
Author(s): Charlotte Galves

Based on a quantitative and qualitative study of 11 syntactically parsed texts (485,767 words) from the Tycho Brahe Parsed Corpus of Historical Portuguese, this chapter argues that Classical Portuguese, i.e. the language instantiated in texts written in Portugal by authors born in the sixteenth and seventeenth centuries, is a V2 language of the kind that Wolfe calls ‘relaxed V2 languages’: languages in which V1 and V3 sentences coexist with V2 patterns. To account for the sentential patterns observed and their interpretation, a new cartographic analysis of the left periphery is proposed. The existence of sentences in which quantified objects precede fronted subjects suggests that there are two distinct positions in the CP layer to which preverbal phrases can move. The higher one is the familiar Focus category. It is argued that the lower one is neutral with respect to the topic/focus dichotomy and merely encodes a contrast feature. Other constituents can be adjoined in the higher portion of the left periphery, where they are interpreted as topics or frames. The chapter concludes by emphasizing the importance of textually diversified corpora as the basis of historical syntactic studies.

