Diversify and Match: A Domain Adaptive Representation Learning Paradigm for Object Detection

Author(s): Taekyung Kim, Minki Jeong, Seunghyeon Kim, Seokeon Choi, Changick Kim
Complexity, 2019, Vol 2019, pp. 1-17
Author(s): Tan Guo, Lei Zhang, Xiaoheng Tan, Liu Yang, Zhiwei Guo, ...

Naïve sparse representation suffers from a stability problem due to its unsupervised nature, which is undesirable for classification tasks. To address this problem, this paper presents a novel representation learning method named classification-oriented local representation (CoLR) for image recognition. The core idea of CoLR is to find the training classes and samples most relevant to a test sample by exploiting class-wise sparseness weighting, sample locality, and a label prior. The proposed representation strategy not only promotes a classification-oriented representation but also yields a locality-adaptive representation within the selected training classes. The CoLR model is solved efficiently by an Augmented Lagrange Multiplier (ALM) scheme based on a variable-splitting strategy. The performance of the proposed model is then evaluated on benchmark face datasets and on deep object features. Specifically, the deep features of the object dataset are obtained from a well-trained convolutional neural network (CNN) with five convolutional layers and three fully connected layers trained on the challenging ImageNet dataset. Extensive experiments verify the superiority of CoLR in comparison with several state-of-the-art models.
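The abstract names the ingredients of CoLR (class-wise sparseness weighting, sample locality, a label prior, and an ALM solver) but does not give the objective function itself. The sketch below is only a minimal illustration of the general idea of locality-adaptive coding followed by class-wise residual classification; the locality weighting, the closed-form ridge solver, and every parameter name here are assumptions standing in for the paper's actual CoLR model and its ALM scheme.

```python
# Illustrative sketch only: CoLR's objective and ALM solver are not given in the
# abstract, so this uses a generic locality-weighted ridge coding with class-wise
# residual classification as a stand-in. All names and the closed-form solver
# below are assumptions, not the authors' formulation.
import numpy as np

def locality_weighted_code(X, y, lam=0.1):
    """Code test sample y over training matrix X (d x n) with locality weights.

    Coefficients of training samples far from y are penalized more, mimicking a
    locality-adaptive representation; solved in closed form instead of ALM.
    """
    dists = np.linalg.norm(X - y[:, None], axis=0)      # distance of y to each training sample
    W = np.diag(dists / (dists.max() + 1e-12))          # locality penalty weights (assumption)
    A = X.T @ X + lam * (W.T @ W)                       # regularized Gram matrix
    return np.linalg.solve(A, X.T @ y)                  # coding coefficients

def classify_by_residual(X, labels, y, lam=0.1):
    """Assign y to the class whose training samples reconstruct it best."""
    c = locality_weighted_code(X, y, lam)
    best_label, best_res = None, np.inf
    for lbl in np.unique(labels):
        mask = labels == lbl
        res = np.linalg.norm(y - X[:, mask] @ c[mask])  # class-wise reconstruction residual
        if res < best_res:
            best_label, best_res = lbl, res
    return best_label

# Toy usage: 2-D features, two classes, one query near class 1.
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(0, 0.1, (2, 5)), rng.normal(1, 0.1, (2, 5))])
labels = np.array([0] * 5 + [1] * 5)
print(classify_by_residual(X, labels, np.array([0.95, 1.05])))  # expected: 1
```

The class-wise residual rule at the end is the standard decision step of representation-based classifiers; whether CoLR restricts the representation to a pre-selected subset of classes, as the abstract suggests, is not reproduced here.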


2020, Vol 102, pp. 107127
Author(s): Nishant Sankaran, Deen Dayal Mohan, Nagashri N. Lakshminarayana, Srirangaraj Setlur, Venu Govindaraju

Author(s): Hadar Ram, Dieter Struyf, Bram Vervliet, Gal Menahem, Nira Liberman

Abstract. People apply what they learn from experience not only to the experienced stimuli, but also to novel stimuli. But what determines how widely people generalize what they have learned? Using a predictive learning paradigm, we examined the hypothesis that a low (vs. high) probability of an outcome following a predicting stimulus would widen generalization. In three experiments, participants learned which stimulus predicted an outcome (S+) and which stimulus did not (S−) and then indicated how much they expected the outcome after each of eight novel stimuli ranging in perceptual similarity to S+ and S−. The stimuli were rings of different sizes and the outcome was a picture of a lightning bolt. As hypothesized, a lower probability of the outcome widened generalization. That is, novel stimuli that were similar to S+ (but not to S−) produced expectations for the outcome that were as high as those associated with S+.

