Weight Learning
Recently Published Documents


TOTAL DOCUMENTS: 115 (FIVE YEARS: 48)
H-INDEX: 14 (FIVE YEARS: 5)

2021
Author(s): Sriram Srinivasan, Charles Dickens, Eriq Augustine, Golnoosh Farnadi, Lise Getoor

Abstract: Statistical relational learning (SRL) frameworks are effective at defining probabilistic models over complex relational data. They often use weighted first-order logical rules where the weights of the rules govern probabilistic interactions and are usually learned from data. Existing weight learning approaches typically attempt to learn a set of weights that maximizes some function of data likelihood; however, this does not always translate to optimal performance on a desired domain metric, such as accuracy or F1 score. In this paper, we introduce a taxonomy of search-based weight learning approaches for SRL frameworks that directly optimize weights on a chosen domain performance metric. To effectively apply these search-based approaches, we introduce a novel projection, referred to as scaled space (SS), that is an accurate representation of the true weight space. We show that SS removes redundancies in the weight space and captures the semantic distance between the possible weight configurations. In order to improve the efficiency of search, we also introduce an approximation of SS which simplifies the process of sampling weight configurations. We demonstrate these approaches on two state-of-the-art SRL frameworks: Markov logic networks and probabilistic soft logic. We perform empirical evaluation on five real-world datasets and evaluate them each on two different metrics. We also compare them against four other weight learning approaches. Our experimental results show that our proposed search-based approaches outperform likelihood-based approaches and yield up to a 10% improvement across a variety of performance metrics. Further, we perform an extensive evaluation to measure the robustness of our approach to different initializations and hyperparameters. The results indicate that our approach is both accurate and robust.
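
The search-based idea can be illustrated with a black-box search over rule-weight configurations that directly scores a chosen domain metric. The Python sketch below is a minimal illustration under stated assumptions: the `evaluate` callback stands in for "ground the SRL program with the candidate weights, run inference, and score held-out predictions" (here a toy function), and the simplex normalization is only an illustrative scale-removing projection, not the paper's exact scaled-space (SS) construction.

```python
import numpy as np

def project_to_scaled_space(weights):
    """Illustrative scale-removing projection: normalize the rule weights so
    they sum to one. This is an assumption for the sketch, not the paper's
    exact scaled-space (SS) construction."""
    w = np.asarray(weights, dtype=float)
    return w / w.sum()

def random_search(evaluate, num_rules, num_samples=200, seed=0):
    """Black-box search over rule-weight configurations that directly
    optimizes whatever domain metric `evaluate` returns (e.g., F1)."""
    rng = np.random.default_rng(seed)
    best_w, best_score = None, -np.inf
    for _ in range(num_samples):
        # Sampling from the simplex yields configurations that are already
        # normalized, so scale redundancy never enters the search.
        w = rng.dirichlet(np.ones(num_rules))
        score = evaluate(w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

# Hypothetical stand-in for grounding the model, running inference, and
# computing the chosen metric on held-out data.
def toy_evaluate(weights):
    target = project_to_scaled_space([5.0, 3.0, 2.0])
    return -float(np.abs(weights - target).sum())

best_weights, best_metric = random_search(toy_evaluate, num_rules=3)
print(best_weights, best_metric)
```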


2021 · Vol 153 · pp. 111494
Author(s): Amin Golzari Oskouei, Mohammad Ali Balafar, Cina Motamed

2021 · pp. 1-19
Author(s): Xingguang Pan, Lin Wang, Chengquan Huang, Shitong Wang, Haiqing Chen

In feature-weighted fuzzy c-means algorithms, two challenges arise when feature weighting is used to improve performance. On the one hand, if the feature weights are learned in advance and then fixed during clustering, the learned weights may lack flexibility and may not fully reflect the relevance of the features. On the other hand, if the feature weights are adjusted adaptively during clustering, the algorithm may suffer from bad initialization and assign incorrect feature weights, so its performance may degrade under some conditions. To ease these problems, a novel weighted fuzzy c-means algorithm based on feature weight learning (FWL-FWCM) is proposed. It is a hybrid of the fuzzy weighted c-means (FWCM) algorithm and the improved FWCM (IFWCM) algorithm. FWL-FWCM first learns feature weights as prior knowledge from the data by minimizing a feature evaluation function with gradient descent, and then iteratively optimizes a clustering objective that combines the weighted within-cluster dispersion with a term penalizing the discrepancy between the weights and the prior knowledge. Experiments conducted on an artificial dataset and on real datasets demonstrate that the proposed approach outperforms state-of-the-art feature-weight clustering methods. The convergence property of FWL-FWCM is also presented.
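
As a rough illustration of the two-stage idea (learn feature weights first, then run a weighted clustering objective), the Python sketch below uses an inverse-variance score as a stand-in for the feature evaluation function and runs a plain feature-weighted fuzzy c-means with the weights held fixed; the gradient-descent weight learning and the discrepancy penalty of FWL-FWCM are not reproduced here.

```python
import numpy as np

def prior_feature_weights(X):
    """Illustrative 'feature evaluation': weight each feature by the inverse
    of its variance and normalize. This is an assumption for the sketch; the
    paper learns the prior weights by gradient descent on its own criterion."""
    inv_var = 1.0 / (X.var(axis=0) + 1e-12)
    return inv_var / inv_var.sum()

def weighted_fcm(X, c, feature_weights, m=2.0, n_iter=100, seed=0):
    """Feature-weighted fuzzy c-means with the feature weights held fixed at
    the learned prior; the discrepancy term of FWL-FWCM is omitted."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                      # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]           # cluster centers
        # Feature-weighted squared distances between samples and centers.
        D = ((X[:, None, :] - V[None, :, :]) ** 2 * feature_weights).sum(axis=2)
        U = 1.0 / (D + 1e-12) ** (1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, V

X = np.random.default_rng(1).normal(size=(200, 4))
w = prior_feature_weights(X)
memberships, centers = weighted_fcm(X, c=3, feature_weights=w)
print(memberships.sum(axis=1)[:5], centers.shape)
```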


2021 · Vol 21 (2) · pp. 1-21
Author(s): Yuanpeng Zhang, Yizhang Jiang, Lianyong Qi, Md Zakirul Alam Bhuiyan, Pengjiang Qian

Using unsupervised learning methods for clinical diagnosis is very meaningful. In this study, we propose an unsupervised multi-view, multi-medoid, variant-entropy-based fuzzy clustering (M2VEFC) method for detecting epilepsy in EEG signals. Compared with existing related studies, M2VEFC has four main merits and contributions: (1) Features in the original EEG data are represented from different perspectives, which provides more pattern information for detecting epilepsy signals. (2) During multi-view modeling, multiple medoids are used to capture the structure of the clusters in each view. Furthermore, we assume that the medoids of a cluster observed from different views should remain invariant, which serves as one of the collaborative learning mechanisms in this study. (3) A variant entropy is designed as another collaborative learning mechanism, in which view weight learning is controlled by a user-free parameter. The parameter is derived from the distribution of samples in each view, so the learned weights are more discriminative. (4) M2VEFC does not need the original data as its input; it only requires a similarity matrix and feature statistics. The original data are therefore not exposed to users, and privacy is protected. To test M2VEFC, we use several kinds of feature extraction techniques to extract groups of features from the original EEG data as multi-view data. Experimental results indicate that M2VEFC achieves promising performance, outperforming the benchmark models.
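
The entropy-style view weighting in item (3) can be sketched with the standard closed-form update in which a view's weight decays exponentially with its clustering cost. The `gamma` parameter below is user-set, whereas the paper derives its analogue from the per-view sample distribution, so this is only a simplified stand-in rather than the proposed variant-entropy rule.

```python
import numpy as np

def entropy_view_weights(view_costs, gamma):
    """Entropy-regularized view weighting: views with a lower clustering cost
    (e.g., within-cluster dispersion) receive exponentially larger weights.
    Generic closed form w_v ∝ exp(-J_v / gamma); not the paper's exact rule."""
    J = np.asarray(view_costs, dtype=float)
    w = np.exp(-(J - J.min()) / gamma)   # shift for numerical stability
    return w / w.sum()

# Hypothetical per-view costs for three feature views of the same EEG data.
print(entropy_view_weights([12.4, 8.1, 15.9], gamma=3.0))
```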


Author(s): Xiangyuan Lan, Zifei Yang, Wei Zhang, Pong C. Yuen

The development of multi-spectrum image sensing technology has generated great interest in exploiting information from multiple modalities (e.g., RGB and infrared) to solve computer vision problems. In this article, we investigate how to exploit information from the RGB and infrared modalities to address two important issues in visual tracking: robustness and object re-detection. Although various algorithms have been developed to exploit multi-modality information in appearance modeling, they still face challenges that mainly come from the following aspects: (1) a lack of robustness to large appearance changes and dynamic backgrounds, (2) failure to re-capture the object after tracking loss, and (3) difficulty in determining the reliability of the different modalities. To address these issues and effectively integrate multiple modalities, we propose a new tracking-by-detection algorithm called the Adaptive Spatial-temporal Regulated Multi-Modality Correlation Filter. In particular, an adaptive spatial-temporal regularization is imposed on the correlation filter framework, in which the spatial regularization helps suppress the effect of cluttered backgrounds while the temporal regularization enables the adaptive incorporation of historical appearance cues to deal with appearance changes. In addition, a dynamic modality weight learning algorithm is integrated into the correlation filter training, which ensures that more reliable modalities gain more importance in target tracking. Experimental results demonstrate the effectiveness of the proposed method.
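
A common way to realize dynamic modality weighting in correlation filter tracking is to score each modality's response map with a reliability measure such as the peak-to-sidelobe ratio and fuse the responses accordingly. The Python sketch below illustrates that generic per-frame scheme; the paper learns the modality weights jointly inside the filter training, so this is a simplified stand-in rather than the proposed algorithm.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """Reliability score for a correlation filter response map: how sharply
    the peak stands out from the sidelobe region around it."""
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return (response.max() - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def fuse_modalities(resp_rgb, resp_ir):
    """Weight each modality's response by its reliability and fuse them, so
    the more reliable modality dominates target localization."""
    scores = np.array([peak_to_sidelobe_ratio(resp_rgb),
                       peak_to_sidelobe_ratio(resp_ir)])
    weights = np.clip(scores, 0.0, None)
    weights /= weights.sum() + 1e-12
    return weights[0] * resp_rgb + weights[1] * resp_ir, weights

rng = np.random.default_rng(0)
fused, w = fuse_modalities(rng.random((50, 50)), rng.random((50, 50)))
print(w, np.unravel_index(fused.argmax(), fused.shape))
```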

