Noisy examples
Recently Published Documents


TOTAL DOCUMENTS: 23 (five years: 3)
H-INDEX: 4 (five years: 1)

2021 · Vol 17 (6) · pp. e1009045
Author(s): Bruno Golosio, Chiara De Luca, Cristiano Capone, Elena Pastorelli, Giovanni Stegel, et al.

The brain exhibits capabilities of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate the model's capability for fast incremental learning from few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and association of similar memories.
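The paper's model is a spiking network; as a loose, rate-based illustration of the soft winner-take-all idea it builds on, the sketch below combines a noisy sensory input with a contextual hint and models lateral inhibition as a softmax. All names and parameter values here are illustrative, not the authors'.

```python
import numpy as np

def soft_wta(sensory, context, beta=5.0):
    """Soft winner-take-all: combine sensory evidence with a contextual
    hint, then let lateral inhibition (modeled as a softmax) favor the
    strongest unit without fully silencing the others."""
    drive = sensory + context              # contextual hint biases perception
    z = np.exp(beta * (drive - drive.max()))
    return z / z.sum()                     # graded, competition-shaped activity

# A noisy sensory input that weakly favors unit 2, plus a context hint:
rng = np.random.default_rng(0)
sensory = np.array([0.2, 0.3, 0.4]) + 0.05 * rng.standard_normal(3)
context = np.array([0.0, 0.0, 0.5])        # contextual signal for category 2
act = soft_wta(sensory, context)
```

Because the competition is soft, the non-winning units keep nonzero activity, which is what allows similar memories to remain associated rather than being suppressed outright.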


Author(s): Carlos Lassance, Vincent Gripon, Antonio Ortega

For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, particularly in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training on noisy examples. In this paper, we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised-learning vision datasets for various types of perturbations. We also show that it can be combined with existing methods to increase overall robustness.
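A rough sketch of this kind of regularizer, assuming an RBF similarity graph per layer and a binary class-indicator signal; the paper's exact graph construction and loss may differ:

```python
import numpy as np

def laplacian_smoothness(X, y, sigma=1.0):
    """Smoothness of the class-indicator signal on an RBF similarity
    graph built from one layer's representations X (n_samples x dim).
    Equals 0.5 * sum_ij W_ij (f_i - f_j)^2, so small values mean the
    class label varies smoothly over the graph."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian
    f = y.astype(float)                     # class-indicator vector
    return f @ L @ f

def regularizer(layer_reps, y, sigma=1.0):
    """Penalize large changes in smoothness across consecutive layers,
    in the spirit of the regularizer described in the abstract."""
    s = [laplacian_smoothness(X, y, sigma) for X in layer_reps]
    return sum(abs(s[i + 1] - s[i]) for i in range(len(s) - 1))
```

Penalizing the change between consecutive layers, rather than the smoothness itself, is what enforces gradual (rather than abrupt) deformation of the class boundaries through the network.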


2020 · Vol 38 (1) · pp. 917-933
Author(s): Kamal Bashir, Tianrui Li, Chubato Wondaferaw Yohannese, Mahama Yahaya

Author(s): Saman Riaz, Ali Arshad, Licheng Jiao

Software fault prediction is a highly consequential research topic for software quality assurance. Data-driven approaches provide robust mechanisms for software fault prediction; however, the prediction performance of a model depends strongly on the quality of the dataset. Many software datasets suffer from the problem of class imbalance. In this regard, under-sampling is a popular data pre-processing method for dealing with class imbalance, and Easy Ensemble (EE) presents a robust approach that achieves a high classification rate while addressing the bias towards majority-class samples. However, class imbalance is not the only issue that harms classifier performance: noisy examples and irrelevant features may further reduce the classifier's predictive accuracy. In this paper, we propose a two-stage data pre-processing approach that incorporates feature selection and a new Rough set Easy Ensemble scheme. In the feature selection stage, we eliminate irrelevant features with a feature ranking algorithm. In the second stage, we apply a Rough K-nearest-neighbor rule filter (RK) before executing Easy Ensemble (EE), a combination named RKEE for short. RK can remove noisy examples from both the minority and the majority class. Experimental evaluation on real-world software projects, such as the NASA and Eclipse datasets, demonstrates the effectiveness of our proposed approach. Furthermore, this paper comprehensively investigates the influencing factors in our approach, such as the impact of rough set theory on noise filtering and the relationship between model performance and imbalance ratio. Comprehensive experiments indicate that the proposed approach shows outstanding, statistically significant performance in terms of area under the curve (AUC).
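A minimal sketch of the two ingredients named above: a noise filter run before EasyEnsemble-style under-sampling. A plain KNN edit filter stands in for the rough-set-based RK filter (the actual rough-set machinery is not reproduced here), and all names are illustrative:

```python
import numpy as np

def knn_noise_filter(X, y, k=3):
    """Drop examples whose k nearest neighbours mostly disagree with
    their label -- a plain KNN edit filter standing in for the paper's
    rough-set-based RK filter. Removes noise from both classes."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    keep = []
    for i in range(len(y)):
        nn = np.argsort(d[i])[:k]
        if np.mean(y[nn] == y[i]) >= 0.5:
            keep.append(i)
    return X[keep], y[keep]

def easy_ensemble_subsets(X, y, n_subsets=3, seed=0):
    """EasyEnsemble-style sampling: keep every minority example and pair
    it with an equal-sized random draw of majority examples, repeated,
    so each subset is balanced but the ensemble sees most majority data."""
    rng = np.random.default_rng(seed)
    minority = 1 if (y == 1).sum() <= (y == 0).sum() else 0
    min_idx = np.flatnonzero(y == minority)
    maj_idx = np.flatnonzero(y != minority)
    subsets = []
    for _ in range(n_subsets):
        pick = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([min_idx, pick])
        subsets.append((X[idx], y[idx]))
    return subsets
```

Running the filter first matters: otherwise a mislabeled majority example that survives the draw pollutes every balanced subset it lands in.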


2018
Author(s): Diego Vidaurre, Mark W. Woolrich, Anderson M. Winkler, Theodoros Karapanagiotidis, Jonathan Smallwood, et al.

Abstract: Spatial or temporal aspects of neural organisation are known to be important indices of how cognition is organised. However, measurements and estimations are often noisy, and many of the algorithms used are probabilistic, which in combination have been argued to limit studies exploring the neural basis of specific aspects of cognition. Focusing on static and dynamic functional connectivity estimations, we propose to leverage this variability to improve statistical efficiency in relating these estimations to behaviour. To achieve this goal, we use a procedure based on permutation testing that provides a way of combining the results from many individual tests that refer to the same hypothesis. This is needed when testing a measure whose value is obtained from a noisy process that can be repeated multiple times; we refer to these repetitions as replications. Focusing on functional connectivity, this noisy process can be (i) computational, e.g. when using an approximate inference algorithm for which different runs can produce different results, or (ii) observational, if we have the capacity to acquire data multiple times and the different acquired data sets can be considered noisy examples of some underlying truth. In both cases, we are not interested in the individual replications but in the unobserved process generating each replication. In this note, we show how results can be combined instead of choosing just one of the estimated models. Using both simulations and real data, we show the benefits of this approach in practice.
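A toy version of the idea, assuming (purely for illustration) that the per-replication statistic is the absolute correlation between a connectivity-derived measure and behaviour, averaged over replications, with the same subject permutation applied to every replication so the combined p-value refers to the single underlying hypothesis:

```python
import numpy as np

def combined_perm_test(replications, behaviour, n_perm=2000, seed=0):
    """Permutation test whose statistic averages over replications.
    Each replication is one noisy estimate of the same quantity (e.g.
    one run of an approximate-inference algorithm, or one acquisition);
    sharing the permutation across replications combines them into a
    single test of the unobserved generating process."""
    rng = np.random.default_rng(seed)

    def stat(perm):
        return np.mean([abs(np.corrcoef(r, behaviour[perm])[0, 1])
                        for r in replications])

    n = len(behaviour)
    observed = stat(np.arange(n))
    null = [stat(rng.permutation(n)) for _ in range(n_perm)]
    # add-one correction keeps the p-value strictly positive
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)
```

Averaging the statistic over replications inside the permutation scheme is what buys the statistical efficiency: replication noise cancels in the mean, for the observed and the permuted data alike.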


2018 · Vol 2018 · pp. 1-10
Author(s): Nelson Rangel-Valdez, Eduardo Fernandez, Laura Cruz-Reyes, Claudia Gomez-Santillan, Gilberto Rivera, et al.

One of the main concerns in Multicriteria Decision Aid (MCDA) is robustness analysis. Some of the most important approaches to modelling decision-maker preferences are based on fuzzy outranking models whose parameters (e.g., weights and veto thresholds) must be elicited. The so-called preference-disaggregation analysis (PDA) has been successfully carried out by means of metaheuristics, but such works lack a robustness analysis. Based on the above, the present research studies the robustness of a PDA metaheuristic method that estimates the model parameters of an outranking-based relational system of preferences. The method is considered robust if the solutions obtained in the presence of noise maintain the same performance in predicting preference judgments on a new reference set. The research provides experimental evidence that the PDA method keeps the same performance at noise levels of up to 10%, making it robust.
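The robustness protocol described (fit parameters on noisy reference judgments, then evaluate prediction on a fresh reference set) can be sketched with a deliberately simplified weighted-sum preference model in place of the fuzzy outranking model, and random search in place of the metaheuristic; everything here is an illustrative stand-in:

```python
import numpy as np

def preferences(A, B, w):
    """Pairwise judgments: alternative a is preferred to b iff its
    weighted sum of criteria is higher (a toy stand-in for the fuzzy
    outranking relation)."""
    return (A @ w > B @ w).astype(int)

def fit_weights(A, B, judged, trials=3000, seed=0):
    """Toy preference disaggregation: random search for criterion
    weights that best reproduce the (possibly noisy) reference judgments."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for _ in range(trials):
        w = rng.random(A.shape[1])
        w /= w.sum()
        acc = np.mean(preferences(A, B, w) == judged)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w

rng = np.random.default_rng(1)
w_true = np.array([0.5, 0.3, 0.2])
A, B = rng.random((200, 3)), rng.random((200, 3))
noisy = preferences(A, B, w_true)
flip = rng.choice(200, size=20, replace=False)   # inject a 10% noise level
noisy[flip] ^= 1
w_hat = fit_weights(A, B, noisy)
# robustness check: predictive performance on a fresh reference set
A2, B2 = rng.random((200, 3)), rng.random((200, 3))
acc_new = np.mean(preferences(A2, B2, w_hat) == preferences(A2, B2, w_true))
```

The quantity of interest is `acc_new`: if it stays high as the injected noise level grows, the elicitation method is robust in the sense the abstract describes.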


2013 · Vol 22 (02) · pp. 1350008
Author(s): Atlántida I. Sánchez, Eduardo F. Morales, Jesus A. Gonzalez

Class-imbalanced data sets are common in many real-world applications. Since many classifiers tend to degrade in performance on the minority class, several approaches have been proposed to deal with this problem. In this paper, we propose two new cluster-based oversampling methods, SOI-C and SOI-CJ. The proposed methods create clusters from the minority-class instances and generate synthetic instances inside those clusters. In contrast with other oversampling methods, the proposed approaches avoid creating new instances in majority-class regions, and they are more robust to noisy examples (the number of new instances generated per cluster is proportional to the cluster's size). The clusters are generated automatically, the new methods need no tuning parameters, and they can deal with both numerical and nominal attributes. The two methods were tested with twenty artificial datasets and twenty-three datasets from the UCI Machine Learning repository. For our experiments, we used six classifiers, and results were evaluated with recall, precision, F-measure, and AUC, which are more suitable measures for class-imbalanced datasets. We performed ANOVA and paired t-tests to show that the proposed methods are competitive and in many cases significantly better than the other oversampling methods used in the comparison.
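A sketch of the cluster-then-interpolate idea. A tiny fixed-k k-means stands in for the paper's automatic, parameter-free cluster generation, and the function name is illustrative:

```python
import numpy as np

def soi_style_oversample(X_min, n_new, n_clusters=2, seed=0):
    """Cluster the minority class, then create synthetic points by
    interpolating between members of the same cluster, so nothing is
    placed in the majority-class regions between clusters (a sketch of
    the SOI-C idea; the paper derives its clusters automatically)."""
    rng = np.random.default_rng(seed)
    # tiny k-means over the minority class only
    centers = X_min[rng.choice(len(X_min), n_clusters, replace=False)]
    for _ in range(20):
        labels = np.argmin(((X_min[:, None] - centers[None]) ** 2).sum(-1),
                           axis=1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = X_min[labels == c].mean(axis=0)
    sizes = np.bincount(labels, minlength=n_clusters)
    synthetic = []
    # new instances per cluster proportional to the cluster's size,
    # which is what makes the method robust to small noisy clusters
    for c in range(n_clusters):
        members = X_min[labels == c]
        quota = int(round(n_new * sizes[c] / len(X_min)))
        for _ in range(quota):
            i, j = rng.choice(len(members), 2, replace=True)
            t = rng.random()
            synthetic.append(members[i] + t * (members[j] - members[i]))
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two same-cluster members, it cannot land between clusters, which is exactly the failure mode of interpolation-based oversamplers that ignore cluster structure.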


Author(s): K. Ashok Kumar, Y.V. Bhaskar Reddy

Conventional content-based image retrieval (CBIR) schemes employing relevance feedback may suffer from several problems in practical applications. First, most ordinary users would like to complete their search in a single interaction, especially on the web. Second, it is time-consuming and difficult to label a sufficiently varied set of negative examples. Third, ordinary users may introduce some noisy examples into the query. This correspondence explores solutions to a new issue: image retrieval using unclean positive examples. In the proposed scheme, multiple feature distances are combined to obtain image similarity using classification technology. To handle the noisy positive examples, a new two-step strategy is proposed that incorporates data cleaning and a noise-tolerant classifier. Extensive experiments carried out on two different real image collections validate the effectiveness of the proposed scheme.
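A sketch of two pieces of such a scheme: combining per-channel feature distances, and cleaning noisy positives as outliers within the query set. The noise-tolerant classifier itself is not shown, and all names and thresholds are illustrative:

```python
import numpy as np

def combined_distance(feats_a, feats_b, weights):
    """Image dissimilarity from several feature channels (e.g. colour,
    texture): a weighted sum of per-channel Euclidean distances."""
    return sum(w * np.linalg.norm(a - b)
               for w, a, b in zip(weights, feats_a, feats_b))

def clean_positives(P, keep_frac=0.8):
    """Data-cleaning step: drop the positive examples farthest from the
    positives' centroid, on the assumption that user-introduced noisy
    positives sit as outliers within the query set."""
    centroid = P.mean(axis=0)
    d = np.linalg.norm(P - centroid, axis=1)
    keep = np.argsort(d)[: max(1, int(len(P) * keep_frac))]
    return P[keep]
```

Cleaning before training means the downstream classifier only has to tolerate the residual noise the centroid heuristic misses, rather than every mislabeled query example.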

