Active Selection of Training Examples for Meta-Learning

Author(s):  
Ricardo B. C. Prudencio ◽  
Teresa B. Ludermir

2018 ◽  
Author(s):  
Regina R. Parente ◽  
Ricardo B. C. Prudencio

In meta-learning, training examples are generated from experiments performed with a pool of candidate algorithms on a number of problems (real or synthetic). Generating a good set of examples can be difficult due to the low availability of real datasets in some domains and the high computational cost of labeling. In this paper, we focus on the selection of training meta-examples by combining data manipulation and transfer learning via one-class classification, so that only the most relevant examples are selected for labeling. Our experiments revealed that it is possible to reduce the computational cost of generating meta-examples while maintaining meta-learning performance.
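The general idea of one-class selection over meta-examples can be sketched as follows. This is not the authors' method: the meta-features, the choice of `OneClassSVM`, and the decision to pick the candidates scored closest to the already-labeled region are all assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical meta-features (e.g. size, dimensionality, class entropy, ...)
# for datasets whose algorithm performance is already known (labeled).
labeled_meta = rng.normal(size=(30, 4))
# Meta-features of candidate datasets not yet labeled (labeling is costly).
candidates = rng.normal(size=(100, 4))

# Fit a one-class model on the labeled region of meta-feature space.
occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(labeled_meta)

# Score candidates; higher means closer to the labeled region.
scores = occ.decision_function(candidates)

# Select a budget of 10 candidates to run the costly labeling experiments on.
selected = np.argsort(scores)[-10:]
```

Whether to label the candidates most similar to, or most novel relative to, the labeled region is a design choice; the sketch above picks the most similar ones.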


2019 ◽  
Vol 46 (1) ◽  
pp. 1 ◽  
Author(s):  
Hiroyuki Shimono ◽  
Graham Farquhar ◽  
Matthew Brookhouse ◽  
Florian A. Busch ◽  
Anthony O'Grady ◽  
...  

Elevated atmospheric CO2 concentration (e[CO2]) can stimulate the photosynthesis and productivity of C3 species including food and forest crops. Intraspecific variation in responsiveness to e[CO2] can be exploited to increase productivity under e[CO2]. However, active selection of genotypes to increase productivity under e[CO2] is rarely performed across a wide range of germplasm, because of constraints of space and the cost of CO2 fumigation facilities. If we are to capitalise on recent advances in whole genome sequencing, approaches are required to help overcome these issues of space and cost. Here, we discuss the advantage of applying prescreening as a tool in large genome×e[CO2] experiments, where a surrogate for e[CO2] was used to select cultivars for more detailed analysis under e[CO2] conditions. We discuss why phenotypic prescreening in population-wide screening for e[CO2] responsiveness is necessary, what approaches could be used for prescreening for e[CO2] responsiveness, and how the data can be used to improve genetic selection of high-performing cultivars. We do this within the framework of understanding the strengths and limitations of genotype–phenotype mapping.


2014 ◽  
Vol 47 (3) ◽  
pp. 1443-1458 ◽  
Author(s):  
Ahmad Ali Abin ◽  
Hamid Beigy

2000 ◽  
Vol 12 (10) ◽  
pp. 2405-2426 ◽  
Author(s):  
Leonardo Franco ◽  
Sergio A. Cannas

In this work, we study how the selection of examples affects the learning procedure in a boolean neural network and its relationship with the complexity of the function under study and its architecture. We analyze the generalization capacity for different target functions with particular architectures through an analytical calculation of the minimum number of examples needed to obtain full generalization (i.e., zero generalization error). The analysis of the training sets associated with this parameter leads us to propose a general architecture-independent criterion for selection of training examples. The criterion was checked through numerical simulations for various particular target functions with particular architectures, as well as for random target functions in a nonoverlapping receptive field perceptron. In all cases, the selection sampling criterion led to an improvement in the generalization capacity compared with pure random sampling. We also show that for the parity problem, one of the most widely used problems for testing learning algorithms, only the use of the whole set of examples ensures global learning in a depth-two architecture. We show that this difficulty can be overcome by considering a tree-structured network of depth 2 log2(N) – 1.
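The "minimum number of examples for zero generalization error" can be illustrated with a version-space style enumeration. This is not the paper's analytical calculation; it is a brute-force sketch assuming an unrestricted hypothesis class (all boolean functions on n bits), which is exactly the regime where parity requires the entire example set, since any withheld input leaves two consistent functions:

```python
from itertools import product, combinations

n = 3
inputs = list(product([0, 1], repeat=n))          # all 2^n = 8 input patterns
parity = tuple(sum(x) % 2 for x in inputs)        # target: parity function

# Unrestricted hypothesis class: every boolean function on n bits (2^8 = 256).
all_funcs = list(product([0, 1], repeat=len(inputs)))

def consistent(func, subset):
    """Does func agree with the parity target on the training subset?"""
    return all(func[i] == parity[i] for i in subset)

def min_examples_for_full_generalization():
    """Smallest training-set size whose version space shrinks to the target alone."""
    for k in range(len(inputs) + 1):
        for subset in combinations(range(len(inputs)), k):
            surviving = [f for f in all_funcs if consistent(f, subset)]
            if len(surviving) == 1:
                return k
    return len(inputs)
```

With the unrestricted class this returns 2^n = 8: every example is needed. The paper's point is that a particular architecture restricts the hypothesis class, which can lower this minimum and makes *which* examples are chosen matter.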


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Samar Ali Shilbayeh ◽  
Sunil Vadera

Purpose
This paper describes the use of a meta-learning framework for recommending cost-sensitive classification methods, with the aim of answering an important question that arises in machine learning, namely: "Among all the available classification algorithms, and considering a specific type of data and cost, which is the best algorithm for my problem?"
Design/methodology/approach
The framework is based on the idea of applying machine learning techniques to discover knowledge about the performance of different machine learning algorithms. It includes components that repeatedly apply different classification methods to data sets and measure their performance. The characteristics of the data sets, combined with the algorithms and their performance, provide the training examples. A decision tree algorithm is applied to these training examples to induce the knowledge, which can then be used to recommend algorithms for new data sets. The paper contributes to both meta-learning and cost-sensitive machine learning. Neither field is new; the contribution is a recommender that recommends the optimal cost-sensitive approach for a given data problem.
Findings
The proposed solution is implemented in WEKA and evaluated by applying it to different data sets and comparing the results with existing studies available in the literature. The results show that the developed meta-learning solution produces better results than METAL, a well-known meta-learning system. Unlike the compared system, the developed solution takes the misclassification cost into consideration during the learning process.
Originality/value
Meta-learning work has been done before, but this paper presents a new meta-learning framework that is cost-sensitive.
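The meta-learning loop described above (run candidate algorithms on many data sets, pair their performance with data-set characteristics, and induce a decision tree that recommends an algorithm for new data) can be sketched as below. This is not the authors' WEKA implementation: the meta-features, candidate algorithms, and the use of plain accuracy in place of misclassification cost are simplifying assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Pool of candidate classification algorithms to choose among.
candidates = {
    "tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
    "nb": GaussianNB(),
}

meta_X, meta_y = [], []
for seed in range(12):  # each synthetic data set yields one meta-example
    X, y = make_classification(n_samples=150 + 20 * seed,
                               n_features=5 + seed % 4,
                               n_informative=3, random_state=seed)
    # Data-set characteristics (meta-features): size, dimensionality, class balance.
    meta_X.append([X.shape[0], X.shape[1], float(y.mean())])
    # Measure each candidate's performance; the winner is the training label.
    scores = {name: cross_val_score(clf, X, y, cv=3).mean()
              for name, clf in candidates.items()}
    meta_y.append(max(scores, key=scores.get))

# Induce the recommendation knowledge with a decision tree over meta-examples.
meta_model = DecisionTreeClassifier(random_state=0).fit(meta_X, meta_y)

# Recommend an algorithm for a new data set's meta-features.
rec = meta_model.predict([[500, 8, 0.5]])[0]
```

A cost-sensitive version would replace the `cross_val_score` accuracy with an expected-misclassification-cost measure, which is the aspect the paper contributes.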


Author(s):  
Amir Erfan Eshratifar ◽  
Mohammad Saeed Abrishami ◽  
David Eigen ◽  
Massoud Pedram

Transfer-learning and meta-learning are two effective methods to apply knowledge learned from large data sources to new tasks. In few-class, few-shot target task settings (i.e. when there are only a few classes and training examples available in the target task), meta-learning approaches that optimize for future task learning have outperformed the typical transfer approach of initializing model weights from a pretrained starting point. But as we experimentally show, meta-learning algorithms that work well in the few-class setting do not generalize well in many-shot and many-class cases. In this paper, we propose a joint training approach that combines both transfer-learning and meta-learning. Benefiting from the advantages of each, our method obtains improved generalization performance on unseen target tasks in both few- and many-class and few- and many-shot scenarios.
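One way to picture such a joint objective is an update that mixes a Reptile-style meta step (move the initialization toward task-adapted weights) with an ordinary gradient step on a large source data set (the transfer term). This toy NumPy sketch on linear-regression tasks is an illustration of that combination, not the authors' algorithm; the task distribution, step sizes, and mixing weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A small linear-regression task with its own ground-truth weights."""
    w_true = rng.normal(size=2)
    X = rng.normal(size=(20, 2))
    return X, X @ w_true

def sgd(w, X, y, lr=0.05, steps=5):
    """Inner-loop adaptation: a few gradient steps on one task."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

Xs, ys = sample_task()     # large "source" data stands in for the pretraining set
w = np.zeros(2)            # shared initialization being learned

for _ in range(100):
    # Meta (Reptile-style) term: adapt on a sampled task, move toward the result.
    Xt, yt = sample_task()
    meta_step = sgd(w.copy(), Xt, yt) - w
    # Transfer term: plain gradient descent on the source data.
    transfer_grad = 2 * Xs.T @ (Xs @ w - ys) / len(ys)
    # Joint update combining both signals.
    w = w + 0.1 * meta_step - 0.01 * transfer_grad
```

The relative weights of the two terms control the trade-off the abstract describes between few-shot adaptability and many-shot/many-class generalization.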

