Demystifying the Random Feature-Based Online Multi-Kernel Learning

Author(s):  
Songnam Hong ◽  
Jeongmin Chae

Random feature-based online multi-kernel learning (RF-OMKL) is a promising framework for functional learning tasks. It is well suited to online learning from continuously streaming data owing to its low complexity and scalability.

Within the RF-OMKL framework, numerous algorithms can be devised according to the underlying online learning and optimization techniques. The best-known algorithm (termed Raker) was proposed through the lens of the celebrated problem of online learning with expert advice, where each kernel in a kernel dictionary is viewed as an expert. Harnessing this relation, Raker was proved to enjoy a sublinear "expert" regret bound in which, as the name implies, the best comparator function is restricted to the expert-based class; it is therefore not an actual sublinear regret bound under the RF-OMKL framework. In this paper, we propose a novel algorithm (named BestOMKL) for the RF-OMKL framework and prove that it achieves a sublinear regret bound under a certain condition. Beyond this theoretical contribution, we demonstrate the superiority of our algorithm via numerical tests on real datasets. Notably, BestOMKL outperforms state-of-the-art kernel-based algorithms (including Raker) on various online learning tasks, while having the same low complexity as Raker. These results suggest the practicality of BestOMKL.
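The expert-advice view attributed to Raker above can be illustrated with a toy sketch: each kernel keeps its own random-feature model updated by online gradient descent, while multiplicative weights track which kernel ("expert") predicts best. This is a simplified illustration in the spirit of RF-OMKL, not the Raker or BestOMKL algorithm itself; the bandwidths, step sizes, and target function are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, omega):
    """Random Fourier feature map approximating a Gaussian kernel."""
    Z = X @ omega
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1) / np.sqrt(omega.shape[1])

# Hypothetical kernel dictionary: Gaussian kernels with three bandwidths.
bandwidths = [0.1, 1.0, 10.0]
d, D = 1, 50                                    # input dim, features per kernel
omegas = [rng.normal(0, 1.0 / bw, size=(d, D)) for bw in bandwidths]

thetas = [np.zeros(2 * D) for _ in bandwidths]  # per-kernel RFF weights
w = np.ones(len(bandwidths)) / len(bandwidths)  # expert (kernel) weights
eta_f, eta_w = 0.5, 1.0                         # illustrative step sizes

T = 500
X = rng.uniform(-1, 1, size=(T, d))
y = np.sin(3 * X[:, 0])                         # unknown target function

sq_loss = np.zeros(T)
for t in range(T):
    feats = [rff(X[t:t + 1], om)[0] for om in omegas]
    preds = np.array([th @ f for th, f in zip(thetas, feats)])
    y_hat = w @ preds                           # weighted ensemble prediction
    sq_loss[t] = (y_hat - y[t]) ** 2
    # Online gradient step per kernel; multiplicative update on kernel weights.
    for k in range(len(thetas)):
        thetas[k] -= eta_f * 2 * (preds[k] - y[t]) * feats[k]
    w *= np.exp(-eta_w * (preds - y[t]) ** 2)
    w /= w.sum()

late_mse = sq_loss[-100:].mean()
```

The multiplicative-weights step is what makes the scheme "expert"-based: regret is measured against the best single kernel, which is exactly the restriction the paper's sublinear-regret analysis aims to lift.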

2021 ◽  


Author(s):  
Chrysi Rapanta ◽  
Luca Botturi ◽  
Peter Goodyear ◽  
Lourdes Guàrdia ◽  
Marguerite Koole

The Covid-19 pandemic has presented an opportunity for rethinking assumptions about education in general and higher education in particular. In the light of the general crisis the pandemic caused, especially with respect to so-called emergency remote teaching (ERT), educators across all grades and contexts experienced the necessity of rethinking their roles, the ways they support students' learning tasks, and the image of students as self-organising learners, active citizens and autonomous social agents. In our first Postdigital Science and Education paper, we sought to distil and share expert advice for campus-based university teachers adapting to online teaching and learning. In this sequel paper, we ask: now that campus-based university teachers have experienced an unplanned and forced version of Online Learning and Teaching (OLT), how can this experience help bridge the gap between online and in-person teaching in the coming years? The four experts interviewed, who are also co-authors of this paper, converged on an emphasis on the pedagogisation, rather than the digitalisation, of higher education, with strategic decision-making at the heart of post-pandemic practice. Our review of the literature published in the last year and our analysis of the expert answers reveal that the 'forced' experience of teaching with digital technologies as part of ERT can gradually give way to a harmonious integration of physical and digital tools and methods for the sake of more active, flexible and meaningful learning.


2020 ◽  
Vol 34 (04) ◽  
pp. 5199-5206
Author(s):  
Siddharth Mitra ◽  
Aditya Gopalan

We study how to adapt to smoothly-varying ('easy') environments in well-known online learning problems where acquiring information is expensive. For the problem of label efficient prediction, which is a budgeted version of prediction with expert advice, we present an online algorithm whose regret depends optimally on the number of labels allowed and Q* (the quadratic variation of the losses of the best action in hindsight), along with a parameter-free counterpart whose regret depends optimally on Q (the quadratic variation of the losses of all the actions). These quantities can be significantly smaller than T (the total time horizon), yielding an improvement over existing, variation-independent results for the problem. We then extend our analysis to handle label efficient prediction with bandit (partial) feedback, i.e., label efficient bandits. Our work builds upon the framework of optimistic online mirror descent, and leverages second order corrections along with a carefully designed hybrid regularizer that encodes the constrained information structure of the problem. We then consider revealing action-partial monitoring games – a version of label efficient prediction with additive information costs – which in general are known to lie in the hard class of games having minimax regret of order T^{2/3}. We provide a strategy with an O((Q*T)^{1/3}) bound for revealing action games, along with one with an O((QT)^{1/3}) bound for the full class of hard partial monitoring games, both being strict improvements over current bounds.
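The label-efficient setting can be illustrated with a minimal exponential-weights sketch in which a label is queried only with probability eps and the observed losses are importance-weighted so their estimates stay unbiased. This is a generic sketch of prediction with expert advice under a label budget, not the paper's optimistic mirror-descent algorithm; all constants and the synthetic losses are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

T, N = 2000, 5
eps = 0.3          # label-query probability: roughly eps*T labels are used
eta = 0.1          # learning rate (illustrative)

# Synthetic Bernoulli expert losses; expert 0 is best on average.
means = np.array([0.2, 0.5, 0.5, 0.6, 0.7])
losses = (rng.uniform(0, 1, size=(T, N)) < means).astype(float)

w = np.ones(N) / N
alg_loss, queries = 0.0, 0
for t in range(T):
    p = w / w.sum()
    alg_loss += p @ losses[t]                # expected loss of sampling an expert
    if rng.random() < eps:                   # pay for the label only sometimes
        queries += 1
        w *= np.exp(-eta * losses[t] / eps)  # importance-weighted update
        w /= w.sum()

regret = alg_loss - losses.sum(axis=0).min()
```

Because the update divides by eps, rarer queries mean noisier estimates; variation-dependent bounds like those in the abstract sharpen this trade-off when the best expert's losses vary little.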


Author(s):  
Yasin Görmez ◽  
◽  
Yunus E. Işık ◽  
Mustafa Temiz ◽  
Zafer Aydın

Sentiment analysis is the process of automatically determining the attitude or emotional state expressed in a text. Many algorithms have been proposed for this task, including ensemble methods, which have the potential to reduce the error rates of the individual base learners considerably. In many machine learning tasks, and especially in sentiment analysis, extracting informative features is as important as developing sophisticated classifiers. In this study, a stacked ensemble method is proposed for sentiment analysis, which systematically combines six feature extraction methods and three classifiers. The proposed method obtains cross-validation accuracies of 89.6%, 90.7% and 67.2% on the large movie, Turkish movie and SemEval-2017 datasets, respectively, outperforming the other classifiers. The accuracy improvements are shown to be statistically significant at the 99% confidence level by a Z-test.
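The stacking idea can be sketched generically: out-of-fold predictions of the base learners become the features of a meta-learner. This is not the paper's six-feature, three-classifier system; the toy data, the two base learners, and all settings below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary "sentiment" data: two Gaussian blobs standing in for text features.
n, d = 400, 5
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))
perm = rng.permutation(n)
X, y = X[perm], y[perm]

def fit_logreg(X, y, lr=0.1, steps=300):
    """Logistic regression via plain gradient descent (with bias term)."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_logreg(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1 / (1 + np.exp(-Xb @ w))

def fit_centroid(X, y):
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict_centroid(c, X):
    d0 = np.linalg.norm(X - c[0], axis=1)
    d1 = np.linalg.norm(X - c[1], axis=1)
    return (d0 > d1).astype(float)      # hard 0/1 vote used as a meta-feature

base = [(fit_logreg, predict_logreg), (fit_centroid, predict_centroid)]

# Out-of-fold predictions of the base learners become meta-features,
# so the meta-learner never sees a base learner's training-set outputs.
K = 5
meta = np.zeros((n, len(base)))
for f in np.array_split(np.arange(n), K):
    train = np.setdiff1d(np.arange(n), f)
    for j, (fit, pred) in enumerate(base):
        model = fit(X[train], y[train])
        meta[f, j] = pred(model, X[f])

# Meta-learner (logistic regression) stacked on the base predictions.
w_meta = fit_logreg(meta, y)
acc = ((predict_logreg(w_meta, meta) > 0.5) == y).mean()
```

The out-of-fold step is the essential design choice: fitting the meta-learner on in-sample base predictions would let it exploit base-learner overfitting.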


2018 ◽  
Vol 21 (1) ◽  
pp. 125-142 ◽  
Author(s):  
Eric Golinko ◽  
Xingquan Zhu

Author(s):  
Trong Nghia Hoang ◽  
Quang Minh Hoang ◽  
Kian Hsiang Low ◽  
Jonathan How

This paper presents a novel Collective Online Learning of Gaussian Processes (COOL-GP) framework for enabling a massive number of GP inference agents to simultaneously perform (a) efficient online updates of their GP models using their local streaming data with varying correlation structures and (b) decentralized fusion of their resulting online GP models with different learned hyperparameter settings and inducing inputs. To realize this, we exploit the notion of a common encoding structure to encapsulate the local streaming data gathered by any GP inference agent into summary statistics based on our proposed representation, which is amenable both to efficient online updates via an importance sampling trick and to multi-agent model fusion via decentralized message passing that can exploit sparse connectivity among agents to improve efficiency and enhance the robustness of our framework against transmission loss. We provide a rigorous theoretical analysis of the approximation loss arising from our proposed representation to achieve efficient online updates and model fusion. Empirical evaluations show that COOL-GP is highly effective in model fusion, resilient to information disparity between agents, robust to transmission loss, and can scale to thousands of agents.
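The fusion mechanism can be illustrated with a deliberately simplified analogue: ridge regression over a shared random-feature encoding, where each agent summarizes its stream by additive sufficient statistics, so decentralized fusion reduces to summation. This is an analogy, not the COOL-GP representation or its importance-sampling update; the encoding, the ridge term, and the data below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shared random-feature encoding, standing in for COOL-GP's common structure.
d, D = 2, 100
omega = rng.normal(0, 1, size=(d, D))

def phi(X):
    Z = X @ omega
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1) / np.sqrt(D)

def f_true(X):                         # unknown function the agents observe
    return np.sin(X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])

class Agent:
    """Summarizes its local stream by additive sufficient statistics."""
    def __init__(self):
        self.A = np.zeros((2 * D, 2 * D))
        self.b = np.zeros(2 * D)
    def observe(self, X, y):           # efficient online update from a batch
        P = phi(X)
        self.A += P.T @ P
        self.b += P.T @ y

agents = [Agent() for _ in range(4)]
for ag in agents:
    Xi = rng.uniform(-2, 2, size=(100, d))
    ag.observe(Xi, f_true(Xi) + 0.05 * rng.normal(size=100))

# Decentralized fusion: the statistics are additive, so fusing models is a sum
# (in practice, a message-passing aggregate rather than a central one).
lam = 1e-2                             # ridge term standing in for the GP prior
A = sum(ag.A for ag in agents) + lam * np.eye(2 * D)
b = sum(ag.b for ag in agents)
w_fused = np.linalg.solve(A, b)

Xtest = rng.uniform(-2, 2, size=(200, d))
rmse = np.sqrt(np.mean((phi(Xtest) @ w_fused - f_true(Xtest)) ** 2))
```

Additivity is what makes the scheme robust to the failure modes the abstract lists: a lost message simply leaves one agent's statistics out of the sum rather than corrupting the fused model.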


1996 ◽  
Vol 2 (3) ◽  
pp. 240-248 ◽  
Author(s):  
Michael R. Polster ◽  
Steven Z. Rapcsak

We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given “shallow” encoding instructions to focus on facial features. By contrast, he performs relatively well with “deep” encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations. (JINS, 1996, 2, 240–248.)

