Demystifying the Random Feature-Based Online Multi-Kernel Learning
<div>Random feature-based online multi-kernel learning (RF-OMKL) is a promising framework for functional learning tasks. Its low complexity and scalability make it well suited to online learning with continuously streaming data. </div><div>Within the RF-OMKL framework, numerous algorithms can be derived depending on the underlying online learning and optimization techniques. The best-known algorithm, termed Raker, was proposed through the lens of the celebrated online learning with expert advice, where each kernel in a kernel dictionary is viewed as an expert. Harnessing this relation, Raker was proved to achieve a sublinear {\em expert} regret bound in which, as the name implies, the best comparator function is restricted to the expert-based framework; thus, it is not a true sublinear regret bound under the RF-OMKL framework. In this paper, we propose a novel algorithm, named BestOMKL, for the RF-OMKL framework and prove that it achieves a sublinear regret bound under a certain condition. Beyond this theoretical contribution, we demonstrate the superiority of our algorithm via numerical tests on real datasets. Notably, BestOMKL outperforms state-of-the-art kernel-based algorithms, including Raker, on various online learning tasks while maintaining the same low complexity as Raker. These results suggest the practicality of BestOMKL.</div>
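To make the RF-OMKL setting concrete, the following is a minimal sketch of the generic framework the abstract describes: each kernel in a dictionary is approximated by random Fourier features, a per-kernel predictor is updated by online gradient descent, and the kernels are combined with multiplicative (expert-advice-style) weights. This is an illustrative toy implementation under assumed hyperparameters (the bandwidths, feature dimension `D`, and learning rates are hypothetical), not the paper's BestOMKL or Raker algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical kernel dictionary: RBF kernels with these bandwidths.
bandwidths = [0.5, 1.0, 2.0]
D = 50        # random features per kernel (assumed)
d = 5         # input dimension (assumed)
eta_w = 0.5   # per-kernel online gradient-descent step size (assumed)
eta_p = 1.0   # multiplicative-weight learning rate (assumed)

# Random Fourier features approximating k(x, x') = exp(-||x - x'||^2 / (2 s^2)).
omegas = [rng.normal(scale=1.0 / s, size=(D, d)) for s in bandwidths]
phases = [rng.uniform(0.0, 2.0 * np.pi, size=D) for _ in bandwidths]

def features(x, i):
    """Random-feature map z_i(x) for the i-th kernel."""
    return np.sqrt(2.0 / D) * np.cos(omegas[i] @ x + phases[i])

theta = [np.zeros(D) for _ in bandwidths]  # per-kernel RF weights
logw = np.zeros(len(bandwidths))           # log of kernel-combination weights

def predict(x):
    """Weighted combination of the per-kernel random-feature predictors."""
    p = np.exp(logw - logw.max())
    p /= p.sum()
    return sum(pi * features(x, i) @ theta[i] for i, pi in enumerate(p))

def update(x, y):
    """OGD on the squared loss per kernel, then reweight kernels by loss."""
    for i in range(len(bandwidths)):
        z = features(x, i)
        err = z @ theta[i] - y
        theta[i] -= eta_w * err * z     # online gradient step
        logw[i] -= eta_p * err ** 2     # penalize kernels with large loss

# Streaming regression on a toy target y = sin(1^T x).
losses = []
for t in range(500):
    x = rng.normal(size=d)
    y = np.sin(x.sum())
    losses.append((predict(x) - y) ** 2)
    update(x, y)
```

Per step, the cost is linear in the number of kernels and in `D`, which is the low-complexity property the abstract attributes to the RF-OMKL framework.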