A Novel Orthogonal Extreme Learning Machine for Regression and Classification Problems

Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1284
Author(s):  
Licheng Cui ◽  
Huawei Zhai ◽  
Hongfei Lin

An extreme learning machine (ELM) is an innovative algorithm for single hidden layer feed-forward neural networks; essentially, training reduces to finding the optimal output weights that minimize the output error via least squares regression from the hidden layer to the output layer. Focusing on the output weights, we introduce an orthogonal constraint on the output weight matrix and propose a novel orthogonal extreme learning machine (NOELM) based on column-by-column optimization, whose main characteristic is that the optimization of the full output weight matrix is decomposed into optimizing the individual column vectors of the matrix. The complex orthogonal Procrustes problem is thus transformed into simple least squares regression with an orthogonal constraint, which preserves more information from the ELM feature space in the output subspace and gives NOELM stronger regression and discrimination ability. Experiments show that NOELM outperforms ELM and OELM in training time, testing time, and accuracy.
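The core ELM training step and the orthogonal-constraint idea can be sketched as follows. This is a minimal numpy illustration, assuming a tanh hidden layer and the classical closed-form orthogonal Procrustes solution; NOELM's actual column-by-column scheme is not reproduced here, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with 2 output targets.
X = rng.normal(size=(200, 5))
T = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1])], axis=1)

# Standard ELM: random, untrained hidden layer; least-squares output weights.
n_hidden = 20
W = rng.normal(size=(5, n_hidden))       # random input weights
b = rng.normal(size=n_hidden)            # random hidden biases
H = np.tanh(X @ W + b)                   # hidden-layer output matrix
beta_ls = np.linalg.pinv(H) @ T          # Moore-Penrose least-squares solution

# Orthogonal output weights: min ||H B - T||_F s.t. B^T B = I has the
# closed-form Procrustes solution B = U V^T, where U S V^T = svd(H^T T).
U, _, Vt = np.linalg.svd(H.T @ T)
beta_orth = U[:, :2] @ Vt                # columns are orthonormal
```

The orthogonality constraint makes the output map an isometry on the fitted subspace, which is the sense in which more of the feature-space geometry is preserved.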

2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Liang-Rui Ren ◽  
Ying-Lian Gao ◽  
Jin-Xing Liu ◽  
Junliang Shang ◽  
Chun-Hou Zheng

Abstract Background As a machine learning method with high performance and excellent generalization ability, the extreme learning machine (ELM) is gaining popularity in various studies, and various ELM-based methods have been proposed for different fields. However, robustness to noise and outliers remains the main problem affecting ELM's performance. Results In this paper, an integrated method named correntropy-induced-loss-based sparse robust graph regularized extreme learning machine (CSRGELM) is proposed. The introduction of the correntropy induced loss improves the robustness of ELM and weakens the negative effects of noise and outliers. By using the L2,1-norm to constrain the output weight matrix, we tend to obtain a sparse output weight matrix and thus a simpler single hidden layer feedforward neural network model. By introducing graph regularization to preserve the local structural information of the data, the classification performance of the new method is further improved. Besides, we design an iterative optimization method based on half-quadratic optimization to solve the non-convex problem of CSRGELM. Conclusions Classification results on benchmark datasets show that CSRGELM obtains better results than other methods. More importantly, we also apply the new method to the classification of cancer samples and achieve good classification performance.
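The correntropy part of this approach can be illustrated with a half-quadratic (iteratively reweighted least squares) loop: each sample receives a Gaussian weight of its residual, so gross outliers are progressively ignored. This numpy sketch assumes a plain ridge-regularized single-output ELM and omits the L2,1-norm sparsity and graph regularization terms of CSRGELM; the kernel width and regularization values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data whose first 5 targets are gross outliers.
X = rng.normal(size=(150, 4))
t = X @ rng.normal(size=4)
t[:5] += 20.0

# Random ELM hidden layer.
H = np.tanh(X @ rng.normal(size=(4, 30)) + rng.normal(size=30))

sigma, lam = 1.0, 1e-2
# Start from the plain ridge-ELM solution.
beta = np.linalg.solve(H.T @ H + lam * np.eye(30), H.T @ t)
for _ in range(10):                          # half-quadratic reweighting
    e = t - H @ beta
    w = np.exp(-e**2 / (2 * sigma**2))       # correntropy weights: outliers -> ~0
    Hw = H * w[:, None]                      # weighted design matrix
    beta = np.linalg.solve(H.T @ Hw + lam * np.eye(30), Hw.T @ t)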
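```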


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Shan Pang ◽  
Xinyi Yang

In recent years, some deep learning methods have been developed and applied to image classification applications, such as the convolutional neural network (CNN) and the deep belief network (DBN). However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, namely the deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN with the fast training of ELM. It uses multiple alternating convolution and pooling layers to effectively abstract high level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods.
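Stochastic pooling, which DC-ELM uses in its last hidden layer, picks one activation per pooling region with probability proportional to its (non-negative) value, rather than taking the maximum or the mean. A minimal numpy sketch, assuming 2x2 non-overlapping regions on a non-negative feature map:

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_pool(region, rng):
    """Sample one activation with probability proportional to its value."""
    a = region.ravel()
    if a.sum() == 0:                 # all-zero region: nothing to sample
        return 0.0
    return rng.choice(a, p=a / a.sum())

# 4x4 non-negative feature map pooled with 2x2 stochastic pooling.
fmap = rng.random((4, 4))
pooled = np.array([[stochastic_pool(fmap[i:i+2, j:j+2], rng)
                    for j in (0, 2)] for i in (0, 2)])
```

Because large activations are likely but not guaranteed to be chosen, the operator acts as a regularizer while still reducing each region to a single value.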


2019 ◽  
Vol 9 (19) ◽  
pp. 3987 ◽  
Author(s):  
Zhang ◽  
Peng ◽  
Zhou ◽  
Ji ◽  
Wang

Complete characteristic curves of a pump turbine are fundamental for improving the modeling accuracy of the pump turbine in a pump turbine governing system. In view of the difficulty of modeling the "S" characteristic region of the complete characteristic curves, a novel Autoencoder and partial least squares regression based extreme learning machine model (AE-PLS-ELM) was proposed to describe the pump turbine characteristics. First, a mathematical model was formulated to describe the flow and moment characteristic curves, and the improved Suter transformation was employed to transform the original curves into WH and WM curves. Second, the ELM-Autoencoder technique and the partial least squares regression (PLSR) method were introduced into the architecture of the original ELM network: the ELM-Autoencoder technique was employed to obtain the initial weights of the Autoencoder-based extreme learning machine (AE-ELM) model, and the PLSR method was exploited to avoid the multicollinearity problem of the Moore-Penrose generalized inverse. Lastly, the effectiveness of the proposed AE-PLS-ELM model was verified using real data from a pumped storage unit in China. The results demonstrate that the AE-PLS-ELM model obtains better modeling accuracy and generalization performance than traditional models and thus can be exploited as an effective and efficient approach to modeling pump turbine characteristics.


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Chao Wang ◽  
Jianhui Wang ◽  
Shusheng Gu

Extreme learning machine (ELM) as an emerging technology has recently attracted many researchers' interest due to its fast learning speed and state-of-the-art generalization ability. Meanwhile, the incremental extreme learning machine (I-ELM), based on an incremental learning algorithm, was proposed and outperforms many popular learning algorithms. However, incremental ELM algorithms do not recalculate the output weights of all existing nodes when a new node is added and thus cannot obtain the least-squares solution of the output weight vectors. In this paper, we propose the orthogonal convex incremental extreme learning machine (OCI-ELM), which combines the Gram-Schmidt orthogonalization method with Barron's convex optimization learning method to solve the nonconvex optimization and least-squares solution problems, and we give rigorous proofs in theory. Moreover, we propose a deep architecture based on stacked OCI-ELM autoencoders, following the stacked generalization philosophy, for solving large and complex data problems. Experimental results on both UCI datasets and large datasets demonstrate that the deep network based on stacked OCI-ELM autoencoders (DOC-IELM-AEs) outperforms the other methods mentioned in the paper on both regression and classification problems.
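The Gram-Schmidt idea can be sketched as follows: each new random hidden node's output vector is orthogonalized against the existing ones, so its output weight has a closed form against the current residual and earlier weights never need recomputing. This is a simplified numpy illustration for a single-output regression problem, not the authors' full convex-optimization scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
t = np.sin(X[:, 0]) + 0.5 * X[:, 1]

H = np.empty((100, 0))                 # hidden outputs, grown one node at a time
residual = t.copy()
for _ in range(20):
    w, b = rng.normal(size=3), rng.normal()
    h = np.tanh(X @ w + b)             # new random node's output vector
    h = h - H @ (H.T @ h)              # Gram-Schmidt against existing columns
    h /= np.linalg.norm(h)
    beta = h @ residual                # exact LS weight for an orthonormal column
    residual -= beta * h               # earlier weights stay optimal unchanged
    H = np.column_stack([H, h])
```

Because the columns of H are kept orthonormal, the per-node weights jointly equal the full least-squares solution, which is exactly what plain I-ELM fails to attain.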


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Qinwei Fan ◽  
Tongke Fan

Extreme learning machine (ELM), as a new simple feedforward neural network learning algorithm, has been extensively used in practical applications because of its good generalization performance and fast learning speed. However, the standard ELM requires more hidden nodes in applications due to the random assignment of hidden layer parameters, which in turn leads to disadvantages such as poor hidden layer sparsity, low adjustment ability, and a complex network structure. In this paper, we propose a hybrid ELM algorithm based on the bat and cuckoo search algorithms to optimize the input weights and thresholds of the ELM. We test numerical performance on function approximation and classification problems over several benchmark datasets; simulation results show that the proposed algorithm obtains significantly better prediction accuracy than similar algorithms.
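The overall scheme — searching over the normally random input weights and biases, scoring each candidate by the validation error of the resulting ELM — can be sketched as follows. A plain random search stands in here for the bat/cuckoo hybrid; split sizes and node count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

X = rng.normal(size=(120, 4))
t = np.sin(X @ np.ones(4))
X_tr, t_tr, X_val, t_val = X[:80], t[:80], X[80:], t[80:]

def elm_val_error(W, b):
    """Fit output weights on the training split, score on the validation split."""
    H = np.tanh(X_tr @ W + b)
    beta = np.linalg.pinv(H) @ t_tr
    pred = np.tanh(X_val @ W + b) @ beta
    return np.mean((pred - t_val) ** 2)

# Candidate input weights/biases are proposed by the metaheuristic; here,
# plain random sampling plays that role.
best_err, best = np.inf, None
for _ in range(30):
    W, b = rng.normal(size=(4, 15)), rng.normal(size=15)
    err = elm_val_error(W, b)
    if err < best_err:
        best_err, best = err, (W, b)
```

Any population-based optimizer slots into the proposal step; only the candidate-generation rule changes.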


Author(s):  
KE LI ◽  
RAN WANG ◽  
SAM KWONG ◽  
JINGJING CAO

Extreme Learning Machine (ELM) is an emergent technique for training Single-hidden Layer Feedforward Networks (SLFNs). It has attracted significant interest in recent years, but the randomly assigned network parameters can incur high learning risks. This motivates us to propose an evolving ELM paradigm for classification problems. In this paradigm, a Differential Evolution (DE) variant, which can select the appropriate operator for offspring generation online and adaptively adjust the corresponding control parameters, is proposed for optimizing the network. In addition, 5-fold cross validation is adopted in the fitness assignment procedure to improve the generalization capability. Empirical studies on several real-world classification data sets demonstrate that the evolving ELM paradigm generally outperforms the original ELM as well as several recent classification algorithms.
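The DE loop over flattened ELM input parameters can be sketched with the classic DE/rand/1/bin operator and fixed control parameters; the paper's variant selects operators and adapts parameters online, and uses 5-fold cross validation rather than the single holdout used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)

dim = 4 * 10 + 10                     # flattened input weights (4x10) + 10 biases
pop = rng.normal(size=(12, dim))      # population of candidate ELM parameters

X = rng.normal(size=(100, 4))
t = (X[:, 0] > 0).astype(float)

def fitness(v):
    """Holdout accuracy of the ELM encoded by parameter vector v."""
    W, b = v[:40].reshape(4, 10), v[40:]
    H = np.tanh(X[:70] @ W + b)
    beta = np.linalg.pinv(H) @ t[:70]
    pred = np.tanh(X[70:] @ W + b) @ beta
    return np.mean((pred > 0.5) == t[70:].astype(bool))

F, CR = 0.5, 0.9                      # fixed mutation factor and crossover rate
for gen in range(5):                  # DE/rand/1/bin generations
    for i in range(len(pop)):
        a, b_, c = pop[rng.choice(len(pop), 3, replace=False)]
        mutant = a + F * (b_ - c)                 # differential mutation
        cross = rng.random(dim) < CR              # binomial crossover mask
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) >= fitness(pop[i]):     # greedy selection
            pop[i] = trial
```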


2020 ◽  
Vol 36 (1) ◽  
pp. 35-44 ◽  
Author(s):  
LIMPAPAT BUSSABAN ◽  
ATTAPOL KAEWKHAO ◽  
SUTHEP SUANTAI

In this paper, a novel algorithm, called the parallel inertial S-iteration forward-backward algorithm (PISFBA), is proposed for finding a common fixed point of a countable family of nonexpansive mappings, and the convergence behavior of PISFBA is analyzed and discussed. As applications, we apply PISFBA to estimate the weights connecting the hidden layer and output layer in a regularized extreme learning machine. Finally, the proposed learning algorithm is applied to solve regression and data classification problems.
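A basic forward-backward (proximal gradient) iteration of the kind PISFBA accelerates can be sketched for an l1-regularized ELM output-weight problem: the forward step is a gradient step on the least-squares term, the backward step is the soft-thresholding proximal operator. This minimal numpy sketch omits the inertial and parallel S-iteration components that define PISFBA itself.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 3))
t = X @ np.array([1.0, -2.0, 0.5])
H = np.tanh(X @ rng.normal(size=(3, 25)) + rng.normal(size=25))  # ELM hidden layer

lam = 0.1
L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
step = 1.0 / L
beta = np.zeros(25)
for _ in range(500):
    grad = H.T @ (H @ beta - t)        # forward (gradient) step on 0.5||H b - t||^2
    z = beta - step * grad
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # backward (prox) step
```

The fixed points of this map are exactly the minimizers of the regularized objective, which is what links the fixed-point framework to ELM training.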


2016 ◽  
Vol 25 (01) ◽  
pp. 1550026 ◽  
Author(s):  
Juan J. Carrasco ◽  
Mónica Millán-Giraldo ◽  
Juan Caravaca ◽  
Pablo Escandell-Montero ◽  
José M. Martínez-Martínez ◽  
...  

Extreme Learning Machine (ELM) is a recently proposed algorithm that is efficient and fast for learning the parameters of single layer neural structures. One of the main problems with this algorithm is choosing the optimal architecture for a given problem. Several solutions have been proposed in the literature to address this limitation, including regularization of the structure. However, to the best of our knowledge, there are no works where such adjustment is applied to classification problems in the presence of a non-linearity in the output; all published works tackle modelling or regression problems. Our proposal has been applied to a series of standard databases for the evaluation of machine learning techniques. Results, in terms of classification success rate and training time, are compared to the original ELM, to the well-known Least Squares Support Vector Machine (LS-SVM) algorithm, and to two other methods based on ELM regularization: the Optimally Pruned Extreme Learning Machine (OP-ELM) and the Bayesian Extreme Learning Machine (BELM). The obtained results clearly demonstrate the usefulness of the proposed method and its superiority over the classical approach.
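A ridge-regularized ELM classifier of the kind these methods build on can be sketched in a few lines of numpy; the dataset, node count, and regularization constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy two-class problem (XOR-like), one-hot targets.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
T = np.eye(2)[y]

W = rng.normal(size=(2, 40))               # random input weights
b = rng.normal(size=40)                    # random biases
H = np.tanh(X @ W + b)                     # hidden-layer output matrix

C = 10.0                                   # regularization strength
# Regularized output weights: (H^T H + I/C) beta = H^T T
beta = np.linalg.solve(H.T @ H + np.eye(40) / C, H.T @ T)
pred = np.argmax(H @ beta, axis=1)         # class = argmax over output scores
train_acc = np.mean(pred == y)
```

The ridge term both stabilizes the inverse and acts as the structural regularization whose tuning the compared methods automate.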


2022 ◽  
Vol 2153 (1) ◽  
pp. 012014
Author(s):  
E Gelvez-Almeida ◽  
A Vásquez-Coronel ◽  
R Guatelli ◽  
V Aubin ◽  
M Mora

Abstract The extreme learning machine is an algorithm that has shown good performance on classification and regression problems. It has gained great acceptance in the scientific community due to the simplicity of the model and its great generalization capacity. This work proposes the use of extreme learning machine neural networks to classify between Parkinson’s disease patients and healthy individuals. The descriptor used corresponds to the feature vector generated by applying the local binary pattern algorithm to grayscale spectrograms, which are obtained from the audio signal samples of the considered repository. Experiments are conducted with single hidden layer and multilayer extreme learning machine networks, comparing the results of each structure. Results show that a hierarchical extreme learning machine with three hidden layers performs better overall than multilayer extreme learning machine networks and a single hidden layer extreme learning machine. The success rate obtained is within the ranges reported in the literature; moreover, training the hierarchical network is considerably faster than training multilayer networks with two or three hidden layers.


Extreme Learning Machine (ELM) is an efficient and effective least-squares-based learning algorithm for classification and regression problems based on the single hidden layer feed-forward neural network (SLFN). It has been shown in the literature to have fast convergence and good generalization ability on moderate datasets. However, there is a great deal of challenge involved in computing the pseudoinverse when there are large numbers of hidden nodes or a large number of training instances in complex pattern recognition problems. To address this problem, a few approaches such as EM-ELM and DF-ELM have been proposed in the literature. In this paper, a new rank-based matrix decomposition of the hidden layer matrix is introduced to achieve optimal training time and reduce the computational complexity for a large number of hidden nodes in the hidden layer. The results show that it has near-constant training time, close to the minimal training time and far from the worst-case training time of the DF-ELM algorithm, which has been shown to be efficient in the recent literature.
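The practical point — that the output weights should come from a matrix decomposition rather than from forming the pseudoinverse explicitly — can be illustrated as follows; the paper's specific rank-based decomposition is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
H = np.tanh(rng.normal(size=(500, 60)))     # tall hidden-layer matrix (N >> nodes)
T = rng.normal(size=(500, 3))               # multi-output targets

# Explicit Moore-Penrose inverse: forms pinv(H) as a dense matrix first.
beta_pinv = np.linalg.pinv(H) @ T

# Decomposition-based solve: same minimum-norm least-squares solution,
# without ever materializing the pseudoinverse.
beta_lstsq, *_ = np.linalg.lstsq(H, T, rcond=None)
```

For a full-column-rank H the two solutions coincide; the decomposition route scales better as the number of hidden nodes or training instances grows, which is the regime the proposed method targets.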

