THE GENERALIZATION OF A CONSTRUCTIVE ALGORITHM IN PATTERN CLASSIFICATION PROBLEMS

1992, Vol 03 (supp01), pp. 65-70
Author(s): Neil Burgess, Silvano Di Zenzo, Paolo Ferragina, Mario Notturno Granieri

The use of a constructive algorithm for pattern classification is examined. The algorithm, a ‘Perceptron Cascade’, has been shown to converge to zero errors whilst learning any consistent classification of real-valued pattern vectors (Burgess, 1992). Limiting network size and producing bounded decision regions are noted to be important for the generalization ability of a network. A scheme is suggested by which a result on generalization (Vapnik, 1992) may enable calculation of the optimal network size. A fast algorithm for principal component analysis (Sirat, 1991) is used to construct ‘hyper-boxes’ around each class of patterns to ensure bounded decision regions. Performance is compared with the Gaussian Maximum Likelihood procedure in three artificial problems simulating real pattern classification applications.
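The 'hyper-box' construction described above can be illustrated with a minimal sketch: compute the principal axes of one class of patterns, project the patterns into that frame, and bound them with a box given by the per-axis minima and maxima. This is a hedged reconstruction, not the authors' code; the function names and the absence of any margin around the box are assumptions for illustration.

```python
import numpy as np

def fit_hyperbox(X):
    """Fit a box aligned with the principal axes of the patterns X."""
    mean = X.mean(axis=0)
    # Principal axes from the eigendecomposition of the covariance matrix
    cov = np.cov(X - mean, rowvar=False)
    _, axes = np.linalg.eigh(cov)        # columns are principal axes
    Z = (X - mean) @ axes                # project into the principal frame
    return mean, axes, Z.min(axis=0), Z.max(axis=0)

def in_hyperbox(x, box):
    """Test whether pattern x falls inside the class's bounded region."""
    mean, axes, lo, hi = box
    z = (x - mean) @ axes
    return bool(np.all((z >= lo) & (z <= hi)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # stand-in for one class of patterns
box = fit_hyperbox(X)
print(in_hyperbox(X[0], box))            # training points lie inside the box
```

A pattern outside every class's box can then be rejected rather than assigned to an unbounded decision region.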

2002, Vol 30 (4), pp. 239-247
Author(s): S. M. Shamsuddin, M. Darus, M. N. Sulaiman

Data reduction is a feature-extraction process that transforms the data space into a feature space of much lower dimension than the original, while retaining most of the intrinsic information content of the data. This can be done with a number of methods, such as principal component analysis (PCA), factor analysis, and feature clustering. Principal components are extracted from a collection of multivariate cases to account for as much of the variation in that collection as possible with as few variables as possible. The backpropagation network, in turn, has been used extensively in classification problems such as the XOR problem, share-price prediction, and pattern recognition. This paper proposes an improved error signal for a backpropagation network that classifies invariants reduced by principal component analysis, which extracts the bulk of the useful information present in the moment invariants of handwritten digits and leaves the redundant information behind. Higher-order centralised scale invariants are used to extract features of the handwritten digits before PCA, and the reduced invariants are passed to the improved backpropagation model for classification.
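The data-reduction step described above can be sketched as projecting each feature vector onto its leading principal components before classification. This is an illustrative sketch, not the paper's implementation: the 10-dimensional random vectors stand in for moment-invariant features, and the function name is an assumption.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its k leading principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # k largest-variance axes
    return Xc @ top

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))    # stand-in for 10-D invariant vectors
Z = pca_reduce(X, 3)              # reduced features for the classifier
print(Z.shape)                    # (100, 3)
```

The reduced matrix `Z` would then be fed to the classifier in place of the original invariants.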


Author(s): Hyeuk Kim

Unsupervised learning in machine learning divides data into several groups: observations in the same group share similar characteristics, while observations in different groups differ. In this paper, we classify data by partitioning around medoids, which has some advantages over k-means clustering. We apply it to baseball players in the Korea Baseball League. We also apply principal component analysis to the data and draw a graph using the first two components as axes. Through this procedure we interpret the meaning of the clustering graphically. The combination of partitioning around medoids and principal component analysis can be applied to other data as well, and the approach makes it easy to identify the characteristics of the groups.
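The procedure above can be sketched as a minimal k-medoids (partitioning around medoids) run followed by a 2-D PCA projection for plotting. This is a hedged sketch under stated assumptions: the synthetic Gaussian data stand in for player statistics, and the simple alternating update below is only one basic variant of PAM.

```python
import numpy as np

def k_medoids(X, k, n_iter=50, seed=0):
    """Basic partitioning around medoids via alternating assign/update."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)      # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            # New medoid: the member minimizing total distance within the cluster
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

def pca2(X):
    """Project X onto its first two principal components for plotting."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(30, 5)) for c in (0.0, 5.0)])
labels, medoids = k_medoids(X, 2)
Z = pca2(X)    # 2-D coordinates for a cluster scatter plot
```

Plotting `Z` coloured by `labels` gives the kind of graphical interpretation of the clusters described in the abstract.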

