Self-Organizing Map Learning Nonlinearly Embedded Manifolds

2005 ◽  
Vol 4 (1) ◽  
pp. 22-31 ◽  
Author(s):  
Timo Similä

One of the main tasks in exploratory data analysis is to create an appropriate representation for complex data. In this paper, the problem of creating a representation for observations lying on a low-dimensional manifold embedded in high-dimensional coordinates is considered. We propose a modification of the self-organizing map (SOM) algorithm that is able to learn the manifold structure in the high-dimensional observation coordinates. Any manifold learning algorithm may be incorporated into the proposed training strategy to guide the map onto the manifold surface instead of becoming trapped in local minima. In this paper, the locally linear embedding (LLE) algorithm is adopted. We apply the proposed method successfully to several data sets with manifold geometry, including an illustrative example of a surface as well as image data. We also show with other experiments that the advantage of the method over the basic SOM is restricted to this specific type of data.
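The locally linear embedding step adopted above can be sketched in plain numpy. This is a generic implementation of standard LLE, not the paper's modified SOM training strategy; the neighbor count, regularizer, and test data are illustrative assumptions:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal locally linear embedding (LLE) sketch."""
    n = X.shape[0]
    # Pairwise squared distances -> k nearest neighbors (excluding self).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    # Step 1: weights that reconstruct each point from its neighbors,
    # with a small regularizer for numerical stability.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                       # centered neighborhood
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                 # weights sum to one

    # Step 2: embedding from the bottom eigenvectors of M = (I-W)^T (I-W),
    # discarding the constant eigenvector.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

On a noisy curve embedded in 3-D, the returned coordinates recover a 2-D parameterization that preserves local neighborhoods.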

Algorithms ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 109 ◽  
Author(s):  
Marian B. Gorzałczany ◽  
Filip Rudziński

In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, we discuss Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions: Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs). They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of particular solutions is illustrated and compared by means of selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks by providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer implementations of the particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our earlier-mentioned original solutions are demonstrated.
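The growth mechanism shared by GSOM/GNG-style networks (accumulate quantization error per unit, then insert a new unit where the accumulated error is largest) can be sketched in numpy. The 1-D chain topology, learning rates, and insertion schedule below are illustrative assumptions, not any of the surveyed algorithms verbatim:

```python
import numpy as np

def grow_chain(data, n_start=2, n_insert=8, steps_per_insert=50,
               lr=0.2, seed=0):
    """Sketch of error-driven growth: a 1-D chain of code vectors that
    inserts a new unit next to the unit with the largest accumulated
    quantization error (the common mechanism behind GSOM/GNG-style
    growing networks; details differ per method)."""
    rng = np.random.default_rng(seed)
    W = data[rng.choice(len(data), n_start, replace=False)].astype(float)
    for _ in range(n_insert):
        err = np.zeros(len(W))
        for _ in range(steps_per_insert):
            x = data[rng.integers(len(data))]
            b = np.argmin(((W - x) ** 2).sum(1))   # best-matching unit
            err[b] += ((W[b] - x) ** 2).sum()      # accumulate its error
            W[b] += lr * (x - W[b])                # adapt the winner
            for nb in (b - 1, b + 1):              # adapt chain neighbors
                if 0 <= nb < len(W):
                    W[nb] += 0.5 * lr * (x - W[nb])
        q = int(np.argmax(err))                    # most-stressed unit
        nb = q - 1 if q > 0 else q + 1
        new = 0.5 * (W[q] + W[nb])                 # midpoint insertion
        W = np.insert(W, max(q, nb), new, axis=0)
    return W
```

Each growth cycle adds one unit, so the final chain has `n_start + n_insert` code vectors placed where the data demanded the most resolution.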


2021 ◽  
pp. 1-33
Author(s):  
Nicolas P. Rougier ◽  
Georgios Is. Detorakis

Abstract We propose a variation of the self-organizing map algorithm by considering the random placement of neurons on a two-dimensional manifold, following a blue noise distribution from which various topologies can be derived. These topologies possess random (but controllable) discontinuities that allow for a more flexible self-organization, especially with high-dimensional data. The proposed algorithm is tested on one-, two-, and three-dimensional tasks, as well as on the MNIST handwritten digits data set, and validated using spectral analysis and topological data analysis tools. We also demonstrate the ability of the randomized self-organizing map to gracefully reorganize itself in case of neural lesion and/or neurogenesis.
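A blue noise placement in the spirit described above can be approximated by naive dart throwing: rejection sampling under a minimum-distance constraint. The sampler below is a generic stand-in, not the authors' construction:

```python
import numpy as np

def dart_throwing(n_points, r_min, seed=0, max_tries=20000):
    """Naive dart-throwing sketch of a blue-noise (Poisson-disk) sample
    on the unit square: accept a random candidate only if it keeps at
    least distance r_min from every accepted point."""
    rng = np.random.default_rng(seed)
    pts = []
    tries = 0
    while len(pts) < n_points and tries < max_tries:
        c = rng.uniform(0, 1, 2)                   # candidate position
        if all(np.linalg.norm(c - p) >= r_min for p in pts):
            pts.append(c)                          # accept: no conflict
        tries += 1
    return np.array(pts)
```

The resulting point set is evenly spread yet irregular, which is the property the randomized SOM exploits to derive its flexible topologies.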


2013 ◽  
Vol 677 ◽  
pp. 502-507
Author(s):  
Kang Hua Hui ◽  
Chun Li Li ◽  
Xiao Rong Feng ◽  
Xue Yang Wang

In this paper, a new method is proposed that can be considered a combination of sparse representation based classification (SRC) and the KNN classifier. In detail, under the assumption that a locally linear embedding exists, the proposed method achieves the classification goal via non-negative locally sparse representation, combining the reconstruction property and sparsity of SRC with the discriminative power of KNN. Compared to SRC, the proposed method has obvious discriminative ability and is better suited to real image data, without preconditions that are difficult to satisfy. Moreover, it is more suitable for classifying low-dimensional data produced by dimensionality reduction methods, especially those that obtain low-dimensional, neighborhood-preserving embeddings of high-dimensional data. Experiments on MNIST are also presented, which support the above arguments.
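The SRC+KNN combination can be sketched as follows: restrict the dictionary to the k nearest training samples, solve a non-negative least-squares reconstruction, and classify by the smallest class-wise residual. This is a hedged reading of the abstract, not the paper's exact formulation, and it assumes scipy is available for the non-negative solver:

```python
import numpy as np
from scipy.optimize import nnls

def nn_local_sparse_classify(X_train, y_train, x, k=10):
    """Sketch of non-negative locally sparse classification: a local KNN
    dictionary, a non-negative coding of the query, and SRC-style
    class-wise residual comparison."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                      # local KNN dictionary
    A, labels = X_train[idx].T, y_train[idx]
    w, _ = nnls(A, x)                            # non-negative coding
    best, best_res = None, np.inf
    for c in np.unique(labels):
        wc = np.where(labels == c, w, 0.0)       # keep class-c coefficients
        res = np.linalg.norm(x - A @ wc)         # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best
```

Because the dictionary is local, only classes actually present in the neighborhood compete, which is where the KNN-style discrimination enters.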


Author(s):  
Fumiya Akasaka ◽  
Kazuki Fujita ◽  
Yoshiki Shimomura

This paper proposes the PSS Business Case Map as a tool to support designers’ idea generation in PSS design. The map visualizes the similarities among PSS business cases in a two-dimensional diagram. To make the map, PSS business cases are first collected by conducting, for example, a literature survey. The collected business cases are then classified according to multiple aspects that characterize each case, such as its product type, service type, target customer, and so on. Based on the results of this classification, the similarities among the cases are calculated and visualized by using the Self-Organizing Map (SOM) technique. A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional) view of high-dimensional data. The visualization result is offered to designers in the form of a two-dimensional map, which is called the PSS Business Case Map. By using the map, designers can identify the position of their current business and acquire ideas for the servitization of their business.
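The SOM training described above (unsupervised, winner-take-most updates under a shrinking Gaussian neighborhood) can be sketched in numpy. Grid size, learning rate, and decay schedule are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM sketch: a 2-D grid of code vectors trained by
    winner-take-most updates with a shrinking Gaussian neighborhood."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.uniform(size=(h * w, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    gy, gx = np.divmod(np.arange(h * w), w)
    G = np.c_[gy, gx].astype(float)
    n_steps = epochs * len(data)
    for t in range(n_steps):
        x = data[rng.integers(len(data))]
        b = np.argmin(((W - x) ** 2).sum(1))       # best-matching unit
        frac = t / n_steps
        lr = lr0 * (1 - frac)                      # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5          # shrinking neighborhood
        theta = np.exp(-((G - G[b]) ** 2).sum(1) / (2 * sigma ** 2))
        W += lr * theta[:, None] * (x - W)         # neighborhood update
    return W.reshape(h, w, -1)
```

For the map above, each business case would be a feature vector built from the classification aspects; after training, nearby grid cells hold similar cases, giving the two-dimensional view the designers read.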


2014 ◽  
Vol 41 (3) ◽  
pp. 341-355 ◽  
Author(s):  
Yi Xiao ◽  
Rui-Bin Feng ◽  
Zi-Fa Han ◽  
Chi-Sing Leung

2020 ◽  
Vol 92 (15) ◽  
pp. 10450-10459 ◽  
Author(s):  
Wil Gardner ◽  
Ruqaya Maliki ◽  
Suzanne M. Cutts ◽  
Benjamin W. Muir ◽  
Davide Ballabio ◽  
...  

2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Kwang Baek Kim ◽  
Chang Won Kim

Accurate measures of liver fat content are essential for investigating hepatic steatosis. For a noninvasive, inexpensive ultrasonographic analysis, it is necessary to validate the quantitative assessment of liver fat content so that fully automated, reliable computer-aided software can assist medical practitioners without any operator subjectivity. In this study, we attempt to quantify the hepatorenal index difference between the liver and the kidney with respect to the multiple severity levels of hepatic steatosis. In order to do this, a series of carefully designed image processing techniques, including fuzzy stretching and edge tracking, are applied to extract regions of interest. Then, an unsupervised neural learning algorithm, the self-organizing map, is designed to establish characteristic clusters from the image, and the distribution of the hepatorenal index values with respect to the different levels of fatty liver status is experimentally verified to estimate the differences in the distribution of the hepatorenal index. Such findings will be useful in building reliable computer-aided diagnostic software if combined with other characteristic feature sets and powerful machine learning classifiers in the future.
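Fuzzy stretching in the pipeline above is a contrast-enhancement step. The paper's exact membership function and bounds are not given here, so the mean-centered spread rule in this sketch is an assumption, a generic stand-in for the technique:

```python
import numpy as np

def fuzzy_stretch(img, alpha=0.5):
    """Sketch of fuzzy stretching for contrast enhancement: a linear
    stretch whose bounds are chosen relative to the image mean (a
    common fuzzy-stretching heuristic; the paper's exact membership
    function may differ)."""
    img = img.astype(float)
    m = img.mean()
    # Fuzzy spread around the mean intensity; alpha is an assumption.
    d = alpha * max(m - img.min(), img.max() - m)
    lo, hi = m - d, m + d
    out = (img - lo) / (hi - lo) * 255.0           # linear stretch
    return np.clip(out, 0, 255)                    # keep 8-bit range
```

On low-contrast ultrasound-like intensities, this expands the dynamic range around the mean, which helps the subsequent edge tracking and region extraction.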


2020 ◽  
Vol 49 (3) ◽  
pp. 421-437
Author(s):  
Genggeng Liu ◽  
Lin Xie ◽  
Chi-Hua Chen

Dimensionality reduction plays an important role in data processing for machine learning and data mining, making the processing of high-dimensional data more efficient. Dimensionality reduction can extract a low-dimensional feature representation of high-dimensional data, and an effective dimensionality reduction method can not only extract most of the useful information of the original data but also remove useless noise. Dimensionality reduction methods can be applied to all types of data, especially image data. Although supervised learning methods have achieved good results in dimensionality reduction applications, their performance depends on the number of labeled training samples. With the growth of information on the Internet, labeling data requires more resources and is more difficult. Therefore, using unsupervised learning to learn data features has extremely important research value. In this paper, an unsupervised multilayered variational auto-encoder model is studied on text data, so that mapping high-dimensional features to low-dimensional features becomes efficient and the low-dimensional features retain as much information as possible. Low-dimensional features obtained by different dimensionality reduction methods are compared with the dimensionality reduction results of the variational auto-encoder (VAE), and the proposed method shows significant improvement over the other comparison methods.
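The VAE encode-sample-decode cycle can be sketched as a single numpy forward pass. The weights here are toy placeholders and the network is single-layer, not the paper's multilayered model; a real VAE is trained by maximizing the ELBO (reconstruction term minus the KL term computed below):

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_forward(x, W_enc, W_mu, W_logvar, W_dec):
    """Sketch of a VAE forward pass: encode, sample the latent code via
    the reparameterization trick, decode, and compute the KL term."""
    h = np.tanh(x @ W_enc)                  # encoder hidden layer
    mu, logvar = h @ W_mu, h @ W_logvar     # latent Gaussian parameters
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps     # reparameterization trick
    x_hat = z @ W_dec                       # (linear) decoder
    # KL divergence of q(z|x) from the standard normal prior.
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1 - logvar)
    return z, x_hat, kl
```

The latent vector `z` is the low-dimensional feature used in place of the original high-dimensional input; the KL term keeps the latent distribution close to the prior so the space stays well-behaved.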


2019 ◽  
Vol 283 ◽  
pp. 07009
Author(s):  
Xinyao Zhang ◽  
Pengyu Wang ◽  
Ning Wang

Dimensionality reduction is one of the central problems in machine learning and pattern recognition, which aims to develop a compact representation for complex data from high-dimensional observations. Here, we apply a nonlinear manifold learning algorithm, the local tangent space alignment (LTSA) algorithm, to high-dimensional acoustic observations and achieve nonlinear dimensionality reduction for the acoustic field measured by a linear sensor array. By dimensionality reduction, the underlying physical degrees of freedom of the acoustic field, such as variations in sound source location and sound speed profiles, can be discovered. Two simulations are presented to verify the validity of the approach.
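The first step of LTSA, estimating the local tangent space at every sample by PCA (SVD) of its centered neighborhood, can be sketched in numpy. Full LTSA then aligns these local coordinates into a single global embedding, which is omitted here; neighbor count and data are illustrative assumptions:

```python
import numpy as np

def local_tangent_spaces(X, n_neighbors=10, dim=2):
    """Sketch of LTSA's first step: an orthonormal tangent basis at each
    sample from the SVD of its centered k-nearest neighborhood."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]  # includes the point itself
    bases = np.empty((n, X.shape[1], dim))
    for i in range(n):
        Z = X[nbrs[i]] - X[nbrs[i]].mean(0)         # centered neighborhood
        # Right singular vectors are the principal (tangent) directions.
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        bases[i] = Vt[:dim].T                       # tangent basis at x_i
    return bases
```

For the acoustic application above, the dimension of these tangent spaces reflects the number of underlying physical degrees of freedom, such as source position and sound speed parameters.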

