Dynamic Decay Adjustment
Recently Published Documents


TOTAL DOCUMENTS: 14 (five years: 2)

H-INDEX: 5 (five years: 0)

2021 ◽  
Author(s):  
Hue Yee CHONG ◽  
Shing Chiang TAN ◽  
Hwa Jen Yap

Abstract In recent decades, hybridizing the superior attributes of several algorithms has been proposed as a way to cover a broader range of complex applications and to improve performance. This paper presents an intelligent system that integrates a Radial Basis Function Network with Dynamic Decay Adjustment (RBFN-DDA) with Harmony Search (HS) to perform condition monitoring in industrial processes. Effective condition monitoring can help reduce unexpected breakdown incidents and facilitate maintenance. RBFN-DDA performs incremental learning wherein its structure expands by adding new hidden units to incorporate new information. As such, its training reaches stability in a shorter time than gradient-descent-based methods. By integrating the HS algorithm, the proposed metaheuristic neural network (RBFN-DDA-HS) can optimize the RBFN-DDA parameters, improving classification performance over the original RBFN-DDA by 2.2% to 22.5% across two benchmarks and a real-world condition-monitoring case study. The results also show that the classification performance of the proposed RBFN-DDA-HS is comparable to, if not better than, that of other state-of-the-art machine learning methods.
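As a rough illustration of the metaheuristic component (not the authors' implementation), a basic Harmony Search loop can tune a small parameter vector, such as the two RBFN-DDA thresholds, against any user-supplied fitness function. The fitness function, bounds, and all hyperparameter values below are assumptions for the sketch:

```python
import random

def harmony_search(fitness, bounds, hms=10, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iters=200, seed=0):
    """Minimize `fitness` over box `bounds` with basic Harmony Search (sketch)."""
    rng = random.Random(seed)
    # Harmony memory: `hms` random candidate vectors and their scores.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [fitness(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # memory consideration
                v = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bandwidth, bandwidth)
            else:                              # random improvisation
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))    # clip to bounds
        s = fitness(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```

In an RBFN-DDA-HS setting, `fitness` would be a (hypothetical) validation-error function of the candidate (θ+, θ−) pair, with bounds such as `[(0.1, 0.9), (0.01, 0.4)]`.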


2006 ◽  
Vol 69 (16-18) ◽  
pp. 2456-2460 ◽  
Author(s):  
Shing Chiang Tan ◽  
M.V.C. Rao ◽  
Chee Peng Lim

2006 ◽  
Vol 16 (04) ◽  
pp. 271-281 ◽  
Author(s):  
ADRIANO L. I. OLIVEIRA ◽  
ERICLES A. MEDEIROS ◽  
THYAGO A. B. V. ROCHA ◽  
MIGUEL E. R. BEZERRA ◽  
RONALDO C. VERAS

The dynamic decay adjustment (DDA) algorithm is a fast constructive algorithm for training RBF neural networks (RBFNs) and probabilistic neural networks (PNNs). The algorithm has two parameters, namely θ+ and θ−. The papers that introduced DDA argued that these parameters would not heavily influence classification performance and therefore recommended always using their default values. In contrast, this paper shows that smaller values of the parameter θ− can, for a considerable number of datasets, result in a strong improvement in generalization performance. The experiments described here were carried out using twenty benchmark classification datasets from the Proben1 and UCI repositories. The results show that for eleven of the datasets, the parameter θ− strongly influenced classification performance. The influence of θ− was also noticeable, although much weaker, on six of the remaining datasets. This paper also compares the performance of RBF-DDA with θ− selection against both AdaBoost and Support Vector Machines (SVMs).
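To make the role of θ+ and θ− concrete, here is a minimal sketch of one DDA training step over Gaussian prototypes: reinforce ("cover") a same-class unit whose activation is at least θ+, otherwise add ("commit") a new unit, then shrink the widths of conflicting-class units until their activation at the sample drops to θ−. The data layout and default values (0.40 and 0.20) follow the common DDA conventions but are assumptions of this sketch:

```python
import math

THETA_PLUS, THETA_MINUS = 0.40, 0.20  # commonly cited DDA defaults

def activation(proto, x):
    """Gaussian RBF activation of one prototype at input x."""
    d2 = sum((a - b) ** 2 for a, b in zip(proto["center"], x))
    return math.exp(-d2 / proto["sigma"] ** 2)

def dda_train_step(prototypes, x, label, theta_plus=THETA_PLUS,
                   theta_minus=THETA_MINUS, init_sigma=1.0):
    """One DDA step on sample (x, label): cover, commit, then shrink."""
    covered = [p for p in prototypes
               if p["label"] == label and activation(p, x) >= theta_plus]
    if covered:
        covered[0]["weight"] += 1.0            # cover: reinforce existing unit
    else:
        prototypes.append({"center": list(x), "sigma": init_sigma,
                           "weight": 1.0, "label": label})  # commit new unit
    for p in prototypes:                       # shrink conflicting units
        if p["label"] != label and activation(p, x) > theta_minus:
            d2 = sum((a - b) ** 2 for a, b in zip(p["center"], x))
            # Choose sigma so the unit's activation at x equals theta_minus.
            p["sigma"] = math.sqrt(d2 / -math.log(theta_minus))
    return prototypes
```

The sketch makes the paper's point visible: θ− controls how aggressively conflicting prototypes are shrunk, so smaller θ− yields narrower, more conservative decision regions.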

