The effects of multiple layers feed-forward neural network transfer function in digital based Ethiopian soil classification and moisture prediction

Author(s):  
Belete Biazen Bezabeh ◽  
Abrham Debasu Mengistu

In machine learning, performance analysis is a major task for obtaining good performance in both training and testing of a model. Performance analysis also helps identify how well the machine performs on the given input and what improvements the learning model needs. The feed-forward neural network (FFNN) has many areas of application, but the number of epochs the network needs to converge differs with the transfer function used. In this study, to build a model for soil classification and moisture prediction, the rectified linear unit (ReLU), sigmoid, hyperbolic tangent (tanh) and Gaussian transfer functions of a feed-forward neural network were analyzed to identify an appropriate transfer function. Color, texture, shape and BRISK local feature descriptors were used as the FFNN input feature vector, and four hidden layers with 26 neurons each were considered. The experiments show that the Gaussian transfer function outperforms ReLU, sigmoid and tanh, but it requires more epochs to converge than the other three.
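A minimal NumPy sketch of the comparison described above: a forward pass through four hidden layers of 26 neurons each, switching between the four candidate transfer functions. The input size and weight initialization are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

# The four candidate transfer functions compared in the study.
def relu(x):     return np.maximum(0.0, x)
def sigmoid(x):  return 1.0 / (1.0 + np.exp(-x))
def tanh(x):     return np.tanh(x)
def gaussian(x): return np.exp(-x ** 2)

def ffnn_forward(x, weights, biases, transfer):
    """Forward pass through the hidden layers with the chosen transfer function."""
    a = x
    for W, b in zip(weights, biases):
        a = transfer(a @ W + b)
    return a

# Hypothetical dimensions: a feature vector from colour/texture/shape/BRISK descriptors,
# then 4 hidden layers of 26 neurons each, as stated in the abstract.
rng = np.random.default_rng(0)
n_features = 64                      # assumed input size, not given in the abstract
layer_sizes = [n_features, 26, 26, 26, 26]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(layer_sizes, layer_sizes[1:])]
biases  = [np.zeros(n) for n in layer_sizes[1:]]

x = rng.standard_normal(n_features)
for name, fn in [("ReLU", relu), ("Sigmoid", sigmoid), ("Tanh", tanh), ("Gaussian", gaussian)]:
    print(name, ffnn_forward(x, weights, biases, fn)[:3])
```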

2021 ◽  
Vol 12 (2) ◽  
pp. 89
Author(s):  
As'ary Ramadhan

Estimating the development cost of a software project is one of the critical problems in software engineering. Software project failures are often caused by inaccurate estimation of the required resources. Several models have been developed over the past few decades, yet producing accurate software project cost estimates remains a challenge to this day. The aim of this study is to improve the accuracy of software project cost estimation by applying a genetic algorithm as the training process of a back-propagation feed-forward neural network (FFNN-BP) that accommodates the formula of the Post-Architecture Model (COCOMO II). The Magnitude of Relative Error (MRE) and the Mean Magnitude of Relative Error (MMRE) are used as performance indicators. The experimental results show that the proposed model produces more accurate software project cost estimates than COCOMO II and FFNN-BP. In this case, the MMRE for COCOMO II is 74.68% and for FFNN-BP it is 39.90%.  Keywords: COCOMO II, Machine Learning, IT Project Management, Backpropagation
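As a reference for the performance indicators mentioned above, a small sketch of how MRE and MMRE are typically computed; the effort values below are illustrative and not data from the paper.

```python
import numpy as np

def mre(actual, estimated):
    """Magnitude of Relative Error for each project."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    return np.abs(actual - estimated) / actual

def mmre(actual, estimated):
    """Mean Magnitude of Relative Error over all projects."""
    return float(np.mean(mre(actual, estimated)))

# Illustrative effort values (person-months); not from the paper.
actual    = [120.0, 45.0, 300.0]
estimated = [100.0, 60.0, 270.0]
print(f"MMRE = {mmre(actual, estimated):.2%}")
```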


2021 ◽  
Author(s):  
Shubhangi Pande ◽  
Neeraj Kumar Rathore ◽  
Anuradha Purohit

Abstract Machine learning applications make extensive use of the feed-forward neural network (FFNN). However, it has been observed that the training speed of the FFNN is not up to the mark. The fundamental causes of this problem are: 1) slow gradient-descent methods are broadly used to train neural networks, and 2) such methods require iterative tuning of the hidden-layer parameters, including weights and biases. To resolve these problems, this paper introduces an emerging machine learning algorithm that substitutes for the feed-forward neural network, the Extreme Learning Machine (ELM). ELM also provides a general learning scheme for a wide variety of networks (single-hidden-layer feed-forward networks, SLFNs, and multilayer networks). According to the originators of ELM, networks trained using backpropagation learn about a thousand times more slowly than networks trained using ELM, and ELM models also exhibit good generalization performance. ELM is more efficient than the Least Squares Support Vector Machine (LS-SVM), the Support Vector Machine (SVM), and other advanced approaches. ELM's distinctive design has three main targets: 1) high learning accuracy, 2) less human intervention and 3) fast learning speed. ELM is also considered to have a greater capacity to reach a global optimum. Its applications include feature learning, clustering, regression, compression and classification. The goal of this paper is to introduce the various ELM variants, their applications, ELM strengths, ELM research and comparisons with other learning algorithms, and many more concepts related to ELM.
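A minimal sketch of the core ELM idea for a single-hidden-layer network, assuming a sigmoid hidden activation: the input weights and biases are random and never tuned, and the output weights are obtained in one step via the Moore-Penrose pseudoinverse rather than by iterative gradient descent.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, rng=None):
    """Train a single-hidden-layer ELM: random hidden parameters,
    output weights solved analytically via the pseudoinverse."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)                  # random hidden biases (never tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                       # output weights in a single step
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression to show the one-shot fit (no iterative gradient descent).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
T = np.sin(3 * X) + 0.05 * rng.standard_normal((200, 1))
W, b, beta = elm_train(X, T, n_hidden=40, rng=1)
print("train MSE:", float(np.mean((elm_predict(X, W, b, beta) - T) ** 2)))
```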


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Andrés Muñoz-Villamizar ◽  
Carlos Yohan Rafavy ◽  
Justin Casey

Purpose
This research is inspired by a real case study of a pump rental company operating across the US. The company was looking to increase the utilization of its rental assets while keeping the cost of fleet mobilization as efficient as possible. However, decisions on asset movement between branches were largely arranged between individual branch managers on an as-needed basis.

Design/methodology/approach
The authors propose an improvement to the company's asset management practice by modeling an integrated decision tool that evaluates several machine learning algorithms for demand prediction and uses mathematical optimization for centrally-planned asset allocation.

Findings
The authors found that a feed-forward neural network (FNN) model with a single hidden layer is the best-performing predictor for the company's intermittent product demand, and the optimization model is shown to prescribe the most efficient asset allocation given the demand predicted by the FNN model.

Practical implications
The implementation of this new tool will close the gap between the company's current and desired levels of operational performance and consequently increase its competitiveness.

Originality/value
The results show superior prediction performance by a feed-forward neural network model and an efficient allocation decision prescribed by the optimization model.
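A hedged sketch of the kind of integrated tool described in the Findings: a single-hidden-layer feed-forward network (here scikit-learn's MLPRegressor) predicting per-branch demand, followed by a small linear program (SciPy's linprog) that allocates assets at minimum mobilization cost. All features, sizes and costs below are illustrative assumptions, not the company's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import linprog

# 1) Demand prediction: single-hidden-layer FNN (illustrative stand-in for the paper's model).
rng = np.random.default_rng(0)
X_hist = rng.standard_normal((500, 6))                 # e.g. lagged demand / seasonality features
y_hist = np.maximum(0, X_hist[:, 0] * 2 + rng.standard_normal(500))
fnn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_hist, y_hist)

X_next = rng.standard_normal((3, 6))                   # next-period features for 3 branches
demand = np.maximum(0, fnn.predict(X_next))            # predicted demand per branch

# 2) Centrally-planned allocation: move assets between branches at minimum cost.
supply = np.array([40.0, 25.0, 10.0])                  # assets currently held at each branch
cost = np.array([[0, 4, 7],                            # mobilization cost from branch i to j
                 [4, 0, 3],
                 [7, 3, 0]], dtype=float)
n = len(supply)
c = cost.ravel()                                       # decision variables x[i, j], flattened
A_ub, b_ub = [], []
for i in range(n):                                     # ship no more than each branch holds
    row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1
    A_ub.append(row); b_ub.append(supply[i])
for j in range(n):                                     # cover each branch's predicted demand
    row = np.zeros(n * n); row[j::n] = -1
    A_ub.append(row); b_ub.append(-demand[j])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print("allocation x[i, j]:\n", res.x.reshape(n, n).round(1))
```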


2002 ◽  
Vol 6 (4) ◽  
pp. 671-684 ◽  
Author(s):  
A. Y. Shamseldin ◽  
A. E. Nasr ◽  
K. M. O’Connor

Abstract. The Multi-Layer Feed-Forward Neural Network (MLFFNN) is applied in the context of river flow forecast combination, where a number of rainfall-runoff models are used simultaneously to produce an overall combined river flow forecast. The operation of the MLFFNN depends not only on its neuron configuration but also on the choice of neuron transfer function adopted, which is non-linear for the hidden and output layers. These models, each having a different structure to simulate the perceived mechanisms of the runoff process, utilise the information carrying capacity of the model calibration data in different ways. Hence, in a discharge forecast combination procedure, the discharge forecasts of each model provide a source of information different from that of the other models used in the combination. In the present work, the significance of the choice of the transfer function type in the overall performance of the MLFFNN, when used in the river flow forecast combination context, is investigated critically. Five neuron transfer functions are used in this investigation, namely, the logistic function, the bipolar function, the hyperbolic tangent function, the arctan function and the scaled arctan function. The results indicate that the logistic function yields the best model forecast combination performance.
Keywords: River flow forecast combination, multi-layer feed-forward neural network, neuron transfer functions, rainfall-runoff models
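For reference, common textbook forms of the five neuron transfer functions investigated; the exact scalings used by the authors may differ slightly from these.

```python
import numpy as np

# Common forms of the five neuron transfer functions compared in the paper.
def logistic(x):      return 1.0 / (1.0 + np.exp(-x))          # output in (0, 1)
def bipolar(x):       return 2.0 / (1.0 + np.exp(-x)) - 1.0    # bipolar sigmoid, output in (-1, 1)
def hyper_tangent(x): return np.tanh(x)                        # output in (-1, 1)
def arctan(x):        return np.arctan(x)                      # output in (-pi/2, pi/2)
def scaled_arctan(x): return (2.0 / np.pi) * np.arctan(x)      # rescaled to (-1, 1)

x = np.linspace(-4, 4, 5)
for fn in (logistic, bipolar, hyper_tangent, arctan, scaled_arctan):
    print(f"{fn.__name__:14s}", np.round(fn(x), 3))
```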


2021 ◽  
Vol 6 (5) ◽  
pp. 15-19
Author(s):  
Sina E. Charandabi ◽  
Kamyar Kamyar

This paper initially presents a nontechnical overview of cryptocurrency, its history, and the technicalities of its usage as a means of exchange. Bitcoin's working methodology and mathematical baseline are then presented in more depth. For the majority of the remaining paper, recent cryptocurrency price data for Bitcoin, Ethereum, Tether, Dogecoin, and Binance Coin are used to train feed-forward neural network models to predict future prices for each of the datasets. In conclusion, the results are discussed and the efficiency and accuracy of these models are evaluated.
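A minimal sketch of the kind of model described: a feed-forward network trained on a sliding window of past prices to predict the next price. The synthetic series, window length and network size below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.standard_normal(600)) + 100.0   # stand-in for a daily price series

# Build sliding-window samples: the last `window` prices predict the next one.
window = 14
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"hold-out MAPE: {mape:.2f}%")
```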

