Comparative Evaluation of Predicting Energy Consumption of Absorption Heat Pump with Multilayer Shallow Neural Network Training Algorithms

Buildings ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 13
Author(s):  
Jee-Heon Kim ◽  
Nam-Chul Seong ◽  
Won-Chang Choi

The performance of various multilayer neural network algorithms in predicting the energy consumption of an absorption chiller in an air conditioning system was compared and evaluated under identical conditions in this study. Each prediction model was created using one of 12 representative multilayer shallow neural network training algorithms. About a month of actual operation data from the heating period was used as training data, and the predictive performance of the 12 algorithms was evaluated as a function of training size. The prediction results indicate error rates against the measured values with a minimum of 0.09%, a maximum of 5.76%, and a standard deviation (SD) of 1.94 for the Levenberg–Marquardt backpropagation model, and a minimum of 0.41%, a maximum of 5.05%, and an SD of 1.68 for the Bayesian regularization backpropagation model. The conjugate gradient with Polak–Ribière updates backpropagation model yielded lower values than the other two models, with a minimum of 0.31%, a maximum of 5.73%, and an SD of 1.76. Based on the predictive performance evaluation index, the coefficient of variation of the root mean square error (CvRMSE), all other models (conjugate gradient with Fletcher–Reeves updates backpropagation, one-step secant backpropagation, gradient descent with momentum and adaptive learning rate backpropagation, gradient descent with momentum backpropagation), except for the gradient descent backpropagation model, yielded results that satisfy ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) Guideline 14. The results of this study confirm that prediction performance can differ across multilayer neural network training algorithms. Selecting a model appropriate to the characteristics of a specific project is therefore essential.
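The acceptance criterion above is typically computed as CvRMSE = 100 × RMSE / mean(measured). A minimal sketch of that calculation and the Guideline 14 check follows; the data values and the 30% hourly-data threshold applied here are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def cv_rmse(measured, predicted):
    """Coefficient of variation of the RMSE, in percent."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()

# Hypothetical chiller energy readings (kWh); not the paper's measurements.
measured = np.array([52.1, 48.7, 55.3, 50.9, 49.4])
predicted = np.array([51.6, 49.9, 54.1, 52.0, 48.8])

score = cv_rmse(measured, predicted)
# ASHRAE Guideline 14 commonly accepts a calibrated model at CV(RMSE) <= 30%
# for hourly data (15% for monthly data).
print(f"CV(RMSE) = {score:.2f}% -> {'pass' if score <= 30.0 else 'fail'}")
```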


2018 ◽  
Vol 5 (2) ◽  
pp. 145-156 ◽  
Author(s):  
Taposh Kumar Neogy ◽  
Naresh Babu Bynagari

In machine learning, the transition from hand-designed features to learned features has been a huge success. Optimization methods, however, are still designed by hand. In this study, we illustrate how the design of an optimization method can be recast as a learning problem, allowing the algorithm to learn automatically to exploit structure in the problems of interest. Our learned algorithms, implemented by LSTMs, beat generic, hand-designed competitors on the tasks for which they are trained, and they also adapt well to other challenges with comparable structure. We show this on a variety of tasks, including simple convex problems, neural network training, and styling images with neural art.
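The core idea is to replace a hand-designed update rule such as SGD's θ ← θ − α∇f(θ) with an update produced by a recurrent network: θ ← θ + gₜ, where gₜ is the output of an LSTM that consumes the current gradient. A toy PyTorch sketch of that update loop follows; the class name, sizes, and quadratic optimizee are illustrative assumptions, and the meta-training loop that actually makes the LSTM's updates useful is omitted.

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Toy learned optimizer: maps each parameter's gradient to an update.
    A sketch of the idea only; the coordinatewise preprocessing and the
    meta-training described in the paper are not reproduced here."""
    def __init__(self, hidden_size=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden_size)  # one step per coordinate
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, grad, state):
        # grad: (n_params, 1); parameters share the same LSTM weights.
        h, c = self.cell(grad, state)
        return self.out(h), (h, c)

# Optimizee: a simple quadratic loss f(theta) = ||theta - target||^2.
target = torch.tensor([1.0, -2.0, 3.0])
theta = torch.zeros(3, requires_grad=True)
opt = LSTMOptimizer()
state = (torch.zeros(3, 20), torch.zeros(3, 20))

for step in range(5):
    loss = ((theta - target) ** 2).sum()
    grad, = torch.autograd.grad(loss, theta)
    update, state = opt(grad.unsqueeze(1), state)
    # The LSTM is untrained here, so the updates are arbitrary; meta-training
    # would optimize opt's weights against the optimizee's summed loss.
    theta = (theta + update.squeeze(1)).detach().requires_grad_(True)
    print(f"step {step}: loss = {loss.item():.4f}")
```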



2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Ioannis E. Livieris ◽  
Panagiotis Pintelas

Conjugate gradient methods are excellent neural network training methods, characterized by their simplicity, numerical efficiency, and very low memory requirements. In this paper, we propose a conjugate gradient neural network training algorithm that guarantees sufficient descent with any line search, thereby avoiding the usually inefficient restarts. Moreover, it achieves high-order accuracy in approximating the second-order curvature information of the error surface by utilizing the modified secant condition proposed by Li et al. (2007). Under mild conditions, we establish that the proposed method is globally convergent for general functions under the strong Wolfe conditions. Experimental results provide evidence that the proposed method is preferable to, and in general outperforms, the classical conjugate gradient methods, and that it has the potential to significantly enhance the computational efficiency and robustness of the training process.
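For reference, the classical baseline such a method improves on is nonlinear conjugate gradient with a strong Wolfe line search, where each search direction is d ← −g + βd with, for example, the Polak–Ribière+ choice of β. A minimal sketch follows; it implements that classical PR+ method using SciPy's strong Wolfe line search, not the paper's modified-secant-condition algorithm, and the Rosenbrock test function is an illustrative choice.

```python
import numpy as np
from scipy.optimize import line_search

def pr_plus_cg(f, grad, x0, max_iter=200, tol=1e-6):
    """Classical Polak-Ribiere+ nonlinear CG with a strong Wolfe line search.
    Shown as the baseline only; the proposed method's modified secant
    condition (Li et al., 2007) is not reproduced here."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # satisfies strong Wolfe
        if alpha is None:                      # failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d)[0] or 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ restart rule
        x, g, d = x_new, g_new, -g_new + beta * d
    return x

# Example: minimize the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(pr_plus_cg(f, grad, [-1.2, 1.0]))  # converges near [1.0, 1.0]
```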



2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Stephen Whitelam ◽  
Viktor Selin ◽  
Sang-Won Park ◽  
Isaac Tamblyn

We show analytically that training a neural network by conditioned stochastic mutation, or neuroevolution, of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise. Averaged over independent realizations of the learning process, neuroevolution is equivalent to gradient descent on the loss function. We use numerical simulation to show that this correspondence can be observed for finite mutations, for shallow and deep neural networks. Our results provide a connection between two families of neural-network training methods that are usually considered to be fundamentally different.
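The correspondence can be illustrated numerically on a toy loss: a mutate-and-accept rule on one side, plain gradient descent with a small learning rate on the other. The sketch below uses an illustrative quadratic loss, mutation scale, and learning rate rather than the paper's networks and derived constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return np.sum(w ** 2)  # toy convex loss; stands in for a network's loss

def neuroevolution_step(w, sigma=0.05):
    """Conditioned stochastic mutation: keep a Gaussian weight mutation
    only if it does not increase the loss."""
    trial = w + sigma * rng.standard_normal(w.shape)
    return trial if loss(trial) <= loss(w) else w

def gd_step(w, lr, noise=0.0):
    """Gradient descent, optionally with Gaussian white noise as in the
    paper's correspondence."""
    grad = 2 * w  # analytic gradient of the toy loss
    return w - lr * grad + noise * rng.standard_normal(w.shape)

w_ne = w_gd = np.ones(10)
for _ in range(5000):
    w_ne = neuroevolution_step(w_ne)
    # lr chosen ad hoc to be small, standing in for the effective rate that
    # the paper derives in the small-mutation limit.
    w_gd = gd_step(w_gd, lr=1.25e-3)
print(f"neuroevolution loss: {loss(w_ne):.6f}, "
      f"gradient-descent loss: {loss(w_gd):.6f}")
```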



2017 ◽  
Vol 93 ◽  
pp. 219-229 ◽  
Author(s):  
Linnan Wang ◽  
Yi Yang ◽  
Renqiang Min ◽  
Srimat Chakradhar


Energies ◽  
2020 ◽  
Vol 13 (19) ◽  
pp. 5164
Author(s):  
Chin-Hsiang Cheng ◽  
Yu-Ting Lin

The present study develops a novel optimization method for designing a Stirling engine by combining a variable-step simplified conjugate gradient method (VSCGM) with a neural network training algorithm. Compared with existing gradient-based methods, such as the conjugate gradient method (CGM) and the simplified conjugate gradient method (SCGM), the VSCGM is a further modification, presented in this study, that greatly accelerates convergence while still allowing the objective function to be defined flexibly. By automatically adjusting the variable step size, the optimal design is reached more efficiently and accurately. The VSCGM therefore appears to be a promising alternative tool for a variety of engineering applications. In this study, optimization of a low-temperature-differential gamma-type Stirling engine was attempted as a test case. The optimizer was trained with the neural network algorithm on training data generated by three-dimensional computational fluid dynamics (CFD) computations. An optimal design for the influential parameters of the Stirling engine is obtained efficiently. Results show that the indicated work and thermal efficiency are increased with the present approach by 102.93% and 5.24%, respectively. The robustness of the VSCGM is tested with different sets of initial guesses.
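The abstract does not spell out the VSCGM update, so the following is only a schematic guess at the variable-step idea: take conjugate-gradient-style steps, grow the step size after an improvement, and shrink it after a failed move. The function names and the stand-in quadratic objective (in place of the CFD-trained neural surrogate of the engine's performance) are assumptions for illustration.

```python
import numpy as np

def vscgm_like(objective, grad, x0, step=0.1, grow=1.2, shrink=0.5,
               beta=0.7, max_iter=200, tol=1e-8):
    """Schematic conjugate-gradient-style search with a self-adjusting step.
    The true VSCGM update and its coupling to the CFD-trained neural
    surrogate are not specified in the abstract; this only illustrates
    growing the step after success and shrinking it after failure."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    f = objective(x)
    for _ in range(max_iter):
        trial = x + step * d / (np.linalg.norm(d) + 1e-12)
        f_trial = objective(trial)
        if f_trial < f:                # accept the move and accelerate
            x, f, step = trial, f_trial, step * grow
            g = grad(x)
            d = -g + beta * d          # simplified conjugate direction
        else:                          # reject, damp step, restart direction
            step *= shrink
            d = -g
        if step < tol:
            break
    return x, f

# Stand-in objective playing the role of the engine's (negated) indicated work.
obj = lambda x: (x[0] - 2.0)**2 + 5 * (x[1] + 1.0)**2
grd = lambda x: np.array([2 * (x[0] - 2.0), 10 * (x[1] + 1.0)])
print(vscgm_like(obj, grd, [0.0, 0.0]))  # approaches (2.0, -1.0)
```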


