Application of an automated machine learning-genetic algorithm (AutoML-GA) coupled with computational fluid dynamics simulations for rapid engine design optimization

2021 · pp. 146808742110234
Author(s): Opeoluwa Owoyele, Pinaki Pal, Alvaro Vidal Torreira, Daniel Probst, Matthew Shaxted, ...

In recent years, the use of machine learning-based surrogate models for computational fluid dynamics (CFD) simulations has emerged as a promising technique for reducing the computational cost of engine design optimization. However, such methods still suffer from drawbacks. One main disadvantage is that the default machine learning (ML) hyperparameters are often severely suboptimal for a given problem. This has often been addressed by manually trying out different hyperparameter settings, but that approach is ineffective when the hyperparameter space is high-dimensional. In addition, the amount of training data needed is not known a priori. To address these issues, the present work describes and validates an automated active learning approach, AutoML-GA, for surrogate-based optimization of internal combustion engines. In this approach, a Bayesian optimization technique is used to find the best machine learning hyperparameters based on an initial dataset obtained from a small number of CFD simulations. Subsequently, a genetic algorithm is employed to locate the design optimum on the ML surrogate surface. In the vicinity of the design optimum, the solution is refined by repeatedly running CFD simulations at the projected optima and adding the newly obtained data to the training dataset. It is demonstrated that AutoML-GA reaches a better optimum with fewer CFD simulations than the use of default hyperparameters. The proposed framework offers the advantage of being a more hands-off approach that can be readily used by researchers and engineers in industry who do not have extensive machine learning expertise.
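The loop described above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes scikit-learn's MLPRegressor as the surrogate, scikit-optimize's gp_minimize for the Bayesian hyperparameter search, a toy genetic algorithm, and a placeholder run_cfd() standing in for the engine CFD solver and merit function.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from skopt import gp_minimize
from skopt.space import Integer, Real

rng = np.random.default_rng(0)

def run_cfd(x):
    """Hypothetical stand-in for one engine CFD simulation (design -> merit)."""
    return -np.sum((x - 0.3) ** 2)  # placeholder merit function

# 1) Initial dataset from a small number of CFD runs (4 design parameters).
X = rng.uniform(0.0, 1.0, size=(30, 4))
y = np.array([run_cfd(x) for x in X])

# 2) Bayesian optimization of the surrogate's hyperparameters.
def hyperparam_loss(params):
    n_units, lr = params
    model = MLPRegressor(hidden_layer_sizes=(int(n_units),),
                         learning_rate_init=lr, max_iter=2000, random_state=0)
    return -cross_val_score(model, X, y, cv=3).mean()

res = gp_minimize(hyperparam_loss,
                  [Integer(8, 128), Real(1e-4, 1e-1, prior="log-uniform")],
                  n_calls=20, random_state=0)
best_units, best_lr = int(res.x[0]), res.x[1]

# 3) Active learning: GA optimum on the surrogate, refined by new CFD runs.
for iteration in range(10):
    surrogate = MLPRegressor(hidden_layer_sizes=(best_units,),
                             learning_rate_init=best_lr, max_iter=2000,
                             random_state=0).fit(X, y)
    pop = rng.uniform(0.0, 1.0, size=(50, 4))        # toy GA on the surrogate
    for gen in range(40):
        elite = pop[np.argsort(surrogate.predict(pop))[-10:]]
        a, b = rng.integers(0, 10, size=(2, 50))
        alpha = rng.uniform(size=(50, 1))
        pop = alpha * elite[a] + (1 - alpha) * elite[b]   # blend crossover
        pop = np.clip(pop + rng.normal(0, 0.02, size=pop.shape), 0.0, 1.0)
    x_star = pop[np.argmax(surrogate.predict(pop))]
    X = np.vstack([X, x_star])                       # refine near the optimum
    y = np.append(y, run_cfd(x_star))

print("best design:", X[np.argmax(y)], "merit:", y.max())
```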

2020 · Vol 143 (2)
Author(s): Jihad A. Badra, Fethi Khaled, Meng Tang, Yuanjiang Pei, Janardhan Kodavasal, ...

Abstract: Gasoline compression ignition (GCI) engines are considered an attractive alternative to traditional spark-ignition and diesel engines. In this work, a Machine Learning-Grid Gradient Ascent (ML-GGA) approach was developed to optimize the performance of internal combustion engines. ML offers a pathway to transform the complex physical processes that occur in a combustion engine into compact informational processes. The developed ML-GGA model was compared with a recently developed Machine Learning-Genetic Algorithm (ML-GA). Detailed investigations of the optimization solver parameters and of variable-limit extension were performed in the present ML-GGA model to improve the accuracy and robustness of the optimization process. Detailed descriptions of the procedures, optimization tools, and criteria that must be followed for a successful outcome are provided here. The developed ML-GGA approach was used to optimize the operating conditions (case 1) and the piston bowl design (case 2) of a heavy-duty diesel engine running on a gasoline fuel with a research octane number (RON) of 80. The ML-GGA approach yielded >2% improvement in the merit function compared with the optimum obtained from a thorough computational fluid dynamics (CFD) guided system optimization. The predictions from the ML-GGA approach were validated with engine CFD simulations. This study demonstrates the potential of ML-GGA to significantly reduce the time needed for optimization problems, without loss of accuracy compared with traditional approaches.
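As a rough illustration of the grid gradient ascent idea (multi-start local ascent on a cheap surrogate), the sketch below uses a hypothetical two-parameter merit function and a finite-difference gradient; the grid density, step size, and bounds are illustrative assumptions, not the paper's settings.

```python
import itertools
import numpy as np

def merit(x):
    """Hypothetical surrogate of the engine merit function (2 parameters)."""
    return -np.sum((x - np.array([0.6, 0.2])) ** 2)

def grad(f, x, eps=1e-5):
    """Central finite-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Launch a fixed-step ascent from every node of a coarse grid; keep the best.
best_x, best_m = None, -np.inf
for start in itertools.product(np.linspace(0.0, 1.0, 5), repeat=2):
    x = np.array(start)
    for _ in range(200):
        x = np.clip(x + 0.05 * grad(merit, x), 0.0, 1.0)
    if merit(x) > best_m:
        best_x, best_m = x, merit(x)

print("grid-gradient-ascent optimum:", best_x, "merit:", best_m)
```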


2018 · Vol 21 (7) · pp. 1251-1270
Author(s): Chaitanya Kavuri, Sage L Kokjohn

Past research has shown that multidimensional computational fluid dynamics modeling in combination with a genetic algorithm is an effective approach for optimizing internal combustion engine design. However, optimization studies performed with a detailed computational fluid dynamics model are time intensive, which limits the practical application of this approach. This study addresses that issue by using a machine learning approach, Gaussian process regression, in combination with computational fluid dynamics modeling to reduce the computational optimization time. An approach was proposed in which the Gaussian process regression model could be used instead of the computational fluid dynamics model to predict the outputs of the genetic algorithm optimization. In this approach, for every nth generation of the genetic algorithm, the data from the previous n − 1 generations was used to train the Gaussian process regression model. The approach was tested on an engine optimization study with five input parameters. When the genetic algorithm was run solely with computational fluid dynamics, the optimization took 50 days to complete. With the combined computational fluid dynamics and Gaussian process regression approach, the computational time was reduced by 62% and the optimization was completed in 19 days using the same amount of computational resources. Additional parametric studies were performed to investigate the impact of the genetic algorithm + Gaussian process regression parameters. Results showed that either reducing the initial dataset size or relaxing the error criterion increased the number of Gaussian process regression evaluations within the genetic algorithm. However, relaxing the error criterion was found to impact the model predictions negatively. The initial dataset size was found to have a negligible impact on the final optimum design. Finally, the potential of machine learning to further improve the optimization process was explored by using the Gaussian process regression model to check the robustness of the designs to operating parameter variations during the optimization. The genetic algorithm was repeated with this modified procedure, and adding the stability check resulted in a different, more reliable and stable optimum solution.
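A minimal sketch of the hybrid evaluation strategy described above, under assumptions: scikit-learn's GaussianProcessRegressor as the surrogate, its predictive standard deviation as the error criterion, and a placeholder run_cfd(). The sketch retrains the GPR each generation on all CFD data seen so far and substitutes its prediction for the CFD evaluation whenever the predictive uncertainty is below a threshold, a simplified variant of the paper's every-nth-generation scheme.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def run_cfd(x):
    """Hypothetical stand-in for one expensive CFD evaluation."""
    return -np.sum((x - 0.4) ** 2)

X_seen, y_seen = [], []                      # all CFD-evaluated designs so far
pop = rng.uniform(0.0, 1.0, size=(20, 5))    # 5 input parameters

for gen in range(15):
    gpr = None
    if len(X_seen) > 20:                     # enough history to train on
        gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
        gpr.fit(np.array(X_seen), np.array(y_seen))
    fitness = np.empty(len(pop))
    for i, x in enumerate(pop):
        if gpr is not None:
            mu, std = gpr.predict(x.reshape(1, -1), return_std=True)
            if std[0] < 0.05:                # error criterion (placeholder)
                fitness[i] = mu[0]           # cheap surrogate evaluation
                continue
        fitness[i] = run_cfd(x)              # expensive CFD evaluation
        X_seen.append(x)
        y_seen.append(fitness[i])
    elite = pop[np.argsort(fitness)[-5:]]    # toy GA update
    pop = elite[rng.integers(0, 5, size=20)] + rng.normal(0, 0.05, size=(20, 5))
    pop = np.clip(pop, 0.0, 1.0)

print("best CFD-verified design:", X_seen[int(np.argmax(y_seen))])
```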


2021 · Vol 11 (1)
Author(s): Meisam Babanezhad, Iman Behroyan, Ali Taghvaie Nakhjiri, Mashallah Rezakazemi, Azam Marjani, ...

Abstract: Computational fluid dynamics (CFD) simulation is a useful methodology for reducing the number of experiments and their associated costs. Although CFD can predict the hydro-thermal parameters of a fluid flow, it cannot by itself uncover the connections between those parameters. Machine learning by artificial intelligence (AI) algorithms has already shown the ability to learn from engineering data, yet no studies are available that deeply investigate the implicit connections between the variables resulting from CFD. The present investigation therefore couples mechanistic CFD with an AI algorithm: a genetic algorithm combined with a fuzzy inference system (GAFIS). Turbulent forced convection of Al2O3/water nanofluid in a heated tube is simulated for four inlet temperatures (305, 310, 315, and 320 K). GAFIS learns the node coordinates of the fluid, the inlet temperature, and the turbulent kinetic energy (TKE) as inputs, and the fluid temperature as output. The number of inputs, the population size, and the exponent are varied to find the best-performing configuration. At that configuration, a formula is developed relating the output (the nanofluid temperature) to the inputs (the node coordinates of the nanofluid, the inlet temperature, and the TKE). The results revealed that GAFIS performs best when the input number, the population size, and the exponent are 5, 30, and 3, respectively. Adding the turbulent kinetic energy as the fifth input increases the regression value from 0.95 to 0.98, meaning that the TKE helps GAFIS distinguish more of the differences among the learned data. The CFD and GAFIS predicted the same values of the nanofluid temperature.
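To make the GAFIS idea concrete, here is a compact, hypothetical sketch of a genetic-fuzzy regressor: a zero-order Sugeno fuzzy model with Gaussian memberships whose centres, widths, and consequents are tuned by a plain genetic algorithm. The training data are synthetic placeholders for the CFD field (node coordinates, inlet temperature, TKE -> fluid temperature); nothing here reproduces the authors' actual GAFIS settings.

```python
import numpy as np

rng = np.random.default_rng(2)
D, R = 5, 6                                   # 5 inputs, 6 fuzzy rules

# Synthetic training set standing in for the CFD temperature field.
X = rng.uniform(0.0, 1.0, size=(400, D))
y = 305 + 15 * X[:, 3] + 3 * np.sin(3 * X[:, 0]) + 2 * X[:, 4]

def fis_predict(genome, X):
    """Zero-order Sugeno output: firing-strength-weighted rule consequents."""
    m = genome[:R * D].reshape(R, D)                      # membership centres
    s = np.abs(genome[R * D:2 * R * D]).reshape(R, D) + 0.1   # widths > 0
    c = genome[2 * R * D:]                                # rule consequents
    w = np.exp(-((X[:, None, :] - m) ** 2 / (2 * s ** 2)).sum(axis=2))
    return (w * c).sum(axis=1) / (w.sum(axis=1) + 1e-12)

def fitness(genome):
    return -np.mean((fis_predict(genome, X) - y) ** 2)

# Plain GA: truncation selection, blend crossover, Gaussian mutation.
pop = rng.normal(0.0, 1.0, size=(60, 2 * R * D + R))
pop[:, 2 * R * D:] += y.mean()                # seed consequents near data mean
for gen in range(150):
    fit = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(fit)[-15:]]
    a, b = rng.integers(0, 15, size=(2, 60))
    alpha = rng.uniform(size=(60, 1))
    pop = alpha * elite[a] + (1 - alpha) * elite[b]
    pop += rng.normal(0, 0.05, size=pop.shape)

best = pop[np.argmax([fitness(g) for g in pop])]
print("correlation:", np.corrcoef(fis_predict(best, X), y)[0, 1])
```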


2008 · Vol 5 (28) · pp. 1291-1301
Author(s): Sam Van Wassenbergh, Peter Aerts

Most theoretical models of unsteady aquatic movement in organisms assume that the hydrodynamic force exerted on an organism's body can be approximated by a steady-state drag force plus an added-mass term. However, animals often perform explosively quick movements in which high accelerations are realized in a few milliseconds and are followed closely by rapid decelerations. For such highly unsteady movements, the accuracy of this modelling approach may be limited. This type of movement occurs during pivot feeding in pipefish, which abruptly rotate their head and snout towards prey. We used computational fluid dynamics (CFD) to validate a simple analytical model of cranial rotation in pipefish. The CFD simulations also allowed us to assess prey displacement by head rotation. CFD showed that the analytical model accurately calculates the forces exerted on the pipefish. Although the initial acceleration phase changes the flow patterns during the subsequent deceleration phase, the accuracy of the analytical model was not reduced during the deceleration phase. Our analysis also showed that prey are left approximately stationary despite the quickly approaching pipefish snout. This suggests that pivot-feeding fish need little or no suction to compensate for the effects of the flow induced by cranial rotation.
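The analytical model being validated combines quasi-steady drag with an added-mass term. The sketch below evaluates such a force model for an assumed strike kinematics; the coefficients, reference area, volume, and velocity profile are illustrative placeholders, not the pipefish values from the paper.

```python
import numpy as np

rho = 1000.0          # water density, kg/m^3
Cd, Ca = 1.0, 0.5     # drag and added-mass coefficients (assumed)
A, V = 1e-4, 1e-6     # reference area (m^2) and displaced volume (m^3), assumed

def hydro_force(u, dudt):
    """Quasi-steady drag plus added-mass reaction on an accelerating body."""
    drag = 0.5 * rho * Cd * A * u * np.abs(u)
    added_mass = rho * Ca * V * dudt
    return drag + added_mass

# A strike lasting a few milliseconds: rapid acceleration, then deceleration.
t = np.linspace(0.0, 0.008, 200)
u = 2.0 * np.sin(np.pi * t / 0.008)          # assumed snout-tip speed profile
dudt = np.gradient(u, t)

F = hydro_force(u, dudt)
print(f"peak hydrodynamic force estimate: {F.max():.3f} N")
```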


RBRH · 2021 · Vol 26
Author(s): Mayara Francisca da Silva, Fábio Veríssimo Gonçalves, Johannes Gérson Janzen

Abstract: Computational Fluid Dynamics (CFD) simulations of a leak in a pressurized pipe were undertaken to determine the empirical effects of hydraulic and geometric factors on the leakage flow rate. The results showed that pressure, leakage area, and leakage form significantly influenced the leakage flow rate, while pipe thickness and mean velocity did not. Regarding the interactions, the effect of pressure on the leakage flow rate depends on the leakage area, being stronger for larger leakage areas; the effects of leakage area and pressure on the leakage flow rate are more pronounced for longitudinal leaks than for circular ones. Finally, our results suggest that the equations that predict leakage flow rate in pressurized pipes may need revision.
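For context, the classical prediction these results call into question is the orifice equation, in which the leakage flow rate scales with the leak area and the square root of the pressure head. A short sketch, with an assumed discharge coefficient and illustrative leak sizes:

```python
import numpy as np

rho = 1000.0                    # water density, kg/m^3
Cd = 0.62                       # sharp-edged orifice discharge coefficient (assumed)

def leakage_flow(area_m2, gauge_pressure_pa):
    """Orifice-equation leakage flow rate, Q = Cd * A * sqrt(2 p / rho)."""
    return Cd * area_m2 * np.sqrt(2.0 * gauge_pressure_pa / rho)

# Pressure-area interaction: the same pressure change matters more for a
# larger leak, consistent with the interaction reported above.
for area in (1e-6, 1e-5):       # 1 mm^2 vs 10 mm^2 leak
    q_low, q_high = leakage_flow(area, 2e5), leakage_flow(area, 4e5)
    print(f"A={area:.0e} m^2: dQ = {q_high - q_low:.2e} m^3/s")
```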


2021 · Vol 2021 (6) · pp. 5421-5425
Author(s): Michal Richtar, Petra Muckova, Jan Famfulik, Jakub Smiraus, ...

The aim of this article is to present the possibilities of applying computational fluid dynamics (CFD) to the modelling of air flow in a combustion engine intake manifold as a function of airbox configuration. The flow in internal combustion engines is non-stationary, a specific type of flow in which the variables depend not only on position but also on time. The dimensions and geometry of the intake manifold strongly affect the amount of intake air. The main goal is to investigate how the position of the intake trumpet in the airbox impacts the filling of the combustion chamber. Furthermore, the effects of different distances between the trumpet neck and the airbox wall are compared.
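A parametric study of this kind typically wraps the CFD solver in a sweep over the geometric parameter of interest. The sketch below is purely illustrative: run_intake_cfd() is a hypothetical wrapper around the actual solver, and the distances and response values are placeholders.

```python
def run_intake_cfd(distance_mm):
    """Hypothetical wrapper around the CFD solver; returns trapped air
    mass per cycle (mg) for a given trumpet-to-wall distance."""
    return 450.0 - 0.05 * (distance_mm - 40.0) ** 2   # placeholder response

distances_mm = [10, 20, 30, 40, 50, 60]               # illustrative sweep
results = {d: run_intake_cfd(d) for d in distances_mm}
for d, m in results.items():
    print(f"distance {d:2d} mm -> trapped mass {m:6.1f} mg")
print("best distance:", max(results, key=results.get), "mm")
```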


2021
Author(s): Darren Jia

Diabolo is a popular game in which the object can be spun at speeds of up to 5000 rpm. This high spin velocity gives the diabolo the angular momentum it needs to remain stable. The shape of the diabolo generates an interesting air flow pattern, and the viscous air applies a resistive torque on the fast-spinning diabolo. Computational fluid dynamics (CFD) simulations show that the resistive torque has a nontrivial dependence on the angular speed of the diabolo, and that the geometric shape of the diabolo affects this dependence.
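One hedged way to characterize such a dependence is to fit a power law tau = k * omega^n to the simulated torques in log-log space. The data points below are hypothetical placeholders, not results from the paper:

```python
import numpy as np

# Hypothetical CFD samples: angular speed (rad/s) and resistive torque (N*m).
omega = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
tau = np.array([2e-5, 7e-5, 2.4e-4, 8.5e-4, 3.0e-3])

# Linear fit in log-log space recovers the power-law exponent and prefactor.
n, log_k = np.polyfit(np.log(omega), np.log(tau), 1)
print(f"fitted exponent n = {n:.2f}, prefactor k = {np.exp(log_k):.3e}")
```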

