Impact of the Partitioning Method on Multidimensional Adaptive-Chemistry Simulations

Energies, 2020, Vol. 13 (10), pp. 2567
Author(s): Giuseppe D’Alessio, Alberto Cuoci, Gianmarco Aversano, Mauro Bracconi, Alessandro Stagni, et al.

The large number of species included in detailed kinetic mechanisms represents a serious challenge for numerical simulations of reactive flows, as it can lead to large CPU times even for relatively simple systems. One possible solution to mitigate the computational cost of detailed numerical simulations, without sacrificing their accuracy, is to adopt a Sample-Partitioning Adaptive Reduced Chemistry (SPARC) approach. The first step of this approach is the partitioning of the thermochemical space for the generation of locally reduced mechanisms, a task that is often challenging because of the high dimensionality and strong non-linearity of reacting systems. Moreover, this step has a non-negligible impact on the overall approach, as it affects the mechanisms’ level of chemical reduction and, consequently, the accuracy and the computational speed-up of the adaptive simulation. In this work, two different clustering algorithms for the partitioning of the thermochemical space were evaluated by means of an adaptive CFD simulation of a 2D unsteady laminar flame of a nitrogen-diluted methane stream in air. The first is a hybrid approach coupling Self-Organizing Maps with K-Means (SKM), and the second is Local Principal Component Analysis (LPCA). Comparable results in terms of mechanism reduction (i.e., the mean number of species in the reduced mechanisms) and simulation accuracy were obtained with both methods, but LPCA showed superior performance in terms of the uniformity of the reduced mechanisms and the speed-up of the adaptive simulation. Moreover, the local algorithm showed a lower sensitivity to the training dataset size in terms of the CPU time required for convergence, which also makes it preferable to SKM for clustering massive datasets.
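For orientation only, the following is a minimal Python sketch of a Local-PCA-style partitioning of state-space samples, in the spirit of the LPCA clustering mentioned above. It is not the authors’ SPARC implementation; the cluster count, number of retained components, and synthetic data are placeholders.

```python
import numpy as np

def local_pca_partition(X, n_clusters=4, n_components=2, n_iter=30, seed=0):
    """Toy Local PCA partitioning: assign each sample to the cluster whose
    low-dimensional PCA basis reconstructs it with the smallest error."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=X.shape[0])
    for _ in range(n_iter):
        errors = np.full((X.shape[0], n_clusters), np.inf)
        for k in range(n_clusters):
            Xk = X[labels == k]
            if Xk.shape[0] <= n_components:
                continue
            mean = Xk.mean(axis=0)
            # Top principal directions of the cluster via SVD.
            _, _, Vt = np.linalg.svd(Xk - mean, full_matrices=False)
            A = Vt[:n_components]                # (q, d) local basis
            R = (X - mean) @ A.T @ A + mean      # project-and-reconstruct all points
            errors[:, k] = np.sum((X - R) ** 2, axis=1)
        new_labels = errors.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Synthetic stand-in for a thermochemical dataset (samples x state variables).
X = np.random.default_rng(1).normal(size=(2000, 10))
labels = local_pca_partition(X)
print(np.bincount(labels, minlength=4))
```

In an adaptive-chemistry setting, each resulting partition would then be associated with its own locally reduced mechanism; that step is outside this sketch.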

2020, Vol. 20 (6), pp. 116-125
Author(s): Nikolay Shegunov, Oleg Iliev

AbstractMultiLevel Monte Carlo (MLMC) attracts great interest for numerical simulations of Stochastic Partial Differential Equations (SPDEs), due to its superiority over the standard Monte Carlo (MC) approach. MLMC combines in a proper manner many cheap fast simulations with few slow and expensive ones, the variance is reduced, and a significant speed up is achieved. Simulations with MC/MLMC consist of three main components: generating random fields, solving deterministic problem and reduction of the variance. Each part is subject to a different degree of parallelism. Compared to the classical MC, MLMC introduces “levels” on which the sampling is done. These levels have different computational cost, thus, efficiently utilizing the parallel resources becomes a non-trivial problem. The main focus of this paper is the parallelization of the MLMC Algorithm.
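As a rough illustration of the level structure described above (not the authors’ code, and serial rather than parallel), here is a minimal MLMC sketch for a toy SDE: each level uses a finer time discretization, fine and coarse paths share the same random increments, and many cheap coarse samples are combined with a few expensive fine ones. The SDE, level count, and sample counts are placeholders.

```python
import numpy as np

def level_sample(level, rng, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """One MLMC correction sample Y_l = P_fine - P_coarse for a GBM terminal value.
    Fine and coarse Euler-Maruyama paths share the same Brownian increments."""
    n_fine = 2 ** level
    dt = T / n_fine
    dW = np.sqrt(dt) * rng.standard_normal(n_fine)
    x_fine = x0
    for i in range(n_fine):
        x_fine += mu * x_fine * dt + sigma * x_fine * dW[i]
    if level == 0:
        return x_fine
    x_coarse, dt_c = x0, 2 * dt
    for i in range(0, n_fine, 2):
        x_coarse += mu * x_coarse * dt_c + sigma * x_coarse * (dW[i] + dW[i + 1])
    return x_fine - x_coarse

def mlmc_estimate(n_samples_per_level, seed=0):
    """Sum of per-level sample means; each level is embarrassingly parallel,
    but the levels have very different costs, which is the load-balancing issue."""
    rng = np.random.default_rng(seed)
    return sum(
        np.mean([level_sample(l, rng) for _ in range(n)])
        for l, n in enumerate(n_samples_per_level)
    )

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate([20000, 5000, 1200, 300]))
```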


2018, Vol. 2018, pp. 1-10
Author(s): A. P. Cédola, D. Kim, A. Tibaldi, M. Tang, A. Khalili, et al.

This paper presents an experimental and theoretical study of the impact of doping and recombination mechanisms on quantum dot solar cells based on the InAs/GaAs system. Numerical simulations are built on a hybrid approach that embeds the quantum features of the charge-transfer processes between the nanostructured material and the bulk host material in a classical, macroscopic continuum transport model. This provides a detailed understanding of the several physical mechanisms affecting the photovoltaic conversion efficiency and yields a quantitatively accurate picture of real devices at a reasonable computational cost. Experimental results demonstrate that QD doping provides a remarkable increase in the solar cell’s open-circuit voltage, which the numerical simulations explain as the result of reduced recombination loss through quantum dots and defects.


Author(s): Jimmy Ming-Tai Wu, Qian Teng, Shahab Tayeb, Jerry Chun-Wei Lin

Abstract. High average-utility itemset mining (HAUIM) was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing satisfactory and interesting patterns. In practical applications, databases change dynamically as insertion/deletion operations are performed. Several works were designed to handle the insertion process, but fewer studies have focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that utilizes the pre-large concept on HAUIM for handling transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans when the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which speeds up the computation. The experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared to the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
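To make the average-utility measure and the pre-large idea concrete, here is a toy Python sketch: it computes the average utility of small itemsets in a tiny transaction database and classifies them against an upper and a lower threshold. It is not the PRE-HAUI-DEL algorithm, and the database, thresholds, and the two upper bounds of the paper are not reproduced.

```python
from itertools import combinations

# Toy transaction database: each transaction maps item -> utility (e.g. quantity x profit).
DB = [
    {"a": 4, "b": 2, "c": 6},
    {"a": 3, "c": 1, "d": 5},
    {"b": 2, "c": 3, "d": 2},
    {"a": 5, "b": 1},
]

def average_utility(itemset, db):
    """au(X) = sum over transactions containing X of utility(X, T) / |X|."""
    total = 0.0
    for t in db:
        if all(i in t for i in itemset):
            total += sum(t[i] for i in itemset) / len(itemset)
    return total

def classify(itemset, db, upper_ratio=0.30, lower_ratio=0.20):
    """Pre-large-style classification against two thresholds on total DB utility."""
    total_utility = sum(sum(t.values()) for t in db)
    au = average_utility(itemset, db)
    if au >= upper_ratio * total_utility:
        return "large"
    if au >= lower_ratio * total_utility:
        return "pre-large"   # buffered: re-examined only after enough deletions/insertions
    return "small"

for size in (1, 2):
    for itemset in combinations(sorted({i for t in DB for i in t}), size):
        print(itemset, round(average_utility(itemset, DB), 1), classify(itemset, DB))
```

The pre-large band between the two thresholds is what lets maintenance algorithms delay full database rescans until enough transactions have been deleted.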


2017, Vol. 2017, pp. 1-10
Author(s): Hsuan-Ming Huang, Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies show that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
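Two of the building blocks named above, soft-threshold filtering and the fast iterative shrinkage thresholding algorithm (FISTA), can be shown in isolation. The sketch below applies generic FISTA with soft thresholding to a toy sparse-recovery problem; it is not the paper’s TDM-STF/OSTR reconstruction, and the operator, regularization weight, and data are placeholders.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-threshold filtering: shrink values toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam=0.1, n_iter=200):
    """Generic FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.
    The momentum step (t_k update) is what accelerates plain iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 8, replace=False)] = rng.normal(size=8)
b = A @ x_true
x_hat = fista(A, b, lam=0.05)
print("coefficients above 1e-2:", np.count_nonzero(np.abs(x_hat) > 1e-2))
```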


2018, Vol. 2018, pp. 1-12
Author(s): Yun-Hua Wu, Lin-Lin Ge, Feng Wang, Bing Hua, Zhi-Ming Chen, et al.

In order to satisfy the real-time requirement of spacecraft autonomous navigation using natural landmarks, a novel algorithm called CSA-SURF (chessboard segmentation algorithm and speeded up robust features) is proposed to improve the speed of the image registration process without loss of repeatability performance. It is a combination of the chessboard segmentation algorithm and SURF. Here, SURF is used to extract the features from satellite images because of its scale- and rotation-invariant properties and low computational cost. CSA is based on image segmentation technology and aims to find representative blocks, which are allocated to different tasks to speed up the image registration process. To illustrate the advantages of the proposed algorithm, PCA-SURF, the combination of principal component analysis and SURF, is also analyzed in this paper for comparison. Furthermore, the random sample consensus (RANSAC) algorithm is applied to eliminate false matches for further accuracy improvement. The simulation results show that the proposed strategy obtains good results, especially under scaling and rotation variation. Moreover, compared with the SURF algorithm, CSA-SURF decreases the extraction time by 50% and the matching time by 90% without losing repeatability performance. The proposed method has been demonstrated to be an alternative approach for image registration in spacecraft autonomous navigation using natural landmarks.
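The feature-matching plus RANSAC stage referred to above can be sketched with standard OpenCV calls. This is not the CSA-SURF pipeline: the chessboard segmentation step is omitted, ORB stands in for SURF (which ships only with opencv-contrib), and the two "satellite views" are a synthetic texture and a shifted copy of it, used purely as placeholders.

```python
import cv2
import numpy as np

# Synthetic stand-ins for two satellite views: a random texture and a shifted copy.
rng = np.random.default_rng(0)
img1 = (rng.random((480, 640)) * 255).astype(np.uint8)
img2 = np.roll(img1, shift=(-10, 15), axis=(0, 1))

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching for binary ORB descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# RANSAC rejects false matches while estimating the geometric transform.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

print("matches:", len(matches), "RANSAC inliers:", int(inlier_mask.sum()))
```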


Author(s): Franz Pichler, Gundolf Haase

A finite element code is developed in which all of the computationally expensive steps are performed on a graphics processing unit via the THRUST and PARALUTION libraries. The code focuses on the simulation of transient problems, where the repeated computations per time step dominate the computational cost. It is used to solve partial and ordinary differential equations as they arise in thermal-runaway simulations of automotive batteries. The speed-up obtained by utilizing the graphics processing unit for every critical step is compared against the single-core and multi-threading solutions that are also supported by the chosen libraries. In this way, a high total speed-up on the graphics processing unit is achieved without the need to program a single classical Compute Unified Device Architecture (CUDA) kernel.
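The authors’ code is C++ built on THRUST and PARALUTION; purely as a loose, language-shifted illustration of the same idea, keeping the repeated per-time-step solve on the GPU through a library rather than hand-written kernels, here is a minimal CuPy sketch. CuPy is an assumed stand-in library, not one used in the paper, and the 1D implicit heat equation is a placeholder problem.

```python
import cupy as cp
from cupyx.scipy import sparse
from cupyx.scipy.sparse.linalg import cg

# Implicit-Euler heat equation step: (I - dt*L) u_new = u_old, solved on the GPU.
n, dt = 100_000, 1e-3
main = cp.full(n, 1.0 + 2.0 * dt)
off = cp.full(n - 1, -dt)
A = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

u = cp.exp(-((cp.linspace(0.0, 1.0, n) - 0.5) ** 2) / 0.01)   # initial temperature
for _ in range(50):          # repeated per-time-step solves dominate the cost
    u, info = cg(A, u)
print(float(u.max()))
```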


2021, Vol. 28 (2), pp. 163-182
Author(s): José L. Simancas-García, Kemel George-González

Shannon’s sampling theorem is one of the most important results of modern signal theory. It describes the reconstruction of any band-limited signal from a finite number of its samples. On the other hand, although less well known, there is the discrete sampling theorem, proved by Cooley while he was working on the development of an algorithm to speed up the calculation of the discrete Fourier transform. Cooley showed that a sampled signal can be resampled by selecting a smaller number of samples, which reduces the computational cost. It is then possible to reconstruct the original sampled signal using a reverse process. In principle, the two theorems are not related. However, in this paper we show that in the context of Non-Standard Mathematical Analysis (NSA) and the hyperreal number system R, the two theorems are equivalent. The difference between them becomes a matter of scale. With the scale changes that the hyperreal number system allows, the discrete variables and functions become continuous, and Shannon’s sampling theorem emerges from the discrete sampling theorem.
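A small numerical illustration of the discrete sampling theorem the abstract refers to (not taken from the paper, which works in the hyperreal setting): a band-limited N-point signal is resampled to M points and then reconstructed exactly by un-aliasing its short DFT.

```python
import numpy as np

N, M = 64, 8                      # N original samples, keep only M of them
L = N // M                        # subsampling factor

# A discrete signal band-limited to |k| <= 3 (7 DFT bins, which fits into M = 8).
n = np.arange(N)
x = 1.0 + 2.0 * np.cos(2 * np.pi * 1 * n / N) + 0.5 * np.sin(2 * np.pi * 3 * n / N)

y = x[::L]                        # Cooley-style resampling: keep every L-th sample
Y = np.fft.fft(y)

# Reverse process: the band-limited spectrum is recovered from the M-point DFT,
# since each retained frequency has a unique residue modulo M (no alias overlap).
X_rec = np.zeros(N, dtype=complex)
for k in list(range(0, 4)) + list(range(N - 3, N)):   # frequencies 0..3 and -3..-1
    X_rec[k] = (N / M) * Y[k % M]
x_rec = np.fft.ifft(X_rec).real

print("max reconstruction error:", np.abs(x - x_rec).max())
```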


2016
Author(s): Andrew Dawson, Peter Düben

Abstract. This paper describes the rpe library, which can emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the results of their simulations without having to make extensive code changes or port the model onto specialised hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program that allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application while still achieving results of acceptable quality, computational cost can be reduced, since lower numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the weather and climate modelling community, but the software could be used with numerical simulations from other domains.
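The core idea, truncating the significand of double-precision numbers to emulate a reduced-precision format in software, can be mimicked in a few lines. The sketch below is a rough Python analogue of that idea, not the rpe Fortran API; the bit counts and test values are placeholders.

```python
import numpy as np

def truncate_significand(x, sbits):
    """Emulate reduced precision by zeroing all but `sbits` significand bits of
    IEEE-754 double-precision numbers (round-toward-zero for simplicity)."""
    x = np.asarray(x, dtype=np.float64)
    bits = x.view(np.uint64)
    keep = np.uint64(0xFFFFFFFFFFFFFFFF) << np.uint64(52 - sbits)
    return (bits & keep).view(np.float64)

x = np.linspace(0.0, 1.0, 5)
for sbits in (52, 23, 10):
    y = truncate_significand(np.pi * x, sbits)
    print(sbits, "significand bits -> max abs error", np.abs(y - np.pi * x).max())
```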


2011, Vol. 11 (04), pp. 571-587
Author(s): William Robson Schwartz, Helio Pedrini

Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the fact that natural scenes present self-similarity to remove redundancy and obtain high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, due to the need to search for highly similar regions in the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. The use of robust features provides more discriminative and representative information for regions of the image. When the regions are better represented, the search for similar parts of the image can be restricted to the most likely matching candidates, which reduces the computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while keeping high compression rates and reconstruction quality.
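The pruning strategy described above, comparing compact block descriptors before running the expensive block-matching step, can be sketched as follows. This is a toy illustration, not the paper’s method: the hand-made 4-value descriptor stands in for the robust descriptors, and the image, block sizes, and candidate count are placeholders.

```python
import numpy as np

def block_descriptors(img, size):
    """A simple per-block descriptor (mean, std, horizontal/vertical gradient energy)
    standing in for the robust descriptors used to prune the fractal search."""
    h, w = img.shape
    blocks, descs = [], []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            b = img[i:i + size, j:j + size]
            gx, gy = np.diff(b, axis=1), np.diff(b, axis=0)
            descs.append([b.mean(), b.std(), np.abs(gx).mean(), np.abs(gy).mean()])
            blocks.append(b)
    return np.array(blocks), np.array(descs)

rng = np.random.default_rng(0)
img = rng.random((128, 128))

range_blocks, range_desc = block_descriptors(img, 8)     # blocks to be encoded
domain_blocks, domain_desc = block_descriptors(img, 16)  # candidate source blocks

# For each range block, search only the k domain blocks closest in descriptor space
# instead of exhaustively comparing against every domain block.
k = 5
for r_desc in range_desc[:3]:            # first few range blocks as a demo
    d = np.linalg.norm(domain_desc - r_desc, axis=1)
    candidates = np.argsort(d)[:k]
    print("candidate domain blocks:", candidates)
```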

