Computational Effort vs. Accuracy Tradeoff in Uncertainty Quantification

Author(s):  
Joshua G. Mullins ◽  
Sankaran Mahadevan
2019 ◽  
Vol 67 (4) ◽  
pp. 283-303


Author(s):  
Chettapong Janya-anurak ◽  
Thomas Bernard ◽  
Jürgen Beyerer

Abstract Many industrial and environmental processes are characterized as complex spatio-temporal systems. Such systems, known as distributed parameter systems (DPSs), are usually highly complex, and it is difficult to establish the relation between model inputs, model outputs and model parameters. Moreover, the solutions of physics-based models commonly deviate from the measurements. In this work, appropriate Uncertainty Quantification (UQ) approaches are selected and combined systematically to analyze and identify such systems. Two main challenges arise when applying UQ approaches to nonlinear distributed parameter systems: (1) how the uncertainties are modeled and (2) the computational effort, as conventional methods require numerous evaluations of the model to compute the probability density function of the response. This paper presents a framework that addresses both issues. Within the Bayesian framework, incomplete knowledge about the system is treated as uncertainty of the system. The uncertainties are represented by random variables, whose probability density functions are obtained from the available knowledge of the parameters using the Principle of Maximum Entropy. The generalized Polynomial Chaos (gPC) expansion is employed to reduce the computational effort. The proposed framework, combining gPC with Bayesian UQ, is capable of analyzing systems systematically and of reducing the disagreement between model predictions and measurements of the real processes until user-defined performance criteria are fulfilled. The efficiency of the framework is assessed by applying it to a benchmark model (neutron diffusion equation) and to a model of a complex rheological forming process. These applications illustrate that the framework is capable of systematically analyzing the system and optimally calibrating the model parameters.
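The computational saving that gPC offers can be sketched on a toy problem. The one-dimensional model, the truncation order and the quadrature rule below are all illustrative assumptions, not the paper's implementation: a Hermite chaos expansion of a cheap stand-in model, with mean and variance read directly off the coefficients.

```python
import math

import numpy as np

# Toy stand-in for an expensive simulation, with a single Gaussian input
# xi ~ N(0, 1). (Model, order and quadrature rule are illustrative choices.)
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Probabilists' Hermite polynomials He_n, orthogonal under the standard
# normal weight with E[He_n^2] = n!.
def hermite_he(n, x):
    h_prev, h = np.ones_like(x), x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev  # He_{k+1} = x He_k - k He_{k-1}
    return h

P = 6  # truncation order of the gPC expansion

# Gauss-Hermite quadrature (physicists' weight exp(-x^2)), rescaled so the
# sum approximates expectations under N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(40)
xi_q = np.sqrt(2.0) * nodes
w_q = weights / np.sqrt(np.pi)

# Spectral projection: c_n = E[model(xi) He_n(xi)] / n!
coeffs = np.array([np.sum(w_q * model(xi_q) * hermite_he(n, xi_q))
                   / math.factorial(n) for n in range(P + 1)])

# Once the coefficients are known, statistics are virtually free -- this is
# where the savings over repeated model evaluations come from.
mean_gpc = coeffs[0]
var_gpc = sum(math.factorial(n) * coeffs[n] ** 2 for n in range(1, P + 1))
print(mean_gpc, var_gpc)
```

For this model the exact mean is exp(0.045) + 0.1, and the low-order expansion already matches it to quadrature precision; in a real DPS setting each quadrature point would be one forward solve of the physics model.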


2011 ◽  
Vol 10 (1) ◽  
pp. 140-160 ◽  
Author(s):  
Akil Narayan ◽  
Dongbin Xiu

Abstract In this work we consider a general notion of distributional sensitivity, which measures the variation in solutions of a given physical/mathematical system with respect to variation of the probability distribution of the inputs. This is distinctively different from classical sensitivity analysis, which studies the changes of solutions with respect to the values of the inputs. Measuring the sensitivity of outputs with respect to probability distributions is a well-studied concept in related disciplines. We adapt these ideas to present a quantitative framework, in the context of uncertainty quantification, for measuring this kind of sensitivity, together with a set of efficient algorithms to approximate the distributional sensitivity numerically. A remarkable feature of the algorithms is that they incur no computational effort beyond a one-time stochastic solve. Therefore, an accurate stochastic computation with respect to a prior input distribution is needed only once, and the ensuing distributional sensitivity computation for different input distributions is a post-processing step. We prove that an accurate numerical model leads to accurate calculations of this sensitivity, which applies not only to slowly converging Monte Carlo estimates but also to exponentially convergent spectral approximations. We provide computational examples to demonstrate the ease of applicability and to verify the convergence claims.
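The "one-time solve, then post-process for new distributions" idea can be sketched with plain importance reweighting of stored samples. This is a simplification: the paper's algorithms are built on spectral/gPC machinery, and the model, distributions and perturbation below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a costly simulation with one random input. (The model and
# the Gaussian input distributions below are illustrative assumptions.)
def model(x):
    return np.sin(x) + 0.5 * x**2

# One-time stochastic solve: sample the prior input distribution and store
# the model outputs. This is the only place the model is evaluated.
prior_mu, prior_sigma = 0.0, 1.0
x = rng.normal(prior_mu, prior_sigma, 100_000)
y = model(x)

def normal_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Post-processing: the output mean under a *different* input distribution
# is a self-normalized importance-weighted average of the stored samples.
def mean_under(mu, sigma):
    w = normal_pdf(x, mu, sigma) / normal_pdf(x, prior_mu, prior_sigma)
    return np.sum(w * y) / np.sum(w)

# Distributional sensitivity: finite difference of the output mean with
# respect to a shift of the input mean -- no further model runs needed.
eps = 0.05
sens = (mean_under(eps, 1.0) - mean_under(-eps, 1.0)) / (2.0 * eps)
print(sens)  # analytic value for this toy model is exp(-1/2) ~ 0.607
```

Every perturbed distribution reuses the same stored pairs (x, y), which mirrors the abstract's claim that sensitivity to the input distribution comes at post-processing cost only.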


Author(s):  
Alessandra Cuneo ◽  
Andrea Giugno ◽  
Luca Mantelli ◽  
Alberto Traverso

Abstract Pressurized solid oxide fuel cell (SOFC) systems are a sustainable opportunity for improvement over conventional systems, featuring high electric efficiency, potential for cogeneration applications, and low carbon emissions. Such systems are usually analyzed under deterministic conditions. However, it is widely demonstrated that they are significantly affected by uncertainties, both in component performance and in operating parameters. This paper studies the propagation of uncertainties related both to the fuel cell (ohmic losses, anode ejector diameter, and fuel gas composition) and to the gas turbine cycle characteristics (compressor and turbine efficiencies, recuperator pressure losses). The analysis is carried out on an innovative hybrid system layout, in which a turbocharger is used to pressurize the fuel cell, promising better cost effectiveness than a microturbine-based hybrid system at small scales. Due to the plant complexity and the high computational effort required by uncertainty quantification methodologies, a response surface (RS) is created. To evaluate the impact of the aforementioned uncertainties on the relevant system outputs, such as overall efficiency and net electrical power, the Monte Carlo method is applied to the RS. Particular attention is focused on the impact of uncertainties on the opening of the turbocharger wastegate valve, which is aimed at satisfying the fuel cell constraints at each operating condition.
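The RS-plus-Monte-Carlo workflow can be sketched as follows. This is a minimal illustration, not the paper's plant model: the quadratic "plant", the input names (eta_c, r_ohm) and their distributions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the expensive plant model: "efficiency" as a function of two
# uncertain inputs. Both the functional form and the input names/ranges are
# illustrative assumptions, not the paper's model.
def plant_model(eta_c, r_ohm):
    return (0.55 + 0.3 * (eta_c - 0.85) - 2.0 * (r_ohm - 0.010)
            - 5.0 * (eta_c - 0.85) ** 2)

# 1) A small design of experiments: the only expensive model evaluations.
eta_c_doe = rng.uniform(0.80, 0.90, 30)
r_ohm_doe = rng.uniform(0.005, 0.015, 30)
y_doe = plant_model(eta_c_doe, r_ohm_doe)

# 2) Fit a quadratic response surface by least squares (inputs centred and
# scaled for a well-conditioned fit).
def basis(eta_c, r_ohm):
    d1 = (eta_c - 0.85) / 0.05
    d2 = (r_ohm - 0.010) / 0.005
    return np.column_stack([np.ones_like(d1), d1, d2, d1**2, d2**2, d1 * d2])

coef, *_ = np.linalg.lstsq(basis(eta_c_doe, r_ohm_doe), y_doe, rcond=None)

# 3) Monte Carlo on the cheap surrogate: propagate the input uncertainty
# through the response surface instead of the plant model.
eta_c_mc = rng.normal(0.85, 0.010, 200_000)
r_ohm_mc = rng.normal(0.010, 0.001, 200_000)
y_mc = basis(eta_c_mc, r_ohm_mc) @ coef
print(y_mc.mean(), y_mc.std())  # output uncertainty at negligible cost
```

The design choice is the usual one for expensive plant simulations: a few dozen model runs buy a surrogate on which hundreds of thousands of Monte Carlo samples cost essentially nothing.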


Author(s):  
J. Gjønnes ◽  
N. Bøe ◽  
K. Gjønnes

Structure information of high precision can be extracted from intensity details in convergent-beam patterns like the one reproduced in Fig. 1. From low-order reflections of small-unit-cell crystals, bonding charges, ionicities and atomic parameters can be derived (Zuo, Spence and O'Keefe, 1988; Zuo, Spence and Høier, 1989; Gjønnes, Matsuhata and Taftø, 1989), but extension to larger unit cells may seem difficult. The disks must then be reduced in size in order to avoid overlap, the calculations become more complex, and the intensity features are often less distinct. Several avenues may then be explored: increased computational effort in order to handle the necessary many-parameter dynamical calculations; use of zone-axis intensities at symmetry positions within the CBED disks, as in Fig. 2; or measurement of integrated intensity across K-line segments. In the last case, measurable quantities that are also well defined from a theoretical viewpoint can be related to a two-beam-like expression for the intensity profile. With an effective Fourier potential U_g equated to the gap at the dispersion surface, this intensity can be integrated across the line, with kinematical and dynamical limits proportional to U_g^2 and U_g at low and high thickness, respectively (Blackman, 1939).
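The thickness limits cited from Blackman (1939) correspond to the standard two-beam integrated-intensity result; in commonly used notation (assumed here, since the original expression did not survive extraction):

```latex
% Ratio of dynamical to kinematical integrated intensity across the line,
% with A_g proportional to the effective potential U_g and thickness t.
\[
  \frac{I_g^{\mathrm{dyn}}}{I_g^{\mathrm{kin}}}
    = \frac{1}{A_g}\int_0^{A_g} J_0(2x)\,\mathrm{d}x,
  \qquad A_g \propto U_g\, t .
\]
% Thin crystal: the integral approaches A_g, so the integrated intensity is
% kinematical and proportional to U_g^2. Thick crystal: the integral tends
% to 1/2, giving a dynamical limit proportional to U_g.
```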


2010 ◽  
Vol 31 (3) ◽  
pp. 130-137 ◽  
Author(s):  
Hagen C. Flehmig ◽  
Michael B. Steinborn ◽  
Karl Westhoff ◽  
Robert Langner

Previous research suggests a relationship between neuroticism (N) and the speed-accuracy tradeoff in speeded performance: High-N individuals were observed performing less efficiently than low-N individuals and compensatorily overemphasizing response speed at the expense of accuracy. This study examined N-related performance differences in the serial mental addition and comparison task (SMACT) in 99 individuals, comparing several performance measures (i.e., response speed, accuracy, and variability), retest reliability, and practice effects. N was negatively correlated with mean reaction time but positively correlated with error percentage, indicating that high-N individuals tended to be faster but less accurate in their performance than low-N individuals. The strengthening of the relationship after practice demonstrated the reliability of the findings. There was, however, no relationship between N and distractibility (assessed via measures of reaction time variability). Our main findings are in line with the processing efficiency theory, extending the relationship between N and working style to sustained self-paced speeded mental addition.


2014 ◽  
Author(s):  
Babette Rae ◽  
Scott Brown ◽  
Maxim Bushmakin ◽  
Mark Rubin

1997 ◽  
Author(s):  
Jeffry S. Kellogg ◽  
Xiangen Hu ◽  
William Marks
