Reliability analysis for high-lift device based on Copula function and Bayesian updating

2018 ◽  
Vol 10 (12) ◽  
pp. 168781401881929 ◽  
Author(s):  
Jiazhen Feng ◽  
Jianguo Zhang ◽  
Jiwei Qiu

To improve the accuracy of reliability analysis for the aircraft high-lift device, an approach based on Copula function theory and Bayesian updating is proposed. Because the correlation among the random variables influences the updating process, choosing a reasonable prior joint distribution and likelihood function is crucial. Under incomplete probability information, analytic expressions for the prior joint distribution and the likelihood function of the correlated random variables are derived through the Copula function. The posterior joint distribution is then obtained by Bayesian updating, and the reliability of the lifting device is calculated from this posterior distribution. The case analysis shows that the reliability results based on the proposed approach are more accurate, and more consistent with the actual situation, than results based on the assumption that the random variables are independent.
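As a rough illustration of the idea (not the authors' implementation), the sketch below builds a prior joint PDF for two correlated variables from a Gaussian copula and normal marginals via Sklar's theorem, then performs a grid-based Bayesian update. The marginals, the copula correlation and the likelihood are all hypothetical.

```python
import numpy as np
from scipy import stats

def gaussian_copula_density(u, v, rho):
    # density of the bivariate Gaussian copula with correlation rho
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    biv = stats.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    return biv.pdf(np.dstack([x, y])) / (stats.norm.pdf(x) * stats.norm.pdf(y))

def joint_prior_pdf(x, y, rho, m1, m2):
    # prior joint PDF of two correlated variables via Sklar's theorem:
    #   f(x, y) = c(F1(x), F2(y)) * f1(x) * f2(y)
    return gaussian_copula_density(m1.cdf(x), m2.cdf(y), rho) * m1.pdf(x) * m2.pdf(y)

# hypothetical marginals and copula correlation
m1, m2, rho = stats.norm(10, 2), stats.norm(5, 1), 0.6

# grid-based Bayesian update with a hypothetical Gaussian likelihood
xs, ys = np.linspace(2, 18, 200), np.linspace(1, 9, 200)
X, Y = np.meshgrid(xs, ys)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]

prior = joint_prior_pdf(X, Y, rho, m1, m2)
likelihood = stats.norm(11, 1).pdf(X) * stats.norm(4.5, 0.5).pdf(Y)
posterior = prior * likelihood
posterior /= posterior.sum() * dx * dy   # normalise on the grid
```

A reliability estimate would then integrate this posterior over the failure domain of the limit state function; here the posterior is simply left normalised on the grid.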

Author(s):  
Dale J. Poirier

This article is concerned with the foundations of statistical inference as expressed in representation theorems. It shows how different assumptions about the joint distribution of the observable data lead to different parametric models defined by a prior and a likelihood function; parametric models thus arise as an implication of assumptions about observables. The article presents many extensions and describes the subjectivist attitude that underlies much of Bayesian econometrics, an attitude closely tied to the subjective interpretation of probability. It discusses exchangeability as the foundation for Bayesian econometrics, serving as the basis for further extensions that incorporate heterogeneity and dependence across observations. It also discusses representation theorems involving random variables more complicated than Bernoulli random variables. Such parametric models are not true properties of reality but are useful for making inferences about future observables.


Algorithms ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 229
Author(s):  
Fangyi Li ◽  
Yufei Yan ◽  
Jianhua Rong ◽  
Houyao Zhu

In practical engineering, due to the lack of information, it is impossible to accurately determine the distributions of all variables. Therefore, time-variant reliability problems with both random and interval variables may be encountered. However, this kind of problem usually involves a complex multilevel nested optimization, which imposes a substantial computational burden and makes it difficult to meet the requirements of complex engineering analysis. This study proposes a decoupling strategy to efficiently analyze time-variant reliability based on the mixed uncertainty model. The interval variables are treated as independent random variables uniformly distributed over their respective intervals. An equivalent time-variant reliability model containing only random variables is then established, avoiding multilayer nested optimization. The stochastic process is first discretized to obtain several static limit state functions at different times, so that the time-variant reliability problem becomes a conventional time-invariant system reliability problem. The first-order reliability method (FORM) is used to analyze the reliability at each time instant. An efficient hybrid time-variant reliability algorithm with robust convergence is thus obtained from the equivalent model. Finally, numerical examples show the effectiveness of the proposed method.
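The FORM step of such a procedure can be sketched with the standard Hasofer-Lind / Rackwitz-Fiessler (HL-RF) iteration; the linear limit state below is a hypothetical example in standard normal space, not one from the paper.

```python
import numpy as np

def form_hlrf(grad_g, g, u0, tol=1e-6, max_iter=100):
    # HL-RF iteration in standard normal space: finds the most probable
    # point (MPP) on the limit state surface g(u) = 0; the reliability
    # index beta is the distance from the origin to the MPP, and the
    # failure probability is approximated by Pf ~ Phi(-beta)
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(u)
        # project onto the linearised limit state
        u_new = (grad @ u - g(u)) * grad / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# hypothetical linear limit state g(u) = 3 - u1 - u2
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])

beta, mpp = form_hlrf(grad_g, g, np.zeros(2))
# for a linear g, beta equals 3 / sqrt(2) exactly
```

For a linear limit state the iteration converges in one step; for nonlinear limit states the same update is repeated with the gradient re-evaluated at each iterate.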


Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 312
Author(s):  
Ilze A. Auzina ◽  
Jakub M. Tomczak

Many real-life processes are black-box problems, i.e., the internal workings are inaccessible or a closed-form mathematical expression of the likelihood function cannot be defined. For continuous random variables, likelihood-free inference problems can be solved via Approximate Bayesian Computation (ABC). However, an optimal alternative for discrete random variables is yet to be formulated. Here, we aim to fill this research gap. We propose an adjusted population-based MCMC ABC method by redefining the standard ABC parameters for discrete variables and by introducing a novel Markov kernel inspired by differential evolution. We first assess the proposed Markov kernel on a likelihood-based inference problem, namely discovering the underlying diseases based on a QMR-DT network, and subsequently the entire method on three likelihood-free inference problems: (i) the QMR-DT network with an unknown likelihood function, (ii) learning a binary neural network, and (iii) neural architecture search. The obtained results indicate the high potential of the proposed framework and the superiority of the new Markov kernel.
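A minimal rejection-ABC sketch for discrete variables, far simpler than the population-based MCMC method proposed here, illustrates the basic likelihood-free idea: sample parameters from the prior, simulate, and keep samples whose simulated data are close to the observations. The binary "noisy channel" simulator and all parameters below are hypothetical.

```python
import random

def simulator(theta, rng):
    # hypothetical black box: each bit of theta is flipped
    # with probability 0.1 (noisy-channel toy model)
    return [b ^ (rng.random() < 0.1) for b in theta]

def rejection_abc_discrete(observed, n_bits, n_draws, eps, seed=0):
    # rejection ABC over binary parameter vectors: draw theta from a
    # uniform prior, simulate data, and accept theta if the Hamming
    # distance to the observed data is at most eps
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = [rng.randint(0, 1) for _ in range(n_bits)]
        x = simulator(theta, rng)
        dist = sum(a != b for a, b in zip(x, observed))
        if dist <= eps:
            accepted.append(theta)
    return accepted

observed = [1, 0, 1, 1, 0]
posterior_sample = rejection_abc_discrete(observed, n_bits=5,
                                          n_draws=5000, eps=0)
```

The accepted vectors form an approximate posterior sample; the paper's contribution replaces this naive prior sampling with a population-based MCMC scheme and a differential-evolution-inspired proposal kernel.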


1958 ◽  
Vol 10 ◽  
pp. 222-229 ◽  
Author(s):  
J. R. Blum ◽  
H. Chernoff ◽  
M. Rosenblatt ◽  
H. Teicher

Let {Xn} (n = 1, 2, …) be a stochastic process. The random variables comprising it, or the process itself, will be said to be interchangeable if, for any choice of distinct positive integers i1, i2, …, ik, the joint distribution of Xi1, Xi2, …, Xik depends merely on k and is independent of the integers i1, i2, …, ik. It was shown by De Finetti (3) that the probability measure for any interchangeable process is a mixture of probability measures of processes each consisting of independent and identically distributed random variables.
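De Finetti's mixture representation can be illustrated numerically: draw a success probability from a mixing distribution, then generate i.i.d. Bernoulli variables given it. The resulting sequence is interchangeable (exchangeable) but not independent. The uniform mixing distribution below is an arbitrary choice for illustration.

```python
import random

def exchangeable_bernoulli(n, rng):
    # De Finetti mixture: draw a success probability P from a mixing
    # distribution (here Uniform(0, 1)), then generate n i.i.d.
    # Bernoulli(P) draws; the X_i are exchangeable but not independent
    p = rng.random()
    return [int(rng.random() < p) for _ in range(n)]

rng = random.Random(42)

# exchangeability check: P(X1=1, X2=0) should match P(X1=0, X2=1);
# for a uniform mixture both equal E[P(1-P)] = 1/2 - 1/3 = 1/6
n_trials = 200_000
c10 = c01 = 0
for _ in range(n_trials):
    x = exchangeable_bernoulli(2, rng)
    c10 += x == [1, 0]
    c01 += x == [0, 1]
```

Note that the draws are not independent: observing X1 = 1 makes a large P more likely and hence raises the conditional probability that X2 = 1.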


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jinsheng Wang ◽  
Muhannad Aldosary ◽  
Song Cen ◽  
Chenfeng Li

Purpose
Normal transformation is often required in structural reliability analysis to convert non-normal random variables into independent standard normal variables. The existing normal transformation techniques, for example, the Rosenblatt transformation and the Nataf transformation, usually require the joint probability density function (PDF) and/or the marginal PDFs of the non-normal random variables. In practical problems, however, the joint PDF and marginal PDFs are often unknown due to the lack of data, while the statistical information is much easier to express in terms of statistical moments and correlation coefficients. This study aims to address this issue by presenting an alternative normal transformation method that does not require the PDFs of the input random variables.
Design/methodology/approach
The new approach, namely, the Hermite polynomial normal transformation, expresses the normal transformation function in terms of Hermite polynomials, and it works with both uncorrelated and correlated random variables. Its application in structural reliability analysis using different methods is thoroughly investigated via a number of carefully designed comparison studies.
Findings
Comprehensive comparisons are conducted to examine the performance of the proposed Hermite polynomial normal transformation scheme. The results show that the presented approach has accuracy comparable to previous methods and can be obtained in closed form. Moreover, the new scheme requires only the first four statistical moments and/or the correlation coefficients between random variables, which greatly widens the applicability of normal transformations in practical problems.
Originality/value
This study interprets the classical polynomial normal transformation method in terms of Hermite polynomials, namely, the Hermite polynomial normal transformation, to convert uncorrelated/correlated random variables into standard normal random variables. The new scheme requires only the first four statistical moments to operate, making it particularly suitable for problems constrained by limited data. Besides, the extension to correlated cases is easily achieved through the Hermite polynomials. Compared to existing methods, the new scheme is cheap to compute and delivers comparable accuracy.
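A minimal sketch of the forward map only (not the paper's moment-fitting procedure): a non-normal variable is modelled as a third-order polynomial in probabilists' Hermite polynomials of a standard normal variable. The coefficients below are hypothetical rather than fitted from the first four moments.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite_transform(u, coeffs):
    # third-order Hermite polynomial model of a non-normal variable X
    # as a function of a standard normal U:
    #   X = c0 + c1*He1(U) + c2*He2(U) + c3*He3(U)
    # (probabilists' Hermite polynomials; in the actual method the
    # coefficients are fitted from the first four moments of X)
    return hermeval(u, coeffs)

# hypothetical coefficients producing a right-skewed variable
coeffs = [0.0, 1.0, 0.2, 0.05]

rng = np.random.default_rng(0)
u = rng.standard_normal(200_000)
x = hermite_transform(u, coeffs)
# the forward map sends standard normal samples to the non-normal X;
# reliability analysis uses the inverse of this map to go back to U-space
```

Because E[He_k(U)] = 0 for k >= 1, the mean of X is simply c0, while the positive He2 coefficient introduces right skew; inverting the cubic polynomial gives the actual normal transformation used in reliability analysis.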


2012 ◽  
Vol 49 (3) ◽  
pp. 895-900
Author(s):  
Sheldon M. Ross

We find the joint distribution of the lengths of the shortest paths from a specified node to all other nodes in a network in which the edge lengths are assumed to be independent heterogeneous exponential random variables. We also give an efficient way to simulate these lengths that requires only one generated exponential per node, as well as efficient procedures to use the simulated data to estimate quantities of the joint distribution.
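For the special case of a complete graph with i.i.d. unit-rate exponential edge lengths, the memoryless property yields exactly this kind of one-exponential-per-node simulation: once k nodes have been reached, the gap to the next reached node is exponential with rate k(n-k). The sketch below illustrates the idea under that simplifying assumption; the paper's procedure applies to general networks.

```python
import random

def simulate_shortest_paths(n, rate=1.0):
    # sorted shortest-path distances from a source node to the other
    # n-1 nodes of a complete graph with i.i.d. Exp(rate) edge lengths;
    # by memorylessness, when k nodes are reached the k(n-k) crossing
    # edges all have Exp(rate) residuals, so the gap to the next node is
    # Exp(rate * k * (n - k)) -- one generated exponential per node
    dists = []
    t = 0.0
    for k in range(1, n):
        t += random.expovariate(rate * k * (n - k))
        dists.append(t)
    return dists
```

Each call produces one joint sample of the ordered distances, so repeated calls can be used to estimate any quantity of the joint distribution, such as the expected distance to the farthest node.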

