Gaussian Mixture Reduction for Time-Constrained Approximate Inference in Hybrid Bayesian Networks

2019 ◽  
Vol 9 (10) ◽  
pp. 2055 ◽  
Author(s):  
Cheol Young Park ◽  
Kathryn Blackmond Laskey ◽  
Paulo C. G. Costa ◽  
Shou Matsumoto

Hybrid Bayesian Networks (HBNs), which contain both discrete and continuous variables, arise naturally in many application areas (e.g., image understanding, data fusion, medical diagnosis, fraud detection). This paper concerns inference in an important subclass of HBNs, the conditional Gaussian (CG) networks, in which all continuous random variables have Gaussian distributions and all children of continuous random variables must be continuous. Inference in CG networks can be NP-hard even for special-case structures, such as poly-trees, where inference in discrete Bayesian networks can be performed in polynomial time. Therefore, approximate inference is required. In approximate inference, it is often necessary to trade off accuracy against solution time. This paper presents an extension to the Hybrid Message Passing inference algorithm for general CG networks and an algorithm for optimizing its accuracy given a bound on computation time. The extended algorithm uses Gaussian mixture reduction to prevent an exponential increase in the number of Gaussian mixture components. The trade-off algorithm performs pre-processing to find optimal run-time settings for the extended algorithm. Experimental results for four CG networks compare performance of the extended algorithm with existing algorithms and show the optimal settings for these CG networks.
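The core idea of Gaussian mixture reduction — capping the number of components so message passing does not blow up exponentially — can be illustrated with a generic moment-preserving pairwise merge. This is a minimal 1-D sketch, not the specific reduction algorithm of the paper; the function names and the crude mean-separation merge cost are illustrative (Runnalls' KL-based bound is a common refinement).

```python
def merge_pair(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two 1-D Gaussian components:
    the merged component keeps the pair's total weight, mean, and variance."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def reduce_mixture(weights, means, variances, target):
    """Greedily merge the closest pair of components (by mean separation,
    a deliberately crude cost) until only `target` components remain."""
    comps = list(zip(weights, means, variances))
    while len(comps) > target:
        i, j = min(
            ((a, b) for a in range(len(comps)) for b in range(a + 1, len(comps))),
            key=lambda ab: abs(comps[ab[0]][1] - comps[ab[1]][1]),
        )
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps
```

Because each merge is moment-preserving, the reduced mixture retains the original overall weight and mean, while the component count stays bounded regardless of how many messages are combined.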

2005 ◽  
Vol 14 (03) ◽  
pp. 477-489
Author(s):  
LAILA KHREISAT

One of the major challenges facing real-time applications that employ Bayesian networks is the design and development of efficient inference algorithms. In this paper we present an approximate real-time inference algorithm for Bayesian networks. The algorithm is an anytime reasoning method based on probabilistic inequalities, capable of handling fully and partially quantified Bayesian networks. In our method the accuracy of the results improves gradually as computation time increases, providing a trade-off between resource consumption and output quality. The method is tractable in producing initial answers and complete in the limiting case.


2018 ◽  
Vol 62 ◽  
pp. 799-828 ◽  
Author(s):  
Antonio Salmerón ◽  
Rafael Rumí ◽  
Helge Langseth ◽  
Thomas D. Nielsen ◽  
Anders L. Madsen

Hybrid Bayesian networks have received increasing attention in recent years. They differ from standard Bayesian networks in that they can host discrete and continuous variables simultaneously, which extends the applicability of the Bayesian network framework in general. However, this extra flexibility comes at a cost: inference in these types of models is computationally more challenging, and the underlying models and updating procedures may not even admit closed-form solutions. In this paper we provide an overview of the main trends and principled approaches for performing inference in hybrid Bayesian networks. The methods covered in the paper are organized and discussed according to their methodological basis. We consider how the methods have been extended and adapted to (hybrid) dynamic Bayesian networks, and we end with an overview of established software systems supporting inference in these types of models.


Author(s):  
Alessandro Barbiero ◽  
Asmerilda Hitaj

In many management science or economic applications, it is common to represent the key uncertain inputs as continuous random variables. However, when analytic techniques fail to provide a closed-form solution to a problem, or when one needs to reduce the computational load, it is often necessary to resort to some problem-specific approximation technique or to approximate each continuous probability distribution by a discrete one. Many discretization methods have been proposed so far. In this work, we review the most popular techniques, highlighting their strengths and weaknesses, and empirically investigate their performance through a comparative study of a well-known engineering problem, formulated as a stress–strength model, with the aim of weighing up their feasibility and accuracy in recovering the value of the reliability parameter, also with reference to the number of discrete points. The results overall reward a recently introduced method as the best performer: it derives the discrete approximation as the numerical solution of a constrained non-linear optimization that preserves the first two moments of the original distribution. This method provides more accurate results than an ad-hoc first-order approximation technique. However, it is also the most computationally demanding, and its computation time can exceed that of Monte Carlo approximation if the number of discrete points exceeds a certain threshold.
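The general idea of a moment-preserving discretization can be seen in a closed-form special case: the classical three-point (Gauss–Hermite) discretization of a normal variable, which exactly matches the mean and variance. The method rewarded in the study above solves a more general constrained optimization; this sketch covers only the normal case and the function name is illustrative.

```python
import math

def three_point_normal(mu, sigma):
    """Three-point discretization of N(mu, sigma^2) placing mass
    1/6, 2/3, 1/6 at mu - sqrt(3)*sigma, mu, mu + sqrt(3)*sigma.
    This exactly preserves the first two moments of the distribution."""
    points = [mu - math.sqrt(3) * sigma, mu, mu + math.sqrt(3) * sigma]
    probs = [1 / 6, 2 / 3, 1 / 6]
    return points, probs
```

For a normal distribution these particular points and weights coincide with the three-node Gauss–Hermite quadrature rule; matching more moments, or handling non-normal inputs, requires more points or a numerical optimization over the probabilities.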


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 957
Author(s):  
Branislav Popović ◽  
Lenka Cepova ◽  
Robert Cep ◽  
Marko Janev ◽  
Lidija Krstanović

In this work, we deliver a novel measure of similarity between Gaussian mixture models (GMMs) based on neighborhood preserving embedding (NPE) of the parameter space, which projects the components of the GMMs, assumed to lie close to a lower-dimensional manifold, from the original high-dimensional parameter space into a much lower-dimensional one. Computing the distance between two GMMs is thereby reduced to calculating (taking the corresponding weights into account) distances between sets of lower-dimensional Euclidean vectors. A much better trade-off between recognition accuracy and computational complexity is achieved in comparison to measures that evaluate distances between Gaussian components in the original parameter space. The proposed measure is far more efficient in machine learning tasks that operate on large data sets, where the required overall number of Gaussian components is always large. Artificial as well as real-world experiments show a much better trade-off between recognition accuracy and computational complexity for the proposed measure than for all baseline measures of similarity between GMMs tested in this paper.
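The overall pipeline — embed each component's parameters into a low-dimensional space, then compare weighted sets of vectors — can be sketched as follows. This is a simplified illustration only: it substitutes a fixed linear projection for the learned NPE embedding, and the weighted-average pairwise distance is a crude stand-in for the paper's similarity measure; all function names are hypothetical.

```python
import numpy as np

def component_vectors(means, covs):
    """Stack each component's mean and flattened covariance into one
    high-dimensional parameter vector (one row per component)."""
    return np.array([np.concatenate([m, c.ravel()]) for m, c in zip(means, covs)])

def gmm_distance(w1, X1, w2, X2, P):
    """Weight-averaged pairwise Euclidean distance between projected
    component vectors. P is a (high-dim x low-dim) projection matrix,
    here a stand-in for the learned NPE transformation."""
    Y1, Y2 = X1 @ P, X2 @ P                       # project to low-dim space
    D = np.linalg.norm(Y1[:, None, :] - Y2[None, :, :], axis=-1)
    return float(w1 @ D @ w2)                     # weight every component pair
```

The computational saving comes from the pairwise distances being evaluated between short projected vectors rather than full mean/covariance parameter sets, which matters when the total number of components across a data set is large.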

