Dirichlet process models

2013 ◽  
pp. 561-590
Author(s):  
Wesam Elshamy ◽  
William H. Hsu

Topic models are probabilistic models for discovering topical themes in collections of documents, providing a means of organizing what would otherwise be unstructured collections. The first wave of topic models could discover the prevailing topics in a large collection of documents spanning a period of time, but these time-invariant models could not capture (1) a number of topics that varies over time or (2) topic structure that changes over time. A few models have been developed to address these two deficiencies: the online hierarchical Dirichlet process models documents with a time-varying number of topics, and the continuous-time dynamic topic model evolves topic structure in continuous time. In this chapter, the authors present the continuous-time infinite dynamic topic model, which combines the advantages of these two models: it is a probabilistic topic model that changes both the number of topics and the topic structure over continuous time.
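As a rough illustration of the nonparametric mechanism behind such models, the following is a minimal stick-breaking sketch of a Dirichlet process in Python (with hypothetical parameters; this is not the authors' continuous-time infinite dynamic topic model), showing why the number of topics need not be fixed in advance.

```python
# Minimal stick-breaking sketch of a Dirichlet process.
# Parameters are illustrative only.
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking weights for a DP with concentration alpha."""
    betas = rng.beta(1.0, alpha, size=truncation)              # Beta(1, alpha) stick proportions
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * leftover                                     # mixture weights, sum <= 1

rng = np.random.default_rng(0)
weights = stick_breaking_weights(alpha=1.0, truncation=20, rng=rng)
# Only a handful of weights are appreciably large, so the effective number of
# topics (mixture components) is inferred from the data rather than fixed in advance.
print(np.round(weights, 3))
```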


1998 ◽  
Vol 7 (2) ◽  
pp. 223-238 ◽  
Author(s):  
Steven N. MacEachern ◽  
Peter Müller

Biometrika ◽  
2007 ◽  
Vol 94 (4) ◽  
pp. 809-825 ◽  
Author(s):  
J. A. Duan ◽  
M. Guindani ◽  
A. E. Gelfand

2010 ◽  
Vol 2010 ◽  
pp. 1-14 ◽  
Author(s):  
Terrance Savitsky ◽  
Marina Vannucci

We expand a framework for Bayesian variable selection for Gaussian process (GP) models by employing spiked Dirichlet process (DP) prior constructions over set partitions containing covariates. Our approach results in a nonparametric treatment of the distribution of the covariance parameters of the GP covariance matrix, which in turn induces a clustering of the covariates. We evaluate two prior constructions: the first employs a mixture of a point mass and a continuous distribution as the centering distribution for the DP prior, thereby clustering all covariates; the second employs a mixture of a spike and a DP prior with a continuous centering distribution, which induces clustering of the selected covariates only. DP models borrow information across covariates through model-based clustering. In particular, our simulation results show a reduction in posterior sampling variability and, in turn, enhanced prediction performance. In our model formulations, we accomplish posterior inference with novel combinations and extensions of existing algorithms for inference with DP prior models, and we compare performance under the two prior constructions.
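A hedged sketch of the spiked-centering idea follows (the distributions, parameter names, and probabilities are illustrative, not the paper's exact formulation): the DP base measure mixes a point mass at zero, which excludes a covariate, with a continuous distribution, while the DP's discreteness ties covariates together into clusters.

```python
# Sketch of a "spiked" centering (base) distribution for a DP prior over
# covariate-specific parameters. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def draw_from_spiked_base(pi_zero, rng):
    """Base measure G0: point mass at 0 with prob pi_zero, else Gamma(2, 1)."""
    return 0.0 if rng.random() < pi_zero else rng.gamma(2.0, 1.0)

def dp_covariate_parameters(n_covariates, alpha, pi_zero, rng):
    """Chinese-restaurant-style draw: ties cluster covariates, zeros drop them."""
    values = []
    for i in range(n_covariates):
        # With prob alpha/(alpha + i) draw a fresh value from G0,
        # otherwise reuse the value of a uniformly chosen earlier covariate.
        if rng.random() < alpha / (alpha + i):
            values.append(draw_from_spiked_base(pi_zero, rng))
        else:
            values.append(values[rng.integers(len(values))])
    return np.array(values)

params = dp_covariate_parameters(n_covariates=10, alpha=1.0, pi_zero=0.5, rng=rng)
print(params)  # zeros mark excluded covariates; repeated values mark clusters
```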


2018 ◽  
Vol 41 ◽  
Author(s):  
Wei Ji Ma

Given the many types of suboptimality in perception, I ask how one should test for multiple forms of suboptimality at the same time – or, more generally, how one should compare process models that can differ in any or all of multiple components. In analogy to factorial experimental design, I advocate for factorial model comparison.
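A minimal sketch of what factorial model comparison could look like in practice (the model components and log-likelihood values below are hypothetical, not taken from the article): every combination of optional components is fit and scored, rather than testing one component at a time.

```python
# Factorial model comparison sketch: score all combinations of two optional
# model components. Numbers are hypothetical placeholders.
from itertools import product

# Hypothetical maximized log-likelihoods from fitting each variant to the same data.
log_likelihoods = {
    ("no_lapse", "no_noise"): -520.4,
    ("no_lapse", "noise"): -512.9,
    ("lapse", "no_noise"): -515.1,
    ("lapse", "noise"): -510.2,
}
extra_params = {"no_lapse": 0, "lapse": 1, "no_noise": 0, "noise": 1}
base_params = 3  # parameters shared by every variant

for lapse, noise in product(["no_lapse", "lapse"], ["no_noise", "noise"]):
    k = base_params + extra_params[lapse] + extra_params[noise]
    aic = 2 * k - 2 * log_likelihoods[(lapse, noise)]
    print(f"{lapse:8s} {noise:8s}  AIC = {aic:.1f}")
```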

