Unifying rational models of categorization via the hierarchical Dirichlet process

2019 ◽  
Author(s):  
Tom Griffiths ◽  
Kevin Canini ◽  
Adam N. Sanborn ◽  
Danielle Navarro

Models of categorization make different representational assumptions, with categories being represented by prototypes, sets of exemplars, and everything in between. Rational models of categorization justify these representational assumptions in terms of different schemes for estimating probability distributions. However, they do not answer the question of which scheme should be used in representing a given category. We show that existing rational models of categorization are special cases of a statistical model called the hierarchical Dirichlet process, which can be used to automatically infer a representation of the appropriate complexity for a given category.
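To make the representational spectrum concrete: the Dirichlet process induces a prior over partitions via the Chinese restaurant process, whose concentration parameter governs how many clusters a category representation uses (prototype-like when few, exemplar-like when many). A minimal sketch of that process, independent of the paper's own model:

```python
import random

def crp_partition(n_items, alpha, rng):
    """Sample a partition via the Chinese restaurant process: item i joins
    an existing cluster with probability proportional to its size, or starts
    a new cluster with probability proportional to alpha."""
    sizes = []        # current cluster sizes
    assignments = []  # cluster index for each item
    for _ in range(n_items):
        weights = sizes + [alpha]
        r = rng.random() * sum(weights)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(sizes):
            sizes.append(1)   # open a new cluster
        else:
            sizes[k] += 1
        assignments.append(k)
    return assignments

rng = random.Random(0)
few = crp_partition(50, 0.1, rng)    # small alpha: few clusters (prototype-like)
many = crp_partition(50, 10.0, rng)  # large alpha: many clusters (exemplar-like)
```

Small concentrations collapse a category toward a single prototype cluster; large concentrations keep nearly every exemplar distinct, which is the sense in which the hierarchical Dirichlet process can interpolate between the two representations.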

2010 ◽  
Vol 6 (4) ◽  
pp. e1000763 ◽  
Author(s):  
Daniel Ting ◽  
Guoli Wang ◽  
Maxim Shapovalov ◽  
Rajib Mitra ◽  
Michael I. Jordan ◽  
...  

Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 404 ◽  
Author(s):  
Julianna Pinele ◽  
João E. Strapasson ◽  
Sueli I. R. Costa

The Fisher–Rao distance is a measure of dissimilarity between probability distributions, which, under certain regularity conditions of the statistical model, is up to a scaling factor the unique Riemannian metric invariant under Markov morphisms. It is related to the Shannon entropy and has been used to enlarge the perspective of analysis in a wide variety of domains such as image processing, radar systems, and morphological classification. Here, we consider this metric in the statistical model of multivariate normal distributions, for which no explicit expression is known in general. We gather known results (closed forms for submanifolds and bounds) and derive expressions for the distance between distributions with the same covariance matrix and between distributions with mirrored covariance matrices. An application of the Fisher–Rao distance to the simplification of Gaussian mixtures using the hierarchical clustering algorithm is also presented.
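While no general multivariate closed form exists, the univariate normal case is classical: after rescaling the mean coordinate, the Fisher metric is a multiple of the Poincaré half-plane metric, giving a closed-form distance. A small sketch of that univariate formula (not the paper's multivariate expressions):

```python
import math

def fisher_rao_normal(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).
    The Fisher metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2 is, in the
    coordinates (mu / sqrt(2), sigma), sqrt(2) times the hyperbolic
    half-plane metric, so the distance is a scaled arccosh."""
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (2.0 * sigma1 * sigma2))

# Pure scale change: the distance reduces to sqrt(2) * |ln(sigma2 / sigma1)|
print(fisher_rao_normal(0.0, 1.0, 0.0, math.e))  # ≈ 1.4142 (sqrt(2))
```

Distances like this, applied componentwise, are what a hierarchical clustering of Gaussian mixture components would consume as its dissimilarity matrix.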


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3127 ◽  
Author(s):  
Federico Bassetti ◽  
Lucia Ladelli

We introduce mixtures of species sampling sequences (mSSS) and discuss how these sequences are related to various types of Bayesian models. As a particular case, we recover species sampling sequences with general (not necessarily diffuse) base measures. These models include some “spike-and-slab” non-parametric priors recently introduced to provide sparsity. Furthermore, we show how mSSS arise while considering hierarchical species sampling random probabilities (e.g., the hierarchical Dirichlet process). Extending previous results, we prove that mSSS are obtained by assigning the values of an exchangeable sequence to the classes of a latent exchangeable random partition. Using this representation, we give an explicit expression of the Exchangeable Partition Probability Function of the partition generated by an mSSS. Some special cases are discussed in detail, in particular species sampling sequences with general base measures and mixtures of species sampling sequences with a Gibbs-type latent partition. Finally, we give explicit expressions of the predictive distributions of an mSSS.
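For context on Exchangeable Partition Probability Functions: the Dirichlet process, the simplest species sampling model, has a well-known closed-form EPPF. A small sketch of that classical formula (not the mSSS expression derived in the paper):

```python
import math

def dp_eppf(block_sizes, alpha):
    """EPPF of the Dirichlet process with concentration alpha:
    p(n_1, ..., n_k) = alpha^k * prod_j (n_j - 1)! * Gamma(alpha) / Gamma(alpha + n),
    computed in log space for numerical stability."""
    n = sum(block_sizes)
    k = len(block_sizes)
    log_p = k * math.log(alpha)
    log_p += sum(math.lgamma(nj) for nj in block_sizes)  # (n_j - 1)! = Gamma(n_j)
    log_p += math.lgamma(alpha) - math.lgamma(alpha + n)
    return math.exp(log_p)

# For n = 2 the two possible partitions exhaust the probability:
print(dp_eppf([2], 1.0), dp_eppf([1, 1], 1.0))  # → 0.5 0.5
```

Because the EPPF depends only on the multiset of block sizes, summing it over all set partitions (weighted by how many partitions share each size profile) gives 1, which is a convenient sanity check for any EPPF implementation.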


2019 ◽  
Author(s):  
Mark Andrews

A Gibbs sampler for the hierarchical Dirichlet process mixture model (HDPMM) applied to multinomial data.
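The abstract is terse; as context, a collapsed Gibbs sweep for a finite mixture of multinomials with Dirichlet priors (a simplified, finite-K analogue of an HDPMM sampler, not the author's code) can be sketched as:

```python
import math
import random

def gibbs_step(docs, z, K, V, alpha, beta, rng):
    """One collapsed-Gibbs sweep for a finite multinomial mixture:
    cluster weights ~ Dirichlet(alpha), per-cluster word probabilities
    ~ Dirichlet(beta); both sets of parameters are integrated out."""
    n_k = [0] * K                       # documents per cluster
    n_kw = [[0] * V for _ in range(K)]  # word counts per cluster
    n_kt = [0] * K                      # total words per cluster
    for d, doc in enumerate(docs):
        n_k[z[d]] += 1
        for w in doc:
            n_kw[z[d]][w] += 1
            n_kt[z[d]] += 1
    for d, doc in enumerate(docs):
        k = z[d]                        # remove document d from its cluster
        n_k[k] -= 1
        for w in doc:
            n_kw[k][w] -= 1
            n_kt[k] -= 1
        logp = []
        for j in range(K):              # predictive log-prob of doc under cluster j
            lp = math.log(n_k[j] + alpha)
            seen, tot = {}, n_kt[j]
            for w in doc:               # increment within-doc counts for repeats
                lp += math.log(n_kw[j][w] + beta + seen.get(w, 0))
                lp -= math.log(tot + V * beta)
                seen[w] = seen.get(w, 0) + 1
                tot += 1
            logp.append(lp)
        m = max(logp)                   # sample a new assignment (softmax)
        probs = [math.exp(l - m) for l in logp]
        r = rng.random() * sum(probs)
        acc = 0.0
        for j, p in enumerate(probs):
            acc += p
            if r <= acc:
                break
        z[d] = j
        n_k[j] += 1
        for w in doc:
            n_kw[j][w] += 1
            n_kt[j] += 1
    return z

rng = random.Random(1)
docs = [[0, 0, 1], [1, 0, 0], [2, 3, 3], [3, 2, 2]]
z = [rng.randrange(2) for _ in docs]
for _ in range(20):
    z = gibbs_step(docs, z, K=2, V=4, alpha=1.0, beta=0.5, rng=rng)
```

The hierarchical version replaces the fixed K components with a shared top-level Dirichlet process, so new clusters can be opened during sampling; the per-document resampling step keeps the same remove/score/reassign shape.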


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 432 ◽  
Author(s):  
Emmanuel Chevallier ◽  
Nicolas Guigui

This paper describes a statistical model of wrapped densities for bi-invariant statistics on the group of rigid motions of a Euclidean space. Probability distributions on the group are constructed from distributions on tangent spaces and pushed to the group by the exponential map. We provide an expression for the Jacobian determinant of the exponential map of SE(n), which yields explicit expressions for the densities on the group. Besides having explicit expressions, the strengths of this statistical model are that the densities are parametrized by their moments and are easy to sample from. Unfortunately, we are not able to provide convergence rates for density estimation. We instead provide a numerical comparison between the moment-matching estimators on SE(2) and R^3, which shows similar behaviors.
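As an illustration of the construction described (not the authors' implementation), sampling a wrapped density on SE(2) amounts to drawing a Gaussian tangent vector at the identity and pushing it through the exponential map; the helper names and the per-axis standard deviations below are assumptions for the sketch:

```python
import math
import random

def exp_se2(theta, v1, v2):
    """Exponential map of SE(2): tangent vector (theta, v1, v2) at the
    identity -> (rotation angle, translation). The translation uses the
    standard left Jacobian V(theta) = [[sin t / t, -(1 - cos t) / t],
    [(1 - cos t) / t, sin t / t]]."""
    if abs(theta) < 1e-9:
        return theta, (v1, v2)  # near zero, exp is the identity on translations
    a = math.sin(theta) / theta
    b = (1.0 - math.cos(theta)) / theta
    return theta, (a * v1 - b * v2, b * v1 + a * v2)

def sample_wrapped(n, sigmas, rng):
    """Draw n samples from a wrapped density on SE(2): an axis-aligned
    Gaussian on the tangent space, pushed to the group by exp."""
    out = []
    for _ in range(n):
        xi = [rng.gauss(0.0, s) for s in sigmas]
        out.append(exp_se2(*xi))
    return out

rng = random.Random(0)
samples = sample_wrapped(1000, (0.3, 1.0, 1.0), rng)
```

Evaluating the wrapped density at a group element would additionally divide by the Jacobian determinant of exp, which for SE(2) is 2(1 - cos θ)/θ², matching the paper's use of that determinant to obtain explicit density expressions.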

