binary random variable
Recently Published Documents


TOTAL DOCUMENTS: 5 (FIVE YEARS: 3)

H-INDEX: 1 (FIVE YEARS: 0)

Author(s): Sarah E Robertson, Issa J Dahabreh, Jon A Steingrimsson

Abstract: We consider methods for generating draws of a binary random variable whose expectation conditional on covariates follows a logistic regression model with known covariate coefficients. We examine approximations for finding a “balancing intercept,” that is, a value for the intercept of the logistic model that leads to a desired marginal expectation for the binary random variable. We show that a recently proposed analytical approximation can produce inaccurate results, especially when targeting more extreme marginal expectations or when the linear predictor of the regression model has high variance. We describe and implement a numerical approximation based on Monte Carlo methods that appears to work well in practice. Our approach to the basic problem of the balancing intercept provides an example of a broadly applicable strategy for formulating and solving problems that arise in the design of simulation studies used to evaluate or teach epidemiologic methods.
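The numerical idea described in the abstract can be sketched as a root-finding problem: choose the intercept so that the Monte Carlo average of the conditional expectations equals the target marginal expectation. The sketch below assumes a single standard-normal covariate with a known slope of 1.5 and a target marginal expectation of 0.05; these values and the function names are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

rng = np.random.default_rng(0)

# Illustrative design: one standard-normal covariate with a known slope.
beta = 1.5                    # known covariate coefficient (assumed)
x = rng.normal(size=200_000)  # Monte Carlo draws of the covariate
target = 0.05                 # desired marginal expectation E[Y] (assumed)

def marginal_gap(intercept):
    """Monte Carlo estimate of E[expit(intercept + beta * X)] minus the target."""
    return expit(intercept + beta * x).mean() - target

# Solve numerically for the "balancing intercept".
balancing_intercept = brentq(marginal_gap, -20.0, 20.0)

# Generate binary draws whose marginal expectation is approximately the target.
y = rng.binomial(1, expit(balancing_intercept + beta * x))
print(balancing_intercept, y.mean())
```

Because expit is monotone in the intercept, the gap function has a single sign change, so any bracketing root finder converges quickly, and the Monte Carlo error shrinks as the number of covariate draws grows.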


2020, Vol. 18 (1)
Author(s): Rand Wilcox

For a binary random variable Y, let p(x) = P(Y = 1 | X = x) for some covariate X. The goal of computing a confidence interval for p(x) is considered. When a logistic regression model is assumed, even a slight departure from the model that is difficult to detect via a goodness-of-fit test can yield inaccurate results, and the accuracy of the confidence interval can deteriorate as the sample size increases. An alternative approach based on a smoother is suggested, which provides a more flexible approximation of p(x).
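To make the smoother idea concrete, here is a minimal sketch of a kernel (Nadaraya-Watson) estimate of p(x) with a percentile-bootstrap confidence interval. It illustrates the general strategy only; it is not Wilcox's specific estimator, and the bandwidth, sample size, and data-generating process are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_smoother(x0, x, y, bandwidth=0.5):
    """Nadaraya-Watson estimate of P(Y = 1 | X = x0) using a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

def bootstrap_ci(x0, x, y, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for p(x0)."""
    n = len(x)
    est = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample pairs (x_i, y_i)
        est[b] = kernel_smoother(x0, x[idx], y[idx])
    return np.quantile(est, [alpha / 2, 1 - alpha / 2])

# Simulated data in which the true p(x) is a step function, not logistic.
x = rng.normal(size=300)
y = rng.binomial(1, np.where(x > 0, 0.8, 0.2))

print(kernel_smoother(0.5, x, y), bootstrap_ci(0.5, x, y))
```

The smoother does not impose the logistic shape on p(x), which is the flexibility the abstract refers to.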


Author(s): Zheng Tian, Ying Wen, Zhichen Gong, Faiz Punakkath, Shihao Zou, ...

In a single-agent setting, reinforcement learning (RL) tasks can be cast as an inference problem by introducing a binary random variable o that stands for "optimality". In this paper, we redefine the binary random variable o in the multi-agent setting and formalize multi-agent reinforcement learning (MARL) as probabilistic inference. We derive a variational lower bound on the likelihood of achieving optimality and name it the Regularized Opponent Model with Maximum Entropy Objective (ROMMEO). ROMMEO yields a novel perspective on opponent modeling, and we show theoretically and empirically how it can improve the performance of training agents in cooperative games. To optimize ROMMEO, we first introduce a tabular Q-iteration method, ROMMEO-Q, with a proof of convergence. We then extend the exact algorithm to complex environments by proposing an approximate version, ROMMEO-AC. We evaluate the two algorithms on a challenging iterated matrix game and a differential game, respectively, and show that they can outperform strong MARL baselines.
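For readers unfamiliar with the control-as-inference framing the abstract builds on, the standard single-agent construction is sketched below; the paper's multi-agent redefinition of o and the ROMMEO bound itself differ in detail, so this display should be read as background rather than as the paper's objective.

```latex
% Standard single-agent control-as-inference sketch (background only):
% the optimality variable o_t is Bernoulli with success probability
% proportional to the exponentiated reward, and the log-likelihood of
% optimality admits a variational lower bound.
\[
  p(o_t = 1 \mid s_t, a_t) \propto \exp\big(r(s_t, a_t)\big),
  \qquad
  \log p(o_{1:T} = 1) \;\ge\;
  \mathbb{E}_{\tau \sim q}\!\left[\sum_{t=1}^{T} r(s_t, a_t)\right]
  + \mathcal{H}(q),
\]
% i.e., expected return plus an entropy term over the variational policy q,
% the "maximum entropy objective" referred to in the name ROMMEO.
```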


Author(s): V. V. Buldygin, K. K. Moskvichova
