Bayesian Inference: Gibbs Sampling
Ilker Yildirim
Department of Brain and Cognitive Sciences
University of Rochester
Rochester, NY 14627
August 2012
References: Most of the material in this note was taken from: (1) Lynch, S. M. (2007).
Introduction to Applied Bayesian Statistics and Estimation for Social Scientists. New York:
Springer; and (2) Taylan Cemgil’s lecture slides on Monte Carlo methods
(http://www.cmpe.boun.edu.tr/courses/cmpe58n/fall2009/)
1. Introduction
As Bayesian models of cognitive phenomena become more sophisticated, the need for efficient
inference methods becomes more urgent. In a nutshell, the goal of Bayesian inference is to
maintain a full posterior probability distribution over a set of random variables. However,
maintaining and using this distribution often involves computing integrals which, for most
non-trivial models, is intractable. Sampling algorithms based on Markov chain Monte Carlo
(MCMC) techniques are one possible way to perform inference in such models.
The underlying logic of MCMC sampling is that we can estimate any desired expectation
by ergodic averages. That is, we can compute any statistic of a posterior distribution as long
as we have N simulated samples from that distribution:
$$\mathbb{E}_P[f(s)] \approx \frac{1}{N} \sum_{i=1}^{N} f\left(s^{(i)}\right) \qquad (1)$$
where $P$ is the posterior distribution of interest, $f(s)$ is the desired expectation, and
$s^{(i)}$ is the $i$th simulated sample from $P$. For example, we can estimate the mean by
$\mathbb{E}_P[x] \approx \frac{1}{N} \sum_{i=1}^{N} x^{(i)}$.
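As a concrete illustration of Eq. (1), here is a minimal Python sketch of the ergodic average. The `samples` array is a hypothetical stand-in for posterior draws; in practice these would come from an MCMC sampler such as the Gibbs sampler described next:

```python
import numpy as np

# Hypothetical stand-in for draws from the posterior P: in practice these
# would be produced by an MCMC sampler, not drawn directly from a normal.
rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=10_000)

# Ergodic average, Eq. (1): E_P[f(s)] ~= (1/N) * sum_i f(s^(i)).
f = np.square                       # any statistic f, here the second moment
expectation_f = np.mean(f(samples))
posterior_mean = np.mean(samples)   # the special case f(s) = s
print(expectation_f, posterior_mean)
```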
How do we obtain samples from the posterior distribution? Gibbs sampling is one MCMC
technique suitable for the task. The idea in Gibbs sampling is to generate posterior samples
by sweeping through each variable (or block of variables) to sample from its conditional
distribution with the remaining variables fixed to their current values. For instance, consider
the random variables $X_1$, $X_2$, and $X_3$. We start by setting these variables to their initial
values $x_1^{(0)}$, $x_2^{(0)}$, and $x_3^{(0)}$ (often values sampled from a prior distribution $q$). At iteration
$i$, we sample $x_1^{(i)} \sim p(X_1 = x_1 \mid X_2 = x_2^{(i-1)}, X_3 = x_3^{(i-1)})$, sample
$x_2^{(i)} \sim p(X_2 = x_2 \mid X_1 = x_1^{(i)}, X_3 = x_3^{(i-1)})$, and sample
$x_3^{(i)} \sim p(X_3 = x_3 \mid X_1 = x_1^{(i)}, X_2 = x_2^{(i)})$. This process continues
until “convergence” (the sample values have the same distribution as if they were sampled
from the true joint posterior distribution). Algorithm 1 details a generic Gibbs sampler.