GMM-Based Hidden Markov Random Field
for Color Image and 3D Volume Segmentation
Quan Wang
Signal Analysis and Machine Perception Laboratory
Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute
wangq10@rpi.edu
Abstract
In this project,^1 we first study the Gaussian-based hidden Markov random field (HMRF) model and its expectation-maximization (EM) algorithm. Then we generalize it to the Gaussian mixture model-based hidden Markov random field. The algorithm is implemented in MATLAB. We also apply this algorithm to color image segmentation problems and 3D volume segmentation problems.
1. Introduction
Markov random fields (MRFs) have been widely used for computer vision problems such as image segmentation [10], surface reconstruction [6], and depth inference [5]. Much of their success is attributable to efficient algorithms such as Iterated Conditional Modes [1], and to their consideration of both "data faithfulness" and "model smoothness" [8].
The HMRF-EM framework was first proposed for segmentation of brain MR images [11]. For simplicity, we first assume that the image is 2D gray-level, and that the intensity distribution of each region to be segmented follows a Gaussian distribution. Given an image Y = (y_1, ..., y_N), where N is the number of pixels and each y_i is the gray-level intensity of a pixel, we want to infer a configuration of labels X = (x_1, ..., x_N), where x_i ∈ L and L is the set of all possible labels. In a binary segmentation problem, L = {0, 1}. According to the MAP criterion, we seek the labeling X* which satisfies:

X* = argmax_X {P(Y|X, Θ) P(X)}.   (1)
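To make the MAP criterion concrete, the following sketch (my own illustration in Python rather than the paper's MATLAB; all values and names are assumed) brute-forces the argmax in Eq. (1) over all labelings of a tiny 1D signal, combining per-pixel Gaussian log-likelihoods with a Potts-style Gibbs prior that penalizes label changes between neighbors:

```python
import itertools
import math

y = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]     # tiny 1D "image" (toy values)
params = {0: (0.0, 0.2), 1: (1.0, 0.2)}  # assumed (mu, sigma) per label
beta = 1.0                               # assumed smoothness weight

def log_gauss(v, mu, sigma):
    # log of the Gaussian density N(v; mu, sigma^2)
    return -0.5 * math.log(2 * math.pi * sigma**2) - (v - mu)**2 / (2 * sigma**2)

def log_posterior(x):
    # log P(Y|X, Theta): independent per-pixel Gaussians
    ll = sum(log_gauss(v, *params[l]) for v, l in zip(y, x))
    # log P(X): Gibbs (Potts-style) prior penalizing neighbor disagreements
    lp = -beta * sum(x[i] != x[i + 1] for i in range(len(x) - 1))
    return ll + lp

# brute-force argmax over all 2^N binary labelings (feasible only for tiny N)
x_star = max(itertools.product([0, 1], repeat=len(y)), key=log_posterior)
print(x_star)  # -> (0, 0, 0, 1, 1, 1): two homogeneous regions
```

Exhaustive search is exponential in N, which is why practical MRF inference relies on algorithms such as Iterated Conditional Modes.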
The prior probability P(X) is a Gibbs distribution, and the joint likelihood probability is

P(Y|X, Θ) = ∏_i P(y_i | X, Θ) = ∏_i P(y_i | x_i, θ_{x_i}),   (2)

where P(y_i | x_i, θ_{x_i}) is a Gaussian distribution with parameters θ_{x_i} = (μ_{x_i}, σ_{x_i}). In MRF problems, one usually learns the parameter set Θ = {θ_l | l ∈ L} from training data. For example, in image segmentation problems, prior knowledge of the intensity distributions of the foreground and the background may be consistent within a dataset, especially a domain-specific one. Thus, we can learn the parameters from some images that are manually labeled, and use these parameters to run the MRF to segment the remaining images.

^1 This work originally appeared as the final project of Prof. Qiang Ji's course Introduction to Probabilistic Graphical Models at RPI.
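In this supervised MRF setting, learning Θ from manually labeled images reduces to per-class sample statistics. A minimal sketch (Python rather than the paper's MATLAB; the data and names are illustrative):

```python
import numpy as np

# a manually labeled training "image": intensities and ground-truth labels (toy values)
intensities = np.array([0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 0.05, 0.85])
labels      = np.array([0,   0,   0,    1,   1,   1,    0,    1])

theta = {}
for l in np.unique(labels):
    vals = intensities[labels == l]
    # maximum-likelihood estimates of the Gaussian parameters for class l
    theta[l] = (vals.mean(), vals.std())

print(theta)  # per-class (mu_l, sigma_l), e.g. mu_0 = 0.125, mu_1 = 0.925
```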
The major difference between MRF and HMRF is that, in HMRF, the parameter set Θ is learned in an unsupervised manner. In an HMRF image segmentation problem, there is no training stage, and we assume no prior knowledge about the foreground/background intensity distributions. Thus, a natural proposal for solving an HMRF problem is to use the EM algorithm, in which the parameter set Θ and the label configuration X are estimated alternately.
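This alternation can be sketched with a plain Gaussian-mixture EM loop (a simplified illustration of my own, in Python rather than MATLAB; it omits the spatial Gibbs prior that makes the model an HMRF, and all names and values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic unlabeled intensities drawn from two Gaussian classes
y = np.concatenate([rng.normal(0.0, 0.1, 200), rng.normal(1.0, 0.1, 200)])

# initial parameter set Theta^(0); values are arbitrary guesses
mu, sigma = np.array([0.2, 0.8]), np.array([0.3, 0.3])

def gauss(y, mu, sigma):
    # per-pixel Gaussian densities, shape (N, num_labels)
    return np.exp(-(y[:, None] - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibility of each label for each pixel
    p = gauss(y, mu, sigma)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate Theta from the soft assignments
    mu = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
    sigma = np.sqrt((r * (y[:, None] - mu)**2).sum(axis=0) / r.sum(axis=0))

x = r.argmax(axis=1)  # hard label configuration X from final responsibilities
print(mu)             # should recover means near 0.0 and 1.0
```

The full HMRF-EM algorithm differs in that the E-step also incorporates the Gibbs prior over neighboring labels, rather than treating pixels independently.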
2. EM Algorithm for Parameters
We still use the 2D gray-level image and Gaussian distribution assumptions. We use the EM algorithm to estimate the parameter set Θ = {θ_l | l ∈ L}. We describe the EM algorithm as follows [7]:

1. Start: Assume we have an initial parameter set Θ^(0).

2. E-step: At the t-th iteration, we have Θ^(t), and we cal-
arXiv:1212.4527v1 [cs.CV] 18 Dec 2012