Bahl, Cocke, Jelinek and Raviv (BCJR) Algorithm
Reference: L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. Info. Theory, vol. IT-20, pp. 284-287, March 1974.
The Viterbi algorithm (MLSE) minimizes the probability of sequence (word) error
- it does not necessarily minimize the probability of bit (symbol) error.
Objective : To derive an optimal decoding method for linear codes which
minimizes the symbol error probability.
General problem:
Estimating the a posteriori probabilities (APP) of
1. The states and
2. The transitions of a
Markov source observed through a noisy discrete memoryless channel (DMC).
The source is assumed to be a "discrete-time finite-state" Markov process
(e.g. a convolutional encoder, or a finite state machine - FSM).
M states, m = 0, 1, ..., M-1.
The state of the source at time t is denoted by S_t and its output by X_t.
A state sequence from t to t′: S_t^t′ = S_t, S_{t+1}, ..., S_t′.
The output sequence: X_t^t′ = X_t, X_{t+1}, ..., X_t′.
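As a concrete toy instance (my own choice, not from the notes), consider a 2-state source whose output bit is the input XOR the previous state, and whose next state is that output bit. Running it produces the state sequence S_1^t and output sequence X_1^t:

```python
# Hypothetical toy Markov source (illustrative, not from the notes):
# 2 states; output X_t = input_t XOR S_{t-1}; next state S_t = X_t.

def run_source(inputs, s0=0):
    """Return the state sequence S_1..S_t and output sequence X_1..X_t."""
    states, outputs = [], []
    s = s0  # S_0 = 0, matching the convention used later in the notes
    for u in inputs:
        x = u ^ s      # output depends on the previous state and the input
        s = x          # next state equals the output bit
        states.append(s)
        outputs.append(x)
    return states, outputs

S, X = run_source([1, 0, 1, 1])
print(S)  # state sequence S_1^4 -> [1, 1, 0, 1]
print(X)  # output sequence X_1^4 -> [1, 1, 0, 1]
```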
[Figure: MARKOV SOURCE → DISCRETE MEMORYLESS CHANNEL → RECEIVER]
The state transitions of the Markov source are governed by the transition probabilities
p_t(m | m′) = Pr{S_t = m | S_{t-1} = m′}
and the outputs by the probabilities
q_t(X | m′, m) = Pr{X_t = X | S_{t-1} = m′; S_t = m},
where X belongs to some finite discrete alphabet.
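For a time-invariant source these probabilities are just tables. A minimal sketch (the 2-state machine and equiprobable inputs are my assumptions) tabulates p_t(m | m′) and q_t(X | m′, m) for a source whose output and next state both equal input XOR state:

```python
# Illustrative sketch: probability tables for an assumed 2-state source
# driven by equiprobable input bits, with output x = u XOR m_prev and
# next state m = x. Names and the machine itself are my choices.

M = 2  # number of states

# p[m_prev][m] = Pr{S_t = m | S_{t-1} = m_prev}
p = [[0.0] * M for _ in range(M)]
# q[m_prev][m][x] = Pr{X_t = x | S_{t-1} = m_prev, S_t = m}
q = [[[0.0] * 2 for _ in range(M)] for _ in range(M)]

for m_prev in range(M):
    for u in (0, 1):            # equiprobable inputs: each contributes 1/2
        x = u ^ m_prev          # output symbol
        m = x                   # next state
        p[m_prev][m] += 0.5
        q[m_prev][m][x] = 1.0   # output is deterministic given the transition

print(p)  # [[0.5, 0.5], [0.5, 0.5]]
```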
The Markov source starts in the initial state S_0 = 0 and produces an output sequence X_1^τ, ending in the terminal state S_τ = 0.
X_1^τ is the input to a noisy DMC whose output is the sequence Y_1^τ = Y_1, Y_2, ..., Y_τ.
The transition probabilities of the DMC are defined by R(·|·), so that for all 1 ≤ t ≤ τ
Pr{Y_1^t | X_1^t} = ∏_{j=1}^{t} R(Y_j | X_j).
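The factorization says the block likelihood is a product of per-symbol channel terms. A short sketch, taking a binary symmetric channel (my choice of DMC) with crossover probability eps:

```python
# Memoryless-channel factorization, sketched for an assumed binary
# symmetric channel: R(y|x) = 1 - eps if y == x, else eps.

def R(y, x, eps=0.1):
    """Per-symbol DMC transition probability R(y | x) for a BSC."""
    return 1 - eps if y == x else eps

def block_likelihood(Y, X, eps=0.1):
    """Pr{Y_1^t | X_1^t} = product over j of R(Y_j | X_j)."""
    prob = 1.0
    for y, x in zip(Y, X):
        prob *= R(y, x, eps)   # memoryless: per-symbol terms multiply
    return prob

print(block_likelihood([1, 1, 0], [1, 0, 0]))  # 0.9 * 0.1 * 0.9 ≈ 0.081
```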
The objective of the decoder is to examine Y_1^τ and estimate the APP of the states and the transitions of the Markov source, i.e., the conditional probabilities
Pr{S_t = m | Y_1^τ} = Pr{S_t = m; Y_1^τ} / Pr{Y_1^τ} (1)
Pr{S_{t-1} = m′; S_t = m | Y_1^τ} = Pr{S_{t-1} = m′; S_t = m; Y_1^τ} / Pr{Y_1^τ} (2)
Graphical interpretation:
State diagram: nodes are the states; branches are the transitions with nonzero probability.
Trellis diagram: index the states with both the time index t and the state index m ⇒ a "trellis", which shows the time progression of the state sequences.
For every state sequence S_1^τ there is a unique path through the trellis, and vice versa.
If the Markov source is time-varying, then we can no longer represent it by a state-transition diagram; however, we can still construct a trellis for its state sequences.
Associated with each node in the trellis is the corresponding APP Pr{S_t = m | Y_1^τ}, and associated with each branch is the corresponding APP Pr{S_{t-1} = m′; S_t = m | Y_1^τ}.
The objective of the decoder is to examine Y_1^τ and compute these APPs.
It is easier to derive the joint probabilities
λ_t(m) = Pr{S_t = m; Y_1^τ} and
σ_t(m′, m) = Pr{S_{t-1} = m′; S_t = m; Y_1^τ}.
For a given Y_1^τ, Pr{Y_1^τ} is a constant, so we can divide λ_t(m) and σ_t(m′, m) by Pr{Y_1^τ}. Since the source terminates in S_τ = 0, Pr{Y_1^τ} equals λ_τ(0), which is available from the decoder. Alternatively, we can normalize λ_t(m) and σ_t(m′, m) to sum to 1, obtaining the same result.
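Both routes give identical APPs, because summing λ_t(m) over m at any t also yields Pr{Y_1^τ}. A tiny numeric illustration (the λ values are made up):

```python
# Illustrative only: hypothetical joint probabilities lambda_t(m), m = 0..3.
# Dividing by Pr{Y_1^tau} and normalizing to sum to 1 coincide, since
# sum_m lambda_t(m) = Pr{Y_1^tau} for any t.

lam_t = [0.006, 0.002, 0.004, 0.008]    # made-up lambda_t(m) values
pr_Y = sum(lam_t)                        # Pr{Y_1^tau}

app_divide = [v / pr_Y for v in lam_t]          # divide by Pr{Y_1^tau}
app_normalize = [v / sum(lam_t) for v in lam_t]  # normalize to sum to 1

print(app_divide)  # identical to app_normalize; sums to 1
```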
⇒ Derive a method for obtaining the probabilities λ_t(m) and σ_t(m′, m).
Define the probability functions:
α_t(m) = Pr{S_t = m; Y_1^t}
β_t(m) = Pr{Y_{t+1}^τ | S_t = m}
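The recursions for α and β are derived in the pages that follow this excerpt; as a hedged preview, here is a minimal sketch of the standard BCJR forward-backward computation, using an assumed 2-state source (uniform transitions, output equal to the next state) observed through a BSC, and combining the two functions via λ_t(m) = α_t(m)·β_t(m):

```python
# Sketch of how alpha_t and beta_t combine (standard BCJR recursions;
# the 2-state source and the BSC are my illustrative assumptions).

EPS = 0.1   # assumed BSC crossover probability
M = 2       # states; every transition m' -> m here emits x = m

def gamma(y, m_prev, m):
    # gamma_t(m',m) = p_t(m|m') * q_t(x|m',m) * R(y|x); for this toy
    # source every transition has p = 0.5 and emits x = m deterministically.
    x = m
    return 0.5 * ((1 - EPS) if y == x else EPS)

def bcjr_state_apps(Y):
    T = len(Y)
    # Forward pass: alpha_t(m) = sum_{m'} alpha_{t-1}(m') * gamma_t(m',m)
    alpha = [[0.0] * M for _ in range(T + 1)]
    alpha[0][0] = 1.0                       # initial state S_0 = 0
    for t in range(1, T + 1):
        for m in range(M):
            alpha[t][m] = sum(alpha[t - 1][mp] * gamma(Y[t - 1], mp, m)
                              for mp in range(M))
    # Backward pass: beta_t(m) = sum_{m'} beta_{t+1}(m') * gamma_{t+1}(m,m')
    beta = [[0.0] * M for _ in range(T + 1)]
    beta[T] = [1.0] * M                     # no tail constraint in this sketch
    for t in range(T - 1, -1, -1):
        for m in range(M):
            beta[t][m] = sum(beta[t + 1][mn] * gamma(Y[t], m, mn)
                             for mn in range(M))
    # lambda_t(m) = alpha_t(m) * beta_t(m); normalize to obtain the APPs
    apps = []
    for t in range(1, T + 1):
        lam = [alpha[t][m] * beta[t][m] for m in range(M)]
        s = sum(lam)
        apps.append([v / s for v in lam])
    return apps

print(bcjr_state_apps([1, 1, 0]))  # one normalized APP row per time step
```

With this fully connected toy trellis the state APP at time t tracks the channel likelihood of Y_t alone; a real encoder's trellis restricts the transitions, and the forward/backward sums then spread evidence across time.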