Chapter 2
Random Processes and Random Fields
2.1 Introduction ................................................. 36
2.2 Probabilistic Description of Random Process .................. 37
2.2.1 First- and second-order statistics ......................... 37
2.2.2 Stationary random process .................................. 38
2.3 Ensemble Averages ............................................ 38
2.3.1 Autocorrelation and autocovariance functions ............... 38
2.3.2 Structure functions ........................................ 40
2.3.3 Basic properties ........................................... 40
2.4 Time Averages and Ergodicity ................................. 41
2.5 Power Spectral Density Functions ............................. 42
2.5.1 Riemann-Stieltjes integral ................................. 43
2.6 Random Fields ................................................ 45
2.6.1 Spatial covariance function ................................ 45
2.6.2 One-dimensional spatial power spectrum ..................... 46
2.6.3 Three-dimensional spatial power spectrum ................... 46
2.6.4 Structure function ......................................... 48
2.7 Summary and Discussion ....................................... 49
2.8 Worked Examples .............................................. 51
Problems ......................................................... 53
References ....................................................... 56
Overview: Because the open channel through which we propagate electro-
magnetic radiation is often considered a turbulent medium, we present a
brief review in this chapter of the main ideas associated with a random
field, which in general is a function of a vector spatial variable R and time
t. To begin, however, we start with the somewhat simpler concept of a
random process and then present a parallel treatment for a random field.
Fundamental in the study of random processes is the introduction of
ensemble averages, which are used to formulate mean values, correlation
functions, and covariance functions. The development of these statistics is
greatly simplified for a stationary process, which means that all statistics
only depend on time differences and not the specific time origin. From
a practical point of view, however, we usually consider just the weaker
condition of a stationary process in the wide sense, which demands only that
the mean and covariance be invariant under translations in time.
Whereas theoretical treatments of a random process ordinarily involve the
ensemble average, measurements of various statistics of a random process
make use of the long-time average. Nonetheless, if a random process is
ergodic, then we can equate long-time-average statistics to ensemble
averages.
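The equivalence just described can be illustrated numerically. In the sketch below, a first-order autoregressive process serves as an assumed example of an ergodic process (it is not a process defined in this chapter): the long-time average of a single realization and the ensemble average over many realizations both approach the true mean.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
alpha = 0.8  # AR(1) parameter; an assumed, illustrative ergodic process

def ar1(n_steps, n_realizations):
    """Generate realizations of a zero-mean, unit-variance AR(1) process."""
    x = np.empty((n_realizations, n_steps))
    x[:, 0] = rng.standard_normal(n_realizations)
    for k in range(1, n_steps):
        x[:, k] = (alpha * x[:, k - 1]
                   + np.sqrt(1 - alpha**2) * rng.standard_normal(n_realizations))
    return x

# Long-time average of one realization vs. ensemble average at a fixed time.
time_avg = ar1(200_000, 1).mean()             # one sample path, averaged over time
ensemble_avg = ar1(50, 20_000)[:, -1].mean()  # many paths, averaged at one instant

print(time_avg, ensemble_avg)  # both near the true mean, 0
```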
In addition to mean values, correlation functions, and covariance func-
tions, we also introduce the notions of structure function and power spectral
density. Structure functions, which involve averages of squared differences,
are widely used in turbulence studies, particularly if the random process is
not stationary but has stationary increments. The power spectral density is
simply the Fourier transform of the covariance function and, consequently,
contains the same information in a different form.
Last, our treatment of random fields is virtually identical to that of a random
process, but there are some subtle differences between the two. For example,
the notion of "statistical homogeneity" is the spatial counterpart of temporal
"stationarity." It is also common to assume that a random field satisfies
the additional property of isotropy, which means that the random field
statistics depend only on the scalar distance between spatial points.
2.1 Introduction
We assume in this chapter that the reader is familiar with the idea of a random
variable and the basics of probability theory. A natural generalization of the
random variable concept is that of random process. A random process, also
called a stochastic process, is a collection of time functions and an associated
probability description [1–4]. The entire collection of such functions is called
an ensemble. Ordinarily, we represent any particular member of the ensemble
by simply x(t), called a sample function or realization of the random process.
For a fixed value of time, say t_1, the quantity x_1 = x(t_1) can then be
interpreted as a random variable.
A continuous random process is one in which the random variables x_1, x_2, ...,
can assume any value within a specified range of possible values. By contrast, a
discrete random process is one in which the random variables can assume only
certain isolated values (possibly infinite in number). Here we are concerned
only with continuous random processes.
One of the most common random processes occurring in engineering
applications is random noise, e.g., a “randomly” fluctuating voltage or current at
the input to a receiver that interferes with the reception of a radio or radar
signal, or the current through a photoelectric detector, and so on. Although
many treatments are limited to random processes of time t, we will find it necessary
to extend these ideas to the notion of a random field, which in general is a function
of both time t and space R = (x, y, z). Atmospheric wind velocity, temperature,
and index of refraction fluctuations are all examples of a random field important to
optical wave propagation.
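The ensemble picture described above lends itself to a short numerical sketch. Here a simple recursive low-pass filter acting on Gaussian noise is chosen purely as an illustrative process (none of the names or parameters below come from the text): each row is one sample function x(t), and fixing a time t_1 and reading down the ensemble yields samples of the random variable x_1 = x(t_1).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Build an ensemble of sample functions x(t): each row is one realization
# of low-pass-filtered Gaussian noise (an illustrative choice of process).
n_realizations, n_samples = 500, 1000
noise = rng.standard_normal((n_realizations, n_samples))

# First-order recursive filter to introduce temporal correlation;
# the scaling keeps the marginal variance equal to 1 at every time.
alpha = 0.95
ensemble = np.empty_like(noise)
ensemble[:, 0] = noise[:, 0]
for k in range(1, n_samples):
    ensemble[:, k] = alpha * ensemble[:, k - 1] + np.sqrt(1 - alpha**2) * noise[:, k]

# Fixing a time t_1 and looking "down" the ensemble gives a random variable:
x1 = ensemble[:, 250]       # samples of the random variable x_1 = x(t_1)
print(x1.mean(), x1.std())  # sample mean near 0, sample std near 1
```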
2.2 Probabilistic Description of Random Process
If we imagine "sampling" the random process x(t) at a finite number of times
t_1, t_2, ..., t_n, then we obtain the collection of random variables
x_k = x(t_k), k = 1, 2, ..., n. The probability measure associated with these
random variables is described by the joint probability density function (PDF)
of order n,

p_x(x_1, t_1; x_2, t_2; \ldots; x_n, t_n).
In principle, we can develop the theory of a continuous random process by
describing the joint probability density function of all orders. However, this
is generally an impossible task so we usually settle for only first- and/or
second-order distributions.
2.2.1 First- and second-order statistics
The quantity defined by the probability function

F_x(x, t) = \Pr[x(t) \le x]                                   (1)

is called the first-order distribution function of the random process x(t). The
corresponding first-order PDF is

p_x(x, t) = \frac{\partial F_x(x, t)}{\partial x}.            (2)

Similarly, the second-order distribution function and corresponding PDF are
defined, respectively, by

F_x(x_1, t_1; x_2, t_2) = \Pr[x(t_1) \le x_1, x(t_2) \le x_2],        (3)

p_x(x_1, t_1; x_2, t_2) = \frac{\partial^2 F_x(x_1, t_1; x_2, t_2)}{\partial x_1 \partial x_2}.   (4)
We note that F_x(x_1, t_1; \infty, t_2) = F_x(x_1, t_1) and

p_{x_1}(x_1, t_1) = \int_{-\infty}^{\infty} p_x(x_1, t_1; x_2, t_2) \, dx_2.     (5)
Conditional PDFs and distributions associated with random processes can be
defined in much the same manner as done for random variables. For example,
the conditional PDF of x_2 = x(t_2), given the process took on value x_1 at
time t_1, is defined by

p_{x_2}(x_2, t_2 \mid x_1, t_1) = \frac{p_x(x_1, t_1; x_2, t_2)}{p_{x_1}(x_1, t_1)}.     (6)
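Equation (5) says that integrating the second-order PDF over x_2 recovers the first-order (marginal) PDF. As a quick numerical check, one can carry this out for a zero-mean bivariate Gaussian, whose marginals are standard normal; the Gaussian is an illustrative assumption here, since the marginalization holds for any joint PDF.

```python
import numpy as np

# Joint PDF of a zero-mean, unit-variance bivariate Gaussian with correlation
# rho -- a convenient closed-form stand-in for p_x(x1, t1; x2, t2).
rho = 0.6

def joint_pdf(x1, x2):
    norm = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho**2))
    q = (x1**2 - 2.0 * rho * x1 * x2 + x2**2) / (1.0 - rho**2)
    return norm * np.exp(-0.5 * q)

def marginal_pdf(x1):
    # Standard normal marginal of the bivariate Gaussian above.
    return np.exp(-0.5 * x1**2) / np.sqrt(2.0 * np.pi)

# Integrate the joint PDF over x2 on a fine grid; by Eq. (5) this should
# reproduce the first-order PDF p_{x1}(x1, t1) at each chosen x1.
x2 = np.linspace(-10.0, 10.0, 4001)
dx = x2[1] - x2[0]
errors = [abs(np.sum(joint_pdf(a, x2)) * dx - marginal_pdf(a))
          for a in (-1.0, 0.0, 0.5, 2.0)]
print(max(errors))  # tiny: the numerical marginal matches Eq. (5)
```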
2.2.2 Stationary random process
Suppose the first-order PDF does not depend on time, i.e., p_x(x, t) = p_x(x), and,
further, that the second-order PDF has the form

p_x(x_1, t_1; x_2, t_2) = p_x(x_1, x_2; t_2 - t_1)            (7)

for all t_1 and t_2. That is, the second-order or joint PDF depends only on the
time difference \tau = t_2 - t_1 but not on the specific times t_1 and t_2. If all
marginal and joint PDFs depend only on the time difference \tau = t_2 - t_1, but
not on the specific time origin, we have what is called a stationary random
process. Such a process can also be described as one in which its moments are
invariant under translations in time.
Truly stationary random processes do not exist in nature because there must be
some finite time at which a process is stopped. Nonetheless, in some applications
the process will not change significantly during the finite observation time, so we
can treat it like a stationary process. Of course, if any of the PDFs associated with a
random process do change with the choice of time origin, we say that process is
nonstationary.
2.3 Ensemble Averages
In the following discussion we will use the bracket notation ⟨·⟩ to denote
an ensemble average of the quantity inside the brackets. We define the
mean, also called the expected value or ensemble average, of the random
process x(t) by

\langle x(t) \rangle = m(t) = \int_{-\infty}^{\infty} x \, p_x(x, t) \, dx,     (8)
where we are emphasizing that the mean value in general may depend on time.
Similarly, the variance, defined by

\sigma_x^2(t) = \langle x^2(t) \rangle - m^2(t) = \int_{-\infty}^{\infty} [x - m(t)]^2 \, p_x(x, t) \, dx,     (9)

is also a function of time in the general case. However, if the random
process is stationary, then its mean value and variance are both independent of
time. In this latter case we write the mean as simply \langle x(t) \rangle = m and
the variance as \sigma_x^2.
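The distinction between time-independent and time-dependent ensemble statistics can be made concrete numerically. In the sketch below, white Gaussian noise stands in for a stationary process and a random walk for a nonstationary one; both are illustrative choices, not processes defined in the text.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_real, n_t = 2000, 400

# White Gaussian noise: the same distribution at every time instant, so
# its ensemble mean and variance do not depend on t.
white = rng.standard_normal((n_real, n_t))

# A random walk (cumulative sum of the noise) is nonstationary: its
# ensemble variance grows with t.
walk = np.cumsum(rng.standard_normal((n_real, n_t)), axis=1)

# Ensemble averages: average over realizations (axis 0) at each fixed time.
var_white = white.var(axis=0)
var_walk = walk.var(axis=0)

print(var_white[10], var_white[300])  # both near 1
print(var_walk[10], var_walk[300])    # grows roughly linearly with t
```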
2.3.1 Autocorrelation and autocovariance functions
Let x_1 and x_2 denote random variables taken from a real stationary random process
x(t) at times t_1 and t_2, respectively. We define the autocorrelation function
(also called simply the correlation function) by the expression

R_x(t_1, t_2) \equiv R_x(\tau) = \langle x(t_1) x(t_2) \rangle
  = \iint_{-\infty}^{\infty} x_1 x_2 \, p_x(x_1, x_2; \tau) \, dx_1 \, dx_2,     (10)
where \tau = t_2 - t_1. If x(t) is a complex stationary random process, then we define
the correlation function by R_x(\tau) = \langle x(t_1) x^*(t_2) \rangle, where the
asterisk denotes the complex conjugate of the quantity.
Similarly, the autocovariance function (or covariance function) is defined in
general by the ensemble average

B_x(t_1, t_2) = \langle [x(t_1) - \langle x(t_1) \rangle][x(t_2) - \langle x(t_2) \rangle] \rangle
              = \langle x(t_1) x(t_2) \rangle - m(t_1) m(t_2),     (11)

from which, for a stationary process, we deduce

B_x(\tau) = R_x(\tau) - m^2.     (12)
Hence, when the mean of the random process is zero, the correlation and covariance
functions are identical. Also, when t_1 = t_2 (\tau = 0), the covariance function (12)
reduces to the variance (9) of the random variable x. It is customary in many cases
to consider the normalized covariance function defined by the quotient

b_x(\tau) = \frac{B_x(\tau)}{B_x(0)}.     (13)

Because the maximum of the covariance function occurs at \tau = 0 [see Eq. (22)
below], it follows that

-1 \le b_x(\tau) \le 1.     (14)
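As an illustrative check of Eqs. (13) and (14), the following sketch uses a first-order autoregressive process (an assumed example, not one introduced in the text) whose true normalized covariance is alpha^|τ|. Estimating b_x(τ) by ensemble averaging shows b_x(0) = 1, |b_x(τ)| ≤ 1, and decay toward zero at large lag.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n_real, n_t = 5000, 200
alpha = 0.9  # AR(1) parameter; for this process b_x(tau) = alpha**|tau|

# Ensemble of a zero-mean, wide-sense stationary AR(1) process,
# initialized in its stationary (unit-variance) distribution.
x = np.empty((n_real, n_t))
x[:, 0] = rng.standard_normal(n_real)
for k in range(1, n_t):
    x[:, k] = alpha * x[:, k - 1] + np.sqrt(1.0 - alpha**2) * rng.standard_normal(n_real)

# Ensemble-average estimate of B_x(tau) = <x(t1) x(t1 + tau)> (the mean is
# zero here), then the normalized covariance of Eq. (13).
t1 = 100
lags = np.arange(30)
B = np.array([np.mean(x[:, t1] * x[:, t1 + lag]) for lag in lags])
b = B / B[0]

print(b[0])         # exactly 1 by construction
print(b[1], b[10])  # near alpha and alpha**10; b decays toward zero at large lag
```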
To be considered a strictly stationary process, we require all marginal and
joint density functions to be independent of the choice of time origin. However,
this requirement is more stringent than necessary in most practical situations.
If all we know is that the mean value \langle x(t) \rangle is constant and the
covariance function B_x(\tau) depends only on the time interval \tau = t_2 - t_1,
we say the random process x(t) is stationary in the wide sense. Strictly stationary
processes are automatically wide-sense stationary, but the converse is not
necessarily true. For most wide-sense stationary processes, it is usually the
case that

B_x(\tau) \to 0, \quad |\tau| \to \infty.     (15)

For practical reasons, it is common in applications to assume the given random
process is stationary, at least in the wide sense. That is the approach we
generally take here.