Robust Solutions of Uncertain Linear Programs
A. Ben-Tal and A. Nemirovski 1)
Abstract
We treat in this paper Linear Programming (LP) problems with uncertain data. The
focus is on uncertainty associated with hard constraints: those which must be satisfied,
whatever is the actual realization of the data (within a prescribed uncertainty set). We
suggest a modeling methodology whereby an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to
obtain robust solutions of an uncertain LP problem via solving the corresponding explic-
itly stated convex RC program. In particular, it is shown that the RC of an LP with
ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic
program, which can be solved in polynomial time.
Keywords: linear programming, data uncertainty, robustness, convex programming, interior-point
methods
1 Introduction
The data A, b associated with a linear program
min { c^T x | Ax ≥ b }   (1)
are “uncertain” to some degree in most real world problems 2). In many models the uncertainty is
ignored altogether, and a representative nominal value of the data is used (e.g., expected values). The
classical approach in Operations Research/Management Science to deal with uncertainty is Stochastic
Programming (SP) (see, e.g., [6, 13] and references therein). But even in this approach constraints
may be violated, with certain penalty (this is the case for SP with recourse [6, 9], Scenario optimization
[14], Entropic Penalty methods [1]) or with certain probability (chance constraints). In the dominating
penalty approach, even when the random variables are degenerate (deterministic), the corresponding
SP model does not necessarily recover the original LP constraints, but only a relaxation of these
constraints. Thus, although this was not stated explicitly in the past, SP treats in fact mainly soft
constraints. These remarks apply also to the recent scenario-based penalty approach of Mulvey,
Vanderbei and Zenios [11].
In this paper we study uncertainty associated with hard constraints, i.e., those which must be
satisfied whatever is the realization of the data (A, b) within, of course, a reasonable prescribed “un-
certainty set” U. Feasibility of a vector x is thus interpreted as
Ax ≥ b ∀(A, b) ∈ U. (2)
Consequently, we call the problem
min { c^T x | Ax ≥ b ∀(A, b) ∈ U }   (3)
1) The research was partly supported by The Fund for the Promotion of Research at the Technion and by the Israel Ministry of Science grant # 9636-1-96.
2) If the vector c is also uncertain, we could look at the equivalent formulation of (1): min_{x,t} { t : c^T x ≤ t, Ax ≥ b }, and thus without loss of generality we may restrict the uncertainty to the constraints only.
the robust counterpart of the uncertain LP problem (1), and we call a vector x* solving (3) a robust
solution of the uncertain problem.
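The epigraph observation in footnote 2) — that uncertainty in the objective vector c can always be moved into the constraints — is easy to check numerically. The sketch below uses invented data and scipy's `linprog`; it is an illustration, not part of the paper's development.

```python
# Numerical check of the epigraph reformulation from footnote 2).
# All data here are made up for illustration.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([1.0, 1.5])

# Original form: min c^T x  s.t.  Ax >= b, x >= 0
# (linprog expects A_ub x <= b_ub, so we negate).
res1 = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# Epigraph form in (x, t): min t  s.t.  c^T x - t <= 0, Ax >= b, x >= 0.
A_ub = np.vstack([np.hstack([-A, np.zeros((2, 1))]),  # -Ax <= -b
                  np.append(c, -1.0)])                # c^T x - t <= 0
b_ub = np.append(-b, 0.0)
res2 = linprog(np.array([0.0, 0.0, 1.0]), A_ub=A_ub, b_ub=b_ub,
               bounds=[(0, None), (0, None), (None, None)])

# The two optimal values coincide, so objective uncertainty can be
# treated as constraint uncertainty without loss of generality.
```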
No underlying stochastic model of the data is assumed to be known (or even to exist), although
such knowledge may be of use to obtain reasonable uncertainty sets (see Section 4).
Dealing with uncertain hard constraints is perhaps a novelty in Mathematical Programming; to
the best of our knowledge, the only previous example related to this question is due to A.L. Soyster
[16] (see below). The issue of hard uncertain constraints, however, is not a novelty for Control Theory,
where it is a well-studied subject forming the area of Robust Control (see, e.g., [17] and references
therein).
The approach reflected by (3) may look at first glance too “pessimistic”: the point is that
there are indeed situations in reality when an applicable solution must be feasible for all realizations
of the data, and even a small violation of the constraints cannot be tolerated. We have discussed
such a situation elsewhere, for problems of designing engineering structures (e.g., bridges), see [2]. In
these problems, ignoring even small changes in the forces acting on the structure may cause “violent”
displacements and result in a severely unstable structure. This example is not from the world of LP,
but we can easily imagine similar situations in LP models. Consider, e.g., a chemical plant which
takes raw materials, decomposes them into components and then recombines the components to get
the final product (there could be several “decomposition – recombination” stages in the production
process). The corresponding LP model includes the inequality constraints expressing the fact that
when recombining the intermediate components, you cannot use more than what is given by the
preceding decomposition phase, and these constraints indeed are “hard”. If, as is normally the case,
the content of the components in raw materials is “uncertain” and/or the yield of the decomposition
process depends on uncontrolled factors, we end up with an LP program with uncertain data in hard
constraints.
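As a toy numerical illustration of this situation (every number below is invented and not taken from the paper), the hard material-balance constraint says that recombination cannot consume more of each component than decomposition yields; with an uncertain yield matrix, a plan that is feasible for the nominal yields can violate the hard constraint:

```python
import numpy as np

rng = np.random.default_rng(2)

# Nominal yields: units of component k obtained per unit of raw material r.
Y_nom = np.array([[0.60, 0.30],
                  [0.30, 0.50]])
# Fixed recipe: units of component k consumed per unit of final product p.
use = np.array([[0.55, 0.00],
                [0.00, 0.45]])

raw = np.array([10.0, 10.0])     # purchased raw materials
prod = np.array([16.0, 17.0])    # production plan

def feasible(Y):
    # hard material balance: components consumed <= components produced
    return np.all(use @ prod <= Y @ raw)

# The plan respects the balance for the nominal yields...
assert feasible(Y_nom)
# ...but modest yield perturbations can break the hard constraint:
violations = sum(not feasible(Y_nom + rng.uniform(-0.05, 0.05, size=Y_nom.shape))
                 for _ in range(1000))
```

Here `violations` counts sampled yield realizations under which the nominally feasible plan becomes infeasible — exactly the situation that motivates requiring feasibility for all data in the uncertainty set.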
As already mentioned, uncertain hard constraints in LP models were discussed (in a very
specific setting) by A.L. Soyster [16] (for further developments, see also [15, 7]). The case considered
in these papers is the one of “column-wise” uncertainty, i.e., the columns a_i of the constraint matrix in
the constraints Ax ≤ b, x ≥ 0 are known to belong to given convex sets K_i, and the linear objective
is minimized over those x for which

∑_{i=1}^n x_i a_i ≤ b   ∀(a_i ∈ K_i, i = 1, ..., n).   (4)
As is easily shown in [16], the constraints (4) are equivalent to a system of linear constraints

A* x ≤ b,  x ≥ 0,  where a*_ij = sup_{a_i ∈ K_i} (a_i)_j.   (5)
It turns out that the equivalence of (4) to the linear system (5) is specific to “column-wise”
uncertainty. As we shall see later, the general case is that of “row-wise” uncertainty (the one
where the rows of the constraint matrix are known to belong to given convex sets). In this latter case,
the robust counterpart of the problem is typically not an LP program. E.g., when the uncertainty
sets for the rows of A are ellipsoids, the robust counterpart turns out to be a conic quadratic problem.
Note also that the case of column-wise uncertainty is extremely conservative: the constraints (5) of the
robust counterpart correspond to the case when every entry in the constraint matrix is as “bad” (as
large) as it could be. At the same time, in the case of row-wise uncertainty the robust counterpart is
capable of reflecting the fact that generically the coefficients of the constraints cannot simultaneously
be at their worst values.
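Soyster's reduction (4) → (5) can be sketched numerically for the simplest column-wise case, boxes K_i = [lo_i, hi_i]: since x ≥ 0, the supremum in (5) is attained entrywise at the upper bound. All data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4

# Box uncertainty for the columns: K_i = {a_i : lo[:, i] <= a_i <= hi[:, i]}.
lo = rng.uniform(-1.0, 0.0, size=(m, n))
hi = lo + rng.uniform(0.1, 1.0, size=(m, n))

# For x >= 0 and box sets K_i, the supremum in (5) is attained entrywise
# at the upper bound, so Soyster's worst-case matrix is simply A* = hi.
A_star = hi

b = np.full(m, 5.0)
x = rng.uniform(0.0, 1.0, size=n)   # a candidate x >= 0

if np.all(A_star @ x <= b):
    # Then (4) holds: sum_i x_i a_i <= b for every choice of a_i in K_i.
    for _ in range(1000):
        A = rng.uniform(lo, hi)     # sample each column a_i from its box K_i
        assert np.all(A @ x <= b + 1e-9)
```

The loop only samples the boxes; the point of (5) is that the single worst-case check `A_star @ x <= b` already certifies all of them, which is exactly the "every entry at its worst value" conservatism noted above.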
The rest of the paper is organized as follows. Section 2 contains a formal definition of the “robust
counterpart” of an “uncertain Linear Programming program” (the one just defined, but in a
slightly more convenient form). Here we also answer some natural questions about the links between
this counterpart and the usual LP program corresponding to the “worst” realization of the data from
the uncertainty set U. Note that the robust counterpart problem (P_U) has a continuum of constraints,
i.e., is a semi-infinite problem, and as such it looks computationally intractable. In the last part
of Section 2 we discuss the crucial issue of which geometries of the uncertainty set U result in a
“computationally tractable” robust counterpart of (1). Since we have in mind large scale applications
of robust LP’s, we focus on geometries of the uncertainty set leading to an “explicit” robust counterpart
of nice analytical structure, one which can be solved by high-performance optimization algorithms (like
interior point methods). Specifically, in Section 3 we consider the case of our preferred structure,
where U is an intersection of finitely many ellipsoids, and derive for this case the explicit form of the
robust counterpart of (1), which turns out to be a conic quadratic problem. Section 4 contains a
simple portfolio selection example illustrating our robust solution and comparing it to the solution
obtained by the scenario-based approach of [11].
2 Robust counterpart of an uncertain Linear Programming program
2.1 The construction
For convenience and notational simplicity we choose to cast the LP problem in the form
(P)   min { c^T x | Ax ≥ 0, f^T x = 1 }.   (6)
The canonical LP program (1) in the Introduction can be obtained from (6) by the correspondence
c := (c; 0),   A := [A; −b],   f = (0, ..., 0, 1)^T.
In the formulation (6), the vectors c, f ∈ R^n are fixed data, and the uncertainty is associated only
with the m × n matrix A. The uncertainty set, of which A is a member, is a set U of m × n real
matrices. The robust counterpart of (6) is defined to be the optimization problem
(P_U)   min { c^T x | Ax ≥ 0 ∀A ∈ U; f^T x = 1 }.   (7)
Thus, a robust feasible (r-feasible for short) solution to the “robust counterpart” of (P) should, by
definition, satisfy all realizations of the constraints from the uncertainty set U, and a robust optimal
(r-optimal for short) solution to (P_U) is an r-feasible solution with the best possible value of the
objective. In the rest of this section we investigate the basic mathematical properties of program
(P_U), with emphasis on the crucial issue of its “computational tractability”.
From now on we fix certain LP data U, c, f and denote by P the corresponding “uncertain LP
program”, i.e., the family of all LP programs (P) with given c, f and some A ∈ U. Each program of
this type will be called an instance (or a realization) of the uncertain LP program.
Evidently, the robust counterpart (P_U) of an uncertain LP program remains unchanged when we
replace the uncertainty set U by its closed convex hull. Consequently, from now on we always assume
that U is convex and closed.
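The reason is that each constraint a^T x ≥ 0 is linear in the data, so feasibility over U implies feasibility over convex combinations of elements of U. A small numerical illustration (with invented scenario matrices, not data from the paper):

```python
# The constraints Ax >= 0 are linear in A, so r-feasibility over a finite
# set of matrices coincides with r-feasibility over points of its convex hull.
import numpy as np

rng = np.random.default_rng(3)
x0 = np.array([1.0, 2.0, -1.0])

# Build 5 scenario matrices whose rows all satisfy row @ x0 >= 0,
# so that x0 is r-feasible by construction.
vertices = []
for _ in range(5):
    A = rng.normal(size=(2, 3))
    A[A @ x0 < 0] *= -1.0      # flip offending rows
    vertices.append(A)

def r_feasible(x, scenarios):
    return all(np.all(A @ x >= 0) for A in scenarios)

# Random convex combinations of the vertices lie in conv U.
hull = [sum(w * A for w, A in zip(rng.dirichlet(np.ones(5)), vertices))
        for _ in range(200)]

assert r_feasible(x0, vertices) and r_feasible(x0, hull)
# For arbitrary x, feasibility over the vertices and over hull points agrees:
for _ in range(100):
    x = rng.normal(size=3)
    assert r_feasible(x, vertices) == r_feasible(x, vertices + hull)
```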
2.2 Robust counterpart and the “worst” LP program from P
The first issue concerning the robust counterpart of an uncertain LP is how “conservative” it is. From
the very beginning our approach is “worst case oriented”; but could (P_U) be even worse than the
“worst” instance from P? E.g.,

(A) Assume (P_U) is infeasible. Does it mean that there is an infeasible instance (P) ∈ P?

Further, assume that (P_U) is feasible, and its optimal value c* is finite. By construction, c* is
greater than or equal to the optimal value c*(P) of every problem instance (P) ∈ P. The question of
how conservative the robust counterpart is may now be posed as

(B) Assume that (P_U) is feasible with finite optimal value c*. Does it mean that there exists a
problem instance (P) ∈ P with c*(P) = c*?
A “gap” between the solvability properties of the instances of an uncertain LP and those of its
robust counterpart may indeed exist, as is demonstrated by the following simple example:
min x_1 + x_2
s.t.  a_11 x_1 + x_2 ≥ 1
      x_1 + a_22 x_2 ≥ 1
      x_1 + x_2 = 1
      x_1, x_2 ≥ 0

The uncertain elements are a_11 and a_22, and the uncertainty set is U = {a_11 + a_22 = 2, 1/2 ≤ a_11 ≤ 3/2}.
Here every problem instance clearly is solvable with optimal value 1 (if a_11 ≥ 1, then the optimal
solution is (1, 0), otherwise it is (0, 1)), while the robust counterpart, which clearly is
min x_1 + x_2
s.t.  (1/2) x_1 + x_2 ≥ 1
      x_1 + (1/2) x_2 ≥ 1
      x_1 + x_2 = 1
      x_1, x_2 ≥ 0
is infeasible.
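The gap can be verified mechanically (a sketch using scipy's `linprog`, not part of the paper): every instance is solvable with value 1, since the equality x_1 + x_2 = 1 forces the objective to equal 1 at any feasible point, yet the robust counterpart has no feasible point at all.

```python
import numpy as np
from scipy.optimize import linprog

def solve_instance(a11, a22):
    # min x_1 + x_2  s.t.  a11*x_1 + x_2 >= 1,  x_1 + a22*x_2 >= 1,
    #                      x_1 + x_2 = 1,  x_1, x_2 >= 0
    return linprog(c=[1.0, 1.0],
                   A_ub=[[-a11, -1.0], [-1.0, -a22]], b_ub=[-1.0, -1.0],
                   A_eq=[[1.0, 1.0]], b_eq=[1.0],
                   bounds=[(0, None), (0, None)])

# Every instance in U is solvable with optimal value 1 ...
for a11 in np.linspace(0.5, 1.5, 11):
    res = solve_instance(a11, 2.0 - a11)
    assert res.success and abs(res.fun - 1.0) < 1e-8

# ... while the robust counterpart (both coefficients at their joint
# worst value 1/2) is infeasible.
rc = solve_instance(0.5, 0.5)
assert not rc.success
```

Note that in each instance only one of a_11, a_22 can be below 1 (they sum to 2), but the robust counterpart puts both at 1/2 simultaneously — which is exactly why it is stricter than every instance.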
We are about to demonstrate that there are quite natural assumptions under which the indicated
gap disappears, and the answers to the above two questions (A) and (B) become positive, so that (P_U)
under these assumptions is not worse than the “worst” instance from P.
The main part of the aforementioned assumptions is that the uncertainty should be “constraint-
wise”. To explain this concept, let U_i be the set of all possible realizations of the i-th row of the
constraint matrix, i.e., the projection of U ⊂ R^{m×n} = R^n × ... × R^n onto the i-th direct factor of
the right hand side of the latter relation 3). We say that the uncertainty is constraint-wise if the
uncertainty set U is the direct product of the “partial” uncertainty sets U_i:

U = U_1 × U_2 × ... × U_m.
By construction, x is r-feasible if and only if

a^T x ≥ 0  ∀a ∈ U_i, ∀i;   f^T x = 1.   (8)
In other words, (P_U) remains unchanged when we extend the initial uncertainty set U to the direct
product Û = U_1 × ... × U_m of its projections on the sub-spaces of data of the different constraints; the
robust counterpart “feels” only the possible realizations of the i-th constraint, i = 1, ..., m, and does not
feel the dependencies (if any) between these constraints in the instances. Thus, given an arbitrary
uncertainty set U, we can always extend it to a constraint-wise uncertainty set resulting in the same
robust counterpart.
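This invariance is easy to see on a finite uncertainty set (a sketch with invented data; the paper itself works with general convex U): r-feasibility with respect to U and with respect to the product of its row projections coincide for every x.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# A finite uncertainty set U of 2x3 constraint matrices (invented data).
U = [rng.normal(size=(2, 3)) for _ in range(4)]

def r_feasible(x, scenarios):
    return all(np.all(A @ x >= 0) for A in scenarios)

# Constraint-wise extension: combine every realization of row 1 seen in U
# with every realization of row 2, i.e. U_hat = U_1 x U_2.
rows = [[A[i] for A in U] for i in range(2)]
U_hat = [np.vstack(pair) for pair in itertools.product(*rows)]

# The robust counterpart "feels" only the row projections, so
# r-feasibility with respect to U and to U_hat coincide for every x.
for _ in range(300):
    x = rng.normal(size=3)
    assert r_feasible(x, U) == r_feasible(x, U_hat)
```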
In view of the above remarks, when speaking about questions (A) and (B) it is quite natural to
assume constraint-wise uncertainty. An additional technical assumption needed to answer (A) and (B)
affirmatively is the following:
Boundedness Assumption. There exists a convex compact set Q ⊂ R^n which for sure contains the
feasible sets of all problem instances (P) ∈ P.
3) In order not to introduce both column and row vectors, we represent a row of an m × n matrix by a column vector.