An Efficient Global Optimization Algorithm for Nonlinear Sum-of-Ratios Problem

Yun-Chol Jong
Center of Natural Sciences, University of Science, Pyongyang, DPR Korea
E-mail: yuncholjong@yahoo.com
Address: Gwahak-1 dong, Unjong district, Pyongyang, DPR Korea

May 3, 2012
Abstract

This paper presents a practical method for finding the globally optimal solution to the nonlinear sum-of-ratios problem arising in image processing, engineering and management. Unlike traditional methods, which may get trapped in local minima due to the non-convex nature of this problem, our approach provides a theoretical guarantee of global optimality. Our algorithm is based on solving a sequence of convex programming problems and has a global linear and local superlinear/quadratic rate of convergence. The practical efficiency of the algorithm is demonstrated by numerical experiments on synthetic data.

2010 Mathematics Subject Classification: 90C26, 90C32, 65K05
Keywords: Fractional programming, non-convex optimization, global optimization algorithm, sum-of-ratios problem, guaranteed global optimality.
1 Introduction
The sum-of-ratios problem, which is to minimize (maximize) a sum of several fractional func-
tions subject to convex constraints, is a non-convex optimization problem that is difficult to
solve by traditional optimization methods. The problem arises in many applications such as
optimization of the average element shape quality in the finite element method, computer
graphics and management ([2],[3],[4]). In [4], many problems of projective geometry, including multiview triangulation, camera resectioning and homography estimation, were formulated as sum-of-ratios problems, and a branch-and-bound method relying on recent developments in fractional programming and the theory of convex underestimators ([6],[7],[8],[9],[14],[15]) was proposed to find their global solutions. In the method of [4], the number of variables is twice the number of fractional functions involved in the sum, and a second-order cone programming problem must be solved in each iteration to obtain a lower bound on the optimal value. Their algorithm is provably optimal; that is, given any tolerance ε, if the optimization problem is feasible, the algorithm returns a solution that is at most ε away from the global optimum. However, the branch-and-bound method is computationally expensive, converges slowly, and a reasonable branching strategy is not easy to find.
Recently there has been some progress towards finding the global solution to a few of these optimization problems ([16],[17]). However, the resulting algorithm is numerically unstable, computationally expensive, and does not generalize to more views or harder problems like resectioning. In [18], linear matrix inequalities were used to approximate the global optimum, but no guarantee of actually obtaining the global optimum is given, and there are unsolved problems concerning numerical stability. Robustification using the L1-norm was presented in [19], but that approach is restricted to the affine camera model.
In this paper, an efficient global optimization algorithm is presented which transforms the sum-of-ratios problem into an equivalent parametric convex programming problem and finds the global solution successfully.
2 Equivalent parametric convex programming
The sum-of-ratios problem seeks to minimize the sum of fractional functions subject to convex constraints, which is formulated as follows:
$$\min\ F(x)=\sum_{i=1}^{N}F_i(x),\quad \text{subject to } g_i(x)\le 0,\ i=1,\dots,m,\quad x\in\mathbb{R}^n \tag{2.1}$$
where $F_i(x)=\frac{f_i(x)}{h_i(x)}$, $i=1,\dots,N$, and $f_i(x)$, $g_i(x)$ and $-h_i(x)$ are twice continuously differentiable convex functions.
Let $X=\{x\in\mathbb{R}^n \mid g_i(x)\le 0,\ i=1,\dots,m\}$. It is assumed that $f_i(x)\ge 0$ and $h_i(x)>0$ for every $x\in X$, and that $\operatorname{int}X=\{x\in\mathbb{R}^n \mid g_i(x)<0,\ i=1,\dots,m\}\ne\emptyset$. Even with these restrictions the above problem is NP-complete [5].
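To make the setting concrete, the following is a minimal numerical sketch of an instance of (2.1) with $n=2$, $N=2$ and a single constraint. The particular functions f1, h1, f2, h2 and g below are hypothetical choices made only for illustration (they satisfy the convexity, nonnegativity and positivity assumptions on the ball $X=\{x:\|x\|^2\le 4\}$); SciPy's general-purpose SLSQP solver is used only to show that an off-the-shelf local method returns a stationary point with no global guarantee.

```python
# A toy sum-of-ratios instance of the form (2.1); the functions are hypothetical.
import numpy as np
from scipy.optimize import minimize

def f1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2 + 1.0   # convex, >= 1 everywhere
def h1(x): return 4.0 - 0.5 * (x[0] ** 2 + x[1] ** 2)   # concave, >= 2 on X
def f2(x): return x[0] ** 2 + (x[1] + 1.0) ** 2 + 0.5   # convex, >= 0.5 everywhere
def h2(x): return 3.0 - 0.25 * (x[0] ** 2 + x[1] ** 2)  # concave, >= 2 on X
def g(x):  return x[0] ** 2 + x[1] ** 2 - 4.0           # feasible set X: g(x) <= 0

def F(x):
    # Sum-of-ratios objective of (2.1); non-convex even though each piece is well behaved.
    return f1(x) / h1(x) + f2(x) / h2(x)

# A generic local NLP solver only certifies a stationary point of this non-convex
# problem; the point of the paper is to certify the global minimizer instead.
res = minimize(F, np.array([0.5, -0.5]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])
print("stationary point:", res.x, "objective value:", res.fun)
```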
It is easy to see that the problem (2.1) is equivalent to the following problem:
$$\min\ \sum_{i=1}^{N}\beta_i,\quad \text{subject to } F_i(x)\le\beta_i,\ i=1,\dots,N,\quad g_i(x)\le 0,\ i=1,\dots,m,\quad x\in\mathbb{R}^n \tag{2.2}$$
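The equivalence is the standard epigraph argument, stated here briefly for completeness rather than quoted from the paper: for any fixed feasible $x$, the best choice of the auxiliary variables is $\beta_i=F_i(x)$, so
$$\min_{x\in X,\ \beta}\Bigl\{\sum_{i=1}^{N}\beta_i \;:\; F_i(x)\le\beta_i,\ i=1,\dots,N\Bigr\}=\min_{x\in X}\sum_{i=1}^{N}F_i(x)=\min_{x\in X}F(x),$$
and the $x$-parts of the solution sets of (2.1) and (2.2) coincide.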
Lemma 2.1. If $(\bar x,\bar\beta)$ is a solution of problem (2.2), then there exist $\bar u_i$, $i=1,\dots,N$, such that $\bar x$ is a solution of the following problem for $u=\bar u$ and $\beta=\bar\beta$:
$$\min\ \sum_{i=1}^{N}u_i\bigl(f_i(x)-\beta_i h_i(x)\bigr),\quad \text{subject to } g_i(x)\le 0,\ i=1,\dots,m,\quad x\in\mathbb{R}^n \tag{2.3}$$
Moreover, $\bar x$ satisfies the following system of equations for $u=\bar u$ and $\beta=\bar\beta$:
$$u_i=\frac{1}{h_i(x)},\quad i=1,\dots,N \tag{2.4}$$
$$f_i(x)-\beta_i h_i(x)=0,\quad i=1,\dots,N \tag{2.5}$$
Proof. The constraint $F_i(x)\le\beta_i$ is equivalent to $f_i(x)-\beta_i h_i(x)\le 0$. Let us define the following function for problem (2.2):
$$L(x,\beta,w,u,v)=w\sum_{i=1}^{N}\beta_i+\sum_{i=1}^{N}u_i\bigl(f_i(x)-\beta_i h_i(x)\bigr)+\sum_{i=1}^{m}v_i g_i(x).$$
By the Fritz-John optimality condition (Theorem 4.2.8 of [1]), there exist $\bar w$, $\bar u=(\bar u_1,\dots,\bar u_N)$ and $\bar v=(\bar v_1,\dots,\bar v_m)$ such that
$$\frac{\partial L}{\partial x}=\sum_{i=1}^{N}\bar u_i\bigl(\nabla f_i(\bar x)-\bar\beta_i\nabla h_i(\bar x)\bigr)+\sum_{i=1}^{m}\bar v_i\nabla g_i(\bar x)=0 \tag{2.6}$$
$$\frac{\partial L}{\partial\beta_i}=\bar w-\bar u_i h_i(\bar x)=0,\quad i=1,\dots,N \tag{2.7}$$
$$\bar u_i\frac{\partial L}{\partial u_i}=\bar u_i\bigl(f_i(\bar x)-\bar\beta_i h_i(\bar x)\bigr)=0,\quad i=1,\dots,N \tag{2.8}$$
$$\bar v_i\frac{\partial L}{\partial v_i}=\bar v_i g_i(\bar x)=0,\quad i=1,\dots,m \tag{2.9}$$
$$g_i(\bar x)\le 0,\quad \bar v_i\ge 0,\quad i=1,\dots,m \tag{2.10}$$
$$f_i(\bar x)-\bar\beta_i h_i(\bar x)\le 0,\quad \bar u_i\ge 0,\quad i=1,\dots,N \tag{2.11}$$
$$\bar w\ge 0,\quad (\bar w,\bar u,\bar v)\ne(0,0,0) \tag{2.12}$$
Suppose that $\bar w=0$. Then, by (2.7), we have $\bar u=0$ because $h_i(x)>0$, $i=1,\dots,N$, for all $x\in X$. Hence, it follows from (2.6), (2.9), (2.10) and (2.12) that
$$\sum_{i\in I(\bar x)}\bar v_i\nabla g_i(\bar x)=0, \tag{2.13}$$
$$\sum_{i\in I(\bar x)}\bar v_i>0,\quad \bar v_i\ge 0,\quad i\in I(\bar x), \tag{2.14}$$
where $I(\bar x)=\{i \mid g_i(\bar x)=0,\ 1\le i\le m\}$. By the Slater condition, there exists a point $x^0$ such that
$$g_i(x^0)<0,\quad i=1,\dots,m. \tag{2.15}$$
Since $g_i(x)$, $i=1,\dots,m$, are convex, it follows from (2.15) that
$$\nabla g_i(\bar x)^{T}(x^0-\bar x)\le g_i(x^0)-g_i(\bar x)<0,\quad i\in I(\bar x) \tag{2.16}$$
Letting $d=x^0-\bar x$, from (2.16) and (2.14) we have $\Bigl(\sum_{i\in I(\bar x)}\bar v_i\nabla g_i(\bar x)\Bigr)^{T}d<0$, which contradicts (2.13). Thus, we have $\bar w>0$.
Denoting $\bar u/\bar w$ and $\bar v/\bar w$ by $\bar u$ and $\bar v$ again, respectively, we see that (2.7) is equivalent to (2.4), and (2.8) is equivalent to (2.5) because $\bar u_i>0$, $i=1,\dots,N$, by (2.4).
Given $u=\bar u$ and $\beta=\bar\beta$, conditions (2.6), (2.9) and (2.10) are exactly the KKT conditions for problem (2.3). Since problem (2.3) is a convex program for parameters $u>0$ and $\beta\ge 0$, the KKT conditions are also sufficient for optimality, and hence $\bar x$ is a solution of (2.3) for $u=\bar u$ and $\beta=\bar\beta$.
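Lemma 2.1 is the basis for the algorithm developed in the later sections of the paper (not included in this excerpt): for fixed $u>0$ and $\beta\ge 0$, problem (2.3) is convex, and conditions (2.4)–(2.5) tie the parameters to the current point. As a rough illustration only, the sketch below alternates between solving (2.3) with SciPy's SLSQP and resetting $(u,\beta)$ from (2.4)–(2.5) on the same hypothetical toy instance as before; this naive fixed-point loop is an expository simplification, not the parameter update the paper actually analyzes, and it inherits none of the paper's convergence guarantees.

```python
# Illustration only: alternate (i) the convex subproblem (2.3) for fixed (u, beta)
# with (ii) resetting (u, beta) via (2.4)-(2.5). This simple fixed-point loop is an
# expository assumption, not the update rule analyzed in the paper's later sections.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2 + 1.0      # convex numerators f_i >= 0
f2 = lambda x: x[0] ** 2 + (x[1] + 1.0) ** 2 + 0.5
h1 = lambda x: 4.0 - 0.5 * (x[0] ** 2 + x[1] ** 2)      # concave denominators h_i > 0 on X
h2 = lambda x: 3.0 - 0.25 * (x[0] ** 2 + x[1] ** 2)
g  = lambda x: x[0] ** 2 + x[1] ** 2 - 4.0              # convex constraint g(x) <= 0
fs, hs = [f1, f2], [h1, h2]
cons = [{"type": "ineq", "fun": lambda x: -g(x)}]

x = np.array([0.5, -0.5])
u    = np.array([1.0 / h(x) for h in hs])                # (2.4) at the current point
beta = np.array([f(x) / h(x) for f, h in zip(fs, hs)])   # (2.5) at the current point

for _ in range(100):
    # Subproblem (2.3): convex in x because u_i > 0, beta_i >= 0, f_i and -h_i are convex.
    obj = lambda x, u=u, beta=beta: sum(
        ui * (f(x) - bi * h(x)) for ui, bi, f, h in zip(u, beta, fs, hs))
    x = minimize(obj, x, method="SLSQP", constraints=cons).x
    u_new    = np.array([1.0 / h(x) for h in hs])
    beta_new = np.array([f(x) / h(x) for f, h in zip(fs, hs)])
    converged = max(np.abs(u_new - u).max(), np.abs(beta_new - beta).max()) < 1e-8
    u, beta = u_new, beta_new
    if converged:
        break

print("candidate x:", x, "objective F(x) = sum(beta):", float(beta.sum()))
```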
Remark 2.1. Consider the maximization problem
$$\max\ F(x)=\sum_{i=1}^{N}F_i(x),\quad \text{subject to } g_i(x)\le 0,\ i=1,\dots,m,\quad x\in\mathbb{R}^n$$
where $F_i(x)=\frac{f_i(x)}{h_i(x)}$, $i=1,\dots,N$, and $f_i(x)$, $-g_i(x)$ and $-h_i(x)$ are concave functions, and $f_i(x)\ge 0$, $h_i(x)>0$, $i=1,\dots,N$, in the feasible set $X$.