Solving High-Dimensional Multi-Objective Optimization Problems with Low Effective Dimensions∗
Hong Qian, Yang Yu
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China
Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, 210023, China
{qianh,yuy}@lamda.nju.edu.cn
Abstract
Multi-objective (MO) optimization problems require simultaneously optimizing two or more objective functions. An MO algorithm needs to find solutions that reach different optimal balances of the objective functions, i.e., the optimal Pareto front. High dimensionality of the solution space can therefore hurt MO optimization much more severely than single-objective optimization, an issue little addressed in previous studies. This paper proposes ReMO, a general, theoretically grounded yet simple approach that can scale current derivative-free MO algorithms to high-dimensional non-convex MO functions with low effective dimensions, using random embedding. We prove the conditions under which an MO function has a low effective dimension, and, for such functions, we prove that ReMO possesses the desirable properties of optimal Pareto front preservation, time complexity reduction, and rotation perturbation invariance. Experimental results indicate that ReMO is effective for optimizing high-dimensional MO functions with low effective dimensions, and is even effective for high-dimensional MO functions where all dimensions are effective but most have only a small and bounded effect on the function value.
Introduction
Solving sophisticated optimization problems plays an essen-
tial role in the development of artificial intelligence. In some
real-world applications, we need to simultaneously optimize
two or more objective functions instead of only one, which
leads to the progress of multi-objective (MO) optimization.
Let $f(x) = (f_1(x), \ldots, f_m(x))$ denote a multi-objective
function. In this paper, we assume that the optimization problems are deterministic, i.e., each call of $f$ returns the same function value for the same solution $x$. Furthermore, we focus on derivative-free optimization. That is, $f$ is regarded as a black-box function, and we can only perform the MO optimization based on sampled solutions and their function values. Other information, such as gradients, is not used and may not even be available. Since derivative-free optimization methods do not depend on gradients, they
∗This research was supported by the NSFC (61375061, 61333014, 61305067), the Jiangsu Science Foundation (BK20160066), and the Foundation for the Author of National Excellent Doctoral Dissertation of China (201451).
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
are suitable for a wide range of sophisticated real-world
optimization problems, such as non-convex functions, non-
differentiable functions, and discontinuous functions.
MO optimization has achieved many remarkable applications, such as in reinforcement learning (Moffaert and Nowé 2014), constrained optimization (Qian, Yu, and Zhou 2015a), and software engineering (Harman, Mansouri, and Zhang 2012; Minku and Yao 2013). Two reasons account for its successful applications. First, an MO algorithm can find solutions that reach different optimal balances of the objectives (e.g., performance and cost), which can satisfy different demands from different users.
Second, it has been shown that MO optimization can outperform single-objective optimization in some machine learning tasks (Li et al. 2014; Qian, Yu, and Zhou 2015b; 2015c).
Driven by the demand from real-world applications, and despite the hardness of MO optimization (e.g., the objectives often conflict with each other), derivative-free MO optimization methods have made substantial achievements. Well-known MO optimization methods, such as the improved strength Pareto evolutionary algorithm (SPEA2) (Zitzler, Laumanns, and Thiele 2001), region-based selection in evolutionary multi-objective optimization (PESA-II) (Corne et al. 2001), the non-dominated sorting genetic algorithm II (NSGA-II) (Deb et al. 2002), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) (Zhang and Li 2007), and the non-dominated neighbor immune algorithm (NNIA) (Gong et al. 2008), have been proposed and successfully applied in various applications.
Problem. Previous studies have shown that derivative-free MO methods are effective and efficient for MO functions in low-dimensional solution spaces. However, MO optimization methods may lose their power on high-dimensional MO functions, because the convergence rate is slow or the computational cost of each iteration is high in a high-dimensional solution space. Furthermore, high dimensionality of the solution space can hurt MO optimization much more severely than single-objective optimization, since an MO algorithm needs to find a set of solutions that reach different optimal balances of the objectives. Therefore, scalability becomes one of the main bottlenecks of MO optimization, which restricts its further application.
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Related Work. Recently, studies have emerged that focus on scaling derivative-free MO optimization algorithms to high-dimensional solution spaces, such as (Wang et al. 2015; Ma et al. 2016; Zhang et al. 2016). In (Wang et al. 2015), the algorithm is designed by analyzing the relationship between the decision variables and the objective functions. In (Ma et al. 2016), the relationship among the decision variables is analyzed in order to design the algorithm. In (Zhang et al. 2016), the problem of scalability is handled through decision variable clustering. Despite the remarkable empirical performance of these algorithms on high-dimensional MO optimization problems, almost all of them lack solid theoretical foundations. The questions of when and why these algorithms work need to be answered theoretically in order to guide the better design and application of high-dimensional MO algorithms. Compared with the theoretical progress for derivative-free high-dimensional single-objective optimization (Wang et al. 2013; Kaban, Bootkrajang, and Durrant 2013; Friesen and Domingos 2015; Kandasamy, Schneider, and Póczos 2015; Qian and Yu 2016), theoretically grounded MO methods are scarce and thus quite appealing. Furthermore, some existing approaches to scaling MO algorithms to high-dimensional MO problems are not general and are restricted to specific algorithms. It is desirable that more MO algorithms can possess such scalability.
On the other hand, it has been observed that, for a wide class of high-dimensional single-objective optimization problems, such as hyper-parameter optimization in machine learning tasks (Bergstra and Bengio 2012; Hutter, Hoos, and Leyton-Brown 2014), the objective function value is affected by only a few dimensions instead of all dimensions. We call the dimensions that affect the function value the effective dimensions. For high-dimensional single-objective optimization problems with low effective dimensions, the random embedding technique, which possesses a solid theoretical foundation, has been proposed (Wang et al. 2013; 2016). Due to its desirable theoretical properties, random embedding has been combined with several state-of-the-art derivative-free single-objective optimization methods and shown to be effective. Successful cases include Bayesian optimization (Wang et al. 2013; 2016), simultaneous optimistic optimization (Qian and Yu 2016), and the estimation of distribution algorithm (Sanyang and Kaban 2016). These successful examples suggest that we can extend the random embedding technique to high-dimensional MO functions with low effective dimensions while inheriting the theoretical merits of random embedding. Besides, it would be desirable if the extension of random embedding were suitable for any MO algorithm rather than only some special algorithms.
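To make the random embedding technique described above concrete, the following sketch (illustrative only; the function names are our own, not from the paper) wraps a high-dimensional function with a random Gaussian matrix so that any derivative-free optimizer can search a low-dimensional space instead:

```python
import numpy as np

def make_embedded_objective(f, D, d, rng):
    """Wrap f: R^D -> R so it can be optimized in a d-dimensional space
    via a random embedding. A has i.i.d. standard Gaussian entries."""
    A = rng.standard_normal((D, d))
    def g(y):
        # Map the low-dimensional point y into R^D and evaluate f there.
        return f(A @ y)
    return g, A

# Toy example: a 1000-dimensional function whose value depends only on
# the first two coordinates, i.e., its effective dimension is 2.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

rng = np.random.default_rng(0)
g, A = make_embedded_objective(f, D=1000, d=2, rng=rng)
# A derivative-free optimizer can now search over y in R^2 instead of R^1000.
```

The theory in (Wang et al. 2013; 2016) guarantees that, for such functions, the low-dimensional problem retains an optimum of the original one with probability 1.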
Our Contributions. This paper proposes ReMO, a general, theoretically grounded yet simple approach that can scale any derivative-free MO optimization algorithm to high-dimensional non-convex MO optimization problems with low effective dimensions using random embedding. ReMO performs the optimization by employing an arbitrary derivative-free MO optimization algorithm in a low-dimensional solution space, where the function value of a solution is evaluated by embedding it into the original high-dimensional solution space. Theoretical and experimental results verify the effectiveness of ReMO for scalability. The contributions of this paper are:
• Disclosing a sufficient and necessary condition, as well as a sufficient condition, under which high-dimensional MO functions have low effective dimensions.
• Proving that ReMO possesses three desirable theoretical properties: optimal Pareto front preservation, time complexity reduction, and rotation perturbation invariance (i.e., robustness to rotation perturbation).
• Showing that ReMO is effective in improving the scalability of current derivative-free MO optimization algorithms on high-dimensional non-convex MO problems with low effective dimensions, and is even effective on high-dimensional MO problems where all dimensions are effective but most have only a small and bounded effect on the function value.
The rest of the paper is organized as follows. Section 2 in-
troduces the notations of MO optimization and reviews ran-
dom embedding for the high-dimensional single-objective
optimization. Section 3 extends random embedding to the
high-dimensional MO optimization and presents the pro-
posed general approach ReMO. Section 4 shows three de-
sirable theoretical properties of ReMO, and Section 5 shows
the experimental results. Section 6 concludes the paper.
Background
Multi-Objective Optimization
Multi-objective (MO) optimization simultaneously optimizes two or more objective functions, as in Definition 1. In this paper, we consider minimization problems.
DEFINITION 1 (Multi-Objective Optimization)
Given $m$ objective functions $f_1, \ldots, f_m$ defined on the solution space $\mathcal{X} \subseteq \mathbb{R}^D$, the minimum multi-objective optimization aims to find the solution $x^* \in \mathcal{X}$ s.t.
$$x^* = \operatorname*{argmin}_{x \in \mathcal{X}} f(x) = \operatorname*{argmin}_{x \in \mathcal{X}} \big(f_1(x), \ldots, f_m(x)\big),$$
where $f(x) = (f_1(x), \ldots, f_m(x))$ is the objective function vector of the solution $x$.
This paper only considers $\mathcal{X} = \mathbb{R}^D$, i.e., unconstrained MO optimization problems. In most cases, it is impossible that there exists a solution $x \in \mathbb{R}^D$ that is optimal for all the objectives, since the objectives often conflict with each other, and optimizing one objective alone will degrade the others. Thus, MO optimization attempts to find a set of solutions that reach different optimal balances of the objective functions according to some criteria. A widely used criterion is Pareto optimality, which utilizes the dominance relationship between solutions as in Definition 2. The solution set with Pareto optimality is called the Pareto set, as in Definition 3.
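The dominance relationship and the Pareto-optimal subset of a finite set of objective vectors (for minimization, as considered here) can be illustrated in code; the helper names below are ours, not the paper's:

```python
def dominates(u, v):
    """u dominates v (minimization): u is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_set(points):
    """Return the non-dominated subset of a finite list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (1, 3) and (3, 1) trade off against each other, (2, 2) balances both,
# and (4, 4) is dominated by every other point.
points = [(1, 3), (3, 1), (2, 2), (4, 4)]
print(pareto_set(points))  # -> [(1, 3), (3, 1), (2, 2)]
```

The image of the Pareto set under $f$ is the Pareto front, the set of optimal trade-offs that an MO algorithm tries to approximate.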