IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 29, NO. 4, APRIL 2018 981
A Collaborative Neurodynamic Approach to
Multiple-Objective Distributed Optimization
Shaofu Yang, Qingshan Liu, Senior Member, IEEE, and Jun Wang, Fellow, IEEE
Abstract—This paper is concerned with multiple-objective
distributed optimization. Based on objective weighting and
decision space decomposition, a collaborative neurodynamic
approach to multiobjective distributed optimization is presented.
In the approach, a system of collaborative neural networks is
developed to search for Pareto optimal solutions, where each
neural network is associated with one objective function and
given constraints. Sufficient conditions are derived for ascertaining the convergence of the collaborative neurodynamic system to a Pareto optimal solution. In addition, it is proved
that each connected subsystem can generate a Pareto optimal
solution when the communication topology is disconnected.
Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to
substantiate the performance of the collaborative neurodynamic
approach. A portfolio selection application is also given.
Index Terms—Collaborative neurodynamic approach, distributed optimization, multiobjective optimization, neural networks,
Pareto optimal solutions.
I. INTRODUCTION
MANY problems in science and engineering, including multicriteria decision making, machine learning, model predictive control, and smart building design, can be formulated as
multiple-objective optimization problems (see [1]–[6], to name
a few), which inherently contain several conflicting objectives
to be optimized simultaneously. Usually, no single solution
can be found, such that every objective function attains its
optimum. Therefore, a concept of optimality known as Pareto
optimality is utilized in multiobjective framework, which
describes compromise between the competing objectives. The
set of all Pareto optimal values is called Pareto front.
Unlike single-objective optimization, multiobjective optimization requires computing a set of Pareto optimal solutions distributed along the Pareto front [7]. Various
methods have been proposed for obtaining Pareto optimal
solutions. One class of methods is the scalarization approach.
Manuscript received July 11, 2016; revised October 4, 2016; accepted
December 31, 2016. Date of publication February 1, 2017; date of current
version March 15, 2018. This work was supported in part by the Research
Grants Council of the Hong Kong Special Administrative Region of China,
under Grant 14207614, and in part by the National Natural Science Foundation
of China under Grant 61673330 and Grant 61473333.
S. Yang is with the School of Computer Science and Engineering, Southeast
University, Nanjing 210018, China, and was also with the Department of
Computer Science, City University of Hong Kong, Kowloon, Hong Kong
(e-mail: sfyang@seu.edu.cn).
Q. Liu is with the School of Automation, Huazhong University of Science
and Technology, Wuhan 430074, China (e-mail: qsliu@hust.edu.cn).
J. Wang is with the Department of Computer Science, City University of
Hong Kong, Kowloon, Hong Kong (e-mail: jwang.cs@cityu.edu.hk).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2017.2652478
A Pareto optimal solution can be obtained by solving a parameterized single-objective optimization problem. By varying
the parameters, multiple Pareto optimal solutions are then
obtained. Many techniques have been developed to aggregate objective functions, such as the weighted-sum method [8], goal programming [9], and the normal boundary intersection method [10]. Another class of approaches comprises population-based methods. Among them, evolutionary multiobjective optimization approaches [11], [12] have been widely investigated, and numerous algorithms have been proposed, such as NSGA-II [13], SPEA2 [14], and MOEA/D [15], to name a few. In addition, particle swarm optimization has also been applied to multiobjective optimization [16].
Note that the aforementioned methods are mainly designed in a centralized manner. In the scalarization approach, all objective functions need to be combined in advance. In population-based methods, each individual requires full knowledge of all objective functions to evaluate its fitness, which may not be available in some multiobjective optimization problems, such as multiparty negotiations, where decision makers know only their own value functions due to privacy protection [17]. Another example is the class of multiobjective optimization problems modeling business activities within a large international corporation, in which decisions under multiple objectives are made locally in each country so that the corporation performs at its best [18]. For these problems, a distributed approach is necessary. In addition, with an increase in problem size, a distributed approach is also in demand owing to the limited computing capacity of a single computer.
In recent years, many efforts have been devoted to distributed optimization with a single objective function based on multiagent systems, in which each agent knows partial information of an objective function, and all agents cooperatively seek an optimal solution. Many multiagent systems have been developed [19]–[28]. In contrast, only a few works on multiobjective distributed optimization are available. In [18] and [29], multiobjective distributed optimization methods based on iterative augmented Lagrangian coordination techniques and diffusion strategies are proposed. More recently, a subgradient-based algorithm is proposed in [30] for multiobjective distributed optimization, and the relationship between the selection of the weight vector and the approximation error of the Pareto front is discussed. The aforementioned subgradient-based approaches require a diminishing step size to obtain an exact solution, which may limit the performance of the algorithms [20]. In [31], neural network models are developed for multiobjective optimization with equality constraints only, based on a decomposition-coordination principle.
2162-237X © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Authorized licensed use limited to: Southeast University. Downloaded on March 10,2022 at 22:18:57 UTC from IEEE Xplore. Restrictions apply.
Over the past three decades, neurodynamic optimization based on neural networks, which have the inherent nature of parallel computing and the potential for electronic implementation, has developed considerably. A neurodynamic approach can solve optimization problems in running times orders of magnitude faster than the most popular optimization algorithms executed on digital computers. In the mid-1980s, Hopfield and Tank [32] and Tank and Hopfield [33] spearheaded a neural network model for solving the traveling salesman problem. Since then, the neurodynamic optimization approach has been extensively investigated, and various neural network models have been developed for different single-objective optimization problems (see [34]–[46] and references therein). The key idea is to establish neural network models based on optimality conditions by using the Lagrange multiplier method, the projection method, duality theory, and so on. More recently, a two-time-scale neural network model is proposed for multiobjective optimization [47]; however, it still operates in a centralized manner. In addition, by using multiple neural network models, collaborative neurodynamic approaches have been developed in [48] and [49] for global optimization with a single objective function.
Motivated by the above discussions, this paper aims to develop a collaborative neurodynamic approach to multiobjective distributed optimization. By using the weighted-sum method [8], a single-objective optimization problem can be formulated in which the scalar objective is a weighted sum of the original objective functions. Based on a decomposition principle, a system consisting of multiple neural networks is developed to cooperatively seek Pareto optimal solutions of a multiobjective optimization problem. Each neural network is required to know only a single objective function and possibly only partial information of the constraints. The cooperative scheme relies on the transmission of partial state information among the networks: each neural network optimizes its local objective function while seeking state agreement with the others, under the guidance of a consensus-based coupling scheme. Sufficient conditions are presented to guarantee the convergence of the collaborative neurodynamic system to a Pareto optimal solution.
To characterize a Pareto front, a switching-topology-based method is proposed for computing a set of Pareto optimal solutions. In this method, a switching process of the communication topology in the developed system is designed, during which the communication topology is not connected except in the last step. We show that each connected component is also able to converge to a Pareto optimal solution. Therefore, when the system consists of multiple connected components, this method has the merit of generating multiple Pareto optimal solutions in parallel.
In summary, the distinguishing features of the approach are twofold. First, a collaborative neurodynamic system is developed for computing Pareto optimal solutions to a multiobjective distributed optimization problem by integrating an existing neurodynamic model for single-objective optimization with a decomposition principle. In contrast to the existing algorithms in [18], [29], and [30] for multiobjective distributed optimization, the approach herein can deal with general constraints. Second, a switching topology is used for generating multiple Pareto optimal solutions to characterize Pareto fronts.
The remainder of this paper is organized as follows. In Section II, preliminary concepts on multiobjective optimization, the projection operator, and graph theory are introduced. In Section III, the collaborative neurodynamic system is formulated, and the stability properties of the system are analyzed. In Section IV, a switching-topology-based method is proposed to generate multiple Pareto optimal solutions. In Section V, numerical examples are presented to illustrate the proposed approach. In Section VI, an application of the collaborative neurodynamic approach to a portfolio selection problem is shown. Finally, concluding remarks are given in Section VII.
Notation: Throughout this paper, vectors are column vectors and are written in boldface. $\|\cdot\|$ denotes the 2-norm. $\mathrm{diag}\{A_1, A_2, \ldots, A_n\}$ denotes a (block) diagonal matrix with diagonal entries (blocks) $A_1, A_2, \ldots, A_n$. $\mathrm{col}\{x_1, x_2, \ldots, x_n\}$ denotes the column vector formed by stacking the vectors $x_1, x_2, \ldots, x_n$ on top of each other. $I_n$ denotes the identity matrix of dimension $n$. $\mathbf{1}_k$ denotes the $k$-dimensional vector with all entries equal to 1. $\mathcal{I}_k$ denotes the index set $\{1, 2, \ldots, k\}$. $\mathbb{R}^m_+$ denotes the nonnegative orthant $\{x \in \mathbb{R}^m : x \geq 0\}$. $\lfloor k \rfloor$ denotes the largest integer no larger than $k$.
II. PRELIMINARIES
A. Multiobjective Optimization
A multiobjective optimization problem has a number of objective functions to be minimized or maximized [7]. In mathematical notation, it can be posed as

$$\min\ f(x) = (f_1(x), f_2(x), \ldots, f_k(x))^T \quad \text{s.t.}\ x \in \mathcal{X} \tag{1}$$
where $x \in \mathbb{R}^n$ is the vector of decision variables, and $\mathcal{X} = \{x \in \mathbb{R}^n : Ax = b,\ g(x) \leq 0,\ x \in \Omega\}$ is called the feasible region. $f(x) \in \mathbb{R}^k$ is a vector of objective functions (criteria) with $f_i(x) : \mathbb{R}^n \to \mathbb{R}$ ($i \in \mathcal{I}_k$), where $k \geq 2$ is the number of objective functions. $g(x) : \mathbb{R}^n \to \mathbb{R}^m$ represents the inequality constraints. $A \in \mathbb{R}^{p \times n}$ has full row rank. $\Omega \subset \mathbb{R}^n$ is a closed convex set. The feasible objective region $\mathcal{F}$ is defined as the set $\{f(x) : x \in \mathcal{X}\}$. Problem (1) is said to be convex if all the objective functions and the feasible region $\mathcal{X}$ are convex.
Owing to the conflict among the objective functions, it is often not possible to find a single solution that is optimal for all the objectives simultaneously. Consequently, the solution of problem (1) is described by the concept of Pareto optimality, which refers to a solution at which none of the objective functions can be improved without detriment to at least one of the others. A formal definition of Pareto optimality is given in the following.
Definition 1 [7]: A decision vector $x^* \in \mathcal{X}$ is called Pareto optimal if there does not exist $x \in \mathcal{X}$ such that $f_i(x) \leq f_i(x^*)$ for all $i \in \mathcal{I}_k$ and $f_j(x) < f_j(x^*)$ for at least one $j \in \mathcal{I}_k$. A decision vector $x^* \in \mathcal{X}$ is called weak Pareto optimal if there does not exist $x \in \mathcal{X}$ such that $f_i(x) < f_i(x^*)$ for all $i \in \mathcal{I}_k$. An objective vector $z^* \in \mathcal{F}$ is called (weak) Pareto optimal if its corresponding decision vector is (weak) Pareto optimal.
To obtain Pareto optimal solutions, one widely used approach is the weighted sum method, which transforms the multiple objectives into a single one by multiplying each objective by a weight. Given a weighting vector $w = (w_1, w_2, \ldots, w_k)^T$, problem (1) is converted into

$$\min\ \sum_{i=1}^{k} w_i f_i(x) \quad \text{s.t.}\ x \in \mathcal{X} \tag{2}$$

where $w_i$ is the weight of the $i$th objective function. Generally, it is required that $w \in S_k$, where $S_k = \{w \in \mathbb{R}^k_+ : w \geq 0,\ \sum_{i=1}^{k} w_i = 1\}$ is the unit simplex in $\mathbb{R}^k$. The solution set of problem (2) with a given $w$ is denoted by $\mathcal{X}^*(w)$.
Denote the sets of Pareto optima and weak Pareto optima of problem (1) by $P$ and $P_w$, respectively. By Definition 1, $P \subset P_w$. In addition, let $P_{S_k} = \bigcup_{w \in S_k} \mathcal{X}^*(w)$ and $P_{\mathrm{int}(S_k)} = \bigcup_{w \in \mathrm{int}(S_k)} \mathcal{X}^*(w)$, where $\mathrm{int}(S_k)$ denotes the interior of the set $S_k$. Then, the relationship among $P_{S_k}$, $P_{\mathrm{int}(S_k)}$, $P$, and $P_w$ is stated in the following lemma.

Lemma 1 [7], [50]: For problem (1), $P_{S_k} \subset P_w$ and $P_{\mathrm{int}(S_k)} \subset P$. If problem (1) is convex, then $P = P_{\mathrm{int}(S_k)} \subset P_w = P_{S_k}$. If all objective functions in problem (1) are strictly convex on $\Omega$, then $P = P_{\mathrm{int}(S_k)} = P_w = P_{S_k}$.
The following assumptions hold throughout this paper.

Assumption 1: All $f_i$ and $g_i$ are convex and twice continuously differentiable on $\Omega$.

Assumption 2: The minimizer of each objective function $f_i$ exists.

Under Assumptions 1 and 2, any weak Pareto optimal solution of a convex multiobjective optimization problem can be obtained by solving problem (2) with some weighting vector $w$, based on Lemma 1.
B. Projection Operator

Let $P_\Omega(x) = \arg\min_{y \in \Omega} \|x - y\|$, where $x \in \mathbb{R}^n$ and $\Omega \subset \mathbb{R}^n$ is a nonempty closed convex set. Then, $P_\Omega(x)$ is called the projection of $x$ onto $\Omega$. The properties of the projection operator are given in the following lemma.

Lemma 2 [51]: Let $\Omega \subset \mathbb{R}^n$ be a nonempty closed convex set. Then, for any $x \in \mathbb{R}^n$:
1) $\|P_\Omega(x) - P_\Omega(y)\| \leq \|x - y\|$ for any $y \in \mathbb{R}^n$;
2) $(P_\Omega(x) - x)^T (y - P_\Omega(x)) \geq 0$ for all $y \in \Omega$;
3) $(P_\Omega(x) - x)^T (y - x) \geq \|P_\Omega(x) - x\|^2$ for all $y \in \Omega$.

Note that it is not easy to compute the projection of a point onto a general convex set. However, for some special convex sets, the projection operator has a closed form. If $\Omega = \{x \in \mathbb{R}^n : l_i \leq x_i \leq h_i,\ i \in \mathcal{I}_n\}$, then $P_\Omega(x)$ is the vector whose $i$th component is

$$P_\Omega(x)_i = \begin{cases} l_i, & x_i < l_i \\ x_i, & l_i \leq x_i \leq h_i \\ h_i, & h_i < x_i. \end{cases}$$

In addition, if $\Omega = \mathbb{R}^n_+$, then $P_\Omega(x) = (x)^+$, where $(\cdot)^+$ is defined componentwise as $(x)^+ = \max\{x, 0\}$. If $\Omega = \mathbb{R}^n$, then $P_\Omega(x) = x$ for any $x \in \mathbb{R}^n$.
C. Graph Theory

A graph $G = (V, E)$ consists of a vertex set $V = \{v_1, v_2, \ldots, v_n\}$ and an edge set $E \subset V \times V$. The adjacency matrix $A = [a_{ij}]_{n \times n}$ associated with $G$ is defined by $a_{ij} > 0$ if $(v_i, v_j) \in E$ and $a_{ij} = 0$ otherwise. It is assumed that $a_{ii} = 0$, i.e., $G$ has no self-loops. The degree of node $v_i$ is defined as $d_i = \sum_{j=1}^{n} a_{ij}$. Then, the Laplacian matrix of $G$ is defined as $L = D - A$, where $D = \mathrm{diag}\{d_1, d_2, \ldots, d_n\}$. $G$ is undirected if $A$ is symmetric; we assume throughout this paper that $G$ is undirected. For an undirected graph $G$, $L$ is symmetric and positive semidefinite. $G$ is connected if and only if $L$ has a simple zero eigenvalue with $L\mathbf{1}_n = \mathbf{0}_n$ and all the other eigenvalues are positive [52].
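The construction $L = D - A$ and the property $L\mathbf{1}_n = \mathbf{0}_n$ can be checked directly. A hypothetical path graph on four nodes serves as the example here:

```python
# Build the graph Laplacian L = D - A from an adjacency matrix and verify
# the properties used above: symmetry and zero row sums (L 1_n = 0).

def laplacian(adj):
    n = len(adj)
    deg = [sum(row) for row in adj]          # d_i = sum_j a_ij
    return [[(deg[i] if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

# path graph v1 - v2 - v3 - v4 (undirected, unit edge weights)
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
L = laplacian(A)
row_sums = [sum(row) for row in L]           # each is 0, i.e., L 1_n = 0
```

Zero row sums encode the consensus direction $\mathbf{1}_n$; the connectivity criterion (a simple zero eigenvalue) would additionally require an eigenvalue computation, omitted here for brevity.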
III. COLLABORATIVE NEURODYNAMIC SYSTEM
This section concentrates on solving the multiobjective optimization problem cooperatively by using multiple neural networks. To establish a model in the framework of collaboration, the key step is to decompose the optimization problem. By the state decomposition method, problem (2) can be equivalently converted into

$$\min\ \sum_{i=1}^{k} w_i f_i(x_i) \quad \text{s.t.}\ \mathcal{L}\tilde{x} = 0,\ x_i \in \mathcal{X} \tag{3}$$

where $\tilde{x} = \mathrm{col}\{x_1, x_2, \ldots, x_k\} \in \mathbb{R}^{nk}$, $\mathcal{L} = L \otimes I_n$, and $L$ is the Laplacian matrix of a connected graph $G$ with $k$ vertices. It can be seen that each objective function in (3) has its own state variable, which is independent of the others. Note that $\mathcal{L}\tilde{x} = 0$ is equivalent to $x_1 = x_2 = \cdots = x_k$. Therefore, if $\tilde{x}^*$ is an optimal solution of (3), then $\tilde{x}^*$ can be written as $\mathbf{1}_k \otimes x^*$, where $x^* \in \mathbb{R}^n$ is an optimal solution of (2), and the converse is also true. In the following, the collaborative neurodynamic system is derived based on problem (3).
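The equivalence $\mathcal{L}\tilde{x} = 0 \Leftrightarrow x_1 = \cdots = x_k$ can be checked numerically. The sketch below uses a hypothetical connected path graph with $k = 3$ agents and $n = 2$ decision variables per agent:

```python
# Verify that L~ x~ = 0 exactly when all agent states agree, where
# L~ = L (x) I_n is the Kronecker product of the graph Laplacian with I_n.

def kron(A, B):
    # Kronecker product of two dense matrices given as nested lists
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

L_path = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]  # path graph, k = 3 vertices
I2 = [[1, 0], [0, 1]]                            # n = 2
Lbig = kron(L_path, I2)                          # L~ in R^{6x6}

consensus = [2.0, -1.0] * 3                      # x_1 = x_2 = x_3 = (2, -1)
disagree = [2.0, -1.0, 2.0, -1.0, 0.0, 0.0]      # x_3 differs from the rest
```

Applying `Lbig` to the stacked consensus state yields the zero vector, while the disagreeing state produces a nonzero residual, which is precisely the signal the coupling term in the neurodynamic system drives to zero.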
For notational convenience, denote

$$\tilde{f}(\tilde{x}) = (f_1(x_1), f_2(x_2), \ldots, f_k(x_k))^T$$
$$\tilde{g}(\tilde{x}) = \mathrm{col}\{g(x_1), g(x_2), \ldots, g(x_k)\}$$
$$F(\tilde{x}) = \mathrm{diag}\{\nabla f_1(x_1), \nabla f_2(x_2), \ldots, \nabla f_k(x_k)\}$$
$$G(\tilde{x}) = \mathrm{diag}\{\nabla g(x_1), \nabla g(x_2), \ldots, \nabla g(x_k)\}$$
$$\mathcal{A} = I_k \otimes A, \qquad \tilde{b} = \mathbf{1}_k \otimes b.$$

Under Assumption 1, $\tilde{x}^*$ is an optimal solution of (3) if and only if there exist $\tilde{y}^* \in \mathbb{R}^{mk}_+$, $\tilde{z}^* \in \mathbb{R}^{pk}$, and $\tilde{\eta}^* \in \mathbb{R}^{nk}$ such that $(\tilde{x}^*, \tilde{y}^*, \tilde{z}^*, \tilde{\eta}^*)$ satisfies the following equations:

$$\tilde{x} = P_{\Omega^k}(\tilde{x} - F(\tilde{x})w - G(\tilde{x})\tilde{y} - \mathcal{A}^T \tilde{z} + \mathcal{L}\tilde{\eta}) \tag{4a}$$
$$\tilde{y} = (\tilde{y} + \tilde{g}(\tilde{x}))^+, \qquad \mathcal{A}\tilde{x} = \tilde{b}, \qquad \mathcal{L}\tilde{x} = 0 \tag{4b}$$

where $\Omega^k$ is defined as $\Omega \times \Omega \times \cdots \times \Omega$ ($k$ times). Note that the equations in (4) are equivalent to the Karush–Kuhn–Tucker (KKT) conditions of problem (3) [36]. According to (4), a collaborative neurodynamic system consisting of $k$ neural networks for solving problem (1) is then formulated. Note that $\tilde{y}$ can be written as $\tilde{y} = \mathrm{col}\{y_1, y_2, \ldots, y_k\}$ with $y_i \in \mathbb{R}^m$. Similarly, we write $\tilde{z} = \mathrm{col}\{z_1, z_2, \ldots, z_k\}$ with $z_i \in \mathbb{R}^p$ and