This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
IEEE TRANSACTIONS ON CYBERNETICS
Regularized Primal–Dual Subgradient Method for
Distributed Constrained Optimization
Deming Yuan, Daniel W. C. Ho, Senior Member, IEEE, and Shengyuan Xu
Abstract—In this paper, we study the distributed constrained
optimization problem where the objective function is the sum
of local convex cost functions of distributed nodes in a net-
work, subject to a global inequality constraint. To solve this
problem, we propose a consensus-based distributed regularized
primal–dual subgradient method. In contrast to the existing
methods, most of which require projecting the estimates onto
the constraint set at every iteration, only one projection at the
last iteration is needed for our proposed method. We establish
the convergence of the method by showing that it achieves an
O(K^{-1/4}) convergence rate for general distributed constrained
optimization, where K is the iteration counter. Finally, a numerical
example is provided to validate the convergence of the proposed
method.
Index Terms—Average consensus, constrained optimization,
distributed optimization, primal–dual subgradient method.
I. INTRODUCTION
RECENT years have witnessed a surge of research on solving
convex optimization problems in a distributed setting
[3]–[5], [15]–[29], due to their applications in many practical
fields, including distributed ℓ1-regression in machine
learning [5], the distributed finite-time optimal rendezvous
problem [15], and the distributed demand response control
problem in smart grids [20], to name a few. Such problems
have a long history in the distributed optimization community
(see [1], [2]); in general, they employ the average consensus
algorithms of the multiagent systems community
(see [7]–[14], [31]–[33]) to design distributed computational
models that remove the need for a central coordinator.
Reference [3] presents a first analysis of the consensus-
based subgradient method for solving the distributed convex
optimization problem.
Manuscript received March 2, 2015; revised May 6, 2015 and
June 11, 2015; accepted July 25, 2015. This work was supported in part
by the National Natural Science Foundation of China under Grant 61304042
and Grant 61374087, in part by GRF grants from HKSAR under Grant
CityU 114113 and Grant 11204514, in part by the Program for Changjiang
Scholars, and in part by the Program for Changjiang Scholars and Innovative
Research Team in the University under Grant IRT13072. This paper was
recommended by Associate Editor Q. Liu.
D. Yuan is with the College of Automation, Nanjing University
of Posts and Telecommunications, Nanjing 210023, China (e-mail:
dmyuan1012@gmail.com).
D. W. C. Ho is with the Department of Mathematics, City University of
Hong Kong, Hong Kong (e-mail: madaniel@cityu.edu.hk).
S. Xu is with the School of Automation, Nanjing University of Science and
Technology, Nanjing 210094, China (e-mail: syxu@njust.edu.cn).
Digital Object Identifier 10.1109/TCYB.2015.2464255
In this paper, we are interested in solving the constrained
convex optimization problem over a network that consists of
multiple interacting nodes. Each node is endowed with a local
convex cost function that is known only to itself; the goal is
to minimize the sum of the local cost functions, subject
to a global inequality constraint that is known to all nodes in the
network. We assume, without loss of generality, that the
constraint set is contained in a Euclidean ball of finite radius.
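In symbols, the problem just described can be written as follows (the symbols f_i, g, X, and R are our own notation for this summary, not necessarily the paper's):

```latex
\min_{x \in X} \ \sum_{i=1}^{n} f_i(x)
\qquad \text{subject to} \qquad g(x) \le \mathbf{0},
```

where f_i is the local convex cost known only to node i, g encodes the global inequality constraint known to all n nodes, and X ⊆ {x : ‖x‖ ≤ R} for some finite radius R.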
To solve this problem, we propose a distributed regularized
primal–dual subgradient (DRPDS) method; the key idea of
the method is to appropriately penalize the intermediate esti-
mates when they are infeasible by introducing a regularized
Lagrangian function, which is motivated by [6]. Specifically,
at each iteration we project the primal variable onto the ball
containing the constraint set instead of the actual constraint
set, and the dual variable onto the nonnegative orthant (in fact,
these admit analytical solutions). At the last iteration, one pro-
jection onto the actual constraint set is needed to obtain a final
feasible solution.
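The two per-iteration projections mentioned above indeed admit analytical solutions: projection onto a Euclidean ball is a radial rescaling, and projection onto the nonnegative orthant is componentwise clipping. The following sketch illustrates both closed forms (the function names and NumPy usage are ours, for illustration only):

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the ball {y : ||y||_2 <= radius}."""
    norm = np.linalg.norm(x)
    if norm <= radius:
        return x  # already inside the ball
    return (radius / norm) * x  # rescale onto the boundary

def project_nonneg(mu):
    """Euclidean projection onto the nonnegative orthant."""
    return np.maximum(mu, 0.0)  # clip negative components to zero

# Example: a primal iterate outside the unit ball is scaled back onto it,
# and negative dual components are clipped to zero.
x_proj = project_ball(np.array([3.0, 4.0]), 1.0)    # -> [0.6, 0.8]
mu_proj = project_nonneg(np.array([-1.5, 2.0]))     # -> [0.0, 2.0]
```

Because both operations are closed-form, each iteration avoids the general constrained projection that competing methods perform; the single projection onto the actual constraint set is deferred to the final iterate.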
A. Related Works
Solving multiagent optimization problems under inequality
constraints has received considerable attention in recent
years [17]–[20]. Zhu and Martinez [17] first propose
a distributed Lagrangian primal–dual subgradient
method; the basic idea is to characterize the primal–dual opti-
mal solutions as the saddle points of the Lagrangian function
associated with the problem. Reference [19] proposes a vari-
ant of the distributed primal–dual subgradient method, where
the multistep consensus algorithm is employed to simplify the
implementation and convergence analysis of the method. To
solve the multiagent optimization problem with more general
inequality constraints that couple all the agents’ optimiza-
tion variables, Chang et al. [20] propose a novel distributed
primal–dual perturbed subgradient method and establish its
convergence. The implementation of the aforementioned meth-
ods in general involves projection steps onto some primal and
dual constraint sets, respectively. In particular, they involve
each node projecting the primal variable onto the primal
constraint set at every iteration. Moreover, they require con-
structing a compact set that contains the dual optimal set, and
projecting the dual variable onto this set to guarantee that the
dual variable is bounded; this is of crucial importance in estab-
lishing the convergence of the methods. It is worth noting that
the construction of this compact set involves each node solving
a general constrained convex optimization problem.
2168-2267 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.