Event-Driven Nonlinear Discounted Optimal Regulation Involving a Power System Application
Ding Wang, Member, IEEE, Haibo He, Senior Member, IEEE, Xiangnan Zhong, Student Member, IEEE, and Derong Liu, Fellow, IEEE

Manuscript received December 19, 2016; revised February 12, 2017 and March 18, 2017; accepted April 4, 2017. Date of publication April 27, 2017; date of current version September 11, 2017. This work was supported in part by the National Natural Science Foundation of China under Grant U1501251, Grant 61533017, Grant 61233001, Grant 51529701, and Grant 61520106009, in part by the Beijing Natural Science Foundation under Grant 4162065, in part by the U.S. National Science Foundation under Grant ECCS 1053717, and in part by the Early Career Development Award of SKLMCCS. (Corresponding author: Haibo He.)

D. Wang is with The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Computer and Control Engineering, University of Chinese Academy of Sciences, Beijing 100049, China. This research was conducted when D. Wang was a Visiting Scholar at the Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI 02881 USA (e-mail: ding.wang@ia.ac.cn).

H. He and X. Zhong are with the Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI 02881 USA (e-mail: he@ele.uri.edu; xzhong@ele.uri.edu).

D. Liu is with the School of Automation, Guangdong University of Technology, Guangzhou 510006, China (e-mail: derongliu@foxmail.com).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIE.2017.2698377
Abstract—By employing a neural network approximation architecture, the nonlinear discounted optimal regulation problem is handled within an event-driven adaptive critic framework. The main idea lies in adopting an improved learning algorithm, so that the event-driven discounted optimal control law can be derived by training a neural network. It is highlighted that, with the combined learning rule, an initial stabilizing control policy is not required during the implementation process. Moreover, the closed-loop system is formulated as an impulsive model, and the related stability issue is then addressed via the Lyapunov approach. Simulation studies, including an application to a power system, are also conducted to verify the effectiveness of the present design method.
Index Terms—Adaptive/approximate dynamic programming, approximation, discount factor, event-driven control, neural networks, optimal regulation, power system.
I. INTRODUCTION
THERE exists a severe difficulty known as the "curse of dimensionality" when solving the Hamilton–Jacobi–Bellman (HJB) equation in nonlinear optimal regulation design. Hence, a series of iterative methods have been developed to tackle optimal control problems. Among them, neural networks serve as an important function approximation architecture for performing the iterative calculation. In fact, neural networks are widely adopted for their learning and approximation abilities in practical feedback stabilization problems [1]–[5], especially in optimal control design [5]. Recently, a significant breakthrough has been made in adaptive boundary control design and stability analysis for infinite-dimensional systems with constraints [2], [3]. Based on neural networks, adaptive/approximate dynamic programming [6], [7] is regarded as a key technique for designing optimal controllers adaptively and forward-in-time. In the last decade, extensive developments of adaptive/approximate dynamic programming have been achieved for the optimal control of discrete-time systems [8]–[12] and continuous-time systems [13]–[17]. For example, Modares and Lewis [14] studied linear quadratic tracking control with a discounted cost function for partially unknown continuous-time systems based on the reinforcement learning technique. Extensions of regulation design to various intelligent control strategies for complex systems have also been observed, such as decentralized control [18], consensus control [19], H∞ control [20], and tracking control [21]. However, the aforementioned results are mainly obtained under time-driven design, which inevitably causes frequent adjustments of the actuator state and may result in serious energy consumption. Thus, conducting a time/event transformation to achieve event-driven design has become a new avenue for the feedback control community.
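
For concreteness, the discounted optimal regulation problem discussed above is commonly posed as follows; this is a standard formulation from the literature (e.g., the setting of [14]), written in generic notation that is not necessarily the paper's. For an affine nonlinear plant and a discounted quadratic cost

\[
\dot{x} = f(x) + g(x)u, \qquad
J(x(t)) = \int_{t}^{\infty} e^{-\lambda(\tau - t)} \big( x^{\top} Q x + u^{\top} R u \big) \, \mathrm{d}\tau, \quad \lambda > 0
\]

the optimal cost $J^{*}(x)$ satisfies the discounted HJB equation

\[
0 = \min_{u} \Big\{ x^{\top} Q x + u^{\top} R u - \lambda J^{*}(x) + \big( \nabla J^{*}(x) \big)^{\top} \big( f(x) + g(x)u \big) \Big\}
\]

whose minimizer is $u^{*}(x) = -\tfrac{1}{2} R^{-1} g^{\top}(x) \nabla J^{*}(x)$. The term $-\lambda J^{*}(x)$, absent from the undiscounted case, is the contribution of the discount factor; the difficulty of solving this equation exactly for general nonlinear $f$ and $g$ is what motivates the neural-network-based approximate methods above.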
With the rapid development of network-based systems, more and more control loops are closed through communication media. The increasing interest in reducing the computational load of networked control systems has drawn extensive attention to the event-triggering mechanism. Under the general framework of event-driven approaches, the actuators are updated only when certain conditions are violated, so as to guarantee both the stability and the control efficiency of the target plants. Recently, event-driven adaptive critic control has provided a new channel for implementing nonlinear intelligent optimal stabilization [22]–[26]. For example, Sahoo et al. [22] presented a novel approximation-based event-triggered control scheme for multi-input multi-output unknown nonlinear continuous-time systems in affine form. Vamvoudakis et al. [23] proposed an event-driven tracking control algorithm for nonlinear systems with an infinite-horizon discounted cost function. Under this new framework, the designed controller is updated only when an event is triggered, thereby reducing the computational burden of two processes: neural network learning and adaptive optimal control.
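
To make the triggering logic concrete, the following sketch simulates a generic event-triggered feedback loop: the control input is held constant between events and recomputed only when the gap between the current state and the last sampled state exceeds a threshold. This is a minimal illustration of the mechanism just described, not the paper's algorithm; the plant dynamics, feedback law, and fixed threshold are placeholder choices for illustration.

import numpy as np

def plant(x, u):
    # Placeholder affine nonlinear dynamics x_dot = f(x) + g(x) u (assumed, for illustration).
    f = np.array([-x[0] + x[1],
                  -0.5 * (x[0] + x[1]) * np.cos(x[0]) ** 2 - x[1]])
    g = np.array([0.0, np.cos(x[0])])
    return f + g * u

def feedback(x):
    # Placeholder state-feedback law; an adaptive critic would supply this instead.
    return -np.cos(x[0]) * x[1]

def simulate(x0, T=10.0, dt=1e-3, threshold=0.05):
    x = np.array(x0, dtype=float)
    x_hat = x.copy()        # state held at the last triggering instant
    u = feedback(x_hat)     # control is piecewise constant between events
    events = 0
    for _ in range(int(T / dt)):
        # Trigger condition: recompute the control only when the event error is too large.
        if np.linalg.norm(x - x_hat) > threshold:
            x_hat = x.copy()
            u = feedback(x_hat)
            events += 1
        x = x + dt * plant(x, u)  # forward-Euler integration of the plant
    return x, events

x_final, n_events = simulate([1.0, -1.0])
print(f"final state {x_final}, controller updates: {n_events}")

In a time-driven design, the control would be recomputed at every integration step; here the update count reported by the sketch is typically far smaller, which is exactly the saving in actuator adjustments and computation that motivates the event-driven formulation.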