Automatica 49 (2013) 1397–1402
Brief paper

Distributed output feedback control of Markov jump multi-agent systems✩

Bing-Chang Wang a,b,1, Ji-Feng Zhang c

a School of Control Science and Engineering, Shandong University, Jinan, 250061, China
b School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, NSW, 2308, Australia
c Key Laboratory of Systems and Control, Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China

✩ This work was supported by the National Natural Science Foundation of China under grants 60934006 and 61120106011. The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Tamas Keviczky under the direction of Editor Frank Allgöwer.
E-mail addresses: wangbc@amss.ac.cn (B.-C. Wang), jif@iss.ac.cn (J.-F. Zhang).
1 Tel.: +86 13810488429; fax: +86 10 62587343.
Article info
Article history:
Received 8 January 2012
Received in revised form 19 November 2012
Accepted 11 December 2012
Available online 1 March 2013
Keywords:
Multi-agent system
Distributed control
Mean field approach
Markov jump system
Output feedback
Abstract
In this paper, distributed output feedback control of Markov jump multi-agent systems (MASs) is investigated. Both the dynamic equations and the index functions of the MASs involved contain Markov jump parameters. The information available to each agent for designing its controller is only the noisy output and the jump parameters. By Markov jump optimal filtering theory and the mean field approach, distributed output feedback control laws are presented. Under some mild conditions, it is shown that the closed-loop system is uniformly stable and the distributed control is sub-optimal.
© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
Recently, the control and optimization of multi-agent systems (MASs) has attracted considerable attention in the systems and control community. One key issue is how to design distributed control laws based on agents' local information. For the case of a large population, the main difficulty lies in the high computational complexity. To overcome this difficulty, the mean field (MF) approach has been extended and applied (Huang, Caines, & Malhamé, 2003; Huang, Caines, & Malhamé, 2007; Li & Zhang, 2008a,b). In particular, Huang et al. (2003, 2007) developed the Nash certainty equivalence methodology based on MF theory, with which distributed ε-Nash equilibrium strategies were given for games of large-population MASs coupled via discounted costs. Li and Zhang (2008a,b) considered the case where agents are coupled via their stochastic long-run time-average indices, and obtained asymptotic Nash equilibria in the probabilistic sense.
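To fix ideas, recall the standard notion of an ε-Nash equilibrium used throughout this literature (the notation below is generic and not taken from the cited works): a strategy profile $(u_1^{\circ},\dots,u_N^{\circ})$ is an ε-Nash equilibrium with respect to the cost functions $J_1,\dots,J_N$ if, for every agent $i$,
\[
J_i(u_i^{\circ}, u_{-i}^{\circ}) \;\le\; \inf_{u_i \in \mathcal{U}_i} J_i(u_i, u_{-i}^{\circ}) + \varepsilon,
\]
where $u_{-i}^{\circ}$ denotes the strategies of all agents other than $i$ and $\mathcal{U}_i$ is the admissible strategy set of agent $i$. In the MF setting, $\varepsilon$ can typically be made arbitrarily small as the population size $N$ tends to infinity.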
Uncertainty is hard to avoid in practice; examples include failures of system components and environmental uncertainties. As a proper mathematical model to describe the dynamical behavior of systems in an environment with abrupt changes, Markov jump systems have been studied for many years (Costa, Fragoso, & Marques, 2005; Mariton, 1990; Sworder, 1969; Wonham, 1970). Recently, Wang and Zhang (2012a,b) investigated MF games of Markov jump MASs.
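For illustration only (the precise model studied in this paper is specified in the problem formulation and may differ in detail), a discrete-time Markov jump linear system for agent $i$ has the form
\[
x_i(t+1) = A(\theta(t))\,x_i(t) + B(\theta(t))\,u_i(t) + w_i(t),
\]
where $\theta(t)$ is a finite-state Markov chain (the jump parameter), $A(\cdot)$ and $B(\cdot)$ are mode-dependent matrices, $u_i(t)$ is the control input and $w_i(t)$ is a noise sequence; the system matrices switch whenever the chain changes mode.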
Another issue worthy of consideration is the case where only partial observations of the system states can be obtained since, strictly speaking, all practical control problems are based on output measurements rather than on the full states. Control problems based on full state information, whether deterministic or stochastic, can only be approximations of real problems. For output feedback control of conventional systems (i.e. the case where only one agent is considered), readers are referred to Bensoussan (1992), Davis (1977) and Wonham (1968). Huang, Caines, and Malhamé (2006) considered continuous-time MF games for MASs with time-invariant parameters based on output measurements and provided a set of distributed control laws.
In this paper, we investigate distributed output feedback control of discrete-time MASs under a game-theoretic framework. Both the dynamic equations and the index functions of the MASs involved contain Markov jump parameters. The information available to each agent is the noisy output and the Markov jump parameters rather than its state. Compared with the previous works (Wang & Zhang, 2012a,b), the index functions here are more general in that Markov jump parameters are allowed.
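For concreteness, the information structure described above can be sketched as follows (a generic illustration; the exact measurement model is given in the problem formulation). Each agent $i$ does not observe its state $x_i(t)$, but only a noisy output together with the jump process, e.g.
\[
y_i(t) = C(\theta(t))\,x_i(t) + v_i(t),
\]
where $v_i(t)$ is measurement noise, so that the admissible control $u_i(t)$ must be a function of the past observations $\{y_i(s), \theta(s) : s \le t\}$ only.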
For this kind of system, we design distributed control laws by filtering theory and the MF approach. Due to partial observation of