1545-5963 (c) 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TCBB.2017.2701367, IEEE/ACM
Transactions on Computational Biology and Bioinformatics
charged particles, an electrostatic reaction occurs to adjust the particles' velocities; that is, a repulsion operator enlarges the distance between the two particles. However, it is difficult to predefine the distance threshold that decides whether two particles are too close. In addition, the operator that increases the distance between two particles may in turn bring them too close to other particles.
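A minimal sketch of such a repulsion operator illustrates why presetting the distance is delicate; the threshold `min_dist` and the step size below are illustrative assumptions, not values from the cited work:

```python
import math

def repel(x1, x2, min_dist, step=0.5):
    """Hypothetical repulsion operator: if two particle positions are
    closer than min_dist, push each away from the other along the line
    connecting them. Choosing min_dist well is the hard part."""
    diff = [a - b for a, b in zip(x1, x2)]
    dist = math.sqrt(sum(d * d for d in diff))
    if dist >= min_dist or dist == 0.0:
        return x1, x2  # far enough apart (or coincident): leave unchanged
    unit = [d / dist for d in diff]  # unit vector from x2 toward x1
    shift = step * (min_dist - dist)
    new_x1 = [a + shift * u for a, u in zip(x1, unit)]
    new_x2 = [b - shift * u for b, u in zip(x2, unit)]
    return new_x1, new_x2
```

Note that the push applied here may move either particle closer to a third one, which is exactly the side effect described above.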
3) Third, to weaken the effect of any single solution on the whole swarm, many studies have been conducted to increase the diversity of exemplars. One such method is the neighborhood strategy. In [13], neighborhood topologies are investigated, and each particle is strongly affected by its neighborhood. A suggestion for setting the neighborhood size is also provided: for simple problems, PSO with a large neighborhood is advisable, while for complex problems, PSO with a small neighborhood is recommended. The niching strategy is also a neighborhood strategy for diversity maintenance [14], [15]. By defining a niching radius, the individuals that fall within the same circle are considered similar, and therefore little communication occurs among them. However, the niching radius or the neighborhood size is difficult to preset, and it strongly influences the algorithm's performance. Besides the neighborhood strategy, the diversity of exemplars is also related to the swarm diversity. In the canonical PSO, the current best position gbest is used to guide the update direction of all particles, which may cause premature convergence or trap the swarm in a local optimum. To avoid the influence of this single position, much research has been conducted to increase the diversity of exemplars. In [16], Mendes and Kennedy propose a fully informed PSO, which employs all neighboring particles, rather than only pbest and gbest, to update the velocity. The influence of each particle on its neighbors is weighted based on its fitness value and the neighborhood size. In [17], the authors abandon gbest and employ only pbest in the velocity update; each particle has a probability of receiving information from another particle's pbest. In [18], Cheng and Jin propose a competitive strategy between pairs of particles to maintain diversity, in which neither gbest nor pbest is used in the update mechanism. However, these studies pay more attention to the exemplars' diversity and give little consideration to balancing diversity with convergence. Even when convergence is considered in the design, as in [18], the algorithm's performance is very sensitive to the parameters.
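The fully informed velocity update of [16] can be sketched as follows. The constriction coefficient `chi` and acceleration limit `phi_max` are commonly used constriction values we assume here, and the fitness-and-neighborhood-size weighting of [16] is simplified to uniform random weights:

```python
import random

def fips_velocity(v, x, neighbor_pbests, chi=0.7298, phi_max=4.1):
    """Simplified fully informed PSO velocity update: every neighbor's
    pbest contributes a randomly weighted attraction term, instead of
    using only the particle's own pbest and the single gbest."""
    k = len(neighbor_pbests)
    new_v = []
    for d in range(len(x)):
        acc = 0.0
        for p in neighbor_pbests:
            phi = random.uniform(0, phi_max / k)  # share of phi_max per neighbor
            acc += phi * (p[d] - x[d])
        new_v.append(chi * (v[d] + acc))
    return new_v
```

Because every neighbor informs the update, no single position can dominate the direction of the whole swarm.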
4) The fourth is to hybridize PSO with other algorithms or to employ other techniques in PSO. In [19], by hybridizing PSO with breeding and a subpopulation strategy, a breeding operator is proposed to maintain population diversity. Some researchers introduce other techniques, such as deflecting, stretching, and repulsion in [20], which prevent particles from re-searching already discovered areas. In [21], a cooperative method is introduced into PSO to propose the cooperative particle swarm optimizer (CPSO), which employs more particles in one update and also enhances the diversity of exemplars. However, the design of such hybridizations is mainly based on experience; there is no specific guideline to predict whether a hybridization will be effective for diversity maintenance. Therefore, much time must be spent testing each hybridization.
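The cooperative idea behind CPSO [21] can be illustrated by how a sub-swarm evaluates a candidate: each sub-swarm optimizes only a subset of dimensions and relies on a shared context vector for the rest, so several particles contribute to one evaluation. The function name and the sphere objective below are our own illustrative assumptions:

```python
def cpso_context_eval(context, part_dims, candidate, fitness):
    """Cooperative-evaluation sketch: a sub-swarm responsible for the
    dimensions in part_dims scores a candidate sub-vector by splicing
    it into the shared context vector and evaluating the full vector."""
    trial = list(context)  # copy so the shared context is not modified
    for d, val in zip(part_dims, candidate):
        trial[d] = val
    return fitness(trial)
```

For example, with a sphere objective and context `[1, 1, 1, 1]`, a sub-swarm handling dimensions 1 and 2 evaluates its candidate `[0, 0]` on the spliced vector `[1, 0, 0, 1]`.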
As discussed above, although the summarized variants and improvements are proposed for diversity maintenance, several problems still exist in implementing them. Hence, we reconsider the structure of PSO in this paper. In conventional PSO, gbest always affects all particles and is not updated until a new global best solution appears. Therefore, all particles' movements are affected by gbest for a number of generations. Once gbest lies in a locally optimal position, it is hard for the whole swarm to escape from it. To eliminate the effect of this single exemplar, gbest, on the whole swarm, we increase the diversity of exemplars. Considering that all particles' historical best positions, namely the pbests, are also good solutions and that gbest is simply one specific pbest, we propose a grouping strategy with pbest guidance for PSO to increase the diversity of exemplars. The main contributions are as follows:
1) We employ a grouping strategy to uniformly and randomly divide the whole swarm into several groups. In each group, the worse solutions learn from the group's best solution. Since the swarm is divided into groups randomly, the probability that any one position affects the whole swarm for many generations is reduced.
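The random, uniform division described above can be sketched as follows; the group count is a parameter we assume is supplied:

```python
import random

def group_swarm(num_particles, num_groups, rng=random):
    """Randomly and uniformly partition particle indices into groups:
    shuffle the indices, then deal them out round-robin so group sizes
    differ by at most one."""
    idx = list(range(num_particles))
    rng.shuffle(idx)
    return [idx[g::num_groups] for g in range(num_groups)]
```

Because the partition is redrawn at random, which particles share a group (and hence a group-best exemplar) changes over time, so no single position can steer the entire swarm for long.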
2) Considering that gbest is the best solution in the set of pbests and that the pbests are also good solutions, we employ the set of pbests to balance convergence and diversity in this paper. In each generation, a pbest is selected from the set of pbests as an exemplar, instead of always using gbest. This design has two advantages. First, since the set of pbests records the historically good positions of the whole swarm, it provides the swarm with useful information for convergence. Second, it prevents the whole swarm from being affected by any one position, such as gbest, for many generations.
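A velocity update under this pbest-set guidance might look like the sketch below. Uniform sampling from the set and the standard inertia/acceleration values are illustrative assumptions; the actual GPSO-PG selection rule is specified later in the paper:

```python
import random

def pbest_guided_velocity(v, x, own_pbest, pbest_set,
                          w=0.729, c1=1.49, c2=1.49, rng=random):
    """Velocity update where the social term follows a pbest sampled
    from the whole pbest set rather than the single gbest."""
    exemplar = rng.choice(pbest_set)  # illustrative: uniform selection
    return [w * v[d]
            + c1 * rng.random() * (own_pbest[d] - x[d])
            + c2 * rng.random() * (exemplar[d] - x[d])
            for d in range(len(x))]
```

Since the exemplar is redrawn from the pbest set each generation, every particle is still guided by a historically good position without any one position dominating the update for long.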
In this way, particles can update themselves using good positions. Meanwhile, the proposed algorithm can also maintain population diversity by preserving the diversity of exemplars. Hence, both convergence and diversity maintenance are considered simultaneously. The rest of this paper is organized as follows. Section II-A presents a brief introduction to canonical PSO and the measurements of diversity. In Section III, Grouping Particle Swarm Optimization with Pbests Guidance (GPSO-PG) is proposed, and the parameter settings are discussed. In Section IV, the large-scale optimization problems of CEC 2008 and the multi-modal optimization problems of CEC 2010 are used as benchmarks to compare GPSO-PG with several peer competitors; the analysis of the comparison results is also presented in this section. In Section V, we give the conclusions of this paper and present future work.