A global best that is known to all particles and is immediately updated when any particle in the swarm finds a new best position.
A neighborhood best that the particle obtains by communicating with a subset of the swarm.
A local best, which is the best solution that the particle itself has seen.
The particle position and velocity update equations that govern PSO are, in their simplest form,
v_i ← ω v_i + φ_p r_p (p_i − x_i) + φ_g r_g (g − x_i)
x_i ← x_i + v_i
where x_i and v_i are the position and velocity of particle i, p_i is that particle's best-seen position, g is the global best, ω is the inertia coefficient, φ_p and φ_g are the cognitive and social acceleration coefficients, and r_p, r_g are random numbers drawn uniformly from [0, 1].
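A single hand-worked step makes the update equations concrete. The sketch below applies them once to one particle in one dimension; the coefficient values and random draws are illustrative, not prescribed by the text.

```python
# One velocity/position update for a single particle in one dimension.
omega, phi_p, phi_g = 0.7, 1.5, 1.5   # inertia, cognitive, social weights (illustrative)
x, v = 2.0, 0.5                       # current position and velocity
p, g = 1.0, 0.0                       # particle's best-seen position and global best
r_p, r_g = 0.4, 0.9                   # illustrative draws from U[0, 1]

# Velocity update: inertia term + pull toward own best + pull toward global best.
v = omega * v + phi_p * r_p * (p - x) + phi_g * r_g * (g - x)
# Position update: step along the new velocity.
x = x + v
```

Here the cognitive and social terms both point toward smaller x, so the particle overshoots past both bests in a single step; it is the repeated interplay of the three terms, not any one step, that steers the swarm.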
As the swarm iterates, the fitness of the global best solution improves (decreases, for a minimization problem). It can happen that all particles, being influenced by the global best, eventually approach the global best, after which the fitness never improves no matter how many further iterations the PSO is run. The particles then move about the search space in close proximity to the global best without exploring the rest of the search space. This phenomenon is called 'convergence'. If the inertia coefficient of the velocity is small, all particles may slow down until they approach zero velocity at the global best. The choice of coefficients in the velocity update equations therefore affects both convergence and the swarm's ability to find the optimum. One way out of this situation is to reinitialize the particles' positions at intervals or when convergence is detected.
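The restart strategy just mentioned can be sketched as a small helper that detects convergence and re-scatters the swarm. This is one possible implementation under assumed conventions (positions and velocities as lists of lists, a per-dimension tolerance); the function name and threshold are illustrative.

```python
import random

def reinitialize_if_converged(x, v, g, bounds, tol=1e-3, rng=random):
    """Restart strategy sketch: if every particle lies within tol of the
    global best g in every dimension, scatter the positions uniformly over
    the search domain again and zero the velocities, so the swarm resumes
    exploring instead of idling at the global best."""
    # Largest per-dimension distance of any particle from the global best.
    spread = max(abs(xi[j] - g[j]) for xi in x for j in range(len(g)))
    if spread >= tol:
        return False                      # swarm has not converged yet
    for xi, vi in zip(x, v):
        for j, (a, b) in enumerate(bounds):
            xi[j] = rng.uniform(a, b)     # re-scatter position in [a_j, b_j]
            vi[j] = 0.0                   # reset velocity
    return True
```

The personal and global bests are deliberately left untouched, so the knowledge gathered before the restart still guides the re-scattered particles.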
Some research approaches investigated the application of constriction coefficients and inertia weights. Numerous
techniques for preventing premature convergence. The introduction of the fully informed particle swarm. Many variations
on the social network topology, parameter-free, fully adaptive swarms, and some highly simplified models have been
created. The algorithm has been analyzed as a dynamical system, and has been used in hundreds of engineering
applications; it is used to compose music, to model markets and organizations, and in art installations.
A basic, canonical PSO algorithm
The algorithm presented below uses the global best and local bests but no neighborhood bests. Neighborhood bests allow parallel exploration of the search space and reduce the susceptibility of PSO to falling into local minima, but slow down convergence. Note that neighborhoods merely slow down the proliferation of new bests rather than creating isolated subswarms, because neighborhoods overlap: to make neighborhoods of size three, say, particle 1 would communicate only with particles 2 and 3, particle 2 with particles 3 and 4, and so on. A new best position discovered in particle 2's neighborhood would then be communicated to particle 1's neighborhood at the next iteration of the PSO algorithm presented below. Smaller neighborhoods lead to slower convergence, larger neighborhoods to faster convergence, with a global best corresponding to a neighborhood consisting of the entire swarm. The tendency now is to use partly random neighborhoods (see Standard PSO on the Particle Swarm Central).
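The overlapping ring topology described above can be sketched as a helper that computes each particle's neighborhood best. The function name and the list-of-lists representation are assumptions for illustration; `k` is the number of neighbors on each side, so `k=1` gives neighborhoods of size three.

```python
def ring_neighborhood_best(p_best, fitness, k=1):
    """For each particle, return the best of the personal-best positions in
    its ring neighborhood: itself plus k particles on either side, with
    indices wrapping around so the neighborhoods overlap."""
    n = len(p_best)
    nbest = []
    for i in range(n):
        # Indices i-k .. i+k, modulo n, form particle i's neighborhood.
        neighbors = [p_best[(i + d) % n] for d in range(-k, k + 1)]
        nbest.append(min(neighbors, key=fitness))
    return nbest

# Usage: five one-dimensional personal bests, fitness = squared value.
bests = ring_neighborhood_best([[3.0], [1.0], [-2.0], [4.0], [0.5]],
                               lambda x: x[0] ** 2)
```

Because adjacent neighborhoods share particles, a good position found anywhere propagates around the ring one step per iteration, which is exactly why neighborhoods slow, rather than prevent, the spread of new bests.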
A single particle by itself is unable to accomplish anything. The power is in interactive collaboration.
Let f : ℝ^m → ℝ be the fitness function that takes a particle's solution, with several components in a higher-dimensional space, and maps it to a single-dimensional metric. Let there be n particles, each with associated positions x_i ∈ ℝ^m and velocities v_i ∈ ℝ^m, i = 1, …, n. Let p_i be the current best position of each particle and let g be the global best.
Initialize x_i and v_i for all i. One common choice is to take x_ij ∈ U[a_j, b_j] and v_i = 0 for all i and j = 1, …, m, where a_j, b_j are the limits of the search domain in each dimension, and U represents the uniform distribution (continuous).
p_i ← x_i and g ← argmin_{x_i} f(x_i).
While not converged:
For each particle 1 ≤ i ≤ n:
Create random vectors r_p, r_g by taking r_p,j, r_g,j ∈ U[0, 1] for all j = 1, …, m.
Update the particle velocities: v_i ← ω v_i + φ_p r_p ∘ (p_i − x_i) + φ_g r_g ∘ (g − x_i), where ∘ denotes element-wise multiplication.
Update the particle positions: x_i ← x_i + v_i.
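The canonical algorithm can be sketched as a short, self-contained function. This is one possible rendering, not a reference implementation: the coefficient values, particle count, iteration budget, and function name are illustrative choices, and best-position bookkeeping (omitted above at this point in the listing) is filled in the standard way.

```python
import random

def pso(f, bounds, n_particles=30, iters=200,
        omega=0.7, phi_p=1.5, phi_g=1.5, seed=0):
    """Global-best PSO sketch: minimize f over a box given by
    bounds = [(a_1, b_1), ..., (a_m, b_m)]."""
    rng = random.Random(seed)
    m = len(bounds)
    # Initialize positions uniformly in [a_j, b_j] and velocities to zero.
    x = [[rng.uniform(a, b) for a, b in bounds] for _ in range(n_particles)]
    v = [[0.0] * m for _ in range(n_particles)]
    p = [xi[:] for xi in x]        # personal bests p_i
    g = min(p, key=f)[:]           # global best g
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(m):
                r_p, r_g = rng.random(), rng.random()   # fresh U[0, 1] draws
                # Velocity update: inertia + cognitive + social terms.
                v[i][j] = (omega * v[i][j]
                           + phi_p * r_p * (p[i][j] - x[i][j])
                           + phi_g * r_g * (g[j] - x[i][j]))
                x[i][j] += v[i][j]                       # position update
            # Standard best-position bookkeeping.
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]
                if f(p[i]) < f(g):
                    g = p[i][:]
    return g

# Usage: minimize the sphere function on [-5, 5]^2.
best = pso(lambda x: sum(t * t for t in x), [(-5.0, 5.0), (-5.0, 5.0)])
```

On a smooth unimodal function like the sphere, the swarm contracts onto the minimum; on multimodal functions the same code exhibits the premature-convergence behavior discussed earlier, which is what neighborhood topologies and restarts are meant to counter.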
Particle swarm optimization - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Particle_swarm_optimization