/*
Standard PSO 2007
Contact for remarks, suggestions etc.:
MauriceClerc@WriteMe.com
Last update
2007-12-10 Warning about rotational invariance (valid here only in 2D)
2007-11-22 stop criterion (option): distance to solution < epsilon
and log_progress evaluation
2007-11-21 Ackley function
-------------------------------- Contributors
The works and comments of the following persons have been taken
into account while designing this standard. Sometimes this is for
including a feature, and sometimes for leaving out one.
Auger, Anne
Blackwell, Tim
Bratton, Dan
Clerc, Maurice
Croussette, Sylvain
Dattasharma, Abhi
Eberhart, Russell
Hansen, Nikolaus
Keko, Hrvoje
Kennedy, James
Krohling, Renato
Langdon, William
Liu, Hongbo
Miranda, Vladimiro
Poli, Riccardo
Serra, Pablo
Stickel, Manfred
-------------------------------- Motivation
Quite often, researchers claim to compare their version of PSO
with the "standard one", but the "standard one" itself seems to vary!
Thus, it is important to define a real standard that would stay
unchanged for at least one year.
This PSO version does not intend to be the best one on the market
(in particular, there is no adaptation of the swarm size nor of the
coefficients). This is simply very near to the original version (1995),
with just a few improvements based on some recent works.
--------------------------------- Metaphors
swarm: A team of communicating people (particles)
At each time step
Each particle chooses a few informants at random, selects the best
one from this set, and takes into account the information given by
the chosen particle.
If it finds no particle better than itself, then the "reasoning" is:
"I am the best, so I just take my current velocity and my previous
best position into account"
----------------------------------- Parameters/Options
clamping := true/false => whether to clamp positions or not
randOrder:= true/false => whether to avoid the bias due to the loop
on particles "for s = 1 to swarm_size ..." or not
rotation := true/false => whether the algorithm is sensitive
to a rotation of the landscape or not
You may also modify the following ones, although suggested values
are either hard coded or automatically computed:
S := swarm size
K := maximum number of particles _informed_ by a given one
w := first cognitive/confidence coefficient
c := second cognitive/confidence coefficient
----------------------------------- Equations
For each particle and each dimension
Equation 1: v(t+1) = w*v(t) + R(c)*(p(t)-x(t)) + R(c)*(g(t)-x(t))
Equation 2: x(t+1) = x(t) + v(t+1)
where
v(t) := velocity at time t
x(t) := position at time t
p(t) := best previous position of the particle
g(t) := best position amongst the best previous positions
of the informants of the particle
R(c) := a number coming from a random distribution, which depends on c
In this standard, the distribution is uniform on [0,c]
Note 1:
When the particle has no informant better than itself,
it implies p(t) = g(t)
Therefore, Equation 1 gets modified to:
v(t+1) = w*v(t) + R(c)*(p(t)-x(t))
Note 2:
When the "non sensitivity to rotation" option is activated
(p(t)-x(t)) (and (g(t)-x(t))) are replaced by rotated vectors,
so that the final DNPP (Distribution of the Next Possible Positions)
is not dependent on the system of co-ordinates.
----------------------------------- Information links topology
A lot of work has been done about this topic. The main result is this:
There is no "best" topology. Hence the random approach used here.
----------------------------------- Initialisation
Initial positions are chosen at random inside the search space
(which is supposed to be a hyperparallelepiped, and often even
a hypercube), according to a uniform distribution.
This is not the best way, but the one used in the original PSO.
Each initial velocity is simply defined as the half-difference of two
random positions. It is simple, and needs no additional parameter.
However, again, it is not the best approach. The resulting distribution
is not even uniform, as is the case for any method that uses a
uniform distribution independently for each component.
The mathematically correct approach needs to use a uniform
distribution inside a hypersphere. It is not very difficult,
and was indeed used in some PSO versions. However, it is quite
different from the original one.
Moreover, it may be meaningless for some heterogeneous problems,
when each dimension has a different "interpretation".
------------------------------------ From SPSO-06 to SPSO-07
The main differences are:
1. option "non sensitivity to rotation of the landscape"
Note: although theoretically interesting, this option is quite
computer time consuming, and the improvement in result may
only be marginal.
2. option "random permutation of the particles before each iteration"
Note: same remark. Time consuming, no clear improvement
3. option "clamping position or not"
Note: in a few rare cases, not clamping positions may induce an
infinite run, if the stop criterion is the maximum number of
evaluations
4. probability p of a particular particle being an informant of another
particle. In SPSO-06 it was implicit (by building the random infonetwork)
Here, the default value is directly computed as a function of (S,K),
so that the infonetwork is exactly the same as in SPSO-06.
However, now it can be "manipulated" ( i.e. any value can be assigned)
5. The search space can be quantised (however this algorithm is _not_
for combinatorial problems)
Also, the code is far more modular. It means it is slower, but easier
to translate into another language, and easier to modify.
----------------------------------- Use
Define the problem (you may add your own one in problemDef() and perf())
Choose your options
Run and enjoy!
------------------------------------ Some results
So that you can check your own implementation, here are some results.
(clamping, randOrder, rotation, stop) = (1,1,1,0)
Function Domain error Nb_of_eval Nb_of_runs Result
Parabola [-100,100]^30 0.0001 6000 100 52% (success rate)
shifted "" "" "" "" 7%
shifted "" "" 7000 "" 100%
Griewank [-100,100]^30 0.0001 9000 100 51%
Rosenbrock [-10,10]^30 0.0001 40000 50 31
Rastrigin [-10,10]^30 0.0001 40000 50 53 (error mean)
Tripod [-100,100]^2 0.0001 10000 50 50%
(clamping, randOrder, rotation, stop) = (1,1,0,0)
Parabola [-100,100]^30 0.0001 6000 100 0.69 (0%)
shifted "" "" "" "" 2.16 (0%)
Griewank [-100,100]^30 0.0001 9000 100 14%
Rosenbrock [-10,10]^30 0.0001 40000 100 38.7
Rastrigin [-10,10]^30 0.0001 40000 100 54.8 (error mean)
Tripod [-100,100]^2 0.0001 10000 100 47%
Because of randomness, and also depending on your own pseudo-random
number generator, you just have to find similar values,
not exactly these ones.
*/
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>
#define D_max 100 // Max number of dimensions of the search space
#define R_max 200 // Max number of runs
#define S_max 100 // Max swarm size
#define zero 0 // Or 1.0e-30, to avoid numerical instabilities
// Structures
struct quantum
{
double q[D_max];
int size;
};
struct SS
{
int D;
double max[D_max];
double min[D_max];
struct quantum q; // Quantisation step size. 0 => continuous problem
};
struct param
{
double c; // Confidence coefficient
int clamping; // Position clamping or not
int K; // Max number of particles informed by a given one
double p; // Probability threshold for random topology
// (is actually computed as p(S,K) )
int randOrder; // Random choice of particles or not
int rotation; // Sensitive to rotation or not
int S; // Swarm size
int stop; // Flag for stop criterion
double w; // Confidence coefficient
};
struct position
{
double f;
int improved;
int size;
double x[D_max];
};
struct velocity
{
int size;
double v[D_max];
};