the hidden neurons [10]. The advantage of the LC-ELM in image watermarking was examined by Mehta et al. [11].
In the LC-ELM learning algorithm, the addresses and radii are generally preset empirically or randomly. These parameters might therefore not be optimal for the LC-ELM, and the algorithm may yield an inappropriate underlying model. In 2015, Qu et al. presented an evolutionary local coupled extreme learning machine (ELC-ELM), in which the differential evolution (DE) algorithm was used to optimize the addresses and the radii of the fuzzy membership functions in the hidden neurons in order to improve the generalization performance [12]. However, it should be noted that the hidden biases and input weights in the ELC-ELM were still set randomly.
The DE algorithm has a good global convergence property because it exploits the differential information of the population. For the same reason, however, DE can perform unstably and may become trapped in local optima [13,14]. Moreover, three parameters of the DE algorithm must be tuned manually [15]. In 1995, the particle swarm optimization (PSO) algorithm was presented by Eberhart et al. [16]; it has since been applied in many optimization fields, as it can converge to the global minimum quickly. Compared with other stochastic optimization techniques, the advantages of the PSO algorithm are that it is easy to implement in practice and that few parameters need to be adjusted [17,18]. The PSO algorithm and its improved variants, such as APSO (adaptive PSO) and PSOGSA (the hybrid of PSO and the gravitational search algorithm), have been used to select the optimal parameters between the input layer and the hidden layer (input weights and biases) of the ELM [19,20].
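To make the update rule concrete, the following minimal sketch shows one iteration of the standard global-best PSO in Python; the inertia weight w and acceleration coefficients c1 and c2 are illustrative values only, and the improved PSO variant adopted in this paper (described in Section 2) modifies this basic scheme.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of standard global-best PSO.

    pos, vel : (n_particles, dim) current positions and velocities
    pbest    : (n_particles, dim) best position found by each particle
    gbest    : (dim,) best position found by the whole swarm
    w, c1, c2: inertia weight and acceleration coefficients
               (illustrative values, not the paper's settings)
    """
    r1 = np.random.rand(*pos.shape)
    r2 = np.random.rand(*pos.shape)
    # cognitive term pulls toward pbest; social term pulls toward gbest
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    return pos, vel
```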
Therefore, in order to overcome the limitations of DE, a new method combining the LC-ELM with an improved PSO, called LC-PSO-ELM, is proposed in this paper. In the proposed algorithm, the improved PSO is used to optimize the addresses and window radii of the local coupled parameters. In addition, the input weights and hidden layer biases of the ELM are optimized to further improve the generalization performance of the LC-ELM, and the MP generalized inverse is used to compute the weights between the hidden layer and the output layer analytically. To demonstrate the superiority of the proposed algorithm, we compared computer simulation results from the developed algorithm with those from the ELM, LC-ELM and PSO-ELM algorithms. The comparison results show that the newly developed algorithm exhibits improved generalization performance with the highest accuracy.
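As a rough illustration of how the analytic solve enters the search, the sketch below scores one PSO candidate. Here build_hidden is a hypothetical helper that maps the packed parameters (input weights, hidden biases, addresses and window radii) to the hidden-layer output matrix H, and training RMSE is used as a stand-in fitness; neither is necessarily the paper's exact choice.

```python
import numpy as np

def solve_output_weights(H, T):
    """Analytic output-weight solve via the MP generalized inverse."""
    return np.linalg.pinv(H) @ T

def fitness(params, X, T, build_hidden):
    """Quality of one PSO candidate (sketch, not the paper's exact
    objective): build the hidden-layer output H from the packed
    parameters, solve the output weights analytically, and score
    the candidate by its training RMSE (smaller is better)."""
    H = build_hidden(params, X)           # hypothetical helper
    beta = solve_output_weights(H, T)
    return np.sqrt(np.mean((H @ beta - T) ** 2))
```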
The rest of this paper is organized as follows. The local coupled extreme learning machine (LC-ELM) and the improved particle swarm optimization algorithm are described in Section 2. The local coupled extreme learning machine based on the PSO algorithm is introduced in Section 3. Section 4 presents simulation results and an analysis of the proposed algorithm on regression and classification benchmark problems. Finally, the conclusions are summarized in Section 5.
2. Theoretical Background
2.1. Local Coupled Extreme Learning Machine
The ELM learning algorithm is a simple, fast and efficient method. To further improve the generalization performance of the ELM, the LC-ELM learning algorithm was proposed by Qu [10], in which the efficiency of the LC-ELM on classification and regression benchmark problems was investigated.
In the LC-ELM, owing to the use of the fuzzy membership function $F(\cdot)$ and the similarity relation $S(x, d_i)$, the complexity of the weight searching space is reduced, and the generalization performance is correspondingly improved with a simpler network structure.
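Before the formal definitions, a minimal sketch of this windowing idea may help (it could also serve as the build_hidden helper sketched in Section 1): each neuron's usual ELM activation is multiplied by a membership value that decays with the distance between the input and that neuron's address. The sigmoid activation and the Gaussian membership over Euclidean distance are illustrative choices, not necessarily the exact $F(\cdot)$ and $S(\cdot, \cdot)$ used in the paper.

```python
import numpy as np

def lc_hidden_layer(X, W, b, D, sigma):
    """LC-ELM hidden layer (sketch under illustrative choices):
    each neuron's ELM activation g(w_i . x_j + b_i) is scaled by a
    Gaussian membership of the Euclidean distance between the input
    x_j and the neuron's address d_i.

    X: (M, p) inputs; W: (p, L) input weights; b: (L,) biases;
    D: (L, p) addresses; sigma: (L,) window radii.
    """
    g = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # sigmoid g(.)
    dist = np.linalg.norm(X[:, None, :] - D[None, :, :], axis=2)
    return g * np.exp(-((dist / sigma) ** 2))          # (M, L) windowed H
```

The mathematical formulation of the LC-ELM is presented as follows: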
For $M$ arbitrary distinct examples $(x_i, t_i)$, where $x_i = [x_{i1}, x_{i2}, \cdots, x_{ip}] \in \mathbb{R}^p$ is the input and $t_i = [t_{i1}, t_{i2}, \cdots, t_{iq}] \in \mathbb{R}^q$ is the expected output, $i = 1, \ldots, M$. The output of the hidden layer neurons $g(w_i \cdot x_j + b_i)$ for the ELM is modified with the help of the fuzzy membership function as