Grey Wolf Optimizer
Seyedali Mirjalili a,*, Seyed Mohammad Mirjalili b, Andrew Lewis a
a School of Information and Communication Technology, Griffith University, Nathan Campus, Brisbane QLD 4111, Australia
b Department of Electrical Engineering, Faculty of Electrical and Computer Engineering, Shahid Beheshti University, G.C. 1983963113, Tehran, Iran
Article info
Article history:
Received 27 June 2013
Received in revised form 18 October 2013
Accepted 11 December 2013
Available online 21 January 2014
Keywords:
Optimization
Optimization techniques
Heuristic algorithm
Metaheuristics
Constrained optimization
GWO

Abstract
This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO), inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves (alpha, beta, delta, and omega) are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and the real application show that the proposed algorithm is applicable to challenging problems with unknown search spaces.
© 2013 Elsevier Ltd. All rights reserved.
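For readers who want a concrete picture of the hunting mechanism summarized above, the following sketch implements a minimal leadership-guided update in the spirit of GWO: every wolf moves toward the three best solutions found so far (alpha, beta, delta), with a coefficient `a` decreasing linearly from 2 to 0 as in the standard GWO formulation. Details such as bounds handling and the elitist tracking of the three leaders are simplifications for illustration, not the paper's exact implementation:

```python
import random

def gwo(fitness, dim, bounds, n_wolves=20, max_iter=200):
    """Minimal Grey Wolf Optimizer sketch (minimization).

    Wolves update their positions toward the three best solutions
    found so far (alpha, beta, delta), mimicking the pack hierarchy.
    """
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    # alpha, beta, delta: the best three positions seen so far
    leaders = sorted(wolves, key=fitness)[:3]
    for t in range(max_iter):
        a = 2 - 2 * t / max_iter  # decreases linearly from 2 to 0
        for i, x in enumerate(wolves):
            new = []
            for d in range(dim):
                guided = []
                for leader in leaders:
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a  # |A| > 1 explores, |A| < 1 exploits
                    C = 2 * r2          # randomizes the leader's influence
                    D = abs(C * leader[d] - x[d])
                    guided.append(leader[d] - A * D)
                # average the pulls from alpha, beta, and delta, clamped to bounds
                new.append(max(lo, min(hi, sum(guided) / 3)))
            wolves[i] = new
        # keep the leaders elitist so the best-so-far never degrades
        leaders = sorted(wolves + leaders, key=fitness)[:3]
    return leaders[0]

best = gwo(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
print(best)
```

On the sphere function used here, the pack collapses onto the leaders as `a` approaches 0, which is the "attacking prey" phase described in the abstract.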
1. Introduction
Meta-heuristic optimization techniques have become very popular over the last two decades. Some of them, such as the Genetic Algorithm (GA) [1], Ant Colony Optimization (ACO) [2], and Particle Swarm Optimization (PSO) [3], are well known not only among computer scientists but also among scientists from other fields. In addition to the large number of theoretical works, such optimization techniques have been applied in many fields of study. Why have meta-heuristics become so remarkably common? The answer can be summarized in four main reasons: simplicity, flexibility, derivation-free mechanism, and local optima avoidance.
First, meta-heuristics are fairly simple. They are mostly inspired by very simple concepts, typically related to physical phenomena, animal behaviors, or evolutionary ideas. This simplicity allows computer scientists to simulate diverse natural concepts, propose new meta-heuristics, hybridize two or more meta-heuristics, or improve existing ones. It also helps scientists in other fields learn meta-heuristics quickly and apply them to their own problems.
Second, flexibility refers to the applicability of meta-heuristics to different problems without special changes to the structure of the algorithm. Meta-heuristics are readily applicable to different problems because they mostly treat problems as black boxes: only the input(s) and output(s) of a system matter to the algorithm. All a designer needs, therefore, is to know how to represent the problem for a meta-heuristic.
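In practice, "representing the problem" usually means wrapping it as a single fitness function that maps a candidate solution to a quality score. The sketch below shows what such a black-box interface could look like for a welded-beam-style design problem; the variable names, cost expression, and constraint here are invented for illustration, not taken from any specific benchmark:

```python
def fitness(x):
    """Black-box interface seen by a meta-heuristic: a candidate
    solution x goes in, one quality score comes out (lower is better).
    x = [h, l, t, b]: four design variables of a hypothetical beam.
    All constants below are illustrative, not a real benchmark."""
    h, l, t, b = x
    cost = 1.1 * h * h * l + 0.05 * t * b * (14.0 + l)  # toy material cost
    penalty = 0.0
    if h > b:  # a design constraint, handled as a large penalty
        penalty += 1e6 * (h - b)
    return cost + penalty

print(fitness([0.2, 3.0, 9.0, 0.2]))
```

Any meta-heuristic that accepts a function of this shape can attack the problem without knowing anything about beams, which is exactly the flexibility argued for above.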
Third, the majority of meta-heuristics have derivation-free mechanisms. In contrast to gradient-based optimization approaches, meta-heuristics optimize problems stochastically. The optimization process starts with random solution(s), and there is no need to calculate the derivative of the search space to find the optimum. This makes meta-heuristics highly suitable for real problems with expensive or unknown derivative information.
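The derivation-free idea can be made concrete with a few lines of code. The sketch below is a simple improvement-only random search (not the paper's algorithm): it starts from a random solution and accepts any random perturbation that improves the score, never computing a derivative:

```python
import random

def random_search(fitness, dim, bounds, iters=2000, step=0.5):
    """Derivative-free optimization sketch: start from a random point
    and keep any random perturbation that improves the score.  No
    gradient of `fitness` is ever computed."""
    lo, hi = bounds
    best = [random.uniform(lo, hi) for _ in range(dim)]
    best_f = fitness(best)
    for _ in range(iters):
        # Gaussian perturbation of every coordinate, clamped to bounds
        cand = [max(lo, min(hi, v + random.gauss(0, step))) for v in best]
        f = fitness(cand)
        if f < best_f:  # stochastic acceptance: improvements only
            best, best_f = cand, f
    return best, best_f

# Works on any black-box objective, differentiable or not:
sol, val = random_search(lambda x: sum(abs(v) for v in x), dim=3, bounds=(-5, 5))
print(val)
```

The objective used here (sum of absolute values) is non-differentiable at the optimum, which would trouble a gradient-based method but is irrelevant to this kind of search.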
Finally, meta-heuristics have superior abilities to avoid local optima compared to conventional optimization techniques. This is due to their stochastic nature, which allows them to avoid stagnation in local solutions and to search the entire search space extensively. Since the search space of a real problem is usually unknown and very complex, with a massive number of local optima, meta-heuristics are good options for optimizing such challenging problems.
The No Free Lunch (NFL) theorem [4] is worth mentioning here. This theorem proves logically that no meta-heuristic is best suited for solving all optimization problems. In other words, a particular meta-heuristic may show very promising results on one set of problems, but the same algorithm may perform poorly on a different set. The NFL theorem therefore keeps this field of study highly active, resulting in enhancements of current approaches and new meta-heuristics being proposed every year. This also
http://dx.doi.org/10.1016/j.advengsoft.2013.12.007
* Corresponding author. Tel.: +61 434555738.
E-mail addresses: seyedali.mirjalili@griffithuni.edu.au (S. Mirjalili), mohammad.smm@gmail.com (S.M. Mirjalili), a.lewis@griffith.edu.au (A. Lewis).
Advances in Engineering Software 69 (2014) 46–61