S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey Wolf Optimizer, Advances in Engineering Software, vol. 69, pp. 46-61, 2014, DOI: http://dx.doi.org/10.1016/j.advengsoft.2013.12.007
Grey Wolf Optimizer
Seyedali Mirjalili 1, Seyed Mohammad Mirjalili 2, Andrew Lewis 1
1 School of Information and Communication Technology, Griffith University, Nathan, Brisbane, QLD 4111, Australia
2 Department of Electrical Engineering, Faculty of Electrical and Computer Engineering, Shahid Beheshti University, G. C. 1983963113, Tehran, Iran
seyedali.mirjalili@griffithuni.edu.au, mohammad.smm@gmail.com, a.lewis@griffith.edu.au
Abstract: This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO), inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves (alpha, beta, delta, and omega) are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm provides very competitive results compared with these well-known meta-heuristics. The paper also considers three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results on the classical engineering design problems and the real application demonstrate that the proposed algorithm is applicable to challenging problems with unknown search spaces.
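The leadership-and-hunting mechanism summarized above can be sketched in a few lines. The following is a minimal illustration only, not the paper's reference implementation: the pack size, iteration count, boundary handling, and test objective are illustrative choices, while the leader-guided position update follows the encircling/attacking idea described in the abstract.

```python
import random

def gwo(objective, dim, bounds, n_wolves=20, n_iters=200):
    """Minimal GWO sketch: the pack follows its three best wolves
    (alpha, beta, delta); each wolf moves toward the mean of the
    three leader-guided positions."""
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(n_iters):
        wolves.sort(key=objective)
        # Copy the leaders so the in-place updates below do not corrupt them.
        alpha, beta, delta = (list(w) for w in wolves[:3])
        a = 2 - 2 * t / n_iters              # decreases linearly from 2 to 0
        for w in wolves:
            for j in range(dim):
                move = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a       # |A| > 1 explores, |A| < 1 attacks
                    C = 2 * r2
                    D = abs(C * leader[j] - w[j])  # distance to this leader
                    move += leader[j] - A * D
                w[j] = min(max(move / 3, lo), hi)  # mean of the three moves
    return min(wolves, key=objective)

# Illustrative use on a sphere function (global minimum at the origin).
best = gwo(lambda v: sum(x * x for x in v), dim=5, bounds=(-10, 10))
```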
Keywords: optimization, optimization techniques, heuristic algorithms, meta-heuristics, constrained optimization, GWO
1. Introduction
Meta-heuristic optimization techniques have become very popular over the last two decades. Perhaps surprisingly, some of them, such as the Genetic Algorithm (GA) [1], Ant Colony Optimization (ACO) [2], and Particle Swarm Optimization (PSO) [3], are well-known not only among computer scientists but also among scientists from other fields. In addition to a large body of theoretical work, these optimization techniques have been applied in many fields of study. A natural question is why meta-heuristics have become so remarkably common. The answer can be summarized in four main reasons: simplicity, flexibility, a derivative-free mechanism, and local optima avoidance.
First, meta-heuristics are fairly simple: most are inspired by very simple concepts, typically related to physical phenomena, animal behaviors, or evolutionary principles. This simplicity allows computer scientists to simulate new natural concepts, propose new meta-heuristics, hybridize two or more meta-heuristics, or improve existing ones. It also helps scientists in other fields learn meta-heuristics quickly and apply them to their own problems.
Second, flexibility refers to the applicability of meta-heuristics to different problems without any special changes to the structure of the algorithm. Meta-heuristics are readily applicable to different problems because they mostly treat problems as black boxes: only the input(s) and output(s) of a system matter to a meta-heuristic. All a designer needs to know, therefore, is how to represent the problem in a form a meta-heuristic can process.
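The black-box view can be made concrete with a small sketch. The optimizer below is a deliberately simple random search, standing in for any meta-heuristic; because it interacts with the problem solely through objective-function evaluations, the same code applies unchanged to structurally different problems. All names and parameters here are illustrative.

```python
import random

def random_search(objective, dim, bounds, n_iters=2000):
    """Interacts with the problem only through objective(x) calls:
    the problem is a black box whose internals never matter."""
    lo, hi = bounds
    best = [random.uniform(lo, hi) for _ in range(dim)]
    best_val = objective(best)
    for _ in range(n_iters):
        cand = [random.uniform(lo, hi) for _ in range(dim)]
        val = objective(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# The same optimizer, untouched, on two structurally different problems:
_, sphere_val = random_search(lambda v: sum(x * x for x in v), 2, (-5, 5))
_, ridge_val = random_search(lambda v: max(abs(x) for x in v), 2, (-5, 5))
```

Only the problem representation (here, a flat list of real variables) is shared between the two uses; neither objective required any change to the optimizer itself.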
Third, the majority of meta-heuristics are derivative-free. In contrast to gradient-based optimization approaches, meta-heuristics optimize problems stochastically: the optimization process starts from random solution(s), and there is no need to calculate derivatives of the search space to find the optimum. This makes meta-heuristics highly suitable for real problems with expensive or unknown derivative information.
Finally, meta-heuristics have superior abilities to avoid local optima compared with conventional optimization techniques. This is due to their stochastic nature, which allows them to avoid stagnation in local solutions and to search the entire search space extensively. The search space of a real problem is usually unknown and very complex, with a massive number of local optima, so meta-heuristics are good options for optimizing such challenging problems.
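As a small illustration of this point, the stochastic multi-start sketch below (all names and parameters illustrative, not from this paper) repeatedly restarts a local search from random points on a highly multimodal one-dimensional function; the randomness lets it escape the many local minima that would trap a single deterministic descent.

```python
import math
import random

def multi_start(objective, bounds, n_restarts=60, n_local=300, step=0.1):
    """Random restarts plus a stochastic local search: each restart may
    land in a different basin, so the search is not trapped by the
    first local optimum it finds."""
    lo, hi = bounds
    best_x, best_f = None, float("inf")
    for _ in range(n_restarts):
        x = random.uniform(lo, hi)        # stochastic starting point
        fx = objective(x)
        for _ in range(n_local):          # greedy local refinement
            c = min(max(x + random.gauss(0, step), lo), hi)
            fc = objective(c)
            if fc < fx:
                x, fx = c, fc
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Rastrigin-like function: many local minima, global minimum f(0) = 0.
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
x_best, f_best = multi_start(f, (-5.12, 5.12))
```

A single local descent started in the wrong basin would settle on one of the suboptimal minima near the nonzero integers; the random restarts make finding a near-global basin overwhelmingly likely.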