Gradient-Based Optimizer: A New Metaheuristic Optimization Algorithm
Iman Ahmadianfar 1*, Omid Bozorg-Haddad 2, Xuefeng Chu 3

1 Assistant Professor, Dept. of Civil Engineering, Behbahan Khatam Alanbia Univ. of Technology, Behbahan, Iran (corresponding author). E-mail: i.ahmadianfar@bkatu.ac.ir, +989371588755.
2 Professor, Dept. of Irrigation and Reclamation Engineering, Faculty of Agricultural Engineering and Technology, College of Agriculture and Natural Resources, Univ. of Tehran, Karaj, Tehran, Iran. E-mail: OBHaddad@ut.ac.ir.
3 Professor, Department of Civil & Environmental Engineering, North Dakota State University, Department 2470, Fargo, ND, USA. E-mail: xuefeng.chu@ndsu.edu.
Abstract
In this study, a novel metaheuristic optimization algorithm, the gradient-based optimizer (GBO), is proposed. The GBO, inspired by the gradient-based Newton's method, uses two main operators, the gradient search rule (GSR) and the local escaping operator (LEO), together with a set of vectors to explore the search space. The GSR employs the gradient-based method to enhance the exploration tendency and accelerate the convergence rate, thereby achieving better positions in the search space. The LEO enables the proposed GBO to escape from local optima. The performance of the new algorithm was evaluated in two phases: 28 mathematical test functions were first used to evaluate various characteristics of the GBO, and six engineering problems were then optimized by the GBO. In the first phase, the GBO was compared with five existing optimization algorithms; the results indicate that the GBO yielded very promising solutions due to its enhanced capabilities of exploration, exploitation, convergence, and effective avoidance of local optima. The second phase also demonstrated the superior performance of the GBO in solving complex real-world engineering problems. Source codes of the GBO algorithm are publicly available at
http://imanahmadianfar.com/codes/.
Keywords: Optimization; Gradient-based method; Metaheuristic algorithm; Constrained
optimization problem.
1. Introduction
Many real-world applications in various science and engineering fields can be converted
to optimization problems. However, such problems are often highly non-convex, nonlinear,
and multimodal. Although a variety of optimization algorithms have been developed, they
frequently fail to provide satisfactory results for such challenging problems, which emphasizes
the need for new optimization methods. Metaheuristic algorithms (MAs) [1], which are
known as global optimization techniques, have been successfully used to solve various complex,
real-world optimization problems [2, 3]. Metaheuristic methods draw on principles from physics,
swarm intelligence, and biology [4].
In recent decades, many different MAs have been developed and applied. For example, the
genetic algorithm (GA) was derived from Darwin's theory of evolution [5]. The differential
evolution (DE) algorithm employs the same operators (i.e., mutation and crossover) as
the GA, but with a different approach [6]: it uses the difference between two
randomly selected vectors to generate a new vector (a minimal sketch of this operator is given
after this paragraph). Particle swarm optimization (PSO) was
inspired by the social behaviors of birds and fish catching food [7]. The artificial bee colony
(ABC) algorithm simulates the information-sharing capability and food-foraging behavior of
honey bees [8]. The gravitational search algorithm (GSA) uses the laws of gravity and motion
[9]. The bat algorithm (BA) simulates the echolocation behavior of bats [10]. The grey
wolf optimizer (GWO) mimics the hunting behavior of grey wolves in nature [11]. The sine
cosine algorithm (SCA) uses a mathematical model based on sine and cosine functions
[4]. The thermal exchange optimization (TEO) algorithm is based on
Newton's law of cooling [12]. Atom search optimization (ASO) simulates the motion of atoms in
nature based on atom dynamics [13].
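As a point of reference for the difference-vector idea mentioned above, the following is a minimal sketch of the classical DE/rand/1 mutation operator; the scale factor F = 0.8 and the NumPy population layout are illustrative assumptions, not details taken from this paper:

```python
import numpy as np

def de_rand_1_mutation(pop, F=0.8):
    """Classical DE/rand/1 mutation: each mutant is a randomly chosen
    base vector perturbed by the scaled difference of two other
    randomly selected population members (assumes pop has >= 4 rows)."""
    n, dim = pop.shape
    mutants = np.empty_like(pop)
    for i in range(n):
        # pick three distinct indices, all different from i
        r1, r2, r3 = np.random.choice(
            [j for j in range(n) if j != i], size=3, replace=False)
        # the difference vector (pop[r2] - pop[r3]) steers the base vector
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return mutants
```

In the full DE algorithm, these mutants would then be recombined with the parent vectors via crossover and accepted only if they improve the objective value.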
The aforementioned methods are categorized as population-based algorithms, which
involve a set of solutions in the optimization process. The search engines of such optimization
methods are based on the different phenomena described above. Many studies have demonstrated
successful applications of these methods to a broad variety of real-world problems [2, 14, 15].
Generally, the population-based optimizers share common information despite their different natures [16].
In these algorithms, the search engine implements two steps: exploration and exploitation [17].
Exploration involves searching new positions far from the current position across the entire search
area, while exploitation aims to refine near-optimal positions. Exploration
alone may lead to new positions with low accuracy, whereas exploitation
alone increases the chance of getting stuck in local optima. Many studies have emphasized the
importance of balancing the exploration and exploitation processes in metaheuristic
algorithms [15]; hence, creating a suitable balance between these two processes is crucial [18].
A simple illustration of such a balancing schedule is sketched below.
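To make this trade-off concrete, the following is a minimal sketch of a linearly decaying control parameter of the kind often used (e.g., the inertia weight in PSO) to shift a search from exploration toward exploitation; the bounds 0.9 and 0.4 are conventional illustrative values, not parameters of the GBO:

```python
def exploration_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decay a control weight from w_max to w_min so that
    early iterations take large, exploratory steps and later
    iterations take small, exploitative ones."""
    return w_max - (w_max - w_min) * t / t_max

# e.g., at t = 0 the weight is 0.9 (strong exploration);
# at t = t_max it has shrunk to 0.4 (strong exploitation).
```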
Most metaheuristic algorithms attempt to create a proper trade-off between
exploration and exploitation. To this end, some studies have sought to enhance the
efficiency of basic algorithms through suitable settings of the control parameters or through hybridization
of optimization algorithms [19-21]. However, creating a suitable balance between
exploration and exploitation in metaheuristic methods remains a challenging and unsolved issue.
Moreover, according to the No Free Lunch (NFL) theorem [22], no metaheuristic algorithm
can solve all problems; a specific algorithm may provide very good results for one
set of problems but perform poorly on a different set.
The NFL theorem also implies that this field of research is highly dynamic, which has led to the development of
many new metaheuristic optimization algorithms over the years. This study attempts to fill the
research gap by proposing a new metaheuristic algorithm with population-based characteristics.
Thus, the main objective of this study is to develop a novel gradient-based metaheuristic
algorithm, namely the gradient-based optimizer (GBO). The most popular gradient-based search
methods include Newton's method [23], the quasi-Newton method [24], the Levenberg–Marquardt
(LM) algorithm [25], and the conjugate direction method [26]. These methods have been applied
in many studies to solve different types of optimization problems. For example, Salajegheh and
Salajegheh combined the quasi-Newton method with the PSO algorithm to improve the
performance and reliability of the basic PSO [27]. Ibtissem and Nouredine [28] introduced a
hybrid of the DE algorithm and the conjugate gradient method to increase the local search ability
of the basic DE. Shahidi et al. [29] developed a self-adaptive optimization algorithm employing
the conjugate gradient method for local search. Bandurski and Kwedlo [30] combined the
conjugate gradient method with the DE algorithm to improve the local search of the basic DE
algorithm. Parwani et al. [31] introduced a hybrid DE with a local optimization method, in which
the conjugate gradient method was used for local search to increase the convergence speed.
These studies demonstrated the important role of gradient-based methods. Therefore, this
study proposes the GBO algorithm, whose search engine is based on Newton's method and which
employs a set of vectors to search the solution space using two operators: the
gradient search rule (GSR) and the local escaping operator (LEO). The performance of the GBO
is evaluated using 28 mathematical test functions and six real-world engineering optimization
problems that have been examined in previous studies.
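For reference, the following is a minimal sketch of the classical Newton iteration, x_{k+1} = x_k - f(x_k)/f'(x_k), that underlies the gradient-based methods listed above; it is the textbook root-finding form, not the GBO update itself:

```python
def newton_iteration(f, df, x0, tol=1e-10, max_iter=100):
    """Classical Newton's method for a root of f: repeatedly move
    along the tangent line, x <- x - f(x)/f'(x), until the step
    size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2) ~= 1.41421.
root = newton_iteration(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

The GSR adapts this kind of update to a derivative-free, population-based setting, which is what distinguishes the GBO from the classical gradient-based methods above.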