Mathematics 2023, 11, 2680 2 of 30
require us to consider large models of these NP-hard problems, both in terms of the number
of variables and constraints [7–9]. Therefore, these problems are handled and modeled
as multi-objective optimization problems (MOPs), where the goal is to find the best set
of trade-off solutions—known as a Pareto optimal set or non-dominated solutions [10].
In other words, this type of optimization searches for acceptable compromises between
objectives, in contrast to single-objective optimization, where only a single solution
has to be found. Therefore, significant attention has been given to this concept, and many
works have been proposed [11,12]. Meta-heuristics and evolutionary algorithms have been
widely adopted for solving multi-objective optimization problems, including the
non-dominated sorting genetic algorithm (NSGA-II) [13], where fast non-dominated sorting
was used. The extension of NSGA-II, called NSGA-III [14], employed non-dominated sorting
and a reference-point method. PAES [15] and SPEA2 [16] employed an external archive
to store the non-dominated solutions; such algorithms have been quite successful and
are still used today [17,18].
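The Pareto dominance relation that underlies these algorithms can be sketched in a few lines (a minimal illustration assuming minimization of all objectives; the function names are our own, not from the cited works):

```python
def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a list of objective vectors down to its non-dominated subset."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (4, 4) is dominated by (2, 2); the remaining vectors are mutually non-dominated.
front = non_dominated([(1, 5), (2, 2), (3, 1), (4, 4)])
```

Fast non-dominated sorting, as used in NSGA-II, repeatedly extracts such fronts while reusing dominance counts to avoid the quadratic cost of recomputing them for every front.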
MOPs are among the most common problems in real-world applications [19,20].
Therefore, this field has continued to evolve, and many other algorithms have been
developed, such as the multi-objective evolutionary algorithm based on decomposition
(MOEA/D) [21], where the problem is decomposed into a number of sub-problems and
each one is treated as a single-objective problem. Deb et al. [22] introduced the ε-MOEA
algorithm, where the ε-dominance relation was employed. Many other extensions of MOEA/D
have been proposed, including the uniform decomposition measurement (UMOEA/D) [23],
the multi-objective memetic algorithm (MOEA/D-SQA) [24], and many others [25,26].
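The decomposition idea can be illustrated with the weighted Tchebycheff scalarization, one common way of turning an MOP into a family of single-objective sub-problems (a simplified sketch; the weight vectors, reference point, and candidate values below are illustrative assumptions, not taken from [21]):

```python
def tchebycheff(f, weights, z_star):
    """Scalarize objective vector f with weight vector `weights` and ideal point z*."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# Each weight vector defines one single-objective sub-problem.
weight_vectors = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
z_star = (0.0, 0.0)                       # ideal point (assumed known here)
candidates = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]

# For each sub-problem, keep the candidate minimizing the scalarized value.
best = [min(candidates, key=lambda f: tchebycheff(f, w, z_star))
        for w in weight_vectors]
```

Solving the sub-problems jointly, with neighboring weight vectors sharing information, is what distinguishes decomposition-based methods from running many independent single-objective searches.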
Among meta-heuristic and, in particular, population-based algorithms, those that
handle MOPs have typically been extensions of single-objective optimization
algorithms, modified to solve MOPs. One of the most well-known is the
multi-objective particle swarm optimization (MOPSO) method, which is based on
particle swarm optimization (PSO) [27], a population-based algorithm inspired by
the flocking behavior of birds. PSO has proven to be a successful algorithm that
continues to be used for solving optimization problems [28]. Many multi-objective
versions of PSO have been proposed. For instance, the swarm metaphor
proposed in [29] incorporated the Pareto dominance concept and crowding distances.
In a different work by Coello et al. [30], another MOPSO was proposed that incorporated a
repository to preserve the non-dominated solutions and to choose a leader
to guide the particles. The well-known ant colony optimization
(ACO) algorithm and its variants [31,32] constitute another population-based approach. It
was inspired by ant behavior and designed to solve single-objective optimization problems.
It was later extended to handle MOPs, as in [33–35].
Following the same concept over the years, several other MOP algorithms were
developed by simply extending the single-objective version [36,37]. The cat swarm
optimization (CSO) [38] method was extended by incorporating Pareto ranking; it
was then named the multi-objective cat swarm optimization (MOCSO) [39]. The grey wolf
optimizer (GWO) [40], for example, was also extended by adding an external fixed-size
archive, resulting in the multi-objective grey wolf optimization (MOGWO) [41] method.
Zouache et al. [42] introduced a guided multi-objective moth–flame optimization (MOMFO)
method, which was an extension of the moth–flame optimizer (MFO) [43]. In MOMFO, an
unlimited external archive was used to store the non-dominated solutions, and
fast non-dominated sorting was adopted, along with crowding distances. Furthermore,
ε-dominance was employed as an archive-update strategy. A more recent work attempted
to solve MOPs using the equilibrium optimizer (EO) by proposing a multi-objective
equilibrium optimizer with an exploration–exploitation dominance strategy (MOEO-EED) [44].
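The crowding distance mentioned above, used for diversity preservation in NSGA-II-style sorting and in several of the archive-based methods, can be sketched as follows (a minimal illustration in the standard formulation; variable names are our own):

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in a non-dominated front."""
    n = len(front)
    dist = [0.0] * n
    m = len(front[0])                     # number of objectives
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")   # boundary points
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue                      # all values equal in this objective
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / span
    return dist

# Boundary solutions get infinite distance; interior ones a finite crowding value.
d = crowding_distance([(1.0, 5.0), (2.0, 3.0), (3.0, 2.0), (5.0, 1.0)])
```

Solutions with larger crowding distance lie in sparser regions of the front and are preferred when truncating a population or a fixed-size archive, which is how these methods keep the approximated Pareto front well spread.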
Recently, the equilibrium optimizer algorithm was presented for solving single-objective
optimization problems [45]. The reported results showed that it was
able to outperform well-known algorithms. In this paper, based on the aforementioned