Article

A New “Good and Bad Groups-Based Optimizer” for Solving Various Optimization Problems

by Ali Sadeghi 1, Sajjad Amiri Doumari 2, Mohammad Dehghani 3, Zeinab Montazeri 3, Pavel Trojovský 4,* and Hamid Jafarabadi Ashtiani 5

1 Electrical and Computer Engineering Department, University of Tabriz, Tabriz 51666, Iran
2 Department of Mathematics and Computer Science, Sirjan University of Technology, Sirjan 78137-33385, Kerman, Iran
3 Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz 71557-13876, Iran
4 Department of Mathematics, Faculty of Science, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
5 School of Electrical and Computer Engineering, Shiraz University, Shiraz 51154-71348, Iran
* Author to whom correspondence should be addressed.
Submission received: 24 March 2021 / Revised: 18 April 2021 / Accepted: 8 May 2021 / Published: 12 May 2021

Abstract
Optimization is the science of selecting the best solution from among the feasible solutions of a problem, subject to its constraints. Optimization algorithms are efficient tools for solving such problems and are designed based on various natural phenomena, the behavior and lifestyles of living beings, physical laws, the rules of games, etc. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) is introduced for solving various optimization problems. In GBGBO, population members are updated under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness-function values than the other members, and the bad group consists of a number of population members with worse fitness-function values than the other members. GBGBO is modeled mathematically, and its performance in solving optimization problems was tested on a set of twenty-three different objective functions. For further analysis, the results obtained by the proposed algorithm were compared with those of eight optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching–learning-based optimization (TLBO), the gray wolf optimizer (GWO), the whale optimization algorithm (WOA), the tunicate swarm algorithm (TSA), and the marine predators algorithm (MPA). The results show that the proposed GBGBO is well able to solve various optimization problems and is more competitive than these similar algorithms.

1. Introduction

Optimization is the process in which the best solution to a particular problem, subject to a set of constraints, is selected from a set of feasible solutions. An optimization problem must first be modeled mathematically, taking the objectives of the problem and its limitations into account. An optimization problem has three main parts: the problem variables, the primary components of the problem, including the constraints, and the secondary components, including the objective functions [1]. After the optimization problem is formulated, the next step is to solve it with a suitable method. Optimization algorithms are widely applied for this purpose: they attempt to provide a solution by randomly scanning the search space.
An optimization problem has a definite optimal solution, called the global optimum. By moving randomly through the problem search space, optimization algorithms provide a solution that is not necessarily the global optimum but is close to it. For this reason, the solution obtained by an optimization algorithm is called a quasi-optimal solution [2]. An optimization algorithm that presents a quasi-optimal solution closer to the global optimum is the more appropriate algorithm. This issue has led researchers to introduce many optimization algorithms.
Various ideas have been applied in the design of optimization algorithms, drawing on natural phenomena, the behavior of animals and plants, the laws of physics, the rules of games, etc. Based on their main design idea, optimization algorithms can be divided into four general groups: physics-based, swarm-based, game-based, and evolutionary-based optimization algorithms.
Physics-based optimization algorithms are designed based on the simulation of physical processes and laws. Simulated annealing (SA) is a physics-based optimization algorithm modeled on the process of annealing metals [3]. In physics, annealing is a heat treatment during which the physical, and sometimes chemical, properties of a material change: the metal is first heated, then kept at a certain temperature, and finally cooled gradually. The momentum search algorithm (MSA) is another physics-based optimization algorithm, based on the simulation of momentum and Newton's laws of motion [4]. In the MSA, the momentum imparted to a bullet causes the bullet to move toward quasi-optimal points in the search space. The gravitational search algorithm (GSA) is inspired by the physical law of gravity between objects at different distances from each other. According to this law, particles (or objects) in the universe always exert on each other a force called gravity, which is directly proportional to the product of the masses of the two objects and inversely proportional to the square of the distance between them. In the GSA, the simulation of this concept is used to design an optimizer for optimization problems [5].
Swarm-based optimization algorithms are modeled on various natural phenomena and the behavior of animals, plants, and other living organisms. Particle swarm optimization (PSO) is one of the oldest and most famous swarm-based optimization algorithms and is designed based on simulating the group motion of birds [6]. The seagull optimization algorithm (SOA) is another swarm-based optimization algorithm, designed by simulating the migration and aggressive behavior of seagulls in nature [7]. Teaching–learning-based optimization (TLBO) was designed by simulating the educational relationship between a teacher and students that leads to student learning and progress; the TLBO has a mathematical model of this process, implemented in two stages: teaching and learning [8]. The whale optimization algorithm (WOA) was developed by simulating the social behavior of humpback whales in their bubble-net hunting strategy [9]. The gray wolf optimizer (GWO) was inspired by the leadership hierarchy and hunting mechanism of gray wolves in nature: four types of gray wolves (alpha, beta, delta, and omega) are used to simulate the leadership hierarchy, and the three main steps of hunting, namely searching for prey, encircling prey, and attacking prey, are simulated in the GWO [10]. The tunicate swarm algorithm (TSA) was designed based on simulating the jet propulsion and swarm behaviors of tunicates during navigation and foraging [11]. The marine predators algorithm (MPA) was inspired by the movement strategies that marine predators use when trapping their prey in the oceans; its main inspiration is the widespread foraging strategy of ocean predators, namely Lévy and Brownian movements, along with the optimal encounter-rate policy in the biological interaction between predator and prey [12].
Game-based optimization algorithms are designed using the potential of various individual and group games. The orientation search algorithm (OSA) was designed by modeling the behavior of players and referees in the orientation game, in which players move in the game space according to the direction specified by the referee [13]. The darts game optimizer (DGO) is another game-based optimization algorithm, designed by simulating the rules of the game and the behavior of the players in darts [14].
Evolutionary-based optimization algorithms, as a family of stochastic search methods, are inspired by the natural process of the evolution of species. The genetic algorithm (GA) is an evolutionary-based optimization algorithm inspired by genetics and Darwin's theory of evolution, and is based on the survival of the fittest, or natural selection. The GA simulates the reproductive process using three operators: selection, crossover, and mutation [15].
Although many optimization algorithms have been developed, no optimization algorithm can definitively provide the global optimal solution for every optimization problem; an algorithm that provides the best solution for one problem may fail to optimize another. The contribution of the authors and the main purpose of this paper is the design of an optimization algorithm that can be used to solve optimization problems in various sciences. In designing the proposed algorithm, it was assumed that increasing the exploration and exploitation power of the algorithm yields suitable quasi-optimal solutions that are closer to the global optimum.
In this study, a new population-based optimization algorithm called the good and bad groups-based optimizer (GBGBO) was developed. The main idea of the proposed algorithm is to use more of the information held by different population members when updating the whole population: instead of a single good member, a good group, and instead of a single bad member, a bad group, leads the population members. Thus, in each iteration, the status of the population members is updated based on two groups: the good group, with the best values of the objective function, and the bad group, with the worst values of the objective function.
The rest of the article is organized as follows. In Section 2, the proposed algorithm is introduced and modeled mathematically. In Section 3, the implementation of the proposed algorithm for optimization is simulated. The Friedman rank test analysis is presented in Section 4. The analysis of the results and of the performance of the optimization algorithms is presented in Section 5. Section 6 provides conclusions and suggestions for future studies.

2. Good and Bad Groups-Based Optimizer

In this section, the GBGBO is first described and then modeled mathematically so that it can be implemented for solving optimization problems.
The GBGBO is a population-based optimization algorithm that relies on random scanning of the problem search space. The member of the population with the best objective-function value, as the best member, can direct the population toward the optimal regions. However, the best member may not provide suitable values for every problem variable. Thus, instead of just the best member, GBGBO proposes that a group of the best members guide the population in the search space. The same argument applies to the worst member: instead of moving away from just the worst member of the population, GBGBO suggests moving away from a group of the worst members. Thus, in GBGBO, population members are updated under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness values than the other members, and the bad group consists of a number of population members with worse fitness values than the other members.
The population of the GBGBO is defined using a matrix in which each row represents one member, i.e., one proposed solution to the optimization problem. The population matrix is first generated randomly and is then updated according to the algorithm steps. This population matrix is specified in Equation (1).
X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}   (1)

Here, X is the population matrix, X_i is the i-th population member, x_{i,d} is the d-th dimension of the i-th population member, N is the number of population members, and m is the number of variables of the optimization problem. After the population matrix is determined, the objective function of the optimization problem is evaluated for each member of the population, each of which represents a candidate solution. The vector of fitness-function values is specified in Equation (2).

F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1}   (2)

Here, F is the fitness-function value vector and F_i is the fitness-function value of the i-th population member.
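To make the model concrete, the following sketch (our illustration in Python; the experiments in Section 3 were run in MATLAB) initializes the population matrix of Equation (1) and evaluates the fitness vector of Equation (2). The names objective, lo, and hi are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

# Minimal sketch, not the authors' reference code: generate a random population
# matrix X (N members, m variables) inside the box [lo, hi]^m, Equation (1),
# and evaluate the fitness vector F for each row, Equation (2).
def initialize_population(objective, N, m, lo, hi, rng):
    X = lo + (hi - lo) * rng.random((N, m))
    F = np.apply_along_axis(objective, 1, X)
    return X, F
```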
Each optimization problem has a definite number m of variables that must be specified to optimize the objective function. The search space thus consists of m axes, each of which determines the value of one problem variable. In most optimization algorithms, a single population member leads the population in the search space; that is, one good member leads the population along all axes. However, one or more other members may be more appropriate for guiding the population along some axes. Similarly, in some optimization algorithms, moving away from the worst population member is effective in updating and improving the population. The main idea of the GBGBO is to use the information of the population members more effectively. Accordingly, instead of one good member leading the entire population along all axes, a group of good members is selected to lead the population, and instead of one bad member moving the algorithm away from bad areas, a bad group is selected. In the GBGBO, the population members are updated based on these two groups.
Good group updating:
The criterion for selecting good members is the value of the objective function: the N_G members of the population that provide the best values of the objective function are selected as the good group (good matrix).
Bad group updating:
As mentioned, the criterion for selecting bad members is also the value of the objective function: the N_B members of the population that provide the worst values of the objective function are selected as the bad group (bad matrix).
In fact, if the members of the population are arranged from the smallest to the largest value of the objective function (for a minimization problem), the first N_G members form the good group and the last N_B members form the bad group. Thus, based on the values of the fitness function, the good group and the bad group can be written according to Equations (3) and (4).
GG = \begin{bmatrix} GG_1 \\ \vdots \\ GG_i \\ \vdots \\ GG_{N_G} \end{bmatrix} = \begin{bmatrix} gg_{1,1} & \cdots & gg_{1,d} & \cdots & gg_{1,m} \\ \vdots & & \vdots & & \vdots \\ gg_{i,1} & \cdots & gg_{i,d} & \cdots & gg_{i,m} \\ \vdots & & \vdots & & \vdots \\ gg_{N_G,1} & \cdots & gg_{N_G,d} & \cdots & gg_{N_G,m} \end{bmatrix}_{N_G \times m}   (3)

BG = \begin{bmatrix} BG_1 \\ \vdots \\ BG_i \\ \vdots \\ BG_{N_B} \end{bmatrix} = \begin{bmatrix} bg_{1,1} & \cdots & bg_{1,d} & \cdots & bg_{1,m} \\ \vdots & & \vdots & & \vdots \\ bg_{i,1} & \cdots & bg_{i,d} & \cdots & bg_{i,m} \\ \vdots & & \vdots & & \vdots \\ bg_{N_B,1} & \cdots & bg_{N_B,d} & \cdots & bg_{N_B,m} \end{bmatrix}_{N_B \times m}   (4)

Here, GG is the good group, GG_i is the i-th good member, gg_{i,d} is the d-th dimension of the i-th good member, N_G is the number of selected good members, BG is the bad group, BG_i is the i-th bad member, bg_{i,d} is the d-th dimension of the i-th bad member, and N_B is the number of selected bad members.
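A minimal sketch of this selection step, assuming a minimization problem: sorting the fitness vector in ascending order and slicing off the first N_G and the last N_B rows yields the good and bad matrices of Equations (3) and (4).

```python
# Sketch of Equations (3) and (4): members sorted by fitness (ascending), the
# first NG rows form the good group GG, the last NB rows form the bad group BG.
def select_groups(X, F, NG, NB):
    order = np.argsort(F)                     # best objective values first
    GG, F_GG = X[order[:NG]], F[order[:NG]]
    BG, F_BG = X[order[-NB:]], F[order[-NB:]]
    return GG, F_GG, BG, F_BG
```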
Each row of the population matrix, as a population member, is a proposed solution and thus determines values for the problem variables. The best member of the population is the member that provides the best value of the objective function. Although the best member suggests appropriate values for the problem variables, it is not necessarily appropriate for guiding the population in every variable. If the members of the population moved only under the guidance of the best member, all variables of each member would move toward the values determined by the best member. In the GBGBO, a member of the good group is instead randomly selected to guide each variable of each member of the population. Thus, a good-group member may lead only a few variables of a given population member in the search space, or may not be selected to lead any variable at all. The same concept is applied to the worst members of the population, i.e., the bad group. This step of the GBGBO is simulated using Equations (5) and (6).
x_{i,d} = \begin{cases} x_{i,d} + rand \times (gg_{k,d} - 2 x_{i,d}), & F_i < F_k^{GG} \\ x_{i,d} + rand \times (x_{i,d} - 2 gg_{k,d}), & \text{else} \end{cases} \quad k \in 1{:}N_G   (5)

x_{i,d} = \begin{cases} x_{i,d} + rand \times (bg_{k,d} - 2 x_{i,d}), & F_i < F_k^{BG} \\ x_{i,d} + rand \times (x_{i,d} - 2 bg_{k,d}), & \text{else} \end{cases} \quad k \in 1{:}N_B   (6)

Here, gg_{k,d} is the d-th dimension of the good member selected to guide the d-th dimension of the i-th population member, F_k^{GG} is the objective-function value of the k-th selected good member, bg_{k,d} is the d-th dimension of the bad member selected to guide the d-th dimension of the i-th population member, F_k^{BG} is the objective-function value of the k-th selected bad member, and rand is a random number in the interval [0, 1].
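Since Equations (5) and (6) share the same form, one helper can apply either the good-group or the bad-group move. The sketch below is our reading of these equations: a fresh guide k and a fresh rand are drawn for every dimension d.

```python
# Sketch of Equations (5)/(6): each dimension d of member x_i is guided by a
# randomly selected member k of the group G (good group GG or bad group BG).
def guided_move(x_i, F_i, G, F_G, rng):
    x_new = np.empty_like(x_i)
    for d in range(x_i.size):
        k = rng.integers(G.shape[0])          # k in 1:NG (or 1:NB)
        r = rng.random()                      # rand in [0, 1]
        if F_i < F_G[k]:
            x_new[d] = x_i[d] + r * (G[k, d] - 2.0 * x_i[d])
        else:
            x_new[d] = x_i[d] + r * (x_i[d] - 2.0 * G[k, d])
    return x_new
```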
After all variables of all population members are updated based on Equations (5) and (6), the algorithm proceeds to the next iteration, and this process is repeated until the stop condition is reached. After the final iteration, the best solution obtained by the GBGBO is presented. Figure 1 shows the implementation of the GBGBO as a flowchart.
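Putting the pieces together, one run of the loop in Figure 1 might look as follows. Greedy acceptance (a candidate replaces a member only if it improves the fitness) and clipping to the search bounds are our assumptions, since these details are governed by the flowchart rather than spelled out in the text.

```python
# End-to-end sketch of a GBGBO run under the assumptions stated above;
# T is the iteration budget.
def gbgbo(objective, N, m, lo, hi, NG, NB, T, seed=0):
    rng = np.random.default_rng(seed)
    X, F = initialize_population(objective, N, m, lo, hi, rng)
    for _ in range(T):
        GG, F_GG, BG, F_BG = select_groups(X, F, NG, NB)
        for i in range(N):
            for G, F_G in ((GG, F_GG), (BG, F_BG)):   # Equations (5) and (6)
                cand = np.clip(guided_move(X[i], F[i], G, F_G, rng), lo, hi)
                f_cand = objective(cand)
                if f_cand < F[i]:                     # greedy acceptance
                    X[i], F[i] = cand, f_cand
    best = np.argmin(F)
    return X[best], F[best]
```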

3. Simulation and Results

In this section, the performance of the proposed GBGBO algorithm in solving various optimization problems is evaluated. For this purpose, a set of twenty-three standard objective functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions [16], was optimized using the GBGBO. Complete information on these objective functions is given in Table A1, Table A2 and Table A3 in Appendix A. Eight optimization algorithms, including the genetic algorithm (GA) [15], particle swarm optimization (PSO) [6], the gravitational search algorithm (GSA) [5], teaching–learning-based optimization (TLBO) [8], the gray wolf optimizer (GWO) [10], the whale optimization algorithm (WOA) [9], the tunicate swarm algorithm (TSA) [11], and the marine predators algorithm (MPA) [12], were investigated in order to compare the optimization results. The experiments were performed in MATLAB (version R2020a) on a 64-bit Core i7 processor at 3.20 GHz with 16 GB of main memory. Each optimization algorithm was independently run twenty times, and the optimization results are presented as the mean ("ave") and standard deviation ("std") of the best solutions.
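As an illustration of this protocol, twenty independent runs on the sphere function F1 can be summarized as follows using the sketch from Section 2; the population size, group sizes, and iteration budget below are our own illustrative values, as the paper does not list the GBGBO's settings.

```python
# Twenty independent runs reported as mean ("ave") and standard deviation
# ("std") of the best solutions found; N, NG, NB, and T are assumed values.
def sphere(x):                                # test function F1
    return float(np.sum(x ** 2))

best_values = [gbgbo(sphere, N=50, m=30, lo=-100.0, hi=100.0,
                     NG=5, NB=5, T=500, seed=run)[1] for run in range(20)]
print("ave =", np.mean(best_values), "std =", np.std(best_values))
```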
The values used for the main controlling parameters of the comparative algorithms are specified in Table 1.

3.1. Simulation Results on the Unimodal Test Functions F1 to F7

Seven objective functions, F1 to F7, were selected as unimodal objective functions to evaluate the performance of the GBGBO and the other algorithms. Complete information on these objective functions is given in Table A1 in Appendix A. These objective functions have only one optimal solution and are therefore suitable for evaluating the exploitation power of optimization algorithms. The results of optimizing these objective functions over twenty independent runs are presented in Table 2. They show that the proposed GBGBO offers better solutions than the other algorithms, which indicates that the GBGBO has high exploitation power in converging toward the optimal solution.

3.2. Simulation Results on the High-Dimensional Multimodal Test Functions F8 to F13

The second group of objective functions used to evaluate optimization algorithms is the high-dimensional multimodal test functions; the six objective functions F8 to F13 are of this type. Complete information on these objective functions is given in Table A2 in Appendix A. These objective functions have several locally optimal solutions and are therefore suitable for evaluating the exploration power of optimization algorithms. The results of optimizing these objective functions are presented in Table 3. These results indicate the ability of the GBGBO to solve these objective functions and its superiority over the other algorithms. Therefore, the GBGBO has good exploration capability and scans the problem search space well.

3.3. Simulation Results on the Fixed-Dimensional Multimodal Test Functions F14 to F23

The third group of objective functions, F14 to F23, was selected from the fixed-dimensional multimodal type. Complete information on these objective functions is given in Table A3 in Appendix A. This type of objective function is also suitable for evaluating the exploration power of optimization algorithms. The results of optimizing these objective functions using the GBGBO and the eight other algorithms are presented in Table 4. What is clear from these results is that the proposed GBGBO performed very well on such objective functions and in most cases provided the global optimal solution. This demonstrates the acceptable exploration ability of the GBGBO in accurately searching the problem search space.
The optimization results for the F1 to F23 objective functions using the proposed algorithm and the eight other optimization algorithms are presented in Table 2, Table 3 and Table 4. Boxplots of the results for each algorithm and objective function are drawn in Figure 2 for further analysis and visual comparison of the performance of the optimization algorithms. Based on these boxplots, on the unimodal objective functions F1, F2, F3, F4, F6, and F7, the superiority of the GBGBO over the other eight algorithms is obvious. On function F5, the GWO offers better performance; however, the GBGBO offers acceptable performance with little difference from the GWO. On the functions F9, F10, F11, F12, and F13 of the high-dimensional multimodal type, the GBGBO is the best optimizer among the compared algorithms; on the objective function F8, the GA presented better performance. On the functions F14, F15, F16, F18, F19, F20, F21, F22, and F23 of the fixed-dimensional multimodal type, the proposed algorithm provides a more efficient quasi-optimal solution with smaller values of the standard deviation. On function F17, the GWO offers better performance; however, the GBGBO again offers acceptable performance with little difference from the GWO.

4. Statistical Analysis

The optimization results for all three types of objective functions were presented as the mean and standard deviation of the solutions. Although these indicators provide important information, they alone are not enough to establish that one algorithm is superior to the others, because even after twenty independent executions the superiority of one optimization algorithm over the others may have occurred by chance, albeit with low probability. A statistical analysis of the optimization results therefore provides stronger evidence of the capability of an optimization algorithm. In this article, the Friedman rank test [17] was used; the results of this test are given in Table 5. Analysis and comparison of these results show that the GBGBO performs better than the other algorithms on all three types of objective functions: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal. In addition, the analysis over all twenty-three objective functions shows that the GBGBO ranks first among the compared algorithms.
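For readers who wish to reproduce this kind of analysis, SciPy provides a ready-made Friedman test; the sketch below uses invented scores for three algorithms on five functions, not the data behind Table 5.

```python
# Hypothetical illustration of the Friedman rank test; each list holds one
# algorithm's score per objective function (made-up numbers).
from scipy.stats import friedmanchisquare

alg_a = [0.012, 0.35, 1.20, 0.050, 0.91]
alg_b = [0.020, 0.47, 1.52, 0.073, 1.10]
alg_c = [0.001, 0.11, 0.80, 0.010, 0.52]
stat, p = friedmanchisquare(alg_a, alg_b, alg_c)
print(f"chi-square = {stat:.3f}, p = {p:.4f}")  # small p: rankings differ systematically
```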

5. Discussion

Exploitation and exploration are two important criteria in analyzing and evaluating the performance of optimization algorithms. Exploitation power is the ability of an optimization algorithm to provide a suitable quasi-optimal solution by the end of its iterations. Thus, when comparing several optimization algorithms on one optimization problem, the algorithm that ultimately provides a quasi-optimal solution closer to the global optimum has the higher exploitation power. The unimodal objective functions F1 to F7 have only one main solution and are therefore very suitable for evaluating this index. Comparison of the optimization results for these functions obtained by the GBGBO and the eight other algorithms in Table 2 shows the acceptable exploitation power of the GBGBO in solving optimization problems. Exploration power is the ability of an algorithm to accurately scan the search space of the optimization problem. Among several optimization algorithms, the algorithm that scans the search space well, is not confined to certain areas, and can escape locally optimal solutions has the higher exploration power. This index is especially important for optimization problems with multiple locally optimal solutions. The objective functions F8 to F23 have several locally optimal solutions in addition to the main solution and are therefore suitable for evaluating the exploration power of optimization algorithms. Analysis and comparison of the optimization results of the GBGBO and the eight other algorithms on these objective functions, presented in Table 3 and Table 4, indicate the acceptable exploration power of the proposed GBGBO in solving this type of objective function. Moreover, the results of the Friedman rank test show that the acceptable performance of the GBGBO on the exploration and exploitation indices is not random.

6. Conclusions and Future Work

Optimization algorithms are among the efficient tools for solving optimization problems. By randomly scanning the search space, optimization algorithms are able to provide quasi-optimal solutions. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) was presented for solving optimization problems. The GBGBO was designed based on simulating the guidance of the population members by two groups, named the good and bad groups, instead of only the best and worst members. The proposed GBGBO was modeled mathematically, and its performance was evaluated on a set of twenty-three standard objective functions. These objective functions were selected from three different types: unimodal, to evaluate exploitation power, and high-dimensional and fixed-dimensional multimodal, to evaluate exploration power. Eight optimization algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching–learning-based optimization (TLBO), the gray wolf optimizer (GWO), the whale optimization algorithm (WOA), the tunicate swarm algorithm (TSA), and the marine predators algorithm (MPA), were selected for comparison with the optimization results obtained from the GBGBO. The optimization results showed that the GBGBO is more capable of solving optimization problems than the other eight optimization algorithms and is more competitive. In addition, the Friedman rank test was used for statistical analysis of the optimization results; based on this test, the GBGBO performed well in solving optimization problems and ranked first among the compared algorithms.
The authors suggest some ideas and perspectives for future studies. The design of the binary version, as well as the multi-objective version of the GBGBO, are two special potentials for this study. Apart from this, implementing the GBGBO on various optimization problems and real-world optimization problems could be some significant contributions as well.

Author Contributions

Conceptualization, M.D., Z.M. and A.S.; methodology, A.S. and S.A.D.; software, Z.M. and M.D.; validation, P.T. and H.J.A.; formal analysis, H.J.A., S.A.D. and P.T.; investigation, P.T.; resources, Z.M. and M.D.; data curation, P.T. and S.A.D.; writing—original draft preparation, A.S., S.A.D., M.D., Z.M. and H.J.A.; writing—review and editing, P.T. and S.A.D.; supervision, M.D.; project administration, A.S.; funding acquisition, P.T. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Project of Specific Research PrF UHK No. 2101/2021 and Long-term development plan of UHK, University of Hradec Králové, Czech Republic.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The authors declare to honor the Principles of Transparency and Best Practice in Scholarly Publishing about Data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Information of the twenty-three objective functions is provided in Table A1, Table A2 and Table A3.
Table A1. Unimodal test functions.

F_1(x) = \sum_{i=1}^{m} x_i^2; range [−100, 100]^m; m = 30
F_2(x) = \sum_{i=1}^{m} |x_i| + \prod_{i=1}^{m} |x_i|; range [−10, 10]^m; m = 30
F_3(x) = \sum_{i=1}^{m} \left( \sum_{j=1}^{i} x_j \right)^2; range [−100, 100]^m; m = 30
F_4(x) = \max \{ |x_i|, 1 \le i \le m \}; range [−100, 100]^m; m = 30
F_5(x) = \sum_{i=1}^{m-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]; range [−30, 30]^m; m = 30
F_6(x) = \sum_{i=1}^{m} ([x_i + 0.5])^2; range [−100, 100]^m; m = 30
F_7(x) = \sum_{i=1}^{m} i x_i^4 + random(0, 1); range [−1.28, 1.28]^m; m = 30
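For concreteness, two of these functions written out in Python, assuming the standard definitions from [16]:

```python
import numpy as np

# F5 (Rosenbrock) and F7 (noisy quartic), assuming the standard definitions
# of Yao et al. [16]; x is a NumPy vector of length m.
def f5(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))

def f7(x, rng=np.random.default_rng()):
    i = np.arange(1, x.size + 1)
    return float(np.sum(i * x ** 4) + rng.random())  # additive noise in [0, 1)
```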
Table A2. High-dimensional multimodal test functions.

F_8(x) = \sum_{i=1}^{m} -x_i \sin(\sqrt{|x_i|}); range [−500, 500]^m; m = 30
F_9(x) = \sum_{i=1}^{m} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]; range [−5.12, 5.12]^m; m = 30
F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{m} \sum_{i=1}^{m} x_i^2} \right) - \exp\left( \frac{1}{m} \sum_{i=1}^{m} \cos(2\pi x_i) \right) + 20 + e; range [−32, 32]^m; m = 30
F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{m} x_i^2 - \prod_{i=1}^{m} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1; range [−600, 600]^m; m = 30
F_{12}(x) = \frac{\pi}{m} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{m-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_m - 1)^2 \right\} + \sum_{i=1}^{m} u(x_i, 10, 100, 4), where y_i = 1 + \frac{x_i + 1}{4} and
u(x_i, a, k, n) = \begin{cases} k (x_i - a)^n, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^n, & x_i < -a \end{cases}; range [−50, 50]^m; m = 30
F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{m-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_m - 1)^2 \left[ 1 + \sin^2(2\pi x_m) \right] \right\} + \sum_{i=1}^{m} u(x_i, 5, 100, 4); range [−50, 50]^m; m = 30
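The piecewise penalty u used by F12 and F13 is easy to misread in flattened form; a vectorized sketch, assuming the standard definition from [16]:

```python
# Penalty term u(x_i, a, k, n) of F12/F13: zero inside [-a, a], polynomial
# growth of order n with weight k outside; applied element-wise to x.
def u(x, a, k, n):
    return np.where(x > a, k * (x - a) ** n,
                    np.where(x < -a, k * (-x - a) ** n, 0.0))
```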
Table A3. Fixed-dimensional multimodal test functions.

F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}; range [−65.53, 65.53]^2
F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2; range [−5, 5]^4
F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4; range [−5, 5]^2
F_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10; range [−5, 10] × [0, 15]
F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]; range [−5, 5]^2
F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right); range [0, 1]^3
F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right); range [0, 1]^6
F_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}; range [0, 10]^4
F_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}; range [0, 10]^4
F_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}; range [0, 10]^4

References

1. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Mendoza, R.R.; Samet, H.; Guerrero, J.M.; Dhiman, G. MLO: Multi Leader Optimizer. Int. J. Intell. Eng. Syst. 2020, 13, 364–373.
2. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Samet, H.; Sotelo, C.; Sotelo, D.; Ehsanifar, A.; Malik, O.P.; Guerrero, J.M.; Dhiman, G. DM: Dehghani Method for Modifying Optimization Algorithms. Appl. Sci. 2020, 10, 7683.
3. Van Laarhoven, P.J.; Aarts, E.H. Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Berlin, Germany, 1987; pp. 7–15.
4. Dehghani, M.; Samet, H. Momentum search algorithm: A new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2020, 2, 1–15.
5. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
6. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995; pp. 1942–1948.
7. Dhiman, G.; Singh, K.K.; Soni, M.; Nagar, A.; Dehghani, M.; Slowik, A.; Kaur, A.; Sharma, A.; Houssein, E.H.; Cengiz, K. MOSOA: A new multi-objective seagull optimization algorithm. Expert Syst. Appl. 2020, 167, 114150.
8. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
9. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
11. Kaur, S.; Awasthi, L.K.; Sangal, A.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
12. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
13. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Ehsanifar, A.; Dehghani, A. OSA: Orientation search algorithm. Int. J. Ind. Electron. Control Optim. 2019, 2, 99–112.
14. Dehghani, M.; Montazeri, Z.; Givi, H.; Guerrero, J.M.; Dhiman, G. Darts game optimizer: A new optimization technique based on darts game. Int. J. Intell. Eng. Syst. 2020, 13, 286–294.
15. Bose, A.; Biswas, T.; Kuila, P. A novel genetic algorithm based scheduling for multi-core systems. In Smart Innovations in Communication and Computational Sciences; Springer: Berlin, Germany, 2019; pp. 45–54.
16. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102.
17. Daniel, W.W. Friedman two-way analysis of variance by ranks. In Applied Nonparametric Statistics, 2nd ed.; PWS-Kent: Boston, MA, USA, 1990; pp. 262–274.
Figure 1. The flowchart of the GBGBO.
Figure 2. Boxplots of the results on each objective function for the different optimization algorithms.
Table 1. Parameter values for the comparative algorithms.

GA
  Type: real coded
  Selection: roulette wheel (proportionate)
  Crossover: whole arithmetic (probability = 0.8, α ∈ [−0.5, 1.5])
  Mutation: Gaussian (probability = 0.05)
PSO
  Topology: fully connected
  Cognitive and social constants: (C1, C2) = (2, 2)
  Inertia weight: linear reduction from 0.9 to 0.1
  Velocity limit: 10% of dimension range
GSA
  Alpha, G0, Rnorm, Rpower: 20, 100, 2, 1
TLBO
  TF (teaching factor): TF = round[(1 + rand)], where rand is a random number in [0, 1]
GWO
  Convergence parameter a: linear reduction from 2 to 0
WOA
  Convergence parameter a: linear reduction from 2 to 0
  r: a random vector in [0, 1]
  l: a random number in [−1, 1]
TSA
  Pmin, Pmax: 1, 4
  c1, c2, c3: random numbers in the range [0, 1]
MPA
  Constant number: p = 0.5
  Random vector: R is a vector of uniform random numbers in [0, 1]
  Fish aggregating devices: FADs = 0.2
  Binary vector: U = 0 or 1
Table 2. Optimization results of the GBGBO and other algorithms on the unimodal test functions.

| | | GA | PSO | GSA | TLBO | GWO | WOA | TSA | MPA | GBGBO |
| F1 | Ave | 13.2405 | 1.7740 × 10−5 | 2.0255 × 10−17 | 8.3373 × 10−60 | 1.09 × 10−58 | 2.1741 × 10−9 | 7.71 × 10−38 | 3.2715 × 10−21 | 3.2429 × 10−121 |
| | std | 4.7664 × 10−15 | 6.4396 × 10−21 | 1.1369 × 10−32 | 4.9436 × 10−76 | 5.1413 × 10−74 | 7.3985 × 10−25 | 7.00 × 10−21 | 4.6153 × 10−21 | 4.8 × 10−138 |
| F2 | Ave | 2.4794 | 0.3411 | 2.3702 × 10−8 | 7.1704 × 10−35 | 1.2952 × 10−34 | 0.5462 | 8.48 × 10−39 | 1.57 × 10−12 | 8.51 × 10−65 |
| | std | 2.2342 × 10−15 | 7.4476 × 10−17 | 5.1789 × 10−24 | 6.6936 × 10−50 | 1.9127 × 10−50 | 1.7377 × 10−16 | 5.92 × 10−41 | 1.42 × 10−12 | 1.66 × 10−79 |
| F3 | Ave | 1536.8963 | 589.492 | 279.3439 | 2.7531 × 10−15 | 7.4091 × 10−15 | 1.7634 × 10−8 | 1.15 × 10−21 | 0.0864 | 4.45 × 10−64 |
| | std | 6.6095 × 10−13 | 7.1179 × 10−13 | 1.2075 × 10−13 | 2.6459 × 10−31 | 5.6446 × 10−30 | 1.0357 × 10−23 | 6.70 × 10−21 | 0.1444 | 4.15 × 10−74 |
| F4 | Ave | 2.0942 | 3.9634 | 3.2547 × 10−9 | 9.4199 × 10−15 | 1.2599 × 10−14 | 2.9009 × 10−5 | 1.33 × 10−23 | 2.6 × 10−8 | 1.22 × 10−55 |
| | std | 2.2342 × 10−15 | 1.9860 × 10−16 | 2.0346 × 10−24 | 2.1167 × 10−30 | 1.0583 × 10−29 | 1.2121 × 10−20 | 1.15 × 10−22 | 9.25 × 10−9 | 2.27 × 10−70 |
| F5 | Ave | 310.4273 | 50.26245 | 36.10695 | 146.4564 | 26.8607 | 41.7767 | 28.8615 | 46.049 | 27.02145 |
| | std | 2.0972 × 10−13 | 1.5888 × 10−14 | 3.0982 × 10−14 | 1.9065 × 10−14 | 0 | 2.5421 × 10−14 | 4.76 × 10−3 | 0.4219 | 3.97 × 10−15 |
| F6 | Ave | 14.55 | 20.25 | 0 | 0.4435 | 0.6423 | 1.6085 × 10−9 | 7.10 × 10−21 | 0.398 | 0 |
| | std | 3.1776 × 10−15 | 1.2564 | 0 | 4.2203 × 10−16 | 6.2063 × 10−17 | 4.6240 × 10−25 | 1.12 × 10−25 | 0.1914 | 0 |
| F7 | Ave | 5.6799 × 10−3 | 0.1134 | 0.0206 | 0.0017 | 0.0008 | 0.0205 | 3.72 × 10−4 | 0.0018 | 3.3919 × 10−5 |
| | std | 7.7579 × 10−19 | 4.3444 × 10−17 | 2.7152 × 10−18 | 3.87896 × 10−19 | 7.2730 × 10−20 | 1.5515 × 10−18 | 5.09 × 10−5 | 0.001 | 2.67 × 10−19 |
Table 3. Optimization results of the GBGBO and other algorithms on the high-dimensional multimodal test functions.

| | | GA | PSO | GSA | TLBO | GWO | WOA | TSA | MPA | GBGBO |
| F8 | Ave | −8184.4142 | −6908.6558 | −2849.0724 | −7408.6107 | −5885.1172 | −1663.9782 | −5740.3388 | −3594.16321 | −4548.05 |
| | std | 833.2165 | 625.6248 | 264.3516 | 513.5784 | 467.5138 | 716.3492 | 41.58 | 11.3265 | 11.02 × 10−12 |
| F9 | Ave | 62.4114 | 57.0613 | 16.2675 | 10.2485 | 8.5265 × 10−15 | 4.2011 | 5.70 × 10−3 | 140.1238 | 0 |
| | std | 2.5421 × 10−14 | 6.3552 × 10−15 | 3.1776 × 10−15 | 5.5608 × 10−15 | 5.6446 × 10−30 | 4.3692 × 10−15 | 1.46 × 10−32 | 6.3124 | 0 |
| F10 | Ave | 3.2218 | 2.1546 | 3.5673 × 10−9 | 0.2757 | 1.7053 × 10−14 | 0.3293 | 9.80 × 10−14 | 9.6987 × 10−12 | 4.44 × 10−15 |
| | std | 5.1636 × 10−15 | 7.9441 × 10−16 | 3.6992 × 10−25 | 2.5641 × 10−15 | 2.7517 × 10−29 | 1.9860 × 10−16 | 4.51 × 10−12 | 6.1325 × 10−12 | 1.41 × 10−30 |
| F11 | Ave | 1.2302 | 0.0462 | 3.7375 | 0.6082 | 0.0037 | 0.1189 | 1.00 × 10−7 | 0 | 0 |
| | std | 8.4406 × 10−16 | 3.1031 × 10−18 | 2.7804 × 10−15 | 1.9860 × 10−16 | 1.2606 × 10−18 | 8.9991 × 10−17 | 7.46 × 10−7 | 0 | 0 |
| F12 | Ave | 0.047 | 0.4806 | 0.0362 | 0.0203 | 0.0372 | 1.7414 | 0.0368 | 0.0851 | 1.60 × 10−4 |
| | std | 4.6547 × 10−18 | 1.8619 × 10−16 | 6.2063 × 10−18 | 7.7579 × 10−19 | 4.3444 × 10−17 | 8.1347 × 10−12 | 1.5461 × 10−2 | 0.0052 | 2.17 × 10−17 |
| F13 | Ave | 1.2085 | 0.5084 | 0.002 | 0.3293 | 0.5763 | 0.3456 | 2.9575 | 0.4901 | 1.08 × 10−4 |
| | std | 3.2272 × 10−16 | 4.9650 × 10−17 | 4.2617 × 10−14 | 2.1101 × 10−16 | 2.4825 × 10−16 | 3.25391 × 10−12 | 1.5682 × 10−12 | 0.1932 | 2.98 × 10−16 |
Table 4. Optimization results of the GBGBO and other algorithms on the fixed-dimensional multimodal test functions.

| | | GA | PSO | GSA | TLBO | GWO | WOA | TSA | MPA | GBGBO |
| F14 | Ave | 0.9986 | 2.1735 | 3.5913 | 2.2721 | 3.7408 | 0.998 | 1.9923 | 0.998 | 0.998 |
| | std | 1.5640 × 10−15 | 7.9441 × 10−16 | 7.9441 × 10−16 | 1.9860 × 10−16 | 6.4545 × 10−15 | 9.4336 × 10−16 | 2.6548 × 10−7 | 4.2735 × 10−16 | 1.44 × 10−16 |
| F15 | Ave | 5.3952 × 10−2 | 0.0535 | 0.0024 | 0.0033 | 0.0063 | 0.0049 | 0.0004 | 0.003 | 0.0003 |
| | std | 7.0791 × 10−18 | 3.8789 × 10−19 | 2.9092 × 10−19 | 1.2218 × 10−17 | 1.1636 × 10−18 | 3.4910 × 10−18 | 9.0125 × 10−4 | 4.0951 × 10−15 | 3.64 × 10−20 |
| F16 | Ave | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| | std | 7.9441 × 10−16 | 3.4755 × 10−16 | 5.9580 × 10−16 | 1.4398 × 10−15 | 3.9720 × 10−16 | 9.9301 × 10−16 | 2.6514 × 10−16 | 4.4652 × 10−16 | 9.93 × 10−17 |
| F17 | Ave | 0.4369 | 0.7854 | 0.3978 | 0.3978 | 0.3978 | 0.4047 | 0.3991 | 0.3979 | 0.3979 |
| | std | 4.9650 × 10−17 | 4.9650 × 10−17 | 9.9301 × 10−17 | 7.4476 × 10−17 | 8.6888 × 10−17 | 2.4825 × 10−17 | 2.1596 × 10−16 | 9.1235 × 10−15 | 9.93 × 10−17 |
| F18 | Ave | 4.3592 | 3 | 3 | 3.0009 | 3 | 3 | 3 | 3 | 3 |
| | std | 5.9580 × 10−16 | 3.6741 × 10−15 | 6.9511 × 10−16 | 1.5888 × 10−15 | 2.0853 × 10−15 | 5.6984 × 10−15 | 2.6528 × 10−15 | 1.9584 × 10−15 | 1.99 × 10−16 |
| F19 | Ave | −3.85434 | −3.8627 | −3.8627 | −3.8609 | −3.8621 | −3.8627 | −3.8066 | −3.8627 | −3.8627 |
| | std | 9.9301 × 10−17 | 8.9371 × 10−15 | 8.3413 × 10−15 | 7.3483 × 10−15 | 2.4825 × 10−15 | 3.1916 × 10−15 | 2.6357 × 10−15 | 4.2428 × 10−15 | 4.97 × 10−16 |
| F20 | Ave | −2.8239 | −3.2619 | −3.0396 | −3.2014 | −3.2523 | −3.2424 | −3.3206 | −3.3211 | −3.3219 |
| | std | 3.97205 × 10−16 | 2.9790 × 10−16 | 2.1846 × 10−14 | 1.7874 × 10−15 | 2.1846 × 10−15 | 7.9441 × 10−16 | 5.6918 × 10−15 | 1.1421 × 10−11 | 2.88 × 10−15 |
| F21 | Ave | −4.3040 | −5.3891 | −5.1486 | −9.1746 | −9.6452 | −7.4016 | −5.5021 | −10.1532 | −10.1532 |
| | std | 1.5888 × 10−15 | 1.4895 × 10−15 | 2.9790 × 10−16 | 8.5399 × 10−15 | 6.5538 × 10−15 | 2.3819 × 10−11 | 5.4615 × 10−13 | 2.5361 × 10−11 | 7.94 × 10−16 |
| F22 | Ave | −5.1174 | −7.6323 | −9.0239 | −10.0389 | −10.4025 | −8.8165 | −5.0625 | −10.4029 | −10.4029 |
| | std | 1.2909 × 10−15 | 1.5888 × 10−15 | 1.6484 × 10−12 | 1.5292 × 10−14 | 1.9860 × 10−15 | 6.7524 × 10−15 | 8.4637 × 10−14 | 2.8154 × 10−11 | 7.15 × 10−15 |
| F23 | Ave | −6.5621 | −6.1648 | −8.9045 | −9.2905 | −10.1302 | −10.0003 | −10.3613 | −10.5364 | −10.5364 |
| | std | 3.8727 × 10−15 | 2.7804 × 10−15 | 7.1497 × 10−14 | 1.1916 × 10−15 | 4.5678 × 10−15 | 9.1357 × 10−15 | 7.6492 × 10−12 | 3.9861 × 10−11 | 7.94 × 10−16 |
Table 5. Results of the Friedman rank test.

| Test functions | | GA | PSO | GSA | TLBO | GWO | WOA | TSA | MPA | GBGBO |
| Unimodal (F1–F7) | Friedman value | 56 | 55 | 37 | 27 | 24 | 41 | 16 | 36 | 8 |
| | Friedman rank | 9 | 8 | 6 | 4 | 3 | 7 | 2 | 5 | 1 |
| High-dimensional multimodal (F8–F13) | Friedman value | 37 | 34 | 30 | 22 | 21 | 36 | 24 | 32 | 11 |
| | Friedman rank | 8 | 6 | 4 | 3 | 2 | 7 | 3 | 5 | 1 |
| Fixed-dimensional multimodal (F14–F23) | Friedman value | 56 | 48 | 42 | 37 | 33 | 37 | 35 | 23 | 11 |
| | Friedman rank | 8 | 7 | 6 | 5 | 3 | 5 | 4 | 2 | 1 |
| All 23 test functions | Friedman value | 149 | 137 | 109 | 86 | 78 | 114 | 75 | 91 | 30 |
| | Friedman rank | 9 | 8 | 6 | 4 | 3 | 7 | 2 | 5 | 1 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

