Article

A Parallel Compact Gannet Optimization Algorithm for Solving Engineering Optimization Problems

1
College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2
Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
3
Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
*
Author to whom correspondence should be addressed.
Submission received: 30 November 2022 / Revised: 6 January 2023 / Accepted: 9 January 2023 / Published: 13 January 2023
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

Abstract

The Gannet Optimization Algorithm (GOA) has good performance, but there is still room for improvement in memory consumption and convergence. In this paper, an improved Gannet Optimization Algorithm is proposed to solve five engineering optimization problems. The compact strategy enables the GOA to save a large amount of memory, and the parallel communication strategy allows the algorithm to avoid falling into local optimal solutions. We improve the GOA through the combination of parallel strategy and compact strategy, and we name the improved algorithm Parallel Compact Gannet Optimization Algorithm (PCGOA). The performance study of the PCGOA on the CEC2013 benchmark demonstrates the advantages of our new method in various aspects. Finally, the results of the PCGOA on solving five engineering optimization problems show that the improved algorithm can find the global optimal solution more accurately.

1. Introduction

With the rapid development of modern industry, optimization theory and optimization methods have spread throughout all aspects of industrial production, and in recent years the requirements on the efficiency and accuracy of optimization solutions have become increasingly strict. Optimization problems exist in all walks of life, such as engineering optimization, neural network optimization, intelligent computing, and path planning, all of which require highly efficient and accurate optimization algorithms. It has been found that various optimization problems can be solved effectively by meta-heuristic algorithms [1], for example in the field of QR code technology [2]. Swarm intelligence algorithms are a class of meta-heuristic algorithms inspired by natural organisms or natural phenomena, such as the Particle Swarm Optimization Algorithm (PSO) [3,4,5,6], Cuckoo Search Algorithm (CSA) [7,8], Ant Colony Optimization Algorithm (ACO) [9,10,11], Flower Pollination Algorithm (FPA) [12,13,14], Phasmatodea Population Evolution Algorithm (PPE) [15], and Symbiotic Organisms Search (SOS) [16,17].
The Gannet Optimization Algorithm (GOA) is based on the predation patterns of gannets, natural fish-hunting birds [18]. The GOA is simple in structure and easy to understand. It has two stages: an exploration stage, in which the gannets hunt for fish, and an exploitation stage, which unfolds once the gannets find the fish and chase them. In each iteration, the GOA randomly selects one of these two modes, so that global exploration and local exploitation are performed randomly throughout the process.
Although the GOA has better search capability than algorithms such as PSO, the large amount of memory it occupies while searching for the optimum is still a major drawback that limits its convergence efficiency. There are many ways to improve an algorithm's performance, such as adopting a surrogate-assisted strategy [19,20] or an adaptive strategy [21,22,23]. For memory saving, many compact schemes have been proposed for other algorithms, such as the compact Genetic Algorithm (cGA) [24], compact Differential Evolution Algorithm (cDE) [25], and compact Particle Swarm Optimization Algorithm (cPSO) [26]. However, no compact variant of the GOA has been proposed so far. Adding the compact strategy to the GOA not only improves its convergence but also saves computer memory; we call this variant the cGOA. Although the cGOA converges quickly, its convergence accuracy has no advantage over the original algorithm. We therefore improve the stability of the algorithm by adding a parallel communication strategy on top of the compact strategy, and call the resulting algorithm the PCGOA.
In summary, the improved GOA both saves computer memory and solves optimization problems more accurately. Finally, the PCGOA is applied to five selected engineering optimization problems. The contributions of this paper are as follows:
  • To address the shortcomings of the GOA in memory occupation and convergence efficiency, this paper proposes a GOA with a combined parallel and compact strategy; the improved algorithm is called the PCGOA.
  • Two new parallel communication strategies are proposed in parallel strategies to improve the performance of the algorithm.
  • The proposed parallel compact GOA is compared with several traditional algorithms, such as the PSO, SCA and PMVO algorithms, on the CEC2013 benchmark functions, demonstrating that the PCGOA has better performance.
  • The improved GOA algorithm was applied to five engineering optimization problems, and the results indicated that not only the convergence speed was improved, but also a large amount of computer memory was saved.
The sections of this paper are organized as follows. Section 2 briefly reviews the basic principles of the original GOA and of the compact scheme. Section 3 specifies the improvement process of the GOA and the principle of the PCGOA, and proposes two new communication strategies. Section 4 tests the performance of the improved algorithm on the CEC2013 benchmark functions and analyzes the convergence curves. Section 5 applies the improved algorithm to five engineering optimization problems and analyzes its performance. Finally, the article is summarized in Section 6.

2. Related Works

There are two main parts in this section. The first part briefly reviews the basic principles of the GOA. The second part mainly introduces the compact scheme.

2.1. Gannet Optimization Algorithm

The GOA simulates the process by which gannets in a lake find and feed on targets in the water. The gannet is a large waterbird ideally suited to feeding on targets in the water; its size lets it grab a target and bring it out of the water at high speed. Gannets also hunt as a team: when a flock finds a school of fish, the birds line up or form a semicircle to feed. The GOA models this specific feeding process.
The GOA has two stages: exploration and exploitation. Initialization defines a set of random solutions $x_i^d$, an $N \times Dim$ matrix representing the locations of $N$ gannets in a $Dim$-dimensional space; the best solution in this matrix is then taken as the global optimal solution. The formula for generating the entries of the matrix is as follows:
$$x_i^d = r_0 \times (ub_d - lb_d) + lb_d, \qquad i = 1, 2, \ldots, N, \; d = 1, 2, \ldots, Dim \tag{1}$$
where $N$ denotes the total number of gannets, $Dim$ represents the dimension of the solution, $lb_d$ and $ub_d$ represent the lower and upper limits of dimension $d$, and $r_0$ is a random number between 0 and 1.
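As an illustration, the initialization above can be sketched in a few lines (the function and variable names are ours, not from the paper; here $r_0$ is drawn independently per matrix entry):

```python
import numpy as np

def initialize_population(n, dim, lb, ub, rng=None):
    """Create the N x Dim matrix of random gannet positions described above."""
    rng = np.random.default_rng() if rng is None else rng
    r0 = rng.random((n, dim))           # r0 ~ U(0, 1), one draw per entry
    return r0 * (ub - lb) + lb          # x_i^d = r0 * (ub_d - lb_d) + lb_d

# Example: 5 gannets in a 3-dimensional space bounded by [-10, 10].
lb = np.full(3, -10.0)
ub = np.full(3, 10.0)
X = initialize_population(5, 3, lb, ub)
```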
After initialization, the gannets start to hunt. In the exploration phase, a gannet has two dive modes: a U-shaped dive, suitable for catching fish in shallow water, whose depth parameter $a$ is given by Equation (3), and a V-shaped dive, suitable for catching fish in deep water, whose depth parameter $b$ is given by Equations (4) and (5),
$$t = 1 - \frac{It_k}{K_{max}}, \qquad k = 1, 2, \ldots, K_{max} \tag{2}$$

$$a = 2\cos(2\pi r_1) \times t \tag{3}$$

$$b = 2V(2\pi r_2) \times t \tag{4}$$

$$V(y) = \begin{cases} -\dfrac{y}{\pi} + 1, & y \in (0, \pi) \\[4pt] \dfrac{y}{\pi} - 1, & y \in (\pi, 2\pi) \end{cases} \tag{5}$$
where $It_k$ denotes the $k$th iteration, $K_{max}$ denotes the maximum number of iterations, and $r_1$ and $r_2$ are random numbers between 0 and 1.
The probability of choosing either dive strategy is the same, so a random number $q$ is defined to represent the random selection of a hunting strategy. The position update equation is given in Equation (6),
$$MX_i(t+1) = \begin{cases} u_1 + u_2 + X_i(t), & q \ge 0.5 \quad (a) \\ v_1 + v_2 + X_i(t), & q < 0.5 \quad (b) \end{cases} \tag{6}$$

$$u_2 = A \times (X_i(t) - X_{rand}(t)) \tag{7}$$

$$v_2 = B \times (X_i(t) - X_{Mean}(t)) \tag{8}$$

$$A = (2r_3 - 1) \times a \tag{9}$$

$$B = (2r_4 - 1) \times b \tag{10}$$
where $r_3$ and $r_4$ are random numbers between 0 and 1, $u_1$ is a random number in the range $[-a, a]$, and $v_1$ is a random number in the range $[-b, b]$. The $i$th solution in the population is denoted by $X_i(t)$. $X_{rand}(t)$ is a randomly selected solution from the population, and $X_{Mean}(t)$ is the solution at the center of the population, calculated as shown in Equation (11),
$$X_{Mean}(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \tag{11}$$
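A minimal sketch of the exploration-phase position update of Equation (6), together with the $t$, $a$, $b$, $u$ and $v$ definitions above, could look as follows (names and the per-call random draws are our assumptions, not the paper's reference implementation):

```python
import numpy as np

def v_shape(y):
    """V-shaped function of the exploration phase, defined on (0, 2*pi)."""
    return np.where(y < np.pi, -y / np.pi + 1.0, y / np.pi - 1.0)

def exploration_step(X, i, it, k_max, rng=None):
    """One exploration-phase position update for gannet i."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = X.shape
    t = 1.0 - it / k_max
    a = 2.0 * np.cos(2.0 * np.pi * rng.random()) * t           # U-dive parameter
    b = 2.0 * float(v_shape(2.0 * np.pi * rng.random())) * t   # V-dive parameter
    if rng.random() >= 0.5:                      # branch (a): U-shaped dive
        u1 = rng.uniform(-abs(a), abs(a), dim)
        A = (2.0 * rng.random() - 1.0) * a
        u2 = A * (X[i] - X[rng.integers(n)])     # X_rand: a random individual
        return u1 + u2 + X[i]
    else:                                        # branch (b): V-shaped dive
        v1 = rng.uniform(-abs(b), abs(b), dim)
        B = (2.0 * rng.random() - 1.0) * b
        v2 = B * (X[i] - X.mean(axis=0))         # X_Mean: population centre
        return v1 + v2 + X[i]
```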
During the exploitation phase, when a gannet encounters a fish that suddenly turns, it must take one of two further actions. Here, the capture capability is defined by Equation (12),
$$Capturability = \frac{1}{R \times t_2} \tag{12}$$

$$t_2 = 1 + \frac{It_k}{K_{max}} \tag{13}$$

$$R = \frac{M v^2}{L} \tag{14}$$

$$L = 0.2 + (2 - 0.2) \times r_5 \tag{15}$$
where $M = 2.5$ kg is the average mass of a gannet and $v = 1.5$ m/s is the speed of a gannet in the water, both set by the authors of the GOA; $r_5$ is a random number between 0 and 1. If a fish escapes to a location within the gannet's capture capability, the gannet changes position to chase it; otherwise, the gannet loses the target and performs a Levy flight to search for the next target at random. The position update equations are shown in Equation (16),
$$MX_i(It+1) = \begin{cases} X_i(It) + t \times Delta \times (X_i(It) - X_{best}(It)), & Capturability \ge c \quad (a) \\ X_{best}(It) - (X_i(It) - X_{best}(It)) \times t \times Lv, & Capturability < c \quad (b) \end{cases} \tag{16}$$

$$Delta = Capturability \times |X_i(It) - X_{best}(It)| \tag{17}$$

$$Lv = Levy(Dim) \tag{18}$$
where $c$ is a constant with a value of 0.2, $X_{best}(It)$ denotes the best gannet so far, and $Levy(\cdot)$ denotes the Levy flight of the gannet, as shown in Equation (19):
$$Levy(Dim) = 0.01 \times \frac{\mu \times \sigma}{|v|^{1/\beta}} \tag{19}$$

$$\sigma = \left( \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta} \tag{20}$$
where $\mu$ and $v$ are random values between 0 and 1, $\sigma$ is given by Equation (20), and $\beta$ is a predetermined constant with a value of 1.5.
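The Levy flight step of Equation (19) can be sketched as follows; the function name and the use of per-dimension random draws are our assumptions:

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5, rng=None):
    """Levy flight step used in the exploitation phase."""
    rng = np.random.default_rng() if rng is None else rng
    # sigma from the Gamma-function expression above, beta = 1.5
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.random(dim)        # mu ~ U(0, 1)
    v = rng.random(dim)         # v ~ U(0, 1)
    return 0.01 * mu * sigma / np.abs(v) ** (1 / beta)
```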
The above equations describe how the gannets update their positions during predation. The pseudocode of the GOA is shown in Algorithm 1:
Algorithm 1: GOA

2.2. Compact Scheme

The compact strategy represents the entire population by a virtual population, and this approach has proven effective [27]. When the compact strategy is applied, each iteration updates the virtual population rather than the real one: the algorithm changes from updating the entire population to updating a probabilistic model that represents it, saving computer memory.
The compact strategy virtualizes the entire population by means of a probabilistic model. A perturbation vector ($PV$) is used to describe the population [28], and the $PV$ changes as the algorithm proceeds. The $PV$ is represented as $PV = [\mu^t, \sigma^t]$, where $\mu$ denotes its mean value, $\sigma$ denotes its standard deviation, and $t$ is the current iteration number. In the virtual population, each dimension of the particles corresponds to one entry of the $PV$. Both $\mu$ and $\sigma$ have a corresponding probability density function ($PDF$), and the $PDF$ is normalized [29]; together they represent the distribution of the whole population. Given the $PDF$, we can generate a solution $x$ from it. The $PDF$ can be approximated by Chebyshev polynomials, which leads to a cumulative distribution function ($CDF$) taking values from 0 to 1 [30,31]. Because the $PDF$ is defined on $[-1, 1]$, the corresponding $CDF$ is expressed as follows:
$$CDF = \int_{-1}^{x} PDF\,dx = \int_{-1}^{x} \frac{\sqrt{\frac{2}{\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}}{\sigma \left( \mathrm{erf}\left(\frac{\mu+1}{\sqrt{2}\sigma}\right) - \mathrm{erf}\left(\frac{\mu-1}{\sqrt{2}\sigma}\right) \right)}\,dx \tag{21}$$
where $x$ takes values from $-1$ to 1 and $\mathrm{erf}$ denotes the error function [32]. The formula above shows that the $PDF$ is a truncated Gaussian distribution, restricted to the interval $[-1, 1]$ and normalized. The $CDF$ can also be written in the closed form of Equation (22):
$$CDF = \frac{\mathrm{erf}\left(\frac{\mu+1}{\sqrt{2}\sigma}\right) + \mathrm{erf}\left(\frac{x-\mu}{\sqrt{2}\sigma}\right)}{\mathrm{erf}\left(\frac{\mu+1}{\sqrt{2}\sigma}\right) - \mathrm{erf}\left(\frac{\mu-1}{\sqrt{2}\sigma}\right)} \tag{22}$$
Usually, a solution $X_i$ generated from the $PV$ is updated by the algorithm's update formula to obtain a new solution $X_{new}$. The fitness of $X_i$ and $X_{new}$ is then evaluated: the solution with better fitness is called the winner and the other the loser, and the $\mu$ and $\sigma$ of the $PV$ are updated according to the winner and the loser [33]. The $PV$ update formula is as follows:
$$\mu_i^{t+1} = \mu_i^t + \frac{1}{N_{total}}(winner_i - loser_i) \tag{23}$$
In Equation (23), $\mu^{t+1}$ denotes the new mean after the iteration and $N_{total}$ denotes the population size. The formula for updating the standard deviation of the $PV$ is as follows:
$$\sigma_i^{t+1} = \sqrt{(\sigma_i^t)^2 + (\mu_i^t)^2 - (\mu_i^{t+1})^2 + \frac{1}{N_{total}}\left(winner_i^2 - loser_i^2\right)} \tag{24}$$
Representing the population by a probability distribution greatly reduces the storage required while the algorithm runs. Instead of storing and updating the entire population, the algorithm stores only the $PV$ and updates its $\mu$ and $\sigma$, which allows it to run with far less memory and facilitates its use on resource-constrained devices.
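A sketch of the $PV$ update of Equations (23) and (24) follows; the clamp that guards against a negative radicand is our addition, not part of the paper's formulas:

```python
import numpy as np

def update_pv(mu, sigma, winner, loser, n_total):
    """Update PV = [mu, sigma] from one winner/loser pair."""
    mu_new = mu + (winner - loser) / n_total                       # Eq. (23)
    var_new = sigma ** 2 + mu ** 2 - mu_new ** 2 \
              + (winner ** 2 - loser ** 2) / n_total               # Eq. (24), squared
    sigma_new = np.sqrt(np.maximum(var_new, 1e-12))                # numerical guard
    return mu_new, sigma_new
```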

3. Parallel and Compact GOA

In this section, the first part will introduce the proposed parallel communication strategy and the cGOA, and the second part will introduce the parallel and compact hybrid strategy added to the GOA.

3.1. Two Proposed Parallel Communication Strategies and cGOA

The idea of the parallel strategy is to divide all particles into groups of equal or unequal size, with each group performing its own computation while the algorithm runs [34,35]. We group the gannets in our improvement to increase the convergence speed and accuracy of the algorithm.
Parallel improvements exist for other algorithms, such as Parallel Particle Swarm Optimization (PPSO) [36] and the Parallel Fish Migration Optimization algorithm with Compact technology (PCFMO) [37]. These parallel variants show that adding inter-group communication to an algorithm is more effective than the original algorithm alone.
Two novel communication schemes are used in this paper to improve the GOA with the parallel strategy. In the first, after each iteration, the elite solution of a randomly selected group replaces the elite solution of another randomly selected group if its fitness is better; we call this the communication strategy with random replacement. In the second, after each iteration, the elite solution of each group replaces the elite solution of a randomly selected group if its fitness is better; we call this the communication strategy with optimal replacement. To make the best use of the two strategies, after every group finishes an iteration, one of the two strategies is chosen at random for inter-group communication, each with probability one-half. Moreover, because both strategies replace elite solutions based on fitness comparisons between randomly selected groups, whenever a group's replacement attempt fails, a disturbance vector d is added to the elite solution of that group, and the disturbed solution replaces the original one if its fitness is better. Figure 1 and Figure 2 illustrate these two parallel communication strategies.
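One communication round could be sketched as below, assuming minimization; the magnitude `d_scale` of the disturbance vector d is an assumed value, and the function names are ours:

```python
import numpy as np

def communicate(elites, fitness, f, rng=None, d_scale=0.1):
    """One inter-group communication round (minimization).
    elites: (G, dim) array of per-group elite solutions; fitness: their values;
    f: objective function."""
    rng = np.random.default_rng() if rng is None else rng
    G = len(elites)
    if rng.random() < 0.5:
        # Random replacement: one randomly chosen source group tries to
        # replace another randomly chosen group.
        pairs = [tuple(rng.choice(G, size=2, replace=False))]
    else:
        # Optimal replacement: every group tries to replace a random target.
        pairs = [(i, int(rng.integers(G))) for i in range(G)]
    for src, dst in pairs:
        if src == dst:
            continue
        if fitness[src] < fitness[dst]:
            # Source elite wins: it replaces the target group's elite.
            elites[dst], fitness[dst] = elites[src].copy(), fitness[src]
        else:
            # Replacement failed: disturb the source elite, keep it only
            # if the disturbed solution has better fitness.
            trial = elites[src] + d_scale * rng.standard_normal(elites[src].shape)
            f_trial = f(trial)
            if f_trial < fitness[src]:
                elites[src], fitness[src] = trial, f_trial
    return elites, fitness
```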
The above introduces the parallel communication strategies used in this paper; next we introduce the combination of the GOA with the compact strategy. Section 2 presented the basic principle of the compact strategy; combining it with the GOA yields the cGOA. The cGOA saves the storage space occupied by the particles: whereas the GOA stores every particle at initialization, the cGOA initializes only one perturbation vector ($PV$). Solutions are generated from the $PV$ through the inverse of the $CDF$, given by the following formula:
$$y = \sqrt{2}\sigma \times \mathrm{erf}^{-1}\left( x \times \mathrm{erf}\left(\frac{\mu+1}{\sqrt{2}\sigma}\right) - x \times \mathrm{erf}\left(\frac{\mu-1}{\sqrt{2}\sigma}\right) - \mathrm{erf}\left(\frac{\mu+1}{\sqrt{2}\sigma}\right) \right) + \mu \tag{25}$$
In Equation (25), $\mathrm{erf}^{-1}$ denotes the inverse function of $\mathrm{erf}$, $y$ takes values in the range $[-1, 1]$, and $x$ is a random number in $[0, 1]$. To map the solution $y$ into the decision space, Equation (26) is used:
$$y_{ds} = \frac{y}{2}(Ub - Lb) + \frac{1}{2}(Ub + Lb) \tag{26}$$
In Equation (26), $Ub$ and $Lb$ represent the upper and lower limits of each dimension, respectively, $y$ is obtained from Equation (25), and $y_{ds}$ is the solution mapped into the actual decision space. During the iterative process of the cGOA, each iteration uses the obtained $y_{ds}$ to update the $\mu$ and $\sigma$ of the $PV$ through Equations (23) and (24) of Section 2.
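The sampling of Equations (25) and (26) can be sketched with only the standard library; since `math` provides `erf` but not its inverse, the `erfinv` helper below is our own bisection approximation:

```python
import math
import random

def erfinv(t, tol=1e-12):
    """Inverse error function via bisection on math.erf (our helper)."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_from_pv(mu, sigma, lb, ub, rng=random):
    """Draw one decision-space value from PV = [mu, sigma]: invert the
    truncated-Gaussian CDF (Eq. (25)), then map [-1, 1] to [lb, ub] (Eq. (26))."""
    x = rng.random()                                   # x ~ U(0, 1)
    e_hi = math.erf((mu + 1) / (math.sqrt(2) * sigma))
    e_lo = math.erf((mu - 1) / (math.sqrt(2) * sigma))
    y = math.sqrt(2) * sigma * erfinv(x * e_hi - x * e_lo - e_hi) + mu
    return y / 2 * (ub - lb) + (ub + lb) / 2           # map to decision space
```

At $x = 0$ this yields $y = -1$ and at $x = 1$ it yields $y = 1$, so the sample always lands inside the bounds.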

3.2. Hybrid Parallel and Compact GOA

Based on the two parallel communication strategies given in the previous section and the compact strategy, this subsection adds the combined parallel-compact strategy to the GOA. The resulting hybrid algorithm is called the Parallel Compact Gannet Optimization Algorithm (PCGOA).
In the PCGOA, we divide the population into five groups. While the algorithm runs, the five groups perform their computations independently. After all five groups complete one iteration, inter-group communication starts, using the strategies described in Section 3.1.
The compact strategy is incorporated by virtualizing the population within each group; since there are five groups, there are five independent $PV$s, one for the virtual population of each group. Each iteration ends by updating each group's $PV$ according to that group's winner and loser. To distinguish the groups, we write $winner_i$ and $loser_i$ $(i = 1, 2, 3, 4, 5)$ for the winner and loser of group $i$. The specific process is as follows:
1. Divide the entire population into 5 groups and initialize $PV_i$ for each group, where $\sigma_i = 10$, $\mu_i = 0$ $(i = 1, 2, 3, 4, 5)$.
2. Generate the solution $X$ of each group from its $PV_i$.
3. Apply the position update formula of the GOA to the $X$ of each group to generate $X_{new}$.
4. Compare $X$ and $X_{new}$ of each group, and select the $winner$ and $loser$ of each group by $[winner, loser]$ = compete($X$, $X_{new}$).
5. Update each group's $PV$ according to Equations (23) and (24), and update the optimal solution of each group and the global optimal solution.
6. If the termination condition is met, the algorithm ends; otherwise, repeat Steps 2 to 5.
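The overall loop can be sketched end-to-end on a toy objective. This is only an illustration: the GOA position update is simplified to a shrinking random step, truncated-Gaussian sampling is approximated by clipping, and only the random-replacement communication is shown; all names and constants besides the five groups and the PV initialization are our assumptions.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def pcgoa_sketch(f, dim, lb, ub, groups=5, n_total=50, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.zeros((groups, dim))                  # PV_i = [mu_i, sigma_i]
    sigma = np.full((groups, dim), 10.0)
    scale, shift = (ub - lb) / 2.0, (ub + lb) / 2.0
    elite = np.zeros((groups, dim))
    elite_fit = np.full(groups, np.inf)
    for it in range(iters):
        for g in range(groups):
            # sample a normalized solution from the group's PV (clipped Gaussian)
            x = np.clip(rng.normal(mu[g], np.abs(sigma[g])), -1, 1)
            # simplified stand-in for the GOA position update
            x_new = np.clip(x + (1 - it / iters) * 0.1
                            * rng.standard_normal(dim), -1, 1)
            fx, fn = f(x * scale + shift), f(x_new * scale + shift)  # compete
            winner, loser = (x_new, x) if fn < fx else (x, x_new)
            mu_new = mu[g] + (winner - loser) / n_total              # PV update
            sigma[g] = np.sqrt(np.maximum(sigma[g] ** 2 + mu[g] ** 2 - mu_new ** 2
                               + (winner ** 2 - loser ** 2) / n_total, 1e-12))
            mu[g] = mu_new
            if min(fx, fn) < elite_fit[g]:
                elite[g], elite_fit[g] = winner.copy(), min(fx, fn)
        # inter-group communication (random replacement only, for brevity)
        i, j = rng.choice(groups, size=2, replace=False)
        if elite_fit[i] < elite_fit[j]:
            elite[j], elite_fit[j] = elite[i].copy(), elite_fit[i]
    g_best = int(np.argmin(elite_fit))
    return elite[g_best] * scale + shift, elite_fit[g_best]
```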
The algorithm flow of the PCGOA is shown in Algorithm 2:
Algorithm 2: PCGOA
To understand the PCGOA more clearly, we analyze its theoretical computational complexity. From Algorithm 2, the computational complexity of each iteration of the PCGOA lies mainly in lines 9 to 22 of the pseudocode and is O($g \times d$), where $g$ denotes the number of groups and $d$ the number of dimensions. Over lines 8 to 24 of the pseudocode, the computational complexity of the whole algorithm is O($K_{max} \times g \times d$). For comparison, the computational complexity of the original GOA is O($K_{max} \times N \times d$), where $N$ denotes the total number of gannets in the population.

4. Experiments

In this section, the CEC2013 benchmark is used to test the PCGOA and demonstrate its performance. CEC2013 contains 28 functions: unimodal functions (F1–F5), multimodal functions (F6–F20) and composition functions (F21–F28). These three categories are considered to cover most real-world problems. In the experiments, every algorithm is run under the same environment on CEC2013, ensuring a fair comparison. The development environment is MATLAB R2018b on an Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz with 16 GB RAM.

4.1. Selection of Comparison Algorithm and Its Parameter Setting

In order to demonstrate the advantages of the PCGOA more comprehensively, the classical algorithms PSO and SCA, the CS algorithm with Levy flight, the MVO algorithm improved by parallel strategy (PMVO), and the AO and BOA algorithms recently proposed in the current research field are selected for experimental comparison in this paper. The specific algorithms and their parameters are set as follows:
  • Aquila Optimizer (AO) [38]: $r_0 = 10$, $delta = 0.1$, $alpha = 0.1$, $u = 0.0265$;
  • Butterfly Optimization Algorithm (BOA) [39]: $probability\_switch = 0.6$, $power\_exponent = 0.1$, $sensory\_modality = 0.01$;
  • Particle Swarm Optimization (PSO): $V_{max} = 6$, $V_{min} = -6$, $c_1 = c_2 = 2$, $w = 0.3$;
  • Sine Cosine Algorithm (SCA) [40]: $a = 2$;
  • Parallel Multi-Verse Optimizer (PMVO) [41]: $G = 4$, $R = 20, 40, \ldots, 2000$, $w = 6$, $W_{min} = 0.2$, $W_{max} = 1$;
  • Cuckoo Search Algorithm (CS): $P_a = 0.25$.
Each benchmark function in CEC2013 is run 20 times with a dimension of 30 and 20,000 function evaluations per run. The advantages of the PCGOA over the original algorithm and the other algorithms are compared based on the mean and variance of the 20 runs of each function. See Table 1 for the specific data.
The bolded data represent the best value achieved by any algorithm on the current function. The win, tie and loss entries in the last row of Table 1 give the number of functions, out of 28, on which the PCGOA wins, ties and loses, respectively, against each other algorithm. The PCGOA achieves 17 wins and 2 ties against the GOA; among the winning functions, the PCGOA has an advantage over the GOA on the unimodal and composition functions, and on part of the multimodal functions.
To verify the statistical advantage of the PCGOA and determine whether it differs significantly from the other algorithms, we used the nonparametric Wilcoxon rank-sum test. The significance level $\alpha$ was set to 0.05, and the resulting $p$ values are shown in Table 2, where values greater than 0.05 are marked in bold. As Table 2 shows, the PCGOA differs significantly from the other algorithms in most cases.

4.2. Convergence Analysis

To better demonstrate the advantages of the PCGOA, its convergence performance is tested. In this subsection, we present tests of the PCGOA on the 28 benchmark functions of CEC2013 in 10 dimensions. Because CEC2013 contains three kinds of test functions, namely unimodal, multimodal and composition functions, it can test the convergence and robustness of the PCGOA from many aspects. To show the convergence curves clearly, several functions of each kind are selected for display.
Figure 3 shows the convergence of the PCGOA on the unimodal functions. Compared with the GOA, PSO, PMVO and the other algorithms, the convergence curves show that the PCGOA performs well on the unimodal functions and converges to the optimal solution faster than the other algorithms. Because the PCGOA communicates at each iteration, it performs particularly well on unimodal functions.
The performance of the PCGOA on the multimodal functions is shown in Figure 4. The selected functions with clearer convergence trends show that the convergence of the PCGOA on multimodal functions is also better than that of the other algorithms, as are the final results. The performance of the PCGOA on the composition functions is shown in Figure 5, for which four functions with clearer convergence curves are selected. On F21, F23, F24 and F27 the PCGOA still performs well and converges rapidly for the same number of function evaluations. The final values reached show that the PCGOA is superior to the other algorithms on most functions.

4.3. Algorithm Memory Analysis

The PCGOA saves computer memory compared with the GOA at runtime, because the compact strategy saves memory mainly in the storage of the gannet population. This experiment therefore compares the memory each algorithm needs to store the population at each iteration. The following table shows the computer memory occupied by the gannet population for several algorithms at each iteration.
In Table 3, Name is the form in which the algorithm stores the population, Size is the stored matrix size, Bytes/Class indicates the exact storage occupied in the computer, groups is the number of groups, $D$ is the number of dimensions, and $N_p$ is the total number of individuals in the population. In the development environment of this experiment, each basic unit is an 8-byte floating-point value. In Bytes/Class, the size is multiplied by 2 because the $PV$ of each group is actually a $2 \times D$ matrix.
As the table shows, the compact strategy virtualizes the population by storing only a few $PV$s instead of the whole population, which saves computer memory while the algorithm runs.

5. Engineering Design Problems

Many problems in real life require optimal solutions. In this section, the PCGOA is applied to five constrained engineering optimization problems: tension spring design [42], pressure vessel design [43], welded beam design [44], speed reducer design [45] and car side impact design [46].

5.1. Constraint Handling

The boundary constraints in this experiment are handled by the penalty function method. Its basic idea is to transform the constrained problem into an unconstrained optimization problem with the help of penalty functions, and to obtain the solution of the original constrained problem by solving a series of unconstrained problems. During the iterations, infeasible points are pushed toward the feasible domain by applying a penalty to them; once a point is feasible, it can be the optimal solution of the original problem.
The constrained optimization problem is transformed into an unconstrained optimization problem by the following equation,
$$\min: L(X) = f(X) + \sigma \sum_{i} g(c_i(X)), \qquad g(c_i(X)) = \max(0, c_i(X))^2 \tag{27}$$
where i represents the ith constraint, c i ( X ) represents a series of constraints, g ( c i ( X ) ) is the external penalty function, and σ is the penalty factor. In Equation (27), the value of σ is 1,000,000.
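As a sketch, the exterior penalty of Equation (27) can be wrapped around any objective like this (the function names are ours; the toy constraint is purely illustrative):

```python
def penalized(f, constraints, sigma=1_000_000.0):
    """Exterior-penalty wrapper: L(X) = f(X) + sigma * sum(max(0, c_i(X))^2).
    Each function in `constraints` encodes a constraint c_i(X) <= 0."""
    def L(x):
        return f(x) + sigma * sum(max(0.0, c(x)) ** 2 for c in constraints)
    return L

# Toy example: minimize x^2 subject to c(x) = 1 - x <= 0 (i.e. x >= 1).
L = penalized(lambda x: x ** 2, [lambda x: 1.0 - x])
```

Feasible points pay no penalty (L(1.0) equals the raw objective), while infeasible points are pushed back by the large $\sigma$.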

5.2. Tension Spring Design

The purpose of tension spring design is to minimize the spring's weight $Func(X)$ under four constraints. There are three design variables: the diameter of the spring wire $(x_1)$, the mean diameter of the spring coil $(x_2)$, and the number of active coils of the spring $(x_3)$. The mathematical description is as follows:
$$Func(X) = x_1^2 x_2 (x_3 + 2)$$
The constraints of this engineering optimization problem are as follows:
$$\begin{aligned}
g_1(X) &= 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0 \\
g_2(X) &= \frac{4x_2^2 - x_1 x_2}{12566(x_1^3 x_2 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0 \\
g_3(X) &= 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0 \\
g_4(X) &= \frac{x_1 + x_2}{1.5} - 1 \le 0
\end{aligned}$$
where the range of values of each variable is as follows:
$$0.05 \le x_1 \le 2, \qquad 0.25 \le x_2 \le 1.3, \qquad 2 \le x_3 \le 15$$
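For illustration, the objective and constraints of this problem can be coded directly and fed to any penalty-based optimizer; the constant 71785 in $g_1$ is the value commonly used for this benchmark, and the function names are ours:

```python
def spring_cost(x):
    """Spring weight: Func(X) = x1^2 * x2 * (x3 + 2)."""
    x1, x2, x3 = x
    return x1 ** 2 * x2 * (x3 + 2)

def spring_constraints(x):
    """The four constraints g_i(X) <= 0 of the tension spring problem."""
    x1, x2, x3 = x
    return [
        1 - x2 ** 3 * x3 / (71785 * x1 ** 4),
        (4 * x2 ** 2 - x1 * x2) / (12566 * (x1 ** 3 * x2 - x1 ** 4))
        + 1 / (5108 * x1 ** 2) - 1,
        1 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1,
    ]
```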
In this paper, the PCGOA is applied to this engineering optimization problem and compared with other algorithms under the same conditions. The optimal solutions derived from each algorithm run for this engineering optimization problem indicate that the PCGOA yields optimal results in solving the problem. The results are shown in Table 4.

5.3. Pressure Vessel Design

The purpose of pressure vessel design is to minimize the total cost $Func(X)$ while meeting production needs. The total cost of a pressure vessel comprises material, forming and welding costs. There are four design variables: shell thickness $(x_1)$, head thickness $(x_2)$, inner radius $(x_3)$ and vessel length $(x_4)$. The mathematical description is as follows:
$$Func(X) = 3.1661 x_1^2 x_4 + 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 19.84 x_1^2 x_3$$
The constraints of this engineering optimization problem are as follows:
$$\begin{aligned}
g_1(X) &= 0.0193 x_3 - x_1 \le 0 \\
g_2(X) &= 0.00954 x_3 - x_2 \le 0 \\
g_3(X) &= 1296000 - \pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 \le 0 \\
g_4(X) &= x_4 - 240 \le 0
\end{aligned}$$
where the range of values of each variable is as follows:
$$1 \times 0.0625 \le x_1, x_2 \le 99 \times 0.0625, \qquad 10 \le x_3, x_4 \le 200$$
In this paper, the PCGOA is applied to this engineering optimization problem and compared with other algorithms under the same conditions. The optimal solutions derived by each algorithm running on this engineering optimization problem indicate that the PCGOA has an advantage over the other algorithms in solving the problem. The results are shown in Table 5.

5.4. Welded Beam Design

The purpose of the welded beam design is to minimize its design cost F u n c ( X ) under its seven constraints. There are four design variables involved in the design: the thickness of the weld ( x 1 ) , the length of the clamped reinforcement ( x 2 ) , the height of the reinforcement ( x 3 ) and the thickness of the reinforcement ( x 4 ) . The mathematical description is as follows:
$$Func(X) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (x_2 + 14)$$
The constraints of this engineering optimization problem are as follows:
$$\begin{aligned}
g_1(X) &= \tau(X) - 13600 \le 0 \\
g_2(X) &= \sigma(X) - 30000 \le 0 \\
g_3(X) &= x_1 - x_4 \le 0 \\
g_4(X) &= 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0 \\
g_5(X) &= 0.125 - x_1 \le 0 \\
g_6(X) &= \delta(X) - 0.25 \le 0 \\
g_7(X) &= 6000 - Pa(X) \le 0
\end{aligned}$$

$$\tau(X) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2K} + (\tau'')^2}, \qquad \tau' = \frac{6000}{\sqrt{2}\, x_1 x_2}, \qquad \tau'' = \frac{T K}{L}$$

$$T = 6000\left(14 + \frac{x_2}{2}\right), \qquad K = \sqrt{\left(\frac{x_1 + x_3}{2}\right)^2 + \frac{x_2^2}{4}}$$

$$L = 2\sqrt{2}\, x_1 x_2 \left( \frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2 \right)$$

$$\sigma(X) = \frac{504000}{x_3^2 x_4}, \qquad \delta(X) = \frac{65856000}{(30 \times 10^6)\, x_3^3 x_4}$$

$$Pa(X) = \frac{4.013\sqrt{\dfrac{30 \times 10^6\, x_3^2 x_4^6}{36}}}{196}\left(1 - \frac{x_3}{28}\sqrt{\frac{30 \times 10^6}{4(12 \times 10^6)}}\right)$$
where the range of values of each variable is as follows:
0.1 \le x_1 \le 2, \qquad 0.1 \le x_2, x_3 \le 10, \qquad 0.1 \le x_4 \le 2
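Because the combined shear stress tau(X) is the most involved quantity in this problem, a small Python sketch can make the chain of definitions concrete. Here T, K and L play the roles of the bending moment, lever arm and polar moment of inertia in the classical statement of the welded beam problem:

```python
import math

def wb_cost(x1, x2, x3, x4):
    # Fabrication cost Func(X)
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def wb_shear(x1, x2, x3):
    # Combined shear stress tau(X) used in constraint g_1
    tau_p = 6000.0 / (math.sqrt(2.0) * x1 * x2)           # primary shear tau'
    T = 6000.0 * (14.0 + x2 / 2.0)                        # moment
    K = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0)**2)     # lever arm
    L = 2.0 * (math.sqrt(2.0) * x1 * x2
               * (x2**2 / 12.0 + ((x1 + x3) / 2.0)**2))   # polar moment
    tau_pp = T * K / L                                    # secondary shear tau''
    return math.sqrt(tau_p**2
                     + 2.0 * tau_p * tau_pp * x2 / (2.0 * K)
                     + tau_pp**2)
```

At the well-known near-optimal design (0.2057, 3.4705, 9.0366, 0.2057) the shear stress sits close to its 13,600 limit, which is why g_1 is an active constraint at the optimum.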
The PCGOA was applied to this engineering optimization problem and compared with the other algorithms under the same conditions. The best solutions found by each algorithm indicate that the PCGOA has an advantage over the other algorithms on this problem. The results are shown in Table 6.

5.5. Speed Reducer Design

In this optimization problem, the goal of the reducer design is to minimize its weight F u n c ( X ) under eleven constraints. There are seven design variables involved: tooth width ( x 1 ) , gear module ( x 2 ) , number of teeth in the pinion ( x 3 ) , length of the first shaft between bearings ( x 4 ) , length of the second shaft between bearings ( x 5 ) , diameter of the first shaft ( x 6 ) and diameter of the second shaft ( x 7 ) . The mathematical description is as follows:
Func(X) = 0.7854 x_1 x_2^2 \left( 3.3333 x_3^2 + 14.9334 x_3 - 43.0934 \right) + 0.7854 \left( x_4 x_6^2 + x_5 x_7^2 \right) + 7.4777 \left( x_6^3 + x_7^3 \right) - 1.508 x_1 \left( x_6^2 + x_7^2 \right)
The constraints of this engineering optimization problem are as follows:
g_1(X) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0
g_2(X) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0
g_3(X) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0
g_4(X) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0
g_5(X) = \frac{1}{110 x_6^3} \sqrt{ \left( \frac{745.0 x_4}{x_2 x_3} \right)^2 + 16.9 \times 10^6 } - 1 \le 0
g_6(X) = \frac{1}{85 x_7^3} \sqrt{ \left( \frac{745.0 x_5}{x_2 x_3} \right)^2 + 157.5 \times 10^6 } - 1 \le 0
g_7(X) = \frac{x_2 x_3}{40} - 1 \le 0
g_8(X) = \frac{5 x_2}{x_1} - 1 \le 0
g_9(X) = \frac{x_1}{12 x_2} - 1 \le 0
g_{10}(X) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0
g_{11}(X) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0
where the range of values of each variable is as follows:
2.6 \le x_1 \le 3.6, \quad 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 7.3 \le x_4 \le 8.3, \quad 7.8 \le x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9, \quad 5 \le x_7 \le 5.5
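A direct transcription of the objective and the eleven constraints makes it easy to check candidate reducer designs. The sketch below follows the standard statement of the speed reducer problem; it is an illustration, not the authors' implementation:

```python
import math

def sr_weight(x):
    # Reducer weight Func(X) for x = (x1, ..., x7)
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2)
            + 7.4777 * (x6**3 + x7**3)
            - 1.508 * x1 * (x6**2 + x7**2))

def sr_constraints(x):
    # All eleven g_i(X) <= 0 in the order g_1 ... g_11
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2**2 * x3) - 1.0,
        397.5 / (x1 * x2**2 * x3**2) - 1.0,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1.0,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1.0,
        math.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1.0,
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1.0,
        x2 * x3 / 40.0 - 1.0,
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ]
```

At the classical near-optimal design (3.5, 0.7, 17, 7.3, 7.715, 3.3502, 5.2867), the weight is close to 2994.5 and several constraints (g_5, g_8, g_11) are nearly active, which is typical of good solutions to this problem.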
The PCGOA was applied to this engineering optimization problem and compared with the other algorithms under the same conditions. The best solutions found by each algorithm indicate that the PCGOA has an advantage over the other algorithms on this problem. The results are shown in Table 7.

5.6. Car Side Impact Design

In this optimization problem, the car is subjected to a side collision, and the purpose of the car side impact design is to minimize the door weight F u n c ( X ) under ten constraints. There are eleven design variables involved: the thickness of the inner column plate ( x 1 ), the B-pillar reinforcement ( x 2 ), the thickness of the inner floor ( x 3 ), the crossmember ( x 4 ), the door beam ( x 5 ), the door beltline reinforcement ( x 6 ), the roof longitudinal beam ( x 7 ), the inner B-pillar ( x 8 ), the inner floor ( x 9 ), the height of the guardrail ( x 10 ) and the material at the crash location ( x 11 ). The mathematical description is as follows:
Func(X) = 1.98 + 4.90 x_1 + 6.67 x_2 + 6.98 x_3 + 4.01 x_4 + 1.78 x_5 + 2.73 x_7
The constraints of this engineering optimization problem are as follows:
g_1(X) = 1.16 - 0.3717 x_2 x_4 - 0.00931 x_2 x_{10} - 0.484 x_3 x_9 + 0.01343 x_6 x_{10} - 1 \le 0
g_2(X) = 46.36 - 9.9 x_2 - 12.9 x_1 x_8 + 0.1107 x_3 x_{10} - 32 \le 0
g_3(X) = 33.86 + 2.95 x_3 + 0.1792 x_{10} - 5.057 x_1 x_2 - 11.0 x_2 x_8 - 0.0215 x_5 x_{10} - 9.98 x_7 x_8 + 22.0 x_8 x_9 - 32 \le 0
g_4(X) = 28.98 + 3.818 x_3 - 4.2 x_1 x_2 + 0.0207 x_5 x_{10} + 6.63 x_6 x_9 - 7.7 x_7 x_8 + 0.32 x_9 x_{10} - 32 \le 0
g_5(X) = 0.261 - 0.0159 x_1 x_2 - 0.188 x_1 x_8 - 0.019 x_2 x_7 + 0.0144 x_3 x_5 + 0.0008757 x_5 x_{10} + 0.08045 x_6 x_9 + 0.00139 x_8 x_{11} + 0.00001575 x_{10} x_{11} - 0.32 \le 0
g_6(X) = 0.214 + 0.00817 x_5 - 0.131 x_1 x_8 - 0.0704 x_1 x_9 + 0.03099 x_2 x_6 - 0.018 x_2 x_7 + 0.0208 x_3 x_8 + 0.121 x_3 x_9 - 0.00364 x_5 x_6 + 0.0007715 x_5 x_{10} - 0.0005354 x_6 x_{10} + 0.00121 x_8 x_{11} + 0.00184 x_9 x_{10} - 0.02 x_2^2 - 0.32 \le 0
g_7(X) = 0.74 - 0.61 x_2 - 0.163 x_3 x_8 + 0.001232 x_3 x_{10} - 0.166 x_7 x_9 + 0.227 x_2^2 - 0.32 \le 0
g_8(X) = 4.72 - 0.5 x_4 - 0.19 x_2 x_3 - 0.0122 x_4 x_{10} + 0.009325 x_6 x_{10} + 0.000191 x_{11}^2 - 4 \le 0
g_9(X) = 10.58 - 0.674 x_1 x_2 - 1.95 x_2 x_8 + 0.02054 x_3 x_{10} - 0.0198 x_4 x_{10} + 0.028 x_6 x_{10} - 9.9 \le 0
g_{10}(X) = 16.45 - 0.489 x_3 x_7 - 0.843 x_5 x_6 + 0.0432 x_9 x_{10} - 0.0556 x_9 x_{11} - 0.000786 x_{11}^2 - 15.7 \le 0
where the range of values of each variable is as follows:
0.5 \le x_1, \ldots, x_7 \le 1.5, \qquad x_8, x_9 \in \{ 0.192, 0.345 \}, \qquad -30 \le x_{10}, x_{11} \le 30
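All ten constraints are weighted sums of products of variable pairs, so evaluating a candidate design is inexpensive. As a minimal sketch, the objective and the first constraint are transcribed below; the remaining nine constraints follow exactly the same pattern:

```python
def ci_weight(x):
    # Door weight Func(X) for x = (x1, ..., x11); note that x6 and
    # x8-x11 appear only in the constraints, not in the objective
    x1, x2, x3, x4, x5, x6, x7 = x[:7]
    return (1.98 + 4.90 * x1 + 6.67 * x2 + 6.98 * x3
            + 4.01 * x4 + 1.78 * x5 + 2.73 * x7)

def ci_g1(x):
    # Abdomen load constraint g_1(X) <= 0; the other nine constraints
    # are analogous weighted sums of pairwise products
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11 = x
    return (1.16 - 0.3717 * x2 * x4 - 0.00931 * x2 * x10
            - 0.484 * x3 * x9 + 0.01343 * x6 * x10) - 1.0
```

At the all-lower-bound design (x1 = ... = x7 = 0.5, x8 = x9 = 0.192, x10 = x11 = -30) the door weight evaluates to 15.515, which explains why the best values in Table 8 cluster just above 19: the constraints prevent the variables from all sitting at their lower bounds.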
The PCGOA was applied to this engineering optimization problem and compared with the other algorithms under the same conditions. The best solutions found by each algorithm indicate that the PCGOA has an advantage over the other algorithms on this problem. The results are shown in Table 8.

6. Conclusions

In this paper, a compact strategy is adopted to reduce the memory consumption of the GOA: instead of storing every individual, the entire population is represented by a probability model. Combining the parallel strategy with the compact strategy allows the algorithm to locate the global optimum more accurately in a variety of practical problems. In addition, a communication strategy among the parallel subpopulations is introduced, which enables the groups to exchange information more effectively and avoid local optima. On the CEC2013 benchmark functions, the PCGOA outperforms the original algorithm, alleviating its drawbacks of high memory consumption and unstable convergence. Finally, the PCGOA is applied to five engineering optimization problems, and the results show that it performs excellently on all of them.
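To make the memory argument concrete, the compact idea can be sketched as follows. This is an illustration in the spirit of the compact GA of Harik et al. and real-valued compact DE, not the exact PCGOA update rules: only a mean and a deviation per dimension are stored (2 × D numbers instead of NP × D), two candidates are compared, and the model is shifted toward the winner:

```python
import random

def sample(mu, sigma):
    # Draw one candidate from the per-dimension Gaussian model,
    # clipped to the normalized search range [-1, 1]
    return [max(-1.0, min(1.0, random.gauss(m, s))) for m, s in zip(mu, sigma)]

def update(mu, sigma, winner, loser, np_virtual=100.0):
    # Shift the probability model toward the winner; np_virtual plays
    # the role of the (virtual) population size
    for i in range(len(mu)):
        old_mu = mu[i]
        mu[i] += (winner[i] - loser[i]) / np_virtual
        var = (sigma[i]**2 + old_mu**2 - mu[i]**2
               + (winner[i]**2 - loser[i]**2) / np_virtual)
        sigma[i] = max(var, 1e-12) ** 0.5

def compact_minimize(f, dim, iters=2000, seed=1):
    random.seed(seed)
    mu, sigma = [0.0] * dim, [1.0] * dim   # only 2 * dim numbers stored
    elite = sample(mu, sigma)
    for _ in range(iters):
        trial = sample(mu, sigma)
        if f(trial) < f(elite):
            winner, loser = trial, elite
            elite = trial                  # elitism: keep the best-so-far
        else:
            winner, loser = elite, trial
        update(mu, sigma, winner, loser)
    return elite
```

Minimizing a simple sphere function with this sketch drives the elite toward the origin while never holding more than one candidate and the 2 × D model parameters in memory, which is the saving quantified in Table 3.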

Author Contributions

Conceptualization, J.-S.P., B.S., S.-C.C., M.Z. and C.-S.S.; software, B.S.; formal analysis, B.S. and S.-C.C.; methodology, J.-S.P., B.S., M.Z. and C.-S.S.; writing—original draft, B.S.; writing—review and editing, J.-S.P., B.S. and S.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The communication strategy with random replacement.
Figure 2. The communication strategy with optimal replacement.
Figure 3. The convergence trend of the unimodal state of the benchmark function in CEC2013. (a) F1. (b) F2. (c) F4. (d) F5.
Figure 4. The convergence trend of the multi-modal state of the benchmark function in CEC2013. (a) F6. (b) F9. (c) F10. (d) F11. (e) F12. (f) F14. (g) F15. (h) F16.
Figure 5. Convergence trend of the composition function of the benchmark function in CEC2013. (a) F21. (b) F23. (c) F24. (d) F27.
Table 1. Simulation results on 30D.
Func_Num        PCGOA   GOA   AO   BOA   PSO   SCA   PMVO   CS
1Mean−1.40 × 10 3 −1.40 × 10 3 4.80 × 10 3 5.29 × 10 4 1.54 × 10 4 1.85 × 10 4 −1.40 × 10 3 −1.40 × 10 3
Std7.28 × 10 2 7.96 × 10 2 1.44 × 10 3 5.59 × 10 3 3.19 × 10 3 2.72 × 10 3 5.08 × 10 1 3.79 × 10 3
2Mean1.88 × 10 7 2.62 × 10 7 1.63 × 10 8 5.19 × 10 8 3.51 × 10 8 2.51 × 10 8 1.83 × 10 7 1.37 × 10 7
Std4.55 × 10 6 1.07 × 10 7 6.79 × 10 7 4.64 × 10 8 1.24 × 10 8 4.51 × 10 7 3.57 × 10 6 4.14 × 10 6
3Mean5.53 × 10 8 3.88 × 10 9 3.00 × 10 12 6.73 × 10 19 1.77 × 10 14 1.88 × 10 11 2.58 × 10 9 −1.00 × 10 10
Std2.05 × 10 9 7.23 × 10 9 1.19 × 10 11 2.47 × 10 20 4.69 × 10 13 4.65 × 10 10 1.05 × 10 9 7.56 × 10 8
4Mean8.44 × 10 3 4.76 × 10 4 5.89 × 10 4 5.54 × 10 4 6.72 × 10 4 6.28 × 10 4 4.23 × 10 4 8.58 × 10 4
Std2.00 × 10 3 7.26 × 10 3 3.66 × 10 3 2.45 × 10 3 1.10 × 10 4 1.41 × 10 4 1.59 × 10 4 9.94 × 10 3
5Mean−9.99 × 10 2 −9.88 × 10 2 1.03 × 10 2 3.32 × 10 4 2.41 × 10 3 3.22 × 10 3 −9.00 × 10 2 −1.00 × 10 3
Std2.03 × 10 1 1.37 × 106.18 × 10 2 7.68 × 10 3 1.67 × 10 3 6.30 × 10 2 4.05 × 103.52 × 10 2
6Mean−8.77 × 10 2 −7.42 × 10 2 8.60 × 101.29 × 10 4 1.32 × 10 3 1.55 × 10 3 −8.22 × 10 2 −8.63 × 10 2
Std2.81 × 103.51 × 102.47 × 10 2 2.60 × 10 3 2.10 × 10 3 4.18 × 10 2 2.13 × 101.75 × 10
7Mean−6.90 × 10 2 −6.86 × 10 2 −4.99 × 10 2 9.36 × 10 4 −6.57 × 10 2 −5.95 × 10 2 −6.81 × 10 2 −6.60 × 10 2
Std4.05 × 104.28 × 104.67 × 10 2 5.72 × 10 5 7.07 × 10 3 1.14 × 10 2 3.59 × 101.84 × 10
8Mean−6.79 × 10 2 −6.79 × 10 2 −6.79 × 10 2 −6.79 × 10 2 −6.79 × 10 2 −6.79 × 10 2 −6.79 × 10 2 −6.79 × 10 2
Std4.68 × 10 2 4.83 × 10 2 6.52 × 10 2 4.29 × 10 2 6.58 × 10 2 5.31 × 10 2 6.70 × 10 2 4.58 × 10 2
9Mean−5.69 × 10 2 −5.65 × 10 2 −5.59 × 10 2 −5.58 × 10 2 −5.63 × 10 2 −5.57 × 10 2 −5.73 × 10 2 −5.68 × 10 2
Std2.83 × 104.72 × 102.80 × 101.50 × 104.46 × 108.34 × 10 1 2.77 × 101.19 × 10
10Mean−4.96 × 10 2 −2.95 × 10 2 4.59 × 10 2 7.77 × 10 3 2.57 × 10 3 2.31 × 10 3 −4.93 × 10 2 −4.98 × 10 2
Std9.91 × 10 1 3.31 × 103.85 × 10 2 1.10 × 10 3 4.61 × 10 2 4.76 × 10 2 2.39 × 102.13 × 10 1
11Mean−1.80 × 10 2 −2.25 × 10 2 2.46 × 104.95 × 10 2 −1.20 × 10 2 9.51 × 10−3.02 × 10 2 −2.93 × 10 2
Std7.14 × 103.68 × 104.89 × 106.31 × 103.91 × 105.17 × 102.77 × 101.88 × 10
12Mean−2.99 × 10−1.65 × 10 2 4.74 × 105.27 × 10 2 1.31 × 10 2 2.10 × 10 2 −2.20 × 10 2 −1.16 × 10 2
Std1.16 × 10 2 4.40 × 107.30 × 101.01 × 10 2 1.01 × 10 2 4.31 × 104.24 × 102.67 × 10
13Mean4.09 × 101.48 × 103.64 × 10 2 6.17 × 10 2 3.43 × 10 2 2.44 × 10 2 2.55 × 101.55 × 10
Std5.15 × 106.85 × 107.71 × 106.01 × 107.76 × 103.17 × 106.13 × 103.24 × 10
14Mean4.07 × 10 3 3.94 × 10 3 5.44 × 10 3 8.29 × 10 3 4.45 × 10 3 7.97 × 10 3 2.78 × 10 3 3.41 × 10 3
Std5.83 × 10 2 5.53 × 10 2 7.96 × 10 2 3.12 × 10 2 6.54 × 10 2 5.16 × 10 2 5.01 × 10 2 2.32 × 10 2
15Mean4.52 × 10 3 5.77 × 10 3 5.49 × 10 3 8.05 × 10 3 4.65 × 10 3 8.25 × 10 3 5.52 × 10 3 5.10 × 10 3
Std9.84 × 10 2 1.03 × 10 3 7.32 × 10 2 3.44 × 10 2 5.09 × 10 2 4.09 × 10 2 8.36 × 10 2 2.32 × 10 2
16Mean2.01 × 10 2 2.03 × 10 2 2.03 × 10 2 2.04 × 10 2 2.03 × 10 2 2.04 × 10 2 2.02 × 10 2 2.03 × 10 2
Std3.86 × 10 1 4.08 × 10 1 4.67 × 10 1 2.96 × 10 1 5.61 × 10 1 4.66 × 10 1 4.67 × 10 1 3.69 × 10 1
17Mean5.01 × 10 2 4.76 × 10 2 1.00 × 10 3 1.22 × 10 3 6.79 × 10 2 9.61 × 10 2 5.14 × 10 2 4.92 × 10 2
Std7.87 × 102.67 × 108.93 × 104.38 × 106.80 × 106.87 × 103.84 × 102.35 × 10
18Mean7.26 × 10 2 6.69 × 10 2 9.85 × 10 2 1.31 × 10 3 8.75 × 10 2 1.10 × 10 3 6.50 × 10 2 6.36 × 10 2
Std4.86 × 103.98 × 108.54 × 105.94 × 101.10 × 10 2 7.91 × 104.16 × 101.79 × 10
19Mean5.14 × 10 2 5.56 × 10 2 8.45 × 10 2 4.61 × 10 5 5.53 × 10 4 2.23 × 10 4 5.15 × 10 2 5.13 × 10 2
Std5.16 × 107.21 × 104.45 × 10 2 1.17 × 10 5 1.10 × 10 4 1.61 × 10 4 3.67 × 102.67 × 10
20Mean6.15 × 10 2 6.13 × 10 2 6.15 × 10 2 6.15 × 10 2 6.15 × 10 2 6.15 × 10 2 6.15 × 10 2 6.14 × 10 2
Std6.75 × 10 1 1.15 × 101.33 × 10 1 2.23 × 10 9 1.33 × 10 1 3.04 × 10 1 6.43 × 10 1 5.62 × 10 1
21Mean1.01 × 10 3 1.02 × 10 3 2.49 × 10 3 3.21 × 10 3 2.80 × 10 3 2.88 × 10 3 1.02 × 10 3 9.62 × 10 2
Std6.96 × 107.70 × 104.68 × 10 2 5.29 × 101.64 × 10 2 1.11 × 10 2 8.94 × 103.82 × 10
22Mean6.06 × 10 3 4.53 × 10 3 7.24 × 10 3 9.61 × 10 3 4.67 × 10 3 8.48 × 10 3 6.51 × 10 3 5.19 × 10 3
Std1.07 × 10 3 7.75 × 10 2 1.02 × 10 3 3.29 × 10 2 8.62 × 10 2 3.10 × 10 2 1.35 × 10 3 3.75 × 10 2
23Mean7.98 × 10 3 7.14 × 10 3 7.47 × 10 3 9.71 × 10 3 6.32 × 10 3 9.11 × 10 3 6.68 × 10 3 6.83 × 10 3
Std9.87 × 10 2 6.57 × 10 2 9.50 × 10 2 3.56 × 10 2 1.26 × 10 3 4.26 × 10 2 6.99 × 10 2 3.30 × 10 2
24Mean1.28 × 10 3 1.29 × 10 3 1.32 × 10 3 1.45 × 10 3 1.34 × 10 3 1.33 × 10 3 1.27 × 10 3 1.30 × 10 3
Std1.41 × 101.12 × 109.04 × 102.72 × 103.83 × 104.26 × 108.42 × 104.92 × 10
25Mean1.40 × 10 3 1.39 × 10 3 1.43 × 10 3 1.44 × 10 3 1.49 × 10 3 1.43 × 10 3 1.37 × 10 3 1.41 × 10 3
Std1.61 × 106.66 × 109.56 × 102.79 × 101.63 × 103.28 × 101.49 × 104.09 × 10
26Mean1.40 × 10 3 1.40 × 10 3 1.58 × 10 3 1.45 × 10 3 1.58 × 10 3 1.62 × 10 3 1.40 × 10 3 1.40 × 10 3
Std2.89 × 10 1 5.73 × 107.69 × 107.78 × 109.23 × 107.52 × 107.45 × 106.75 × 10 1
27Mean2.41 × 10 3 2.52 × 10 3 2.70 × 10 3 3.06 × 10 3 2.46 × 10 3 2.77 × 10 3 2.12 × 10 3 2.43 × 10 3
Std9.90 × 108.95 × 106.41 × 106.26 × 101.27 × 10 2 4.34 × 101.24 × 10 2 1.83 × 10 2
28Mean1.72 × 10 3 1.79 × 10 3 5.17 × 10 3 6.12 × 10 3 4.68 × 10 3 4.79 × 10 3 1.74 × 10 3 1.77 × 10 3
Std1.42 × 10 3 7.31 × 10 2 5.55 × 10 2 2.75 × 10 2 3.68 × 10 2 2.30 × 10 2 5.45 × 10 2 3.63 × 10
win/=/lose      17/2/9   26/0/2   27/0/1   26/0/2   28/0/0   17/0/11   25/2/1
Table 2. p-Values of the Wilcoxon rank-sum test for CEC2013 functions.
Function   GOA            AO             BOA            PSO            SCA            PMVO           CS
F1         1.40 × 10^-2   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4
F2         3.30 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   0.1212         1.83 × 10^-4
F3         1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   0.6776         6.39 × 10^-5
F4         2.46 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4
F5         1.31 × 10^-3   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4
F6         2.20 × 10^-3   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   0.6776         1.71 × 10^-3
F7         0.0757         4.40 × 10^-4   1.83 × 10^-4   1.40 × 10^-2   1.83 × 10^-4   1.73 × 10^-2   0.5205
F8         2.20 × 10^-3   1.01 × 10^-3   1.01 × 10^-3   0.1212         2.83 × 10^-3   2.46 × 10^-4   3.30 × 10^-4
F9         0.6776         1.01 × 10^-3   1.83 × 10^-4   4.52 × 10^-2   1.83 × 10^-4   2.46 × 10^-4   0.1212
F10        1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   4.40 × 10^-4   1.83 × 10^-4
F11        1.83 × 10^-4   1.73 × 10^-2   1.83 × 10^-4   3.12 × 10^-2   2.46 × 10^-4   1.83 × 10^-4   7.69 × 10^-4
F12        1.31 × 10^-3   2.20 × 10^-3   1.83 × 10^-4   2.20 × 10^-3   2.20 × 10^-3   2.20 × 10^-3   0.4274
F13        3.30 × 10^-4   2.46 × 10^-4   1.83 × 10^-4   4.40 × 10^-4   5.83 × 10^-4   3.61 × 10^-3   0.1041
F14        2.20 × 10^-3   3.61 × 10^-3   1.83 × 10^-4   0.3447         1.83 × 10^-4   0.4727         0.7337
F15        2.83 × 10^-3   0.1405         1.83 × 10^-4   0.3847         1.83 × 10^-4   5.80 × 10^-3   2.83 × 10^-3
F16        2.83 × 10^-3   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   0.1405         1.83 × 10^-4
F17        1.83 × 10^-4   4.40 × 10^-4   1.83 × 10^-4   0.1620         1.83 × 10^-4   1.40 × 10^-2   0.0640
F18        1.31 × 10^-3   1.83 × 10^-4   1.83 × 10^-4   3.30 × 10^-4   1.83 × 10^-4   0.2413         0.1212
F19        0.5205         1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   4.40 × 10^-4   0.0140
F20        0.6764         1.49 × 10^-4   1.49 × 10^-4   1.73 × 10^-4   7.28 × 10^-3   7.71 × 10^-3   2.83 × 10^-3
F21        1.71 × 10^-3   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   0.1620         1.83 × 10^-4
F22        1.83 × 10^-4   0.3075         1.83 × 10^-4   7.57 × 10^-2   2.46 × 10^-4   0.2730         0.4727
F23        0.6776         1.71 × 10^-3   1.83 × 10^-4   0.6776         1.83 × 10^-4   3.12 × 10^-2   0.1212
F24        7.28 × 10^-3   3.12 × 10^-2   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   0.9097
F25        7.69 × 10^-4   7.69 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   3.30 × 10^-4   1.40 × 10^-2
F26        4.40 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   1.83 × 10^-4   7.69 × 10^-4   1.83 × 10^-4
F27        1.40 × 10^-2   1.31 × 10^-3   1.83 × 10^-4   4.40 × 10^-4   1.83 × 10^-4   3.30 × 10^-4   2.11 × 10^-2
F28        0.0890         9.11 × 10^-3   1.83 × 10^-4   9.11 × 10^-3   0.1620         0.2365         0.6758
Table 3. Computer storage of populations at each iteration.
Algorithm   Name    Size              Bytes                 Class
PCGOA       Group   2 × groups × D    2 × groups × D × 8    double
GOA         Np      NP × D            NP × D × 8            double
PMVO        Group   NP × D            NP × D × 8            double
CS          Np      NP × D            NP × D × 8            double
Table 4. Comparison results of each algorithm for the tension spring design problem.
Algorithm   x_1     x_2     x_3      Best
PCGOA       0.050   0.282   2        0.00282
GOA         0.050   0.282   2        0.00282
PSO         0.081   0.784   2.0809   0.02120
BOA         0.050   0.282   2        0.00282
AO          0.050   0.250   2.8712   0.00305
SCA         0.050   0.282   2        0.00282
PMVO        0.050   0.282   2        0.00282
Table 5. Comparison results of each algorithm for the pressure vessel design problem.
Algorithm   x_1      x_2      x_3      x_4       Best
PCGOA       0.193    0.096    10.000   64.13     108.8280
GOA         0.192    0.095    10.000   64.12     108.8980
PSO         3.3994   5.5461   9.9587   6.364     2,851.7246
BOA         0.192    0.162    10.000   65.967    126.6560
AO          0.194    0.095    10.090   64.537    113.9758
SCA         0.195    0.107    10.000   65.569    114.0628
PMVO        0.192    0.100    10.000   185.174   269.7348
Table 6. Comparison results of each algorithm for the welded beam design problem.
Algorithm   x_1      x_2      x_3      x_4      Best
PCGOA       0.2056   3.4705   9.0455   0.2057   1.7258
GOA         0.2050   3.4305   9.1833   0.2050   1.7380
PSO         0.4193   4.8651   6.6427   0.4279   3.5247
BOA         0.1894   6.7279   7.7389   0.3523   2.9854
AO          0.1656   5.5263   9.1504   0.2052   1.9317
SCA         0.2027   3.8820   8.9529   0.2160   1.8395
PMVO        0.1921   3.7894   9.0467   0.2057   1.7470
Table 7. Comparison results of each algorithm for the speed reducer design problem.
Algorithm   x_1      x_2      x_3       x_4      x_5      x_6      x_7      Best
PCGOA       3.6      0.8      28        7.3      7.8      3.9      5.2847   201,613.2
GOA         3.6      0.8      28        7.3      7.8      3.9      5.2847   201,613.2
PSO         3.5524   0.7088   27.7911   7.4979   7.8804   3.7723   5.1926   585,169.9
BOA         3.6      0.8      28        7.3      8.0241   3.9      5.5000   201,760.3
AO          3.6      0.8      28        7.3      8.2965   3.9      5.3078   201,638.6
SCA         3.6      0.8      28        7.3      7.8      3.9      5.2936   201,618.5
PMVO        3.6      0.8      28        7.3      7.9512   3.9      5.2855   201,616.7
Table 8. Comparison results of each algorithm for the Car side impact design problem.
Variable   PCGOA    GOA      AO        BOA       PSO       SCA       PMVO
x_1        0.500    0.500    0.514     0.500     0.638     0.500     0.500
x_2        1.001    1.013    0.997     0.926     1.184     0.928     1.056
x_3        0.500    0.500    0.526     0.500     0.618     0.500     0.500
x_4        0.500    0.501    0.532     0.500     0.507     0.645     0.500
x_5        0.500    0.500    0.659     0.681     0.625     0.500     0.507
x_6        1.184    1.436    0.872     0.587     0.987     0.509     0.851
x_7        0.500    0.500    0.500     0.560     0.969     0.500     0.504
x_8        0.192    0.192    0.192     0.192     0.192     0.192     0.192
x_9        0.192    0.192    0.192     0.192     0.192     0.192     0.192
x_10       −8.419   −5.597   −12.897   −26.948   −16.715   −30.000   4.384
x_11       −0.614   −3.464   −14.472   −12.508   −11.029   −4.106    −1.477
Best       19.074   19.123   19.755    19.218    23.830    19.266    19.490

Pan, J.-S.; Sun, B.; Chu, S.-C.; Zhu, M.; Shieh, C.-S. A Parallel Compact Gannet Optimization Algorithm for Solving Engineering Optimization Problems. Mathematics 2023, 11, 439. https://0-doi-org.brum.beds.ac.uk/10.3390/math11020439

