Article

An Improved Equilibrium Optimizer with Application in Unmanned Aerial Vehicle Path Planning

Aeronautics Engineering College, Air Force Engineering University, Xi’an 710038, China
* Author to whom correspondence should be addressed.
Submission received: 13 February 2021 / Revised: 24 February 2021 / Accepted: 2 March 2021 / Published: 5 March 2021
(This article belongs to the Section Intelligent Sensors)

Abstract

The unmanned aerial vehicle (UAV) path planning problem is a complex multi-constraint optimization problem that requires a reasonable mathematical model and an efficient path planning algorithm. In this paper, a fitness function comprising fuel consumption cost, altitude cost, and threat cost is established, together with four constraints: maximum flight distance, minimum flight altitude, maximum turn angle, and maximum climb angle. The constrained optimization problem is transformed into an unconstrained one by introducing a penalty function. To solve the model, a multiple population hybrid equilibrium optimizer (MHEO) is proposed. Firstly, the population is divided into three subpopulations based on fitness, and different strategies are executed separately. Secondly, a Gaussian distribution estimation strategy is introduced to enhance the performance of MHEO by using the dominant information of the population to guide its evolution, and the equilibrium pool is adjusted to enhance population diversity. Furthermore, a Lévy flight strategy and an inferior solution shift strategy are used to help the algorithm escape stagnation. The CEC2017 test suite was used to evaluate the performance of MHEO, and the results show that MHEO achieves faster convergence and better accuracy than the comparison algorithms. Path planning simulation experiments show that MHEO can steadily and efficiently plan flight paths that satisfy the constraints, demonstrating the superiority of MHEO while verifying the feasibility of the path planning model.

1. Introduction

With the continuous development of UAVs, their use in civilian, commercial, and military fields is increasing. Path planning and routing problems are common in the domain of commercial UAVs such as logistics drones [1,2,3]. In the defense field, UAVs play an increasingly prominent role in intelligence reconnaissance, combat strikes, and other operations. UAV path planning is an indispensable part of UAV mission planning. The term refers to planning one or more feasible, optimal, or near-optimal flight paths from the starting position to the target position that satisfy a series of constraints, taking into account the UAV's own performance, environmental factors, and mission requirements [4]. The UAV path planning problem is a typical non-deterministic polynomial problem [5]. As the planning space and the number of constraints grow, the computational burden increases rapidly, and the demands on algorithm performance become ever higher. Currently, there are two types of UAV path planning methods. The first type comprises traditional optimization algorithms, including the A* algorithm [6,7], the artificial potential field method [8,9], and rapidly exploring random trees [10,11]. Traditional optimization methods rely on gradient information and are easily trapped in local optima as the dimensionality of the problem increases [12]. The other type is intelligent optimization algorithms, which have received much attention from researchers because of their simple structure, high optimization accuracy, fast convergence, and robustness.
A series of intelligent optimization algorithms have been used to solve the UAV path planning problem thanks to continuous research in this area. A hybrid genetic algorithm combined with a ray casting algorithm is proposed by Gustavo et al. [13] for solving the obstacle-avoidance path planning problem. Qu et al. [14] combine a simplified grey wolf optimizer with an improved symbiotic organisms search and apply it to the UAV path planning problem. Yu et al. [15] present a differential evolution algorithm with adaptive selection variance constraints and solve the UAV path planning problem in disaster environments. Zhang and Duan [16] propose a path planning model based on the timestamp technique and solve it using an improved social-class pigeon-inspired optimization. Da Silva et al. [17] combine greedy heuristics and genetic algorithms and apply them to the UAV path planning problem during emergency landings. Zhang et al. [18] integrate phase angle encoding and a mutation adaptation mechanism into the basic fruit fly optimization and apply it to plan flight paths in complex 3D environments. Wu et al. [19] propose a solar-powered UAV path planning framework for complex urban environments and solve it using an improved whale optimization algorithm that includes an adaptive switching strategy and a coordinated decision mechanism. Vincent et al. [20] propose a genetic algorithm implemented in parallel on a graphics processing unit and apply it to UAV path planning. Puneet et al. [21] compare several existing metaheuristic optimization algorithms and propose the use of a multi-verse optimizer to obtain UAV paths with high QoS assurance.
Recently, an intelligent optimization algorithm inspired by the volume-mass balance model, the equilibrium optimizer (EO), has been proposed. Faramarzi et al. [22] show that EO outperforms GA [23], PSO [24], GWO [25], GSA [26], SSA [27], and the covariance matrix adaptation evolution strategy (CMA-ES) [28]. Owing to its strong performance, EO has been applied to a variety of practical engineering problems. Abdel-Basset et al. [29] use linear reduction diversity techniques and local minima elimination methods to improve EO and to estimate solar PV parameters. Agnihotri et al. [30] employ EO to solve economic dispatch problems. An improved EO based on chaotic search is proposed in Ref. [31] and used for nonlinear programming and petrochemical applications.
Although EO has competitive advantages over other algorithms, it still suffers from slow convergence, low accuracy, and a tendency to fall into local optima in some cases. According to the No Free Lunch theorem [32], no optimization algorithm can solve all types of optimization problems. Therefore, EO is improved in this paper so that it can be used to solve the UAV path planning problem.
To improve the performance of the basic EO, this paper employs three strategies. Firstly, a multiple population strategy is adopted to balance the exploitation and exploration abilities of the algorithm. Secondly, a Gaussian distribution estimation strategy that uses dominant population information to guide the evolution of individuals is introduced to enhance performance. In addition, a Lévy flight strategy and an inferior solution shift strategy are used to perturb particles and help the algorithm escape stagnation. The main innovations of this paper are summarized as follows.
(1) A multiple population hybrid equilibrium optimizer (MHEO) is proposed.
(2) A path planning model with three costs and four constraints is developed.
(3) The CEC2017 test set was used to evaluate the MHEO performance. Compared with other algorithms, MHEO has better performance.
(4) MHEO has been used to solve the path planning problem and obtains more satisfactory results.
The remainder of this paper is organized as follows. The basic EO is presented in detail in Section 2. The improvement strategies and the proposed MHEO are given in Section 3. The UAV path planning model is described in detail in Section 4. In Section 5 and Section 6, the CEC2017 function suite and the path planning problem are solved and analyzed, respectively. Finally, in Section 7, we summarize the work in this paper and point out future research directions.

2. The Basic EO

2.1. Initialization

The equilibrium optimizer (EO) is a physics-based metaheuristic algorithm. Similar to other metaheuristic algorithms, EO generates a randomly distributed initial population in the search space with the following equation.
$$X_i = lb + r_1 (ub - lb) \tag{1}$$
where X_i is the position of the ith particle, r_1 is a random vector uniformly distributed in [0, 1], and lb and ub are the lower and upper bounds of the search space, respectively.
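As a minimal illustration (not the authors' MATLAB implementation), Equation (1) can be vectorized as follows; the function name, the random generator handling, and the array layout are assumptions:

```python
import numpy as np

def initialize_population(pop_size, dim, lb, ub, rng=None):
    """Uniform random initialization of the particle positions, Eq. (1)."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random((pop_size, dim))      # uniform random matrix in [0, 1]
    return lb + r1 * (ub - lb)            # scale each particle into [lb, ub]
```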

2.2. Equilibrium Pool and Candidates

After initialization, the four particles with the best fitness are selected to form the equilibrium pool, and the equilibrium pool particles are updated after each iteration.
$$X_{eq} = \{ X_{eq1}, X_{eq2}, X_{eq3}, X_{eq4}, X_{eq,ave} \} \tag{2}$$
$$X_{eq,ave} = (X_{eq1} + X_{eq2} + X_{eq3} + X_{eq4})/4 \tag{3}$$
where X_eq indicates the equilibrium pool, X_eq1, X_eq2, X_eq3, and X_eq4 are the four best individual particles found so far, and X_eq,ave is the average of these four particles.

2.3. Exponential Term

EO balances the exploitation and exploration of the algorithm through the variation of the exponential term F, which is described as follows:
$$F = a_1 \cdot \mathrm{sign}(r - 0.5) \cdot \left(e^{-\lambda t} - 1\right) \tag{4}$$
$$t = \left(1 - \frac{iter}{iter_{max}}\right)^{\left(a_2 \cdot \frac{iter}{iter_{max}}\right)} \tag{5}$$
where a_1 is a constant with a value of 2, r and λ are random vectors uniformly distributed in [0, 1], t is a nonlinearly varying coefficient, iter denotes the current iteration, iter_max denotes the maximum number of iterations, and a_2 is a constant with a value of 1.

2.4. Generation Rate

The generation rate G is an important factor affecting the performance of EO and represents the exploitation ability of EO in the optimization process. The mathematical model is described as follows:
$$G = GCP \cdot (X_{eq} - \lambda X_i) \tag{6}$$
$$GCP = \begin{cases} 0.5\, r_2, & r_3 \ge GP \\ 0, & r_3 < GP \end{cases} \tag{7}$$
where GCP is a control vector obtained from Equation (7), X_eq is a randomly selected particle from the equilibrium pool, X_i is the current particle, r_2 and r_3 are random numbers uniformly distributed in [0, 1], and GP is a constant with a value of 0.5.
In summary, the equation used by EO to generate candidate particles can be described as follows:
$$X_i = X_{eq} + (X_i - X_{eq}) \cdot F + \frac{G}{\lambda V}(1 - F) \tag{8}$$
where V is taken as a unit volume and the other variables are as defined above. EO balances exploitation and exploration by adjusting the contributions of the second and third terms through F, where the second term aids exploration and the third term aids exploitation.
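For illustration, the complete candidate-generation step of the basic EO (Equations (2)-(8)) can be sketched as follows, using the same parameter values (a_1 = 2, a_2 = 1, GP = 0.5); the helper name and the per-particle interface are assumptions rather than the authors' code:

```python
import numpy as np

def eo_candidate(x, eq_pool, it, it_max, a1=2.0, a2=1.0, gp=0.5, v=1.0, rng=None):
    """One basic-EO position update for a single particle x, following Eqs. (2)-(8)."""
    rng = np.random.default_rng() if rng is None else rng
    dim = x.size
    x_eq = eq_pool[rng.integers(len(eq_pool))]            # random member of the equilibrium pool
    lam = rng.random(dim)                                 # lambda ~ U(0, 1)
    r = rng.random(dim)
    t = (1 - it / it_max) ** (a2 * it / it_max)           # Eq. (5)
    F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)    # Eq. (4)
    r2, r3 = rng.random(), rng.random()
    gcp = 0.5 * r2 if r3 >= gp else 0.0                   # Eq. (7)
    G = gcp * (x_eq - lam * x)                            # Eq. (6)
    return x_eq + (x - x_eq) * F + (G / (lam * v)) * (1 - F)   # Eq. (8)
```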

3. The Proposed MHEO

Compared with other metaheuristic optimization algorithms, EO solves optimization problems effectively. However, EO relies on the particles in the equilibrium pool to generate candidate particles, and it still suffers from reduced population diversity and a tendency to fall into local optima. To overcome these shortcomings of the basic EO, this paper proposes a multiple population hybrid equilibrium optimizer (MHEO). The population is divided into three subpopulations and different position update formulas are adopted to enhance population diversity and improve the performance of the algorithm. A stagnation perturbation strategy is introduced to help the algorithm jump out of local optima. The mathematical model of MHEO is described in detail below.

3.1. Multipopulation Strategy

In this paper, the whole population is divided into three subpopulations according to fitness: the exploitation population, the equilibrium population, and the exploration population. During optimization, the three populations are updated separately according to different strategies and then evaluated together to form the new subpopulations. Specifically, the exploitation population uses information from the dominant population to exploit and provide better solutions. The equilibrium population balances exploitation and exploration behavior. The exploration population explores more of the search space and maintains population diversity. For an optimization algorithm, extensive exploration is mainly performed in the early stage to avoid falling into local optima, while precise exploitation is mainly performed in the later stage to ensure convergence efficiency. Thus, the sizes of the three subpopulations change dynamically, and the mathematical model is described as follows:
$$population\ size = \begin{cases} 0.3\,NP, & exploitation\ population \\ 0.4\,NP, & equilibrium\ population \\ \left[\,0.3 - 0.3 \cdot \left(\frac{iter}{iter_{max}}\right)\right] NP, & exploration\ population \end{cases} \tag{9}$$
where NP is the population size. To avoid premature convergence, the exploitation population should not be too large and is set to 0.3 NP. Meanwhile, the exploration population is gradually reduced as the optimization proceeds to ensure the convergence efficiency of the algorithm in the later stage.
Based on Equation (9), the individuals are ranked by fitness and assigned to the three subpopulations as follows:
$$P_i = \frac{NP - rank(f(X_i)) + 1}{NP} \tag{10}$$
$$X_i \in \begin{cases} exploitation\ population, & \text{if } P_i > 0.7 \\ equilibrium\ population, & \text{if } 0.3 - 0.3 \cdot \left(\frac{iter}{iter_{max}}\right) < P_i \le 0.7 \\ exploration\ population, & \text{if } P_i \le 0.3 - 0.3 \cdot \left(\frac{iter}{iter_{max}}\right) \end{cases} \tag{11}$$
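A possible implementation of the fitness-based partition of Equations (10) and (11) is sketched below; the label strings and the minimization convention are assumptions:

```python
import numpy as np

def split_subpopulations(fitness, it, it_max):
    """Assign every particle to a subpopulation via Eqs. (10)-(11) (minimization assumed)."""
    n = len(fitness)
    order = np.argsort(fitness)                    # best (smallest fitness) first
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(1, n + 1)              # rank 1 = best individual
    p = (n - rank + 1) / n                         # Eq. (10): the best particle gets p = 1
    low = 0.3 - 0.3 * it / it_max                  # shrinking exploration share
    return np.where(p > 0.7, "exploitation",
           np.where(p > low, "equilibrium", "exploration"))
```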

3.2. Gaussian Distribution Estimation Strategy

The basic EO mainly uses the individuals in the equilibrium pool for updating. However, when the individuals in the equilibrium pool fall into a local optimum, the quality of the whole population is affected. To fully utilize the information of the dominant population, a Gaussian distribution estimation strategy is introduced to enhance the performance of the algorithm, since it can use a probability model to represent the relationships between individuals. The strategy computes a probability distribution model from the current dominant population and generates new candidates by sampling from this model. In this paper, the distribution model is estimated using a weighted maximum likelihood estimation method, and the mathematical model of this strategy is described as follows:
$$X_{mean} = \sum_{i=1}^{NP/2} \omega_i X_i \tag{12}$$
$$\omega_i = \frac{\ln(NP/2 + 0.5) - \ln(i)}{\sum_{i=1}^{NP/2} \left[\ln(NP/2 + 0.5) - \ln(i)\right]} \tag{13}$$
$$Cov = \frac{1}{NP/2} \sum_{i=1}^{NP/2} (X_i - X_{mean})(X_i - X_{mean})^{T} \tag{14}$$
where X_mean is the weighted mean of the dominant population, ω_i denotes the weighting coefficient of the ith individual of the dominant population sorted in descending order of fitness, and Cov is the weighted covariance matrix of the dominant population.
For the exploitation population, the dominant population information is used to modify the direction of particle evolution so as to enhance the optimization ability of the algorithm. The exploitation population update equation is as follows:
$$X_i = Gaussian(X_{mean}, Cov) + randn \times (X_i - X_{mean}) \tag{15}$$
For the equilibrium population, X_eq,ave is determined solely by the four optimal individuals, which is not conducive to exploring other regions. X_mean, however, aggregates the information of the dominant population and better represents the evolutionary direction of the population; it is therefore used instead. The modified equilibrium pool is described as follows:
$$X_{eq} = \{ X_{eq1}, X_{eq2}, X_{eq3}, X_{eq4}, X_{mean} \} \tag{16}$$
For the exploration population, whose members are far from the optimal solution, the sampling mean is shifted away from each inferior solution so as to drive the sampled points toward the optimal solution; since the sampling point differs for each inferior solution, this also enhances the exploration ability of the algorithm. The exploration population update equation is described as follows:
$$X_i = Gaussian(X_{mean}, Cov) + randn \times (X_{mean} - X_i) \tag{17}$$
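The components above can be sketched as follows: the weighted mean, weights, and covariance follow Equations (12)-(14), and the two sampling moves follow Equations (15) and (17). The closure-based interface and the use of NumPy's multivariate normal sampler are illustrative choices, not the authors' implementation:

```python
import numpy as np

def gaussian_estimation(pop, fitness, rng=None):
    """Weighted Gaussian model of the dominant half (Eqs. (12)-(14)) and the
    sampling moves of Eqs. (15) and (17)."""
    rng = np.random.default_rng() if rng is None else rng
    half = pop.shape[0] // 2
    dominant = pop[np.argsort(fitness)[:half]]                # best half, best first
    w = np.log(half + 0.5) - np.log(np.arange(1, half + 1))   # Eq. (13), unnormalized
    w /= w.sum()
    x_mean = w @ dominant                                     # Eq. (12): weighted mean
    diff = dominant - x_mean
    cov = diff.T @ diff / half                                # Eq. (14): covariance of the dominant half

    def exploit(x):   # Eq. (15): Gaussian sample plus a step along (x - x_mean)
        return rng.multivariate_normal(x_mean, cov) + rng.standard_normal() * (x - x_mean)

    def explore(x):   # Eq. (17): Gaussian sample plus a step along (x_mean - x)
        return rng.multivariate_normal(x_mean, cov) + rng.standard_normal() * (x_mean - x)

    return x_mean, cov, exploit, explore
```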

3.3. Stagnation Perturbation Strategy

When an algorithm stalls, two types of methods are usually used to help it escape stagnation. One is the restart strategy, which randomly re-initializes particles in the search space. However, this method does not use information from the current population and is generally too random to be effective. Therefore, this paper uses the other type of method to help the algorithm escape stagnation: the particle perturbation strategy.

3.3.1. Lévy Flight Strategy

Lévy flight is an important non-Gaussian random walk whose step lengths obey a heavy-tailed probability distribution [33]. Because its variance grows rapidly without bound, the most important feature of this flight pattern is its ability to maximize the exploration of space in an uncertain environment. Existing algorithms that apply the Lévy flight strategy generate random search steps centered on arbitrary individuals, which maximizes coverage of the search space. However, when the step size is small, the probability of obtaining better offspring around inferior solutions is low, which tends to waste computation and is not conducive to convergence. In MHEO, the Lévy walk therefore performs a local search centered on the current optimal solution to balance the exploration and exploitation performance of this strategy. The specific mathematical model is described as follows:
$$Step = RL \cdot (X_{eq} - RL \cdot X_i) \tag{18}$$
$$X_i = X_i + rand \cdot Step / 2 \tag{19}$$
where RL is a random vector that obeys the Lévy distribution.
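The paper does not state how the Lévy vector RL is generated; the sketch below uses Mantegna's algorithm with β = 1.5 as a common assumption and then applies Equations (18) and (19) around the current best solution:

```python
import numpy as np
from math import gamma, pi, sin

def levy_vector(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed random vector RL via Mantegna's algorithm (an assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_perturb(x, x_best, rng=None):
    """Local Levy walk centered on the current best solution, Eqs. (18)-(19)."""
    rng = np.random.default_rng() if rng is None else rng
    rl = levy_vector(x.size, rng=rng)
    step = rl * (x_best - rl * x)               # Eq. (18)
    return x + rng.random(x.size) * step / 2    # Eq. (19)
```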

3.3.2. Inferior Solution Shift Strategy

When the algorithm stalls, shifting the mean value is considered in order to expand the search range and enhance population diversity. Meanwhile, this effectively utilizes the information carried by inferior solutions rather than discarding them. The specific mathematical model is described as follows:
$$X_{mean,shift} = (X_{mean} + X_{eq} + X_i)/3 \tag{20}$$
$$X_i = Gaussian(X_{mean,shift}, Cov) \tag{21}$$
$$Cov = Cov \cdot (1 - iter/iter_{max}) \tag{22}$$
In the optimization process, the average fitness of the dominant population is used to determine whether the algorithm has stagnated. If the average fitness of the dominant population does not change over two consecutive iterations, the algorithm is considered to have stagnated and the two particle perturbation strategies above are employed. The mathematical model is described as follows:
$$F_{mean} = \frac{1}{0.5\,NP}\sum_{i=1}^{0.5\,NP} f(X_i) \tag{23}$$
$$flag = \begin{cases} 1, & \text{if } F_{mean,old} > F_{mean,new} \\ 0, & \text{if } F_{mean,old} \le F_{mean,new} \end{cases} \tag{24}$$
where F_mean is the mean fitness of the dominant population, F_mean,old is its value in the previous generation, and F_mean,new is its value in the current generation; flag = 0 therefore indicates stagnation.
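A compact sketch of the stagnation test of Equations (23) and (24) is given below; the interpretation that flag = 0 triggers the perturbation strategies follows the description above, and minimization is assumed:

```python
import numpy as np

def stagnation_flag(fitness_old, fitness_new):
    """Stagnation check of Eqs. (23)-(24): flag = 0 (stagnation, trigger the perturbation
    strategies) when the mean fitness of the dominant half stops improving between two
    consecutive generations (minimization assumed)."""
    half = len(fitness_new) // 2
    f_old = np.mean(np.sort(fitness_old)[:half])   # Eq. (23), previous generation
    f_new = np.mean(np.sort(fitness_new)[:half])   # Eq. (23), current generation
    return 1 if f_old > f_new else 0
```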
After each iteration is finished, new populations are generated using the following selection mechanism.
$$X_i(iter+1) = \begin{cases} X_i(iter+1), & \text{if } f(X_i(iter+1)) < f(X_i(iter)) \\ X_i(iter), & \text{if } f(X_i(iter+1)) \ge f(X_i(iter)) \end{cases} \tag{25}$$
The pseudo-code of the proposed multiple population hybrid equilibrium optimizer (MHEO) is described below (Algorithm 1).
Algorithm 1. The procedure of MHEO
1. Set the maximum iterations iter_max;
2. Set the population size NP;
3. Assign the free parameters a_1 = 2, a_2 = 1, GP = 0.5, iter = 1;
4. Initialize the positions using Equation (1);
5. While iter < iter_max
6. Estimate the weighted mean X_mean and covariance matrix Cov using Equations (12) and (14);
7. Evaluate the population and form the equilibrium pool using Equation (16);
8. Calculate P_i using Equation (10);
9. Generate the subpopulations using Equation (11);
10. Update t using Equation (5);
11. For i = 1:NP
12. If flag = 0
13. Update X_i by Equations (19) and (21);
14. Else
15. If P_i > 0.7
16. Update X_i by Equation (15);
17. Elseif P_i > [0.3 − 0.3 × (iter/iter_max)] and P_i ≤ 0.7
18. Update X_i by Equation (8);
19. Else
20. Update X_i by Equation (17);
21. End if;
22. End if;
23. Generate the new population by Equation (25);
24. End for
25. iter = iter + 1;
26. End while;
27. Output the best individual X_eq1 and the best fitness f(X_eq1).

4. UAV Path Planning Model

4.1. Battlefield Environment Construction

The battlefield environment construction problem mainly involves establishing the 3D terrain and the various types of threats. In this paper, a digital elevation map (DEM) is used, and the 3D mountainous terrain is described as follows:
$$\Phi = \{(x, y, z) \mid 0 \le x \le x_{max},\ 0 \le y \le y_{max},\ 0 \le z \le z_{max}\} \tag{26}$$
where x and y are the horizontal coordinates and z is the height of any point in the battlefield environment, and x_max, y_max, and z_max are the corresponding boundary values of the battlefield environment.
Since this paper mainly considers static path planning, the deployment positions of enemy air defense systems and the corresponding weapon performance are obtained by various detection methods before the mission is carried out. In addition, the UAV mainly relies on low-altitude assault and defense penetration to carry out strike missions; when encountering enemy radar or anti-aircraft fire, it usually performs a near-ground turn for evasion rather than climbing to escape.
Based on the above considerations, in order to ensure the survival rate of the UAV flying to the strike target and to keep the path planning model simple, the threat range of each threat source is equated to a cylindrical obstacle whose maximum cross-section covers the threat area. In summary, the battlefield environment is constructed as shown in Figure 1.

4.2. Constraints and Objective Functions

Solving the path planning problem requires establishing a suitable fitness function and considering the various constraints that affect the quality of the path. The static global 3D path planning model contains two main aspects. The first is the cost function, which serves as the objective function of the intelligent optimization algorithm; it considers the fuel consumption cost, the flight altitude cost, and the integrated threat cost of the UAV. The second is the performance constraints of the UAV itself, namely the maximum flight distance constraint, the minimum flight altitude constraint, the maximum turn angle constraint, and the maximum climb angle constraint.

4.2.1. Cost of Fuel Consumption

In actual combat missions, the amount of fuel a UAV can carry is limited. The UAV cannot fly indefinitely and must be able to return safely after the mission. In this paper, the UAV is assumed to fly at a constant speed, so the track length reflects the fuel consumption; that is, the flight fuel consumption cost can be expressed by the track length:
$$Cost_g = \sum_{i=1}^{N+1} L_i \tag{27}$$
where L_i is the length of the ith path segment.

4.2.2. Cost of Flight Altitude

Flying at low altitude takes advantage of terrain masking and reduces the risk of being detected by radar [34]. Therefore, the UAV is required to fly at as low an altitude as possible during the mission, and the flight altitude cost can be described as follows:
$$Cost_h = \sum_{i=1}^{N} z_i \tag{28}$$
where z_i is the altitude of the ith path point.

4.2.3. Cost of Integrated Threat

During its mission, the UAV will encounter enemy air defense systems, including detection radar, anti-aircraft artillery, surface-to-air missiles, and other threats. These threats are approximated as cylindrical regions in three-dimensional space, with the detection range or strike range as the radius, and the UAV is not allowed to pass through these regions.
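For illustration, a point-in-cylinder test consistent with this threat model might look as follows; the (center, radius, height) tuple layout mirrors Table 3 but is otherwise an assumption:

```python
def violates_threat(point, threats):
    """Check whether a 3-D path point lies inside any cylindrical threat region.
    `threats` is assumed to be a list of (center_x, center_y, radius, height) tuples,
    matching the threat description used in this paper (see Table 3)."""
    x, y, z = point
    for cx, cy, radius, height in threats:
        inside_circle = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
        if inside_circle and z <= height:   # inside the radius and below the cylinder top
            return True
    return False
```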

4.2.4. Constraint of Maximum Flight Distance

Assuming that the maximum range the UAV can fly with its fuel is L_max, the maximum flight distance constraint, which ensures that the UAV has enough fuel to return to base after the mission, is as follows:
$$2\sum_{i=1}^{N+1} L_i - L_{max} < 0 \tag{29}$$

4.2.5. Constraint of Minimum Flight Altitude

When the UAV performs a low-altitude surprise strike mission, flying close to the ground reduces the probability of being detected by radar; however, in the actual combat space the terrain is complex and the risk of crashing into the ground is high. Therefore, to ensure flight safety, a minimum flight altitude H_min is set, and the altitude z_i of each path point should satisfy the following constraint:
$$H_{min} - z_i < 0 \tag{30}$$

4.2.6. Constraint of Maximum Turn Angle

Due to the limitation of the UAV's maneuverability, the maximum turn angle of the actual flight must be limited. At the same time, the UAV will deviate slightly from the planned trajectory during a turn, so the trajectory curvature should ensure that the planned path can be followed effectively. The angle φ_{i,j} between adjacent trajectory segments cannot exceed the maximum turn angle φ_max:
$$\max |\varphi_{i,j}| - \varphi_{max} \le 0 \tag{31}$$

4.2.7. Constraint of Maximum Climb Angle

The maximum climb angle is the maximum angle at which the UAV can climb or descend in flight and, together with the maximum turn angle, is an important criterion for UAV maneuvers. Similar to the maximum turn angle, the climb angle θ_{i,j} between trajectory segments cannot exceed the maximum climb angle θ_max:
$$\max |\theta_{i,j}| - \theta_{max} \le 0 \tag{32}$$
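The turn and climb angles needed to check Equations (31) and (32) can be computed from the waypoint list as sketched below; measuring the climb angle of each segment against the horizontal plane is an assumption, since the paper does not give the exact formula:

```python
import numpy as np

def turn_and_climb_angles(waypoints):
    """Turn and climb angles between consecutive path segments for an (N, 3) array of
    waypoints, as needed for Eqs. (31)-(32)."""
    seg = np.diff(np.asarray(waypoints, dtype=float), axis=0)    # segment vectors
    horiz = seg[:, :2]
    climb = np.degrees(np.arctan2(seg[:, 2], np.linalg.norm(horiz, axis=1)))
    heading = np.arctan2(horiz[:, 1], horiz[:, 0])               # 2-D heading of each segment
    turn = np.degrees(np.abs(np.angle(np.exp(1j * np.diff(heading)))))  # wrapped to [0, 180]
    return turn, climb
```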
In summary, the objective function model for the path planning used in this paper is as follows.
$$\min F = \omega_1 Cost_g + \omega_2 Cost_h + \eta \cdot Penalty \tag{33}$$
$$Penalty = \sum_{i=1}^{n} g_i, \quad n = 4 \tag{34}$$
$$g_i = \begin{cases} 0, & \text{if the } i\text{th constraint is satisfied} \\ 1, & \text{if the } i\text{th constraint is violated} \end{cases} \tag{35}$$
where ω_1 and ω_2 are the weight coefficients of the flight fuel consumption cost and the flight altitude cost, respectively, and they satisfy ω_1 + ω_2 = 1; η is the penalty factor. By introducing the penalty function, the multi-constraint optimization problem is transformed into an unconstrained optimization problem. Algorithm 2 describes the process of using MHEO for UAV 3D flight path planning in detail.
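A minimal sketch of the penalized objective of Equations (33)-(35) is given below; the default L_max and H_min values are placeholders rather than values from the paper, and the turn/climb checks are passed in as Boolean flags (e.g., from the angle sketch in Section 4.2.6/4.2.7):

```python
import numpy as np

def path_fitness(waypoints, w1=0.65, w2=0.35, eta=10_000.0,
                 l_max=400.0, h_min=0.2, turn_ok=True, climb_ok=True):
    """Penalized objective of Eqs. (33)-(35); l_max and h_min are illustrative assumptions."""
    pts = np.asarray(waypoints, dtype=float)
    cost_g = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()   # Eq. (27): total track length
    cost_h = pts[1:-1, 2].sum()                                   # Eq. (28): altitude of interior points
    g = np.array([
        2 * cost_g - l_max >= 0,          # Eq. (29) violated -> g1 = 1
        np.any(pts[1:-1, 2] <= h_min),    # Eq. (30) violated -> g2 = 1
        not turn_ok,                      # Eq. (31) violated -> g3 = 1
        not climb_ok,                     # Eq. (32) violated -> g4 = 1
    ], dtype=float)
    return w1 * cost_g + w2 * cost_h + eta * g.sum()              # Eqs. (33)-(34)
```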
Algorithm 2. MHEO for unmanned aerial vehicle (UAV) path planning
1. Set the parameters of MHEO as in Algorithm 1;
2. Set the start point, the target point, and the boundaries of the battlefield space;
3. Set the positions and ranges of the threats;
4. While iter < iter_max
5. Each particle represents a path; calculate the fitness F according to Equation (33) and rank the individuals by fitness;
6. Estimate the weighted mean X_mean and covariance matrix Cov using Equations (12) and (14);
7. Evaluate the population and form the equilibrium pool using Equation (16);
8. Update t using Equation (5);
9. For the ith particle X_i: if the population has stagnated, update X_i by Equations (19) and (21);
10. Else, if X_i belongs to the exploitation population, update X_i by Equation (15);
11. Else, if X_i belongs to the equilibrium population, update X_i by Equation (8);
12. Else, if X_i belongs to the exploration population, update X_i by Equation (17);
13. If the new path is better than the previous one, update it;
14. iter = iter + 1;
15. End while;
16. Output the best path X_eq1 and the best fitness F(X_eq1).

5. Experiment and Analysis of CEC 2017 Test

To comprehensively verify the performance of MHEO, the algorithm is tested on the IEEE CEC2017 single-objective test suite. The suite used here includes 28 test functions: F1 is a unimodal function, used to test the convergence accuracy of the algorithm; F2-F8 are multimodal functions, mainly used to test the ability of the algorithm to escape local optima; F9-F18 and F19-F28 are hybrid and composition functions, respectively, which are composed of complex functions and can be used to test the potential of the algorithm to solve complex real-world optimization problems. The definitions of the functions and their optimal values can be found in Ref. [35].
Three improved swarm intelligence optimization algorithms, HFPSO [36], GEDGWO [37], and VCS [38], and two state-of-the-art swarm intelligence optimization algorithms, MPA [39] and SMA [40], are used for comparison with MHEO. HFPSO is an improved particle swarm optimization hybridized with the firefly algorithm. GEDGWO is a variant of GWO incorporating Gaussian estimation of distribution. VCS is a fusion algorithm that combines DE and CMA-ES. All three improved algorithms have demonstrated good performance in their respective literature. MPA and SMA are recently proposed, well-performing swarm intelligence optimization algorithms that have been applied in different subjects and engineering fields in recent years. Therefore, a comparison with these algorithms can verify whether the algorithm proposed in this paper improves performance.
The maximum number of iterations is 600 and the population size is 500 in each experiment. The parameter settings of each algorithm follow the original literature, as shown in Table 1. All algorithms were run independently 51 times and the experimental results were recorded; they are shown in Appendix A, Table A1. The simulation environment is an AMD R7 4800U (1.80 GHz) processor with 16 GB of memory, and the programs were run on the MATLAB 2016b platform.
As shown in Table A1, MHEO performs the best on F1, which indicates its advantage in solving ill-conditioned functions and verifies that the improvement strategy can effectively enhance the exploitation ability of the algorithm. For the multimodal test functions F2-F8, MHEO achieves better solutions on six of them (F3-F7, F9), which indicates that the improvement strategy can enhance population diversity and avoid local optima; HFPSO ranks first on F2. For the hybrid and composition functions F9-F28, each algorithm has its own strengths and weaknesses. MHEO ranks first on 12 of these functions (F9-F13, F16-F17, F20-F22, F26-F27) and second on six of them (F14-F15, F18-F19, F24, F28). MPA ranks first on five of the tested functions (F14-F15, F18-F19, F24). MHEO achieves satisfactory results on F28, while HFPSO and VCS obtain the best results on F25 and F23, respectively. Overall, MHEO ranks first on 19 of the 28 test functions and second on six, indicating that the improvement strategies proposed in this paper can effectively enhance the exploitation and exploration of the algorithm.
To analyze the solution quality of the algorithms, box plots were drawn from the results of 51 independent runs of each algorithm on each test function, as shown in Figure 2. Figure 2 shows that the solutions of MHEO are more concentrated on most of the test functions, indicating that MHEO is more stable, and that the median of MHEO is smaller, indicating that MHEO solves the functions with higher accuracy.
Convergence speed and convergence accuracy are also important indicators of algorithm performance, and Figure 3 shows the mean error convergence curves of the seven algorithms on the CEC2017 test suite. MHEO has a faster convergence speed and better convergence accuracy on 11 test functions (F1, F4, F7, F10-F13, F16-F17, F20, F26). For F3, F5, F6, F9, F21-F22, and F27, MHEO converges slowly at the beginning but converges faster and reaches better accuracy than the other algorithms in the later stages.
To statistically analyze the differences between the algorithms, the Wilcoxon signed-rank test was performed at a significance level of α = 0.05. The results are given in Appendix A, Table A2. The symbols in Table A2 are as follows: the p-value is the probability of the observed result, and p < 0.05 indicates a significant difference between the two algorithms; R indicates the result of the test, where "+" means that MHEO outperforms the competitor, "−" means that MHEO is inferior to the competitor, and "=" means that the competitor is not significantly different from MHEO. Table A2 shows that MHEO is significantly different from the comparison algorithms on most of the test functions and outperforms each comparison algorithm on at least 19 test functions. It is worth noting that MHEO differs significantly from the basic EO on 26 test functions, with no difference on only two. Therefore, the MHEO proposed in this paper significantly outperforms the comparison algorithms.
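For reference, the per-function comparison reported in Table A2 can be reproduced with SciPy's Wilcoxon signed-rank test roughly as follows; deciding the winner from the median paired difference is an assumption about the exact procedure:

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_runs(mheo_results, competitor_results, alpha=0.05):
    """Pairwise Wilcoxon signed-rank test over the 51 independent runs on one test
    function, mirroring the analysis reported in Table A2 (minimization assumed)."""
    stat, p_value = wilcoxon(mheo_results, competitor_results)
    if p_value >= alpha:
        return p_value, "="               # no significant difference
    diff = np.asarray(mheo_results) - np.asarray(competitor_results)
    return p_value, "+" if np.median(diff) < 0 else "-"
```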
Time cost is another important indicator of algorithm performance. Appendix A, Table A3 shows the average time cost of 51 runs for each algorithm, and the last row shows the average rank of each algorithm's time cost. HFPSO ranks first, while MHEO ranks only fourth. In light of the No Free Lunch theorem, an increase in time cost in exchange for improved performance is acceptable. In addition, this paper studies the offline UAV path planning problem, where accuracy is the main consideration, so the increased time cost that accompanies the performance improvement is acceptable.

6. MHEO for UAV Path Planning

In this section, we evaluate the ability of MHEO to solve the path planning problem through simulation experiments. The experiments were conducted on a simulation platform with an AMD R7 4700U (1.8 GHz) processor and 16 GB of RAM, and the programming environment is MATLAB R2016b. The parameters of the path planning model are set as follows: the battlefield space is x_max = 100 km and y_max = 100 km, the number of path nodes is Dim = 20, the maximum turn angle is 60 degrees, and the maximum climb angle is 60 degrees. The flight fuel consumption weight is ω_1 = 0.65, the flight altitude weight is ω_2 = 0.35, and the penalty factor is η = 10,000. The population size is 100 and the maximum number of iterations is 500. The experiments cover two cases. The first is the no-threat condition, in which EO and MHEO are compared to verify the effectiveness of the improved algorithm. The second is the threat condition, in which GEDGWO and MPA, the better-ranked algorithms in the function tests, are used for comparison to verify the superiority of the improved algorithm.

6.1. No-Threat Condition

The UAV start point is (3 km, 6 km) and the target point is (96 km, 94 km). The two algorithms each solved the path planning model independently 20 times, and the statistical results are recorded in Table 2.
From the statistical results in Table 2, MHEO is better than EO in terms of the mean, best, and worst values, which shows the better quality of the paths planned by the proposed MHEO and indicates a clear improvement in convergence accuracy and robustness. Although the time cost of MHEO is larger, this paper addresses offline path planning and focuses on computational accuracy, so the time cost of MHEO is acceptable. Figure 4a shows the optimal 3D path of each algorithm, and Figure 4b shows the optimal path on the 2D contour map. We can see intuitively that MHEO plans a shorter path in the no-threat case, which illustrates its effectiveness. In addition, the path planned by EO coincides with that of MHEO in most areas, but EO chooses a longer detour when passing through low-lying terrain, which leads to a larger fitness value. On the other hand, both algorithms plan paths that stay close to the ground, which also indicates the effectiveness of the path planning model proposed in this paper.

6.2. Threat Condition

The UAV start point is (3 km, 6 km), the target point is (96 km, 94 km), and the threat source settings are shown in Table 3. The three algorithms each solved the path planning model independently 20 times, and the statistical results are recorded in Table 4.
From the statistical results in Table 4, MHEO is better than MPA and GEDGWO in terms of the mean, best, and worst values. Meanwhile, MPA suffers planning failures, while MHEO consistently plans paths of better quality, indicating the better performance of the proposed MHEO. Figure 5 also shows that the path of MHEO effectively avoids the threats and has the shortest length. Figure 6 gives the variation of the climb angles and turn angles of the optimal path of each algorithm; all algorithms satisfy the turn angle and climb angle constraints, proving the effectiveness of the proposed path planning model. Figure 7 shows the convergence curves of each algorithm, from which it can be seen that MHEO has higher convergence accuracy. On the other hand, the time cost of MHEO is the largest, consistent with the function test results. Considering that this paper studies the offline path planning problem, the increased time cost of the better-performing algorithm is acceptable.

7. Conclusions

In this paper, we proposed an improved equilibrium optimizer and a three-dimensional path planning model, and applied the improved algorithm to solve the model.
In the proposed MHEO, a multiple population strategy is used to combine the basic EO with a Gaussian distribution estimation strategy to enhance performance. The basic EO relies on the best individuals of the equilibrium pool for updating, which can easily lead to local optima, while the Gaussian distribution estimation strategy uses the dominant population information to correct the evolutionary direction, thus avoiding this shortcoming. When the algorithm stagnates, two particle perturbation strategies are used: a Lévy flight strategy is applied to the dominant individuals to help them escape local optima, and an inferior solution shift strategy is imposed on the inferior individuals to help them search more of the space.
The CEC2017 suite was used to test the performance of MHEO, and the algorithm was evaluated through statistical analysis, stability analysis, convergence analysis, and the Wilcoxon test. The experimental results show that MHEO has the best performance among the seven algorithms, outperforming HFPSO, GEDGWO, VCS, MPA, SMA, and EO. In addition, MHEO was applied to solve the path planning model, and the simulation results show that MHEO can steadily plan flight paths of better quality; the path planning model proposed in this paper is feasible. On the other hand, the time cost of MHEO is higher than that of EO because of the computation of the covariance matrix. However, for static path planning, optimization accuracy is more important than time cost, so the extra time cost is acceptable.
There is still much work to be done in the future. So far, the single-UAV path planning model and the algorithm improvements have been completed. In the future, we need to further address multi-UAV collaborative mission planning, including multi-UAV path planning and multi-UAV target assignment. In addition, we need to further reduce the time cost of the algorithm so that it can solve real-time problems.

Author Contributions

Conceptualization, A.-D.T. and T.H.; methodology, A.-D.T.; software, A.-D.T. and L.X.; validation, A.-D.T. and T.H.; formal analysis, A.-D.T.; investigation, A.-D.T.; data curation, A.-D.T.; writing—original draft preparation, A.-D.T.; writing—review and editing, A.-D.T., T.H., H.Z. and L.X.; project administration, T.H.; funding acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The author acknowledges funding received from the following science foundations: The Science Foundation of the Shanxi Province, China (2020JQ-481), the Aero Science Foundation of China (201951096002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Results statistics of seven algorithms in the CEC2017 30D test.
NoHFPSOGEDGWOVCSMPASMAEOMHEO
F1Mean2.21 × 1024.79 × 10−77.32 × 1032.73 × 10−14.94 × 10−11.74 × 1022.58 × 10−8
Std1.35 × 1021.83 × 10−75.58 × 1033.30 × 10−13.18 × 10−11.63 × 1021.66 × 10−8
Rank6273451
F2Mean4.22 × 1014.25 × 1017.39 × 1018.68 × 1018.99 × 1019.80 × 1015.09 × 101
Std2.87 × 1012.64 × 1013.48 × 1016.25 × 1005.12 × 1001.62 × 1011.85 × 101
Rank1245673
F3Mean7.42 × 1014.53 × 1011.09 × 1026.89 × 1018.17 × 1015.06 × 1011.86 × 101
Std1.82 × 1011.24 × 1012.56 × 1011.86 × 1011.99 × 1011.65 × 1016.05 × 100
Rank5274631
F4Mean9.20 × 10−17.22 × 10−38.72 × 10−11.92 × 1007.35 × 10−11.27 × 10−34.47 × 10−6
Std1.29 × 1001.47 × 10−28.88 × 10−11.10 × 1003.21 × 10−14.09 × 10−34.39 × 10−6
Rank6357421
F5Mean8.74 × 1017.18 × 1011.79 × 1021.00 × 1021.18 × 1027.86 × 1014.13 × 101
Std2.05 × 1011.30 × 1011.20 × 1021.68 × 1012.40 × 1011.38 × 1014.20 × 100
Rank4275631
F6Mean7.06 × 1014.50 × 1019.19 × 1017.05 × 1019.39 × 1015.32 × 1011.82 × 101
Std2.31 × 1011.17 × 1012.25 × 1011.44 × 1012.03 × 1011.38 × 1016.36 × 100
Rank5264731
F7Mean8.99 × 1001.76 × 10−35.22 × 1028.87 × 1019.95 × 1021.05 × 1008.31× 10−13
Std1.11 × 1011.25 × 10−21.44 × 1034.60 × 1011.22 × 1031.19 × 1001.85× 10−13
Rank4265731
F8Mean2.87 × 1033.01 × 1034.00 × 1032.50 × 1033.04 × 1033.13 × 1033.00 × 103
Std6.39 × 1024.30 × 1021.03 × 1034.10 × 1025.06 × 1028.08 × 1026.06 × 102
Rank2471563
F9Mean9.43 × 1011.56 × 1011.25 × 1024.59 × 1011.16 × 1024.54 × 1018.02 × 100
Std3.12 × 1011.54 × 1012.42 × 1012.30 × 1014.33 × 1013.21 × 1011.43 × 101
Rank5274631
F10Mean1.04 × 1057.75 × 1023.51 × 1069.82 × 1041.31 × 1061.04 × 1052.54 × 102
Std8.70 × 1045.82 × 1021.89 × 1067.77 × 1041.09 × 1067.47 × 1041.55 × 102
Rank4273651
F11Mean1.83 × 1045.80 × 1019.00 × 1041.39 × 1032.71 × 1042.31 × 1042.44 × 101
Std2.99 × 1041.90 × 1014.08 × 1043.80 × 1022.64 × 1041.84 × 1049.91 × 100
Rank4273651
F12Mean1.05 × 1043.66 × 1011.49 × 1044.57 × 1014.71 × 1046.46 × 1032.23 × 101
Std1.06 × 1041.32 × 1011.80 × 1041.08 × 1012.84 × 1045.98 × 1031.51 × 100
Rank5263741
F13Mean9.39 × 1032.49 × 1011.59 × 1049.26 × 1011.99 × 1043.62 × 1036.72 × 100
Std1.12 × 1041.56 × 1011.82 × 1042.60 × 1011.57 × 1045.19 × 1031.84 × 100
Rank5263741
F14Mean7.09 × 1025.90 × 1028.28 × 1023.40 × 1028.18 × 1025.12 × 1023.59 × 102
Std2.30 × 1022.89 × 1022.45 × 1021.84 × 1022.83 × 1023.04 × 1021.63 × 102
Rank5471632
F15Mean2.11 × 1029.90 × 1012.56 × 1027.97 × 1014.34 × 1021.57 × 1028.52 × 101
Std8.26 × 1017.71 × 1011.22 × 1023.97 × 1011.64 × 1021.24 × 1025.97 × 101
Rank5361742
F16Mean1.67 × 1053.31 × 1012.30 × 1051.11 × 1023.75 × 1051.78 × 1052.09 × 101
Std1.45 × 1057.76 × 1001.74 × 1052.67 × 1013.55 × 1051.70 × 1052.84 × 100
Rank4263751
F17Mean6.60 × 1032.73 × 1011.49 × 1044.38 × 1013.00 × 1046.01 × 1039.70 × 100
Std8.00 × 1038.80 × 1001.87 × 1048.44 × 1002.11 × 1049.98 × 1032.95 × 100
Rank5263741
F18Mean2.70 × 1021.60 × 1022.22 × 1021.11 × 1023.59 × 1021.79 × 1021.48 × 102
Std1.03 × 1026.87 × 1016.41 × 1015.76 × 1011.59 × 1021.30 × 1025.00 × 101
Rank6351742
F19Mean2.69 × 1022.41 × 1022.78 × 1021.40 × 1022.93 × 1022.47 × 1022.14 × 102
Std1.94 × 1011.40 × 1012.98 × 1017.03 × 1012.17 × 1011.72 × 1015.88 × 100
Rank5361742
F20Mean9.77 × 1021.00 × 1021.07 × 1021.04 × 1022.90 × 1037.51 × 1021.00 × 102
Std1.53 × 1036.36 × 10−66.10 × 1004.14 × 1001.36 × 1031.30 × 1037.02 × 10−12
Rank6243751
F21Mean4.52 × 1023.93 × 1024.62 × 1023.82 × 1024.35 × 1023.99 × 1023.57 × 102
Std2.90 × 1011.61 × 1012.35 × 1012.39 × 1011.95 × 1011.49 × 1016.53 × 100
Rank6372541
F22Mean5.34 × 1024.58 × 1025.32 × 1024.83 × 1025.30 × 1024.69 × 1024.30 × 102
Std3.47 × 1011.28 × 1013.20 × 1011.79 × 1012.95 × 1011.43 × 1015.48 × 100
Rank7264531
F23Mean3.79 × 1023.87 × 1023.78 × 1023.87 × 1023.88 × 1023.88 × 1023.87 × 102
Std4.73 × 1001.49 × 10−23.35 × 1001.71 × 1001.69 × 1007.82 × 1001.36 × 10−2
Rank2513674
F24Mean1.50 × 1031.25 × 1031.55 × 1033.00 × 1021.98 × 1031.52 × 1039.42 × 102
Std8.58 × 1023.15 × 1029.39 × 1025.84 × 10−23.42 × 1021.76 × 1026.97 × 101
Rank4361752
F25Mean4.45 × 1024.92 × 1025.00 × 1025.03 × 1025.11 × 1025.10 × 1024.98 × 102
Std1.01 × 1011.33 × 1011.51 × 10−47.53 × 1001.17 × 1019.09 × 1005.80 × 100
Rank1245763
F26Mean4.05 × 1023.13 × 1024.96 × 1024.08 × 1024.46 × 1024.01 × 1023.11 × 102
Std4.98 × 1013.48 × 1015.05 × 1006.99 × 1002.89 × 1012.60 × 1013.36 × 101
Rank4275631
F27Mean5.53 × 1025.11 × 1025.69 × 1025.32 × 1027.75 × 1025.42 × 1024.68 × 102
Std1.13 × 1026.00 × 1011.43 × 1026.92 × 1011.83 × 1021.12 × 1022.75 × 101
Rank5263741
F28Mean2.96 × 1031.98 × 1039.38 × 1034.46 × 1031.27 × 1046.13 × 1032.03 × 103
Std3.54 × 1032.30 × 1017.22 × 1031.42 × 1035.29 × 1033.88 × 1032.42 × 101
Rank3164752
Average Rank4.432.435.893.216.254.251.54
Table A2. Wilcoxon signed-rank test results (α = 0.05).
NoHFPSOGEDGWOVCS
p-ValueR+R−Winp-ValueR+R−Winp-ValueR+R−Win
F15.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F21.44 × 10−24029244.88 × 10−1589737=1.26 × 10−31007319+
F35.15 × 10−1013260+5.80 × 10−1013242+5.15 × 10−1013260+
F45.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F55.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F65.15 × 10−1013260+6.93 × 10−1013215+5.15 × 10−1013260+
F75.15 × 10−1013260+5.14 × 10−10132605.15 × 10−1013260+
F83.44 × 10−1562764=9.78 × 10−1666660=9.03 × 10−71187139+
F95.15 × 10−1013260+2.31 × 10−61167159+5.15 × 10−1013260+
F105.15 × 10−1013260+5.71 × 10−61147179+5.15 × 10−1013260+
F115.15 × 10−1013260+2.36 × 10−9130026+5.15 × 10−1013260+
F125.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F135.15 × 10−1013260+4.42 × 10−9128937+5.15 × 10−1013260+
F148.18 × 10−9127848+3.03 × 10−51108218+1.57 × 10−9130719+
F152.72 × 10−8125670+2.61 × 10−1783543=3.33 × 10−9129432+
F165.15 × 10−1013260+5.46 × 10−1013251+5.15 × 10−1013260+
F175.15 × 10−1013260+9.31 × 10−10131610+5.15 × 10−1013260+
F184.64 × 10−8124680+1.80 × 10−1806520=1.01 × 10−7123195+
F195.15 × 10−1013260+6.93 × 10−1013215+9.66 × 10−9127551+
F205.15 × 10−1013260+5.15 × 10−10132605.15 × 10−1013260+
F215.15 × 10−1013260+6.15 × 10−1013233+5.15 × 10−1013260+
F225.15 × 10−1013260+5.46 × 10−1013251+5.15 × 10−1013260+
F234.81 × 10−712612005.55 × 10−1726600=4.94 × 10−9391287
F242.65 × 10−61164162+7.13 × 10−61142184+3.99 × 10−61155171+
F255.15 × 10−10013268.92 × 10−33849422.06 × 10−2910416+
F268.65 × 10−9127749+5.14 × 10−51095231+5.15 × 10−1013260+
F273.43 × 10−51105221+1.06 × 10−51133193+8.49 × 10−61138188+
F285.49 × 10−1727599=6.55 × 10−94412825.15 × 10−1013260+
+/−/=23/3/219/4/527/1/0
NoMPASMAEO
p−ValueR+R−Winp−ValueR+R−Winp−ValueR+R−Win
F15.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F25.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F35.15 × 10−1013260+5.15 × 10−1013260+7.80 × 10−1013197+
F45.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F55.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F65.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F75.15 × 10−1013260+5.15 × 10−1013260+6.91 × 10−9128145+
F85.14 × 10−523110957.15 × 10−1702624=4.53× 10−1743583=
F92.36 × 10−9130026+5.46 × 10−1013251+3.94 × 10−9129135+
F105.15 × 10−1013260+5.15 × 10−1013260+5.15× 10−1013260+
F115.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F125.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F135.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F145.74 × 10−1603723=6.92× 10−9128145+1.30 × 10−2928398+
F159.48 × 10−1656670=6.93 × 10−1013215+1.22 × 10−31008318+
F165.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F175.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F181.98 × 10−33339938.65 × 10−9127749+2.45 × 10−1787539=
F194.17 × 10−87812485.15 × 10−1013260+5.15 × 10−1013260+
F205.15 × 10−1013260+5.15 × 10−1013260+5.14 × 10−1013260+
F211.77 × 10−9130521+5.15 × 10−1013260+5.15 × 10−1013260+
F225.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
F237.08 × 10−1703623=1.32 × 10−61179147+6.38 × 10−3954372+
F245.15 × 10−10013267.35 × 10−1013206+5.15 × 10−1013260+
F257.15 × 10−41024302+7.10 × 10−71192134+6.36 × 10−8124086+
F261.05 × 10−9131412+5.15 × 10−1013260+1.18 × 10−9131214+
F273.18 × 10−61160166+6.53 × 10−1013224+4.74 × 10−51097229+
F285.15 × 10−1013260+5.15 × 10−1013260+5.15 × 10−1013260+
+/−/=21/4/327/0/126/0/2
Table A3. Time cost in the CEC2017 30D test (unit: seconds).
NoHFPSOGEDGWOVCSMPASMAEOMHEO
F10.801.571.973.7220.332.752.69
F20.821.581.963.6420.372.763.95
F31.081.822.294.1620.603.012.98
F42.102.933.296.2221.663.994.05
F51.111.902.364.2820.683.062.81
F61.091.842.264.2420.693.042.97
F71.141.832.294.3620.813.053.06
F81.312.152.684.7620.923.295.69
F90.941.732.253.9320.412.883.10
F101.101.592.344.3020.633.083.21
F110.981.442.264.0820.582.903.28
F121.351.822.784.8220.793.303.94
F130.911.352.113.9120.392.863.03
F141.041.842.544.2520.533.013.94
F151.822.693.395.7521.423.804.56
F161.091.532.384.3220.583.103.15
F176.016.587.6314.1825.647.998.49
F181.982.833.336.0221.543.995.00
F192.493.444.057.1722.074.504.54
F202.783.754.247.8222.454.764.71
F213.314.337.818.7922.835.255.61
F223.594.547.419.3423.195.546.01
F233.254.394.538.6622.795.205.90
F244.075.116.0610.4223.656.036.24
F254.735.828.3411.7324.306.708.82
F264.125.165.7110.3923.666.056.13
F273.134.194.818.5122.675.155.84
F287.348.6310.1516.8626.989.3011.81
Average Rank1.002.003.255.937.004.004.82

References

  1. Park, J.; Kim, S.; Suh, K. A Comparative Analysis of the Environmental Benefits of Drone-Based Delivery Services in Urban and Rural Areas. Sustainability 2018, 10, 888. [Google Scholar] [CrossRef] [Green Version]
  2. Ren, X.; Cheng, C. Model of Third-Party Risk Index for Unmanned Aerial Vehicle Delivery in Urban Environment. Sustainability 2020, 12, 8318. [Google Scholar] [CrossRef]
  3. Baniasadi, P.; Foumani, M.; Smith-Miles, K.; Ejov, V. A transformation technique for the clustered generalized traveling salesman problem with applications to logistics. Eur. J. Oper. Res. 2020, 285, 444–457. [Google Scholar] [CrossRef]
  4. Zhao, Y.; Zheng, Z.; Liu, Y. Survey on computational-intelligence-based UAV path planning. Knowl. Based Syst. 2018, 158, 54–64. [Google Scholar] [CrossRef]
  5. Huo, L.; Zhu, J.; Wu, G.; Li, Z. A Novel Simulated Annealing Based Strategy for Balanced UAV Task Assignment and Path Planning. Sensors 2020, 20, 4769. [Google Scholar] [CrossRef] [PubMed]
  6. Wu, X.; Xueli, W.; Zhen, R.; Wu, X. Bi-Directional Adaptive A* Algorithm Toward Optimal Path Planning for Large-Scale UAV Under Multi-Constraints. IEEE Access 2020, 8, 85431–85440. [Google Scholar] [CrossRef]
  7. Meng, B.B.; Gao, X. UAV path planning based on bidirectional Sparse A* Search algorithm. In Proceedings of the 2010 International Conference on Intelligent Computation Technology and Automation, ICICTA 2010, Changsha, China, 11–12 May 2010. [Google Scholar]
  8. Chen, Y.B.; Luo, G.C.; Mei, Y.S.; Yu, J.Q.; Su, X.L. UAV path planning using artificial potential field method updated by optimal control theory. Int. J. Syst. Sci. 2016, 47, 1407–1420. [Google Scholar] [CrossRef]
  9. Sun, J.; Tang, J.; Lao, S. Collision Avoidance for Cooperative UAVs with Optimized Artificial Potential Field Algorithm. IEEE Access 2017, 5, 18382–18390. [Google Scholar] [CrossRef]
  10. Kothari, M.; Postlethwaite, I. A Probabilistically Robust Path Planning Algorithm for UAVs Using Rapidly-Exploring Random Trees. J. Intell. Robot. Syst. Theory Appl. 2013, 71, 231–253. [Google Scholar] [CrossRef]
  11. Kothari, M.; Postlethwaite, I.; Gu, D.W. Multi-UAV path planning in obstacle rich environments using rapidly-exploring random trees. In Proceedings of the IEEE Conference on Decision and Control, Shanghai, China, 15–18 December 2009. [Google Scholar]
  12. Wu, G.; Pedrycz, W.; Suganthan, P.N.; Mallipeddi, R. A variable reduction strategy for evolutionary algorithms handling equality constraints. Appl. Soft Comput. J. 2015, 37, 774–786. [Google Scholar] [CrossRef]
  13. De Moura Souza, G.; Toledo, C.F.M. Genetic Algorithm Applied in UAV’s Path Planning. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation, CEC 2020—Conference Proceedings, Glasgow, UK, 19–24 July 2020. [Google Scholar]
  14. Qu, C.; Gai, W.; Zhang, J.; Zhong, M. A novel hybrid grey wolf optimizer algorithm for unmanned aerial vehicle (UAV) path planning. Knowl. Based Syst. 2020, 194, 105530. [Google Scholar] [CrossRef]
  15. Yu, X.; Li, C.; Zhou, J.F. A constrained differential evolution algorithm to solve UAV path planning in disaster scenarios. Knowl. Based Syst. 2020, 204, 106209. [Google Scholar] [CrossRef]
  16. Zhang, D.; Duan, H. Social-class pigeon-inspired optimization and time stamp segmentation for multi-UAV cooperative path planning. Neurocomputing 2018, 313, 229–246. [Google Scholar] [CrossRef]
  17. Da Silva Arantes, J.; Da Silva Arantes, M.; Toledo, C.F.M.; Júnior, O.T.; Williams, B.C. Heuristic and Genetic Algorithm Approaches for UAV Path Planning under Critical Situation. Int. J. Artif. Intell. Tools 2017, 26. [Google Scholar] [CrossRef]
  18. Zhang, X.; Lu, X.; Jia, S.; Li, X. A novel phase angle-encoded fruit fly optimization algorithm with mutation adaptation mechanism applied to UAV path planning. Appl. Soft Comput. J. 2018, 70, 371–388. [Google Scholar] [CrossRef]
  19. Wu, J.; Wang, H.; Li, N.; Yao, P.; Huang, Y.; Yang, H. Path planning for solar-powered UAV in urban environment. Neurocomputing 2018, 275, 2055–2065. [Google Scholar] [CrossRef]
  20. Roberge, V.; Tarbouchi, M.; LaBonte, G. Fast Genetic Algorithm Path Planner for Fixed-Wing Military UAV Using GPU. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 2105–2117. [Google Scholar] [CrossRef]
  21. Kumar, P.; Garg, S.; Singh, A.; Batra, S.; Kumar, N.; You, I. MVO-Based 2-D Path Planning Scheme for Providing Quality of Service in UAV Environment. IEEE Internet Things J. 2018, 5, 1698–1707. [Google Scholar] [CrossRef]
  22. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  23. Holland, J.H. Adaptation in Natural and Artificial Systems; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  24. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks—Conference Proceedings, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  25. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  26. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  27. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  28. Hansen, N. The CMA Evolution Strategy: A Comparing Review. Stud. Fuzziness Soft Comput. 2006, 75–102. [Google Scholar] [CrossRef]
  29. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S.; Chakrabortty, R.K.; Ryan, M.J. Solar photovoltaic parameter estimation using an improved equilibrium optimizer. Sol. Energy 2020, 209, 694–708. [Google Scholar] [CrossRef]
  30. Agnihotri, S.; Atre, A.; Verma, H.K. Equilibrium optimizer for solving economic dispatch problem. In Proceedings of the PIICON 2020—9th IEEE Power India International Conference, Sonepat, India, 28 February–1 March 2020. [Google Scholar]
  31. Mousa, A.A.; El-Shorbagy, M.A.; Mustafa, I.; Alotaibi, H. Chaotic Search Based Equilibrium Optimizer for Dealing with Nonlinear Programming and Petrochemical Application. Processes 2021, 9, 200. [Google Scholar] [CrossRef]
  32. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  33. Long, N.C.; Meesad, P.; Unger, H. A highly accurate firefly based algorithm for heart disease prediction. Expert Syst. Appl. 2015, 42, 8221–8231. [Google Scholar] [CrossRef]
  34. Causa, F.; Fasano, G.; Grassi, M. Multi-UAV Path Planning for Autonomous Missions in Mixed GNSS Coverage Scenarios. Sensors 2018, 18, 4188. [Google Scholar] [CrossRef] [Green Version]
  35. Liu, Z.; Qin, Z.; Zhu, P.; Li, H. An adaptive switchover hybrid particle swarm optimization algorithm with local search strategy for constrained optimization problems. Eng. Appl. Artif. Intell. 2020, 95, 103771. [Google Scholar] [CrossRef]
  36. Aydilek, I.B. A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl. Soft Comput. J. 2018, 66, 232–249. [Google Scholar] [CrossRef]
  37. Wang, X.; Zhao, H.; Han, T.; Zhou, H.; Li, C. A grey wolf optimizer using Gaussian estimation of distribution and its application in the multi-UAV multi-target urban tracking problem. Appl. Soft Comput. J. 2019, 78, 240–260. [Google Scholar] [CrossRef]
  38. Li, M.D.; Zhao, H.; Weng, X.W.; Han, T. A novel nature-inspired algorithm for optimization: Virus colony search. Adv. Eng. Softw. 2016, 92, 65–88. [Google Scholar] [CrossRef]
  39. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  40. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
Figure 1. Battlefield environment.
Figure 2. Box plots of the CEC2017 results. Subfigures (a–bb) represent functions F1–F28, respectively.
Figure 3. Convergence curves of CEC2017. Subfigures (a–bb) represent functions F1–F28, respectively.
Figure 4. The best path of each algorithm: (a) path in three-dimensional space; (b) path on the two-dimensional contour map.
Figure 5. The best path of each algorithm: (a) path in three-dimensional space; (b) path on the two-dimensional contour map.
Figure 6. The best path of each algorithm: (a) climb angle; (b) turn angle.
Figure 7. Convergence curves for three algorithms.
Table 1. Algorithm parameter setting.
Algorithm | Parameter Setting
HFPSO | c_1 = c_2 = 1.49445, a = 0.2, B_0 = 2, γ = 1, ω_i = 0.9, ω_f = 0.5
GEDGWO | r_3 ∈ (0, 1), r_4 ∈ (0, 1), r_5 ∈ (0, 1)
VCS | λ = 0.5, σ = 0.3
MPA | FADs = 0.2, P = 0.5
SMA | z = 0.3
EO | a_1 = 2, a_2 = 1
Table 2. Comparison of path planning results (no threat).
Algorithm | Mean | Best | Worst | Std | Time
EO | 185.25 | 184.33 | 188.46 | 1.42 | 2.51
MHEO | 184.13 | 182.71 | 186.86 | 1.34 | 2.61
Table 3. Threat source settings.
Threat | Type | Position/km | Radius/km | Height
Threat 1 | Radar | (35, 20) | 13 | 2.8
Threat 2 | Missile | (35, 52) | 8 | 2.9
Threat 3 | Artillery | (52, 72) | 8 | 3.0
Threat 4 | Missile | (63, 45) | 10.7 | 2.9
Threat 5 | Radar | (78, 78) | 9 | 3.1
Threat 6 | Artillery | (87, 45) | 7 | 3.0
Table 4. Comparison of path planning results (threat).
Algorithm | Mean | Best | Worst | Std | Time | Success
GEDGWO | 217.923 | 184.91 | 248.7641 | 20.71729 | 53.92984 | 1
MPA | 719.5117 | 198.3451 | 10228.08 | 2238.119 | 53.68744 | 0.95
MHEO | 196.0584 | 184.8934 | 224.3117 | 13.22606 | 55.10337 | 1
