Article

Differential Elite Learning Particle Swarm Optimization for Global Numerical Optimization

School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Submission received: 12 March 2022 / Revised: 4 April 2022 / Accepted: 8 April 2022 / Published: 11 April 2022
(This article belongs to the Special Issue Recent Advances in Computational Intelligence and Its Applications)

Abstract:
Although particle swarm optimization (PSO) has been successfully applied to solve optimization problems, its performance still encounters challenges when dealing with complicated optimization problems, especially those with many interacting variables and many wide, flat local basins. To alleviate this issue, this paper proposes a differential elite learning particle swarm optimization (DELPSO) that differentiates the two guiding exemplars of each particle as much as possible. Specifically, in this optimizer, particles in the current swarm are divided into two groups, namely the elite group and the non-elite group, based on their fitness. Particles in the non-elite group are updated by learning from those in the elite group, while particles in the elite group are not updated and directly enter the next generation. To balance fast convergence and high diversity at the particle level, we let each particle in the non-elite group learn from two differential elites in the elite group. In this way, the learning effectiveness and the learning diversity of particles are expected to improve to a large extent. To alleviate the sensitivity of the proposed DELPSO to the newly introduced parameters, dynamic parameter adjustment strategies were further designed. With these two main components, the proposed DELPSO is expected to balance search intensification and diversification well, so that the solution space is properly explored and exploited. Extensive experiments conducted on the widely used CEC 2017 benchmark set with three different dimension sizes demonstrate that the proposed DELPSO achieves highly competitive, and often much better, performance than state-of-the-art PSO variants.

1. Introduction

Particle swarm optimization (PSO) has received extensive attention since it was proposed by Eberhart and Kennedy in 1995 [1]. Owing to its ease of implementation, strong global search ability, and lack of requirements on the mathematical properties of optimization problems, it has been widely used to solve various optimization problems [2], including real-world engineering problems such as influence spread [3] and the Hilbert transform [4].
In PSO, the most critical part is the learning strategy used to update particles. In the classical PSO, each particle learns from its own historical best position and the global best position of the swarm discovered so far [1,5]. Such a learning strategy has two limitations [6,7]. On the one hand, the global best position is too greedy. On the other hand, the global best position is shared by all particles. These two limitations lead to a low learning diversity of particles. Therefore, the classical PSO loses its effectiveness when solving multimodal problems [8]. To improve the optimization performance of PSO, many researchers have been devoted to designing novel learning strategies to improve the learning effectiveness and the learning diversity of particles. As a result, many remarkable advanced learning strategies [9,10,11,12,13,14] have been developed. Roughly speaking, existing learning strategies of PSO can be divided into two main categories, namely topology-based learning strategies [15,16,17] and exemplar-construction-based learning strategies [18,19,20].
Topology-based learning strategies [5,15,21,22,23] mainly utilize different topologies to communicate with other particles and find suitable guiding exemplars for each particle. To avoid falling into local regions and premature convergence, neighborhood topology structures are commonly employed to increase the learning diversity of particles to explore the solution space. In this research direction, many topologies have been developed, such as the ring topology [24], the star topology [25], the wheel topology [5], the random topology [26,27], and the dynamic topology [28].
Different from topology-based learning strategies, exemplar construction-based strategies [10,29,30,31] mainly construct a new guiding exemplar for each particle by using various dimension recombination techniques. In general, the constructed exemplar is not visited by particles in the swarm. In this research direction, the most representative method is the comprehensive learning PSO (CLPSO) [10], which constructs a guiding exemplar dimension by dimension based on the personal best positions of all particles. Since the advent of CLPSO, many other effective recombination techniques have been devised, such as the orthogonal learning PSO (OLPSO) [29], which designs an orthogonal matrix to roughly seek for the effective recombination of dimensions, and the genetic learning PSO (GLPSO) [31], which uses operators in genetic algorithms to construct a guiding exemplar for each particle.
The above two main kinds of learning strategies mainly utilize the historical best positions, such as the personal best positions, the global best position, and the neighbor best positions, to direct the update of particles. These historical best positions usually remain unchanged for many generations, especially in the late stage of the evolution, leading to the learning effectiveness and the learning diversity of particles being improved limitedly. To alleviate this issue, in recent years, many researchers have attempted to abandon the use of this historical information to direct the update of particles, but turn to utilizing the predominant particles in the current swarm to direct the update of inferior ones. Along this line, many novel PSO variants have been developed [9,32,33,34], such as the competitive swarm optimizer (CSO) [33], and the level-based learning swarm optimizer (LLSO) [35]. When compared with traditional PSO variants utilizing historical information to direct the update of particles, these new PSO variants preserve higher search diversity because particles in the current swarm are updated generation-by-generation, and thus the predominant particles in the current swarm used to direct the update of inferior ones are different in different generations.
Taking inspiration from the idea of employing predominant particles in the current swarm to guide the update of particles, this paper proposes a differential elite learning particle swarm optimization (DELPSO) algorithm to further improve the optimization performance of PSO. Specifically, instead of randomly adopting predominant particles in the swarm to guide the update of inferior ones, as in existing studies [33,34,36,37], the proposed DELPSO adopts two very differential predominant particles to update inferior ones, so that each particle learns from two very different exemplars, and thus the learning diversity of particles is further improved. In particular, the main features and main components are summarized as follows:
(1)
A differential elite learning strategy (DEL) is devised to update particles. Specifically, this strategy first divides the swarm into two exclusive sets, namely the elite group (EG), containing the top best egs particles (egs is the size of EG), and the non-elite group (NEG), containing the rest. Then, particles in the elite group are not updated, and only particles in the non-elite group are updated by learning from those in the elite group. To strike a promising balance between fast convergence and high diversity, we let each particle in the non-elite group learn from two very differential exemplars. In this way, the learning diversity and the learning effectiveness of particles can be promoted largely.
(2)
A dynamic partition strategy for separating the swarm into the two groups is further devised by dynamically adjusting the elite group size. Specifically, the elite group size is gradually decreased from a large value to a small value. In this way, more and more particles come into the non-elite group and then are updated by learning from fewer and fewer elites in the elite group. With this mechanism, the swarm gradually changes from exploring the solution space to exploiting the promising areas.
With the above mechanisms, the proposed DELPSO is expected to balance high search diversity and fast convergence well, in order to explore and exploit the solution space to achieve satisfactory performance. To verify its effectiveness, extensive experiments were carried out on the CEC 2017 benchmark problem set [38] with three dimension sizes (namely 30, 50, and 100) by comparing DELPSO with several state-of-the-art PSO variants. In addition, deep investigations on the effectiveness of each component in DELPSO were also performed to find what contributes to its promising performance in solving optimization problems.
The rest of this paper is organized as follows. Section 2 briefly reviews closely related work. Section 3 elaborates the proposed DELPSO in detail, and Section 4 presents extensive experiments validating its effectiveness. Finally, Section 5 concludes this paper.

2. Related Work

2.1. Canonical Particle Swarm Optimization

In PSO [1], each particle is represented by two vectors, namely the position vector $X_i = (x_i^1, \dots, x_i^d, \dots, x_i^D)$ and the velocity vector $V_i = (v_i^1, \dots, v_i^d, \dots, v_i^D)$, where $i \in \{1, 2, \dots, NP\}$, $NP$ is the population size, and $D$ is the dimension size. In the classical PSO, each particle cognitively learns from its own experience and socially learns from the social experience of the whole swarm. In particular, each particle is updated as follows:
$$v_i^d(t+1) = \omega v_i^d(t) + c_1 r_1 \left( pbest_i^d(t) - x_i^d(t) \right) + c_2 r_2 \left( gbest^d(t) - x_i^d(t) \right) \quad (1)$$
$$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1) \quad (2)$$
where $pbest_i$ is the personal best position of the $i$th particle and $gbest$ is the global best position found so far by the whole swarm. In terms of the parameters, $\omega$ is the inertia weight, $c_1$ and $c_2$ are two acceleration coefficients, and $r_1$ and $r_2$ are two real random numbers uniformly sampled within $(0, 1)$.
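The classical update in Equations (1) and (2) can be sketched in Python as follows (a minimal sketch; the values of $\omega$, $c_1$, and $c_2$ are common settings from the PSO literature, not values prescribed by this paper):

```python
import numpy as np

def classical_pso_step(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618):
    """One velocity/position update for a swarm of shape (NP, D).

    Equations (1) and (2): fresh uniform random numbers r1, r2 are drawn
    per particle and per dimension, as is standard in classical PSO.
    """
    NP, D = x.shape
    r1 = np.random.rand(NP, D)
    r2 = np.random.rand(NP, D)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Equation (1)
    x = x + v                                                  # Equation (2)
    return x, v
```

Note that `gbest` is a single vector shared by all particles, which is exactly the source of the low learning diversity discussed above.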
In Equation (1), it is found that in the classical PSO, all particles share one same guiding exemplar, namely the global best position of the swarm g b e s t . Besides, such a guiding exemplar is too greedy. These two limitations lead to the classical PSO preserving low search diversity but having fast convergence. Hence, it usually obtains promising performance on unimodal problems, but loses its effectiveness on multimodal problems [6,39].

2.2. Development of PSO

To improve the optimization performance of PSO, researchers have been devoted to designing novel PSO variants from different perspectives. For example, to alleviate the sensitivity of PSO to the parameters in Equation (1), researchers have devised many adaptive parameter adjustment strategies [40,41,42]. To improve the learning effectiveness of particles, researchers have proposed a lot of novel and effective learning strategies for PSO [10,34,43,44].
Among the extensive research of PSO, the most widely researched direction is the learning strategies for PSO, which play a vital role in helping PSO achieve good performance. In a broad sense, existing learning strategies for PSO can be classified into two main categories, namely topology-based learning strategies [15,21,28,45,46] and constructive learning strategies [10,18,29,47,48].
Topology-based learning strategies mainly make use of different topological structures to communicate with other particles and find appropriate exemplars to guide the update of each particle. In fact, the learning strategy in the classical PSO [1,5] is a global-topology-based one, where all particles are connected to interchange information. Such a full topology usually leads to greedy attraction by the second guiding exemplar in Equation (1), which likely results in premature convergence of PSO in solving multimodal problems. To alleviate this issue and reduce the greedy attraction of the second guiding exemplar, researchers have developed many neighborhood topologies to increase the diversity of the second exemplar in Equation (1) and thereby improve the performance of PSO [5,22,23]. For instance, in [15], a ring topology along with a local search method was introduced into PSO to maintain the balance between exploration and exploitation. Specifically, the ring topology was used to construct a neighbor region for each particle to determine a less greedy exemplar for the associated particle, so that high swarm diversity could be maintained and the possibility of particles being trapped in local optima could be expectedly reduced. Except for the ring topology, the star topology [5] and the wheel topology [5] were also employed to connect particles and determine a promising exemplar to replace $gbest$ in Equation (1) for each particle. In [49], Shi et al. adopted cellular automata (CA) with the lattice and the "smart-cell" structure to connect particles and select the second guiding exemplar for each particle.
Since different topologies preserve different properties, to take advantage of the merits of different topologies, researchers have attempted to use multiple topologies to select guiding exemplars for particles [50,51]. For instance, Du et al. proposed a heterogeneous strategy PSO (HSPSO) [52] by using different topological structures for different particles. Specifically, some particles adopt the global topology to achieve fast convergence, while some particles adopt the local topology to maintain high diversity. In addition, instead of using fixed topologies, some researchers have even proposed to use dynamic topologies to find promising exemplars to direct the update of particles. For instance, Zeng et al. proposed a dynamic-neighborhood-based switching PSO (DNSPSO) algorithm [28] by devising a distance-based dynamic topology and a novel switching learning strategy to adaptively adjust the topology based on the state of the swarm. In [53], a small-world network based topology was designed to let each particle interact with its nearest neighbors with a high probability and to communicate with some distant particles with a low probability.
Instead of selecting exemplars from existing personal best positions of particles, constructive learning strategies [10,30,47,48,54] mainly construct promising guiding exemplars for particles by recombining dimensions of historical best positions. In this research direction, the most representative algorithm is the comprehensive learning particle swarm optimizer (CLPSO) [10]. Specifically, in this algorithm, a guiding exemplar is constructed dimension by dimension, based on the selected personal best positions. Due to the randomness in the selection of the personal best positions, the constructed exemplars are likely different for different particles, and thus CLPSO shows good performance in solving multimodal problems. To further improve the optimization performance of CLPSO, many effective techniques have been additionally proposed to cooperate with CLPSO to achieve more promising performance [30,54,55]. For instance, Lynn et al. proposed a heterogeneous CLPSO in [47]. Specifically, this algorithm first divides the swarm into two subgroups. Then, one subgroup adopts the comprehensive learning strategy to generate diversified exemplars by using the personal best positions of particles only in this subgroup, while the other subgroup utilizes the comprehensive learning strategy to generate promising exemplars from the personal best positions of all particles in the entire swarm. To obtain high-quality solutions, Cao et al. proposed a CLPSO variant embedded with local search (CLPSO-LS) [48] by executing the Broyden–Fletcher–Goldfarb–Shanno (BFGS) local search method adaptively during the evolution of CLPSO. With the help of the local search method, the accuracy of the solutions obtained by CLPSO is improved.
The above CLPSO variants have shown promising performance in solving multimodal problems. However, the construction of the guiding exemplars is inefficient due to the random recombination of dimensions. Therefore, to construct effective guiding exemplars, many researchers have designed various efficient recombination techniques [29,31]. For instance, in [29], Zhan et al. proposed an orthogonal learning PSO (OLPSO) based on orthogonal experimental design. Specifically, in this algorithm, an orthogonal matrix is maintained to discover potentially useful recombinations of dimensions. Though this recombination of dimensions is effective in constructing promising exemplars, it consumes too many fitness evaluations in the orthogonal experimental design. In [31], Gong et al. utilized genetic operators, such as crossover, mutation, and selection, to construct guiding exemplars. By means of this method, the constructed guiding exemplars are expectedly not only well diversified, but also of high quality.
The above two main kinds of learning strategies mainly determine or construct guiding exemplars based on historical information, such as the personal best positions of particles and the global best position of the swarm. However, it is well known that these historical best positions likely remain unchanged for many generations, especially in the late stage of the evolution. Therefore, the learning diversity of particles is improved only limitedly, which hinders the swarm from escaping wide and flat local regions. To alleviate this issue, many researchers have attempted to abandon the use of historical best positions and turned to utilizing predominant particles in the current swarm to direct the update of inferior particles. As a result, many novel effective PSO variants have been developed [33,34,35]. For instance, inspired by the social learning behavior among social animals, a social learning PSO (SL-PSO) [21] was developed by letting each updated particle learn from a predominant one, randomly selected from those which are better than the updated particle. In [33], Cheng et al. proposed a competitive swarm optimizer (CSO) to tackle complicated optimization problems. Specifically, this algorithm first arranges particles into pairs. Then, each pair of particles competes with each other. After competition, the loser is updated by learning from the winner, while the winner is not updated. To further improve the learning effectiveness and the learning diversity of particles, a level-based learning swarm optimizer (LLSO) [35] was designed. This algorithm first partitions particles into several levels and then lets particles in lower levels learn from those in higher levels. In this way, each particle is guided by two different predominant ones, and different particles preserve different guiding exemplars. Therefore, the learning diversity of particles is improved largely, which is beneficial for the swarm to escape from local regions.
The above predominant particle guided learning strategies help PSO achieve very promising performance in problem optimization. However, these learning strategies randomly choose predominant particles to direct the update of inferior ones. The random selection of predominant particles may lead to the two guiding exemplars being close to each other. Once the two guiding exemplars fall into local regions, the updated particle likely falls into local areas as well. In this situation, the learning effectiveness of particles may degrade. To alleviate this issue, this paper proposes a differential elite learning particle swarm optimization (DELPSO), by trying to select two very differential predominant particles to guide the update of inferior ones.

3. Proposed DELPSO

To improve the learning diversity of particles, the two guiding exemplars directing the update of the velocity of each particle should be as different as possible, so that particles could search the solution space in different directions. Bearing this in mind, we propose a differential elite learning particle swarm optimization (DELPSO) to tackle complicated optimization problems. The concrete elucidation of each component is presented as follows.

3.1. Differential Elite Learning Strategy

To let each particle learn from two very different predominant elites, we propose a differential elite learning strategy (DEL) for PSO to direct the update of inferior particles. Specifically, given that NP particles are maintained in the swarm, the strategy first partitions particles in the swarm into two exclusive groups, namely the elite group (EG), containing the top best egs particles (where egs is the elite group size), and the non-elite group (NEG), consisting of the rest (NP-egs) particles. Then, similar to [9,33,34,35], we employed the predominant particles in EG to guide the update of those in NEG. As for particles in EG, they are not updated and directly enter the next generation, so that valuable evolutionary information could be preserved and prevented from being destroyed. In this manner, particles in EG become better and better, and thus the convergence of the swarm could be guaranteed.
Specifically, each particle in NEG is updated as follows:
$$v_i^d \leftarrow r_1 v_i^d + r_2 \left( x_{EG_{r1}}^d - x_i^d \right) + \alpha r_3 \left( x_{EG_{r2}}^d - x_i^d \right) \quad (3)$$
$$x_i^d \leftarrow x_i^d + v_i^d \quad (4)$$
where $x_i = (x_i^1, \dots, x_i^d, \dots, x_i^D)$ and $v_i = (v_i^1, \dots, v_i^d, \dots, v_i^D)$ are the position and the velocity of the $i$th particle in NEG, respectively; $x_{EG_{r1}} = (x_{EG_{r1}}^1, \dots, x_{EG_{r1}}^d, \dots, x_{EG_{r1}}^D)$ and $x_{EG_{r2}} = (x_{EG_{r2}}^1, \dots, x_{EG_{r2}}^d, \dots, x_{EG_{r2}}^D)$ are two randomly selected predominant particles in EG; $r_1$, $r_2$, and $r_3$ are three real random numbers uniformly sampled within $(0, 1)$; and $\alpha$ is a control parameter within $(0, 1)$ in charge of the learning preference toward the second exemplar.
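Given the two selected elites, the update in Equations (3) and (4) can be sketched as follows (a minimal sketch; the function name and argument names are illustrative, and $r_1$, $r_2$, $r_3$ are drawn as per-particle scalars, following the wording of Equation (3)):

```python
import numpy as np

def del_update(x_i, v_i, x_elite1, x_elite2, alpha=0.5):
    """Differential elite learning update for one NEG particle (Equations (3)-(4)).

    alpha in (0, 1) weights the attraction toward the second, more
    diversity-oriented elite exemplar.
    """
    r1, r2, r3 = np.random.rand(3)  # three uniform numbers in (0, 1)
    v_i = r1 * v_i + r2 * (x_elite1 - x_i) + alpha * r3 * (x_elite2 - x_i)
    x_i = x_i + v_i
    return x_i, v_i
```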
With respect to the selection of the two guiding exemplars, unlike existing studies [9,33,34,35], which randomly select predominant particles with a uniform distribution (meaning that all predominant particles have the same probability of being selected), we selected the two guiding exemplars based on two different roulette wheel selection strategies, so as to differentiate the two selected exemplars as much as possible and balance fast convergence and high diversity at the particle level.
Specifically, in Equation (3), we consider that the first exemplar is responsible for fast convergence, while the second exemplar is in charge of the swarm diversity, which prevents the updated particle from being greedily attracted by the first exemplar. Based on this consideration, we consider that the first exemplar should be better than the second exemplar. However, if the two selected exemplars are too similar to each other, then the updated particle likely approaches the areas where the two exemplars are located. Once the two similar exemplars fall into local basins, the updated particle also likely falls into the local areas. To prevent this situation, by taking inspiration from [56,57], we defined two very different roulette wheel selection strategies to select the two guiding exemplars for each particle.
First, for the first exemplar, since it is expectedly better than the second one, we calculated the selection probabilities of particles in EG as follows:
$$P_j = \frac{w_j}{\sum_{i=1}^{egs} w_i} \quad (5)$$
where P j is the selection probability of the j th particle in EG, egs is the size of EG, and w j is the weight of the j th particle in EG, which is computed as follows:
$$w_j = \frac{1}{0.2 \times egs \times \sqrt{2\pi}} \, e^{-\frac{(rank_j - 1)^2}{2 \times 0.2^2 \times egs^2}} \quad (6)$$
where $rank_j \in [1, egs]$ is the rank of the $j$th particle in EG after sorting the particles in EG from the best to the worst.
After the calculation of the selection probability of each particle in EG, we randomly selected a guiding exemplar from EG based on the roulette wheel selection strategy with the calculated probabilities. From Equations (5) and (6), we can see that the better a particle in EG is, the larger its weight as computed by Equation (6), and thus the larger its selection probability as calculated by Equation (5). In this way, better particles in EG are preferred as the first guiding exemplar in Equation (3).
Second, as for the second exemplar, to differentiate it from the first exemplar, we propose another selection probability calculation method for particles in EG. Specifically, the selection probability of each particle in EG as the second guiding exemplar is calculated as follows:
$$P_j = \frac{w_j}{\sum_{i=1}^{egs} w_i} \quad (7)$$
$$w_j = \frac{1}{0.2 \times egs \times \sqrt{2\pi}} \, e^{-\frac{(egs - rank_j)^2}{2 \times 0.2^2 \times egs^2}} \quad (8)$$
From Equations (7) and (8), we can see that the worse a particle in EG is, the larger its weight as computed by Equation (8), and thus the larger its selection probability as calculated by Equation (7). In this way, worse particles in EG are preferred as the second guiding exemplar in Equation (3). However, it should be mentioned that the particle in EG selected as the second guiding exemplar is still better than the updated particle in NEG, because the swarm is separated into the two exclusive sets based on fitness.
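The two rank-biased selection schemes can be sketched together in Python (a minimal sketch; the Gaussian-shaped rank weight with spread 0.2 × egs is assumed from Equations (6) and (8), and the constant factor in front of the exponential is omitted because it cancels when the weights are normalized into probabilities):

```python
import numpy as np

def elite_selection_probs(egs):
    """Selection probabilities over the sorted elite group (best rank = 1).

    The first scheme (Equations (5)-(6)) favors better-ranked elites;
    the second (Equations (7)-(8)) mirrors the weight so that worse-ranked
    elites are favored, differentiating the two exemplars.
    """
    ranks = np.arange(1, egs + 1)
    sigma = 0.2 * egs
    w1 = np.exp(-(ranks - 1) ** 2 / (2 * sigma ** 2))    # peaks at rank 1
    w2 = np.exp(-(egs - ranks) ** 2 / (2 * sigma ** 2))  # peaks at rank egs
    return w1 / w1.sum(), w2 / w2.sum()

def roulette_pick(probs, rng=np.random):
    """Roulette wheel selection: index into EG drawn with the given probabilities."""
    return rng.choice(len(probs), p=probs)
```

Note that the two probability vectors are mirror images of each other, which is what pushes the two selected exemplars apart in rank.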
With the above defined roulette wheel selection strategies, the two guiding exemplars in Equation (3) are likely different from each other. On the one hand, the updated particle learns from two predominant ones in EG, and thus the learning effectiveness is expectedly guaranteed. Therefore, fast convergence could be maintained. On the other hand, the two very different exemplars could afford diverse learning directions for each updated particle. As a result, the greedy attraction of the first guiding exemplar is expectedly prevented, which is beneficial for particles to explore the solution space. Therefore, high swarm diversity is expectedly maintained as well. Overall, we can see that a promising balance between exploration and exploitation could be maintained at the particle level.
In addition, it should be mentioned that, when compared with the classical updating strategy in Equation (1), we used a random real number $r_1$ to replace the inertia weight $\omega$. This brings two benefits to the proposed method. First, with this random setting, different particles in NEG have different settings of the inertia weight in the same generation, and the same particle has different settings of this parameter in different generations. This is beneficial for improving the learning diversity of particles, and thus the swarm diversity is likely improved, which is advantageous for escaping from local areas. Second, we adopted $c_1 = 1$ and $c_2 = \alpha$ in the proposed DELPSO. This reduces the number of acceleration parameters from two to one, making the parameter fine-tuning process easier.

3.2. Dynamic Partition of the Swarm

In DELPSO, particles in NEG are updated by learning from those in EG. Therefore, the partition of the swarm into the two groups is crucial for DELPSO to achieve promising performance. In particular, a large EG leads to a large number of elite particles being preserved, and only a small number of particles in NEG being updated. In this case, particles in NEG have a large range to learn from, and thus this is beneficial for the swarm to explore the solution space. On the contrary, a small EG leads to a small number of elite particles being preserved and a large number of particles in NEG being updated. In this situation, particles in NEG have a narrow range to learn from. Therefore, this is advantageous for the swarm to exploit the found promising areas.
Based on the above analysis, it is not suitable to keep the size of EG, namely egs, fixed during the evolution. Instead, we devised the following dynamic adjustment of egs to realize the dynamic partition of the swarm into the two groups:
$$egs = NP \times \left( 0.8 - 0.6 \times 10^{5 \times \left( \frac{fes}{FES_{max}} - 1 \right)} \right) \quad (9)$$
where $fes$ represents the number of fitness evaluations used so far, and $FES_{max}$ is the maximum number of fitness evaluations. It should be mentioned that egs is kept within the range [0.2 × NP, 0.8 × NP], borrowing from the "Pareto Principle" [58], popularly known as the 80/20 rule, namely that 80% of the consequences can be attributed to 20% of the causes.
From Equation (9), we get the following findings:
(1)
The size of EG (egs) decreases from 0.8 * NP to 0.2 * NP as the iteration proceeds. This indicates that as the evolution goes, fewer and fewer elite particles are preserved in EG, while more and more particles are updated in NEG. In this way, the swarm gradually changes its evolution from exploring the solution space to exploiting the found promising areas.
(2)
In the early stage, a large egs is maintained, while in the late stage, a small egs is preserved. This just matches the expectation that in the early stage, the swarm should explore the solution space, while in the late stage, the swarm should exploit the found promising areas.
(3)
Based on the above analysis, a promising balance between exploration and exploitation could be maintained during the evolution at the swarm level.
The effectiveness of this dynamic adjustment scheme was verified by the experiments in Section 4.3.
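The schedule of Equation (9) can be sketched as follows (a minimal sketch; rounding the result to the nearest integer via `round()` is an assumption, since egs must be an integer group size):

```python
import math

def elite_group_size(NP, fes, FESmax):
    """Dynamic elite group size, decreasing from ~0.8*NP to 0.2*NP (Equation (9)).

    The decay term 10**(5*(fes/FESmax - 1)) grows from ~1e-5 at the start
    of the run to 1 at the end, so egs shrinks and more particles join NEG.
    """
    egs = NP * (0.8 - 0.6 * 10 ** (5 * (fes / FESmax - 1)))
    return int(round(egs))
```

Because the decay term is exponential in $fes$, egs stays close to 0.8 × NP for much of the run and drops quickly toward 0.2 × NP in the late stage, matching the exploration-first, exploitation-later behavior described above.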

3.3. Difference between DELPSO and Existing PSO Variants

When compared with existing PSO variants [28,42,47,48,59], the proposed DELPSO is distinguished from them in the following aspects:
(1)
Different from traditional PSOs [28,42], which utilize the historical best positions to update particles, the proposed DELPSO abandons the historical best positions and directly employs predominant particles in the swarm to update inferior ones (as shown in Lines 8–23 in Algorithm 1). The historical best positions may remain unchanged for many generations, especially in the late stage of the evolution, whereas particles in the current swarm are updated generation by generation. Therefore, when compared with traditional PSO variants, the proposed DELPSO is expected to preserve higher diversity and thus have more chances to escape from local areas. In addition, when compared with the selection method in [60], the proposed DELPSO selects two very different predominant exemplars for each particle based on two different roulette wheel selection strategies (as shown in Lines 15–18 in Algorithm 1). Although the selection method in [60] considers both the fitness of individuals and the distance between individuals and the global best position to select exemplars to direct the update of each individual, it does not take the difference between the selected exemplars into account. In contrast, the proposed DELPSO selects two very different elite individuals in EG with respect to fitness to direct the update of inferior individuals. By using the fitness difference to roughly measure the difference between the two selected exemplars, DELPSO avoids pairwise Euclidean distance calculations and is thus more efficient.
(2)
Different from existing studies like CSO [33] and SL-PSO [34], which employ only one predominant particle to direct the update of each inferior one, the proposed DELPSO utilizes two predominant particles in EG to direct the update of each inferior one in NEG (as shown in Lines 15–18 in Algorithm 1). In this way, the two guiding exemplars for different particles are expectedly different; thus, DELPSO is expected to preserve higher diversity and have more chances to jump out of local basins.
(3)
Different from existing studies such as LLSO [35] and SDLSO [36], which randomly choose two different predominant particles to update inferior ones, the proposed DELPSO tries to differentiate the two selected predominant particles as much as possible and then uses them to direct the update of inferior particles (as shown in Lines 15–18 in Algorithm 1). In this way, each updated particle is guided along two different directions, which is beneficial for enhancing the learning diversity of particles. As a result, the updated particles can explore the solution space in different directions, and thus the probability of falling into local areas is reduced.
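As an illustration of this differential selection, the following sketch draws the two exemplars from two opposite roulette wheels. The linear rank weights are hypothetical stand-ins for the probabilities defined by Equations (5)–(8) in the paper:

```python
import random

def select_differential_exemplars(elite_group, rng=random):
    """Pick two elite exemplars with opposite selection preferences.

    `elite_group` is assumed sorted from best to worst fitness and to
    contain at least two members. The linear rank weights below are
    illustrative stand-ins for Equations (5)-(8): the first wheel
    favors better elites, the second favors worse elites, so the two
    picks tend to be very different.
    """
    egs = len(elite_group)
    w1 = [egs - rank for rank in range(egs)]  # better rank -> larger weight
    w2 = [rank + 1 for rank in range(egs)]    # worse rank -> larger weight
    r1 = rng.choices(range(egs), weights=w1)[0]
    r2 = rng.choices(range(egs), weights=w2)[0]
    while r2 == r1:  # re-draw until the two exemplars differ
        r2 = rng.choices(range(egs), weights=w2)[0]
    return elite_group[r1], elite_group[r2]
```

Because the best elite carries the largest weight on the first wheel and the smallest on the second, the selected pair tends to combine one convergence-oriented and one diversity-oriented guiding direction.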
Algorithm 1: General steps of the search process in DELPSO
Input: NP: population size; FESmax: maximum number of fitness evaluations; α: control parameter;
1:P: create a population of solution candidates randomly;
2:fes: record the used number of fitness evaluations;
3: For i = 1 : NP do
4:    F(i): evaluate the fitness of the i-th member of P;
5:    fes++;
6: End For
7: While (fes ≤ FESmax) do
8: Population partition process:
9:       Calculate egs (the elite group size) according to Equation (9);
10:       Sort P by F from the best to the worst;
11:       The top ranked egs individuals of P are placed into the elite group, namely EG;
12:       The rest (NP − egs) individuals of P are put into the non-elite group, namely NEG;
13:    For i = 1 : NP-egs (the size of NEG) do
14:       Selection process:
15:        Pr1: Calculate the selection probability (with respect to the first exemplar) of each individual in EG according to Equations (5) and (6);
16:        Pr2: Calculate another selection probability (with respect to the second exemplar) of each individual in EG according to Equations (7) and (8);
17:        Randomly select an exemplar x_r1 from EG based on Pr1;
18:        Randomly select an exemplar x_r2 from EG based on Pr2;
19:     Update process:
20:        Update x_i in NEG by x_r1 and x_r2 based on Equations (3) and (4);
21:        Evaluate the fitness of the updated x i ;
22:        fes ++;
23:    End For
24:End While
25:Obtain the global best solution gbest and its fitness F(gbest);
Output: F(gbest) and gbest

3.4. Overall Procedure of DELPSO

Integrating the above components, we obtain the complete procedure of the proposed DELPSO, whose general steps are shown in Algorithm 1.
The first stage of the proposed DELPSO is to create a population P consisting of NP members. Once the population is generated, the fitness of each individual in P is calculated, and the number of consumed fitness evaluations is recorded by fes.
The second stage is the iterative search process. The first step in this stage is the population division (Lines 8–12). After the elite group size egs is calculated according to Equation (9), the population P is sorted from the best to the worst. Then, the top-ranked egs individuals are placed into the elite group (EG), and the remaining (NP − egs) individuals are put into the non-elite group (NEG). Subsequently, individuals in NEG are updated. For each individual x_i in NEG, two operations are performed, namely the exemplar selection process (Lines 14–18) and the updating process (Lines 19–22). In the exemplar selection process, two kinds of selection probabilities of individuals in EG (Pr1 and Pr2) are first calculated according to Equations (5)–(8), respectively (Lines 15 and 16). Then, based on the two calculated probabilities, the roulette wheel selection strategy is used to pick two very different exemplars (Lines 17 and 18). After the selection of the two exemplars, the velocity v_i of particle x_i is updated according to Equation (3) and the position of particle x_i is updated according to Equation (4). Then, the fitness of the updated particle x_i is calculated and fes is updated. The above iterative process continues until the maximum number of fitness evaluations is exhausted. At the end of the algorithm, the global best solution is output along with its fitness (Line 25).
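The iterative process just described can be condensed into a minimal runnable sketch. Since Equations (3)–(9) are defined elsewhere in the paper, the elite fraction, the roulette weights, and the velocity-update coefficients below are illustrative placeholders, not the paper's actual formulas:

```python
import random

def delpso_sketch(f, dim, bounds, np_=40, fes_max=10000, rng=None):
    """Structural sketch of Algorithm 1 (minimization).

    The inertia weight 0.7, the uniform random coefficients, and the
    fixed 0.4*NP elite fraction are assumptions for illustration only;
    Equations (3)-(9) in the paper define the actual forms.
    """
    rng = rng or random.Random()
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    vel = [[0.0] * dim for _ in range(np_)]
    fit = [f(x) for x in pop]
    fes = np_
    while fes < fes_max:
        # Partition: sort best-to-worst, split into elite / non-elite groups.
        order = sorted(range(np_), key=lambda i: fit[i])
        egs = max(2, int(0.4 * np_))          # placeholder for Equation (9)
        eg, neg = order[:egs], order[egs:]
        for i in neg:                          # elites pass through unchanged
            if fes >= fes_max:
                break
            # Two opposite roulette wheels (placeholders for Eqs. (5)-(8)).
            r1 = rng.choices(eg, weights=[egs - k for k in range(egs)])[0]
            r2 = rng.choices(eg, weights=[k + 1 for k in range(egs)])[0]
            for d in range(dim):
                # Placeholder for the velocity/position update, Eqs. (3)-(4).
                vel[i][d] = (0.7 * vel[i][d]
                             + rng.random() * (pop[r1][d] - pop[i][d])
                             + rng.random() * (pop[r2][d] - pop[i][d]))
                pop[i][d] = min(hi, max(lo, pop[i][d] + vel[i][d]))
            fit[i] = f(pop[i])
            fes += 1
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Note that the best particle is always placed in EG and is never updated, so the best fitness found can only improve across generations.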
In Algorithm 1, the main differences between the proposed DELPSO and existing PSO variants lie in two aspects. The first is the population partition, where the current swarm is separated into two groups. Between the two groups, only the particles in NEG are updated by learning from those in EG, while particles in EG are not updated and directly enter the next generation. The second is the selection process, where two kinds of selection probabilities of individuals in EG (Pr1 and Pr2) are calculated to select two very different guiding exemplars for each particle in NEG. These two differences contribute to the unique advantages of the proposed DELPSO and underlie its good performance.
From Algorithm 1, we can see that in each generation, excluding the time for function evaluations, DELPSO takes O(NP log NP) to sort the swarm and O(NP) to partition it into two groups. It then takes O(egs) to calculate the two kinds of probabilities. After that, O(NP × egs) is needed to select two exemplars for each particle in NEG, and O(NP × D) is required to update the particles. As a whole, the time complexity of the proposed DELPSO is O(NP × D), which is the same as that of the classical PSO.
With respect to the space complexity, DELPSO needs O(NP × D) to store the positions of all particles and another O(NP × D) to store their velocities. In addition, O(NP) space is needed to store the particle indices of the two groups, and another O(NP) is needed to store the selection probabilities of particles in EG. Compared with the classical PSO, O(NP × D) space is saved because DELPSO does not need to store the personal best positions of particles. Therefore, DELPSO requires slightly less space.

4. Experiments

In this section, we conducted extensive experiments on the widely used CEC 2017 benchmark set [38] to verify the effectiveness of the proposed DELPSO. This benchmark set contains 29 optimization problems of four types, namely the unimodal problems (F1 and F3), the simple multimodal problems (F4–F10), the hybrid problems (F11–F20), and the composition problems (F21–F30). For more information about this benchmark set, please refer to [38].

4.1. Experimental Setup

To comprehensively verify the effectiveness of DELPSO, we compared it with several state-of-the-art PSO algorithms. Specifically, we selected nine representative and state-of-the-art PSO variants, namely XPSO [59], TCSPSO [61], DNSPSO [28], AWPSO [42], CLPSO-LS [48], HCLPSO [47], DPLPSO [62], SCDLPSO [6], and TLBO-FL [63]. Among these compared algorithms, XPSO, DNSPSO, AWPSO, DPLPSO, SCDLPSO, and TLBO-FL are topology-based PSO variants, while TCSPSO, CLPSO-LS, and HCLPSO are constructive learning-based PSO variants. To fully compare the proposed DELPSO with these PSO variants, we evaluated their performance on the CEC 2017 benchmark set with three different dimension sizes, namely 30-D, 50-D, and 100-D. For fairness, the maximum number of fitness evaluations (FESmax) was set as 10,000 × D (where D is the dimension size) for all algorithms.
To make fair comparisons, we fine-tuned the swarm size of all algorithms on the CEC 2017 benchmark set with different dimension sizes. As for other key parameters, we directly used the recommended settings in the associated papers. After preliminary fine-tuning experiments, the parameter settings of the swarm size for all algorithms along with the directly adopted settings of other parameters in all algorithms are shown in Table 1.
Furthermore, to evaluate each algorithm comprehensively and fairly, we ran each algorithm independently 30 times and evaluated its optimization performance using the median, the mean, and the standard deviation over the 30 independent runs. In addition, the Wilcoxon rank sum test was performed at the significance level α = 0.05 to determine whether the difference between two algorithms is statistically significant. Moreover, to examine the overall performance of each algorithm on the whole CEC 2017 benchmark set, the Friedman test was performed at the significance level α = 0.05 to obtain the average rank of each algorithm.
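As a small illustration of how the Friedman average ranks in the result tables are obtained, the following pure-Python sketch ranks the algorithms problem by problem and averages the ranks; the scores are invented, and tie handling is omitted for brevity:

```python
def average_ranks(results):
    """Compute each algorithm's average rank across problems (lower is
    better), as summarized by the Friedman test. `results[name]` lists
    one score per problem, in the same problem order for every
    algorithm. A full Friedman test would assign tied entries the mean
    of their ranks; ties are ignored here for simplicity.
    """
    algos = list(results)
    n_problems = len(results[algos[0]])
    totals = {a: 0 for a in algos}
    for p in range(n_problems):
        # Rank the algorithms on problem p: the best score gets rank 1.
        ordered = sorted(algos, key=lambda a: results[a][p])
        for rank, a in enumerate(ordered, start=1):
            totals[a] += rank
    return {a: totals[a] / n_problems for a in algos}
```

The algorithm with the smallest average rank is the overall winner on the benchmark set, which is how the last rows of Tables 2–4 are read.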
At last, it is worth mentioning that we implemented the proposed DELPSO in Python and ran all algorithms on the same computer with an 8-core Intel Core i7-10700 2.90-GHz CPU, 8 GB of memory, and the 64-bit Ubuntu 12.04 LTS system.

4.2. Comparison with State-Of-The-Art PSO Variants

In this section, we conducted extensive comparative experiments on the CEC 2017 benchmark set with three dimension sizes to compare the proposed DELPSO with the nine state-of-the-art PSO variants. Table 2, Table 3 and Table 4 respectively show the detailed comparison results on the 30-D, 50-D, and 100-D CEC 2017 benchmark problems. In these tables, the symbols “+”, “−”, and “=” behind the p-values indicate that DELPSO is significantly superior to, significantly inferior to, and equivalent to the compared algorithm on the associated problems, respectively. In the second to last row of these tables, “w/t/l” counts the numbers of problems on which the designed DELPSO achieves significantly better, equivalent, and significantly worse performance than the associated compared algorithm. In the last row of these tables, the average rank of each algorithm obtained by the Friedman test is given. In addition, Table 5 summarizes the statistical results with respect to “w/t/l” between DELPSO and the nine PSO variants on the CEC 2017 benchmark set with three dimension sizes.
As shown in Table 2, the comparison results on the 30-D CEC 2017 benchmark problems can be summarized as follows:
(1)
As can be seen from the last row of Table 2, the proposed DELPSO has the lowest rank among all algorithms, and its rank value (1.79) is much smaller than those of the other algorithms. This indicates that DELPSO achieves the best overall performance on the 30-D CEC 2017 benchmark set, and its overall performance is significantly better than that of the compared algorithms.
(2)
As can be seen from the second to last row of Table 2, except for SCDLPSO and XPSO, DELPSO significantly outperforms the other seven compared algorithms on at least 23 problems and is inferior to them on at most 4 problems. When compared with XPSO, DELPSO shows significant superiority on 16 problems and displays inferiority on only 4 problems. In comparison with SCDLPSO, DELPSO achieves highly competitive performance.
(3)
From the comparison results on different types of optimization problems, DELPSO is superior to DNSPSO, CLPSO-LS, AWPSO, DPLPSO, and TLBO-FL on the two unimodal problems, and achieves highly competitive performance with XPSO, TCSPSO, HCLPSO, and SCDLPSO. On the seven simple multimodal problems, DELPSO is significantly superior to CLPSO-LS and DPLPSO on all seven problems, and beats DNSPSO, AWPSO, HCLPSO, and TLBO-FL on five problems each. When compared with XPSO, TCSPSO, and SCDLPSO, DELPSO achieves very similar performance. On the ten hybrid problems, DELPSO achieves significant superiority over AWPSO, DPLPSO, and HCLPSO on all ten problems, and outperforms DNSPSO, TCSPSO, CLPSO-LS, and TLBO-FL on eight, nine, eight, and nine problems, respectively. When compared with XPSO, it obtains significantly better performance on five problems and loses on none. In comparison with SCDLPSO, DELPSO shows slightly worse performance on this kind of optimization problem. In terms of the ten composition problems, DELPSO performs significantly better than each of the nine compared PSO variants on at least five problems and shows inferiority to them on at most two problems. In particular, DELPSO significantly outperforms TCSPSO, AWPSO, and DPLPSO on all ten problems.
According to the comparison results between DELPSO and the compared algorithms on the 50-D CEC 2017 benchmark problems as shown in Table 3, the following conclusions can be drawn:
(1)
As can be seen from the last row of Table 3, the proposed DELPSO still has the lowest rank among all algorithms, and its rank (1.79) is still much smaller than those of the other algorithms. This demonstrates that DELPSO still achieves the best overall performance on the 50-D CEC 2017 benchmark set, and its performance is still significantly superior to that of the compared algorithms.
(2)
As can be seen from the second to last row of Table 3, DELPSO achieves significantly better performance than the nine compared algorithms on at least sixteen problems and shows inferiority to them on at most five problems. In particular, when compared with TCSPSO, CLPSO-LS, and DPLPSO, DELPSO displays no inferiority on any problem.
(3)
From the comparison results on different types of optimization problems, DELPSO is superior to DNSPSO, AWPSO, and DPLPSO on the two unimodal problems, and achieves highly competitive performance with XPSO, TCSPSO, CLPSO-LS, HCLPSO, SCDLPSO, and TLBO-FL. In particular, on the two unimodal problems, DELPSO shows no inferiority to any of the nine compared algorithms. On the seven simple multimodal problems, DELPSO is significantly better than the nine compared PSO variants on at least four problems. In particular, it significantly outperforms CLPSO-LS and DPLPSO on all seven problems. On the ten hybrid problems, DELPSO achieves significant superiority over the nine compared algorithms on at least six problems and shows inferiority to them on at most one problem. In particular, DELPSO beats AWPSO on all ten problems, and outperforms DNSPSO, DPLPSO, and HCLPSO on nine problems each. In terms of the ten composition problems, except for SCDLPSO, DELPSO performs significantly better than the other eight compared PSO variants on at least seven problems and shows inferiority to them on at most two problems. In particular, DELPSO significantly outperforms TCSPSO, CLPSO-LS, AWPSO, DPLPSO, and TLBO-FL on all ten problems. When compared with SCDLPSO, DELPSO is significantly better on five problems and displays inferiority on only two problems.
At last, the following conclusions can be drawn from the comparison results between DELPSO and the nine state-of-the-art PSO variants on the 100-D CEC 2017 benchmark problems, as shown in Table 4.
(1)
As can be seen from the last row of Table 4, the proposed DELPSO still has the lowest rank among all algorithms. This indicates that DELPSO still achieves the best overall performance on the 100-D CEC 2017 benchmark set.
(2)
As can be seen from the second to last row of Table 4, except for XPSO and SCDLPSO, DELPSO is significantly better than the other seven compared algorithms on at least twenty-two problems and is inferior to them on at most four problems. In particular, DELPSO achieves significantly better performance than DPLPSO and TLBO-FL on all 29 problems. Compared with SCDLPSO, DELPSO wins significantly on fifteen problems and loses on only four. Compared with XPSO, DELPSO achieves slightly better performance.
(3)
From the comparison results on different types of optimization problems, DELPSO is superior to AWPSO, DPLPSO, and TLBO-FL on the two unimodal problems and achieves highly competitive performance with the other algorithms. In particular, on the two unimodal problems, DELPSO achieves no worse performance than any of the nine compared algorithms. On the seven simple multimodal problems, except for XPSO and SCDLPSO, DELPSO is significantly superior to the other seven compared PSO variants on at least five problems and is inferior to them on at most one problem. In particular, it significantly outperforms CLPSO-LS, DPLPSO, and TLBO-FL on all seven problems. When compared with XPSO and SCDLPSO, DELPSO achieves highly competitive performance. On the ten hybrid problems, DELPSO achieves significant superiority over DNSPSO, TCSPSO, AWPSO, DPLPSO, HCLPSO, and TLBO-FL on at least nine problems and shows no inferiority to them on these problems. In comparison with SCDLPSO, DELPSO performs significantly better on six problems and shows no inferiority on this kind of problem. When compared with XPSO, DELPSO achieves slightly worse performance on these problems. In terms of the ten composition problems, DELPSO performs significantly better than TCSPSO, CLPSO-LS, AWPSO, DPLPSO, HCLPSO, and TLBO-FL on at least nine problems and shows no inferiority to them on these problems.
The above comparative experiments have proven the effectiveness of DELPSO with respect to solution quality. To further prove its efficiency in solving complex optimization problems, experiments were carried out on the 50-D CEC 2017 benchmark set to compare the convergence behavior of the proposed DELPSO with that of the nine compared algorithms. Figure 1 shows the comparison results on the twenty 50-D CEC 2017 benchmark problems.
From Figure 1, the following observations can be made: (1) At first glance at Figure 1, we find that the proposed DELPSO achieves faster convergence and higher solution quality than all nine compared algorithms on 16 problems. (2) In-depth observation shows that on F4, F5, F8, F16, and F21, DELPSO obtains both faster convergence and higher solution quality than eight of the compared algorithms and shows inferiority to only one compared method. (3) These two observations demonstrate the superiority of DELPSO in both convergence speed and solution quality, which verifies that DELPSO is efficient and effective in solving optimization problems.
To summarize, as shown in Table 5 and Figure 1, the proposed DELPSO consistently shows significant superiority over most of the nine compared PSO variants on the CEC 2017 benchmark set with three dimension sizes. On the one hand, from a comprehensive point of view, the above comparative experiments verify that DELPSO maintains good scalability when solving optimization problems. On the other hand, through in-depth comparisons on different types of optimization problems, we found that the performance of DELPSO is significantly better than that of the compared algorithms on complicated problems, such as the multimodal problems, the hybrid problems, and the composition problems. This demonstrates that DELPSO is promising for solving complicated optimization problems. The superiority of DELPSO mainly benefits from the proposed differential elite learning strategy and the dynamic swarm partition strategy. The former selects two very different predominant particles to direct the update of each inferior one, so a promising balance between fast convergence and high diversity can be maintained at the particle level. The latter dynamically adjusts the number of particles in the elite group, which lets the swarm gradually shift from exploring the solution space to exploiting the found promising areas, so a promising balance between exploration and exploitation can be maintained at the swarm level. With these two techniques, the proposed DELPSO is expected to balance search diversification and intensification well at both the particle level and the swarm level to explore and exploit the solution space properly. Therefore, DELPSO achieves promising performance in solving optimization problems.

4.3. Deep Investigation on DELPSO

In this section, we conducted extensive experiments on the 50-D CEC 2017 benchmark set to undertake deep investigations of the proposed DELPSO. Specifically, we mainly conducted experiments to validate the effectiveness of the two main components of DELPSO, namely the proposed DEL strategy and the proposed dynamic swarm partition strategy.

4.3.1. Effectiveness of the Proposed DEL

First, we conducted experiments to investigate the effectiveness of the proposed DEL strategy. To this end, we first adopted the roulette wheel selection strategy used for the first exemplar to also select the second exemplar. That is to say, better particles in EG are preferred as both guiding exemplars. With this selection mechanism, a variant of DELPSO was developed, which we named “DELPSO-A”. Conversely, we also adopted the roulette wheel selection strategy used for the second exemplar to also select the first exemplar. That is to say, worse particles in EG are preferred as both guiding exemplars. With this selection mechanism, another variant of DELPSO was developed, which we named “DELPSO-D”. At last, instead of using the devised ranking-weight-based roulette wheel selection strategy, we used the rankings of particles in EG to calculate the two probabilities and then selected the two exemplars based on the associated roulette wheel selection strategies. This variant of DELPSO was named “DELPSO-R”.
After the above preparation, we conducted experiments on the 50-D CEC 2017 benchmark set to compare the four versions of DELPSO mentioned above. Table 6 shows the comparison results among the four versions of DELPSO. In this table, the best results are highlighted in bold.
From Table 6, the following observations can be made. (1) From the perspective of the Friedman test, the rank value of DELPSO is the smallest among the four versions. This demonstrates that DELPSO achieves the best overall performance. (2) Specifically, when compared with DELPSO-A and DELPSO-D, DELPSO is much better. This demonstrates that selecting two very different guiding exemplars based on the proposed DEL is much more effective than selecting both exemplars with the same preference, without making them as different as possible. (3) When compared with DELPSO-R, DELPSO presents great superiority. This demonstrates the effectiveness of the probability computation shown in Equations (6) and (8).
Based on the above observations, it was found that the proposed DEL strategy is effective and plays a key role in helping DELPSO achieve good performance.

4.3.2. Effectiveness of the Dynamic Swarm Partition Strategy

In this section, we verified the effectiveness of the proposed dynamic swarm partition strategy, which adjusts the elite group size (egs) dynamically based on Equation (9). To this end, we first set egs to different fixed values ranging from 0.2 × NP to 0.8 × NP. Then, we conducted experiments on the 50-D CEC 2017 benchmark set to compare DELPSO with the dynamic strategy against the versions with fixed values of egs. Table 7 shows the comparison results with respect to the mean fitness over 30 independent runs. In this table, the best results are highlighted in bold.
From Table 7, the following conclusions can be drawn. (1) Whether from the perspective of the Friedman test results or in view of the number of problems on which the associated algorithms achieve the best results, DELPSO with the dynamic strategy achieves much better performance than the versions with fixed settings. (2) Through deeper investigation, we found that the optimal setting of egs differs across problems. Furthermore, on the problems where DELPSO with the dynamic strategy does not obtain the best results, its results are very close to those obtained by DELPSO with the associated optimal fixed setting of egs.
Based on the above experiments, it is verified that the dynamic partition strategy is helpful for DELPSO to achieve good performance.

5. Conclusions

This paper has proposed a differential elite learning particle swarm optimization (DELPSO) algorithm to tackle optimization problems effectively. Unlike traditional PSO variants that utilize historical best positions to direct the update of particles, DELPSO employs predominant particles in the swarm to guide the update of inferior ones. Specifically, the swarm is first dynamically divided into two groups, namely the elite group and the non-elite group, based on the devised dynamic swarm partition strategy. Then, particles in the non-elite group are updated by learning from those in the elite group, while particles in the elite group are not updated and directly enter the next generation. To let each updated particle learn from different exemplars with diverse directions, we devised two kinds of selection probabilities for particles in the elite group and then selected two very different guiding exemplars for each particle in the non-elite group. With the proposed differential elite learning strategy and the devised dynamic swarm partition strategy, the proposed DELPSO is expected to balance exploration and exploitation well at both the swarm level and the particle level to obtain promising performance.
Extensive comparative experiments have been conducted on the widely used CEC 2017 benchmark problem set with three dimension sizes to demonstrate the effectiveness of the proposed DELPSO. By comparing with nine state-of-the-art and representative PSO variants, experimental results have demonstrated that DELPSO achieves much better performance than the compared peer algorithms. In particular, it was found that the proposed DELPSO shows particular superiority on complicated optimization problems, such as the multimodal problems, the hybrid problems, and the composition problems. At last, deep investigation on DELPSO has also been conducted to verify the effectiveness of the proposed DEL strategy and the devised dynamic swarm partition strategy.
In the future, we will apply the proposed DELPSO to solve other optimization problems, such as constrained optimization problems, multimodal optimization problems, and real-world engineering optimization problems.

Author Contributions

Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. X.G.: Implementation, formal analysis, and writing—original draft preparation. X.-D.G.: Methodology, and writing—review and editing. D.-D.X.: Methodology, and writing—review and editing. Z.-Y.L.: Writing—review and editing, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62006124 and U20B2061, in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006, and in part by the Startup Foundation for Introducing Talent of NUIST.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
2. Zhan, Z.H.; Shi, L.; Tan, K.C.; Zhang, J. A Survey on Evolutionary Computation for Complex Continuous Optimization. Artif. Intell. Rev. 2022, 55, 59–110.
3. Kundu, G.; Choudhury, S. A Discrete Genetic Learning Enabled PSO for Targeted Positive Influence Maximization in Consumer Review Networks. Innov. Syst. Softw. Eng. 2021, 17, 247–259.
4. Kumar, A.; Agrawal, N.; Sharma, I.; Lee, S.; Lee, H. Hilbert Transform Design Based on Fractional Derivatives and Swarm Optimization. IEEE Trans. Cybern. 2020, 50, 2311–2320.
5. Kennedy, J. Small Worlds and Mega-minds: Effects of Neighborhood Topology on Particle Swarm Performance. In Proceedings of the IEEE 1999 Congress on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999; pp. 1931–1938.
6. Yang, Q.; Hua, L.T.; Gao, X.D.; Xu, D.D.; Lu, Z.Y.; Jeon, S.-W.; Zhang, J. Stochastic Cognitive Dominance Leading Particle Swarm Optimization for Multimodal Problems. Mathematics 2022, 10, 761.
7. Li, W.; Meng, X.; Huang, Y. Differential Learning Particle Swarm Optimization with Full Dimensional Information. In Proceedings of the 2019 15th International Conference on Computational Intelligence and Security, Macao, China, 13–16 December 2019; pp. 31–35.
8. Parsopoulos, K.E.; Vrahatis, M.N. On the Computation of All Global Minimizers Through Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2004, 8, 211–224.
9. Yang, Q.; Chen, W.N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An Adaptive Stochastic Dominant Learning Swarm Optimizer for High-Dimensional Optimization. IEEE Trans. Cybern. 2020, 52, 1960–1976.
10. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295.
11. Meerza, S.I.A.; Islam, M.; Uzzal, M.M. Q-Learning Based Particle Swarm Optimization Algorithm for Optimal Path Planning of Swarm of Mobile Robots. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology, Dhaka, Bangladesh, 3–5 May 2019; pp. 1–5.
12. Panda, A.; Ghoshal, S.; Konar, A.; Banerjee, B.; Nagar, A.K. Static Learning Particle Swarm Optimization with Enhanced Exploration and Exploitation Using Adaptive Swarm Size. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 1869–1876.
13. Panda, A.; Mallipeddi, R.; Das, S. Particle Swarm Optimization with a Modified Learning Strategy and Blending Crossover. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence, Honolulu, HI, USA, 27 November–1 December 2017.
14. Srimakham, S.; Jearanaitanakij, K. Improving Particle Swarm Optimization by Using Incremental Attribute Learning and Centroid of Particle’s Best Positions. In Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, Phuket, Thailand, 27–30 June 2017.
15. Xu, G.; Zhao, X.; Wu, T.; Li, R.; Li, X. An Elitist Learning Particle Swarm Optimization with Scaling Mutation and Ring Topology. IEEE Access 2018, 6, 78453–78470.
16. Tang, Y.; Wei, B.; Xia, X.; Gui, L. Dynamic Multi-swarm Particle Swarm Optimization Based on Elite Learning. In Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence, Xiamen, China, 6–9 December 2019; pp. 2311–2318.
17. Mabaso, R.; Cleghorn, C.W. Topology-Linked Self-Adaptive Quantum Particle Swarm Optimization for Dynamic Environments. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence, Canberra, Australia, 1–4 December 2020; pp. 1565–1572.
18. Wei, L.; Fan, R.; Li, X. A Novel Multi-objective Decomposition Particle Swarm Optimization Based on Comprehensive Learning Strategy. In Proceedings of the 2017 36th Chinese Control Conference, Dalian, China, 26–28 July 2017; pp. 2761–2766.
19. Liu, S.; Lin, Q.; Li, Q.; Tan, K.C. A Comprehensive Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2021, 1–14.
20. Song, W.; Hua, Z. Multi-Exemplar Particle Swarm Optimization. IEEE Access 2020, 8, 176363–176374.
21. Chen, Z.G.; Zhan, Z.H.; Liu, D.; Kwong, S.; Zhang, J. Particle Swarm Optimization with Hybrid Ring Topology for Multimodal Optimization Problems. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics, Toronto, ON, Canada, 11–14 October 2020; pp. 2044–2049.
22. Blackwell, T.; Kennedy, J. Impact of Communication Topology in Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2019, 23, 689–702.
23. Kennedy, J.; Mendes, R. Population Structure and Particle Swarm Performance. In Proceedings of the IEEE 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; pp. 1671–1676.
24. Borowska, B. Genetic Learning Particle Swarm Optimization with Interlaced Ring Topology. In Proceedings of Computational Science, Amsterdam, The Netherlands, 3–5 June 2020; pp. 136–148.
25. Miranda, V.; Keko, H.; Jaramillo Duque, Á. Stochastic Star Communication Topology in Evolutionary Particle Swarms. Int. J. Comput. Intell. Res. 2008, 4, 105–116.
26. Liu, Q.; Wei, W.; Yuan, H.; Zhan, Z.-H.; Li, Y. Topology Selection for Particle Swarm Optimization. Inf. Sci. 2016, 363, 154–173.
27. Elsayed, S.M.; Sarker, R.A.; Essam, D.L. Memetic Multi-Topology Particle Swarm Optimizer for Constrained Optimization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–8.
28. Zeng, N.; Wang, Z.; Liu, W.; Zhang, H.; Hone, K.; Liu, X. A Dynamic Neighborhood-Based Switching Particle Swarm Optimization Algorithm. IEEE Trans. Cybern. 2020, 1–12.
29. Zhan, Z.; Zhang, J.; Li, Y.; Shi, Y. Orthogonal Learning Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2011, 15, 832–847.
30. Liang, J.J.; Zhigang, S.; Zhihui, L. Coevolutionary Comprehensive Learning Particle Swarm Optimizer. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–8.
31. Gong, Y.J.; Li, J.J.; Zhou, Y.; Li, Y.; Chung, H.S.H.; Shi, Y.H.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans. Cybern. 2016, 46, 2277–2290.
  32. Yang, Q.; Chen, W.N.; Gu, T.; Zhang, H.; Yuan, H.; Kwong, S.; Zhang, J. A Distributed Swarm Optimizer With Adaptive Communication for Large-Scale Optimization. IEEE Trans. Cybern. 2020, 50, 3393–3408. [Google Scholar] [CrossRef]
  33. Cheng, R.; Jin, Y. A Competitive Swarm Optimizer for Large Scale Optimization. IEEE Trans. Cybern. 2015, 45, 191–204. [Google Scholar] [CrossRef]
  34. Cheng, R.; Jin, Y. A Social Learning Particle Swarm Optimization Algorithm for Scalable Optimization. Inf. Sci. 2015, 291, 43–60. [Google Scholar] [CrossRef]
  35. Yang, Q.; Chen, W.; Deng, J.D.; Li, Y.; Gu, T.; Zhang, J. A Level-Based Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Evol. Comput. 2018, 22, 578–594. [Google Scholar] [CrossRef]
  36. Yang, Q.; Chen, W.N.; Gu, T.; Zhang, H.; Deng, J.D.; Li, Y.; Zhang, J. Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Cybern. 2017, 47, 2896–2910. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Song, G.W.; Yang, Q.; Gao, X.D.; Ma, Y.Y.; Lu, Z.Y.; Zhang, J. An Adaptive Level-Based Learning Swarm Optimizer for Large-Scale Optimization. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics, Melbourne, Australia, 17–20 October 2021; pp. 152–159. [Google Scholar]
  38. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Technical Report; Nanyang Technological University: Singapore, 2016; pp. 1–16. [Google Scholar]
  39. Liang, J.; Ban, X.; Yu, K.; Qu, B.; Qiao, K.; Yue, C.; Chen, K.; Tan, K.C. A Survey on Evolutionary Constrained Multi-objective Optimization. IEEE Trans. Evol. Comput. 2022, 1. [Google Scholar] [CrossRef]
  40. Chen, K.; Zhou, F.; Liu, A. Chaotic Dynamic Weight Particle Swarm Optimization for Numerical Function Optimization. Knowl.-Based Syst. 2018, 139, 23–40. [Google Scholar] [CrossRef]
  41. Nickabadi, A.; Ebadzadeh, M.M.; Safabakhsh, R. A Novel Particle Swarm Optimization Algorithm with Adaptive Inertia Weight. Appl. Soft Comput. 2011, 11, 3658–3670. [Google Scholar] [CrossRef]
  42. Liu, W.; Wang, Z.; Yuan, Y.; Zeng, N.; Hone, K.; Liu, X. A Novel Sigmoid-Function-Based Adaptive Weighted Particle Swarm Optimizer. IEEE Trans. Cybern. 2021, 51, 1085–1093. [Google Scholar] [CrossRef]
  43. Xie, H.Y.; Yang, Q.; Hu, X.M.; Chen, W.N. Cross-generation Elites Guided Particle Swarm Optimization for large scale optimization. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence, Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar]
  44. Song, A.; Chen, W.N.; Gu, T.; Zhang, H.; Zhang, J. A Constructive Particle Swarm Optimizer for Virtual Network Embedding. IEEE Trans. Netw. Sci. Eng. 2020, 7, 1406–1420. [Google Scholar] [CrossRef]
  45. Zhang, Y.H.; Lin, Y.; Gong, Y.J.; Zhang, J. Particle Swarm Optimization with Minimum Spanning Tree Topology for Multimodal Optimization. In Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; pp. 234–241. [Google Scholar]
  46. Chen, R.M.; Huang, H.T. Particle Swarm Optimization Enhancement by Applying Global Ratio Based Communication Topology. In Proceedings of the Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kitakyushu, Japan, 27–29 August 2014; pp. 443–446. [Google Scholar]
  47. Lynn, N.; Suganthan, P.N. Heterogeneous Comprehensive Learning Particle Swarm Optimization with Enhanced Exploration and Exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  48. Cao, Y.; Zhang, H.; Li, W.; Zhou, M.; Zhang, Y.; Chaovalitwongse, W.A. Comprehensive Learning Particle Swarm Optimization Algorithm With Local Search for Multimodal Functions. IEEE Trans. Evol. Comput. 2019, 23, 718–731. [Google Scholar] [CrossRef]
  49. Shi, Y.; Liu, H.; Gao, L.; Zhang, G. Cellular Particle Swarm Optimization. Inf. Sci. 2011, 181, 4460–4493. [Google Scholar] [CrossRef]
  50. Oca, M.A.M.d.; Pena, J.; Stutzle, T.; Pinciroli, C.; Dorigo, M. Heterogeneous Particle Swarm Optimizers. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 698–705. [Google Scholar]
  51. Engelbrecht, A.P. Scalability of A Heterogeneous Particle Swarm Optimizer. In Proceedings of the 2011 IEEE Symposium on Swarm Intelligence, Paris, France, 11–15 April 2011; pp. 1–8. [Google Scholar]
  52. Du, W.B.; Ying, W.; Yan, G.; Zhu, Y.B.; Cao, X.B. Heterogeneous Strategy Particle Swarm Optimization. IEEE Trans. Circuits Syst. II: Express Briefs 2017, 64, 467–471. [Google Scholar] [CrossRef] [Green Version]
  53. Gong, Y.-J.; Zhang, J. Small-World Particle Swarm Optimization with Topology Adaptation. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, Amsterdam, The Netherlands, 6–10 July 2013; pp. 25–32. [Google Scholar]
  54. Lynn, N.; Suganthan, P.N. Comprehensive Learning Particle Swarm Optimizer with Guidance Vector Selection. In Proceedings of the 2013 IEEE Symposium on Swarm Intelligence, Singapore, 16–19 April 2013; pp. 80–84. [Google Scholar]
  55. Jin, Q.; Bin, X.; Kun, W.; Xi, Y.; Xiaoxuan, H.; Yanfei, S. Comprehensive Learning Particle Swarm Optimization with Tabu Operator Based on Ripple Neighborhood for Global Optimization. In Proceedings of the 2015 11th International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness, Taipei, Taiwan, 19–20 August 2015; pp. 280–286. [Google Scholar]
  56. Yang, Q.; Chen, W.N.; Yu, Z.; Gu, T.; Li, Y.; Zhang, H.; Zhang, J. Adaptive Multimodal Continuous Ant Colony Optimization. IEEE Trans. Evol. Comput. 2017, 21, 191–205. [Google Scholar] [CrossRef] [Green Version]
  57. Socha, K.; Dorigo, M. Ant Colony Optimization for Continuous Domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef] [Green Version]
  58. Erridge, P. The Pareto Principle. Br. Dent. J. 2006, 201, 419. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Xia, X.; Gui, L.; He, G.; Wei, B.; Zhang, Y.; Yu, F.; Wu, H.; Zhan, Z.-H. An Expanded Particle Swarm Optimization Based on Multi-Exemplar and Forgetting Ability. Inf. Sci. 2020, 508, 105–120. [Google Scholar] [CrossRef]
  60. Kahraman, H.T.; Aras, S.; Gedikli, E. Fitness-Distance Balance (FDB): A New Selection Method for Meta-Heuristic Search Algorithms. Knowl.-Based Syst. 2020, 190, 105169. [Google Scholar] [CrossRef]
  61. Zhang, X.; Liu, H.; Zhang, T.; Wang, Q.; Wang, Y.; Tu, L. Terminal Crossover and Steering-based Particle Swarm Optimization Algorithm with Disturbance. Appl. Soft Comput. 2019, 85, 105841. [Google Scholar] [CrossRef]
  62. Shen, Y.; Wei, L.; Zeng, C.; Chen, J. Particle Swarm Optimization with Double Learning Patterns. Comput. Intell. Neurosci. 2016, 2016, 6510303. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Kommadath, R.; Kotecha, P. Teaching Learning Based Optimization with Focused Learning and Its Performance on CEC2017 Functions. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 2397–2403. [Google Scholar]
Figure 1. Convergence behavior comparison between DELPSO and the nine compared algorithms on the twenty 50-D CEC 2017 benchmark problems.
Table 1. Parameters settings of all algorithms.
AlgorithmsDParameter Settings
DELPSO30NP = 200α = 0.4 egs = 0.8 * NP~0.2 * NP
50NP = 180
100NP = 180
XPSO30NP = 200η = 0.2 p = 0.5 Stagemax = 5
50NP = 200
100NP = 150
DNSPSO30NP = 50w = 0.9~0.4 k = 5 F = 0.5 CR = 0.9
50NP = 50
100NP = 50
TCSPSO30NP = 150w = 0.9~0.4 c1 = c2 = 2
50NP = 150
100NP = 80
CLPSO-LS30NP = 50c = 1.4945 w = 0.9~0.4 β = 1/3 θ = 0.94 Pc = 0.05~0.5
50NP = 50
100NP = 50
AWPSO30NP = 150w = 0.9~0.4 b = 0.5 c = 0 d = 1.5
50NP = 200
100NP = 200
DPLPSO30NP = 40w = 0.9~0.3 c1s = c2s = c1m = c2m = 2.0 L = 50
50NP = 40
100NP = 40
HCLPSO30NP = 200w = 0.99~0.2 c1 = 2.5~0.5 c2 = 0.5~2.5 c = 3~1.5
50NP = 180
100NP = 160
SCDLPSO30NP = 100w = 0.9~0.4 β = 0.5
50NP = 100
100NP = 150
TLBO-FL30NP = 100-
50NP = 100
100NP = 100
Table 2. Comparison results between DELPSO and nine compared state-of-the-art algorithms on the 30-D CEC 2017 benchmark functions. The symbols “+”, “−”, and “=” behind the p-values imply that DELPSO is significantly superior to, significantly inferior to, and equivalent to the compared algorithms on the relevant problems.
FCategoryQualityDELPSOXPSODNSPSOTCSPSOCLPSO-LSAWPSODPLPSOHCLPSOSCDLPSOTLBO-FL
F1Unimodal
Functions
median1.76 × 1032.57 × 1031.24 × 1051.70 × 1031.63 × 1047.64 × 1093.19 × 1095.49 × 1032.28 × 1032.81 × 103
mean2.19 × 1033.57 × 1031.58 × 1053.34 × 1032.01 × 1048.47 × 1093.22 × 1099.72 × 1033.11 × 1033.72 × 103
std2.31 × 1033.37 × 1031.33 × 1053.81 × 1031.08 × 1044.61 × 1091.19 × 1097.90 × 1033.15 × 1033.23 × 103
p-value-7.25 × 10−2 =5.29 × 10−8 +1.68 × 10−1 =3.66 × 10−12 +4.55 × 10−14 +4.82 × 10−21 +7.26 × 10−6 +2.09 × 10−1 =4.18 × 10−2 +
F3median3.22 × 1024.32 × 1011.57 × 1052.10 × 1047.68 × 10−112.02 × 1043.55 × 1045.86 × 1012.66 × 1023.00 × 103
mean4.16 × 1026.64 × 1011.54 × 1052.15 × 1041.11 × 1042.03 × 1043.70 × 1041.21 × 1024.82 × 1023.10 × 103
std3.60 × 1025.70 × 1012.97 × 1046.19 × 1032.26 × 1041.53 × 1046.91 × 1031.47 × 1025.86 × 1021.09 × 103
p-value-3.09 × 10−6 −1.06 × 10−35 +9.32 × 10−26 +1.35 × 10−2 +2.92 × 10−9 +8.25 × 10−36 +1.36 × 10−4 −2.09 × 10−1 =2.65 × 10−18 +
F1,3w/t/l-0/1/12/0/01/1/02/0/02/0/02/0/01/0/10/2/02/0/0
F4Simple
Multimodal
Functions
median8.20 × 1011.23 × 1022.55 × 1011.26 × 1028.89 × 1015.35 × 1028.17 × 1028.56 × 1018.33 × 1018.58 × 101
mean7.60 × 1011.22 × 1022.55 × 1011.26 × 1028.93 × 1018.27 × 1028.51 × 1028.66 × 1017.66 × 1018.88 × 101
std9.90 × 1001.23 × 1019.55 × 10−14.70 × 1011.88 × 1005.36 × 1023.89 × 1028.64 × 1001.11 × 1012.01 × 101
p-value-1.11 × 10−22 +1.28 × 10−35 −4.97 × 10−7 +2.21 × 10−9 +3.54 × 10−10 +2.13 × 10−15 +6.23 × 10−5 +8.53 × 10−1 =3.35 × 10−3 +
F5median8.95 × 1004.03 × 1011.97 × 1027.32 × 1012.20 × 1021.44 × 1022.05 × 1026.31 × 1014.97 × 1003.52 × 101
mean3.42 × 1014.15 × 1011.97 × 1027.31 × 1012.19 × 1021.40 × 1022.02 × 1026.72 × 1015.14 × 1003.74 × 101
std5.08 × 1011.20 × 1011.07 × 1011.97 × 1019.81 × 1002.75 × 1012.82 × 1011.63 × 1011.84 × 1001.80 × 101
p-value-4.53 × 10−1 =9.80 × 10−25 +2.96 × 10−4 +7.66 × 10−27 +4.28 × 10−14 +2.10 × 10−22 +1.48 × 10−3 +3.17 × 10−3 −7.45 × 10−1 =
F6median1.14 × 10−132.63 × 10−31.76 × 10−11.88 × 10−23.66 × 10−11.48 × 1013.02 × 1013.36 × 10−41.11 × 10−63.75 × 10−1
mean1.10 × 10−131.04 × 10−21.83 × 10−11.47 × 10−14.53 × 10−11.70 × 1013.06 × 1011.11 × 10−38.31 × 10−64.65 × 10−1
std2.04 × 10−142.95 × 10−26.57 × 10−24.12 × 10−15.13 × 10−17.60 × 1004.93 × 1001.60 × 10−31.41 × 10−54.07 × 10−1
p-value-6.19 × 10−2 =9.16 × 10−22 +5.90 × 10−2 =1.33 × 10−5 +2.14 × 10−17 +1.27 × 10−39 +4.11 × 10−4 +2.42 × 10−3 +7.76 × 10−8 +
F7median1.69 × 1027.73 × 1012.39 × 1021.14 × 1022.38 × 1021.44 × 1022.88 × 1029.32 × 1013.48 × 1011.37 × 102
mean1.65 × 1027.99 × 1012.36 × 1021.18 × 1022.41 × 1021.77 × 1022.92 × 1029.38 × 1013.94 × 1011.34 × 102
std2.65 × 1011.96 × 1011.13 × 1012.65 × 1011.88 × 1018.15 × 1012.82 × 1011.92 × 1012.05 × 1014.75 × 101
p-value-4.55 × 10−20 −1.10 × 10−19 +6.99 × 10−9 −2.56 × 10−18 +4.34 × 10−1 =6.16 × 10−25 +6.83 × 10−17 −7.19 × 10−28 −3.02 × 10−3 −
F8median8.46 × 1004.03 × 1012.02 × 1027.51 × 1012.15 × 1021.39 × 1021.91 × 1026.47 × 1013.98 × 1003.04 × 101
mean1.72 × 1014.29 × 1012.00 × 1028.04 × 1012.15 × 1021.39 × 1021.91 × 1026.49 × 1014.58 × 1003.21 × 101
std3.08 × 1011.29 × 1011.06 × 1012.46 × 1011.20 × 1013.06 × 1012.20 × 1011.84 × 1011.33 × 1008.44 × 100
p-value-1.12 × 10−4 +1.01 × 10−37 +3.06 × 10−12 +8.85 × 10−39 +9.49 × 10−22 +1.50 × 10−32 +1.61 × 10−9 +3.12 × 10−2 −1.46 × 10−2 +
F9median0.00 × 1009.98 × 10−11.96 × 1001.31 × 1021.23 × 1012.20 × 1031.34 × 1035.00 × 1011.14 × 10−133.29 × 101
mean0.00 × 1001.15 × 1002.57 × 1001.66 × 1022.19 × 1012.54 × 1031.42 × 1038.07 × 1012.11 × 10−23.87 × 101
std0.00 × 1001.00 × 1002.31 × 1001.73 × 1023.23 × 1011.40 × 1033.89 × 1021.41 × 1028.34 × 10−22.94 × 101
p-value-6.67 × 10−8 +2.59 × 10−7 +3.00 × 10−6 +5.84 × 10−4 +7.69 × 10−14 +2.29 × 10−27 +3.11 × 10−3 +1.78 × 10−1 =2.16 × 10−9 +
F10median5.68 × 1032.79 × 1035.59 × 1032.67 × 1036.20 × 1033.83 × 1036.39 × 1032.83 × 1036.32 × 1036.74 × 103
mean5.13 × 1032.71 × 1035.49 × 1032.74 × 1036.11 × 1033.98 × 1036.34 × 1032.88 × 1035.97 × 1036.71 × 103
std1.77 × 1037.34 × 1029.03 × 1025.83 × 1024.91 × 1026.95 × 1025.07 × 1025.16 × 1021.35 × 1032.65 × 102
p-value-6.27 × 10−9 −3.57 × 10−1 =4.01 × 10−9 −5.75 × 10−3 +1.90 × 10−3 −7.85 × 10−4 +1.45 × 10−8 −4.66 × 10−2 +1.30 × 10−5 +
F4–10w/t/l-3/2/25/1/14/1/27/0/05/1/17/0/05/0/22/2/35/1/1
F11Hybrid
Functions
median1.41 × 1018.70 × 1018.91 × 1011.12 × 1021.75 × 1025.96 × 1023.89 × 1026.80 × 1011.28 × 1017.90 × 101
mean2.22 × 1018.04 × 1018.89 × 1011.15 × 1021.78 × 1027.06 × 1024.24 × 1028.31 × 1013.41 × 1018.37 × 101
std2.21 × 1013.18 × 10+9.01 × 1005.90 × 1014.71 × 1013.55 × 1021.16 × 1024.33 × 1012.95 × 1014.43 × 101
p-value-4.18 × 10−11 +2.89 × 10−22 +8.29 × 10−11 +4.75 × 10−23 +8.32 × 10−15 +9.20 × 10−26 +7.50 × 10−9 +8.52 × 10−2 =9.71 × 10−9 +
F12median2.18 × 1042.40 × 1046.10 × 1073.87 × 1054.35 × 1052.90 × 1081.86 × 1081.86 × 1052.68 × 1043.55 × 104
mean2.70 × 1041.39 × 1056.69 × 1075.47 × 1058.38 × 1054.34 × 1081.81 × 1082.81 × 1052.72 × 1044.95 × 104
std1.29 × 1043.86 × 1052.84 × 1076.21 × 1057.93 × 1055.07 × 1089.20 × 1072.59 × 1051.47 × 1047.14 × 104
p-value-1.22 × 10−1 =2.38 × 10−18 +3.23 × 10−5 +8.85 × 10−7 +2.36 × 10−5 +3.64 × 10−15 +2.14 × 10−6 +9.74 × 10−1 =1.01 × 10−1 =
F13median1.06 × 1047.43 × 1031.45 × 1065.98 × 1036.55 × 1034.59 × 1064.97 × 1064.02 × 1048.46 × 1031.61 × 104
mean1.12 × 1041.52 × 1041.64 × 1061.88 × 1041.38 × 1042.31 × 1071.19 × 1073.36 × 1041.80 × 1042.13 × 104
std5.69 × 1031.45 × 1046.42 × 1053.42 × 1041.85 × 1043.20 × 1072.24 × 1072.49 × 1041.79 × 1041.96 × 104
p-value-1.67 × 10−1 =7.97 × 10−20 +2.39 × 10−1 =4.67 × 10−1 =2.70 × 10−4 +5.80 × 10−3 +1.42 × 10−5 +5.38 × 10−2 =1.00 × 10−2 +
F14median2.65 × 1032.97 × 1031.70 × 1022.34 × 1049.37 × 1043.61 × 1041.19 × 1051.78 × 1041.92 × 1036.75 × 103
mean3.25 × 1034.79 × 1031.76 × 1024.22 × 1049.42 × 1047.51 × 1041.15 × 1052.11 × 1043.19 × 1037.81 × 103
std2.43 × 1035.22 × 1033.03 × 1015.53 × 1046.01 × 1048.59 × 1049.80 × 1042.01 × 1043.20 × 1036.54 × 103
p-value-1.57 × 10−1 =5.81 × 10−93.69 × 10−4 +3.66 × 10−11 +3.28 × 10−5 +7.20 × 10−8 +1.44 × 10−5 +9.36 × 10−1 =8.50 × 10−4 +
F15median3.18 × 1025.82 × 1033.74 × 1048.01 × 1032.94 × 1049.92 × 1041.42 × 1041.13 × 1046.60 × 1021.73 × 104
mean1.03 × 1036.93 × 1034.17 × 1049.89 × 1033.15 × 1041.28 × 1052.78 × 1041.51 × 1041.70 × 1032.56 × 104
std1.50 × 1036.25 × 1031.70 × 1048.86 × 1039.88 × 1039.79 × 1046.34 × 1041.26 × 1042.38 × 1032.67 × 104
p-value-7.02 × 10−6 +1.37 × 10−18 +1.81 × 10−6 +2.01 × 10−23 +3.34 × 10−9 +2.69 × 10−2 +1.73 × 10−7 +2.05 × 10−1 =7.06 × 10−6 +
F16median1.36 × 1025.86 × 1021.88 × 1039.22 × 1021.28 × 1031.37 × 1031.57 × 1036.92 × 1022.21 × 1014.27 × 102
mean1.96 × 1025.70 × 1021.84 × 1039.00 × 1021.10 × 1031.32 × 1031.61 × 1036.74 × 1029.21 × 1015.50 × 102
std2.30 × 1021.44 × 1022.22 × 1023.06 × 1024.51 × 1023.54 × 1022.80 × 1022.64 × 1021.21 × 1023.92 × 102
p-value-5.27 × 10−10 +3.89 × 10−35 +4.35 × 10−14 +1.32 × 10−13 +1.11 × 10−20 +7.90 × 10−29 +7.17 × 10−10 +3.61 × 10−2 −9.22 × 10−5 +
F17median3.46 × 1011.45 × 1028.28 × 1023.18 × 1027.31 × 1025.99 × 1024.96 × 1022.98 × 1025.11 × 1011.03 × 102
mean4.51 × 1011.37 × 1028.28 × 1023.30 × 1021.02 × 1035.64 × 1024.81 × 1022.97 × 1025.80 × 1011.33 × 102
std3.76 × 1017.78 × 1011.59 × 1021.60 × 1027.09 × 1022.19 × 1021.53 × 1021.54 × 1022.44 × 1016.50 × 101
p-value-3.97 × 10−7 +5.09 × 10−29 +3.79 × 10−13 +6.29 × 10−10 +2.87 × 10−18 +1.97 × 10−21 +6.63 × 10−12 +1.28 × 10−1 =4.83 × 10−8 +
F18median9.53 × 1041.15 × 1052.04 × 1052.08 × 1056.65 × 1056.21 × 1055.81 × 1051.58 × 1058.54 × 1043.52 × 105
mean1.03 × 1051.37 × 1052.08 × 1053.19 × 1052.18 × 1061.63 × 1061.04 × 1062.01 × 1051.16 × 1053.86 × 105
std5.13 × 1048.19 × 1048.98 × 1042.64 × 1052.84 × 1062.81 × 1061.18 × 1061.24 × 1059.74 × 1041.81 × 105
p-value-6.40 × 10−2 =9.45 × 10−7 +5.93 × 10−5 +2.24 × 10−4 +4.80 × 10−3 +7.08 × 10−5 +2.41 × 10−4 +5.21 × 10−1 =4.28 × 10−11 +
F19median2.74 × 1032.32 × 1032.06 × 1039.93 × 1031.57 × 1041.16 × 1061.52 × 1041.07 × 1041.72 × 1036.75 × 103
mean3.34 × 1033.67 × 1032.44 × 1031.31 × 1042.49 × 1062.35 × 1072.95 × 1041.77 × 1043.58 × 1031.18 × 104
std2.49 × 1033.04 × 1031.28 × 1031.25 × 1041.33 × 1074.77 × 1075.66 × 1041.99 × 1043.83 × 1031.30 × 104
p-value-6.59 × 10−1 =8.63 × 10−2 =5.33 × 10−5 +3.17 × 10−1 =1.02 × 10−2 +1.57 × 10−2 +2.97 × 10−4 +7.75 × 10−1 =1.01 × 10−3 +
F20median1.42 × 1021.74 × 1023.64 × 1023.31 × 1025.75 × 1024.28 × 1023.75 × 1021.78 × 1024.23 × 1012.16 × 102
mean1.16 × 1021.86 × 1023.95 × 1023.44 × 1025.18 × 1024.37 × 1023.95 × 1021.89 × 1026.38 × 1012.27 × 102
std4.69 × 1015.72 × 1011.84 × 1021.18 × 1021.90 × 1021.80 × 1021.24 × 1021.18 × 1027.20 × 1011.11 × 102
p-value-4.98 × 10−6 +8.70 × 10−11 +1.23 × 10−13 +6.52 × 10−16 +5.01 × 10−13 +2.51 × 10−16 +3.21 × 10−3 +1.64 × 10−3 −6.19 × 10−6 +
F11–20w/t/l-5/5/08/1/19/1/08/2/010/0/010/0/010/0/00/8/29/1/0
F21Composition
Functions
median2.09 × 1022.42 × 1023.99 × 1022.77 × 1024.06 × 1023.44 × 1024.09 × 1022.73 × 1022.09 × 1022.34 × 102
mean2.24 × 1022.43 × 1023.99 × 1022.83 × 1024.04 × 1023.53 × 1024.05 × 1022.72 × 1022.09 × 1022.34 × 102
std3.93 × 1011.23 × 1011.11 × 1012.02 × 1019.77 × 1003.66 × 1012.10 × 1011.53 × 1012.82 × 1001.06 × 101
p-value-1.63 × 10−2 +8.00 × 10−31 +1.96 × 10−9 +1.17 × 10−31 +1.20 × 10−18 +1.22 × 10−29 +8.06 × 10−8 +4.85 × 10−2 −1.81 × 10−1 =
F22median1.00 × 1021.00 × 1026.45 × 1031.00 × 1027.00 × 1033.90 × 1035.41 × 1021.00 × 1021.00 × 1021.00 × 102
mean1.00 × 1022.40 × 1026.28 × 1031.04 × 1036.98 × 1033.36 × 1035.63 × 1021.88 × 1022.73 × 1021.01 × 102
std0.00 × 1005.29 × 1026.67 × 1021.46 × 1033.01 × 1021.28 × 1031.30 × 1024.72 × 1029.33 × 1021.47 × 100
p-value-1.61 × 10−1 =2.30 × 10−49 +9.84 × 10−4 +7.93 × 10−72 +7.66 × 10−20 +8.89 × 10−27 +3.17 × 10−1 =3.21 × 10−1 =2.82 × 10−3 +
F23median3.63 × 1023.91 × 1025.74 × 1024.41 × 1025.56 × 1025.60 × 1026.77 × 1024.49 × 1023.92 × 1023.92 × 102
mean3.63 × 1023.93 × 1025.86 × 1024.43 × 1025.54 × 1025.63 × 1026.85 × 1024.51 × 1023.91 × 1023.96 × 102
std4.89 × 1001.34 × 1014.21 × 1013.10 × 1011.14 × 1014.09 × 1013.69 × 1011.97 × 1019.71 × 1001.51 × 101
p-value-7.10 × 10−16 +1.27 × 10−35 +9.05 × 10−20 +6.78 × 10−62 +8.34 × 10−34 +1.19 × 10−47 +5.27 × 10−31 +8.69 × 10−20 +1.08 × 10−15 +
F24median4.32 × 1024.56 × 1026.61 × 1025.04 × 1026.21 × 1026.48 × 1027.31 × 1025.38 × 1024.66 × 1024.64 × 102
mean4.32 × 1024.58 × 1026.73 × 1025.19 × 1026.22 × 1026.52 × 1027.35 × 1025.41 × 1024.70 × 1024.68 × 102
std3.12 × 1001.23 × 1014.38 × 1014.73 × 1011.14 × 1013.91 × 1014.60 × 1012.46 × 1011.60 × 1011.57 × 101
p-value-8.36 × 10−16 +1.17 × 10−36 +5.08 × 10−14 +5.57 × 10−63 +4.29 × 10−37 +6.94 × 10−41 +2.05 × 10−31 +4.49 × 10−18 +4.17 × 10−17 +
F25median3.88 × 1023.91 × 1023.79 × 1024.16 × 1023.88 × 1025.41 × 1025.86 × 1023.89 × 1023.88 × 1023.95 × 102
mean3.88 × 1023.93 × 1023.78 × 1024.13 × 1023.88 × 1026.37 × 1026.06 × 1023.89 × 1023.88 × 1024.03 × 102
std3.98 × 10−16.41 × 1006.16 × 10−12.16 × 1011.28 × 10−11.95 × 1026.43 × 1016.58 × 1005.00 × 10−11.62 × 101
p-value-2.90 × 10−5 +2.98 × 10−51 −2.62 × 10−8 +NaN =4.17 × 10−9 +1.07 × 10−25 +1.29 × 10−1 =9.79 × 10−1 =3.98 × 10−6 +
F26median1.09 × 1033.00 × 1023.20 × 1032.05 × 1033.18 × 1033.27 × 1031.62 × 1031.93 × 1031.33 × 1031.48 × 103
mean1.07 × 1035.95 × 1023.27 × 1032.02 × 1033.16 × 1033.23 × 1032.21 × 1031.77 × 1031.33 × 1031.39 × 103
std1.60 × 1025.28 × 1023.88 × 1024.75 × 1021.19 × 1026.87 × 1021.16 × 1036.07 × 1021.15 × 1024.67 × 102
p-value-2.06 × 10−5 −1.57 × 10−35 +1.40 × 10−14 +1.93 × 10−52 +1.45 × 10−23 +2.53 × 10−6 +1.35 × 10−7 +1.45 × 10−9 +9.01 × 10−4 +
F27median5.10 × 1025.36 × 1025.00 × 1025.63 × 1025.12 × 1025.85 × 1027.96 × 1025.13 × 1025.14 × 1025.33 × 102
mean5.12 × 1025.37 × 1025.00 × 1025.64 × 1025.18 × 1026.02 × 1028.01 × 1025.15 × 1025.15 × 1025.34 × 102
std8.45 × 1001.68 × 1017.67 × 10−51.99 × 1011.48 × 1015.52 × 1014.09 × 1011.40 × 1019.77 × 1002.08 × 101
p-value-1.65 × 10−9 +1.90 × 10−10 -1.13 × 10−18 +5.10 × 10−2 =5.05 × 10−12 +3.46 × 10−42 +3.03 × 10−1 =1.82 × 10−1 =2.10 × 10−6 +
F28median3.17 × 1024.04 × 1025.00 × 1024.28 × 1023.52 × 1039.04 × 1028.91 × 1024.45 × 1024.08 × 1024.24 × 102
mean3.44 × 1023.84 × 1025.00 × 1024.49 × 1023.00 × 1031.24 × 1039.21 × 1024.49 × 1024.15 × 1024.30 × 102
std4.93 × 1016.17 × 1017.50 × 10−55.53 × 1019.61 × 1027.22 × 1021.46 × 1023.77 × 1013.61 × 1012.52 × 101
p-value-8.41 × 10−3 +3.09 × 10−24 +2.81 × 10−10 +1.99 × 10−21 +1.06 × 10−8 +6.68 × 10−28 +1.03 × 10−12 +5.79 × 10−8 +1.47 × 10−11 +
F29median4.34 × 1025.39 × 1021.72 × 1038.34 × 1029.84 × 1021.10 × 1031.22 × 1036.50 × 1024.80 × 1025.88 × 102
mean4.46 × 1025.50 × 1021.67 × 1038.19 × 1021.08 × 1031.11 × 1031.28 × 1037.08 × 1024.80 × 1026.19 × 102
std4.30 × 1017.92 × 1012.74 × 1021.72 × 1025.52 × 1022.96 × 1022.19 × 1022.01 × 1022.24 × 1019.51 × 101
p-value-6.02 × 10−8 +1.75 × 10−31 +2.36 × 10−16 +6.78 × 10−8 +2.72 × 10−17 +7.08 × 10−28 +4.98 × 10−9 +3.84 × 10−4 +1.87 × 10−12 +
F30median3.33 × 1037.33 × 1035.18 × 1041.15 × 1041.38 × 1041.39 × 1061.86 × 1066.27 × 1033.90 × 1031.57 × 104
mean3.63 × 1038.63 × 1035.96 × 1041.55 × 1041.38 × 1042.79 × 1063.35 × 1068.22 × 1035.05 × 1032.93 × 104
std9.10 × 1024.01 × 1033.90 × 1041.30 × 1041.36 × 1033.17 × 1063.00 × 1065.02 × 1032.81 × 1033.45 × 104
p-value-1.71 × 10−8 +1.79 × 10−10 +7.93 × 10−6 +1.35 × 10−39 +1.45 × 10−5 +1.28 × 10−7 +9.64 × 10−6 +1.18 × 10−2 +1.77 × 10−4 +
F21–30w/t/l-8/1/18/0/210/0/08/2/010/0/010/0/07/3/06/3/19/1/0
w/t/l-16/9/423/2/424/3/225/4/027/1/129/0/023/3/38/15/625/3/1
Rank1.793.486.765.937.598.528.834.722.484.90
Table 3. Comparison results between DELPSO and nine compared state-of-the-art algorithms on the 50-D CEC 2017 benchmark functions. The symbols “+”, “−”, and “=” behind the p-values imply that DELPSO is significantly superior to, significantly inferior to, and equivalent to the compared algorithms on the relevant problems.
FCategoryQualityDELPSOXPSODNSPSOTCSPSOCLPSO-LSAWPSODPLPSOHCLPSOSCDLPSOTLBO-FL
F1Unimodal
Functions
median1.46 × 1031.99 × 1031.72 × 1033.79 × 1034.97 × 1073.84 × 10101.86 × 10109.83 × 1031.01 × 1031.77 × 104
mean2.86 × 1033.17 × 1035.68 × 1036.30 × 1033.73 × 1083.81 × 10101.82 × 10107.17 × 1072.32 × 1036.09 × 105
std3.17 × 1033.47 × 1037.60 × 1036.91 × 1031.34 × 1097.86 × 1094.26 × 1092.73 × 1082.71 × 1032.49 × 106
p-value-2.68 × 10−2 +5.88 × 10−3 +1.65 × 10−1 =1.40 × 10−1 =1.01 × 10−33 +8.00 × 10−31 +1.63 × 10−1 =1.93 × 10−1 =1.95 × 10−1 =
F3median6.00 × 1036.14 × 1033.80 × 1057.83 × 1045.36 × 10−109.79 × 1041.16 × 1054.71 × 1031.38 × 1042.43 × 104
mean6.46 × 1036.56 × 1033.79 × 1057.79 × 1041.52 × 1049.87 × 1041.16 × 1055.11 × 1031.44 × 1042.53 × 104
std1.98 × 1032.57 × 1033.68 × 1041.21 × 1044.57 × 1042.62 × 1041.82 × 1042.40 × 1033.94 × 1034.79 × 103
p-value-2.41 × 10−1 =1.65 × 10−51 +1.50 × 10−39 +2.75 × 10−1 =6.46 × 10−28 +8.84 × 10−39 +1.92 × 10−1 =4.79 × 10−15 +4.55 × 10−28 +
F1,3w/t/l-1/1/02/0/01/1/00/2/02/0/02/0/00/2/01/1/01/1/0
F4Simple
Multimodal
Functions
median2.85 × 1012.14 × 1024.56 × 1012.90 × 1022.40 × 1022.97 × 1033.55 × 1031.61 × 1021.33 × 1022.18 × 102
mean5.65 × 1012.10 × 1025.25 × 1012.86 × 1022.39 × 1024.33 × 1033.55 × 1031.48 × 1021.23 × 1022.00 × 102
std3.96 × 1015.38 × 1011.80 × 1017.76 × 1019.77 × 1002.53 × 1038.41 × 1025.23 × 1015.20 × 1014.52 × 101
p-value-4.04 × 10−16 +3.09 × 10−1 =7.71 × 10−18 +8.41 × 10−28 +1.09 × 10−12 +4.03 × 10−30 +1.28 × 10−8 +1.70 × 10−5 +1.76 × 10−16 +
F5median1.94 × 1019.25 × 1014.10 × 1021.87 × 1024.47 × 1023.11 × 1024.61 × 1021.65 × 1021.04 × 1019.65 × 101
mean2.65 × 1019.59 × 1014.06 × 1021.88 × 1024.47 × 1023.15 × 1024.55 × 1021.66 × 1021.08 × 1019.52 × 101
std3.59 × 1012.36 × 1012.60 × 1014.08 × 1011.31 × 1014.96 × 1013.04 × 1013.23 × 1013.61 × 1001.60 × 101
p-value-5.62 × 10−9 +4.60 × 10−43 +1.99 × 10−18 +3.79 × 10−48 +4.64 × 10−29 +5.35 × 10−45 +6.78 × 10−19 +1.97 × 10−2 −1.39 × 10−9 +
F6median1.14 × 10−136.53 × 10−21.03 × 10−12.04 × 1006.61 × 1003.65 × 1015.06 × 1011.85 × 10−35.14 × 10−44.43 × 100
mean3.52 × 10−81.14 × 10−11.14 × 10−12.82 × 1006.96 × 1003.79 × 1015.02 × 1012.65 × 10−36.37 × 10−44.51 × 100
std1.15 × 10−71.42 × 10−14.30 × 10−22.67 × 1001.66 × 1007.15 × 1004.23 × 1002.37 × 10−35.24 × 10−41.76 × 100
p-value-6.06 × 10−5 +1.22 × 10−20 +6.40 × 10−8 +2.18 × 10−30 +8.42 × 10−36 +1.70 × 10−55 +1.24 × 10−7 +1.64 × 10−8 +5.22 × 10−20 +
F7median3.40 × 1021.41 × 1024.67 × 1022.98 × 1025.06 × 1026.87 × 1027.56 × 1021.94 × 1026.18 × 1011.75 × 102
mean3.26 × 1021.47 × 1024.66 × 1022.91 × 1025.07 × 1026.94 × 1027.61 × 1022.02 × 1026.22 × 1011.77 × 102
std5.19 × 1012.36 × 1011.69 × 1014.53 × 1013.91 × 1011.95 × 1026.56 × 1013.06 × 1012.31 × 1004.21 × 101
p-value-6.62 × 10−11 −1.03 × 10−13 +7.17 × 10−1 =3.56 × 10−16 +6.88 × 10−21 +1.16 × 10−29 +9.74 × 10−6 −8.03 × 10−19 −1.77 × 10−7 −
F8median1.89 × 1017.71 × 1014.05 × 1021.76 × 1024.40 × 1023.47 × 1024.35 × 1021.56 × 1029.45 × 1008.66 × 101
mean5.54 × 1018.07 × 1014.11 × 1021.80 × 1024.40 × 1023.44 × 1024.33 × 1021.56 × 1021.01 × 1019.28 × 101
std8.62 × 1011.93 × 1011.83 × 1014.21 × 1011.81 × 1015.59 × 1012.37 × 1012.67 × 1013.34 × 1001.51 × 101
p-value-2.37 × 10−23 +3.55 × 10−96 +7.62 × 10−23 +1.14 × 10−71 +5.42 × 10−35 +1.02 × 10−64 +1.43 × 10−34 +6.55 × 10−17 −1.48 × 10−32 +
F9median1.14 × 10−131.13 × 1011.14 × 1011.72 × 1039.87 × 1029.41 × 1031.15 × 1041.25 × 1035.89 × 10−17.96 × 102
mean2.11 × 10−21.96 × 1011.69 × 1011.98 × 1031.05 × 1039.96 × 1031.21 × 1041.26 × 1031.57 × 1001.28 × 103
std8.35 × 10−23.02 × 1012.48 × 1011.09 × 1033.18 × 1024.02 × 1032.22 × 1035.35 × 1023.67 × 1001.12 × 103
p-value-9.62 × 10−4 +5.95 × 10−4 +4.85 × 10−10 +4.03 × 10−25 +2.42 × 10−19 +1.52 × 10−36 +2.26 × 10−18 +3.80 × 10−2 +7.55 × 10−8 +
F10median1.05 × 1045.17 × 1031.13 × 1045.62 × 1031.32 × 1048.09 × 1031.24 × 1045.58 × 1031.20 × 1041.27 × 104
mean8.27 × 1035.20 × 1031.13 × 1045.61 × 1031.31 × 1047.57 × 1031.24 × 1045.57 × 1039.87 × 1031.26 × 104
std4.48 × 1037.20 × 1021.17 × 1039.05 × 1024.43 × 1021.26 × 1036.48 × 1026.11 × 1023.77 × 1034.32 × 102
p-value-7.92 × 10−2 =7.63 × 10−6 +1.43 × 10−1 =2.17 × 10−9 +1.07 × 10−7 −5.72 × 10−8 +1.75 × 10−1 =8.85 × 10−3 +1.75 × 10−8 +
F4–10w/t/l-5/1/16/1/05/2/07/0/06/0/17/0/05/1/14/0/36/0/1
F11Hybrid
Functions
median4.16 × 1011.56 × 1022.01 × 1022.37 × 1022.75 × 1021.44 × 1032.45 × 1032.08 × 1026.05 × 1011.68 × 102
mean4.20 × 1011.51 × 1022.00 × 1022.45 × 1022.71 × 1022.66 × 1032.55 × 1032.07 × 1026.61 × 1011.67 × 102
std9.17 × 1003.32 × 1011.66 × 1011.02 × 1027.06 × 1012.26 × 1037.55 × 1026.30 × 1011.85 × 1014.80 × 101
p-value-4.53 × 10−23 +1.11 × 10−45 +8.92 × 10−13 +4.31 × 10−24 +4.42 × 10−8 +2.99 × 10−25 +1.40 × 10−19 +8.60 × 10−6 +5.13 × 10−19 +
F12median2.09 × 1053.27 × 1053.05 × 1072.45 × 1062.02 × 1075.67 × 1093.43 × 1092.94 × 1062.22 × 1056.12 × 105
mean2.29 × 1056.22 × 1053.60 × 1073.89 × 1062.37 × 1076.39 × 1093.65 × 1093.61 × 1062.78 × 1058.59 × 105
std1.00 × 1058.04 × 1052.33 × 1075.41 × 1061.20 × 1072.91 × 1091.16 × 1092.48 × 1061.61 × 1059.08 × 105
p-value-1.18 × 10−2 +2.05 × 10−15 +1.92 × 10−9 +4.07 × 10−15 +4.53 × 10−17 +3.63 × 10−24 +7.90 × 10−10 +1.91 × 10−1 =4.77 × 10−4 +
F13median1.12 × 1033.06 × 1032.02 × 1067.19 × 1033.78 × 1046.36 × 1083.74 × 1082.14 × 1043.29 × 1036.37 × 103
mean2.61 × 1035.58 × 1032.59 × 1069.61 × 1034.00 × 1069.74 × 1083.68 × 1082.29 × 1045.90 × 1037.86 × 103
std4.15 × 1035.99 × 1031.78 × 1068.94 × 1031.01 × 1071.22 × 1092.55 × 1081.32 × 1046.31 × 1034.11 × 103
p-value-4.04 × 10−4 +1.22 × 10−10 +8.43 × 10−6 +3.75 × 10−2 +6.92 × 10−5 +1.48 × 10−10 +2.76 × 10−12 +3.06 × 10−4 +7.83 × 10−11 +
F14median2.35 × 1043.05 × 1047.46 × 1037.62 × 1042.69 × 1055.86 × 1051.76 × 1068.86 × 1042.21 × 1047.62 × 104
mean2.35 × 1043.70 × 1049.07 × 1031.44 × 1053.59 × 1051.63 × 1062.14 × 1061.22 × 1053.49 × 1048.69 × 104
std1.21 × 1042.59 × 1043.81 × 1032.22 × 1054.60 × 1052.71 × 1061.86 × 1061.15 × 1052.94 × 1044.88 × 104
p-value-2.99 × 10−3 +1.56 × 10−5 −4.23 × 10−3 +2.01 × 10−4 +2.00 × 10−3 +8.29 × 10−8 +1.45 × 10−5 +1.85 × 10−2 +1.99 × 10−9 +
F15median3.72 × 1033.52 × 1034.60 × 1054.35 × 1033.16 × 1047.13 × 1077.51 × 1061.81 × 1047.16 × 1036.01 × 103
mean4.91 × 1035.14 × 1035.72 × 1056.99 × 1034.42 × 1071.13 × 1083.51 × 1071.92 × 1046.48 × 1037.25 × 103
std3.23 × 1035.10 × 1033.08 × 1056.66 × 1031.25 × 1082.03 × 1081.03 × 1081.02 × 1044.35 × 1036.70 × 103
p-value-9.66 × 10−1 =4.27 × 10−14 +6.12 × 10−2 =6.24 × 10−2 =3.99 × 10−3 +7.06 × 10−2 =2.69 × 10−9 +2.02 × 10−1 =1.37 × 10−1 =
F16median3.72 × 1028.50 × 1023.86 × 1031.66 × 1033.13 × 1032.54 × 1033.21 × 1031.51 × 1034.14 × 1028.76 × 102
mean4.11 × 1028.53 × 1023.82 × 1031.61 × 1033.10 × 1032.47 × 1033.14 × 1031.53 × 1034.29 × 1028.80 × 102
std1.60 × 1023.19 × 1022.12 × 1024.35 × 1022.31 × 1025.37 × 1024.56 × 1023.56 × 1022.14 × 1023.38 × 102
p-value-8.28 × 10−7 +3.90 × 10−55 +1.13 × 10−19 +5.31 × 10−48 +1.08 × 10−28 +2.90 × 10−36 +2.66 × 10−20 +4.25 × 10−1 +5.10 × 10−7 +
F17median1.99 × 1028.41 × 1022.41 × 1031.22 × 1032.03 × 1032.21 × 1031.96 × 1031.30 × 1032.48 × 1027.17 × 102
mean3.23 × 1028.60 × 1022.36 × 1031.19 × 1032.20 × 1032.30 × 1031.97 × 1031.20 × 1033.70 × 1027.61 × 102
std3.03 × 1022.55 × 1022.22 × 1022.66 × 1029.10 × 1024.58 × 1022.68 × 1023.24 × 1022.98 × 1023.56 × 102
p-value-1.11 × 10−10 +4.90 × 10−38 +8.36 × 10−13 +2.54 × 10−15 +5.66 × 10−34 +4.02 × 10−31 +2.88 × 10−16 +4.97 × 10−1 =1.74 × 10−6 +
F18median6.00 × 1042.29 × 1053.65 × 1061.93 × 1064.85 × 1063.10 × 1065.53 × 1063.18 × 1059.30 × 1049.73 × 105
mean6.39 × 1043.58 × 1053.93 × 1062.99 × 1067.57 × 1065.65 × 1066.67 × 1064.33 × 1051.30 × 1051.11 × 106
std2.75 × 1044.64 × 1051.51 × 1062.73 × 1066.29 × 1066.53 × 1064.64 × 1063.39 × 1058.55 × 1045.29 × 105
p-value-1.04 × 10−3 +5.46 × 10−20 +5.90 × 10−8 +2.71 × 10−8 +2.54 × 10−5 +2.15 × 10−10 +1.92 × 10−7 +7.63 × 10−5 +2.91 × 10−15 +
F19median1.30 × 1041.51 × 1042.38 × 1048.95 × 1032.52 × 1031.08 × 1071.16 × 1061.03 × 1041.43 × 1041.10 × 104
mean1.29 × 1041.43 × 1042.77 × 1041.35 × 1044.84 × 1052.63 × 1072.40 × 1061.48 × 1041.43 × 1041.28 × 104
std4.48 × 1037.86 × 1031.29 × 1041.31 × 1041.44 × 1065.21 × 1073.75 × 1061.36 × 1047.16 × 1039.41 × 103
p-value-7.06 × 10−1 =5.81 × 10−6 +1.01 × 10−1 =8.47 × 10−2 =8.69 × 10−3 +1.12 × 10−3 +9.50 × 10−1 =6.95 × 10−1 =2.90 × 10−1 =
F20median4.11 × 1014.58 × 1021.56 × 1038.76 × 1021.70 × 1031.19 × 1031.57 × 1039.47 × 1028.70 × 1021.08 × 103
mean5.60 × 1014.89 × 1021.55 × 1038.36 × 1021.69 × 1031.18 × 1031.48 × 1038.97 × 1027.04 × 1021.07 × 103
std3.97 × 1012.29 × 1023.09 × 1022.65 × 1021.28 × 1021.95 × 1022.95 × 1022.33 × 1025.18 × 1023.34 × 102
p-value-6.07 × 10−14 +2.40 × 10−33 +2.85 × 10−24 +9.97 × 10−56 +6.25 × 10−36 +3.42 × 10−33 +1.92 × 10−26 +1.17 × 10−8 +5.23 × 10−23 +
F11–20w/t/l-8/2/09/0/18/2/08/2/010/0/09/1/09/1/06/4/08/2/0
F21Composition
Functions
median2.20 × 1022.92 × 1026.00 × 1023.61 × 1026.48 × 1025.63 × 1026.52 × 1023.85 × 1022.18 × 1022.79 × 102
mean2.28 × 1022.96 × 1025.99 × 1023.67 × 1026.45 × 1025.57 × 1026.48 × 1023.80 × 1022.19 × 1022.78 × 102
std4.63 × 1012.74 × 1011.40 × 1013.48 × 1011.30 × 1014.78 × 1013.77 × 1012.91 × 1014.32 × 1001.51 × 101
p-value-2.32 × 10−14 +4.06 × 10−57 +5.84 × 10−24 +1.93 × 10−60 +3.85 × 10−36 +2.92 × 10−49 +4.02 × 10−29 +1.83 × 10−1 =3.68 × 10−13 +
F22median1.24 × 1034.83 × 1031.25 × 1046.04 × 1031.33 × 1048.29 × 1031.27 × 1046.24 × 1033.76 × 1031.23 × 104
mean2.50 × 1033.84 × 1031.24 × 1045.93 × 1031.32 × 1048.26 × 1031.11 × 1045.76 × 1036.52 × 1037.30 × 103
std3.90 × 1032.82 × 1031.02 × 1031.41 × 1033.45 × 1029.72 × 1023.69 × 1031.63 × 1034.14 × 1036.30 × 103
p-value-4.06 × 10−5 +9.95 × 10−37 +1.71 × 10−16 +2.56 × 10−41 +5.95 × 10−47 +3.84 × 10−19 +6.71 × 10−15 +2.09 × 10−8 +3.71 × 10−6 +
F23median4.56 × 1025.17 × 1028.84 × 1026.19 × 1028.59 × 1029.55 × 1021.21 × 1036.66 × 1025.21 × 1025.53 × 102
mean4.57 × 1025.21 × 1029.31 × 1026.39 × 1028.57 × 1029.74 × 1021.22 × 1036.65 × 1025.27 × 1025.61 × 102
std8.55 × 1003.48 × 1011.16 × 1025.75 × 1011.30 × 1019.15 × 1016.75 × 1014.00 × 1013.04 × 1013.69 × 101
p-value-4.84 × 10−13 +1.10 × 10−29 +8.94 × 10−20 +3.82 × 10−72 +3.17 × 10−41 +7.67 × 10−54 +1.98 × 10−34 +1.21 × 10−16 +7.06 × 10−21 +
F24median5.27 × 1025.96 × 1021.09 × 1037.13 × 1029.04 × 1021.03 × 1031.32 × 1037.27 × 1025.91 × 1026.68 × 102
mean5.27 × 1026.07 × 1021.13 × 1037.28 × 1029.01 × 1021.03 × 1031.31 × 1037.23 × 1025.98 × 1026.68 × 102
std5.82 × 1003.97 × 1011.58 × 1027.54 × 1011.07 × 1011.04 × 1026.66 × 1013.46 × 1013.21 × 1014.26 × 101
p-value-3.75 × 10−15 +2.85 × 10−28 +5.43 × 10−18 +9.29 × 10−79 +1.32 × 10−37 +2.54 × 10−55 +5.54 × 10−37 +1.07 × 10−16 +6.01 × 10−25 +
F25median5.54 × 1025.88 × 1024.31 × 1026.56 × 1025.54 × 1022.54 × 1032.75 × 1034.80 × 1024.80 × 1026.24 × 102
mean5.39 × 1025.90 × 1024.37 × 1026.70 × 1025.55 × 1022.87 × 1032.82 × 1035.01 × 1025.08 × 1026.24 × 102
std3.07 × 1013.49 × 1011.40 × 1016.82 × 1014.22 × 1001.31 × 1036.19 × 1023.53 × 1013.86 × 1012.58 × 101
p-value-1.58 × 10−7 +4.34 × 10−172.67 × 10−13 +2.32 × 10−5 +1.40 × 10−14 +1.62 × 10−27 +3.46 × 10−32.95 × 10−26.77 × 10−15 +
F26median1.36 × 1031.87 × 1036.33 × 1033.48 × 1035.67 × 1037.21 × 1036.23 × 1033.69 × 1031.84 × 1032.88 × 103
mean1.30 × 1031.48 × 1036.91 × 1033.55 × 1035.65 × 1037.14 × 1036.53 × 1033.64 × 1031.89 × 1032.95 × 103
std2.21 × 1028.59 × 1021.57 × 1034.92 × 1021.70 × 1021.14 × 1031.93 × 1033.13 × 1021.65 × 1026.92 × 102
p-value-6.20 × 10−1 =2.35 × 10−26 +1.05 × 10−25 +2.54 × 10−71 +1.11 × 10−35 +1.17 × 10−20 +3.46 × 10−42 +1.30 × 10−20 +2.60 × 10−17 +
F27median5.92 × 1027.15 × 1025.00 × 1028.94 × 1027.25 × 1021.02 × 1031.99 × 1036.53 × 1026.66 × 1029.10 × 102
mean5.96 × 1027.22 × 1025.00 × 1029.11 × 1027.44 × 1021.04 × 1031.95 × 1036.89 × 1026.79 × 1028.78 × 102
std3.26 × 1016.57 × 1015.42 × 10−51.04 × 1027.62 × 1011.65 × 1021.69 × 1021.06 × 1026.07 × 1011.78 × 102
p-value-8.11 × 10−10 +1.37 × 10−193.50 × 10−22 +6.96 × 10−11 +2.73 × 10−24 +1.01 × 10−44 +7.60 × 10−4 +1.57 × 10−5 +1.37 × 10−10 +
F28median5.08 × 1025.55 × 1025.00 × 1026.80 × 1025.56 × 1035.07 × 1032.91 × 1034.92 × 1024.59 × 1026.07 × 102
mean4.98 × 1025.53 × 1025.00 × 1026.77 × 1025.54 × 1034.74 × 1032.94 × 1034.92 × 1024.74 × 1026.10 × 102
std1.88 × 1012.62 × 1016.98 × 10−57.93 × 1013.01 × 1021.64 × 1034.42 × 1023.40 × 1012.69 × 1014.34 × 101
p-value-4.28 × 10−13 +1.91 × 10−1 =4.61 × 10−15 +4.98 × 10−64 +8.52 × 10−21 +8.27 × 10−37 +7.45 × 10−1 =1.93 × 10−31.42 × 10−18 +
F29median4.60 × 1028.42 × 1023.28 × 1031.15 × 1032.07 × 1032.80 × 1033.37 × 1031.19 × 1034.98 × 1021.02 × 103
mean4.67 × 1028.92 × 1023.20 × 1031.22 × 1033.13 × 1032.83 × 1033.57 × 1031.16 × 1035.17 × 1021.05 × 103
std1.16 × 1022.07 × 1022.63 × 1022.71 × 1023.09 × 1035.96 × 1026.55 × 1023.49 × 1021.27 × 1022.41 × 102
p-value-2.02 × 10−13 +8.44 × 10−50 +1.04 × 10−19 +2.08 × 10−5 +2.45 × 10−29 +8.42 × 10−33 +1.96 × 10−14 +1.50 × 10−1 =1.09 × 10−16 +
F30median7.99 × 1051.78 × 1062.28 × 1062.40 × 1061.61 × 1065.53 × 1071.90 × 1081.16 × 1067.96 × 1051.02 × 106
mean8.08 × 1051.85 × 1062.70 × 1062.68 × 1061.65 × 1061.03 × 1081.87 × 1081.23 × 1068.26 × 1051.18 × 106
std6.94 × 1043.57 × 1051.30 × 1061.30 × 1062.21 × 1051.12 × 1087.64 × 1074.15 × 1051.11 × 1053.35 × 105
p-value-9.65 × 10−22 +1.96 × 10−10 +2.56 × 10−10 +1.22 × 10−26 +7.44 × 10−6 +4.79 × 10−19 +4.56 × 10−6 +5.82 × 10−1 =1.08 × 10−6 +
F21–30w/t/l-9/1/07/1/210/0/010/0/010/0/010/0/08/1/15/3/210/0/0
w/t/l-23/5/124/2/322/5/025/4/028/0/128/1/022/5/216/8/525/3/1
Rank1.793.416.525.797.868.559.104.662.524.79
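The final "Rank" row is the average rank of each algorithm over all benchmark functions, where the algorithms are ranked per function by mean error (1 = best). As an illustrative sketch (not the paper's code), such Friedman-style average ranks can be computed as follows, with tied values sharing the mean of their rank positions:

```python
def average_ranks(errors):
    """errors: list of rows, one row per benchmark function; each row
    holds the mean error of every algorithm on that function (lower is
    better). Returns each algorithm's average rank (1 = best); tied
    values share the mean of the tied rank positions."""
    n_funcs, n_algos = len(errors), len(errors[0])
    totals = [0.0] * n_algos
    for row in errors:
        order = sorted(range(n_algos), key=lambda j: row[j])
        i = 0
        while i < n_algos:
            # find the run of algorithms tied at this error value
            k = i
            while k + 1 < n_algos and row[order[k + 1]] == row[order[i]]:
                k += 1
            shared = (i + k) / 2 + 1  # mean of 1-based positions i..k
            for j in order[i:k + 1]:
                totals[j] += shared
            i = k + 1
    return [t / n_funcs for t in totals]

# toy example: 2 functions, 3 algorithms (not data from the tables)
print(average_ranks([[1.0, 3.0, 2.0],
                     [5.0, 5.0, 9.0]]))  # → [1.25, 2.25, 2.5]
```

With real data, the `errors` rows would be the per-function mean errors from the table above, one column per compared algorithm.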
Table 4. Comparison results between DELPSO and nine state-of-the-art PSO variants on the 100-D CEC 2017 benchmark functions. The symbols “+”, “−”, and “=” behind the p-values indicate that DELPSO is significantly superior to, significantly inferior to, and equivalent to the associated compared algorithm on the corresponding problems, respectively.
| F | Category | Quality | DELPSO | XPSO | DNSPSO | TCSPSO | CLPSO-LS | AWPSO | DPLPSO | HCLPSO | SCDLPSO | TLBO-FL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Unimodal functions | median | 2.77 × 10^3 | 1.90 × 10^2 | 5.48 × 10^3 | 3.04 × 10^3 | 1.24 × 10^10 | 1.57 × 10^11 | 1.12 × 10^11 | 1.37 × 10^4 | 1.94 × 10^3 | 6.69 × 10^8 |
| | | mean | 4.62 × 10^3 | 3.97 × 10^2 | 9.34 × 10^3 | 1.11 × 10^6 | 1.22 × 10^10 | 1.63 × 10^11 | 1.08 × 10^11 | 2.79 × 10^4 | 5.40 × 10^3 | 8.93 × 10^8 |
| | | std | 5.68 × 10^3 | 4.39 × 10^2 | 1.23 × 10^4 | 5.95 × 10^6 | 1.32 × 10^9 | 3.24 × 10^10 | 1.20 × 10^10 | 2.52 × 10^4 | 4.91 × 10^3 | 5.81 × 10^8 |
| | | p-value | - | 8.47 × 10^−2 = | 6.57 × 10^−2 = | 3.19 × 10^−1 = | 2.43 × 10^−49 + | 1.22 × 10^−34 + | 8.76 × 10^−49 + | 9.40 × 10^−6 + | 7.51 × 10^−1 = | 2.09 × 10^−11 + |
| F3 | | median | 6.61 × 10^4 | 2.72 × 10^3 | 1.04 × 10^6 | 2.73 × 10^5 | 2.75 × 10^−8 | 4.85 × 10^5 | 3.71 × 10^5 | 7.13 × 10^4 | 1.74 × 10^5 | 1.78 × 10^5 |
| | | mean | 6.50 × 10^4 | 6.26 × 10^3 | 1.04 × 10^6 | 2.69 × 10^5 | 4.92 × 10^9 | 4.83 × 10^5 | 3.69 × 10^5 | 7.37 × 10^4 | 1.72 × 10^5 | 1.76 × 10^5 |
| | | std | 1.06 × 10^4 | 7.18 × 10^3 | 9.19 × 10^4 | 3.09 × 10^4 | 2.64 × 10^10 | 8.88 × 10^4 | 3.97 × 10^4 | 2.17 × 10^4 | 2.09 × 10^4 | 2.27 × 10^4 |
| | | p-value | - | 1.06 × 10^−1 = | 1.95 × 10^−52 + | 9.46 × 10^−40 + | 3.20 × 10^−1 = | 7.16 × 10^−33 + | 7.12 × 10^−44 + | 5.72 × 10^−2 = | 9.04 × 10^−33 + | 9.56 × 10^−32 + |
| F1,3 | | w/t/l | - | 0/2/0 | 1/1/0 | 1/1/0 | 1/1/0 | 2/0/0 | 2/0/0 | 1/1/0 | 1/1/0 | 2/0/0 |
| F4 | Simple multimodal functions | median | 2.02 × 10^2 | 1.93 × 10^2 | 2.03 × 10^2 | 7.11 × 10^2 | 1.05 × 10^3 | 2.92 × 10^4 | 2.14 × 10^4 | 2.36 × 10^2 | 2.18 × 10^2 | 6.75 × 10^2 |
| | | mean | 2.08 × 10^2 | 1.89 × 10^2 | 2.13 × 10^2 | 7.30 × 10^2 | 1.04 × 10^3 | 2.95 × 10^4 | 2.19 × 10^4 | 2.47 × 10^2 | 2.18 × 10^2 | 6.86 × 10^2 |
| | | std | 2.55 × 10^1 | 3.96 × 10^1 | 4.58 × 10^1 | 3.23 × 10^2 | 1.27 × 10^2 | 7.70 × 10^3 | 3.98 × 10^3 | 2.56 × 10^1 | 1.98 × 10^1 | 1.16 × 10^2 |
| | | p-value | - | 1.38 × 10^−30 − | 5.88 × 10^−1 = | 4.43 × 10^−12 + | 1.84 × 10^−40 + | 2.97 × 10^−28 + | 1.83 × 10^−36 + | 2.78 × 10^−7 + | 1.18 × 10^−1 = | 1.67 × 10^−29 + |
| F5 | | median | 6.47 × 10^1 | 3.15 × 10^2 | 1.03 × 10^3 | 5.28 × 10^2 | 1.07 × 10^3 | 9.97 × 10^2 | 1.21 × 10^3 | 4.81 × 10^2 | 2.59 × 10^1 | 3.28 × 10^2 |
| | | mean | 8.61 × 10^1 | 3.21 × 10^2 | 1.03 × 10^3 | 5.39 × 10^2 | 1.08 × 10^3 | 1.01 × 10^3 | 1.20 × 10^3 | 4.89 × 10^2 | 2.59 × 10^1 | 3.42 × 10^2 |
| | | std | 1.15 × 10^2 | 5.50 × 10^1 | 4.01 × 10^1 | 7.89 × 10^1 | 2.27 × 10^1 | 1.03 × 10^2 | 5.76 × 10^1 | 7.81 × 10^1 | 3.52 × 10^0 | 4.17 × 10^1 |
| | | p-value | - | 7.61 × 10^−8 + | 4.94 × 10^−45 + | 7.21 × 10^−25 + | 3.33 × 10^−47 + | 1.05 × 10^−38 + | 7.37 × 10^−48 + | 1.76 × 10^−22 + | 6.44 × 10^−3 − | 2.64 × 10^−16 + |
| F6 | | median | 1.43 × 10^−6 | 6.60 × 10^−5 | 2.20 × 10^−1 | 1.28 × 10^1 | 3.28 × 10^1 | 6.23 × 10^1 | 7.43 × 10^1 | 9.16 × 10^−3 | 3.73 × 10^−2 | 1.89 × 10^1 |
| | | mean | 2.92 × 10^−4 | 7.51 × 10^−5 | 2.40 × 10^−1 | 1.32 × 10^1 | 3.26 × 10^1 | 6.20 × 10^1 | 7.58 × 10^1 | 1.25 × 10^−2 | 5.58 × 10^−2 | 1.93 × 10^1 |
| | | std | 1.51 × 10^−3 | 3.10 × 10^−5 | 1.23 × 10^−1 | 4.20 × 10^0 | 1.66 × 10^0 | 8.72 × 10^0 | 4.23 × 10^0 | 1.03 × 10^−2 | 5.01 × 10^−2 | 2.94 × 10^0 |
| | | p-value | - | 1.00 × 10^−9 − | 4.54 × 10^−15 + | 4.47 × 10^−24 + | 7.29 × 10^−68 + | 7.01 × 10^−43 + | 9.96 × 10^−66 + | 3.96 × 10^−8 + | 1.59 × 10^−7 + | 6.72 × 10^−41 + |
| F7 | | median | 8.03 × 10^2 | 5.77 × 10^2 | 1.13 × 10^3 | 1.04 × 10^3 | 1.57 × 10^3 | 3.33 × 10^3 | 2.72 × 10^3 | 6.66 × 10^2 | 1.38 × 10^2 | 7.91 × 10^2 |
| | | mean | 5.46 × 10^2 | 5.63 × 10^2 | 1.14 × 10^3 | 1.04 × 10^3 | 1.68 × 10^3 | 3.39 × 10^3 | 2.82 × 10^3 | 6.57 × 10^2 | 1.39 × 10^2 | 7.84 × 10^2 |
| | | std | 3.28 × 10^2 | 6.80 × 10^1 | 4.81 × 10^1 | 1.28 × 10^2 | 2.20 × 10^2 | 6.15 × 10^2 | 3.06 × 10^2 | 1.04 × 10^2 | 5.87 × 10^0 | 8.75 × 10^1 |
| | | p-value | - | 1.23 × 10^−1 = | 1.26 × 10^−13 + | 3.16 × 10^−10 + | 3.40 × 10^−22 + | 8.56 × 10^−30 + | 1.02 × 10^−34 + | 8.89 × 10^−2 = | 1.01 × 10^−8 − | 3.77 × 10^−4 + |
| F8 | | median | 6.42 × 10^1 | 3.48 × 10^2 | 1.03 × 10^3 | 5.13 × 10^2 | 1.06 × 10^3 | 1.04 × 10^3 | 1.24 × 10^3 | 5.69 × 10^2 | 2.89 × 10^1 | 3.79 × 10^2 |
| | | mean | 8.69 × 10^1 | 3.40 × 10^2 | 1.03 × 10^3 | 5.40 × 10^2 | 1.06 × 10^3 | 1.03 × 10^3 | 1.25 × 10^3 | 5.71 × 10^2 | 2.93 × 10^1 | 3.79 × 10^2 |
| | | std | 1.22 × 10^2 | 5.44 × 10^1 | 3.27 × 10^1 | 9.51 × 10^1 | 2.65 × 10^1 | 1.07 × 10^2 | 5.10 × 10^1 | 7.11 × 10^1 | 4.21 × 10^0 | 6.01 × 10^1 |
| | | p-value | - | 1.49 × 10^−6 + | 4.91 × 10^−44 + | 1.29 × 10^−22 + | 3.91 × 10^−45 + | 4.38 × 10^−38 + | 5.81 × 10^−48 + | 6.08 × 10^−26 + | 1.36 × 10^−2 − | 1.11 × 10^−16 + |
| F9 | | median | 3.36 × 10^0 | 4.47 × 10^3 | 1.12 × 10^3 | 1.32 × 10^4 | 1.49 × 10^4 | 3.26 × 10^4 | 4.99 × 10^4 | 8.66 × 10^3 | 1.89 × 10^1 | 1.90 × 10^4 |
| | | mean | 4.81 × 10^0 | 4.79 × 10^3 | 2.25 × 10^3 | 1.43 × 10^4 | 1.51 × 10^4 | 3.32 × 10^4 | 5.04 × 10^4 | 8.77 × 10^3 | 2.59 × 10^1 | 1.90 × 10^4 |
| | | std | 4.69 × 10^0 | 2.56 × 10^3 | 2.57 × 10^3 | 5.30 × 10^3 | 1.37 × 10^3 | 6.93 × 10^3 | 5.84 × 10^3 | 2.83 × 10^3 | 2.49 × 10^1 | 4.39 × 10^3 |
| | | p-value | - | 1.26 × 10^−11 + | 1.61 × 10^−5 + | 6.09 × 10^−21 + | 1.56 × 10^−53 + | 1.73 × 10^−33 + | 1.39 × 10^−47 + | 7.99 × 10^−24 + | 3.67 × 10^−5 + | 3.90 × 10^−31 + |
| F10 | | median | 2.70 × 10^4 | 1.08 × 10^4 | 3.05 × 10^4 | 1.32 × 10^4 | 3.02 × 10^4 | 1.74 × 10^4 | 2.92 × 10^4 | 1.28 × 10^4 | 1.05 × 10^4 | 2.91 × 10^4 |
| | | mean | 2.48 × 10^4 | 1.06 × 10^4 | 3.03 × 10^4 | 1.30 × 10^4 | 3.01 × 10^4 | 1.77 × 10^4 | 2.91 × 10^4 | 1.31 × 10^4 | 1.55 × 10^4 | 2.91 × 10^4 |
| | | std | 6.89 × 10^3 | 7.79 × 10^2 | 8.42 × 10^2 | 1.22 × 10^3 | 5.53 × 10^2 | 1.79 × 10^3 | 7.93 × 10^2 | 1.18 × 10^3 | 8.91 × 10^3 | 5.04 × 10^2 |
| | | p-value | - | 2.01 × 10^−13 − | 6.56 × 10^−5 + | 1.23 × 10^−12 − | 1.20 × 10^−4 + | 1.45 × 10^−6 − | 9.43 × 10^−6 + | 1.26 × 10^−12 − | 6.08 × 10^−5 − | 1.35 × 10^−3 + |
| F4–10 | | w/t/l | - | 3/1/3 | 6/1/0 | 6/0/1 | 7/0/0 | 6/0/1 | 7/0/0 | 5/1/1 | 2/1/4 | 7/0/0 |
| F11 | Hybrid functions | median | 3.60 × 10^2 | 1.05 × 10^3 | 2.43 × 10^4 | 5.04 × 10^3 | 1.62 × 10^3 | 4.46 × 10^4 | 8.13 × 10^4 | 8.47 × 10^2 | 8.01 × 10^2 | 9.68 × 10^2 |
| | | mean | 3.40 × 10^2 | 1.04 × 10^3 | 2.64 × 10^4 | 5.48 × 10^3 | 5.07 × 10^3 | 5.02 × 10^4 | 7.95 × 10^4 | 8.43 × 10^2 | 8.31 × 10^2 | 1.00 × 10^3 |
| | | std | 8.26 × 10^1 | 2.38 × 10^2 | 9.36 × 10^3 | 2.53 × 10^3 | 8.07 × 10^3 | 3.19 × 10^4 | 1.32 × 10^4 | 2.45 × 10^2 | 1.52 × 10^2 | 1.88 × 10^2 |
| | | p-value | - | 3.08 × 10^−25 + | 1.34 × 10^−21 + | 1.02 × 10^−15 + | 2.52 × 10^−3 + | 1.20 × 10^−11 + | 7.82 × 10^−39 + | 5.47 × 10^−15 + | 4.53 × 10^−22 + | 1.37 × 10^−24 + |
| F12 | | median | 3.30 × 10^5 | 2.75 × 10^5 | 1.73 × 10^7 | 4.08 × 10^7 | 7.41 × 10^8 | 4.91 × 10^10 | 3.19 × 10^10 | 1.32 × 10^7 | 5.91 × 10^5 | 2.48 × 10^7 |
| | | mean | 3.32 × 10^5 | 2.99 × 10^5 | 1.74 × 10^7 | 6.38 × 10^7 | 8.00 × 10^8 | 5.01 × 10^10 | 3.43 × 10^10 | 2.77 × 10^7 | 6.04 × 10^5 | 2.55 × 10^7 |
| | | std | 1.23 × 10^5 | 1.34 × 10^5 | 6.62 × 10^6 | 1.02 × 10^8 | 4.18 × 10^8 | 1.43 × 10^10 | 7.18 × 10^9 | 7.20 × 10^7 | 3.25 × 10^5 | 1.41 × 10^7 |
| | | p-value | - | 5.10 × 10^−6 − | 4.84 × 10^−20 + | 1.48 × 10^−3 + | 1.01 × 10^−14 + | 2.40 × 10^−26 + | 2.32 × 10^−33 + | 4.51 × 10^−2 + | 7.11 × 10^−5 + | 1.40 × 10^−13 + |
| F13 | | median | 1.68 × 10^3 | 6.30 × 10^2 | 2.78 × 10^3 | 3.21 × 10^3 | 1.23 × 10^4 | 4.71 × 10^9 | 2.75 × 10^9 | 2.20 × 10^4 | 2.46 × 10^3 | 1.22 × 10^4 |
| | | mean | 3.02 × 10^3 | 7.44 × 10^2 | 6.72 × 10^3 | 5.81 × 10^3 | 3.58 × 10^7 | 5.62 × 10^9 | 2.83 × 10^9 | 2.24 × 10^4 | 4.14 × 10^3 | 1.38 × 10^4 |
| | | std | 3.43 × 10^3 | 4.29 × 10^2 | 1.10 × 10^4 | 5.25 × 10^3 | 1.24 × 10^8 | 3.13 × 10^9 | 9.06 × 10^8 | 1.27 × 10^4 | 4.23 × 10^3 | 6.16 × 10^3 |
| | | p-value | - | 1.09 × 10^−1 = | 9.08 × 10^−2 = | 1.98 × 10^−2 = | 1.25 × 10^−1 = | 1.10 × 10^−13 + | 5.88 × 10^−24 + | 8.12 × 10^−11 + | 2.75 × 10^−1 = | 2.20 × 10^−11 + |
| F14 | | median | 5.87 × 10^4 | 4.41 × 10^4 | 2.10 × 10^6 | 1.28 × 10^6 | 5.07 × 10^6 | 9.30 × 10^6 | 1.17 × 10^7 | 6.31 × 10^5 | 7.92 × 10^4 | 1.74 × 10^6 |
| | | mean | 6.70 × 10^4 | 4.80 × 10^4 | 2.05 × 10^6 | 2.34 × 10^6 | 5.52 × 10^6 | 1.35 × 10^7 | 1.36 × 10^7 | 9.83 × 10^5 | 8.59 × 10^4 | 1.92 × 10^6 |
| | | std | 3.07 × 10^4 | 2.01 × 10^4 | 4.90 × 10^5 | 1.99 × 10^6 | 4.18 × 10^6 | 1.32 × 10^7 | 6.47 × 10^6 | 9.20 × 10^5 | 3.53 × 10^4 | 7.36 × 10^5 |
| | | p-value | - | 1.96 × 10^−3 − | 1.71 × 10^−29 + | 8.04 × 10^−8 + | 2.66 × 10^−9 + | 1.03 × 10^−6 + | 3.27 × 10^−16 + | 1.49 × 10^−6 + | 2.54 × 10^−2 + | 1.23 × 10^−19 + |
| F15 | | median | 5.87 × 10^2 | 4.35 × 10^2 | 5.19 × 10^4 | 1.46 × 10^3 | 7.69 × 10^3 | 2.06 × 10^9 | 4.55 × 10^8 | 7.93 × 10^3 | 1.06 × 10^3 | 2.60 × 10^3 |
| | | mean | 1.13 × 10^3 | 4.89 × 10^2 | 7.58 × 10^4 | 3.10 × 10^3 | 2.01 × 10^7 | 1.97 × 10^9 | 4.51 × 10^8 | 1.16 × 10^4 | 2.64 × 10^3 | 4.17 × 10^3 |
| | | std | 1.76 × 10^3 | 2.21 × 10^2 | 6.57 × 10^4 | 4.15 × 10^3 | 1.06 × 10^8 | 1.37 × 10^9 | 2.25 × 10^8 | 1.02 × 10^4 | 4.67 × 10^3 | 4.24 × 10^3 |
| | | p-value | - | 3.63 × 10^−2 − | 8.66 × 10^−8 + | 2.15 × 10^−2 + | 3.14 × 10^−1 = | 1.68 × 10^−10 + | 1.60 × 10^−15 + | 1.23 × 10^−6 + | 1.08 × 10^−1 = | 7.21 × 10^−4 + |
| F16 | | median | 1.01 × 10^3 | 2.93 × 10^3 | 8.92 × 10^3 | 4.15 × 10^3 | 8.50 × 10^3 | 7.13 × 10^3 | 9.90 × 10^3 | 4.15 × 10^3 | 1.26 × 10^3 | 2.63 × 10^3 |
| | | mean | 1.11 × 10^3 | 3.02 × 10^3 | 8.85 × 10^3 | 4.03 × 10^3 | 8.44 × 10^3 | 7.49 × 10^3 | 9.71 × 10^3 | 4.08 × 10^3 | 1.19 × 10^3 | 2.56 × 10^3 |
| | | std | 3.51 × 10^2 | 4.11 × 10^2 | 3.95 × 10^2 | 8.68 × 10^2 | 2.87 × 10^2 | 1.45 × 10^3 | 5.84 × 10^2 | 8.26 × 10^2 | 4.58 × 10^2 | 5.77 × 10^2 |
| | | p-value | - | 1.28 × 10^−22 + | 1.02 × 10^−60 + | 6.02 × 10^−24 + | 3.69 × 10^−63 + | 8.34 × 10^−31 + | 4.70 × 10^−57 + | 3.34 × 10^−25 + | 4.37 × 10^−1 = | 9.12 × 10^−17 + |
| F17 | | median | 7.72 × 10^2 | 2.28 × 10^3 | 5.89 × 10^3 | 3.05 × 10^3 | 5.65 × 10^3 | 8.24 × 10^3 | 6.53 × 10^3 | 3.76 × 10^3 | 1.31 × 10^3 | 2.27 × 10^3 |
| | | mean | 8.15 × 10^2 | 2.33 × 10^3 | 5.87 × 10^3 | 3.04 × 10^3 | 1.10 × 10^4 | 1.23 × 10^4 | 6.58 × 10^3 | 3.86 × 10^3 | 1.34 × 10^3 | 2.27 × 10^3 |
| | | std | 5.53 × 10^2 | 3.43 × 10^2 | 3.29 × 10^2 | 5.18 × 10^2 | 1.63 × 10^4 | 1.10 × 10^4 | 7.71 × 10^2 | 7.15 × 10^2 | 5.07 × 10^2 | 4.59 × 10^2 |
| | | p-value | - | 3.95 × 10^−17 + | 2.83 × 10^−45 + | 1.04 × 10^−22 + | 1.34 × 10^−3 + | 6.35 × 10^−7 + | 4.37 × 10^−39 + | 1.47 × 10^−25 + | 3.37 × 10^−4 + | 1.33 × 10^−15 + |
| F18 | | median | 1.31 × 10^5 | 1.02 × 10^5 | 3.22 × 10^7 | 3.22 × 10^6 | 2.65 × 10^7 | 1.72 × 10^7 | 1.54 × 10^7 | 1.10 × 10^6 | 2.80 × 10^5 | 4.29 × 10^6 |
| | | mean | 1.28 × 10^5 | 1.04 × 10^5 | 3.33 × 10^7 | 3.89 × 10^6 | 2.89 × 10^7 | 2.16 × 10^7 | 1.93 × 10^7 | 1.97 × 10^6 | 2.91 × 10^5 | 4.47 × 10^6 |
| | | std | 2.98 × 10^4 | 4.14 × 10^4 | 1.09 × 10^7 | 2.44 × 10^6 | 1.83 × 10^7 | 1.89 × 10^7 | 1.47 × 10^7 | 1.62 × 10^6 | 1.25 × 10^5 | 1.57 × 10^6 |
| | | p-value | - | 1.13 × 10^−6 − | 1.95 × 10^−23 + | 1.84 × 10^−11 + | 1.00 × 10^−11 + | 8.20 × 10^−8 + | 2.76 × 10^−9 + | 8.68 × 10^−8 + | 2.14 × 10^−9 + | 1.59 × 10^−21 + |
| F19 | | median | 1.15 × 10^3 | 2.83 × 10^2 | 8.65 × 10^3 | 2.43 × 10^3 | 1.26 × 10^4 | 1.12 × 10^9 | 4.85 × 10^8 | 1.28 × 10^4 | 8.30 × 10^2 | 2.30 × 10^3 |
| | | mean | 1.53 × 10^3 | 3.18 × 10^2 | 1.10 × 10^4 | 5.29 × 10^3 | 3.83 × 10^7 | 1.60 × 10^9 | 5.24 × 10^8 | 3.40 × 10^5 | 2.07 × 10^3 | 3.89 × 10^3 |
| | | std | 1.32 × 10^3 | 1.40 × 10^2 | 9.63 × 10^3 | 6.65 × 10^3 | 1.98 × 10^8 | 1.35 × 10^9 | 2.97 × 10^8 | 1.73 × 10^6 | 2.56 × 10^3 | 4.29 × 10^3 |
| | | p-value | - | 1.26 × 10^−2 − | 2.16 × 10^−6 + | 4.12 × 10^−3 + | 3.01 × 10^−1 = | 3.22 × 10^−8 + | 1.89 × 10^−13 + | 2.96 × 10^−1 = | 3.18 × 10^−1 = | 6.36 × 10^−3 + |
| F20 | | median | 4.87 × 10^2 | 2.21 × 10^3 | 5.63 × 10^3 | 2.69 × 10^3 | 4.75 × 10^3 | 3.60 × 10^3 | 4.94 × 10^3 | 2.56 × 10^3 | 1.18 × 10^3 | 4.26 × 10^3 |
| | | mean | 5.06 × 10^2 | 2.14 × 10^3 | 5.58 × 10^3 | 2.66 × 10^3 | 4.74 × 10^3 | 3.60 × 10^3 | 4.92 × 10^3 | 2.51 × 10^3 | 1.87 × 10^3 | 4.24 × 10^3 |
| | | std | 2.20 × 10^2 | 3.52 × 10^2 | 2.97 × 10^2 | 5.82 × 10^2 | 2.72 × 10^2 | 4.71 × 10^2 | 4.36 × 10^2 | 4.03 × 10^2 | 1.32 × 10^3 | 3.28 × 10^2 |
| | | p-value | - | 2.29 × 10^−27 + | 4.39 × 10^−59 + | 3.92 × 10^−26 + | 9.89 × 10^−84 + | 1.40 × 10^−38 + | 1.01 × 10^−48 + | 2.34 × 10^−31 + | 1.79 × 10^−7 + | 8.06 × 10^−50 + |
| F11–20 | | w/t/l | - | 4/1/5 | 9/1/0 | 9/1/0 | 7/3/0 | 10/0/0 | 10/0/0 | 9/1/0 | 6/4/0 | 10/0/0 |
| F21 | Composition functions | median | 2.88 × 10^2 | 5.63 × 10^2 | 1.21 × 10^3 | 6.94 × 10^2 | 1.30 × 10^3 | 1.35 × 10^3 | 1.63 × 10^3 | 8.27 × 10^2 | 2.88 × 10^2 | 6.02 × 10^2 |
| | | mean | 2.91 × 10^2 | 5.59 × 10^2 | 1.21 × 10^3 | 7.04 × 10^2 | 1.30 × 10^3 | 1.35 × 10^3 | 1.63 × 10^3 | 8.36 × 10^2 | 2.94 × 10^2 | 6.02 × 10^2 |
| | | std | 1.14 × 10^1 | 5.80 × 10^1 | 3.43 × 10^1 | 9.23 × 10^1 | 2.52 × 10^1 | 1.24 × 10^2 | 7.24 × 10^1 | 7.15 × 10^1 | 1.83 × 10^1 | 4.13 × 10^1 |
| | | p-value | - | 5.55 × 10^−25 + | 6.25 × 10^−75 + | 1.07 × 10^−31 + | 2.50 × 10^−84 + | 6.44 × 10^−47 + | 2.68 × 10^−66 + | 3.21 × 10^−44 + | 5.32 × 10^−1 = | 2.29 × 10^−43 + |
| F22 | | median | 5.19 × 10^3 | 1.18 × 10^4 | 3.06 × 10^4 | 1.49 × 10^4 | 3.06 × 10^4 | 1.90 × 10^4 | 3.08 × 10^4 | 1.43 × 10^4 | 9.97 × 10^3 | 3.02 × 10^4 |
| | | mean | 8.68 × 10^3 | 1.11 × 10^4 | 3.06 × 10^4 | 1.51 × 10^4 | 3.06 × 10^4 | 1.90 × 10^4 | 3.05 × 10^4 | 1.41 × 10^4 | 1.27 × 10^4 | 2.90 × 10^4 |
| | | std | 8.22 × 10^3 | 3.02 × 10^3 | 8.56 × 10^2 | 1.11 × 10^3 | 4.79 × 10^2 | 1.66 × 10^3 | 9.53 × 10^2 | 9.48 × 10^2 | 6.78 × 10^3 | 5.39 × 10^3 |
| | | p-value | - | 5.53 × 10^−2 = | 1.28 × 10^−20 + | 9.91 × 10^−5 + | 1.06 × 10^−20 + | 1.30 × 10^−8 + | 1.37 × 10^−20 + | 8.87 × 10^−4 + | 4.66 × 10^−2 + | 4.74 × 10^−16 + |
| F23 | | median | 6.57 × 10^2 | 7.87 × 10^2 | 1.77 × 10^3 | 1.06 × 10^3 | 1.53 × 10^3 | 2.01 × 10^3 | 2.88 × 10^3 | 8.91 × 10^2 | 7.60 × 10^2 | 1.15 × 10^3 |
| | | mean | 6.56 × 10^2 | 7.85 × 10^2 | 1.83 × 10^3 | 1.06 × 10^3 | 1.52 × 10^3 | 2.04 × 10^3 | 2.87 × 10^3 | 8.80 × 10^2 | 7.62 × 10^2 | 1.16 × 10^3 |
| | | std | 2.44 × 10^1 | 2.66 × 10^1 | 2.88 × 10^2 | 7.63 × 10^1 | 3.43 × 10^1 | 2.02 × 10^2 | 2.23 × 10^2 | 3.75 × 10^1 | 4.79 × 10^1 | 9.61 × 10^1 |
| | | p-value | - | 1.79 × 10^−21 + | 9.71 × 10^−30 + | 1.86 × 10^−34 + | 8.38 × 10^−69 + | 1.29 × 10^−41 + | 7.47 × 10^−51 + | 1.69 × 10^−34 + | 2.72 × 10^−15 + | 7.81 × 10^−35 + |
| F24 | | median | 9.78 × 10^2 | 1.28 × 10^3 | 3.34 × 10^3 | 1.52 × 10^3 | 1.85 × 10^3 | 2.91 × 10^3 | 4.79 × 10^3 | 1.52 × 10^3 | 1.24 × 10^3 | 1.99 × 10^3 |
| | | mean | 9.78 × 10^2 | 1.28 × 10^3 | 3.38 × 10^3 | 1.55 × 10^3 | 1.84 × 10^3 | 2.89 × 10^3 | 4.70 × 10^3 | 1.53 × 10^3 | 1.24 × 10^3 | 2.05 × 10^3 |
| | | std | 1.97 × 10^1 | 3.82 × 10^1 | 6.33 × 10^2 | 1.59 × 10^2 | 4.91 × 10^1 | 2.54 × 10^2 | 3.91 × 10^2 | 6.45 × 10^1 | 7.89 × 10^1 | 2.41 × 10^2 |
| | | p-value | - | 1.47 × 10^−17 + | 3.67 × 10^−28 + | 5.73 × 10^−27 + | 3.89 × 10^−63 + | 4.04 × 10^−44 + | 5.52 × 10^−50 + | 4.63 × 10^−46 + | 6.14 × 10^−25 + | 1.21 × 10^−31 + |
| F25 | | median | 7.98 × 10^2 | 7.66 × 10^2 | 7.57 × 10^2 | 1.38 × 10^3 | 3.44 × 10^3 | 1.64 × 10^4 | 1.01 × 10^4 | 7.87 × 10^2 | 8.22 × 10^2 | 1.30 × 10^3 |
| | | mean | 7.88 × 10^2 | 7.59 × 10^2 | 7.53 × 10^2 | 1.38 × 10^3 | 3.44 × 10^3 | 1.62 × 10^4 | 1.02 × 10^4 | 7.82 × 10^2 | 8.00 × 10^2 | 1.31 × 10^3 |
| | | std | 5.63 × 10^1 | 7.14 × 10^1 | 5.31 × 10^1 | 2.53 × 10^2 | 2.18 × 10^2 | 5.06 × 10^3 | 1.44 × 10^3 | 7.57 × 10^1 | 5.76 × 10^1 | 1.19 × 10^2 |
| | | p-value | - | 1.79 × 10^−25 − | 1.71 × 10^−2 − | 1.10 × 10^−17 + | 2.56 × 10^−73 + | 1.64 × 10^−23 + | 7.11 × 10^−41 + | 7.24 × 10^−1 = | 4.32 × 10^−1 = | 4.05 × 10^−29 + |
| F26 | | median | 4.03 × 10^3 | 7.60 × 10^3 | 2.83 × 10^4 | 1.02 × 10^4 | 1.40 × 10^4 | 2.65 × 10^4 | 2.85 × 10^4 | 1.11 × 10^4 | 6.20 × 10^3 | 1.09 × 10^4 |
| | | mean | 4.04 × 10^3 | 7.10 × 10^3 | 2.78 × 10^4 | 1.03 × 10^4 | 1.40 × 10^4 | 2.64 × 10^4 | 2.85 × 10^4 | 1.10 × 10^4 | 6.16 × 10^3 | 1.11 × 10^4 |
| | | std | 2.13 × 10^2 | 1.45 × 10^3 | 6.85 × 10^3 | 1.17 × 10^3 | 4.00 × 10^2 | 2.97 × 10^3 | 2.29 × 10^3 | 8.47 × 10^2 | 4.81 × 10^2 | 1.74 × 10^3 |
| | | p-value | - | 7.82 × 10^−1 = | 3.73 × 10^−26 + | 6.02 × 10^−36 + | 7.95 × 10^−71 + | 3.32 × 10^−44 + | 7.74 × 10^−53 + | 8.26 × 10^−46 + | 1.26 × 10^−29 + | 1.91 × 10^−29 + |
| F27 | | median | 6.94 × 10^2 | 7.82 × 10^2 | 5.00 × 10^2 | 1.15 × 10^3 | 7.05 × 10^2 | 2.28 × 10^3 | 4.09 × 10^3 | 7.96 × 10^2 | 7.49 × 10^2 | 1.15 × 10^3 |
| | | mean | 6.97 × 10^2 | 7.88 × 10^2 | 5.00 × 10^2 | 1.15 × 10^3 | 7.20 × 10^2 | 2.35 × 10^3 | 4.07 × 10^3 | 7.95 × 10^2 | 7.66 × 10^2 | 1.18 × 10^3 |
| | | std | 2.41 × 10^1 | 3.50 × 10^1 | 6.85 × 10^−5 | 1.11 × 10^2 | 4.63 × 10^1 | 3.87 × 10^2 | 6.80 × 10^2 | 6.27 × 10^1 | 6.46 × 10^1 | 1.52 × 10^2 |
| | | p-value | - | 3.33 × 10^−17 + | 2.35 × 10^−46 − | 2.80 × 10^−29 + | 2.34 × 10^−2 + | 7.73 × 10^−31 + | 2.82 × 10^−34 + | 1.08 × 10^−10 + | 1.63 × 10^−6 + | 3.43 × 10^−24 + |
| F28 | | median | 5.56 × 10^2 | 5.46 × 10^2 | 5.00 × 10^2 | 1.20 × 10^3 | 1.30 × 10^4 | 2.15 × 10^4 | 1.45 × 10^4 | 5.84 × 10^2 | 5.64 × 10^2 | 1.44 × 10^3 |
| | | mean | 5.63 × 10^2 | 5.55 × 10^2 | 5.00 × 10^2 | 1.24 × 10^3 | 1.31 × 10^4 | 2.19 × 10^4 | 1.43 × 10^4 | 3.14 × 10^3 | 5.72 × 10^2 | 1.52 × 10^3 |
| | | std | 2.55 × 10^1 | 3.01 × 10^1 | 4.35 × 10^−5 | 2.71 × 10^2 | 2.00 × 10^2 | 2.74 × 10^3 | 1.73 × 10^3 | 4.51 × 10^3 | 3.12 × 10^1 | 2.85 × 10^2 |
| | | p-value | - | 1.24 × 10^−33 − | 2.64 × 10^−19 − | 1.68 × 10^−19 + | 2.61 × 10^−96 + | 4.96 × 10^−45 + | 1.32 × 10^−45 + | 3.21 × 10^−3 + | 2.60 × 10^−1 = | 1.47 × 10^−25 + |
| F29 | | median | 1.62 × 10^3 | 3.05 × 10^3 | 6.82 × 10^3 | 3.89 × 10^3 | 6.22 × 10^3 | 9.32 × 10^3 | 1.00 × 10^4 | 3.95 × 10^3 | 1.77 × 10^3 | 3.40 × 10^3 |
| | | mean | 1.59 × 10^3 | 3.03 × 10^3 | 6.76 × 10^3 | 3.87 × 10^3 | 6.24 × 10^3 | 9.45 × 10^3 | 1.02 × 10^4 | 3.93 × 10^3 | 1.82 × 10^3 | 3.48 × 10^3 |
| | | std | 3.67 × 10^2 | 3.77 × 10^2 | 4.42 × 10^2 | 4.04 × 10^2 | 2.70 × 10^2 | 2.27 × 10^3 | 9.09 × 10^2 | 5.15 × 10^2 | 3.99 × 10^2 | 4.34 × 10^2 |
| | | p-value | - | 8.67 × 10^−19 + | 1.23 × 10^−48 + | 2.67 × 10^−30 + | 1.89 × 10^−44 + | 6.50 × 10^−26 + | 5.43 × 10^−48 + | 1.25 × 10^−27 + | 2.80 × 10^−2 + | 2.94 × 10^−25 + |
| F30 | | median | 5.96 × 10^3 | 4.68 × 10^3 | 8.62 × 10^2 | 1.22 × 10^5 | 1.87 × 10^4 | 4.29 × 10^9 | 2.38 × 10^9 | 5.58 × 10^3 | 5.68 × 10^3 | 4.32 × 10^4 |
| | | mean | 6.27 × 10^3 | 5.48 × 10^3 | 9.31 × 10^2 | 1.63 × 10^5 | 1.66 × 10^7 | 4.60 × 10^9 | 2.59 × 10^9 | 1.85 × 10^4 | 7.04 × 10^3 | 5.91 × 10^4 |
| | | std | 2.45 × 10^3 | 2.53 × 10^3 | 2.84 × 10^2 | 1.81 × 10^5 | 8.93 × 10^7 | 2.39 × 10^9 | 7.73 × 10^8 | 2.58 × 10^4 | 3.52 × 10^3 | 6.47 × 10^4 |
| | | p-value | - | 1.36 × 10^−7 − | 8.02 × 10^−17 − | 1.96 × 10^−5 + | 2.18 × 10^−1 = | 8.52 × 10^−15 + | 1.90 × 10^−25 + | 1.36 × 10^−2 + | 3.38 × 10^−1 = | 4.82 × 10^−5 + |
| F21–30 | | w/t/l | - | 5/2/3 | 6/0/4 | 10/0/0 | 9/1/0 | 10/0/0 | 10/0/0 | 9/1/0 | 6/4/0 | 10/0/0 |
| Whole set | | w/t/l | - | 12/6/11 | 22/3/4 | 26/2/1 | 24/5/0 | 28/0/1 | 29/0/0 | 24/4/1 | 15/10/4 | 29/0/0 |
| | | Rank | 1.90 | 2.34 | 6.28 | 5.48 | 7.86 | 8.72 | 9.21 | 4.90 | 2.76 | 5.55 |
Table 5. Statistical comparison results between DELPSO and the nine state-of-the-art PSO variants on the CEC 2017 benchmark set with different dimension sizes in terms of “w/t/l”.
| Category | D | XPSO | DNSPSO | TCSPSO | CLPSO-LS | AWPSO | DPLPSO | HCLPSO | SCDLPSO | TLBO-FL |
|---|---|---|---|---|---|---|---|---|---|---|
| Unimodal functions | 30 | 0/1/1 | 2/0/0 | 1/1/0 | 2/0/0 | 2/0/0 | 2/0/0 | 1/0/1 | 0/2/0 | 2/0/0 |
| | 50 | 1/1/0 | 2/0/0 | 1/1/0 | 0/2/0 | 2/0/0 | 2/0/0 | 0/2/0 | 1/1/0 | 1/1/0 |
| | 100 | 0/2/0 | 1/1/0 | 1/1/0 | 1/1/0 | 2/0/0 | 2/0/0 | 1/1/0 | 1/1/0 | 2/0/0 |
| Simple multimodal functions | 30 | 3/2/2 | 5/1/1 | 4/1/2 | 7/0/0 | 5/1/1 | 7/0/0 | 5/0/2 | 2/2/3 | 5/1/1 |
| | 50 | 5/1/1 | 6/1/0 | 5/2/0 | 7/0/0 | 6/0/1 | 7/0/0 | 5/1/1 | 4/0/3 | 6/0/1 |
| | 100 | 3/1/3 | 6/1/0 | 6/0/1 | 7/0/0 | 6/0/1 | 7/0/0 | 5/1/1 | 2/1/4 | 7/0/0 |
| Hybrid functions | 30 | 5/5/0 | 8/1/1 | 9/1/0 | 8/2/0 | 10/0/0 | 10/0/0 | 10/0/0 | 0/8/2 | 9/1/0 |
| | 50 | 8/2/0 | 9/0/1 | 8/2/0 | 8/2/0 | 10/0/0 | 9/1/0 | 9/1/0 | 6/4/0 | 8/2/0 |
| | 100 | 4/1/5 | 9/1/0 | 9/1/0 | 7/3/0 | 10/0/0 | 10/0/0 | 9/1/0 | 6/4/0 | 10/0/0 |
| Composition functions | 30 | 8/1/1 | 8/0/2 | 10/0/0 | 8/2/0 | 10/0/0 | 10/0/0 | 7/3/0 | 6/3/1 | 9/1/0 |
| | 50 | 9/1/0 | 7/1/2 | 10/0/0 | 10/0/0 | 10/0/0 | 10/0/0 | 8/1/1 | 5/3/2 | 10/0/0 |
| | 100 | 5/2/3 | 6/0/4 | 10/0/0 | 9/1/0 | 10/0/0 | 10/0/0 | 9/1/0 | 6/4/0 | 10/0/0 |
| Whole set | 30 | 16/9/4 | 23/2/4 | 24/3/2 | 25/4/0 | 27/1/1 | 29/0/0 | 23/3/3 | 8/15/6 | 25/3/1 |
| | 50 | 23/5/1 | 24/2/3 | 22/5/0 | 25/4/0 | 28/0/1 | 28/1/0 | 22/5/2 | 16/8/5 | 25/3/1 |
| | 100 | 12/6/11 | 22/3/4 | 26/2/1 | 24/5/0 | 28/0/1 | 29/0/0 | 24/4/1 | 15/10/4 | 29/0/0 |
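Each “w/t/l” triple counts, over the functions of a category, how often DELPSO wins against a compared algorithm (significantly better, marker “+”), ties (no significant difference, “=”), and loses (significantly worse, “−”). Assuming per-function two-sided p-values from a significance test (such as the Wilcoxon rank-sum test used to produce the p-value rows) together with the corresponding mean errors, the tally can be sketched as follows; the function name and parameters here are illustrative, not the paper's code:

```python
def tally_wtl(p_values, mean_a, mean_b, alpha=0.05):
    """Count wins/ties/losses of algorithm A against algorithm B.
    p_values[i]: two-sided p-value of the significance test on function i;
    mean_a[i], mean_b[i]: mean errors of A and B on function i
    (lower is better); alpha: significance level."""
    w = t = l = 0
    for p, a, b in zip(p_values, mean_a, mean_b):
        if p >= alpha:      # difference not significant -> tie
            t += 1
        elif a < b:         # significant and A is better -> win
            w += 1
        else:               # significant and A is worse -> loss
            l += 1
    return w, t, l

# toy example over three functions
print(tally_wtl([0.001, 0.20, 0.03],
                [1.0, 2.0, 5.0],
                [3.0, 2.1, 4.0]))  # → (1, 1, 1)
```

Summing such per-category triples over the whole benchmark set yields the “Whole Set” rows of Table 5.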
Table 6. Comparison results between different versions of DELPSO on the 50-D CEC 2017 benchmark functions.
| F | DELPSO | DELPSO-A | DELPSO-D | DELPSO-R |
|---|---|---|---|---|
| F1 | 2.86 × 10^3 | 2.61 × 10^3 | 9.93 × 10^2 | 2.47 × 10^3 |
| F3 | 6.46 × 10^3 | 7.48 × 10^3 | 6.44 × 10^4 | 7.57 × 10^3 |
| F4 | 5.65 × 10^1 | 8.33 × 10^1 | 1.51 × 10^2 | 7.44 × 10^1 |
| F5 | 2.65 × 10^1 | 3.47 × 10^1 | 3.10 × 10^2 | 2.11 × 10^2 |
| F6 | 3.52 × 10^−8 | 3.21 × 10^−3 | 1.14 × 10^−13 | 1.60 × 10^−9 |
| F7 | 3.26 × 10^2 | 7.75 × 10^1 | 3.48 × 10^2 | 3.46 × 10^2 |
| F8 | 5.54 × 10^1 | 3.24 × 10^1 | 3.05 × 10^2 | 2.06 × 10^2 |
| F9 | 2.11 × 10^−2 | 7.86 × 10^0 | 2.85 × 10^−1 | 6.95 × 10^−2 |
| F10 | 8.27 × 10^3 | 2.32 × 10^3 | 1.22 × 10^4 | 1.18 × 10^4 |
| F11 | 4.20 × 10^1 | 1.21 × 10^2 | 1.24 × 10^3 | 3.47 × 10^1 |
| F12 | 2.29 × 10^5 | 6.37 × 10^4 | 2.31 × 10^6 | 6.02 × 10^5 |
| F13 | 2.61 × 10^3 | 3.45 × 10^3 | 9.90 × 10^2 | 1.73 × 10^3 |
| F14 | 2.35 × 10^4 | 1.04 × 10^4 | 8.12 × 10^5 | 2.97 × 10^4 |
| F15 | 4.91 × 10^3 | 6.10 × 10^3 | 6.87 × 10^3 | 5.57 × 10^3 |
| F16 | 4.11 × 10^2 | 5.55 × 10^2 | 4.49 × 10^2 | 3.79 × 10^2 |
| F17 | 3.23 × 10^2 | 4.08 × 10^2 | 1.14 × 10^3 | 4.81 × 10^2 |
| F18 | 6.39 × 10^4 | 5.17 × 10^4 | 1.96 × 10^6 | 2.49 × 10^5 |
| F19 | 1.29 × 10^4 | 1.64 × 10^4 | 1.56 × 10^4 | 1.54 × 10^4 |
| F20 | 5.60 × 10^1 | 1.54 × 10^2 | 3.79 × 10^2 | 4.05 × 10^1 |
| F21 | 2.28 × 10^2 | 2.37 × 10^2 | 4.97 × 10^2 | 3.53 × 10^2 |
| F22 | 2.50 × 10^3 | 2.11 × 10^3 | 1.27 × 10^3 | 1.42 × 10^3 |
| F23 | 4.57 × 10^2 | 4.94 × 10^2 | 4.90 × 10^2 | 4.48 × 10^2 |
| F24 | 5.27 × 10^2 | 5.60 × 10^2 | 5.46 × 10^2 | 5.20 × 10^2 |
| F25 | 5.39 × 10^2 | 5.09 × 10^2 | 5.75 × 10^2 | 5.50 × 10^2 |
| F26 | 1.30 × 10^3 | 1.70 × 10^3 | 1.30 × 10^3 | 1.28 × 10^3 |
| F27 | 5.96 × 10^2 | 6.22 × 10^2 | 6.48 × 10^2 | 6.07 × 10^2 |
| F28 | 4.98 × 10^2 | 4.75 × 10^2 | 5.40 × 10^2 | 5.04 × 10^2 |
| F29 | 4.67 × 10^2 | 6.13 × 10^2 | 4.57 × 10^2 | 4.12 × 10^2 |
| F30 | 8.08 × 10^5 | 8.61 × 10^5 | 9.23 × 10^5 | 8.32 × 10^5 |
| Rank | 1.90 | 2.62 | 3.31 | 2.17 |
Table 7. Comparison results between DELPSO with the dynamic swarm partition strategy and the versions with different fixed settings of egs on the 50-D CEC 2017 benchmark functions.
| F | Dynamic | egs = 0.2 * NP | egs = 0.3 * NP | egs = 0.4 * NP | egs = 0.5 * NP | egs = 0.6 * NP | egs = 0.7 * NP | egs = 0.8 * NP |
|---|---|---|---|---|---|---|---|---|
| F1 | 2.86 × 10^3 | 3.98 × 10^3 | 3.44 × 10^3 | 4.27 × 10^3 | 2.69 × 10^3 | 3.21 × 10^3 | 1.40 × 10^3 | 2.26 × 10^3 |
| F3 | 6.46 × 10^3 | 4.60 × 10^4 | 2.50 × 10^4 | 1.26 × 10^4 | 8.73 × 10^3 | 6.49 × 10^3 | 6.38 × 10^3 | 6.36 × 10^3 |
| F4 | 5.65 × 10^1 | 1.46 × 10^2 | 1.30 × 10^2 | 9.69 × 10^1 | 6.45 × 10^1 | 6.41 × 10^1 | 6.31 × 10^1 | 7.07 × 10^1 |
| F5 | 2.65 × 10^1 | 4.83 × 10^1 | 3.55 × 10^1 | 2.82 × 10^1 | 2.65 × 10^1 | 2.32 × 10^1 | 2.14 × 10^1 | 5.19 × 10^1 |
| F6 | 3.52 × 10^−8 | 3.46 × 10^−3 | 1.77 × 10^−3 | 1.34 × 10^−6 | 4.20 × 10^−7 | 9.97 × 10^−8 | 4.15 × 10^−8 | 7.01 × 10^−8 |
| F7 | 3.26 × 10^2 | 8.78 × 10^1 | 7.69 × 10^1 | 7.15 × 10^1 | 1.02 × 10^2 | 2.22 × 10^2 | 2.60 × 10^2 | 3.25 × 10^2 |
| F8 | 5.54 × 10^1 | 4.85 × 10^1 | 3.46 × 10^1 | 3.13 × 10^1 | 2.57 × 10^1 | 2.47 × 10^1 | 2.23 × 10^1 | 2.06 × 10^1 |
| F9 | 2.11 × 10^−2 | 1.45 × 10^0 | 9.08 × 10^−1 | 3.35 × 10^−1 | 1.60 × 10^−1 | 6.04 × 10^−2 | 1.51 × 10^−1 | 8.77 × 10^−2 |
| F10 | 8.27 × 10^3 | 3.12 × 10^3 | 2.39 × 10^3 | 2.51 × 10^3 | 3.22 × 10^3 | 3.68 × 10^3 | 5.17 × 10^3 | 7.90 × 10^3 |
| F11 | 4.20 × 10^1 | 1.35 × 10^2 | 9.59 × 10^1 | 7.18 × 10^1 | 6.83 × 10^1 | 6.00 × 10^1 | 5.00 × 10^1 | 4.64 × 10^1 |
| F12 | 2.29 × 10^5 | 9.52 × 10^5 | 7.32 × 10^5 | 4.59 × 10^5 | 2.98 × 10^5 | 3.12 × 10^5 | 2.62 × 10^5 | 2.42 × 10^5 |
| F13 | 2.61 × 10^3 | 4.55 × 10^3 | 4.12 × 10^3 | 2.03 × 10^3 | 2.58 × 10^3 | 2.63 × 10^3 | 1.96 × 10^3 | 1.79 × 10^3 |
| F14 | 2.35 × 10^4 | 4.79 × 10^4 | 3.75 × 10^4 | 2.46 × 10^4 | 2.08 × 10^4 | 2.47 × 10^4 | 2.59 × 10^4 | 2.62 × 10^4 |
| F15 | 4.91 × 10^3 | 6.88 × 10^3 | 6.00 × 10^3 | 5.62 × 10^3 | 4.79 × 10^3 | 7.97 × 10^3 | 4.97 × 10^3 | 4.11 × 10^3 |
| F16 | 4.11 × 10^2 | 7.30 × 10^2 | 5.40 × 10^2 | 5.64 × 10^2 | 4.48 × 10^2 | 4.02 × 10^2 | 4.54 × 10^2 | 4.24 × 10^2 |
| F17 | 3.23 × 10^2 | 5.68 × 10^2 | 4.63 × 10^2 | 3.17 × 10^2 | 3.07 × 10^2 | 3.47 × 10^2 | 3.18 × 10^2 | 3.24 × 10^2 |
| F18 | 6.39 × 10^4 | 4.47 × 10^5 | 1.62 × 10^5 | 1.17 × 10^5 | 7.29 × 10^4 | 7.73 × 10^4 | 7.34 × 10^4 | 5.40 × 10^4 |
| F19 | 1.29 × 10^4 | 1.46 × 10^4 | 1.44 × 10^4 | 1.30 × 10^4 | 1.55 × 10^4 | 1.29 × 10^4 | 1.46 × 10^4 | 1.51 × 10^4 |
| F20 | 5.60 × 10^1 | 2.70 × 10^2 | 1.51 × 10^2 | 1.20 × 10^2 | 7.58 × 10^1 | 7.82 × 10^1 | 8.20 × 10^1 | 8.24 × 10^1 |
| F21 | 2.28 × 10^2 | 2.46 × 10^2 | 2.38 × 10^2 | 2.30 × 10^2 | 2.28 × 10^2 | 2.25 × 10^2 | 2.31 × 10^2 | 2.21 × 10^2 |
| F22 | 2.50 × 10^3 | 3.58 × 10^3 | 2.17 × 10^3 | 1.88 × 10^3 | 1.77 × 10^3 | 1.47 × 10^3 | 8.32 × 10^2 | 1.84 × 10^3 |
| F23 | 4.57 × 10^2 | 4.95 × 10^2 | 4.79 × 10^2 | 4.71 × 10^2 | 4.59 × 10^2 | 4.63 × 10^2 | 4.54 × 10^2 | 4.59 × 10^2 |
| F24 | 5.27 × 10^2 | 5.58 × 10^2 | 5.45 × 10^2 | 5.41 × 10^2 | 5.36 × 10^2 | 5.34 × 10^2 | 5.30 × 10^2 | 5.32 × 10^2 |
| F25 | 5.39 × 10^2 | 5.36 × 10^2 | 5.40 × 10^2 | 5.39 × 10^2 | 5.41 × 10^2 | 5.36 × 10^2 | 5.23 × 10^2 | 5.39 × 10^2 |
| F26 | 1.30 × 10^3 | 1.79 × 10^3 | 1.58 × 10^3 | 1.55 × 10^3 | 1.52 × 10^3 | 1.46 × 10^3 | 1.37 × 10^3 | 1.41 × 10^3 |
| F27 | 5.96 × 10^2 | 6.45 × 10^2 | 6.27 × 10^2 | 6.11 × 10^2 | 6.08 × 10^2 | 6.05 × 10^2 | 6.02 × 10^2 | 6.07 × 10^2 |
| F28 | 4.98 × 10^2 | 5.09 × 10^2 | 4.98 × 10^2 | 5.01 × 10^2 | 4.89 × 10^2 | 4.96 × 10^2 | 4.98 × 10^2 | 5.01 × 10^2 |
| F29 | 4.67 × 10^2 | 7.80 × 10^2 | 5.62 × 10^2 | 5.55 × 10^2 | 5.20 × 10^2 | 5.37 × 10^2 | 5.12 × 10^2 | 4.55 × 10^2 |
| F30 | 8.08 × 10^5 | 9.01 × 10^5 | 8.79 × 10^5 | 8.60 × 10^5 | 8.50 × 10^5 | 8.44 × 10^5 | 8.57 × 10^5 | 8.27 × 10^5 |
| Rank | 3.03 | 7.24 | 6.21 | 5.10 | 3.86 | 3.76 | 3.21 | 3.59 |
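Table 7 shows that no single fixed elite-group size egs is best on all functions, which is what motivates the dynamic swarm partition strategy. DELPSO's actual schedule is defined in its method description; purely as an illustration of how egs could be varied with the consumed evaluation budget, a linear schedule between two hypothetical fractions of the swarm size (frac_start and frac_end are placeholders, not the paper's settings) might look like this:

```python
def elite_group_size(fes, max_fes, swarm_size, frac_start=0.2, frac_end=0.8):
    """Hypothetical schedule: egs moves linearly from frac_start * NP to
    frac_end * NP as the consumed fitness evaluations (fes) approach the
    total budget (max_fes). This is NOT DELPSO's published schedule, only
    an example of a dynamic swarm partition."""
    frac = frac_start + (frac_end - frac_start) * fes / max_fes
    # keep at least two elites so two differential exemplars always exist
    return max(2, round(frac * swarm_size))

# toy example with NP = 50 and a budget of 10,000 evaluations
print([elite_group_size(f, 10_000, 50) for f in (0, 5_000, 10_000)])
# → [10, 25, 40]
```

Whatever the concrete schedule, the partition determines how many particles enter the elite group (which passes unchanged to the next generation) versus the non-elite group (which learns from two differential elites), so it directly trades off exploitation against diversity.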
Yang, Q.; Guo, X.; Gao, X.-D.; Xu, D.-D.; Lu, Z.-Y. Differential Elite Learning Particle Swarm Optimization for Global Numerical Optimization. Mathematics 2022, 10, 1261. https://0-doi-org.brum.beds.ac.uk/10.3390/math10081261
