Abstract

The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also a tedious task. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.

1. Introduction

The simulated annealing (SA) algorithm, which was first independently presented as a search algorithm for combinatorial optimization problems in [1, 2], is a popular iterative metaheuristic widely used to address discrete and continuous optimization problems. The key feature of SA lies in its means of escaping from local optima: it allows hill-climbing moves in the search for a global optimum. One of the major shortcomings of SA is that it has several parameters to be tuned, and its performance is sensitive to the values of those control parameters. There are two general strategies for parameter tuning: online parameter tuning and off-line parameter tuning. In the online approach, the parameters are controlled and updated dynamically or adaptively throughout the execution of the SA algorithm [3–7], whereas in the off-line approach, the values of the different parameters are tuned before the execution of the SA algorithm and are fixed during its execution. Fine-tuning parameter values is not trivial, and these choices are quite often made poorly by trial and error. An SA algorithm that has fewer parameters, or that is less sensitive to parameter settings, is therefore very attractive for practical users.

Recently, a metaheuristic called the list-based threshold-accepting (LBTA) algorithm has been developed and has shown strong performance on combinatorial optimization problems that are NP-complete. The advantage of LBTA over the majority of other neighbourhood-search-based metaheuristics is that it has fewer control parameters that must be tuned in order to produce satisfactory solutions. Since its appearance, LBTA has been successfully applied to many combinatorial optimization problems [8–14].

In this work we are motivated by the success of LBTA in simplifying parameter tuning to study how a list-based parameter control strategy can be applied to the SA algorithm. Toward this goal, our paper presents a novel list-based cooling schedule for the SA algorithm to solve the travelling salesman problem (TSP), and we call the proposed algorithm the list-based simulated annealing (LBSA) algorithm. In the list-based cooling schedule, all temperatures are stored in a list that is organized as a priority queue, in which a higher temperature has higher priority. In LBSA, a list of temperatures is created first; then, in each generation, the maximum value in the list is used as the current temperature to calculate the acceptance probability of the candidate solution. The temperature list is updated adaptively according to the topology of the solution space of the problem. With the list-based cooling schedule, the SA algorithm performs well over a wide range of parameter values, and it is also competitive with some other state-of-the-art algorithms. The parameter robustness of the list-based cooling schedule can greatly simplify the design and implementation of the LBSA algorithm for practical applications.

The remainder of this paper is organized as follows: Section 2 provides a short description of TSP problem and SA algorithm. Section 3 presents our proposed list-based SA algorithm. Section 4 gives the experimental approach and results of experiments carried out on benchmark TSP problems. Finally, in Section 5 we summarize our study.

2. Preliminaries

2.1. Traveling Salesman Problem

The TSP problem is one of the most famous hard combinatorial optimization problems. It belongs to the class of NP-hard optimization problems, which means that no polynomial-time algorithm is known that guarantees its global optimal solution. Consider a salesman who has to visit $n$ cities. The TSP problem consists of finding the shortest tour through all the cities such that no city is visited twice and the salesman returns to the starting city at the end of the tour. It can be defined as follows. For an $n$-city problem, we can use a distance matrix $D = (d_{ij})_{n \times n}$ to store the distances between all pairs of cities, where each element $d_{ij}$ of the matrix represents the distance between cities $i$ and $j$. We use $\Omega$, the set of permutations of the integers from 1 to $n$, to represent all the possible tours of the problem. The goal is to find a permutation $\pi = (\pi(1), \pi(2), \ldots, \pi(n)) \in \Omega$ that minimizes
$$f(\pi) = \sum_{i=1}^{n-1} d_{\pi(i)\pi(i+1)} + d_{\pi(n)\pi(1)}. \tag{1}$$
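For concreteness, the following short Python sketch (not from the paper; the city coordinates are made up for illustration) computes the tour length $f(\pi)$ defined above from a distance matrix and a permutation of city indices.

```python
import math

def tour_length(perm, d):
    """Length of the closed tour that visits cities in the order given by perm."""
    n = len(perm)
    return sum(d[perm[i]][perm[(i + 1) % n]] for i in range(n))

# Four cities at the corners of a 4 x 3 rectangle; d[i][j] is the Euclidean distance.
cities = [(0, 0), (0, 3), (4, 3), (4, 0)]
d = [[math.dist(a, b) for b in cities] for a in cities]
print(tour_length([0, 1, 2, 3], d))  # 14.0: the perimeter of the rectangle
```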

TSP problem may be symmetric or asymmetric. In the symmetric TSP, the distance between two cities is the same in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions. In the asymmetric TSP, paths may not exist in both directions or the distances might be different, forming a directed graph. Traffic collisions, one-way streets, and airfares for cities with different departure and arrival fees are examples of how this symmetry could break down.

2.2. Simulated Annealing Algorithm

The SA algorithm is commonly said to be the oldest of the metaheuristics and is surely one of the few algorithms with an explicit strategy to avoid local minima. The origins of SA are in statistical mechanics, and it was first presented for combinatorial optimization problems. The fundamental idea is to accept moves resulting in solutions of worse quality than the current solution in order to escape from local minima. The probability of accepting such a move is decreased during the search through the temperature parameter. The SA algorithm starts with an initial solution $x$, and a candidate solution $y$ is then generated (either randomly or using some prespecified rule) from the neighbourhood of $x$. The Metropolis acceptance criterion [15], which models how a thermodynamic system moves from one state to another state while its energy is being minimized, is used to decide whether to accept $y$ or not. The candidate solution $y$ is accepted as the current solution based on the acceptance probability
$$p = \begin{cases} 1, & \text{if } f(y) \le f(x), \\ \exp\left(-\dfrac{f(y)-f(x)}{t}\right), & \text{otherwise}, \end{cases} \tag{2}$$
where $t$ is the temperature parameter. The SA algorithm can be described by Figure 1.
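As an illustration only (the variable names are ours, not the paper's), the Metropolis acceptance criterion of formula (2) can be sketched in Python as follows.

```python
import math
import random

def metropolis_accept(f_x, f_y, t):
    """Accept an improving candidate always; accept a worse candidate y
    with probability exp(-(f(y) - f(x)) / t), as in formula (2)."""
    if f_y <= f_x:
        return True
    return random.random() < math.exp(-(f_y - f_x) / t)
```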

In order to apply the SA algorithm to a specific problem, one must specify the neighbourhood structure and the cooling schedule. These choices and their corresponding parameter settings can have a significant impact on SA's performance. Unfortunately, no single choice of these strategies is good for all problems, and there is no general easy way to find the optimal parameter set for a given problem. A cooling schedule consists of a starting temperature, a temperature decrement function, a Markov chain length, and a termination condition. The geometric cooling schedule, which can be described by the temperature-update formula $t_{k+1} = \alpha\, t_k$, is probably the most commonly used in the SA literature and acts as a baseline for comparison with other more elaborate schemes [16]. Typical values of $\alpha$ for moderately slow cooling rates are 0.8 through 0.99.
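A minimal sketch of the geometric temperature update follows; the starting temperature of 100 and cooling coefficient of 0.95 are arbitrary illustrative values, not tuned settings.

```python
# Geometric cooling: t_{k+1} = alpha * t_k with a constant cooling coefficient.
t, alpha, n_outer = 100.0, 0.95, 1000
schedule = []
for k in range(n_outer):
    schedule.append(t)
    t *= alpha
print(schedule[0], schedule[-1])  # starts at 100.0 and decays to 100 * 0.95**999
```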

For practical applications where the computational complexity of the objective function is high, the SA algorithm may run with a constant Markov chain length and use a fixed number of iterations as the termination condition. As a result, the initial temperature and the cooling coefficient are the two key parameters that determine the performance of the SA algorithm. Even in this simple situation, it is not an easy task to find optimal values for these two parameters, because SA's performance is sensitive to them. One way to tackle the parameter-setting problem of the SA algorithm is to use an adaptive cooling strategy [3–7]. An adaptive cooling strategy is certainly efficient and makes the SA algorithm less sensitive to user-defined parameters than canonical SA, but it also makes the SA algorithm lose its simplicity. Another way to tackle this problem is to find a new cooling schedule that has fewer parameters or whose parameters are more robust.

2.3. Simulated Annealing Algorithm for TSP Problems

In order to use the SA algorithm for the TSP problem, we can represent a solution as a permutation $x = (x_1, x_2, \ldots, x_n)$ of the cities. Then a large set of operators, such as the famous 2-Opt, 3-Opt, inverse, insert, and swap operators, can be used to generate a candidate solution $y$. Since its appearance, the SA algorithm has been widely and deeply studied on TSP problems [17–21]. Many cooling schedules, such as geometric, exponential, logarithmic, and arithmetic ones and their variants, have been used in the literature.

The theoretical convergence conditions of SA make it very time consuming in most cases [22]. Wang et al. [23] proposed a two-stage SA algorithm which was tested on 23 TSP benchmark instances with scales from 51 to 783 cities. Their numerical results show that, because of its time complexity, it is difficult for the SA algorithm to solve TSP benchmark instances with more than 1000 cities. Geng et al. [24] proposed an adaptive simulated annealing algorithm with greedy search (ASA-GS), in which a greedy search technique is used to speed up the convergence rate. ASA-GS achieves a reasonable trade-off among computation time, solution quality, and complexity of implementation. It has good scalability and performs well even on TSP benchmark instances with more than 1000 cities. Wang et al. [25] proposed a multiagent SA algorithm with instance-based sampling (MSA-IBS) that exploits the learning ability of instance-based search algorithms. The learning-based sampling effectively improves SA's sampling efficiency. Simulation results showed that the performance of MSA-IBS is far better than that of ASA-GS in terms of solution accuracy and CPU time. In this paper, MSA-IBS is used as the basis on which the list-based cooling schedule is applied.

3. List-Based Simulated Annealing Algorithm

3.1. The Neighbourhood Structure

In this paper, we use the greedy hybrid operator proposed by Wang et al. [25] to produce candidate solutions. This is a multiple-move operator, which selects the best of three neighbours. Specifically, after two positions $i$ and $j$ are selected, it uses the inverse operator, the insert operator, and the swap operator to produce three neighbour solutions, and the best one is used as the candidate solution. The inverse, insert, and swap operators are defined as follows.

Definition 1 (inverse operator $\mathrm{inverse}(x, i, j)$). The inverse operator reverses the order of the cities between positions $i$ and $j$. Applied to a solution $x$, $\mathrm{inverse}(x, i, j)$ generates a new solution $y$ such that $y_i = x_j$, $y_{i+1} = x_{j-1}, \ldots, y_j = x_i$, where $i < j$; if $i > j$, the two positions are exchanged first. Two edges are replaced by the inverse operator for symmetric TSP problems.

Definition 2 (insert operator $\mathrm{insert}(x, i, j)$). The insert operator moves the city at position $i$ to position $j$. It generates a new solution $y$ such that, in the case $i < j$, $y_k = x_k$ for $k < i$ or $k > j$, $y_k = x_{k+1}$ for $i \le k < j$, and $y_j = x_i$; in the case $i > j$, $y_k = x_k$ for $k < j$ or $k > i$, $y_j = x_i$, and $y_k = x_{k-1}$ for $j < k \le i$. In general, three edges are replaced by the insert operator.

Definition 3 (swap operator $\mathrm{swap}(x, i, j)$). The swap operator exchanges the city at position $i$ with the city at position $j$. It generates a new solution $y$ such that $y_i = x_j$ and $y_j = x_i$. In general, four edges are replaced by the swap operator.

Using the above three operators, the hybrid greedy operator can be defined as
$$y = \mathrm{best}\bigl(\mathrm{inverse}(x, i, j),\ \mathrm{insert}(x, i, j),\ \mathrm{swap}(x, i, j)\bigr), \tag{3}$$
where $\mathrm{best}(\cdot)$ returns the best (shortest-tour) solution among its arguments.
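The sketch below gives one possible Python reading of Definitions 1-3 and of formula (3); it reuses the hypothetical `tour_length` helper from the sketch in Section 2.1 and is not the authors' implementation.

```python
def inverse(x, i, j):
    """Reverse the cities between positions i and j (Definition 1)."""
    i, j = min(i, j), max(i, j)
    return x[:i] + x[i:j + 1][::-1] + x[j + 1:]

def insert(x, i, j):
    """Move the city at position i to position j (Definition 2)."""
    y = x[:i] + x[i + 1:]   # remove the city at position i
    y.insert(j, x[i])       # reinsert it at position j
    return y

def swap(x, i, j):
    """Exchange the cities at positions i and j (Definition 3)."""
    y = x[:]
    y[i], y[j] = y[j], y[i]
    return y

def hybrid(x, i, j, d):
    """Greedy hybrid operator of formula (3): the best of the three neighbours."""
    candidates = [inverse(x, i, j), insert(x, i, j), swap(x, i, j)]
    return min(candidates, key=lambda y: tour_length(y, d))
```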

3.2. Produce Initial Temperature List

As in the LBTA algorithms [8–14], the list-based parameter control strategy needs to produce an initial list of parameters. Because the temperature in SA is used to calculate the acceptance probability of a candidate solution, we use an initial acceptance probability to produce the temperatures as follows. Suppose $x$ is the current solution and $y$ is a candidate solution randomly selected from $x$'s neighbours, with objective function values $f(x)$ and $f(y)$, respectively. If $y$ is worse than $x$, then the acceptance probability of $y$ can be calculated using formula (2). Conversely, if we know the acceptance probability $p_0$, then we can calculate the corresponding temperature as
$$t = \frac{-\,|f(y) - f(x)|}{\ln p_0}. \tag{4}$$

Figure 2 is the flowchart for producing the initial temperature list. After a feasible solution $x$ is produced, a candidate solution $y$ is randomly selected from $x$'s neighbours. If $y$ is better than $x$, then $x$ is replaced by $y$. Using formula (4), the absolute value of $f(y) - f(x)$ is used to calculate an initial temperature value $t$, which is then inserted into the initial temperature list. The temperature list is a priority list in which a higher temperature has higher priority. The same procedure is repeated until the list of temperature values is filled.
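A sketch of the initialization procedure of Figure 2, assuming the `tour_length` and `hybrid` helpers from the earlier sketches; Python's `heapq` is a min-heap, so temperatures are stored negated to obtain the priority list in which the highest temperature is on top.

```python
import heapq
import math
import random

def initial_temperature_list(x, d, length, p0):
    """Fill a temperature list of the given length using formula (4)."""
    heap = []                       # stores -t, so -heap[0] is the maximum temperature
    x = x[:]
    fx = tour_length(x, d)
    while len(heap) < length:
        i, j = random.sample(range(len(x)), 2)
        y = hybrid(x, i, j, d)      # a randomly generated neighbour
        fy = tour_length(y, d)
        if fy != fx:                # skip zero differences, which would give t = 0
            t = -abs(fy - fx) / math.log(p0)
            heapq.heappush(heap, -t)
        if fy < fx:                 # keep the better of the two solutions
            x, fx = y, fy
    return heap, x
```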

3.3. Temperature Controlling Procedure

In each outer iteration $k$, the maximum temperature $t_{\max}$ in the list is used as the current temperature $t$. If the Markov chain length is $M$, then $t$ may be used at most $M$ times to calculate the acceptance probability of a candidate solution. Suppose there are $m$ times at which a bad solution is accepted as the current solution; we use $\Delta f_i$ and $p_i$ to denote the difference of objective function values and the corresponding acceptance probability, respectively, where $i = 1, 2, \ldots, m$. The relation between $\Delta f_i$ and $p_i$ is
$$p_i = \exp\left(-\frac{\Delta f_i}{t}\right). \tag{5}$$

According to the Metropolis acceptance criterion, each time a bad candidate solution is met, a random number $r_i$ is created. If $r_i$ is less than the acceptance probability, then the bad candidate solution is accepted. So, for each pair of $\Delta f_i$ and $p_i$, there is a random number $r_i$ with $r_i < p_i$. We use $r_i$ and $\Delta f_i$ to calculate a new temperature $t_i$ as
$$t_i = \frac{-\,\Delta f_i}{\ln r_i}. \tag{6}$$

We use the average of the $t_i$ to update the temperature list. Specifically, $t_{\max}$ is removed from the list, and the average of the $t_i$ is inserted into the list. Because $r_i$ is always less than $p_i$, each $t_i$ is less than $t$, and so their average is less than $t$ as well. In this way, the temperature always decreases as the search proceeds.
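Concretely, a single temperature-list update could look like the following sketch (the names are ours); `heap` is the negated-temperature heap from the previous sketch, and `bad_moves` collects the pairs $(\Delta f_i, r_i)$ of the bad candidates accepted at the current temperature.

```python
import heapq
import math

def update_temperature_list(heap, bad_moves):
    """Replace the maximum temperature by the average of the t_i of formula (6)."""
    if not bad_moves:
        return                      # no bad solution accepted: keep the list unchanged
    new_ts = [-delta_f / math.log(r) for delta_f, r in bad_moves]
    # heapreplace pops the smallest entry, i.e. -t_max, and pushes the new value.
    heapq.heapreplace(heap, -(sum(new_ts) / len(new_ts)))
```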

3.4. Description of LBSA Algorithm

Because the main purpose of this paper is to study the effectiveness of the list-based cooling schedule, we use a simple SA framework with a fixed number of outer-loop iterations and a fixed Markov chain length at each temperature. The detailed flowchart of the main optimization procedure is shown in Figure 3. Besides the creation of the initial temperature list, the main difference between the flowchart of LBSA and that of canonical SA is the temperature control strategy. In canonical SA, the temperature control procedure is independent of the topology of the solution space of the problem; conversely, the temperature control procedure of LBSA adapts to the topology of the solution space. In Figure 3, the maximum number of outer-loop iterations is the termination condition of LBSA, and the Markov chain length is the termination condition of the inner loop. One counter records the current outer-loop iteration, another records the current sampling count of the inner loop, and a third records how many bad solutions are accepted at each temperature. A further variable accumulates the total of the temperatures calculated by formula (6), and the average temperature is used to update the temperature list. We can use a maximum heap to implement the temperature list. As the time complexity of insertion into and retrieval from a heap is logarithmic, the use of the temperature list does not increase the time complexity of the SA algorithm.
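Putting the pieces together, the following sketch mirrors the single-solution loop of Figure 3; it reuses the hypothetical helpers defined in the earlier sketches, and the default parameter values are illustrative rather than the settings used in the experiments.

```python
import math
import random

def lbsa(d, n_outer=1000, markov_len=None, list_len=120, p0=0.1):
    """List-based simulated annealing for a distance matrix d (a sketch)."""
    n = len(d)
    markov_len = markov_len or n
    x = list(range(n))
    random.shuffle(x)
    heap, x = initial_temperature_list(x, d, list_len, p0)
    fx = tour_length(x, d)
    best, f_best = x[:], fx
    for k in range(n_outer):
        t_max = -heap[0]                      # current temperature
        bad_moves = []                        # (delta_f, r) of accepted bad candidates
        for _ in range(markov_len):
            i, j = random.sample(range(n), 2)
            y = hybrid(x, i, j, d)
            fy = tour_length(y, d)
            if fy <= fx:
                x, fx = y, fy
            else:
                r = random.random()
                if r < math.exp(-(fy - fx) / t_max):
                    bad_moves.append((fy - fx, r))
                    x, fx = y, fy
            if fx < f_best:
                best, f_best = x[:], fx
        update_temperature_list(heap, bad_moves)
    return best, f_best
```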

4. Simulation Results

In order to observe and analyse the effect of list-based cooling schedule and the performance of LBSA algorithm, five kinds of experiments were carried out on benchmark TSP problems. The first kind of experiments was used to analyse the effectiveness of the list-based cooling schedule. The second kind was carried out to analyse the parameter sensitivity of LBSA algorithm. The third kind was carried out to compare the performance of different ways to update the temperature list. The fourth kind was carried out to compare the performance of different neighbourhood structures. And the fifth kind was carried out to compare LBSA’s performance with some of the state-of-the-art algorithms.

4.1. The Effectiveness of List-Based Cooling Schedule

To illustrate the effectiveness of the list-based cooling schedule, we compare the temperature variation and the search process of the LBSA algorithm with SA algorithms based on the classical geometric cooling schedule and the arithmetic cooling schedule. The experiments were carried out on the BCL380, XQL662, XIT1083, and XSC6880 problems from the VLSI data sets. The best known integer solutions of these problems are 1621, 2513, 3558, and 21537, respectively. The number of outer-loop iterations is 1000, and the Markov chain length at each temperature equals the number of cities of the problem, that is, 380, 662, 1083, and 6880, respectively.

Figure 4 compares the temperature variation of the different cooling schedules on the four benchmark problems. The list-based cooling schedule can be viewed as a kind of geometric cooling schedule with a variable cooling coefficient. Compared with the geometric cooling schedule, the temperature of the list-based cooling schedule decreases more quickly in the early stage but more slowly in the later stage. As indicated by Abramson et al. [16], the geometric cooling schedule uses a constant cooling coefficient regardless of the stage of the search. However, at high temperatures almost all candidate solutions are accepted, even though many of them could be nonproductive. Using variable, stage-dependent cooling coefficients allows the SA algorithm to spend less time in the high-temperature stages. Consequently, more time can be spent in the low-temperature stages, thus reducing the total amount of time required to solve the problem.

Figure 5 compares the search process of the different cooling schedules on the four benchmark problems. Compared with the geometric and arithmetic cooling schedules, the list-based cooling schedule converges more quickly and also has good long-term behaviour. This good performance may be due to the variable cooling coefficient of the list-based cooling schedule: the temperature is updated adaptively according to the topology of the solution space of the problem, so the LBSA algorithm can spend more time searching promising areas finely.

4.2. Sensitivity Analysis of Temperature List Length and Initial Temperature Values

We observe the sensitivity to the list length on the BCL380, XQL662, XIT1083, and XSC6880 problems. For each problem, we test 30 different list lengths from 10 to 300 with a step of 10. For each list length, we run the LBSA algorithm 50 times and calculate the average tour length and the percentage error of the mean tour length with respect to the best known tour length. Figure 6 shows the relation between percentage error and list length. It shows the following: (1) for all the problems, there is a wide range of list lengths over which LBSA has similarly promising performance; (2) the list length is more robust on small problems than on large problems; (3) the list length should not be too big for large problems. For big problems, the Markov chain length used here, which equals the number of cities, is not large enough for LBSA to reach equilibrium at each temperature. A big list length means the temperature decreases more slowly, so the algorithm spends too much time at high temperatures and accepts too many nonproductive solutions. As a result, a big list length dramatically deteriorates LBSA's performance on large problems. Because the temperature list length is robust, we use a fixed temperature list length of 120 for all instances in the following simulations.

In order to observe the sensitivity to the initial temperature values, we carried out experiments with different initial temperature values on the BCL380, XQL662, XIT1083, and XSC6880 problems. Because the initial temperature values are produced according to the initial acceptance probability $p_0$, we use 30 different values of $p_0$, ranging from $10^{-20}$ to 0.9, to create the initial temperature values. Specifically, the set of $p_0$ values is the union of the geometric sequence from $10^{-20}$ to $10^{-2}$ with common ratio 10 and the arithmetic sequence from 0.1 to 0.9 with common difference 0.1. For each initial acceptance probability, we run the LBSA algorithm 50 times and calculate the average tour length and the percentage error of the mean tour length with respect to the best known tour length. Figure 7 shows the relation between the percentage error and the index of the initial acceptance probability. It shows the following: (1) the performance of LBSA is not sensitive to the initial temperature values; (2) the direction of variation differs among problems. For small problems the initial temperature should not be too low, whereas for big problems the initial temperature should not be too high. This difference is due to the limited computational resources used and the nonlinear computational complexity of the TSP problem. The LBSA algorithm has similarly promising performance over a wide range of initial temperature values; this high robustness with respect to the initial temperature is due to the adaptive nature of the list-based cooling schedule.

4.3. Comparing Different Temperature Updating Schemes

In our algorithm described in Section 3.4, we use the average of the temperatures $t_i$ to update the temperature list. There are several variants, such as using the maximum or the minimum $t_i$. To show the advantage of using the average of the $t_i$, we compare the results and the temperature decreasing processes of these three updating schemes. Table 1 shows the simulation results; it is clear that using the average temperature to update the temperature list performs far better than the other methods. The temperature decreasing processes of the different strategies on BCL380, shown in Figure 8, explain why using the average temperature is the best option. If we use the maximum temperature to update the temperature list, the temperature decreases very slowly. As a result, the acceptance probability of bad solutions stays high, and the SA algorithm searches randomly in the solution space. Conversely, if we use the minimum temperature to update the temperature list, the temperature decreases sharply. As a result, the SA algorithm is quickly trapped in a local minimum and loses the advantage of escaping from local minima through the Metropolis acceptance criterion.

4.4. Comparing Different Neighbourhood Structures

In our algorithm, we use the greedy hybrid neighbour operator proposed by Wang et al. [25] to produce candidate solutions. This hybrid operator uses the best solution produced by the inverse, insert, and swap operators. To show its advantage, we compare the results produced by the inverse, insert, swap, and hybrid operators, and we compare the percentages of the inverse, insert, and swap moves accepted when the hybrid operator is used. Table 2 shows the simulation results; it is clear that the hybrid operator has the best performance. Among the three basic operators, inverse is the best. If we compare the percentages of the different basic moves accepted when the hybrid operator is used, we find that the inverse move is accepted most often. Figure 9 shows the percentages of the different moves accepted on instance BCL380: the percentages of inverse, insert, and swap moves are 65%, 31%, and 4%, respectively. The relative percentages of the three moves accepted on the other instances are similar to those on BCL380. The good performance of the inverse operator is due to its ability to produce a more fine-grained neighbourhood structure, because it changes only two edges. The hybrid operator, which uses the inverse, insert, and swap operators at the same time, has a bigger neighbourhood structure, so it has a higher probability of finding promising solutions.

4.5. Competitiveness of LBSA Algorithm

We compare the LBSA algorithm with the genetic simulated annealing ant colony system (GSAACS) [26] and MSA-IBS on 24 benchmark instances with between 51 and 1655 cities. GSAACS is a hybrid algorithm that uses the ant colony system (ACS) to generate the initial solutions for the genetic algorithm. It then uses a genetic algorithm (GA), with SA as the mutation operator, to generate offspring solutions based on the initial solutions. If the solutions found by the GA are better than the initial solutions, GSAACS uses these better solutions to update the pheromone for the ACS. After a predefined number of generations, GSAACS uses particle swarm optimization (PSO) to exchange pheromone information between groups.

The results of GSAACS are from [26]. GSAACS uses 120 ants and 1000 generations, and in each generation of GSAACS the GA runs for 100 generations. In the MSA-IBS and LBSA algorithms, we set the population size to 30 and the number of outer-loop iterations to 1000, and the Markov chain length at each temperature is twice the number of cities. The end condition is that either the optimal solution is found or the number of outer-loop iterations reaches 1000. Each algorithm was executed 25 times on each TSP problem, and the results are listed in Table 3.

In Table 3, the columns Instance, OPT, Best, Mean, and PEav denote the name of the TSP problem, the optimal tour length from TSPLIB, the shortest tour length found, the average tour length among the 25 trials, and the percentage error of the mean tour length to the OPT, respectively. As can be seen in Table 3, both MSA-IBS and LBSA have better performance than GSAACS on all 24 instances. LBSA is a little better than MSA-IBS in terms of average percentage error.

We compare the LBSA algorithm with GA-PSO-ACO [27] and MSA-IBS on 35 benchmark instances with between 48 and 33810 cities. GA-PSO-ACO combines the evolutionary ideas of genetic algorithms, particle swarm optimization, and ant colony optimization. In the GA-PSO-ACO algorithm, the whole process is divided into two stages: in the first stage, GA and PSO are used to obtain a series of suboptimal solutions to adjust the initial allocation of pheromone for ACO; in the second stage, ACO is used to search for the optimal solution. The results of GA-PSO-ACO are from [27]. GA-PSO-ACO uses 100 individuals and 1000 generations. In the LBSA and MSA-IBS algorithms, we set the population size to 10 and the number of outer-loop iterations to 1000, and the Markov chain length at each temperature equals the number of cities. The end condition of LBSA is that either the optimal solution is found or the number of outer-loop iterations reaches 1000. Like GA-PSO-ACO and MSA-IBS, LBSA was executed 20 times on each TSP instance, and the results are listed in Table 4.

As can be seen in Table 4, both MSA-IBS and LBSA have better performance than GA-PSO-ACO on all 35 instances. LBSA is a little bit better than MSA-IBS in terms of average percentage error.

We compare the LBSA algorithm with ASA-GS [24] and the MSA-IBS algorithm on 40 benchmark instances with between 151 and 85900 cities. Our experiments were run on a 2.8 GHz PC, and ASA-GS was run on a 2.83 GHz PC. For all instances, we set the number of outer-loop iterations to 1000 and the Markov chain length at each temperature to the number of cities. As in the MSA-IBS algorithm, a suitable population size is selected for each instance such that the CPU time of LBSA is less than that of ASA-GS. The end condition of LBSA and MSA-IBS is that either the optimal solution is found or the number of outer-loop iterations reaches 1000. Each algorithm is executed for 25 trials on each instance, and the results are listed in Table 5.

As can be seen in Table 5, the average PEav of LBSA over all instances is 0.75, which is better than the 1.87 of ASA-GS, and the average CPU time of LBSA over all instances is 282.49, which is far less than the 650.31 of ASA-GS. LBSA is a little better than MSA-IBS in terms of average PEav.

5. Conclusion

This paper presents a list-based SA algorithm for TSP problems. The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of the temperature parameter. The list-based cooling schedule can be viewed as a special geometric cooling schedule with a variable cooling coefficient. The cooling coefficient varies adaptively according to the topology of the solution space, which makes the schedule more robust to the problem at hand. Simulation results show that this novel cooling schedule is insensitive to its parameter values and that the proposed LBSA algorithm is very effective, simple, and easy to implement. This parameter insensitivity is very attractive, allowing the LBSA algorithm to be applied to diverse problems without much effort spent on tuning parameters to produce promising results.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Natural Science Foundation of Fujian Province of China (nos. 2014J01219 and 2013J01216).