Abstract

This study considers a two-machine flowshop with a limited waiting time constraint between the two machines and sequence-dependent setup times on the second machine. These characteristics are motivated by semiconductor manufacturing systems. The objective of this scheduling problem is to minimize the total tardiness. A mixed-integer linear programming formulation is provided to define the problem mathematically and is used to find optimal solutions with a mathematical programming solver, CPLEX. Because the problem is NP-hard, CPLEX requires significantly long computation times, so a genetic algorithm (GA) is proposed to solve the problem within a short computation time. Computational experiments were performed to evaluate the performance of the proposed algorithm, and the suggested GA outperformed the other heuristics considered in the study.

1. Introduction

The two-machine flowshop scheduling problem with a limited waiting time constraint and sequence-dependent setup times investigated in this study is motivated by semiconductor manufacturing systems. In a two-machine flowshop, all jobs are processed on machine 1 first and then on machine 2 in the same order, which is called a permutation schedule. Owing to the limited waiting time constraint, each job must be started on the second machine within a limited waiting time after its completion on the first machine. Additionally, sequence-dependent setup times are incurred between two consecutive jobs on the second machine. The objective of this scheduling problem is the minimization of the total tardiness. Thus, the problem can be written as F2 | max-wait, SDST | ΣTj in the three-field notation suggested by [1], where max-wait and SDST refer to the limited waiting time constraint and the sequence-dependent setup times, respectively. Tj is the tardiness of job j, defined as Tj = max{0, Cj − dj}, where Cj and dj are the completion time and due date of job j, respectively. This scheduling problem is NP-hard because the typical two-machine flowshop total tardiness problem is already NP-hard [2].

The considered limited waiting time constraint is common in semiconductor wafer fabrication, which involves many chemical treatment processes. One example is a flowshop consisting of cleaning and diffusion processes [3]. If the surface of a wafer that has completed the cleaning process is exposed to air for a long time before the next process (the diffusion process), the likelihood of contamination increases and quality decreases. To prevent this, a limited waiting time constraint is set between the cleaning and diffusion processes. Additionally, if a chemical treatment is effective only when the treated wafer is processed on the next machine within a certain time after the treatment, the waiting time must be limited. If a wafer violates this time limit, the first operation must be reworked, or the wafer must be discarded in the case of severe contamination. Violations of the time constraint typically occur when production volume is high, reducing productivity and causing management losses.

Setup time refers to the time required for preparation before processing a job. If the setup time for a succeeding job varies depending on the preceding job, it is called a sequence-dependent setup time; in other words, the setup time depends on the sequence of the jobs. In semiconductor manufacturing systems, wafers for different product types may require different chemicals or operating temperatures even though they are processed on the same machine. Therefore, when sequence-dependent setup times are present, it is important to schedule jobs so as to minimize the total setup time.

The two-machine flowshop scheduling problem with sequence-dependent setup times (SDSTs) to minimize the makespan is closely related to the classical traveling salesman problem (TSP) [4], and dynamic programming methods [5, 6] and branch and bound algorithms [7, 8] have been proposed to obtain optimal solutions for the problem. However, because finding optimal solutions of the flowshop problem with SDSTs requires considerably long computation times, recent studies have focused on the development of metaheuristic algorithms: genetic algorithms [9–11], a variable neighborhood search algorithm [12], a migrating birds optimization algorithm [13], a discrete artificial bee colony algorithm [14], iterated greedy algorithms [15–17], and a local-search-based heuristic algorithm [18].

The two-machine flowshop scheduling problem with a limited waiting time (LWT) constraint to minimize the makespan has been proven to be NP-hard [19]. For this problem, [19, 20] suggest branch and bound algorithms to find optimal solutions, and [21] proposes several dominance properties for optimal solutions. The study in [22] considers a situation where some jobs skip the first machine in a two-machine flowshop and suggests an approximation algorithm to minimize the waiting time variation of jobs. The study in [23] considers a three-machine flowshop with overlapping waiting time constraints and proposes a branch and bound algorithm to minimize the makespan. The studies in [24, 25] consider flowshop problems with time lag constraints and develop a simulated annealing algorithm and a constructive heuristic algorithm with an insertion mechanism, respectively. On the other hand, for a flowshop problem with LWT and the objective of minimizing total tardiness, a heuristic algorithm and a Lagrangian relaxation-based lower bound strategy [26] have been developed.

Only a few studies have considered SDSTs and LWT together, although both are common and important in semiconductor manufacturing systems. The study in [3] suggests a branch and bound algorithm for the objective of minimizing the makespan. On the other hand, no-wait flowshop problems with SDSTs have been studied recently: [27] proposes a branch and bound algorithm for the no-wait flowshop, [28] proposes a pairwise iterated greedy algorithm, and [29] suggests an insertion neighborhood search algorithm to minimize total flowtime. Note that this study is the first attempt to consider the two-machine flowshop with SDSTs and LWT with the objective of minimizing total tardiness.

In this study, a mixed-integer linear programming formulation for the considered scheduling problem is provided to describe the problem clearly and to obtain optimal schedules with CPLEX. Additionally, because the problem is NP-hard, heuristic algorithms are developed to obtain good (or near-optimal) schedules in a short computation time rather than an optimal schedule. Three types of heuristics are proposed: list scheduling methods, a constructive heuristic, and a genetic algorithm. To demonstrate the performance of the proposed heuristic algorithms, computational experiments were performed on randomly generated instances.

In the remainder of this paper, Section 2 describes the problem considered in this study in more detail including several assumptions and presents a mixed-integer linear programming formulation. Section 3 proposes heuristic algorithms. Section 4 describes the computational experiments and shows the results. Finally, Section 5 concludes the paper with a short summary.

2. Problem Description

In this section, the considered scheduling problem is described in more detail, and a mixed-integer linear programming formulation is presented. There are n jobs to be processed in the two-machine flowshop with a limited waiting time constraint and sequence-dependent setup times on the second machine. Here, only permutation schedules are considered; that is, the job sequences on the two machines are the same. Although permutation schedules are not dominant for this problem, they are common in real situations because of the simplicity of work management, flexibility in material handling, and limited buffer space. Note that permutation schedules are dominant in the typical two-machine flowshop with regular measures, including total tardiness. Other assumptions are as follows:
(i) All jobs are ready to start at time zero.
(ii) Information on the jobs to be scheduled is given; that is, processing times, due dates, limited waiting times, and sequence-dependent setup times are known in advance.
(iii) Preemption is not allowed.
(iv) Each machine can process only one job at a time, and a job can be processed on only one machine at a time.

In this paper, symbols in Table 1 are used to describe the mathematical formulation and proposed algorithms.

Completion times and tardiness of the jobs in a given sequence are computed recursively with equations (1)–(5).

The mixed-integer linear programming (MILP) formulation for the problem is given as objective function (6) subject to constraints (7)–(19).

Objective function (6) minimizes the total tardiness. Constraints (7) and (8) ensure that each job is placed at exactly one position in the sequence and that only one job is placed at each position. Constraints (9)–(12) define the start times of the jobs in the considered flowshop. Constraints (13)–(15) define the relationship between the decision variables for the sequence-dependent setup times. Constraints (16) and (17) compute the completion times and tardiness of the jobs. Constraints (18) and (19) define the domains of the decision variables.
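To make the role of equations (1)–(5) concrete, the following sketch computes completion times and total tardiness for a permutation under the limited waiting time and sequence-dependent setups. The function name, argument layout, and the delay rule are illustrative assumptions, not the paper's exact recursion.

```python
def evaluate(seq, p1, p2, setup, wait, due):
    """Total tardiness of a permutation in the two-machine flowshop.

    p1[j], p2[j]: processing times on machines 1 and 2; setup[i][j]:
    setup on machine 2 when job j follows job i; wait[j]: maximum
    waiting time of job j between the machines; due[j]: due date.
    Assumed rule: if a job would wait too long before machine 2, its
    operation on machine 1 is delayed just enough to satisfy the limit.
    """
    c1 = 0.0           # completion time of the previous job on machine 1
    c2 = 0.0           # completion time of the previous job on machine 2
    prev = None
    total = 0.0
    for j in seq:
        s = setup[prev][j] if prev is not None else 0.0
        f1 = c1 + p1[j]             # tentative completion on machine 1
        b2 = max(f1, c2 + s)        # earliest start on machine 2
        if b2 - f1 > wait[j]:       # waiting limit violated:
            f1 = b2 - wait[j]       # delay the job on machine 1
        c1, c2 = f1, b2 + p2[j]
        total += max(0.0, c2 - due[j])
        prev = j
    return total
```

The same routine evaluates partial schedules, which is how the MNEH algorithm and the GA of Section 3 score candidate sequences.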

3. Heuristic Algorithms

In this section, three types of heuristic algorithms are proposed to solve the considered problem: list scheduling rules, a modified NEH algorithm (MNEH), and a genetic algorithm (GA). As only permutation schedules are considered in this study, these heuristic algorithms produce feasible permutation schedules, and the completion times and tardiness of each job are computed with equations (1)–(5) provided in Section 2.

3.1. List Scheduling Rules

List scheduling rules are widely used in practice (e.g., in semiconductor manufacturing systems) because they are intuitive and easy to develop and apply. Four list scheduling rules are used, as described below; schedules are obtained by sorting jobs in nondecreasing order of the corresponding priority values:
(i) Earliest due date (EDD)
(ii) Modified due date (MDD)
(iii) Slack
(iv) Greedy
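As an illustration, the sketch below implements EDD together with common textbook forms of MDD and Slack dispatching (priorities max(dj, t + pj) and dj − t − pj, respectively). These exact expressions, and the use of a single processing time per job, are assumptions; the paper's own priority formulas are those of this subsection.

```python
def edd(p, d):
    """Earliest due date: sort jobs in nondecreasing order of d_j."""
    return sorted(range(len(d)), key=lambda j: d[j])

def mdd(p, d):
    """Modified due date, dispatched dynamically: at time t, pick the
    remaining job minimizing max(d_j, t + p_j) (ties broken by index)."""
    t, seq, rem = 0, [], set(range(len(d)))
    while rem:
        j = min(rem, key=lambda j: (max(d[j], t + p[j]), j))
        rem.remove(j)
        seq.append(j)
        t += p[j]
    return seq

def slack(p, d):
    """Minimum slack: at time t, pick the job minimizing d_j - t - p_j."""
    t, seq, rem = 0, [], set(range(len(d)))
    while rem:
        j = min(rem, key=lambda j: (d[j] - t - p[j], j))
        rem.remove(j)
        seq.append(j)
        t += p[j]
    return seq
```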

3.2. Modified NEH Algorithm

The heuristic algorithm proposed in this subsection is modified from the constructive NEH algorithm [30], which is known to work well in flowshop scheduling problems and has been modified by many researchers for various scheduling problems. The procedure of MNEH is summarized as follows.

3.2.1. Procedure: MNEH

(i) Step 0. Obtain a seed sequence from a list scheduling method. Let σ be this sequence and set r = 2. Go to Step 1.
(ii) Step 1. Select the rth job in σ. Go to Step 2.
(iii) Step 2. For i = 1, …, r, insert the selected job into the ith position of the current partial schedule and compute the total tardiness.
(iv) Step 3. Among the partial schedules obtained in Step 2, select the one with the minimum total tardiness. Let π be the selected partial schedule. Go to Step 4.
(v) Step 4. Set r = r + 1. If r ≤ n, go to Step 1; otherwise, go to Step 5.
(vi) Step 5. Set i = 1 and j = i + 1. Go to Step 6.
(vii) Step 6. Generate a new sequence by interchanging the two jobs in the ith and jth positions of π. If the total tardiness of the new sequence is less than that of π, replace π with the new sequence and go to Step 5; otherwise, go to Step 7.
(viii) Step 7. Let j = j + 1. If j ≤ n, go to Step 6; otherwise, let i = i + 1 and j = i + 1. If i ≤ n − 1, go to Step 6; otherwise, terminate. π is the final solution of this heuristic.
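The insertion phase (Steps 0–4) can be sketched generically as follows; `cost` stands for the total tardiness evaluation of Section 2, and the function name is illustrative.

```python
def mneh_insertion(seed, cost):
    """Insertion phase of MNEH: take the jobs of the seed sequence one
    at a time and place each in the position of the current partial
    schedule that minimizes total tardiness. `cost` maps a (partial)
    sequence to its total tardiness."""
    best_seq = [seed[0]]
    for r in range(1, len(seed)):
        job = seed[r]
        candidates = [best_seq[:i] + [job] + best_seq[i:]
                      for i in range(len(best_seq) + 1)]
        best_seq = min(candidates, key=cost)
    return best_seq
```

The pairwise interchange improvement of Steps 5–7 would then scan all position pairs of the returned sequence, restarting whenever an interchange improves the total tardiness.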

3.3. Genetic Algorithm

The GA was first introduced in [31] as an optimization method that mimics natural evolution, and it is one of the most popular methods in the field of optimization because of its powerful search mechanism and wide applicability. The effectiveness of a GA depends on its procedure design, operators, and parameters; the following subsections describe the design of the GA proposed in this study.

3.3.1. Solution Representation

In the proposed GA, solutions are encoded as chromosomes, and the performance of the algorithm may vary depending on how a chromosome is represented. In this study, a chromosome is simply the sequence (permutation) of jobs because only permutation schedules are considered.

3.3.2. Initial Population

To generate an initial population, list scheduling methods and the MNEH algorithm are used. That is, four solutions are generated from the list scheduling methods and used to generate four further solutions by MNEH. However, Steps 5–7 of MNEH are not applied here because these steps require a long computation time in large-sized problems. The remainder of the initial population is generated randomly.

3.3.3. Fitness Evaluation

Fitness is the evaluated objective value of a solution; that is, it represents how good a solution is. Here, the total tardiness, the objective function of the considered problem, is used for fitness evaluation; the lower the total tardiness, the better the solution.

3.3.4. Selection

Selection is to choose a set of parents from the population to generate the offspring. In general, selection is based on the quality of the solutions and randomness. In the proposed GA, the two most widely used schemes in the literature (tournament and roulette) are used for selection.

In the tournament method, four chromosomes are selected randomly from the population, and pairwise comparisons are performed to choose two chromosomes as parents. That is, the better of the first two chromosomes and the better of the last two chromosomes are selected as the parents.

For the roulette method, the inverse of the objective function value is computed for every chromosome in the population; that is, the fitness of chromosome i is fi = 1/(1 + TTi), where TTi is its total tardiness. Note that 1 is added to the denominator to avoid division by zero because the objective function value (total tardiness) can be zero.

The selection probability of chromosome i is then pi = fi / Σk fk. Based on these selection probabilities, two chromosomes are selected randomly as parents.
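A minimal sketch of this roulette selection, assuming fitness fi = 1/(1 + TTi) as described above:

```python
import random

def roulette_select(population, total_tardiness, rng=random):
    """Select one chromosome with probability proportional to its
    fitness f_i = 1/(1 + TT_i); the +1 avoids division by zero when a
    schedule has zero total tardiness."""
    fitness = [1.0 / (1.0 + total_tardiness(ch)) for ch in population]
    pick = rng.random() * sum(fitness)
    acc = 0.0
    for ch, f in zip(population, fitness):
        acc += f
        if pick <= acc:
            return ch
    return population[-1]   # guard against floating point round-off
```

Calling the function twice yields the two parents used by the crossover operators below.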

3.3.5. Crossover

The purpose of the crossover operation is to generate better offspring by exchanging the information of the selected parents. In the proposed GA, nine types of crossover are considered: one-point order crossover (OX1), two-point order crossover (OX2), order-based crossover (OBX), position-based crossover (PBX), partially matched crossover (PMX), similar job crossover (SJX), two-point similar job crossover (SJX2), similar block crossover (SBX), and two-point similar block crossover (SBX2).

For a description of these crossovers, let parent1 and parent2 be the parents selected to generate offspring.
(i) Procedure: OX1
Step 0. Let point1 be a randomly selected cutoff point in parent1.
Step 1. The offspring inherits the front part (from the first position to point1) of parent1.
Step 2. Delete the jobs already placed in the offspring from parent2.
Step 3. Place the remaining jobs of parent2 into the unfilled positions of the offspring in the order in which they appear in parent2.
(ii) Procedure: OX2
Step 0. Let point1 and point2 be two randomly selected cutoff points in parent1.
Step 1. The offspring inherits the middle part (between point1 and point2) of parent1.
Step 2. Delete the jobs already placed in the offspring from parent2.
Step 3. Place the remaining jobs of parent2 into the unfilled positions of the offspring from left to right in the order in which they appear in parent2.
(iii) Procedure: OBX
Step 0. Select a set of jobs from parent1 at random.
Step 1. Let setP be the set of positions of the selected jobs in parent2.
Step 2. Place the selected jobs into the positions of setP in the offspring from left to right in the order in which they appear in parent1.
Step 3. Delete the jobs already placed in the offspring from parent2.
Step 4. Place the remaining jobs of parent2 into the unfilled positions of the offspring from left to right in the order in which they appear in parent2.
(iv) Procedure: PBX
Step 0. Select a set of positions in parent1 at random.
Step 1. The offspring inherits the jobs placed in the selected positions of parent1.
Step 2. Delete the jobs already placed in the offspring from parent2.
Step 3. Place the remaining jobs of parent2 into the unfilled positions of the offspring from left to right in the order in which they appear in parent2.
(v) Procedure: PMX
Step 0. Let point1 and point2 be two randomly selected cutoff points in parent1.
Step 1. The offspring inherits the middle part (between point1 and point2) of parent1.
Step 2. For each unfilled position in the offspring, the job in the same position in parent2 is placed.
Step 3. Delete the jobs already placed in the offspring from parent2.
Step 4. If duplicate jobs occur before point1 or after point2, these jobs are exchanged with the remaining jobs of parent2, considering the order of parent2.
(vi) Procedure: SJX
Step 0. The offspring inherits the jobs that occupy the same position in both parents.
Step 1. Let point1 be a randomly selected cutoff point in parent1.
Step 2. The offspring inherits the front part (from the first position to point1) of parent1.
Step 3. Delete the jobs already placed in the offspring from parent2.
Step 4. Place the remaining jobs of parent2 into the unfilled positions of the offspring in the order in which they appear in parent2.

In SJX2, Steps 1 and 2 of SJX are modified by considering two cutoff points, as in OX2. Additionally, SBX and SBX2 are very similar to SJX and SJX2, respectively. In SBX and SBX2, a block means two consecutive jobs, and the only difference is that an offspring inherits the blocks in the same position in the two parents in Step 0. Thus, the detailed procedures of SJX2, SBX, and SBX2 are omitted.
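For instance, OX1 can be sketched in a few lines; the cutoff point is passed in explicitly here rather than drawn at random:

```python
def ox1(parent1, parent2, cut):
    """One-point order crossover: the offspring inherits parent1's
    prefix up to `cut`, then the remaining jobs in the order in which
    they appear in parent2."""
    head = parent1[:cut]
    placed = set(head)
    return head + [job for job in parent2 if job not in placed]
```

The other crossovers differ only in which positions are inherited from parent1 (a middle slice, a random set of positions or jobs, or the positions on which both parents agree); the fill-in step from parent2 is the same.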

3.3.6. Mutation

A mutation is an operation to escape from local optima and to increase the variability and diversity of the population. This operation partially modifies a chromosome to generate offspring with a new chromosome. In the proposed GA, three types of mutation are considered:
(i) Interchange: two randomly selected jobs are interchanged.
(ii) Insertion: a randomly selected job is inserted into a randomly selected position.
(iii) Swap: a randomly selected job is swapped with the job immediately following it.
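The three mutation operators can be sketched as pure functions on a job sequence (the position arguments are passed explicitly here; in the GA they would be drawn at random):

```python
def interchange(seq, i, j):
    """Exchange the jobs at positions i and j."""
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, i, k):
    """Remove the job at position i and reinsert it at position k."""
    s = list(seq)
    s.insert(k, s.pop(i))
    return s

def swap(seq, i):
    """Exchange the job at position i with its immediate successor."""
    return interchange(seq, i, i + 1)
```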

3.3.7. Improvement (Local Search)

Local search methods are commonly used in genetic algorithms, as in other (meta)heuristics, to improve solutions. A local search strategy based on insertion is also used in the proposed GA. This local search is triggered when the current best objective function value has not improved for a predetermined number of consecutive generations. When triggered, insertion is applied n times to the best solution in the population, where n is the number of jobs. If a new solution is better than the current best solution, the current best solution is replaced with the new solution. Note that, because excessive local search attempts may consume a long computation time, the number of insertion attempts is limited to n.

3.3.8. Restart

The goal of a restart procedure is to avoid premature convergence of the population and to improve the solution quality obtained by the GA. The restart procedure in the proposed GA is based on that of Ruiz and Maroto [32]. It is triggered when the current best objective function value has not improved for a predetermined number of consecutive generations. The restart procedure is summarized below.
(i) Procedure: restart
Step 0. Sort the chromosomes in the population in nondecreasing order of their fitness values.
Step 1. Keep the first 20% of the chromosomes and remove the remaining 80% from the population.
Step 2. For 20% of the population size, select chromosomes randomly from the first 20% and mutate each selected chromosome once with an insertion operation.
Step 3. For 20% of the population size, select chromosomes randomly from the first 20% and mutate each selected chromosome once with an interchange operation.
Step 4. Generate the remaining 40% of the chromosomes randomly.
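A sketch of this restart scheme, assuming minimization (lower cost is better) and random permutations for the regenerated 40%:

```python
import random

def restart(population, cost, rng=random):
    """Keep the best 20% of the population, refill 20% + 20% with
    insertion- and interchange-mutated copies of those elites, and
    generate the remaining 40% at random (per Ruiz and Maroto's scheme)."""
    size = len(population)
    n = len(population[0])
    ranked = sorted(population, key=cost)
    k = max(1, size // 5)
    elite = [list(ch) for ch in ranked[:k]]
    new_pop = [list(ch) for ch in elite]
    for _ in range(k):                        # 20%: insertion mutants
        ch = list(rng.choice(elite))
        ch.insert(rng.randrange(n), ch.pop(rng.randrange(n)))
        new_pop.append(ch)
    for _ in range(k):                        # 20%: interchange mutants
        ch = list(rng.choice(elite))
        i, j = rng.randrange(n), rng.randrange(n)
        ch[i], ch[j] = ch[j], ch[i]
        new_pop.append(ch)
    while len(new_pop) < size:                # 40%: random chromosomes
        ch = list(range(n))
        rng.shuffle(ch)
        new_pop.append(ch)
    return new_pop
```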

3.3.9. Termination Criterion

The termination criterion applied to the proposed GA is the maximum CPU time. That is, when the maximum CPU time from the start of the GA elapses, the GA procedure stops.

3.3.10. Whole Procedure of the Proposed GA

The procedure of the proposed GA, using the operators described above, is summarized below.
(i) Procedure: proposed GA
(ii) Step 0. Let POP, Psize, pc, and pm be the population, population size, crossover probability, and mutation probability, respectively.
(iii) Step 1. Generate an initial population with Psize chromosomes. Let POP be the initial population.
(iv) Step 2. Repeat the following steps for one generation:
(v) Perform a selection operation to select two parents from POP. Let parent1 and parent2 be the selected parents.
(vi) Generate a random number u in [0, 1]. If u < pc, perform a crossover operation to generate offspring1 and offspring2. Otherwise, offspring1 and offspring2 inherit the information of parent1 and parent2, respectively.
(vii) Generate a random number u in [0, 1]. If u < pm, perform a mutation operation on offspring1 and offspring2.
(viii) Evaluate the fitness values of offspring1 and offspring2.
(ix) If offspring1 is better than the worst chromosome in POP, replace the worst chromosome with offspring1. Repeat the same procedure for offspring2.
(x) Step 3. Check the local search condition; if it is met, trigger a local search for the best chromosome in POP.
(xi) Step 4. Check the restart condition; if it is met, trigger the restart procedure for POP and evaluate the fitness values of the chromosomes in POP.
(xii) Step 5. Check the termination condition; if it is met, terminate, and the best solution in POP is the final solution. Otherwise, go to Step 2.
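A compact, self-contained skeleton of the loop above; uniform random selection stands in for roulette, OX1 for the crossover, insertion for the mutation, and the local search and restart triggers are omitted for brevity, so this is a sketch of the structure rather than the calibrated algorithm:

```python
import random

def genetic_algorithm(cost, n, pop_size, p_c, p_m, generations, rng=random):
    """Steady-state GA skeleton: generate one offspring per iteration
    and replace the worst chromosome if the offspring is better.
    `cost` maps a permutation of range(n) to its total tardiness."""
    pop = []
    for _ in range(pop_size):
        ch = list(range(n))
        rng.shuffle(ch)
        pop.append(ch)
    for _ in range(generations):
        p1, p2 = rng.sample(pop, 2)        # stand-in for roulette
        if rng.random() < p_c:             # OX1 crossover
            cut = rng.randrange(1, n)
            head = p1[:cut]
            child = head + [j for j in p2 if j not in head]
        else:
            child = list(p1)
        if rng.random() < p_m:             # insertion mutation
            child.insert(rng.randrange(n), child.pop(rng.randrange(n)))
        worst = max(range(pop_size), key=lambda i: cost(pop[i]))
        if cost(child) < cost(pop[worst]):
            pop[worst] = child
    return min(pop, key=cost)
```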

4. Computational Experiments

4.1. Test Instance and Environment

Computational experiments were performed with randomly generated problems to evaluate the performance of the proposed algorithms. All algorithms were coded in Java, and the experiments were performed on a PC with a 3.2 GHz Intel Core i7-8700 CPU.

To generate problem instances, the job information (processing times, sequence-dependent setup times (SDSTs), and limited waiting times (LWTs)) was generated according to the method presented in [3], with each quantity drawn from a discrete uniform distribution. The due dates of the jobs were generated from a distribution parameterized by LB, τ, and R, where LB is a lower bound on the makespan, and τ and R are the tardiness factor and due date range parameters, respectively. To obtain LB, we ignored the SDSTs and treated the LWTs as infinite; LB is then obtained by Johnson's algorithm [33], which provides the minimum makespan in the typical two-machine flowshop. For the distribution of due dates, three levels were used for each of τ and R.
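The due date scheme can be sketched with the widely used tardiness-factor form dj ~ U[LB(1 − τ − R/2), LB(1 − τ + R/2)]; this exact form is an assumption for illustration, since the paper's precise ranges are given above only through its parameters:

```python
import random

def due_dates(n, tau, R, lb, rng=random):
    """Draw n due dates from U[LB(1 - tau - R/2), LB(1 - tau + R/2)],
    truncated below at 0. LB is a makespan lower bound; tau and R are
    the tardiness factor and due date range parameters. The uniform
    form is an assumed (common) instantiation of the scheme."""
    lo = max(0.0, lb * (1 - tau - R / 2))
    hi = lb * (1 - tau + R / 2)
    return [rng.uniform(lo, hi) for _ in range(n)]
```

Larger τ shifts due dates earlier (tighter), while larger R spreads them more widely around LB(1 − τ).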

4.2. GA Calibration

The performance of a GA depends significantly on its operators and parameters. Hence, a calibration experiment is needed to select the best operators and parameters. The proposed GA has eight operators and parameters, listed below, and all possible combinations were evaluated to find the best one:
(i) Selection: two levels (tournament and roulette)
(ii) Crossover: nine levels (OX1, OX2, OBX, PBX, PMX, SJX, SJX2, SBX, and SBX2)
(iii) Mutation: three levels (interchange, insertion, and swap)
(iv) Population size (Psize): three levels
(v) Crossover probability (pc): six levels
(vi) Mutation probability (pm): four levels
(vii) Local search trigger: three levels
(viii) Restart trigger: four levels

For these eight types of operators and parameters, 46,656 different combinations are possible. To simplify the experiment, this test is divided into two experiments. The first experiment was to select the best combination for selection, crossover, and mutation, and the second experiment was to find the best combination for the other parameters.

For the evaluation and selection of the GA operators in the first experiment, a full factorial design of selection, crossover, and mutation was considered; hence, 54 different algorithms were tested. The remaining parameters were fixed. Additionally, four levels were considered for the number of jobs n, and five instances were generated for each combination of n, τ, and R; that is, 180 problem instances were generated. For the termination criterion, a maximum CPU time in milliseconds (ms) was imposed. To compare the different GAs on each problem instance, the relative deviation index (RDI) was used as the measure; RDI is defined as (Objalg − Objmin)/(Objmax − Objmin), where Objalg is the objective function value from the current algorithm, and Objmin and Objmax are the minimum and maximum objective function values obtained by the compared algorithms, respectively. In particular, when Objmin = Objmax, the RDI is defined to be 0 for all algorithms.
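The RDI measure is a one-liner:

```python
def rdi(value, values):
    """Relative deviation index: (value - min) / (max - min) over the
    objective values of all compared algorithms on one instance,
    defined as 0 when all algorithms tie (max == min)."""
    lo, hi = min(values), max(values)
    return 0.0 if hi == lo else (value - lo) / (hi - lo)
```

An RDI of 0 means the algorithm matched the best value on that instance, and 1 means it produced the worst.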

To examine the effect of the GA operators on performance, an analysis of variance (ANOVA) was performed using SPSS; the ANOVA table is shown in Table 2. As can be seen in the table, the performance of the proposed GA was significantly affected by the operators at a significance level of 0.05, and according to the F-values, all three operator types had a significant effect on the RDI. Figure 1 illustrates the main effect plot used to find the operators with lower RDI values. As shown in the figure, roulette, OX2, and insertion showed the best performance. This is possibly because changing both the front and rear parts of a sequence can lead to better sequences in this scheduling problem with sequence-dependent setup times. For mutation, insertion was observed to be the best, consistent with various flowshop scheduling studies that regard it as the best mutation operator [32]. Therefore, roulette, OX2, and insertion were used in the subsequent experiments.

The second experiment was performed to find the best combination of parameter values for the proposed GA. Considering all levels of the five parameters, 864 different combinations were evaluated using the same approach as for the operators. The ANOVA results are summarized in Table 3, which shows that three of the parameters most significantly affect GA performance. As shown in Figure 2, which illustrates the main effect plot of the GA parameters, the RDI reached its lowest value at one specific level of each parameter. Bathtub-shaped curves are observed for all but one parameter; that is, values both higher and lower than the selected ones yielded worse results, validating the chosen ranges of values for these parameters. For the local search trigger, frequent local searches were more effective when the best solution in the population was not improving. Notably, the selected probability of mutation is larger than that of crossover. This is probably because a small change made by a mutation is more likely to lead to a better sequence than a crossover: since limited waiting time constraints and sequence-dependent setup times are considered, a solution may be improved by small changes in the job sequence. Thus, with the selected parameters, the proposed GA attempts to find better solutions by keeping the sequences of good solutions and making a few changes. In summary, the proposed GA uses roulette, OX2, and insertion as its operators together with the selected parameter levels.

4.3. Performance Evaluation

First, the performance of the proposed MILP formulation was evaluated using CPLEX 12.9. To avoid excessive computation times, the maximum CPU time was limited to 3,600 seconds (s) for each instance. For the evaluation of the MILP, four levels for the number of jobs were considered, and five instances were generated for each combination of n, τ, and R; that is, 36 combinations were considered, and 180 problem instances were generated. The results are summarized in Table 4, which shows the average computation time (ACPUT) and the number of instances (NI) for which CPLEX failed to find an optimal solution within the time limit (3,600 s). As can be seen in the table, CPLEX obtained optimal solutions for all of the smaller instances but could not find optimal solutions for eight instances at the next problem size. Furthermore, for the largest size, no instances were solved to optimality within the computation time limit.

In general, due dates significantly affect due date-related measures (e.g., the total tardiness minimized in this study). Hence, the impact of the due dates on the performance of CPLEX was analyzed. The computation times of CPLEX with respect to the due date parameters are summarized in Table 5 for a fixed number of jobs. As can be seen from the table, the performance of CPLEX was significantly affected by the values of τ and R. In particular, instances with loose due dates (a small tardiness factor) required significantly long computation times, whereas instances with tight due dates were solved within much shorter computation times; that is, CPLEX can solve problems quickly when the due dates of the jobs are tight. Regarding the due date range R, instances in which the due dates are widely distributed were solved in shorter computation times.

Next, the performances of the proposed list scheduling rules and MNEH algorithms were evaluated by comparison with the solutions obtained from CPLEX. Here, let MNEH(x) denote the MNEH algorithm that starts with an initial solution obtained by list scheduling rule x. Note that, for the instances not solved within the time limit, the best solutions found at the time limit were used for comparison. Table 6 presents the average gaps between the heuristics and CPLEX and the number of instances for which a heuristic found a solution equal to or better than that of CPLEX. As can be observed in the table, MDD and Greedy demonstrated good performance among the list scheduling rules, and an MNEH variant was the best-performing heuristic algorithm. However, the average gaps from CPLEX were still large. These results explain why a metaheuristic, the GA, was needed to find good solutions within a short computation time in this study.

To evaluate the performance of the suggested GA, a comparison with other algorithms was required. However, to the best of our knowledge, there are no existing algorithms for the exact problem considered in this study. Thus, the evaluation was performed by comparing the proposed GA with CPLEX on small problem instances (n = 5, 10, and 15) and with the proposed heuristic algorithms on medium and large problem instances (n = 20, 50, 100, and 200). Through these comparisons, the effectiveness of the solutions from the proposed GA was evaluated. In these experiments, four levels were used for the maximum computation time of the GA; let GAx be the genetic algorithm in which the maximum computation time is set to x ms. The effectiveness of the GA solutions compared with those from CPLEX is shown in Table 7. As can be seen from the table, the suggested GA found optimal solutions for almost all problem instances. In particular, the GA outperformed CPLEX on the largest of the small instances; a negative value means that the objective value of the GA solution is less than that of CPLEX. This is probably because CPLEX fails to obtain optimal solutions within the maximum CPU time at that problem size.

For the medium and large problem instances, the GA solutions were compared with those from the MNEH algorithms. The results are summarized in Table 8, which shows the RDI values and the number of instances for which each heuristic found the best solution among the compared heuristics. The table shows that the GAs outperformed the MNEH algorithms, although the GAs took longer CPU times. Additionally, in the case where the GA and the MNEH algorithms had similar computation times (approximately 10 s), the GA still showed better performance. Overall, although the GA requires a longer CPU time, it solves the problems better, and its CPU time is still reasonably short.

Additionally, the effectiveness of the proposed GA was evaluated through a comparison with a simulated annealing (SA) algorithm, another widely used metaheuristic for various optimization problems. SA repeatedly attempts to improve an initial solution by making small alterations until further improvements cannot be made. The SA algorithm can avoid entrapment in a local optimum by allowing occasional uphill moves that deteriorate the objective function value. In this experiment, a simple SA algorithm was devised for comparison with the proposed GA. In this SA algorithm, a neighborhood solution S′ of a solution S is generated using an insertion method, which was found to be the best mutation in the previous experiment. If TT(S′) ≤ TT(S), where TT(S) is the total tardiness of solution S, then S is replaced with S′. Otherwise, the replacement is made with probability exp(−(TT(S′) − TT(S))/T), where T is a parameter called the temperature. This replacement procedure is repeated L times at each temperature, where L is called the epoch length. The temperature starts at an initial value and decreases gradually according to a cooling function with parameter α, which is set to 0.025 in this algorithm. The algorithm terminates when the temperature decreases to zero or the computation time reaches the maximum computation time. The procedure of this SA algorithm is summarized below.

4.3.1. Procedure: SA

(i) Step 0. Set initial values for the parameters used in the algorithm (the initial temperature, the epoch length L, and α). Let T be the initial temperature and l = 0.
(ii) Step 1. Set an initial solution S as the best solution among those obtained with the four list scheduling methods considered earlier. Let S* = S.
(iii) Step 2. If T ≤ 0, terminate and return S* as the final solution. Otherwise, let l = 0.
(iv) Step 3. Generate a neighborhood solution S′ of S by the insertion method.
(v) Step 4. Compute Δ = TT(S′) − TT(S). If Δ < 0, let S = S′ and, if TT(S) < TT(S*), let S* = S. Otherwise, let S = S′ with probability exp(−Δ/T).
(vi) Step 5. Let l = l + 1. If l < L, go to Step 3. Otherwise, update T by the cooling function and go to Step 2.
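The steps above can be sketched as follows. The instance data, the single-machine tardiness objective, and the parameter values are illustrative assumptions, and a simple geometric cooling rule stands in for the paper's cooling function:

```python
import math
import random

def insertion_neighbor(seq, rng):
    """Remove a random job and reinsert it at a random position (Step 3)."""
    s = list(seq)
    job = s.pop(rng.randrange(len(s)))
    s.insert(rng.randrange(len(s) + 1), job)
    return s

def simulated_annealing(objective, initial, t0=100.0, epoch_len=50,
                        alpha=0.975, t_min=1e-3, seed=0):
    """SA sketch: insertion neighborhood, Metropolis acceptance, and
    geometric cooling (an assumption; the paper uses its own cooling rule)."""
    rng = random.Random(seed)
    current, best = list(initial), list(initial)
    f_cur = f_best = objective(initial)
    t = t0
    while t > t_min:
        for _ in range(epoch_len):          # Steps 3-5: one epoch at temperature t
            cand = insertion_neighbor(current, rng)
            delta = objective(cand) - f_cur
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current, f_cur = cand, f_cur + delta
                if f_cur < f_best:          # keep the best solution found so far
                    best, f_best = list(current), f_cur
        t *= alpha                          # Step 2: cool down and repeat
    return best, f_best

# Toy single-machine total-tardiness objective (illustrative only; the paper's
# objective evaluates a two-machine flowshop with waiting times and setups).
proc = [4, 2, 6, 3]
due = [5, 4, 14, 7]
def total_tardiness(seq):
    t, tt = 0, 0
    for j in seq:
        t += proc[j]
        tt += max(0, t - due[j])
    return tt

best_seq, best_tt = simulated_annealing(total_tardiness, [0, 1, 2, 3])
```

In the paper's setting, `objective` would evaluate the total tardiness of a permutation under the limited waiting time constraint and sequence-dependent setup times.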

For a fair comparison, the same maximum computation times were set for SA. Two SA variants were defined, with their maximum CPU times set to the two limits used for the GA variants; each SA variant was then compared with the GA variant having the same limit.

The results of the comparisons with the SA algorithms are summarized in Table 9, which shows the average rate TT_A/TT_max, where TT_A is the objective function value from the current algorithm A and TT_max is the maximum objective value among the compared algorithms, for each of the two CPU time limits. Note that because the objective value can be zero, the maximum value is used as the denominator rather than the minimum to avoid dividing by zero; if TT_A is zero, the rate is zero. Under this measure, algorithms with lower rate values are better. As can be seen from Table 9, the proposed GA outperformed the simple SA in all but one case, and the outperformance is clearer in the large-sized problems. Although the SA devised for this test was simple, the results validate the effectiveness of the proposed GA.
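The comparison measure described above can be sketched as follows; the function and variable names are illustrative, and the point is the choice of the maximum objective value as the denominator so that zero-tardiness instances do not cause division by zero:

```python
def comparison_rate(value, all_values):
    """Rate of an algorithm's total tardiness relative to the worst
    (maximum) value among all compared algorithms on the instance.

    Using the maximum as the denominator avoids dividing by zero when
    the best (minimum) objective value is zero; if value is 0, rate is 0.
    """
    worst = max(all_values)
    if worst == 0:          # every algorithm reached zero total tardiness
        return 0.0
    return value / worst

# Illustrative objective values on one instance, e.g., from GA and two SAs.
values = [0.0, 40.0, 80.0]
rates = [comparison_rate(v, values) for v in values]
# Lower rates indicate better algorithms: [0.0, 0.5, 1.0]
```

Averaging these per-instance rates over all instances gives the values reported per algorithm.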

Finally, to examine the effects of the due date parameters (tightness and range) on the performance of the GA, convergence rate charts are presented in Figure 3. Instances generated with tight due dates converged more quickly in the early generations; that is, when the due dates of jobs are tight, the GA can find good solutions early. This result is similar to that observed with CPLEX. On the other hand, instances with a moderate due date range converged in later generations; that is, when the range of due dates is moderate, the GA requires more computation time to find good solutions. According to these results, instances with a moderate range of due dates are more difficult than the other cases.

5. Conclusion

In this study, we considered a two-machine flowshop scheduling problem with a limited waiting time constraint and sequence-dependent setup times, which is important in semiconductor manufacturing systems. A mixed-integer programming formulation was provided to describe the considered problem clearly and was used to obtain optimal solutions with CPLEX. However, CPLEX could not obtain optimal solutions for larger instances within a reasonable time because the considered problem is NP-complete. Thus, to obtain good solutions in a short computation time, three types of heuristic algorithms were proposed: list scheduling methods, a constructive heuristic algorithm, and a genetic algorithm (GA). To find the best configuration of the GA, 46,656 different combinations of operators and parameters were evaluated, and the resulting GA outperformed the other heuristics. Additionally, the GA found optimal solutions for all instances with up to 15 jobs and showed better performance than CPLEX on the larger instances. Thus, the GA obtains acceptable solutions within a reasonably short computation time.

Although the limited waiting time constraint and sequence-dependent setup times are both important characteristics of semiconductor manufacturing systems, only a few studies have considered the two characteristics together. Thus, more effective and efficient methodologies, including various types of metaheuristics, need to be developed. Moreover, since it is very difficult to find optimal solutions for the considered problem, effective lower bounding strategies are needed to evaluate the solutions obtained by heuristic algorithms. Different waiting time constraints, such as the overlapping waiting time constraints in [23], can also be considered in future research. Finally, since the problem considered in this study is a basic flowshop, it can be extended to more practical settings such as hybrid flowshops, in which there are parallel machines in each stage.

Data Availability

All data for the computational experiments were generated randomly; the data generation method is described in Section 4.1 of the paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (no. NRF-2020R1G1A1006268).