
Scheduling with Resource Allocation, Deteriorating Effect and Group Technology to Minimize Total Completion Time

School of Science, Shenyang Aerospace University, Shenyang 110136, China
* Author to whom correspondence should be addressed.
Submission received: 19 July 2022 / Revised: 16 August 2022 / Accepted: 16 August 2022 / Published: 18 August 2022

Abstract

This paper studies a single-machine problem with resource allocation (RA) and deteriorating effect (DE). Under group technology (GT) and limited resource availability, our goal is to determine the schedules of groups and of jobs within each group such that the total completion time is minimized. For three special cases, polynomial-time algorithms are given. For the general case, a heuristic, a tabu search algorithm, and an exact (i.e., branch-and-bound) algorithm are proposed to solve this problem.

1. Introduction

With the development of the economy, research on group technology (denoted by GT) scheduling has spread over a variety of fields, especially supply chain management, information processing, and computer systems (see Ham et al. [1], Wang et al. [2]). Yang [3] and Bai et al. [4] investigated single-machine GT scheduling with learning and deterioration effects. Lu et al. [5] studied the single-machine problem with GT and time-dependent processing times (i.e., time-dependent scheduling), in which the processing times of jobs and the setup times of groups are both time-dependent; for makespan minimization subject to release dates, they presented a polynomial-time algorithm. Wang et al. [6] examined the single-machine problem with GT and shortening job processing times; for makespan minimization with ready times, they showed that some special cases can be solved optimally in polynomial time. Liu et al. [7] studied the single-machine problem with GT and deterioration effects (denoted by DE), i.e., the processing times of jobs are time-dependent while the setup times of groups are constants; for makespan minimization with ready times, they proposed a branch-and-bound algorithm. Zhu et al. [8] discussed the single-machine problem with GT, resource allocation (denoted by RA), and learning effects; for minimizing the weighted sum of makespan and total resource consumption, they proved that the problem remains polynomially solvable. In 2018, Zhang et al. [9] discussed the single-machine problem with GT and position-dependent processing times. In 2020, Liao et al. [10] considered a competing-groups scheduling problem with different learning effects. In 2021, Lv et al. [11] addressed single-machine slack due-date assignment problems with GT, RA, and learning effects. In 2021, Xu et al. [12] investigated the single-machine problem with GT, nonperiodical maintenance, and DE; for makespan minimization, they proposed some heuristic algorithms.
Recently, Oron [13] and Li and Wang [14] considered single-machine scheduling models combining RA and DE. Later, Wang et al. [15] discussed a scheduling model combining GT, RA, and DE; under the single-machine setting, the objective is to minimize the weighted sum of makespan and total resource consumption, and they showed that some special cases remain polynomially solvable. In 2020, Liang et al. [16] considered the same model as Wang et al. [15] for the general case and provided heuristic and branch-and-bound algorithms. In 2019, Wang and Liang [17] studied the single-machine problem with GT, RA, and DE concurrently; for makespan minimization under the constraint that the total resource consumption cannot exceed an upper bound, they proved that some special cases remain polynomially solvable, and for the general case they provided heuristic and branch-and-bound algorithms.
This paper conducts a further study of the problem with GT, RA, and DE, but the objective is to minimize the total completion time under the constraint that the total resource consumption cannot exceed an upper bound. For three special cases, polynomial-time algorithms are given. For the general case, upper and lower bounds on the optimal value are derived and a branch-and-bound algorithm is proposed; in addition, a tabu search algorithm and a numerical simulation analysis are presented.
The rest of this paper is organized as follows: Section 2 presents a formulation of the problem. Section 3 gives some basic properties. Section 4 studies some special cases. Section 5 considers the general case, and we propose some algorithms to solve this problem. Section 6 presents the numerical simulations. The conclusions are given in Section 7.

2. Problem Statement

The following notation (see Table 1) will be used throughout this paper. There are $\tilde{n}$ independent jobs. In order to exploit GT in production (see Ji et al. [18]), all the jobs are classified in advance into $\tilde{m}$ ($\tilde{m}\ge 2$) groups (i.e., $\Omega_1,\Omega_2,\ldots,\Omega_{\tilde{m}}$) according to their processing similarities. All the jobs in the same group must be processed consecutively on a single machine. Assume that the single machine and all jobs are available at time zero. Let $\breve{J}_{hj}$ be job $j$ in group $\Omega_h$, and let $\tilde{n}_h$ be the number of jobs in group $\Omega_h$, i.e., $\sum_{h=1}^{\tilde{m}}\tilde{n}_h=\tilde{n}$. The actual processing time of $\breve{J}_{hj}$ is
$$p_{hj}^{Apt}=\left(\frac{\varsigma_{hj}}{\tilde{r}_{hj}}\right)^{\eta}+\theta t,\qquad(1)$$
where $\varsigma_{hj}$ (resp. $\tilde{r}_{hj}>0$) is the workload (resp. the amount of resource) of $\breve{J}_{hj}$, $\eta>0$ is a constant, $\theta\ge 0$ is a common deterioration rate, and $t\ge 0$ is its starting time. The actual setup time of $\Omega_h$ is
$$s_{h}^{Apt}=\left(\frac{o_{h}}{\tilde{r}_{h}}\right)^{\eta}+\mu t,\qquad(2)$$
where $o_h$ (resp. $\tilde{r}_h$) is the workload (resp. the amount of resource) of $\Omega_h$, and $\mu\ge 0$ is a common deterioration rate. Obviously, the parameters $\tilde{n}$, $\tilde{m}$, $\varsigma_{hj}$, $\tilde{n}_h$, $o_h$, $\eta$, $\theta$, and $\mu$ are given in advance, and the resource allocations $\tilde{r}_{hj}$ and $\tilde{r}_h$ are decision variables. Our goal is to find the optimal group schedule $\bar{\pi}_{\Omega}^{*}$, the optimal job schedule $\bar{\pi}_{h}^{*}$ ($h=1,\ldots,\tilde{m}$) within $\Omega_h$, and the optimal resource allocation $R^{*}$ (i.e., $\tilde{r}_{hj}$ and $\tilde{r}_h$) such that the total completion time
$$\widehat{tct}(\bar{\pi}_{\Omega},\bar{\pi}_{h}\,|\,h=1,\ldots,\tilde{m},R)=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\bar{C}_{hj}$$
is minimized subject to $\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}\le\tilde{V}$ and $\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}\le\tilde{U}$, where $\tilde{V}$ and $\tilde{U}$ are given constants (there is no constraint linking the $\tilde{r}_{hj}$ variables and the $\tilde{r}_{h}$ variables, so $\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}$ and $\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}$ are independent of each other). Using the three-field notation (see Gawiejnowicz [19]), the problem can be denoted by
$$1\;\Big|\;p_{hj}^{Apt}=\Big(\tfrac{\varsigma_{hj}}{\tilde{r}_{hj}}\Big)^{\eta}+\theta t,\;s_{h}^{Apt}=\Big(\tfrac{o_{h}}{\tilde{r}_{h}}\Big)^{\eta}+\mu t,\;\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}\le\tilde{V},\;\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}\le\tilde{U},\;GT\;\Big|\;\widehat{tct},$$
where 1 denotes the single machine, the middle field describes the job and group characteristics, and $\widehat{tct}$ is the objective function (this problem is abbreviated as $P_{\widehat{tct}}$). Wang et al. [15] and Liang et al. [16] considered the problem
$$1\;\Big|\;p_{hj}^{Apt}=\Big(\tfrac{\varsigma_{hj}}{\tilde{r}_{hj}}\Big)^{\eta}+\theta t,\;s_{h}^{Apt}=\Big(\tfrac{o_{h}}{\tilde{r}_{h}}\Big)^{\eta}+\mu t,\;GT\;\Big|\;\alpha_1 C_{\max}+\alpha_2\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}+\alpha_3\sum_{h=1}^{\tilde{m}}\tilde{r}_{h},$$
where $\alpha_l\ge 0$ ($l=1,2,3$) is a given constant and $C_{\max}=\max\{\bar{C}_{hj}\,|\,h=1,\ldots,\tilde{m};\,j=1,\ldots,\tilde{n}_h\}$. Wang and Liang [17] studied the problem
$$1\;\Big|\;p_{hj}^{Apt}=\Big(\tfrac{\varsigma_{hj}}{\tilde{r}_{hj}}\Big)^{\eta}+\theta t,\;s_{h}^{Apt}=\Big(\tfrac{o_{h}}{\tilde{r}_{h}}\Big)^{\eta}+\mu t,\;\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}\le\tilde{V},\;\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}\le\tilde{U},\;GT\;\Big|\;C_{\max}.$$
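To make the model concrete, the following is a small illustrative sketch (not part of the paper) that computes the job completion times generated by Equations (1) and (2) for a given group order, job orders within groups, and a fixed resource allocation. All function and variable names, as well as the numbers in the example, are hypothetical; the paper's experiments were implemented in C++, and Python is used here purely for brevity.

```python
# Illustrative sketch (assumed, not the authors' code): completion times
# under the model of Equations (1) and (2).

def completion_times(group_order, workloads, setups, res_job, res_group,
                     eta, theta, mu):
    """Return the completion time of every job in processing order.

    group_order : list of group indices in processing order
    workloads   : dict h -> list of job workloads (already in processing order)
    setups      : dict h -> group workload o_h
    res_job     : dict h -> resources assigned to the jobs of group h
    res_group   : dict h -> resource assigned to the setup of group h
    """
    t = 0.0                      # machine and jobs are available at time zero
    completions = []
    for h in group_order:
        # actual setup time of group h: (o_h / r_h)^eta + mu * (start time)
        t += (setups[h] / res_group[h]) ** eta + mu * t
        for w, r in zip(workloads[h], res_job[h]):
            # actual processing time of a job: (w / r)^eta + theta * (start time)
            t += (w / r) ** eta + theta * t
            completions.append(t)
    return completions


if __name__ == "__main__":
    # two groups, purely illustrative numbers
    workloads = {1: [4.0, 2.0], 2: [3.0]}
    setups = {1: 5.0, 2: 6.0}
    res_job = {1: [1.0, 1.0], 2: [1.0]}
    res_group = {1: 1.0, 2: 1.0}
    C = completion_times([1, 2], workloads, setups, res_job, res_group,
                         eta=2.0, theta=0.1, mu=0.2)
    print("completion times:", C, "total:", sum(C))
```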

3. Basic Results

For a given schedule $\Pi$, following Wang et al. [15] and Liang et al. [16], by mathematical induction we have
$$\begin{aligned}
\bar{C}_{[1][1]}&=\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta}+\left(\frac{\varsigma_{[1][1]}}{\tilde{r}_{[1][1]}}\right)^{\eta}+\theta\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta}=\left(\frac{\varsigma_{[1][1]}}{\tilde{r}_{[1][1]}}\right)^{\eta}+(1+\theta)\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta},\\
\bar{C}_{[1][2]}&=\bar{C}_{[1][1]}+\left(\frac{\varsigma_{[1][2]}}{\tilde{r}_{[1][2]}}\right)^{\eta}+\theta\bar{C}_{[1][1]}=\left(\frac{\varsigma_{[1][2]}}{\tilde{r}_{[1][2]}}\right)^{\eta}+(1+\theta)\left(\frac{\varsigma_{[1][1]}}{\tilde{r}_{[1][1]}}\right)^{\eta}+(1+\theta)^{2}\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta},\\
&\;\;\vdots\\
\bar{C}_{[1][\tilde{n}_{[1]}]}&=\sum_{j=1}^{\tilde{n}_{[1]}}(1+\theta)^{\tilde{n}_{[1]}-j}\left(\frac{\varsigma_{[1][j]}}{\tilde{r}_{[1][j]}}\right)^{\eta}+(1+\theta)^{\tilde{n}_{[1]}}\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta},\\
\bar{C}_{[2][1]}&=\bar{C}_{[1][\tilde{n}_{[1]}]}+\left(\frac{o_{[2]}}{\tilde{r}_{[2]}}\right)^{\eta}+\mu\bar{C}_{[1][\tilde{n}_{[1]}]}+\left(\frac{\varsigma_{[2][1]}}{\tilde{r}_{[2][1]}}\right)^{\eta}+\theta\left[\bar{C}_{[1][\tilde{n}_{[1]}]}+\left(\frac{o_{[2]}}{\tilde{r}_{[2]}}\right)^{\eta}+\mu\bar{C}_{[1][\tilde{n}_{[1]}]}\right]\\
&=\sum_{j=1}^{\tilde{n}_{[1]}}(1+\mu)(1+\theta)^{\tilde{n}_{[1]}-j+1}\left(\frac{\varsigma_{[1][j]}}{\tilde{r}_{[1][j]}}\right)^{\eta}+(1+\mu)(1+\theta)^{\tilde{n}_{[1]}+1}\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta}+(1+\theta)\left(\frac{o_{[2]}}{\tilde{r}_{[2]}}\right)^{\eta}+\left(\frac{\varsigma_{[2][1]}}{\tilde{r}_{[2][1]}}\right)^{\eta},\\
\bar{C}_{[2][2]}&=\bar{C}_{[2][1]}+\left(\frac{\varsigma_{[2][2]}}{\tilde{r}_{[2][2]}}\right)^{\eta}+\theta\bar{C}_{[2][1]}\\
&=\sum_{j=1}^{\tilde{n}_{[1]}}(1+\mu)(1+\theta)^{\tilde{n}_{[1]}-j+2}\left(\frac{\varsigma_{[1][j]}}{\tilde{r}_{[1][j]}}\right)^{\eta}+(1+\mu)(1+\theta)^{\tilde{n}_{[1]}+2}\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta}+(1+\theta)^{2}\left(\frac{o_{[2]}}{\tilde{r}_{[2]}}\right)^{\eta}+(1+\theta)\left(\frac{\varsigma_{[2][1]}}{\tilde{r}_{[2][1]}}\right)^{\eta}+\left(\frac{\varsigma_{[2][2]}}{\tilde{r}_{[2][2]}}\right)^{\eta},\\
&\;\;\vdots\\
\bar{C}_{[2][\tilde{n}_{[2]}]}&=\sum_{j=1}^{\tilde{n}_{[1]}}(1+\mu)(1+\theta)^{\tilde{n}_{[1]}+\tilde{n}_{[2]}-j}\left(\frac{\varsigma_{[1][j]}}{\tilde{r}_{[1][j]}}\right)^{\eta}+(1+\mu)(1+\theta)^{\tilde{n}_{[1]}+\tilde{n}_{[2]}}\left(\frac{o_{[1]}}{\tilde{r}_{[1]}}\right)^{\eta}+\sum_{j=1}^{\tilde{n}_{[2]}}(1+\theta)^{\tilde{n}_{[2]}-j}\left(\frac{\varsigma_{[2][j]}}{\tilde{r}_{[2][j]}}\right)^{\eta}+(1+\theta)^{\tilde{n}_{[2]}}\left(\frac{o_{[2]}}{\tilde{r}_{[2]}}\right)^{\eta},\\
&\;\;\vdots\\
\bar{C}_{[\tilde{m}][\tilde{n}_{[\tilde{m}]}]}&=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}(1+\mu)^{\tilde{m}-h}(1+\theta)^{\sum_{l=h}^{\tilde{m}}\tilde{n}_{[l]}-j}\left(\frac{\varsigma_{[h][j]}}{\tilde{r}_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde{m}}(1+\mu)^{\tilde{m}-h}(1+\theta)^{\sum_{l=h}^{\tilde{m}}\tilde{n}_{[l]}}\left(\frac{o_{[h]}}{\tilde{r}_{[h]}}\right)^{\eta}.
\end{aligned}$$
According to the above equations, we have
$$\widehat{tct}=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\bar{C}_{[h][j]}=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{\tilde{r}_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]\left(\frac{o_{[h]}}{\tilde{r}_{[h]}}\right)^{\eta}.\qquad(4)$$
Lemma 1.
For a given schedule $\Pi$ of $P_{\widehat{tct}}$, the optimal resource allocation $R^{*}(\bar{\pi}_{\Omega},\bar{\pi}_{h}\,|\,h=1,\ldots,\tilde{m})$ is
$$\tilde{r}^{*}_{[h][j]}=\frac{\left[\left(\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}{\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\left(\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}\times\tilde{V}\qquad(5)$$
for $h=1,\ldots,\tilde{m}$ and $j=1,\ldots,\tilde{n}_{[h]}$, and
$$\tilde{r}^{*}_{[h]}=\frac{\left[\left(\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right)(o_{[h]})^{\eta}\right]^{\frac{1}{\eta+1}}}{\sum_{h=1}^{\tilde{m}}\left[\left(\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right)(o_{[h]})^{\eta}\right]^{\frac{1}{\eta+1}}}\times\tilde{U}\qquad(6)$$
for $h=1,\ldots,\tilde{m}$.
Proof. 
Obviously, Equation (4) is a convex function with respect to $\tilde{r}_{hj}$ and $\tilde{r}_{h}$, and it is clear that in an optimal solution all resources should be consumed, i.e., $\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}-\tilde{V}=0$ and $\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}-\tilde{U}=0$. As in Wang and Liang [17], Shabtay and Kaspi [20], and Wang and Wang [21], for a given schedule, the optimal resource allocation of the problem $P_{\widehat{tct}}$ can be obtained by the Lagrange multiplier method. The Lagrangian function is
$$\begin{aligned}Q(\kappa,\upsilon,R)&=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\bar{C}_{hj}+\kappa\left(\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}-\tilde{V}\right)+\upsilon\left(\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}-\tilde{U}\right)\\
&=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{\tilde{r}_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]\left(\frac{o_{[h]}}{\tilde{r}_{[h]}}\right)^{\eta}\\
&\quad+\kappa\left(\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}-\tilde{V}\right)+\upsilon\left(\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}-\tilde{U}\right),\end{aligned}\qquad(7)$$
where $\kappa\ge 0$ and $\upsilon\ge 0$ are the Lagrangian multipliers. Differentiating Equation (7) with respect to $\tilde{r}_{[h][j]}$ and $\kappa$ gives
$$\frac{\partial Q(\kappa,\upsilon,R)}{\partial\tilde{r}_{[h][j]}}=\kappa-\eta\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]\times\frac{(\varsigma_{[h][j]})^{\eta}}{(\tilde{r}_{[h][j]})^{\eta+1}}=0\qquad(8)$$
and
$$\frac{\partial Q(\kappa,\upsilon,R)}{\partial\kappa}=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}-\tilde{V}=0.\qquad(9)$$
By using Equations (8) and (9), it follows that
$$\tilde{r}_{[h][j]}=\left(\frac{\eta\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]}{\kappa}\right)^{\frac{1}{\eta+1}}\times(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\qquad(10)$$
and
$$\kappa^{\frac{1}{\eta+1}}=\frac{\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\left[\eta\left(\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}{\tilde{V}}.\qquad(11)$$
From Equations (10) and (11), we obtain
$$\tilde{r}^{*}_{[h][j]}=\frac{\left[\left(\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}{\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\left(\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}\times\tilde{V},$$
which is Equation (5). Similarly, Equation (6) can be obtained.    □
By Lemma 1, substituting Equations (5) and (6) into $\widehat{tct}=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\bar{C}_{hj}$, we have
$$\begin{aligned}\widehat{tct}&=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{\tilde{r}_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]\left(\frac{o_{[h]}}{\tilde{r}_{[h]}}\right)^{\eta}\\
&=\frac{1}{\tilde{V}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}+\frac{1}{\tilde{U}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}.\end{aligned}\qquad(12)$$
Lemma 2.
For $P_{\widehat{tct}}$, the optimal job schedule $\bar{\pi}_{h}^{*}$ within group $\Omega_h$ ($h=1,\ldots,\tilde{m}$) is the non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h[1]}\le\varsigma_{h[2]}\le\cdots\le\varsigma_{h[\tilde{n}_h]}$.
Proof. 
From Equation (12), for group $\Omega_h$, the contribution to the objective is
$$\sum_{j=1}^{\tilde{n}_{[h]}}\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}=\sum_{j=1}^{\tilde{n}_{[h]}}x_{[h][j]}\,y_{[h][j]},$$
where $x_{[h][j]}=\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}$ and $y_{[h][j]}=(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}$. The term $x_{[h][j]}$ is monotonically decreasing in $j$. By the HLP rule (Hardy et al. [22]), the sum $\sum_{j=1}^{\tilde{n}_h}x_{[h][j]}y_{[h][j]}$ is minimized when the sequence $x_{[h][1]},x_{[h][2]},\ldots,x_{[h][\tilde{n}_h]}$ is ordered non-decreasingly and the sequence $y_{[h][1]},y_{[h][2]},\ldots,y_{[h][\tilde{n}_h]}$ is ordered non-increasingly, or vice versa. Hence, for group $\Omega_h$ ($h=1,\ldots,\tilde{m}$), pairing the decreasing weights $x_{[h][j]}$ with the workloads $\varsigma_{hj}$ sequenced in non-decreasing order minimizes the group's contribution, and the result follows.    □
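The following is a small Python sketch (assumed, not the authors' C++ code) of how Lemma 1 and Equation (12) can be evaluated: it computes the positional weights that multiply $(\varsigma_{[h][j]}/\tilde{r}_{[h][j]})^{\eta}$ and $(o_{[h]}/\tilde{r}_{[h]})^{\eta}$ in Equation (4), shares the resources $\tilde{V}$ and $\tilde{U}$ according to Lemma 1, and returns the total completion time of Equation (12). All identifiers and the example data are illustrative.

```python
def weights(n_seq, theta, mu):
    """Positional weights multiplying the job terms (W) and the setup terms (G)
    in Equation (4); n_seq lists the group sizes in schedule order.
    Formulas are 1-based as in the text, returned lists are 0-based."""
    m = len(n_seq)
    W, G = [], []
    for h in range(1, m + 1):
        nh = n_seq[h - 1]
        row = []
        for j in range(1, nh + 1):
            w = sum((1 + theta) ** (l - j) for l in range(j, nh + 1))
            for k in range(h + 1, m + 1):
                nk = n_seq[k - 1]
                tail = sum(n_seq[xi - 1] for xi in range(h, k + 1))
                w += (1 + mu) ** (k - h) * sum(
                    (1 + theta) ** (l - j - nk + tail) for l in range(1, nk + 1))
            row.append(w)
        W.append(row)
        g = 0.0
        for k in range(h, m + 1):
            nk = n_seq[k - 1]
            tail = sum(n_seq[xi - 1] for xi in range(h, k + 1))
            g += (1 + mu) ** (k - h) * sum(
                (1 + theta) ** (l - nk + tail) for l in range(1, nk + 1))
        G.append(g)
    return W, G


def tct_and_allocation(groups, setups, eta, theta, mu, V, U):
    """groups: workload lists per group, already sorted by Lemma 2;
    setups: group workloads o_[h]; both given in schedule order."""
    W, G = weights([len(g) for g in groups], theta, mu)
    # Lemma 1: resources shared in proportion to (weight * workload^eta)^(1/(eta+1))
    job_terms = [[(W[h][j] * groups[h][j] ** eta) ** (1 / (eta + 1))
                  for j in range(len(groups[h]))] for h in range(len(groups))]
    grp_terms = [(G[h] * setups[h] ** eta) ** (1 / (eta + 1)) for h in range(len(setups))]
    sj, sg = sum(map(sum, job_terms)), sum(grp_terms)
    r_job = [[V * t / sj for t in row] for row in job_terms]
    r_grp = [U * t / sg for t in grp_terms]
    tct = sj ** (eta + 1) / V ** eta + sg ** (eta + 1) / U ** eta   # Equation (12)
    return tct, r_job, r_grp


if __name__ == "__main__":
    groups = [sorted([4.0, 2.0, 7.0]), sorted([3.0, 5.0])]   # Lemma 2: sort each group
    print(tct_and_allocation(groups, setups=[5.0, 6.0],
                             eta=2.0, theta=0.1, mu=0.2, V=10.0, U=10.0)[0])
```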

4. Special Cases

By Lemma 2, for group $\Omega_h$, the optimal job schedule $\bar{\pi}_{h}^{*}$ is the non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h[1]}\le\varsigma_{h[2]}\le\cdots\le\varsigma_{h[\tilde{n}_h]}$. From Equation (12), let
$$X=\frac{1}{\tilde{U}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}$$
and
$$Y=\frac{1}{\tilde{V}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1},$$
so that $\widehat{tct}=X+Y$. In this section, we study some special cases, i.e., cases in which the parameters $\varsigma_{hj}$, $o_h$, and $\tilde{n}_h$ satisfy certain relationships so that $X$ (or $Y$) is minimized or becomes a constant; these cases can be solved in polynomial time.

4.1. Case 1

If $o_h=o$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$ ($h=1,\ldots,\tilde{m}$), from Equation (12) it follows that
$$\frac{1}{\tilde{U}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}=\frac{o^{\eta}}{\tilde{U}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l+(k-h)\ddot{n}}\right]^{\frac{1}{\eta+1}}\right\}^{\eta+1}$$
is a constant: $\tilde{U}$, $o$, $\eta$, $\mu$, $\theta$, $\ddot{n}$, and $\tilde{m}$ are given in advance, so this term does not depend on the group schedule. Let
$$Y_{h\rho}=\begin{cases}1,&\text{if }\Omega_h\text{ is assigned to the }\rho\text{th position},\\ 0,&\text{otherwise},\end{cases}$$
and
$$\Theta_{h\rho}=\sum_{j=1}^{\ddot{n}}\left[\sum_{l=j}^{\ddot{n}}(1+\theta)^{l-j}+\sum_{k=\rho+1}^{\tilde{m}}(1+\mu)^{k-\rho}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l-j+(k-\rho)\ddot{n}}\right]^{\frac{1}{\eta+1}}(\varsigma_{hj})^{\frac{\eta}{\eta+1}}.$$
The optimal group schedule can then be obtained from the following assignment problem:
$$\text{Min}\;\sum_{h=1}^{\tilde{m}}\sum_{\rho=1}^{\tilde{m}}\Theta_{h\rho}Y_{h\rho}\qquad(15)$$
$$\text{s.t.}\;\sum_{\rho=1}^{\tilde{m}}Y_{h\rho}=1,\quad h=1,\ldots,\tilde{m},\qquad(16)$$
$$\sum_{h=1}^{\tilde{m}}Y_{h\rho}=1,\quad \rho=1,\ldots,\tilde{m},\qquad(17)$$
$$Y_{h\rho}=0\text{ or }1,\quad h,\rho=1,\ldots,\tilde{m}.\qquad(18)$$
Thus, for the special case $o_h=o$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$ ($h=1,\ldots,\tilde{m}$), the problem $P_{\widehat{tct}}$ can be solved as follows:
Theorem 1.
If $o_h=o$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$ ($h=1,\ldots,\tilde{m}$), $P_{\widehat{tct}}$ is solvable by Algorithm 1 in $O(\tilde{n}^{3})$ time.
Algorithm 1: Case 1
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde{m}$), the optimal job schedule $\bar{\pi}_{h}^{*}$ is determined by Lemma 2, i.e., $\varsigma_{h[1]}\le\varsigma_{h[2]}\le\cdots\le\varsigma_{h[\tilde{n}_h]}$.
  • Step 2. Calculate $\Theta_{h\rho}$ ($h,\rho=1,\ldots,\tilde{m}$) and determine the optimal group schedule $\bar{\pi}_{\Omega}^{*}$ by solving the assignment problem (15)–(18).
  • Step 3. The optimal resource allocations $\tilde{r}_{hj}^{*}$ and $\tilde{r}_{h}^{*}$ are calculated by Equations (5) and (6) (see Lemma 1).
Proof. 
Step 1 requires $O\big(\sum_{h=1}^{\tilde{m}}\tilde{n}_h\log\tilde{n}_h\big)\subseteq O(\tilde{n}\log\tilde{n})$ time. Step 3 needs $O(\tilde{n})$ time. Step 2 solves an assignment problem and needs $O(\tilde{m}^{3})\subseteq O(\tilde{n}^{3})$ time. Thus, the total time is $O(\tilde{n}^{3})$.    □
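As an illustration of Step 2, the sketch below (my own, with hypothetical names; the paper itself provides no code) builds the assignment costs $\Theta_{h\rho}$ and solves the assignment problem (15)–(18) with SciPy's Hungarian-method routine `linear_sum_assignment`. Minimizing $\sum_{h,\rho}\Theta_{h\rho}Y_{h\rho}$ is equivalent to minimizing $Y$ in Equation (12), since raising a non-negative sum to the power $\eta+1$ and dividing by $\tilde{V}^{\eta}$ is monotone.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def theta_cost(work_sorted, rho, m, n, eta, theta, mu):
    """Theta_{h,rho}: cost of placing a group with sorted job workloads
    `work_sorted` in position rho (1-based)."""
    total = 0.0
    for j in range(1, n + 1):
        w = sum((1 + theta) ** (l - j) for l in range(j, n + 1))
        for k in range(rho + 1, m + 1):
            w += (1 + mu) ** (k - rho) * sum(
                (1 + theta) ** (l - j + (k - rho) * n) for l in range(1, n + 1))
        total += w ** (1 / (eta + 1)) * work_sorted[j - 1] ** (eta / (eta + 1))
    return total


def case1_group_schedule(job_workloads, eta, theta, mu):
    """job_workloads: list (one entry per group) of equal-length workload lists."""
    m, n = len(job_workloads), len(job_workloads[0])
    sorted_groups = [sorted(g) for g in job_workloads]          # Step 1 (Lemma 2)
    cost = np.array([[theta_cost(g, rho, m, n, eta, theta, mu)
                      for rho in range(1, m + 1)] for g in sorted_groups])
    rows, cols = linear_sum_assignment(cost)                    # Step 2
    order = [int(r) for _, r in sorted(zip(cols, rows))]        # group index per position
    return order, cost[rows, cols].sum()


if __name__ == "__main__":
    groups = [[4.0, 2.0], [7.0, 1.0], [3.0, 3.0]]               # toy instance
    order, obj = case1_group_schedule(groups, eta=2.0, theta=0.1, mu=0.2)
    print("group order:", order, "job part of the objective:", obj)
```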

4.2. Case 2

If $\varsigma_{hj}=\varsigma$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$ ($h=1,\ldots,\tilde{m}$; $j=1,2,\ldots,\tilde{n}_h$), we have:
Lemma 3.
For $P_{\widehat{tct}}$, if $\varsigma_{hj}=\varsigma$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$ ($h=1,\ldots,\tilde{m}$; $j=1,\ldots,\tilde{n}_h$), then the optimal group schedule $\bar{\pi}_{\Omega}^{*}$ is the non-decreasing order of $o_h$, i.e., $o_{(1)}\le o_{(2)}\le\cdots\le o_{(\tilde{m})}$.
Proof. 
From Equation (12), if $\varsigma_{hj}=\varsigma$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$, then
$$\frac{1}{\tilde{V}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{[h]}}\left[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}=\frac{\varsigma^{\eta}}{\tilde{V}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\ddot{n}}\left[\sum_{l=j}^{\ddot{n}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l-j+(k-h)\ddot{n}}\right]^{\frac{1}{\eta+1}}\right\}^{\eta+1}$$
is a constant: $\tilde{V}$, $\varsigma$, $\eta$, $\mu$, $\theta$, $\ddot{n}$, and $\tilde{m}$ are given in advance, so this term does not depend on the group schedule.
From Equation (12) and the above analysis, minimizing $\widehat{tct}$ is therefore equivalent to minimizing the following expression:
$$\begin{aligned}\frac{1}{\tilde{U}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}\right]^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}&=\frac{1}{\tilde{U}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l-\ddot{n}+\sum_{\xi=h}^{k}\ddot{n}}\right]^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}\\
&=\frac{1}{\tilde{U}^{\eta}}\left\{\sum_{h=1}^{\tilde{m}}\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l+(k-h)\ddot{n}}\right]^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right\}^{\eta+1}.\end{aligned}\qquad(19)$$
Similar to Lemma 2, the weight $\left[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l+(k-h)\ddot{n}}\right]^{\frac{1}{\eta+1}}$ is monotonically decreasing in $h$, so by the HLP rule (Hardy et al. [22]) Equation (19) is minimized by arranging the groups in non-decreasing order of $o_h$; this completes the proof.    □
Thus, for the special case $\varsigma_{hj}=\varsigma$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$ ($h=1,\ldots,\tilde{m}$; $j=1,\ldots,\tilde{n}_h$), the problem $P_{\widehat{tct}}$ can be solved as follows:
Theorem 2.
If $\varsigma_{hj}=\varsigma$ and $\tilde{n}_h=\tilde{n}/\tilde{m}=\ddot{n}$ ($h=1,\ldots,\tilde{m}$), $P_{\widehat{tct}}$ is solvable by Algorithm 2 in $O(\tilde{n}\log\tilde{n})$ time.
Algorithm 2: Case 2
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde{m}$), the jobs may be sequenced in any order (any job schedule within a group is optimal).
  • Step 2. The optimal group schedule $\bar{\pi}_{\Omega}^{*}$ is the non-decreasing order of $o_h$.
  • Step 3. The optimal resource allocations $\tilde{r}_{hj}^{*}$ and $\tilde{r}_{h}^{*}$ are calculated by Equations (5) and (6) (see Lemma 1).
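Algorithm 2 reduces to a single sort. The tiny sketch below (illustrative names only) returns the group order of Step 2; evaluating Equation (12) or Lemma 1 afterwards can reuse a routine such as the one sketched after Lemma 2.

```python
def case2_group_schedule(setup_workloads):
    """Algorithm 2, Step 2: group indices in non-decreasing order of o_h (Lemma 3)."""
    return sorted(range(len(setup_workloads)), key=lambda h: setup_workloads[h])


if __name__ == "__main__":
    o = [6.0, 2.0, 9.0, 4.0]
    print(case2_group_schedule(o))   # -> [1, 3, 0, 2]
```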

4.3. Case 3

For any two groups $\Omega_x$ and $\Omega_y$, if $o_x\le o_y$ implies $\tilde{n}_x\ge\tilde{n}_y$, we have:
Lemma 4.
For any two groups $\Omega_x$ and $\Omega_y$ of $P_{\widehat{tct}}$, if $o_x\le o_y$ implies $\tilde{n}_x\ge\tilde{n}_y$, the optimal group schedule $\bar{\pi}_{\Omega}^{*}$ is the non-decreasing order of $o_h$.
Proof. 
Similar to the proof in Liang et al. [16] (see Equation (12)).    □
For this special case, i.e., when for any two groups $\Omega_x$ and $\Omega_y$, $o_x\le o_y$ implies $\tilde{n}_x\ge\tilde{n}_y$, $P_{\widehat{tct}}$ can be solved as follows:
Theorem 3.
For any two groups $\Omega_x$ and $\Omega_y$, if $o_x\le o_y$ implies $\tilde{n}_x\ge\tilde{n}_y$, $P_{\widehat{tct}}$ is solvable by Algorithm 3 in $O(\tilde{n}\log\tilde{n})$ time.
Algorithm 3: Case 3
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde{m}$), the optimal job schedule $\bar{\pi}_{h}^{*}$ is determined by Lemma 2, i.e., $\varsigma_{h[1]}\le\varsigma_{h[2]}\le\cdots\le\varsigma_{h[\tilde{n}_h]}$.
  • Step 2. The optimal group schedule $\bar{\pi}_{\Omega}^{*}$ is the non-decreasing order of $o_h$.
  • Step 3. The optimal resource allocations $\tilde{r}_{hj}^{*}$ and $\tilde{r}_{h}^{*}$ are calculated by Equations (5) and (6) (see Lemma 1).

5. A General Case

For the general case of $P_{\widehat{tct}}$, we could not find a polynomial-time optimal algorithm, and the complexity of determining the optimal group schedule is still an open problem; we conjecture that this problem is NP-hard. Thus, a B&B (i.e., branch-and-bound, which requires a lower bound and an upper bound) algorithm and heuristic algorithms are a natural way to solve $P_{\widehat{tct}}$.

5.1. Upper Bound

For the $\widehat{tct}$ minimization, any feasible solution provides an upper bound (denoted by $UB$). Following the group sorting rules of the previous sections, a heuristic initial solution can be constructed and then improved by the pairwise interchange method (Algorithm 4).
For a better comparison, a tabu search (denoted by $ts$) algorithm (i.e., Algorithm 5) is also proposed as an alternative or complement to Algorithm 4 for solving $P_{\widehat{tct}}$.
Algorithm 4: Upper Bound
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde{m}$), an internal optimal job schedule $\bar{\pi}_{h}^{*}$ (Lemma 2) is: $\varsigma_{h[1]}\le\varsigma_{h[2]}\le\cdots\le\varsigma_{h[\tilde{n}_h]}$.
  • Step 2. Groups are scheduled in the non-decreasing order of $o_h$, i.e., $o_{(1)}\le o_{(2)}\le\cdots\le o_{(\tilde{m})}$.
  • Step 3. Groups are scheduled in the non-increasing order of $\tilde{n}_h$, i.e., $\tilde{n}_{\langle\langle 1\rangle\rangle}\ge\tilde{n}_{\langle\langle 2\rangle\rangle}\ge\cdots\ge\tilde{n}_{\langle\langle\tilde{m}\rangle\rangle}$.
  • Step 4. From Steps 2 and 3, the schedule with the smaller value of $\widehat{tct}$ (see Equation (12)) is selected as the initial group schedule $\bar{\pi}_{\Omega}$.
  • Step 5. Set k = 1 .
  • Step 6. Set s = k + 1 .
  • Step 7. Obtain a new group schedule by exchanging the $k$th and $s$th groups (denote it by $\bar{\pi}_{\Omega}^{\,*}$); if the $\widehat{tct}$ value of $\bar{\pi}_{\Omega}^{\,*}$ is smaller than that of $\bar{\pi}_{\Omega}$, update $\bar{\pi}_{\Omega}$ to $\bar{\pi}_{\Omega}^{\,*}$.
  • Step 8. If $s<\tilde{m}$, then set $s=s+1$ and go to Step 7.
  • Step 9. If $k<\tilde{m}-1$, then set $k=k+1$ and go to Step 6; otherwise, STOP. Output the best group schedule $\bar{\pi}_{\Omega}^{\,*}$ found by the heuristic algorithm and its objective value $\widehat{tct}$.
  • Step 10. According to Lemma 1, calculate the resource allocation by Equations (5) and (6).
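Below is a hedged Python sketch of the seeding and pairwise-interchange steps of Algorithm 4 (the paper's implementation is in C++ and is not reproduced here). The callable `tct_of_order` stands for an evaluation of Equation (12) for a full group order, such as the routine sketched after Lemma 2; the toy objective in the example exists only to make the sketch runnable.

```python
def upper_bound_order(groups_o, groups_n, tct_of_order):
    """groups_o: setup workloads o_h; groups_n: group sizes; both indexed by group."""
    m = len(groups_o)
    # Steps 2-4: two candidate seed orders, keep the cheaper one
    by_o = sorted(range(m), key=lambda h: groups_o[h])           # non-decreasing o_h
    by_n = sorted(range(m), key=lambda h: -groups_n[h])          # non-increasing n_h
    order = min([by_o, by_n], key=tct_of_order)
    best = tct_of_order(order)
    # Steps 5-9: pairwise interchange of positions k and s
    for k in range(m - 1):
        for s in range(k + 1, m):
            cand = order[:]
            cand[k], cand[s] = cand[s], cand[k]
            val = tct_of_order(cand)
            if val < best:
                order, best = cand, val
    return order, best


if __name__ == "__main__":
    # toy objective: weighted positional cost, just to make the sketch runnable
    o, n = [5.0, 2.0, 8.0], [3, 2, 4]
    toy = lambda seq: sum((len(seq) - p) * o[h] * n[h] for p, h in enumerate(seq))
    print(upper_bound_order(o, n, toy))
```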
Algorithm 5: ts
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde{m}$), an internal optimal job schedule $\bar{\pi}_{h}^{*}$ can be obtained by Lemma 2, i.e., $\varsigma_{h[1]}\le\varsigma_{h[2]}\le\cdots\le\varsigma_{h[\tilde{n}_h]}$.
  • Step 2. Let the tabu list be empty and the iteration number be zero.
  • Step 3. Choose an initial group schedule by the Steps 2–4 of Algorithm 4, calculate its value t c t ^ (see Equation (12)) and set the current group schedule as the best solution π ¯ Ω * .
  • Step 4. Search the neighborhood of the current group schedule, which is generated by randomly exchanging any two groups, and select the group schedule $\bar{\pi}_{\Omega}^{\,**}$ with the smallest objective value in this neighborhood that is not in the tabu list.
  • Step 5. If $\widehat{tct}(\bar{\pi}_{\Omega}^{\,**})<\widehat{tct}(\bar{\pi}_{\Omega}^{\,*})$, then set $\bar{\pi}_{\Omega}^{\,*}=\bar{\pi}_{\Omega}^{\,**}$. Update the tabu list and the iteration number.
  • Step 6. If there is no group schedule in the neighborhood that is not in the tabu list, or the maximum number of iterations is reached, output the local optimal group schedule $\bar{\pi}_{\Omega}$ and $\widehat{tct}(\bar{\pi}_{\Omega})$. Otherwise, update the tabu list and go to Step 4.
  • Step 7. According to Lemma 1, calculate the resource allocation by Equations (5) and (6).
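The following Python sketch mirrors the structure of Algorithm 5; it is an assumed illustration rather than the authors' implementation. As before, `tct_of_order` denotes an evaluation of Equation (12) for a group order, the tabu list simply stores recently visited orders, and parameters such as the neighborhood sample size are hypothetical.

```python
import random


def tabu_search(initial_order, tct_of_order, max_iter=2000, tabu_size=20,
                neighbours=30, seed=0):
    rng = random.Random(seed)
    best = list(initial_order)
    best_val = tct_of_order(best)
    current = list(best)
    tabu = []
    for _ in range(max_iter):
        # Step 4: sample the exchange neighbourhood, skip orders in the tabu list
        candidates = []
        for _ in range(neighbours):
            cand = current[:]
            i, j = rng.sample(range(len(cand)), 2)
            cand[i], cand[j] = cand[j], cand[i]
            if cand not in tabu:
                candidates.append(cand)
        if not candidates:
            break                                   # Step 6: neighbourhood exhausted
        current = min(candidates, key=tct_of_order)
        # Step 5: keep the best solution found so far, update the tabu list
        val = tct_of_order(current)
        if val < best_val:
            best, best_val = list(current), val
        tabu.append(list(current))
        if len(tabu) > tabu_size:
            tabu.pop(0)
    return best, best_val


if __name__ == "__main__":
    o = [5.0, 2.0, 8.0, 1.0, 6.0]                   # toy data
    toy = lambda seq: sum((len(seq) - p) * o[h] for p, h in enumerate(seq))
    print(tabu_search(list(range(5)), toy, max_iter=200))
```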

5.2. Lower Bound

Let $\bar{\pi}_{\Omega}=(\bar{\pi}_{\Omega}^{p},\bar{\pi}_{\Omega}^{u})$ be a group schedule, where $\bar{\pi}_{\Omega}^{p}$ (respectively, $\bar{\pi}_{\Omega}^{u}$) is the scheduled (respectively, unscheduled) part, and there are $r$ groups in $\bar{\pi}_{\Omega}^{p}$. From Equation (12) and Lemma 4, a lower bound (denoted by $LB$) of $P_{\widehat{tct}}$ is
$$\begin{aligned}
LB={}&\frac{1}{\tilde{V}^{\eta}}\Bigg\{\sum_{h=1}^{r}\sum_{j=1}^{\tilde{n}_{h}}\Bigg[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{r}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-j-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}+\sum_{k=r+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-j-\tilde{n}_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{r}\tilde{n}_{[\xi]}+\sum_{\xi=r+1}^{k}\tilde{n}_{\langle\langle\xi\rangle\rangle}}\Bigg]^{\frac{1}{\eta+1}}(\varsigma_{[h]j})^{\frac{\eta}{\eta+1}}\\
&\quad+\sum_{h=r+1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_{h}}\Bigg[\sum_{l=j}^{\tilde{n}_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-j-\tilde{n}_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{k}\tilde{n}_{\langle\langle\xi\rangle\rangle}}\Bigg]^{\frac{1}{\eta+1}}(\varsigma_{(h)j})^{\frac{\eta}{\eta+1}}\Bigg\}^{\eta+1}\\
&+\frac{1}{\tilde{U}^{\eta}}\Bigg\{\sum_{h=1}^{r}\Bigg[\sum_{k=h}^{r}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{[k]}}(1+\theta)^{l-\tilde{n}_{[k]}+\sum_{\xi=h}^{k}\tilde{n}_{[\xi]}}+\sum_{k=r+1}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-\tilde{n}_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{r}\tilde{n}_{[\xi]}+\sum_{\xi=r+1}^{k}\tilde{n}_{\langle\langle\xi\rangle\rangle}}\Bigg]^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\\
&\quad+\sum_{h=r+1}^{\tilde{m}}\Bigg[\sum_{k=h}^{\tilde{m}}(1+\mu)^{k-h}\sum_{l=1}^{\tilde{n}_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-\tilde{n}_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{k}\tilde{n}_{\langle\langle\xi\rangle\rangle}}\Bigg]^{\frac{1}{\eta+1}}(o_{(h)})^{\frac{\eta}{\eta+1}}\Bigg\}^{\eta+1},\qquad(20)
\end{aligned}$$
where $\varsigma_{h[1]}\le\varsigma_{h[2]}\le\cdots\le\varsigma_{h[\tilde{n}_h]}$, $o_{(r+1)}\le o_{(r+2)}\le\cdots\le o_{(\tilde{m})}$, and $\tilde{n}_{\langle\langle r+1\rangle\rangle}\ge\tilde{n}_{\langle\langle r+2\rangle\rangle}\ge\cdots\ge\tilde{n}_{\langle\langle\tilde{m}\rangle\rangle}$ (remark: $o_{(h)}$ and $\tilde{n}_{\langle\langle h\rangle\rangle}$ ($h=r+1,\ldots,\tilde{m}$) do not necessarily correspond to the same group).
From the $UB$ (see Algorithm 4) and $LB$ (see Equation (20)), a standard B&B algorithm can be given.
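A generic depth-first branch-and-bound skeleton over group sequences, in the spirit of this section, is sketched below; it is an assumed illustration, not the paper's implementation. Nodes are partial group orders, a node is pruned when its lower bound (in the paper, Equation (20)) cannot improve on the incumbent, and the incumbent is initialized from Algorithm 4 or 5. The callables `tct_of_order` and `lower_bound` are placeholders; the toy objective and bound in the example exist only so the sketch runs.

```python
def branch_and_bound(m, tct_of_order, lower_bound, ub_order, ub_value):
    """m: number of groups; ub_order/ub_value: incumbent solution and its value."""
    best_order, best_val = list(ub_order), ub_value
    stack = [[]]                                   # partial group orders, depth-first
    while stack:
        partial = stack.pop()
        if len(partial) == m:                      # leaf: a complete group sequence
            val = tct_of_order(partial)
            if val < best_val:
                best_order, best_val = partial, val
            continue
        for h in range(m):                         # branch on the next scheduled group
            if h in partial:
                continue
            child = partial + [h]
            if lower_bound(child) < best_val:      # prune nodes that cannot improve
                stack.append(child)
    return best_order, best_val


if __name__ == "__main__":
    o = [5.0, 2.0, 8.0, 1.0]                       # toy data
    toy = lambda seq: sum((len(o) - p) * o[h] for p, h in enumerate(seq))
    lb = lambda partial: toy(partial)              # prefix value is a valid bound here
    seed = list(range(len(o)))
    print(branch_and_bound(len(o), toy, lb, seed, toy(seed)))
```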

6. Computational Results

A series of computational experiments was performed to evaluate the effectiveness of the $UB$, B&B, and $ts$ algorithms; the $ts$ algorithm was terminated after 2000 iterations. The proposed algorithms were coded in C++ and run on a desktop computer with an Intel® Core™ i5-10500 3.10 GHz CPU and 8 GB RAM under the Windows® 10 operating system. The following parameters were randomly generated: $\varsigma_{hj}$ is uniformly distributed between 1 and 100; $o_h$ is uniformly distributed between 1 and 50; $\theta$ and $\mu$ are uniformly distributed over $(0,0.5)$ (Table 2) or $(0.5,1)$ (Table 3); $\tilde{U}=\tilde{V}=500$; $\tilde{n}=100,150,200,250,300$; $\tilde{m}=12,13,14,15,16$ (at least one job per group); $\eta=2$. For each combination of $\tilde{n}$, $\tilde{m}$, and $\theta$ ($\mu$), 10 replicas were randomly generated, and the maximum cpu time for each instance was set to 3600 s. For the B&B algorithm, the average and maximum cpu times (in seconds) and the average and maximum numbers of nodes are reported. The error bound of the $UB$ and $ts$ algorithms is given by
$$\frac{\widehat{tct}(Y)-\widehat{tct}(Opt)}{\widehat{tct}(Opt)},$$
where $Y\in\{UB,ts\}$, $\widehat{tct}(Y)$ is the $\widehat{tct}$ value obtained by $Y$, and $\widehat{tct}(Opt)$ is the optimal value obtained by the B&B algorithm. The computational results are given in Table 2 and Table 3. From these tables, it can be seen that the B&B algorithm can solve instances with up to 300 jobs in a reasonable amount of time, and that $UB$ performs very well compared with $ts$ in terms of error bound: for $\tilde{n}\le 300$, the maximum error bound of $UB$ is at most 0.001559 (i.e., a relative error of 0.1559%).

7. Conclusions

This paper investigated a group scheduling problem with deterioration effects and resource allocation. The goal was to determine $\bar{\pi}_{\Omega}^{*}$, $\bar{\pi}_{h}^{*}$ ($h=1,\ldots,\tilde{m}$) within $\Omega_h$, and $R^{*}$ such that $\widehat{tct}$ is minimized subject to $\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\tilde{r}_{hj}\le\tilde{V}$ and $\sum_{h=1}^{\tilde{m}}\tilde{r}_{h}\le\tilde{U}$. For some special cases, we demonstrated that the problem remains polynomially solvable. For the general case, we proposed several algorithms to solve the problem. As a future extension, it would be interesting to deal with group scheduling with two-scenario-based processing times (see Wu et al. [23]) and with delivery times (see Qian and Zhan [24]).

Author Contributions

Conceptualization, J.-X.Y. and H.-B.B.; methodology, J.-X.Y. and N.R.; software, J.-X.Y., H.-B.B. and H.B.; formal analysis, J.-X.Y. and J.-B.W.; investigation, J.-B.W.; writing—original draft preparation, J.-X.Y. and J.-B.W.; writing—review and editing, J.-X.Y. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the LiaoNing Revitalization Talents Program (Grant No. XLYC2002017) and the Natural Science Foundation of LiaoNing Province, China (Grant No. 2020-MS-233).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ham, I.; Hitomi, K.; Yoshida, T. Group Technology: Applications to Production Management. Available online: https://0-link-springer-com.brum.beds.ac.uk/book/10.1007/978-94-009-4976-8 (accessed on 17 July 2022).
  2. Wang, J.B.; Guo, A.X.; Shan, F.; Jiang, B.; Wang, L.Y. Single machine group scheduling under decreasing linear deterioration. J. Appl. Math. Comput. 2007, 24, 283–293.
  3. Yang, S.J. Group scheduling problems with simultaneous considerations of learning and deterioration effects on a single-machine. Appl. Math. Model. 2011, 35, 4008–4016.
  4. Bai, J.; Li, Z.R.; Huang, X. Single-machine group scheduling with general deterioration and learning effects. Appl. Math. Model. 2012, 36, 1267–1274.
  5. Lu, Y.Y.; Wang, J.J.; Wang, J.B. Single machine group scheduling with decreasing time-dependent processing times subject to release dates. Appl. Math. Comput. 2014, 234, 286–292.
  6. Wang, J.B.; Liu, L.; Wang, J.J.; Li, L. Makespan minimization scheduling with ready times, group technology and shortening job processing times. Comput. J. 2018, 61, 1422–1428.
  7. Liu, F.; Yang, J.; Lu, Y.Y. Solution algorithms for single-machine group scheduling with ready times and deteriorating jobs. Eng. Optim. 2019, 51, 862–874.
  8. Zhu, Z.G.; Sun, L.Y.; Chu, F.; Liu, M. Single-machine group scheduling with resource allocation and learning effect. Comput. Ind. Eng. 2011, 60, 148–157.
  9. Zhang, X.; Liao, L.; Zhang, W.; Cheng, T.C.E.; Tan, Y.; Ji, M. Single-machine group scheduling with new models of position-dependent processing times. Comput. Ind. Eng. 2018, 117, 1–5.
  10. Liao, B.; Wang, X.; Zhu, X.; Yang, S.; Pardalos, P.M. Less is more approach for competing groups scheduling with different learning effects. J. Comb. Optim. 2020, 39, 33–54.
  11. Lv, D.Y.; Luo, S.W.; Xue, J.; Xu, J.X.; Wang, J.B. A note on single machine common flow allowance group scheduling with learning effect and resource allocation. Comput. Ind. Eng. 2021, 151, 106941.
  12. Xu, H.Y.; Li, X.P.; Ruiz, R.B.; Zhu, H.H. Group scheduling with nonperiodical maintenance and deteriorating effects. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 2860–2872.
  13. Oron, D. Scheduling controllable processing time jobs in a deteriorating environment. J. Oper. Res. Soc. 2014, 64, 49–56.
  14. Li, L.; Wang, J.-J. Scheduling jobs with deterioration effect and controllable processing time. Neural Comput. Appl. 2018, 29, 1163–1170.
  15. Wang, D.; Huo, Y.; Ji, P. Single-machine group scheduling with deteriorating jobs and allotted resource. Optim. Lett. 2014, 8, 591–605.
  16. Liang, X.X.; Liu, M.Q.; Feng, Y.B.; Wang, J.B.; Wen, L.S. Solution algorithms for single-machine resource allocation scheduling with deteriorating jobs and group technology. Eng. Optim. 2020, 52, 1184–1197.
  17. Wang, J.B.; Liang, X.X. Group scheduling with deteriorating jobs and allotted resource under limited resource availability constraint. Eng. Optim. 2019, 51, 231–246.
  18. Ji, M.; Chen, K.; Ge, J.; Cheng, T.C.E. Group scheduling and job-dependent due window assignment based on a common flow allowance. Comput. Ind. Eng. 2014, 68, 35–41.
  19. Gawiejnowicz, S. Models and Algorithms of Time-Dependent Scheduling; Springer: Berlin/Heidelberg, Germany, 2020.
  20. Shabtay, D.; Kaspi, M. Minimizing the total weighted flow time in a single machine with controllable processing times. Comput. Oper. Res. 2004, 31, 2279–2289.
  21. Wang, J.B.; Wang, M.Z. Single-machine scheduling to minimize total convex resource consumption with a constraint on total weighted flow time. Comput. Oper. Res. 2012, 39, 492–497.
  22. Hardy, G.H.; Littlewood, J.E.; Polya, G. Inequalities, 2nd ed.; Cambridge University Press: Cambridge, UK, 1967.
  23. Wu, C.C.; Bai, D.; Chen, J.H.; Lin, W.C.; Xing, L.; Lin, J.C.; Cheng, S.-R. Several variants of simulated annealing hyper-heuristic for a single-machine scheduling with two-scenario-based dependent processing times. Swarm Evol. Comput. 2021, 60, 100765.
  24. Qian, J.; Zhan, Y. The due date assignment scheduling problem with delivery times and truncated sum-of-processing-times-based learning effect. Mathematics 2021, 9, 3085.
Table 1. Symbols.
Notation: Meaning
$\tilde{n}$ (resp. $\tilde{m}$): number of jobs (resp. groups)
$\Omega_h$: group $h$
$\breve{J}_{hj}$: job $j$ in group $\Omega_h$
$\tilde{n}_h$: number of jobs belonging to $\Omega_h$, i.e., $\sum_{h=1}^{\tilde{m}}\tilde{n}_h=\tilde{n}$
$\varsigma_{hj}$: workload of $\breve{J}_{hj}$ (a positive job parameter)
$\tilde{r}_{hj}$: amount of resource assigned to $\breve{J}_{hj}$
$p_{hj}^{Apt}$: actual processing time of $\breve{J}_{hj}$
$o_h$: workload of $\Omega_h$ (a positive group parameter), $h=1,2,\ldots,\tilde{m}$
$\tilde{r}_h$: amount of resource allocated to $\Omega_h$
$s_h^{Apt}$: actual setup time of $\Omega_h$
$\bar{C}_{hj}$: completion time of $\breve{J}_{hj}$
$[j]$: $j$th position in a schedule
$\widehat{tct}=\sum_{h=1}^{\tilde{m}}\sum_{j=1}^{\tilde{n}_h}\bar{C}_{hj}$: total completion time
$\bar{\pi}_{\Omega}$: group schedule
$\bar{\pi}_{h}$: job schedule in $\Omega_h$
$\Pi$: schedule of jobs and groups, i.e., $\Pi=(\bar{\pi}_{\Omega},\bar{\pi}_{h}\,|\,h=1,\ldots,\tilde{m})$
Table 2. Results of algorithms for $\theta,\mu\in(0,0.5)$.
$\tilde{n}$ | $\tilde{m}$ | B&B cpu mean (s) | B&B cpu max (s) | B&B nodes mean | B&B nodes max | UB cpu mean (s) | UB cpu max (s) | UB error mean | UB error max | ts cpu mean (s) | ts cpu max (s) | ts error mean | ts error max
100 | 12 | 135.024 | 221.776 | 1,809,907.670 | 2,159,443 | 0.016 | 0.017 | 0.000095 | 0.000187 | 20.779 | 20.851 | 3.834449 | 7.624277
100 | 13 | 487.759 | 689.564 | 3,549,270.333 | 5,082,238 | 0.026 | 0.027 | 0.000156 | 0.000268 | 27.917 | 27.931 | 1.732993 | 2.020046
100 | 14 | 922.643 | 1522.268 | 13,107,200.333 | 15,310,034 | 0.039 | 0.040 | 0.000801 | 0.000857 | 36.547 | 36.851 | 2.649336 | 3.713691
100 | 15 | 2194.031 | 2700.565 | 27,285,725.333 | 29,086,172 | 0.06 | 0.066 | 0.000958 | 0.001559 | 46.688 | 46.716 | 2.817131 | 3.046495
100 | 16 | 3600 | 3600 | 31,359,296.667 | 35,393,226 | 0.089 | 0.091 | 0 | 0 | 59.426 | 59.502 | 4.264297 | 5.401098
150 | 12 | 207.987 | 342.972 | 2,341,793.000 | 3,117,957 | 0.018 | 0.019 | 0.000027 | 0.000078 | 20.801 | 21.24 | 1.514108 | 1.621744
150 | 13 | 542.599 | 801.597 | 6,761,302.333 | 9,380,526 | 0.024 | 0.025 | 0 | 0 | 27.637 | 27.713 | 1.385886 | 2.443091
150 | 14 | 945.239 | 1546.23 | 12,394,620.667 | 16,056,940 | 0.038 | 0.039 | 0.000023 | 0.000062 | 36.417 | 36.447 | 2.669022 | 3.621472
150 | 15 | 2253.448 | 2792.626 | 21,252,555.333 | 28,365,609 | 0.058 | 0.059 | 0.000044 | 0.000083 | 46.842 | 46.884 | 2.305773 | 2.829561
150 | 16 | 3600 | 3600 | 3,216,807.333 | 35,946,209 | 0.084 | 0.086 | 0.000078 | 0.000165 | 59.635 | 59.732 | 2.348701 | 3.032307
200 | 12 | 244.069 | 401.384 | 2,977,588.333 | 4,496,930 | 0.019 | 0.024 | 0 | 0 | 20.679 | 20.701 | 1.159309 | 1.440431
200 | 13 | 603.216 | 898.287 | 7,165,875.333 | 9,493,651 | 0.025 | 0.026 | 0.000035 | 0.000071 | 27.631 | 27.626 | 1.273332 | 1.786464
200 | 14 | 998.154 | 1702.5569 | 10,620,850.667 | 18,113,633 | 0.039 | 0.04 | 0 | 0 | 36.339 | 36.386 | 1.895725 | 2.250205
200 | 15 | 2256.669 | 2899.45 | 25,283,619.350 | 29,882,329 | 0.058 | 0.059 | 0.000336 | 0.000512 | 46.892 | 46.941 | 1.741518 | 2.476327
200 | 16 | 3600 | 3600 | 30,406,061.000 | 34,993,966 | 0.083 | 0.084 | 0 | 0 | 59.634 | 59.691 | 2.173201 | 3.408911
250 | 12 | 315.999 | 511.48 | 4,632,641.000 | 7,187,564 | 0.018 | 0.019 | 0.000726 | 0.000937 | 20.606 | 20.611 | 0.849315 | 1.063118
250 | 13 | 649.96 | 998.95 | 7,359,686.667 | 9,993,712 | 0.026 | 0.026 | 0.000012 | 0.000053 | 27.674 | 27.727 | 1.177795 | 1.975853
250 | 14 | 1052.236 | 1798.635 | 10,615,423.970 | 15,492,118 | 0.038 | 0.040 | 0.000225 | 0.000429 | 35.369 | 36.441 | 2.132518 | 5.285686
250 | 15 | 2348.584 | 3022.158 | 23,198,516.333 | 26,916,519 | 0.061 | 0.063 | 0 | 0 | 46.182 | 46.886 | 2.195878 | 3.378249
250 | 16 | 3600 | 3600 | 31,985,126.350 | 39,841,545 | 0.085 | 0.087 | 0.000047 | 0.000096 | 59.642 | 60.662 | 1.080341 | 2.231895
300 | 12 | 372.965 | 602.478 | 6,529,516.270 | 8,874,654 | 0.019 | 0.021 | 0.000264 | 0.000418 | 20.228 | 21.678 | 2.132238 | 3.441628
300 | 13 | 700.648 | 1096.126 | 9,534,255.360 | 10,346,347 | 0.028 | 0.029 | 0 | 0 | 27.432 | 28.344 | 3.041866 | 4.726763
300 | 14 | 1125.267 | 1900.597 | 18,285,356.330 | 20,378,676 | 0.042 | 0.043 | 0.000241 | 0.000358 | 35.636 | 36.359 | 2.508975 | 3.354484
300 | 15 | 2411.266 | 3098.486 | 23,166,586.555 | 29,786,776 | 0.061 | 0.063 | 0.000074 | 0.000085 | 45.856 | 46.612 | 1.456993 | 2.018688
300 | 16 | 3600 | 3600 | 33,064,597.980 | 36,268,497 | 0.086 | 0.088 | 0 | 0 | 59.202 | 60.342 | 2.665422 | 3.625572
Table 3. Results of algorithms for $\theta,\mu\in(0.5,1)$.
$\tilde{n}$ | $\tilde{m}$ | B&B cpu mean (s) | B&B cpu max (s) | B&B nodes mean | B&B nodes max | UB cpu mean (s) | UB cpu max (s) | UB error mean | UB error max | ts cpu mean (s) | ts cpu max (s) | ts error mean | ts error max
100 | 12 | 111.759 | 203.165 | 1,772,325.333 | 2,006,440 | 0.018 | 0.022 | 0.000342 | 0.000361 | 20.636 | 20.663 | 2.151314 | 2.733541
100 | 13 | 514.284 | 720.05 | 5,192,266.570 | 8,375,900 | 0.036 | 0.057 | 0.000023 | 0.000097 | 27.71 | 27.724 | 2.302026 | 2.604564
100 | 14 | 899.152 | 1511.534 | 10,440,786.667 | 15,534,053 | 0.041 | 0.044 | 0.000056 | 0.000089 | 38.454 | 41.59 | 3.211589 | 4.194101
100 | 15 | 2231.154 | 2691.187 | 21,511,386.670 | 28,404,113 | 0.062 | 0.071 | 0 | 0 | 47.426 | 47.484 | 3.053645 | 3.928604
100 | 16 | 3600 | 3600 | 32,304,624.333 | 34,845,329 | 0.086 | 0.087 | 0.000154 | 0.000203 | 60.495 | 60.688 | 4.714135 | 6.396946
150 | 12 | 197.605 | 301.479 | 2,269,584.670 | 3,575,833 | 0.017 | 0.018 | 0.000977 | 0.000995 | 20.896 | 20.912 | 1.417686 | 1.890114
150 | 13 | 570.153 | 807.542 | 6,764,995.435 | 10,027,994 | 0.025 | 0.026 | 0.000089 | 0.0000138 | 28.125 | 28.213 | 1.023524 | 1.151338
150 | 14 | 966.147 | 1632.498 | 10,816,813.000 | 17,165,408 | 0.039 | 0.04 | 0.000135 | 0.000185 | 36.918 | 36.995 | 1.383139 | 1.714258
150 | 15 | 2265.154 | 2823.3966 | 4,825,309.666 | 20,264,395 | 0.07 | 0.094 | 0 | 0 | 47.613 | 47.704 | 1.918044 | 3.053374
150 | 16 | 3600 | 3600 | 33,081,125.000 | 38,287,571 | 0.086 | 0.091 | 0.000235 | 0.000304 | 59.542 | 59.622 | 2.339585 | 3.275121
200 | 12 | 251.663 | 412.269 | 3,364,810.333 | 4,354,750 | 0.016 | 0.017 | 0.000322 | 0.000358 | 20.588 | 20.619 | 0.679119 | 1.162433
200 | 13 | 632.148 | 910.214 | 7,692,721.250 | 12,135,354 | 0.038 | 0.064 | 0.000014 | 0.000039 | 27.675 | 27.737 | 0.843637 | 0.974668
200 | 14 | 1021.267 | 1741.637 | 10,359,758.667 | 18,262,014 | 0.040 | 0.041 | 0 | 0 | 36.342 | 37.353 | 1.094559 | 1.495353
200 | 15 | 2303.264 | 2935.348 | 24,840,981.000 | 29,871,474 | 0.057 | 0.058 | 0.000055 | 0.000083 | 46.872 | 46.91 | 1.243455 | 1.642072
200 | 16 | 3600 | 3600 | 36,074,135.333 | 39,275,374 | 0.083 | 0.084 | 0 | 0 | 59.671 | 59.794 | 1.942357 | 2.492556
250 | 12 | 310.286 | 507.549 | 3,193,303.333 | 5,195,728 | 0.017 | 0.018 | 0 | 0 | 20.627 | 20.651 | 0.578495 | 1.212256
250 | 13 | 669.297 | 1032.756 | 7,887,726.555 | 13,510,556 | 0.025 | 0.026 | 0.000327 | 0.000449 | 27.716 | 28.722 | 0.971304 | 1.540024
250 | 14 | 1077.954 | 1901.25 | 11,022,935.550 | 20,161,905 | 0.042 | 0.043 | 0.000087 | 0.000126 | 38.265 | 41.688 | 1.464696 | 2.107542
250 | 15 | 2363.186 | 3102.583 | 24,480,723.980 | 32,108,671 | 0.08 | 0.092 | 0.000355 | 0.000421 | 47.464 | 48.424 | 1.073162 | 1.436231
250 | 16 | 3600 | 3600 | 35,416,920.000 | 37,899,764 | 0.092 | 0.096 | 0.000039 | 0.000058 | 60.474 | 61.622 | 1.607212 | 2.647154
300 | 12 | 372.261 | 611.365 | 4,028,462.640 | 6,779,955 | 0.021 | 0.022 | 0.000022 | 0.000073 | 20.858 | 21.996 | 0.770591 | 1.264125
300 | 13 | 726.314 | 1143.348 | 8,899,027.570 | 13,410,513 | 0.036 | 0.057 | 0 | 0 | 28.135 | 29.234 | 1.661815 | 2.354942
300 | 14 | 1132.706 | 2008.646 | 14,693,257.000 | 23,608,109 | 0.059 | 0.061 | 0.0000068 | 0.000112 | 36.936 | 38.599 | 0.702051 | 1.018678
300 | 15 | 2546.264 | 3348.345 | 30,358,453.333 | 35,337,815 | 0.084 | 0.085 | 0 | 0 | 47.341 | 48.502 | 1.670674 | 2.166306
300 | 16 | 3600 | 3600 | 35,362,523.000 | 39,764,582 | 0.086 | 0.092 | 0.000034 | 0.000086 | 58.245 | 60.262 | 1.247154 | 1.626366
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
