Article

A Novel Extension of the Technique for Order Preference by Similarity to Ideal Solution Method with Objective Criteria Weights for Group Decision Making with Interval Numbers

Department of Mathematics, Faculty of Computer Science, Bialystok University of Technology, Wiejska 45A, 15-351 Bialystok, Poland
Entropy 2021, 23(11), 1460; https://doi.org/10.3390/e23111460
Submission received: 27 September 2021 / Revised: 26 October 2021 / Accepted: 31 October 2021 / Published: 3 November 2021
(This article belongs to the Special Issue Entropy-Based Applications in Economics, Finance, and Management)

Abstract

This paper presents an extension of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method with objective criteria weights for Group Decision Making (GDM) with Interval Numbers (INs). The proposed method is an alternative to popular and often used methods that aggregate the decision matrices provided by the decision makers (DMs) into a single group matrix, which is the basis for determining objective criteria weights and ranking the alternatives. It does not use an aggregation operator, but a transformation of the decision matrices into criteria matrices, in the case of determining objective criteria weights, and into alternative matrices, in the case of the ranking of alternatives. This ensures that all the decision makers’ evaluations are taken into account instead of their certain average. The numerical example shows the ease of use of the proposed method, which can be implemented into common data analysis software such as Excel.

1. Introduction

In recent years, Multiple Criteria Decision Making (MCDM) methods have been increasingly used to solve real decision-making problems concerning various aspects of human life [1,2,3]. The main application areas for these methods are supply chain management [4], logistics [5], engineering [6], technology [7], and many others. The complexity and diversity of MCDM problems have resulted in the development of a variety of methods to solve them [2]. One group of these methods comprises methods based on reference points. Historically, the first method belonging to this group is the Hellwig method [8]. It uses a single reference point, called a “pattern”. It is an artificial solution that maximizes the benefit criteria and minimizes the cost criteria. The computed synthetic indicator of “proximity” of the alternatives to the “pattern” allows for their linear ordering and the identification of the best one. However, the most recognized and regularly used method in this group is TOPSIS, developed by Hwang and Yoon [9]. It uses two artificial solutions, called the Positive Ideal Solution (PIS) and the Negative Ideal Solution (NIS). The PIS is equivalent to the “pattern” in Hellwig’s method. In turn, the NIS minimizes the benefit criteria and maximizes the cost criteria. Taking into account the separation of the alternatives from the PIS and NIS, the Relative Closeness Coefficients (RCCs) to the PIS are calculated, which allows for the ranking of the alternatives.
The applications of the TOPSIS method are very diverse. Apart from the main applications of MCDM mentioned above, it is used in a growing number of new areas, such as flow control in a manufacturing system [10], the selection of sustainable acid rain control options [11], the selection of the best employees using decision support systems in internal control [12], credit risk evaluations for strategic partners [13], the investigation of aggregated social influence [14], the selection of stocks before the formation of a portfolio based on a company’s financial performance [15], the identification of the best wind turbines for different locations [16], the ranking of the developmental performance of nations [17], the evaluation of the quality of institutions in the European Union countries [18], the evaluation of technologies improving the quality of life of elderly people [19], and many others.
In real-life problems, it may be difficult to measure data accurately or to express the preferences of the DMs using real numbers; it may also happen that DMs use linguistic variables, in which case another format of data is needed. In such situations, MCDM methods, including TOPSIS, should be extended from real numbers to the new type of data. In the literature, we can find a number of extensions of the TOPSIS method for different types of data: fuzzy numbers [20], ordered fuzzy numbers [21], hesitant fuzzy sets [22], intuitionistic fuzzy sets [23], hesitant Pythagorean fuzzy sets [24], interval-valued fuzzy sets [25], interval neutrosophic sets [26], and others. This shows that researchers are developing new ways of presenting data to allow DMs to formulate their preferences more effectively. We can even say that the choice of a data presentation method is itself an MCDM problem.
In this paper we use INs. An extension of the TOPSIS method to MCDM problems with INs was developed by Jahanshahloo et al. [27]. A limitation of this approach lies in the definitions of the PIS and NIS. These reference points are represented by real numbers selected from the lower and upper endpoints of the INs in the decision matrix, rather than by INs themselves. This can lead to incorrect results [28]. In the literature, various methods for determining the PIS and NIS for INs have been proposed. In [29,30], they are represented by real numbers instead of intervals, as in [27]. In [31,32], the PIS is defined as an interval whose endpoints are the maximum values of the lower and upper endpoints of the intervals, respectively, while for the NIS the minimum values of these endpoints are taken. In [33], the PIS is the average of the intervals, while for the NIS, the lower endpoints are the minimum of the lower endpoints of the intervals and the upper endpoints are the maximum of the upper endpoints of the intervals. The main limitation of these methods is that the determined elements of the PIS and NIS may not be elements of the decision matrix. Dymova et al. [28] presented a method of comparing INs to determine the minimum and maximum elements of the decision matrix. It is based on determining the distance between the midpoints of the INs being compared. In the proposed approach, we will use an analogous method of comparing INs, as proposed by Hu and Wang [34].
An important step in MCDM methods, including the TOPSIS method, is the determination of the criteria weights. These describe the importance of each criterion in the decision-making process and have a key influence on the final result. We usually use subjective or objective weights in solving MCDM problems. Subjective weights are determined by the DM or an expert, using their knowledge, experience, skills, etc. In situations where we cannot obtain the appropriate weights, or the cost of obtaining them is too high, we can use objective weights. These are determined by mathematical methods based on the decision matrix. One of the popular methods for determining objective weights is the entropy method [9]. It assigns a higher weight to a criterion for which the evaluations of the alternatives are more diversified. Hosseinzadeh Lotfi and Fallahnejad [35] proposed an extension of the entropy method to data in the form of INs. As a result, the objective criteria weights can also be obtained in the form of INs.
Because of the increasing complexity of decision-making problems, they are often analyzed by a group of DMs, which leads to the development of so-called Multiple Criteria Group Decision Making (MCGDM). In such situations, each member of the group defines an individual decision matrix. A common technique is to determine the aggregate (group) matrix from the individual matrices using a selected aggregation operator. This matrix is the basis for determining objective criteria weights and ranking the alternatives. One of the most popular aggregation operators is the arithmetic mean. Note, however, that this may not reflect the preferences or judgments of the DMs [36]. To better explain this limitation, we present two simple numerical examples. We consider a group of two decision makers $\{DM_1, DM_2\}$ who evaluate three alternatives $\{A_1, A_2, A_3\}$ with respect to two benefit criteria $\{C_1, C_2\}$ using the following scale: $\{1, 2, 3, 4, 5\}$. Their evaluations of the alternatives with respect to the criteria are given in the form of individual decision matrices $X^1$ and $X^2$; by $X^{AGG}$ we denote the result of aggregation using the arithmetic mean.
Example 1.
The ratings of the alternatives with respect to the criteria provided by the DMs are:
$X^1 = \begin{pmatrix} 1 & 1 \\ 2 & 2 \\ 4 & 3 \end{pmatrix}, \qquad X^2 = \begin{pmatrix} 3 & 3 \\ 2 & 2 \\ 1 & 1 \end{pmatrix}$
(rows correspond to the alternatives $A_1, A_2, A_3$ and columns to the criteria $C_1, C_2$).
Let us note that regardless of whether the ratings of the alternatives with respect to a criterion are in the form “1 and 3”, “2 and 2”, or “3 and 1”, the aggregation results are the same and equal to “2”. The aggregation results are:
$X^{AGG} = \begin{pmatrix} 2 & 2 \\ 2 & 2 \\ 2.5 & 2 \end{pmatrix}$.
Based on the matrix $X^{AGG}$, and using the entropy method, we can calculate the criteria weights, obtaining the following vector:
$w^{AGG} = (1, 0)$.
This means that criterion $C_2$ has no influence on the ranking of the alternatives and can be omitted. On the other hand, using the proposed approach on the matrices $X^1$ and $X^2$, we obtain the following vector of criteria weights:
$w = (0.5921, 0.4079)$.
Example 2.
The ratings of the alternatives with respect to the criteria provided by the DMs are:
$X^1 = \begin{pmatrix} 5 & 1 \\ 3 & 2 \\ 1 & 3 \end{pmatrix}, \qquad X^2 = \begin{pmatrix} 1 & 3 \\ 3 & 2 \\ 5 & 1 \end{pmatrix}$.
The aggregation results are:
$X^{AGG} = \begin{pmatrix} 3 & 2 \\ 3 & 2 \\ 3 & 2 \end{pmatrix}$.
Matrix $X^{AGG}$ shows that all three alternatives $\{A_1, A_2, A_3\}$ are equivalent (i.e., they have the same aggregate rating) and we cannot calculate the vector of criteria weights using the entropy method. However, if we use the proposed approach, we obtain the following vector of criteria weights:
$w = (0.6497, 0.3503)$.
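To make the limitation concrete, the following minimal Python sketch (our illustration, not part of the original study) applies the classical entropy weighting described later in Section 2.3 to the aggregated matrices of Examples 1 and 2:

```python
import numpy as np

# X_AGG from Example 1 (rows A1..A3, columns C1, C2).
X = np.array([[2.0, 2.0], [2.0, 2.0], [2.5, 2.0]])
Y = X / X.sum(axis=0)                          # column-wise normalization
e = -(Y * np.log(Y)).sum(axis=0) / np.log(3)   # entropy of each criterion
d = 1.0 - e                                    # diversification
print(d / d.sum())                             # -> [1. 0.]: C2 would get zero weight

# X_AGG from Example 2: every column is constant, so e = [1, 1], d = [0, 0]
# and the weights d / d.sum() are undefined (0/0).
```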
From Examples 1 and 2, we can conclude that such an averaged result does not reflect the discrepancies between the individual decisions (the preferences of the DMs), and that using such averaged information may lead to an incorrect final decision.
The aim of this paper is to present a new approach for GDM using the TOPSIS method and objective criteria weights with INs. The first main contribution of this paper is a method for determining the objective criteria weights for GDM without aggregating individual decision matrices. The method involves transforming the individual decision matrices into criteria matrices and using the interval entropy and the interval TOPSIS methods to determine the objective criteria weights. In this method, unlike in the method proposed by Hosseinzadeh Lotfi and Fallahnejad [35], as the final result, we receive the weights in the form of real numbers. The second main contribution of this paper is the TOPSIS method for GDM, also without the aggregation of individual decision matrices. This method involves transforming the decision matrices into matrices of alternatives and then using a new interval TOPSIS method for the ranking of alternatives.
The remainder of the paper consists of the following sections. Section 2 presents basic information about INs and a description of the classical TOPSIS method and the classical entropy method. The main section of the paper, i.e., Section 3, presents the algorithm of the proposed method in detail. Next, the proposed method is used in a numerical example and compared with other, similar approaches which are based on the aggregation of individual matrices. The paper ends with the conclusions.

2. Preliminaries

In the following, we present some basic information about INs, the classical TOPSIS method, and the entropy method of determining criteria weights.

2.1. Interval Numbers

Definition 1.
As proposed by [37]: The closed IN, denoted by $[\underline{a}, \overline{a}]$, is the set of real numbers given by:
$[\underline{a}, \overline{a}] = \{ x : \underline{a} \le x \le \overline{a} \}$.    (1)
Throughout this paper, INs will be used in the interval TOPSIS and interval entropy methods, so we assume that they are positive INs, i.e., $\underline{a} > 0$.
Definition 2.
As proposed by [37]: Let $[\underline{a}, \overline{a}]$ and $[\underline{b}, \overline{b}]$ be two positive INs, and $\lambda > 0$ be a real number. Then:
$[\underline{a}, \overline{a}] = [\underline{b}, \overline{b}]$ if $\underline{a} = \underline{b}$ and $\overline{a} = \overline{b}$,
$[\underline{a}, \overline{a}] + [\underline{b}, \overline{b}] = [\underline{a} + \underline{b},\ \overline{a} + \overline{b}]$,
$[\underline{a}, \overline{a}] - [\underline{b}, \overline{b}] = [\underline{a} - \overline{b},\ \overline{a} - \underline{b}]$,
$[\underline{a}, \overline{a}] \cdot [\underline{b}, \overline{b}] = [\underline{a}\,\underline{b},\ \overline{a}\,\overline{b}]$,
$[\underline{a}, \overline{a}] / [\underline{b}, \overline{b}] = [\underline{a}/\overline{b},\ \overline{a}/\underline{b}]$,
$\lambda \cdot [\underline{a}, \overline{a}] = [\lambda \underline{a},\ \lambda \overline{a}]$.
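For readers who prefer code, a minimal Python sketch of this interval arithmetic (positive INs assumed, as in Definition 2; the class name Interval is ours) could look as follows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    low: float   # lower endpoint (assumed > 0)
    up: float    # upper endpoint

    def __add__(self, other):
        return Interval(self.low + other.low, self.up + other.up)

    def __sub__(self, other):
        return Interval(self.low - other.up, self.up - other.low)

    def __mul__(self, other):        # valid for positive intervals
        return Interval(self.low * other.low, self.up * other.up)

    def __truediv__(self, other):    # valid for positive intervals
        return Interval(self.low / other.up, self.up / other.low)

    def scale(self, lam):            # multiplication by a real number lam > 0
        return Interval(lam * self.low, lam * self.up)

print(Interval(1, 2) + Interval(3, 5))   # Interval(low=4, up=7)
print(Interval(1, 2) / Interval(4, 5))   # Interval(low=0.2, up=0.5)
```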
The TOPSIS method requires the determination of the minimum and maximum elements. To compare INs, we apply the method developed by Hu and Wang [34]. It is based on a different description of INs than Equation (1) used in Definition 1.
Definition 3.
As proposed by [34]: The IN $[\underline{a}, \overline{a}]$ is represented in the form:
$\langle m([\underline{a}, \overline{a}]);\ w([\underline{a}, \overline{a}]) \rangle$    (2)
where $m([\underline{a}, \overline{a}])$ and $w([\underline{a}, \overline{a}])$ are its mid-point and half-width, respectively, determined as follows:
$m([\underline{a}, \overline{a}]) = \dfrac{\underline{a} + \overline{a}}{2}$,    (3)
and:
$w([\underline{a}, \overline{a}]) = \dfrac{\overline{a} - \underline{a}}{2}$.    (4)
Using the representation from Equation (2), Hu and Wang defined the order relation “$\le$” for INs as follows.
Definition 4.
As proposed by [34]: Let $[\underline{a}, \overline{a}]$ and $[\underline{b}, \overline{b}]$ be two INs. Then:
$[\underline{a}, \overline{a}] \le [\underline{b}, \overline{b}] \ \text{iff} \ \begin{cases} m([\underline{a}, \overline{a}]) < m([\underline{b}, \overline{b}]), & \text{if } m([\underline{a}, \overline{a}]) \ne m([\underline{b}, \overline{b}]) \\ w([\underline{a}, \overline{a}]) \ge w([\underline{b}, \overline{b}]), & \text{if } m([\underline{a}, \overline{a}]) = m([\underline{b}, \overline{b}]) \end{cases}$    (5)
and:
$[\underline{a}, \overline{a}] < [\underline{b}, \overline{b}] \ \text{iff} \ [\underline{a}, \overline{a}] \le [\underline{b}, \overline{b}] \ \text{and} \ [\underline{a}, \overline{a}] \ne [\underline{b}, \overline{b}]$.    (6)
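A small sketch of this comparison rule, reusing the Interval class from the previous sketch (the helper name hu_wang_key is ours):

```python
def hu_wang_key(iv):
    """Sort key for Definition 4: intervals are compared by mid-point first;
    at equal mid-points, the interval with the smaller half-width is the larger one."""
    mid = (iv.low + iv.up) / 2.0
    half = (iv.up - iv.low) / 2.0
    return (mid, -half)

intervals = [Interval(1, 5), Interval(2, 4), Interval(2, 6)]
print(max(intervals, key=hu_wang_key))   # Interval(low=2, up=6): largest mid-point (4)
print(min(intervals, key=hu_wang_key))   # Interval(low=1, up=5): mid-points tie at 3,
                                         # and the wider interval is ranked lower
```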

2.2. The Classical TOPSIS Method

Suppose an MCDM problem is given. The solution of the problem involves the linear ordering of the set of possible alternatives $\{A_1, A_2, \ldots, A_m\}$ and the indication of the best one. The alternatives under consideration are evaluated with respect to a set of criteria $\{C_1, C_2, \ldots, C_n\}$ that determine the choice of a solution. An MCDM problem is represented by a decision matrix $X$, of the form:
$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{pmatrix}$    (7)
where $x_{ij}$ for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$ represents the evaluation of the $i$th alternative with respect to the $j$th criterion. In addition, we determine the vector of criteria weights $w = (w_1, w_2, \ldots, w_n)$. The classical TOPSIS method developed by Hwang and Yoon consists of the following steps [9]:
  • Step 1. The normalization of the decision matrix $X$ and the calculation of the matrix $Y$, of the form:
    $Y = \begin{pmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mn} \end{pmatrix}$    (8)
    using, for $j = 1, \ldots, n$, the following formula:
    $y_{ij} = \dfrac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}$.    (9)
  • Step 2. The calculation of the weighted normalized decision matrix $V$, of the form:
    $V = \begin{pmatrix} v_{11} & v_{12} & \cdots & v_{1n} \\ v_{21} & v_{22} & \cdots & v_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ v_{m1} & v_{m2} & \cdots & v_{mn} \end{pmatrix}$    (10)
    where $v_{ij} = w_j \cdot y_{ij}$ for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$.
  • Step 3. The determination of the PIS $(A^+)$, of the form:
    $A^+ = (v_1^+, v_2^+, \ldots, v_n^+) = \{ (\max_i v_{ij} \mid j \in B), (\min_i v_{ij} \mid j \in C) \}$,    (11)
    and of the NIS $(A^-)$, of the form:
    $A^- = (v_1^-, v_2^-, \ldots, v_n^-) = \{ (\min_i v_{ij} \mid j \in B), (\max_i v_{ij} \mid j \in C) \}$,    (12)
    where $B$ and $C$ are associated with the benefit and cost criteria, respectively.
  • Step 4. The calculation of the distance of each $A_i$ $(i = 1, \ldots, m)$ from the PIS:
    $d_i^+ = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^+)^2}$,    (13)
    and from the NIS:
    $d_i^- = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^-)^2}$.    (14)
  • Step 5. The calculation of the coefficients $RCC_i$ $(i = 1, 2, \ldots, m)$ of relative closeness to the PIS for each alternative $A_i$ $(i = 1, \ldots, m)$, using the following formula:
    $RCC_i = \dfrac{d_i^-}{d_i^+ + d_i^-}$.    (15)
  • Step 6. The ranking of the alternatives in descending order, using $RCC_i$, and the determination of the best one (the one with the highest value of $RCC_i$).
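The six steps above fit in a few lines of numpy; the following sketch is our illustration (the function name topsis is assumed) and handles benefit and cost criteria via a boolean mask:

```python
import numpy as np

def topsis(X, w, benefit):
    """Classical TOPSIS sketch.
    X: (m, n) matrix of real-valued evaluations,
    w: (n,) vector of criteria weights,
    benefit: (n,) boolean mask, True for benefit and False for cost criteria."""
    Y = X / np.sqrt((X ** 2).sum(axis=0))                       # Step 1
    V = w * Y                                                   # Step 2
    A_plus = np.where(benefit, V.max(axis=0), V.min(axis=0))    # Step 3: PIS
    A_minus = np.where(benefit, V.min(axis=0), V.max(axis=0))   #         NIS
    d_plus = np.sqrt(((V - A_plus) ** 2).sum(axis=1))           # Step 4
    d_minus = np.sqrt(((V - A_minus) ** 2).sum(axis=1))
    rcc = d_minus / (d_plus + d_minus)                          # Step 5
    return rcc, np.argsort(-rcc)                                # Step 6: best first

# usage on made-up data: two benefit criteria, three alternatives
rcc, order = topsis(np.array([[7., 9.], [8., 6.], [6., 8.]]),
                    np.array([0.6, 0.4]), np.array([True, True]))
```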

2.3. The Entropy Method

The starting point for determining objective criteria weights by the entropy method is the decision matrix, Equation (7) (see Section 2.2). It consists of the following steps [9]:
  • Step 1. The normalization of the decision matrix $X$ and the calculation of the matrix $Y$, of the form:
    $Y = \begin{pmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mn} \end{pmatrix}$    (16)
    using the following formula for $j = 1, \ldots, n$:
    $y_{ij} = \dfrac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}$.    (17)
  • Step 2. The calculation of the vector of entropy $e = (e_1, e_2, \ldots, e_n)$, using the following formula for $j = 1, \ldots, n$:
    $e_j = -\dfrac{1}{\ln m} \sum_{i=1}^{m} y_{ij} \ln y_{ij}$.    (18)
    Moreover, when $y_{ij} = 0$ for some $i$, the value of $y_{ij} \ln y_{ij}$ is taken as 0, which is consistent with $\lim_{x \to 0^+} x \ln x = 0$.
  • Step 3. The calculation of the vector of diversification $d = (d_1, d_2, \ldots, d_n)$, using the following formula for $j = 1, \ldots, n$:
    $d_j = 1 - e_j$.    (19)
  • Step 4. The calculation of the vector of objective criteria weights $w = (w_1, w_2, \ldots, w_n)$, where:
    $w_j = \dfrac{d_j}{\sum_{j=1}^{n} d_j}$.    (20)
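In code, the entropy weighting reduces to a few numpy operations; the sketch below (helper name entropy_weights, ours) follows the four steps and treats 0·ln 0 as 0:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights by the entropy method.
    X: (m, n) matrix of non-negative evaluations."""
    m = X.shape[0]
    Y = X / X.sum(axis=0)                                            # Step 1
    t = np.where(Y > 0, Y * np.log(np.where(Y > 0, Y, 1.0)), 0.0)    # 0*ln(0) := 0
    e = -t.sum(axis=0) / np.log(m)                                   # Step 2: entropy
    d = 1.0 - e                                                      # Step 3: diversification
    return d / d.sum()                                               # Step 4: weights
```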

3. The Proposed Approach

The proposed extension of the TOPSIS method with objective criteria weights based on interval data for GDM consists of three major stages:
  • The preparation of the data;
  • The calculation of the objective criteria weights using the interval entropy method and the interval TOPSIS method, without the aggregation of individual decision matrices;
  • The linear ordering of alternatives using the extended TOPSIS method, based on interval data, without the aggregation of individual decision matrices.
A flow chart and a graphical scheme of the proposed method are shown in Figure 1 and Figure 2, respectively.
  • Stage 1: The preparation of the data.
    As in Section 2.2, suppose an MCDM problem for GDM is given, which consists of a set of possible alternatives $\{A_1, A_2, \ldots, A_m\}$ and a set of criteria $\{C_1, C_2, \ldots, C_n\}$. In this case, the evaluation of the alternatives with respect to the criteria is performed by a group of DMs or experts $\{DM_1, DM_2, \ldots, DM_K\}$. In the process of GDM, each $DM_k$ $(k = 1, 2, \ldots, K)$ constructs a matrix, called the individual decision matrix, of the form:
    $X^k = \begin{pmatrix} x_{11}^k & x_{12}^k & \cdots & x_{1n}^k \\ x_{21}^k & x_{22}^k & \cdots & x_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1}^k & x_{m2}^k & \cdots & x_{mn}^k \end{pmatrix}$    (21)
    (rows correspond to the alternatives $A_1, \ldots, A_m$ and columns to the criteria $C_1, \ldots, C_n$). In the proposed approach, each element $x_{ij}^k$ for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$ of the matrix $X^k$ is in the form of an IN, i.e., $x_{ij}^k = [\underline{x}_{ij}^k, \overline{x}_{ij}^k]$, and represents the evaluation of the $k$th DM of the $i$th alternative with respect to the $j$th criterion.
  • Stage 2: The calculation of the objective criteria weights for GDM, without the aggregation of individual decision matrices.
    The proposed method of calculation of the objective criteria weights based on interval entropy and interval TOPSIS consists of the following steps.
  • Step 1. The normalization, for each decision maker $DM_k$ $(k = 1, 2, \ldots, K)$, of their individual decision matrix, as given by Equation (21), and obtaining the matrix $Y^k$, of the form:
    $Y^k = \begin{pmatrix} y_{11}^k & y_{12}^k & \cdots & y_{1n}^k \\ y_{21}^k & y_{22}^k & \cdots & y_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1}^k & y_{m2}^k & \cdots & y_{mn}^k \end{pmatrix}$    (22)
    using the following formula for $j = 1, \ldots, n$ [35]:
    $y_{ij}^k = \begin{cases} \left[ \dfrac{\underline{x}_{ij}^k}{\sum_{i=1}^{m} \overline{x}_{ij}^k},\ \dfrac{\overline{x}_{ij}^k}{\sum_{i=1}^{m} \overline{x}_{ij}^k} \right] & \text{if } j \in B \\[2ex] \left[ \dfrac{1/\overline{x}_{ij}^k}{\sum_{i=1}^{m} 1/\underline{x}_{ij}^k},\ \dfrac{1/\underline{x}_{ij}^k}{\sum_{i=1}^{m} 1/\underline{x}_{ij}^k} \right] & \text{if } j \in C \end{cases}$    (23)
  • Step 2. The construction, for each criterion $C_j$ $(j = 1, 2, \ldots, n)$, of the matrix $V_j$, of the form:
    $V_j = \begin{pmatrix} y_{1j}^1 & y_{1j}^2 & \cdots & y_{1j}^K \\ y_{2j}^1 & y_{2j}^2 & \cdots & y_{2j}^K \\ \vdots & \vdots & \ddots & \vdots \\ y_{mj}^1 & y_{mj}^2 & \cdots & y_{mj}^K \end{pmatrix}$    (24)
    (rows correspond to the alternatives $A_1, \ldots, A_m$ and columns to the decision makers $DM_1, \ldots, DM_K$).
  • Step 3. The calculation, for each criterion $C_j$ $(j = 1, 2, \ldots, n)$, of the entropy vector $e_j$, of the form:
    $e_j = (e_j^1, e_j^2, \ldots, e_j^K)$    (25)
    based on the matrix $V_j$, where $e_j^k = [\underline{e}_j^k, \overline{e}_j^k]$ for $k = 1, 2, \ldots, K$ and:
    $\underline{e}_j^k = \min \left\{ -\dfrac{1}{\ln m} \sum_{i=1}^{m} \underline{y}_{ij}^k \ln \underline{y}_{ij}^k,\ -\dfrac{1}{\ln m} \sum_{i=1}^{m} \overline{y}_{ij}^k \ln \overline{y}_{ij}^k \right\}$,    (26)
    and:
    $\overline{e}_j^k = \max \left\{ -\dfrac{1}{\ln m} \sum_{i=1}^{m} \underline{y}_{ij}^k \ln \underline{y}_{ij}^k,\ -\dfrac{1}{\ln m} \sum_{i=1}^{m} \overline{y}_{ij}^k \ln \overline{y}_{ij}^k \right\}$,    (27)
    and $\underline{y}_{ij}^k \ln \underline{y}_{ij}^k$ or $\overline{y}_{ij}^k \ln \overline{y}_{ij}^k$ is defined to be 0 if $\underline{y}_{ij}^k = 0$ or $\overline{y}_{ij}^k = 0$ [35], respectively.
  • Step 4. The calculation, for each criterion $C_j$ $(j = 1, 2, \ldots, n)$, of the diversification vector $d_j$, of the form:
    $d_j = (d_j^1, d_j^2, \ldots, d_j^K)$    (28)
    where $d_j^k = 1 - e_j^k = [1 - \overline{e}_j^k, 1 - \underline{e}_j^k]$ for $k = 1, 2, \ldots, K$, and the construction of the diversification matrix $D$, of the form:
    $D = \begin{pmatrix} d_1^1 & d_1^2 & \cdots & d_1^K \\ d_2^1 & d_2^2 & \cdots & d_2^K \\ \vdots & \vdots & \ddots & \vdots \\ d_n^1 & d_n^2 & \cdots & d_n^K \end{pmatrix}$    (29)
    (rows correspond to the criteria $C_1, \ldots, C_n$ and columns to the decision makers $DM_1, \ldots, DM_K$).
  • Step 5. The determination of the Most Important Criterion (MIC):
    $C^+ = (c_1^+, c_2^+, \ldots, c_K^+)$    (30)
    where $c_k^+ = \max_j d_j^k$ for $k = 1, 2, \ldots, K$ (the maximum of the INs is taken with respect to the order relation (5)), and of the Least Important Criterion (LIC):
    $C^- = (c_1^-, c_2^-, \ldots, c_K^-)$    (31)
    where $c_k^- = [0, 0]$ for $k = 1, 2, \ldots, K$, based on the matrix $D$.
  • Step 6. The calculation of the distance of each diversification vector $d_j$, representing the weight of criterion $C_j$ $(j = 1, 2, \ldots, n)$, from the MIC:
    $d_j^{C+} = \sqrt{\sum_{k=1}^{K} \left[ (\underline{d}_j^k - \underline{c}_k^+)^2 + (\overline{d}_j^k - \overline{c}_k^+)^2 \right]}$,    (32)
    and from the LIC:
    $d_j^{C-} = \sqrt{\sum_{k=1}^{K} \left[ (\underline{d}_j^k - \underline{c}_k^-)^2 + (\overline{d}_j^k - \overline{c}_k^-)^2 \right]}$.    (33)
  • Step 7. The calculation of the coefficients $RCC_j^C$ $(j = 1, 2, \ldots, n)$ of relative closeness to the MIC for each diversification vector $d_j$, using the following formula:
    $RCC_j^C = \dfrac{d_j^{C-}}{d_j^{C+} + d_j^{C-}}$.    (34)
  • Step 8. The calculation of the vector of objective criteria weights:
    $w = (w_1, w_2, \ldots, w_n)$    (35)
    where:
    $w_j = \dfrac{RCC_j^C}{\sum_{j=1}^{n} RCC_j^C}$    (36)
    for $j = 1, 2, \ldots, n$.
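A compact numpy sketch of Stage 2 is given below. It is our illustration (the function name interval_entropy_weights is assumed), restricted to benefit criteria for brevity, with each interval decision matrix stored as an array of shape (m, n, 2) holding lower and upper endpoints:

```python
import numpy as np

def interval_entropy_weights(decision_matrices):
    """Stage 2 sketch: objective criteria weights from K individual interval
    decision matrices (each of shape (m, n, 2)), without aggregation.
    Benefit criteria only; [..., 0] holds lower and [..., 1] upper endpoints."""
    K, (m, n, _) = len(decision_matrices), decision_matrices[0].shape

    # Step 1: normalize each individual matrix (benefit branch of the formula).
    Y = np.stack([X / X[:, :, 1].sum(axis=0)[None, :, None] for X in decision_matrices])

    def entropy(col):                       # Shannon entropy term, 0*ln(0) := 0
        p = col[col > 0]
        return -(p * np.log(p)).sum() / np.log(m)

    # Steps 2-3: interval entropy e_j^k for every criterion j and decision maker k.
    e_low = np.empty((n, K)); e_up = np.empty((n, K))
    for j in range(n):
        for k in range(K):
            a, b = entropy(Y[k, :, j, 0]), entropy(Y[k, :, j, 1])
            e_low[j, k], e_up[j, k] = min(a, b), max(a, b)

    # Step 4: interval diversification d_j^k = [1 - upper(e), 1 - lower(e)].
    d_low, d_up = 1.0 - e_up, 1.0 - e_low

    # Step 5: MIC = per-DM maximum interval in the Hu-Wang order (mid-point
    # comparison; half-width tie-break omitted); LIC = [0, 0].
    best = ((d_low + d_up) / 2.0).argmax(axis=0)          # criterion index per DM
    c_low, c_up = d_low[best, range(K)], d_up[best, range(K)]

    # Steps 6-7: interval-TOPSIS distances to MIC/LIC and relative closeness.
    dist_mic = np.sqrt(((d_low - c_low) ** 2 + (d_up - c_up) ** 2).sum(axis=1))
    dist_lic = np.sqrt((d_low ** 2 + d_up ** 2).sum(axis=1))
    rcc = dist_lic / (dist_mic + dist_lic)

    # Step 8: normalize the closeness coefficients into criteria weights.
    return rcc / rcc.sum()
```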
  • Stage 3: The extended TOPSIS method for GDM without the aggregation of individual decision matrices.
The developed extended TOPSIS for GDM without the aggregation of individual decision matrices consists of the following steps.
  • Step 1. The normalization, for each decision maker $DM_k$ $(k = 1, 2, \ldots, K)$, of their individual decision matrix, as given by Equation (21), and obtaining the matrix $Y^k$, of the form:
    $Y^k = \begin{pmatrix} y_{11}^k & y_{12}^k & \cdots & y_{1n}^k \\ y_{21}^k & y_{22}^k & \cdots & y_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1}^k & y_{m2}^k & \cdots & y_{mn}^k \end{pmatrix}$    (37)
    using the following formula for $j = 1, \ldots, n$ [38]:
    $y_{ij}^k = \begin{cases} \left[ \dfrac{\underline{x}_{ij}^k}{\sum_{i=1}^{m} \overline{x}_{ij}^k},\ \dfrac{\overline{x}_{ij}^k}{\sum_{i=1}^{m} \underline{x}_{ij}^k} \right] & \text{if } j \in B \\[2ex] \left[ \dfrac{1/\overline{x}_{ij}^k}{\sum_{i=1}^{m} 1/\underline{x}_{ij}^k},\ \dfrac{1/\underline{x}_{ij}^k}{\sum_{i=1}^{m} 1/\overline{x}_{ij}^k} \right] & \text{if } j \in C \end{cases}$    (38)
Remark 1.
Note that the normalization method, Equation (38), used above does not guarantee that the normalized elements $y_{ij}^k$ belong to the interval $[0, 1]$. If we require this property to be satisfied, the elements of the matrix $Y^k$ can be recalculated using the following formula [38]:
$z_{ij}^k = \left[ \dfrac{\underline{y}_{ij}^k}{\sqrt{\sum_{i=1}^{m} \left[ (\underline{y}_{ij}^k)^2 + (\overline{y}_{ij}^k)^2 \right]}},\ \dfrac{\overline{y}_{ij}^k}{\sqrt{\sum_{i=1}^{m} \left[ (\underline{y}_{ij}^k)^2 + (\overline{y}_{ij}^k)^2 \right]}} \right]$.    (39)
As the final result, we obtain the normalized decision matrices $Z^k$ $(k = 1, 2, \ldots, K)$:
$Z^k = \begin{pmatrix} z_{11}^k & z_{12}^k & \cdots & z_{1n}^k \\ z_{21}^k & z_{22}^k & \cdots & z_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ z_{m1}^k & z_{m2}^k & \cdots & z_{mn}^k \end{pmatrix}$    (40)
  • Step 2. The calculation of the weighted normalized individual matrices $V^k$ $(k = 1, 2, \ldots, K)$:
    $V^k = \begin{pmatrix} v_{11}^k & v_{12}^k & \cdots & v_{1n}^k \\ v_{21}^k & v_{22}^k & \cdots & v_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ v_{m1}^k & v_{m2}^k & \cdots & v_{mn}^k \end{pmatrix}$    (41)
    where:
    $v_{ij}^k = w_j z_{ij}^k = [w_j \underline{z}_{ij}^k,\ w_j \overline{z}_{ij}^k]$    (42)
    and $w_j$ $(j = 1, 2, \ldots, n)$ are the objective criteria weights obtained in Stage 2.
  • Step 3. The construction, for each alternative $A_i$ $(i = 1, 2, \ldots, m)$, of the matrix $A_i$:
    $A_i = \begin{pmatrix} v_{i1}^1 & v_{i2}^1 & \cdots & v_{in}^1 \\ v_{i1}^2 & v_{i2}^2 & \cdots & v_{in}^2 \\ \vdots & \vdots & \ddots & \vdots \\ v_{i1}^K & v_{i2}^K & \cdots & v_{in}^K \end{pmatrix}$    (43)
    (rows correspond to the decision makers $DM_1, \ldots, DM_K$ and columns to the criteria $C_1, \ldots, C_n$).
  • Step 4. The determination of the PIS $(A^+)$:
    $A^+ = \begin{pmatrix} v_1^{1+} & v_2^{1+} & \cdots & v_n^{1+} \\ v_1^{2+} & v_2^{2+} & \cdots & v_n^{2+} \\ \vdots & \vdots & \ddots & \vdots \\ v_1^{K+} & v_2^{K+} & \cdots & v_n^{K+} \end{pmatrix}$    (44)
    where $v_j^{k+} = \max_i v_{ij}^k$ for $j = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, K$, and of the NIS $(A^-)$:
    $A^- = \begin{pmatrix} v_1^{1-} & v_2^{1-} & \cdots & v_n^{1-} \\ v_1^{2-} & v_2^{2-} & \cdots & v_n^{2-} \\ \vdots & \vdots & \ddots & \vdots \\ v_1^{K-} & v_2^{K-} & \cdots & v_n^{K-} \end{pmatrix}$    (45)
    where $v_j^{k-} = \min_i v_{ij}^k$ for $j = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, K$ (the maximum and minimum of the INs are taken with respect to the order relation (5)).
  • Step 5. The calculation of the distance of each matrix $A_i$, representing the alternative $A_i$ $(i = 1, \ldots, m)$, from the PIS:
    $d_i^{A+} = \sqrt{\sum_{k=1}^{K} \sum_{j=1}^{n} \left[ (\underline{v}_{ij}^k - \underline{v}_j^{k+})^2 + (\overline{v}_{ij}^k - \overline{v}_j^{k+})^2 \right]}$,    (46)
    and from the NIS:
    $d_i^{A-} = \sqrt{\sum_{k=1}^{K} \sum_{j=1}^{n} \left[ (\underline{v}_{ij}^k - \underline{v}_j^{k-})^2 + (\overline{v}_{ij}^k - \overline{v}_j^{k-})^2 \right]}$.    (47)
  • Step 6. The calculation of the coefficients $RCC_i^A$ $(i = 1, 2, \ldots, m)$ of relative closeness to the PIS for each alternative $A_i$ $(i = 1, \ldots, m)$, using the following formula:
    $RCC_i^A = \dfrac{d_i^{A-}}{d_i^{A+} + d_i^{A-}}$.    (48)
  • Step 7. The ranking of the alternatives in descending order, using $RCC_i^A$, and the determination of the best one.
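The ranking stage can be sketched in the same style; again this is our illustration (the name interval_topsis_gdm is assumed), written for benefit criteria, with the PIS and NIS entries selected via the Hu-Wang mid-point comparison:

```python
import numpy as np

def interval_topsis_gdm(decision_matrices, w):
    """Stage 3 sketch: rank m alternatives from K individual interval decision
    matrices (each of shape (m, n, 2)) and criteria weights w, without aggregation.
    Benefit criteria only."""
    K, (m, n, _) = len(decision_matrices), decision_matrices[0].shape

    # Step 1: normalization (benefit branch of Eq. (38)) followed by Eq. (39).
    Z = np.empty((K, m, n, 2))
    for k, X in enumerate(decision_matrices):
        y_low = X[:, :, 0] / X[:, :, 1].sum(axis=0)
        y_up = X[:, :, 1] / X[:, :, 0].sum(axis=0)
        denom = np.sqrt((y_low ** 2 + y_up ** 2).sum(axis=0))
        Z[k, :, :, 0], Z[k, :, :, 1] = y_low / denom, y_up / denom

    # Step 2: weighted normalized matrices v_ij^k = w_j * z_ij^k.
    V = Z * np.asarray(w)[None, None, :, None]

    # Step 4: PIS/NIS entries chosen among the alternatives for every (DM, criterion)
    # pair, using the mid-point comparison of intervals (half-width tie-break omitted).
    mid = V.mean(axis=-1)                             # mid-points, shape (K, m, n)
    kk, jj = np.meshgrid(range(K), range(n), indexing="ij")
    v_plus = V[kk, mid.argmax(axis=1), jj, :]         # shape (K, n, 2)
    v_minus = V[kk, mid.argmin(axis=1), jj, :]

    # Steps 5-6: distances over all DMs and criteria, then relative closeness.
    d_plus = np.sqrt(((V - v_plus[:, None, :, :]) ** 2).sum(axis=(0, 2, 3)))
    d_minus = np.sqrt(((V - v_minus[:, None, :, :]) ** 2).sum(axis=(0, 2, 3)))
    rcc = d_minus / (d_plus + d_minus)

    # Step 7: descending RCC gives the ranking (indices of alternatives, best first).
    return rcc, np.argsort(-rcc)
```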

4. A Numerical Example and Results

The approach proposed in Section 3 will now be illustrated with a numerical example, taken from [38], related to the evaluation of the authorities of a university in China. The set of alternatives $\{A_1, A_2, A_3\}$ consists of the president and two vice presidents, who are evaluated by teams of teachers ($DM_1$), researchers ($DM_2$), and undergraduates ($DM_3$). The DMs evaluate the candidates with respect to leadership ($C_1$), performance ($C_2$), and style of work ($C_3$), using a point scale from 0 to 100. The team ratings are represented by INs, where the lower endpoint is the minimum and the upper endpoint is the maximum rating among the group members. The individual decision matrices are presented in Table 1.
The first main step of the proposed approach is to determine the objective criteria weights, as described in Stage 2 of Section 3. The individual decision matrices are normalized (see Table 2) and then transformed into matrices of criteria (see Table 3). Next, for each criterion matrix, the entropy and diversification vectors are determined (see Table 4 and Table 5). Using the diversification vectors, we construct a diversification matrix, which is the basis for calculating the objective criteria weights using the interval TOPSIS method. Table 6 presents reference points—in this case, the MIC and LIC. After calculating the distance of each row of the diversification matrix from the MIC and LIC, the RCCs are calculated (see Table 7). These coefficients, after normalization, are the objective criteria weights (see Table 7 and Figure 3). In our example, we obtain the following vector:
$w = (0.3049,\ 0.4372,\ 0.2579)$.
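As a check, the Stage 2 sketch given in Section 3 can be run directly on the data of Table 1 (all three criteria are benefit criteria); under those assumptions it approximately reproduces the weights above:

```python
import numpy as np

# Individual interval decision matrices from Table 1, one array per DM,
# with shape (alternatives, criteria, 2) = (3, 3, 2).
X1 = np.array([[[60, 90], [72, 86], [85, 92]],
               [[77, 81], [69, 93], [83, 88]],
               [[80, 96], [59, 87], [68, 85]]], dtype=float)
X2 = np.array([[[77, 83], [68, 86], [82, 90]],
               [[93, 98], [76, 86], [65, 87]],
               [[79, 85], [72, 92], [81, 97]]], dtype=float)
X3 = np.array([[[85, 86], [76, 86], [80, 97]],
               [[79, 87], [75, 89], [81, 93]],
               [[62, 82], [84, 89], [78, 82]]], dtype=float)

w = interval_entropy_weights([X1, X2, X3])   # sketch from Stage 2
print(np.round(w, 4))                        # approximately [0.3049 0.4372 0.2579]
```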
The second main step of the proposed approach is to use an extension of the TOPSIS method for GDM without the aggregation of individual matrices, as described in Stage 3 of Section 3. The individual decision matrices (see Table 1) are normalized (see Table 8) using Equation (38) and then Equation (39). Using objective criteria weights (see Table 7), we calculate the weighted normalized decision matrices (see Table 9). These matrices are the basis for constructing the matrix for each alternative (see Table 10) of the form (43). Now, we apply the extended TOPSIS method for the matrices of alternatives for ranking the alternatives. Table 11 presents reference points—in this case, the PIS and NIS. Finally, the distances of the alternatives from the PIS and NIS and the RCCs are calculated (see Table 12). Based on these coefficients, the ranking of the alternatives is as follows:
$A_3 \prec A_1 \prec A_2$
where $\prec$ means “inferior to” (see Table 12 and Figure 4). This means that the highest rating is given to the vice president $A_2$. The symbol $J$ in Table 12 represents the normalized RCCs.

5. Comparison of the Proposed Method with Other, Similar Approaches

In the following, the approach proposed in Section 3 will be compared with other, similar approaches. In practice, the most common methods for GDM use a certain operator to aggregate the individual decision matrices, given by Equation (21), into a group matrix $X$ of the form of Equation (7), which is the starting point for the ranking of alternatives. To compare the results obtained by the proposed method (PM), we use the following operators:
  • AM, the arithmetic mean, defined by:
    $x_{ij} = \dfrac{1}{K} \sum_{k=1}^{K} x_{ij}^k = \left[ \dfrac{1}{K} \sum_{k=1}^{K} \underline{x}_{ij}^k,\ \dfrac{1}{K} \sum_{k=1}^{K} \overline{x}_{ij}^k \right]$;
  • GM, the geometric mean, defined by:
    $x_{ij} = \left( \prod_{k=1}^{K} x_{ij}^k \right)^{1/K} = \left[ \left( \prod_{k=1}^{K} \underline{x}_{ij}^k \right)^{1/K},\ \left( \prod_{k=1}^{K} \overline{x}_{ij}^k \right)^{1/K} \right]$;
  • WM, the weighted mean, defined by:
    $x_{ij} = \sum_{k=1}^{K} \lambda_k x_{ij}^k = \left[ \sum_{k=1}^{K} \lambda_k \underline{x}_{ij}^k,\ \sum_{k=1}^{K} \lambda_k \overline{x}_{ij}^k \right]$
    where $\lambda_k$ are weights that determine the importance of the DMs, such that $\lambda_k \in [0, 1]$ and $\sum_{k=1}^{K} \lambda_k = 1$. (A code sketch of these three operators is given below.)
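A sketch of the three aggregation operators on interval matrices stored as (m, n, 2) arrays (the function name aggregate is ours):

```python
import numpy as np

def aggregate(decision_matrices, how="AM", lam=None):
    """Aggregate K interval decision matrices (each (m, n, 2)) into one group
    matrix using the arithmetic mean (AM), geometric mean (GM), or weighted mean (WM)."""
    X = np.stack(decision_matrices)                   # shape (K, m, n, 2)
    if how == "AM":
        return X.mean(axis=0)                         # endpoint-wise arithmetic mean
    if how == "GM":
        return np.exp(np.log(X).mean(axis=0))         # endpoint-wise geometric mean
    if how == "WM":
        lam = np.asarray(lam, dtype=float)            # DM weights, summing to 1
        return (lam[:, None, None, None] * X).sum(axis=0)
    raise ValueError(f"unknown operator: {how}")
```

For instance, aggregate([X1, X2, X3], "WM", lam=(0.2661, 0.3573, 0.3766)) would produce the group matrix used as the starting point for the WM column of Table 13.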
In the WM method, the vector of DM weights $\lambda = (0.2661, 0.3573, 0.3766)$ is determined by the method proposed in [38]. Next, based on the matrix $X$, we determine the objective criteria weights using the method proposed by Hosseinzadeh Lotfi and Fallahnejad [35]. In this case, the criteria weights are in the form of INs, so we do not compare them with the criteria weights obtained by the proposed method described in Stage 2 of Section 3 and presented in Table 7. To obtain the ranking of the alternatives, we use the normalization method proposed by Jahanshahloo et al. [27]; the PIS and NIS are determined using Equations (5) and (6), whereas the distances of the alternatives from the PIS and NIS are calculated using Equations (46) and (47), where $K = 1$. Because the analyzed methods are significantly different, to compare the final results we use the indicator $J$ instead of the RCCs. Table 13 and Figure 5 present the results obtained. We can notice that all the analyzed methods indicated alternative $A_2$ as the best one, and the obtained values of the indicator $J$ are similar. On the other hand, the methods that use an aggregation operator give a different ranking than the proposed method, of the form:
$A_1 \prec A_3 \prec A_2$
where alternatives $A_1$ and $A_3$ are swapped.

6. Conclusions

This paper presents a new extension of the TOPSIS method for GDM, using INs. It is an alternative to methods based on the aggregation of individual matrices. It uses the transformation of decision matrices into criteria matrices to determine objective criteria weights, and the transformation into alternative matrices to create rankings of the alternatives. The numerical example shows that the results obtained by the proposed method differ from the results obtained by the methods based on the aggregation of individual matrices using the arithmetic mean, geometric mean, and weighted mean (with weights reflecting the importance assigned to the DMs).
However, it is worth noting that the proposed method has some limitations, as it uses data in the form of INs. This implies the necessity of extending the proposed method to other types of imprecise data, which will be the subject of further research. Furthermore, the proposed method should be extended by taking into account the subjective criteria weights and the subjective and objective weights of the DMs, to ensure that all key elements in the decision-making process are taken into account.

Funding

The work was performed in the framework of project WZ/WI-IIT/1/2020 at the Bialystok University of Technology and financed by the Ministry of Science and Higher Education.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank the editor of the Entropy journal and the two anonymous reviewers for their valuable comments and suggestions.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

TOPSIS: Technique for Order Preference by Similarity to Ideal Solution
GDM: Group Decision Making
IN: Interval Number
DM: Decision Maker
MCDM: Multiple Criteria Decision Making
PIS: Positive Ideal Solution
NIS: Negative Ideal Solution
RCC: Relative Closeness Coefficient
MCGDM: Multiple Criteria Group Decision Making
MIC: Most Important Criterion
LIC: Least Important Criterion
PM: Proposed Method
AM: Arithmetic Mean
GM: Geometric Mean
WM: Weighted Mean

References

  1. Behzadian, M.; Otaghsara, S.K.; Yazdani, M.; Ignatius, J. A state-of the art survey of TOPSIS applications. Expert Syst. Appl. 2012, 39, 13051–13069.
  2. Mardani, A.; Jusoh, A.; Zavadskas, E.K. Fuzzy multiple criteria decision-making techniques and applications—Two decades review from 1994 to 2014. Expert Syst. Appl. 2015, 42, 4126–4148.
  3. Palczewski, K.; Sałabun, W. The fuzzy TOPSIS applications in the last decade. Procedia Comput. Sci. 2019, 159, 2294–2303.
  4. Koganti, V.K.; Menikonda, N.; Anbuudayasankar, S.P.; Krishnaraj, T.; Athhukuri, R.K.; Vastav, M.S. GRAHP TOP model for supplier selection in Supply Chain: A hybrid MCDM approach. Decis. Sci. Lett. 2019, 8, 65–80.
  5. Zhang, X.; Lu, J.; Peng, Y. Hybrid MCDM model for location of logistics hub: A case in China under the belt and road initiative. IEEE Access 2021, 9, 41227–41245.
  6. Stojčić, M.; Zavadskas, E.K.; Pamučar, D.; Stević, Ž.; Mardani, A. Application of MCDM Methods in Sustainability Engineering: A Literature Review 2008–2018. Symmetry 2019, 11, 350.
  7. Ghasempour, R.; Nazari, M.A.; Ebrahimi, M.; Ahmadi, M.H.; Hadiyanto, H. Multi-Criteria Decision Making (MCDM) Approach for Selecting Solar Plants Site and Technology: A Review. Int. J. Renew. Energy Dev. 2019, 8, 15–25.
  8. Hellwig, Z. Procedure of Evaluating High-Level Manpower Data and Typology of Countries by Means of the Taxonomic Method; COM/WS/91, UNESCO Working Paper; UNESCO: Warsaw, Poland, 1967.
  9. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications; Springer: Berlin, Germany, 1981.
  10. Rudnik, K.; Kacprzak, D. Fuzzy TOPSIS method with ordered fuzzy numbers for flow control in a manufacturing system. Appl. Soft Comput. 2017, 52, 1020–1041.
  11. Onu, P.U.; Quan, X.; Xu, L.; Orji, J.; Onu, E. Evaluation of sustainable acid rain control options utilizing a fuzzy TOPSIS multi-criteria decision analysis model frame work. J. Clean. Prod. 2017, 141, 612–625.
  12. Rahim, R.; Supiyandi, S.; Siahaan, A.P.U.; Listyorini, T.; Utomo, A.P.; Triyanto, W.A.; Khairunnisa, K. TOPSIS Method Application for Decision Support System in Internal Control for Selecting Best Employees. J. Phys. Conf. Ser. 2018, 1028, 012052.
  13. Shen, F.; Ma, X.; Li, Z.; Xu, Z.; Cai, D. An extended intuitionistic fuzzy TOPSIS method based on a new distance measure with an application to credit risk evaluation. Inf. Sci. 2018, 428, 105–119.
  14. Kong, Y.S.; Dankyi, A.B.; Ankomah-Asare, E.T.; Addo, A.A. An Application of TOPSIS Approach in Determination of Spread Influencers in a Competitive Industrial Space: Evidence from the Banking Network of Ghana. J. Hum. Resour. Sustain. Stud. 2019, 7, 312–327.
  15. Fauzi, N.A.M.; Ismail, M.; Jaaman, S.H.; Kamaruddin, S.N.D.M. Applicability of TOPSIS Model and Markowitz Model. J. Phys. Conf. Ser. 2019, 1212, 012032.
  16. Rehman, S.; Khan, S.A.; Alhems, L.M. Application of TOPSIS Approach to Multi-Criteria Selection of Wind Turbines for On-Shore Sites. Appl. Sci. 2020, 10, 7595.
  17. Narayan, N.; Singh, K.K.; Srivastava, U. Developmental Performance Ranking of SAARC Nations: An Application of TOPSIS Method of Multi-Criteria Decision Making. Inter. Multidiscip. J. Soc. Sci. 2020, 9, 26–50.
  18. Balcerzak, A.P. Quality of institutions in the European Union countries. Application of TOPSIS based on entropy measure for objective weighting. Acta Polytech. Hung. 2020, 17, 101–122.
  19. Halicka, K.; Kacprzak, D. Linear ordering of selected gerontechnologies using selected MCGDM methods. Technol. Econ. Dev. Econ. 2021, 27, 921–947.
  20. Chen, C.T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst. 2000, 114, 1–9.
  21. Roszkowska, E.; Kacprzak, D. The fuzzy SAW and fuzzy TOPSIS procedures based on ordered fuzzy numbers. Inf. Sci. 2016, 369, 564–584.
  22. Senvar, O.; Otay, I.; Boltürk, E. Hospital site selection via hesitant fuzzy TOPSIS. IFAC-PapersOnLine 2016, 49, 1140–1145.
  23. Boran, F.E.; Genc, S.; Kurt, M.; Akay, D. A multi-criteria intuitionistic fuzzy group decision making for supplier selection with TOPSIS method. Expert Syst. Appl. 2009, 36, 11363–11368.
  24. Garg, H. Hesitant Pythagorean fuzzy sets and their aggregation operators in multiple attribute decision making. Int. J. Uncertain. Quantif. 2018, 8, 267–289.
  25. Ashtiani, B.; Haghighirad, F.; Makui, A.; Montazer, G. Extension of fuzzy TOPSIS method based on interval-valued fuzzy sets. Appl. Soft Comput. 2009, 9, 457–461.
  26. Chi, P.; Liu, P. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set. Neutrosophic Sets Sys. 2013, 1, 63–70.
  27. Jahanshahloo, G.R.; Hosseinzadeh Lofti, F.; Izadikhah, M. An Algorithmic Method to Extend TOPSIS for Decision Making Problems with Interval Data. Appl. Math. Comput. 2006, 175, 1375–1384.
  28. Dymova, L.; Sevastjanov, P.; Tikhonenko, A. A direct interval extension of TOPSIS method. Expert Syst. Appl. 2013, 40, 4841–4847.
  29. Jahanshahloo, G.R.; Hosseinzadeh Lotfi, F.; Davoodi, A.R. Extension of TOPSIS for decision-making problems with interval data: Interval efficiency. Math. Comput. Model. 2009, 49, 1137–1142.
  30. Jahanshahloo, G.R.; Khodabakhshi, M.; Hosseinzadeh Lotfi, F.; Moazami Goudarzi, M.R. Across-efficiency model based on super-efficiency for ranking units through the TOPSIS approach and its extension to the interval case. Math. Comput. Model. 2011, 53, 1946–1955.
  31. Ye, F.; Li, Y.N. Group multi-attribute decision model to partner selection in the formation of virtual enterprise under incomplete information. Expert Syst. Appl. 2009, 36, 9350–9357.
  32. Tsaur, R.C. Decision risk analysis for an interval TOPSIS method. Appl. Math. Comput. 2011, 218, 4295–4304.
  33. Yue, Z. An extended TOPSIS for determining weights of decision makers with interval numbers. Knowl. Based Syst. 2011, 24, 146–153.
  34. Hu, B.Q.; Wang, S. A Novel Approach in Uncertain Programming Part I: New Arithmetic and Order Relation for Interval Numbers. J. Ind. Manag. Optim. 2006, 2, 351–371.
  35. Hosseinzadeh Lotfi, F.; Fallahnejad, R. Imprecise Shannon’s Entropy and Multi Attribute Decision Making. Entropy 2010, 12, 53–62.
  36. Kacprzak, D. An extended TOPSIS method based on ordered fuzzy numbers for group decision making. Artif. Intell. Rev. 2020, 53, 2099–2129.
  37. Moore, R.E.; Kearfott, R.B.; Cloud, M.J. Introduction to Interval Analysis; SIAM: Philadelphia, PA, USA, 2009.
  38. Yue, Z. Developing a straightforward approach for group decision making based on determining weights of decision makers. Appl. Math. Model. 2012, 36, 4106–4117.
Figure 1. The conceptual framework of the proposed method.
Figure 2. Hierarchical structure of the proposed method.
Figure 3. Objective criteria weights.
Figure 4. The ranking of the alternatives.
Figure 5. Comparison of results.
Table 1. Individual decision matrices.
            C1          C2          C3
DM1   A1    [60, 90]    [72, 86]    [85, 92]
      A2    [77, 81]    [69, 93]    [83, 88]
      A3    [80, 96]    [59, 87]    [68, 85]
DM2   A1    [77, 83]    [68, 86]    [82, 90]
      A2    [93, 98]    [76, 86]    [65, 87]
      A3    [79, 85]    [72, 92]    [81, 97]
DM3   A1    [85, 86]    [76, 86]    [80, 97]
      A2    [79, 87]    [75, 89]    [81, 93]
      A3    [62, 82]    [84, 89]    [78, 82]
Table 2. Normalized individual decision matrices for the calculation of criteria weights.
            C1                  C2                  C3
DM1   A1    [0.2247, 0.3371]    [0.2707, 0.3233]    [0.3208, 0.3472]
      A2    [0.2884, 0.3034]    [0.2594, 0.3496]    [0.3132, 0.3321]
      A3    [0.2996, 0.3596]    [0.2218, 0.3271]    [0.2566, 0.3208]
DM2   A1    [0.2895, 0.3120]    [0.2576, 0.3258]    [0.2993, 0.3285]
      A2    [0.3496, 0.3684]    [0.2879, 0.3258]    [0.2372, 0.3175]
      A3    [0.2970, 0.3195]    [0.2727, 0.3485]    [0.2956, 0.3540]
DM3   A1    [0.3333, 0.3373]    [0.2879, 0.3258]    [0.2941, 0.3566]
      A2    [0.3098, 0.3412]    [0.2841, 0.3371]    [0.2978, 0.3419]
      A3    [0.2431, 0.3216]    [0.3182, 0.3371]    [0.2868, 0.3015]
Table 3. Matrices for each criterion.
            DM1                 DM2                 DM3
C1    A1    [0.2247, 0.3371]    [0.2895, 0.3120]    [0.3333, 0.3373]
      A2    [0.2884, 0.3034]    [0.3496, 0.3684]    [0.3098, 0.3412]
      A3    [0.2996, 0.3596]    [0.2970, 0.3195]    [0.2431, 0.3216]
C2    A1    [0.2707, 0.3233]    [0.2576, 0.3258]    [0.2879, 0.3258]
      A2    [0.2594, 0.3496]    [0.2879, 0.3258]    [0.2841, 0.3371]
      A3    [0.2218, 0.3271]    [0.2727, 0.3485]    [0.3182, 0.3371]
C3    A1    [0.3208, 0.3472]    [0.2993, 0.3285]    [0.2941, 0.3566]
      A2    [0.3132, 0.3321]    [0.2372, 0.3175]    [0.2978, 0.3419]
      A3    [0.2566, 0.3208]    [0.2956, 0.3540]    [0.2868, 0.3015]
Table 4. Vectors of entropy.
e_1    ([0.9605, 0.9978], [0.9893, 0.9975], [0.9767, 0.9997])
e_2    ([0.9446, 0.9995], [0.9669, 0.9995], [0.9834, 0.9999])
e_3    ([0.9807, 0.9995], [0.9672, 0.9990], [0.9820, 0.9977])
Table 5. Vectors of diversification.
d_1    ([0.0022, 0.0395], [0.0025, 0.0107], [0.0003, 0.0233])
d_2    ([0.0005, 0.0554], [0.0005, 0.0331], [0.0001, 0.0166])
d_3    ([0.0005, 0.0193], [0.0010, 0.0328], [0.0023, 0.0180])
Table 6. MIC and LIC.
C+    ([0.0005, 0.0554], [0.0010, 0.0328], [0.0003, 0.0233])
C-    ([0, 0], [0, 0], [0, 0])
Table 7. Objective criteria weights.
        d_j^{C+}    d_j^{C-}    RCC_j^C    w_j
C1      0.0272      0.0472      0.6340     0.3049
C2      0.0067      0.0666      0.9091     0.4372
C3      0.0365      0.0422      0.5362     0.2579
Table 8. Normalized individual decision matrices for the TOPSIS method.
            C1                  C2                  C3
DM1   A1    [0.1572, 0.2902]    [0.1876, 0.2980]    [0.2260, 0.2747]
      A2    [0.2018, 0.2611]    [0.1798, 0.3223]    [0.2207, 0.2628]
      A3    [0.2096, 0.3095]    [0.1537, 0.3015]    [0.1808, 0.2538]
DM2   A1    [0.2045, 0.2354]    [0.1803, 0.2787]    [0.2098, 0.2768]
      A2    [0.2470, 0.2780]    [0.2015, 0.2787]    [0.1663, 0.2676]
      A3    [0.2098, 0.2411]    [0.1909, 0.2982]    [0.2073, 0.2983]
DM3   A1    [0.2348, 0.2681]    [0.2029, 0.2579]    [0.2071, 0.2858]
      A2    [0.2183, 0.2712]    [0.2002, 0.2669]    [0.2097, 0.2740]
      A3    [0.1713, 0.2556]    [0.2242, 0.2669]    [0.2019, 0.2416]
Table 9. Weighted normalized individual decision matrices for the TOPSIS method.
            C1                  C2                  C3
DM1   A1    [0.0479, 0.0885]    [0.0820, 0.1303]    [0.0583, 0.0708]
      A2    [0.0615, 0.0796]    [0.0786, 0.1409]    [0.0569, 0.0678]
      A3    [0.0639, 0.0944]    [0.0672, 0.1318]    [0.0466, 0.0655]
DM2   A1    [0.0623, 0.0718]    [0.0788, 0.1219]    [0.0541, 0.0714]
      A2    [0.0753, 0.0848]    [0.0881, 0.1219]    [0.0429, 0.0690]
      A3    [0.0640, 0.0735]    [0.0835, 0.1304]    [0.0535, 0.0769]
DM3   A1    [0.0716, 0.0817]    [0.0887, 0.1128]    [0.0534, 0.0737]
      A2    [0.0666, 0.0827]    [0.0875, 0.1167]    [0.0541, 0.0707]
      A3    [0.0522, 0.0779]    [0.0980, 0.1167]    [0.0521, 0.0623]
Table 10. Matrices of alternatives.
             C1                  C2                  C3
A1    DM1    [0.0479, 0.0885]    [0.0820, 0.1303]    [0.0583, 0.0708]
      DM2    [0.0623, 0.0718]    [0.0788, 0.1219]    [0.0541, 0.0714]
      DM3    [0.0716, 0.0817]    [0.0887, 0.1128]    [0.0534, 0.0737]
A2    DM1    [0.0615, 0.0796]    [0.0786, 0.1409]    [0.0569, 0.0678]
      DM2    [0.0753, 0.0848]    [0.0881, 0.1219]    [0.0429, 0.0690]
      DM3    [0.0666, 0.0827]    [0.0875, 0.1167]    [0.0541, 0.0707]
A3    DM1    [0.0639, 0.0944]    [0.0672, 0.1318]    [0.0466, 0.0655]
      DM2    [0.0640, 0.0735]    [0.0835, 0.1304]    [0.0535, 0.0769]
      DM3    [0.0522, 0.0779]    [0.0980, 0.1167]    [0.0521, 0.0623]
Table 11. PIS and NIS.
             C1                  C2                  C3
A+    DM1    [0.0639, 0.0944]    [0.0786, 0.1409]    [0.0583, 0.0708]
      DM2    [0.0753, 0.0848]    [0.0835, 0.1304]    [0.0535, 0.0769]
      DM3    [0.0716, 0.0817]    [0.0980, 0.1167]    [0.0534, 0.0737]
A-    DM1    [0.0479, 0.0885]    [0.0672, 0.1318]    [0.0466, 0.0655]
      DM2    [0.0623, 0.0718]    [0.0788, 0.1219]    [0.0429, 0.0690]
      DM3    [0.0522, 0.0779]    [0.0887, 0.1128]    [0.0521, 0.0623]
Table 12. The ranking of the alternatives—R.
        d_i^{A+}    d_i^{A-}    RCC_i^A    R    J
A1      0.0313      0.0322      0.5076     2    0.3322
A2      0.0255      0.0364      0.5884     1    0.3851
A3      0.0340      0.0258      0.4318     3    0.2826
Table 13. Comparison of results.
        PM          AM          GM          WM
A1      0.332244    0.291138    0.297903    0.297472
A2      0.385121    0.385427    0.384682    0.385048
A3      0.282635    0.323435    0.317416    0.317480

