Article

An Intelligent Expert Combination Weighting Scheme for Group Decision Making in Railway Reconstruction

1 School of Software, Jiangxi University of Science and Technology, Nanchang 330013, China
2 School of Vocational Education and Technology, Jiangxi Agricultural University, Nanchang 330045, China
3 Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK 74133, USA
* Author to whom correspondence should be addressed.
Submission received: 10 December 2021 / Revised: 7 February 2022 / Accepted: 8 February 2022 / Published: 10 February 2022
(This article belongs to the Special Issue Advances in Fuzzy Decision Theory and Applications)

Abstract: Intuitionistic fuzzy entropy has been widely used to measure the uncertainty of intuitionistic fuzzy sets. In view of some counterintuitive phenomena of the existing intuitionistic fuzzy entropies, this article proposes an improved intuitionistic fuzzy entropy based on the cotangent function, which not only considers the deviation between membership and non-membership but also expresses the hesitancy degree of decision makers. Analysis and comparison of the data show that the improved entropy is reasonable. Then, a new IF similarity measure whose value is an IF number is proposed. The intuitionistic fuzzy entropy and similarity measure are applied to the study of expert weights in group decision making. Based on research into existing expert clustering and weighting methods, we summarize an intelligent expert combination weighting scheme. Through the new intuitionistic fuzzy similarity measure, the decision matrix is transformed into a similarity matrix, and through the analysis of the threshold change rate and the design of risk parameters, reasonable expert clustering results are obtained. On this basis, each category is weighted; the experts within a category are weighted by entropy weight theory, and the total weight of each expert is determined by synthesizing the two weights. This scheme provides a new method for determining the weights of experts objectively and reasonably. Finally, the method is applied to the evaluation of railway reconstruction schemes, and an example shows its feasibility.

1. Introduction

With the characteristics of high speed, large volume, low energy consumption, little pollution, safety and reliability, railway transportation has become the main transportation mode in the modern transportation system in China (see Figure 1 and Figure 2) [1,2,3] and plays an important role in the development of the national economy.
As an important national infrastructure and popular means of transportation, railway is the backbone of China’s comprehensive transportation system. With the continuous acceleration of China’s urbanization process and the urban expansion, railway construction has entered a period of rapid development, and the railway plays an increasingly important role in people’s choice of travel mode (see Figure 3) [1,4].
With regard to railway reconstruction, the huge investment and complex influencing factors [5,6,7] make it necessary to compare candidate construction schemes and select the one that is most reasonable technically and economically. The use of scientific evaluation methods is therefore very important. At present, expert scoring with the help of fuzzy theory is a common evaluation approach, but expert scoring is inevitably somewhat subjective. This paper proposes an intelligent expert combination weighting method to optimize the scheme selection.
The rest of this paper is structured as follows. Section 2 introduces the related work of this study. Section 3 introduces the preparatory knowledge. Section 4 puts forward the weighted scheme of intelligent expert combination. Section 5 introduces the risk factors of the railway reconstruction project and uses the method proposed in the fourth section to optimize the railway reconstruction scheme. Finally, Section 6 summarizes the whole paper.

2. Related Work

Fuzziness, as developed in [8], is a kind of uncertainty that often appears in human decision-making problems. Fuzzy set theory deals with uncertainties happening in daily life successfully. The membership degrees can be effectively decided by a fuzzy set. However, in real-life situations, the non-membership degrees should be considered in many cases as well. Thus, Atanassov [9] introduced the concept of an intuitionistic fuzzy (IF) set that considers both membership and non-membership degrees. IF set has been implemented in numerous areas due to its ability to handle uncertain information more effectively [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Tao et al. [10] provided an insight with an alternative queuing method and intuitionistic fuzzy set into dynamic group MCDM, which ranked the alternatives based on preference relation. Intuitionistic fuzzy sets based on the weighted average were adopted for aggregating individual suggestions of decision makers by Singh et al. [11]. Chaira [12] suggested a novel clustering approach for segmenting lesions/tumors in mammogram images using Atanassov’s intuitionistic fuzzy set theory. Jiang et al. [13] studied a novel three-way group investment decision model under an intuitionistic fuzzy multi-attribute group decision-making environment. Wang et al. [14] put forward a novel three-way multi-attribute decision-making model in light of a probabilistic dominance relation with intuitionistic fuzzy sets. Wan and Dong [15] developed a new intuitionistic fuzzy best-worst method for multi-criteria decision making. Kumar et al. [16] formulated an intuitionistic fuzzy set theory-based, bias-corrected intuitionistic fuzzy c-means with spatial neighborhood information method for MRI image segmentation. In addition, intuitionistic fuzzy sets are extended to various forms and applied to practical problems. 
Senapati and Yager [17,18,19] proposed Fermatean fuzzy sets and introduced four new weighted aggregated operators, as well as defined basic operations over the Fermatean fuzzy sets. Ashraf et al. [20] introduced a new version of the picture fuzzy set, so-called spherical fuzzy sets (SFS), and discussed some operational rules. Khan et al. [21] introduced a method to solve decision-making problems using an adjustable weighted soft discernibility matrix in a generalized picture fuzzy soft set. Riaz and Hashmi [22] introduced the novel concept of the linear Diophantine fuzzy set (LDFS) with the addition of reference parameters.
Shannon used probability theory as a mathematical tool to measure information. He defined information as something that eliminates uncertainty, thus connecting information with uncertainty. Taking entropy as a measure of the uncertainty of information state, Shannon put forward the concept of information entropy. De Luca and Termini [25] studied the measurement of fuzziness of fuzzy sets, extended probabilistic information entropy to non-probabilistic information entropy and proposed axioms that fuzzy information entropy must satisfy. Szmidt and Kacprzyk [26] extended the axioms of De Luca and Termini and extended fuzzy information entropy to IF information entropy. Some scholars have conducted in-depth research in this aspect and constructed IF entropy formulae from different angles and applied it to the fields of multi-attribute decision making and pattern recognition [27,28,29,30,31,32,33,34]. Whether these entropy formulas can reasonably measure the uncertainty of IF sets is directly related to the rationality of their application. In this paper, some entropy formulas in existing literature are classified, and their advantages and disadvantages are analyzed with data. On this basis, a new IF entropy is constructed that not only considers the deviation between membership and non-membership but also includes the hesitancy in the entropy measure. The rationality of entropy is fully explained by data analysis and comparison.
In recent years, the decision-making problem with IF information has attracted many scholars’ attention [35]. Due to the complexity and uncertainty of pragmatic problems, expert group decision-making method is commonly used in decision-making problems. Expert group decision making can fully gather the experience and knowledge of various experts, making the decision-making results more scientific and reasonable. However, in the actual evaluation, experts in group decision making are influenced by numerous factors, such as knowledge structure, understanding of scheme, interest correlation and so on. They often hold different views and attitudes. How to determine the weight of experts and effectively aggregate the decision-making information of experts with different preferences has become the focus of scholars [36,37,38,39,40,41].
In traditional group decision making, the expert weighting method usually uses the consistency ratio of the judgment matrix to construct the weight coefficient, which lacks the attention to the overall consistency of group decision-making objectives. In order to surmount the shortcomings of the traditional method, a cluster analysis method is often used to realize the expert weighting in group decision making. The basic principle of expert cluster analysis is to measure the similarity degree of expert evaluation opinions according to certain standards and cluster experts based on the similarity degree. He and Lei [36] extended fuzzy C-means clustering to IF C-means clustering and proposed a clustering algorithm based on IF sets. Zhang et al. [37] and He et al. [38] proposed the concept of IF similarity, whose value is an IF number; they also constructed the IF similarity matrix, the IF equivalent matrix and its λ - cut matrix and gave a clustering method based on the IF similarity matrix. Wang et al. [39] proposed a new method of an IF similar matrix, avoided the tedious process of calculating an IF equivalent matrix and used the membership degrees of elements in an IF similar matrix to cluster. Zhou et al. [40] conducted cluster analysis on experts according to the principle of entropy, used information similarity coefficients to measure the similarity degrees of expert opinions and then classified the experts.
The above clustering methods have the following problems when clustering IF information.
(1)
In reference [36], the clustering results of IF sets are expressed in real numbers, which does not accord with the characteristics of IF sets.
(2)
After obtaining the IF similarity matrix, the methods proposed in references [37,38] also need to test whether it is an IF equivalent matrix. If it is not, many iterative operations are required until an IF equivalent matrix is obtained, which entails a large amount of calculation.
(3)
Reference [39] reduced the amount of calculation, but after obtaining the IF similarity matrix, only the membership degree is used for clustering, ignoring the non-membership and hesitancy degrees, which inevitably causes a loss of information.
(4)
In reference [40], there is no analysis of how the clustering threshold should be chosen. The threshold value directly affects the clustering results, so the rationality of its value is particularly important.
Considering the above situation, this paper proposes a method of clustering and weighting experts based on IF entropy. According to the evaluation information given by experts as IF numbers, a new IF similarity measure is constructed whose value is an IF number, and the decision matrix is transformed into a similarity matrix. By analyzing the change rate of the threshold and designing the risk parameters, the decision maker can choose an appropriate clustering threshold and risk parameter so as to obtain reasonable expert clustering results; based on these results, experts are weighted between categories. Categories containing more experts receive greater weight, which reflects the important principle of the minority obeying the majority in group decision making. Using the new IF entropy proposed in this paper, experts in the same category whose evaluations are logically clear and accurate receive a larger weight. The total weight of each expert is determined by synthesizing the between-category and within-category weights. Finally, the IF weighted aggregation operator is used to aggregate the weighted experts' IF information, and the alternatives are ranked and the best one selected.

3. Preliminaries

In the following part, we introduce some basic concepts, which will be used in the next sections.
Definition 1
([9]). Let $X$ be a given universal set. An IF set is an object of the form $A=\{\langle x_i,\mu_A(x_i),\nu_A(x_i)\rangle \mid x_i\in X\}$, where the functions $\mu_A: X\to[0,1]$ and $\nu_A: X\to[0,1]$ define the degree of membership and the degree of non-membership of the element $x_i\in X$, respectively, and for every $x_i\in X$ it holds that $0\le\mu_A(x_i)+\nu_A(x_i)\le 1$. Furthermore, for any IF set $A$ and $x_i\in X$, $\pi_A(x_i)=1-\mu_A(x_i)-\nu_A(x_i)$ is called the hesitancy degree of $x_i$. All IF sets on $X$ are denoted as $IFSs(X)$.
For simplicity, Xu and Chen [41] denoted $\alpha=(\mu_\alpha,\nu_\alpha)$ as an IF number (IFN), where $\mu_\alpha$ and $\nu_\alpha$ are the degree of membership and the degree of non-membership of $\alpha$, respectively.
The basic operational laws of IF set defined by Atanassov [9] are introduced as follows:
Definition 2
([9]). Let $A=\{\langle x_i,\mu_A(x_i),\nu_A(x_i)\rangle\mid x_i\in X\}$ and $B=\{\langle x_i,\mu_B(x_i),\nu_B(x_i)\rangle\mid x_i\in X\}$ be two IF sets; then,
(1) $A\subseteq B$ if and only if $\mu_A(x_i)\le\mu_B(x_i)$ and $\nu_A(x_i)\ge\nu_B(x_i)$ for all $x_i\in X$;
(2) $A=B$ if and only if $A\subseteq B$ and $B\subseteq A$;
(3) the complementary set of $A$, denoted by $A^C$, is $A^C=\{\langle x_i,\nu_A(x_i),\mu_A(x_i)\rangle\mid x_i\in X\}$;
(4) $A^n=\{\langle x_i,[\mu_A(x_i)]^n,[\nu_A(x_i)]^n\rangle\mid x_i\in X\}$;
(5) $A\preceq B$, read "$A$ is less fuzzy than $B$", if for all $x_i\in X$:
  • if $\mu_B(x_i)\le\nu_B(x_i)$, then $\mu_A(x_i)\le\mu_B(x_i)$ and $\nu_A(x_i)\ge\nu_B(x_i)$;
  • if $\mu_B(x_i)\ge\nu_B(x_i)$, then $\mu_A(x_i)\ge\mu_B(x_i)$ and $\nu_A(x_i)\le\nu_B(x_i)$.
Definition 3
([9]). Let $A=\{\langle x_i,\mu_A(x_i),\nu_A(x_i)\rangle\mid x_i\in X\}$ and $B=\{\langle x_i,\mu_B(x_i),\nu_B(x_i)\rangle\mid x_i\in X\}$ be two IF sets and $\omega=(\omega_1,\omega_2,\ldots,\omega_n)^T$ be the weight vector of the elements $x_i\ (i=1,2,\ldots,n)$, where $\omega_i\ge 0$ and $\sum_{i=1}^n\omega_i=1$. The weighted Hamming distance between $A$ and $B$ is defined as follows:
$$d(A,B)=\frac{1}{2}\sum_{i=1}^n \omega_i\big(|\mu_A(x_i)-\mu_B(x_i)|+|\nu_A(x_i)-\nu_B(x_i)|+|\pi_A(x_i)-\pi_B(x_i)|\big).$$
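As a concrete illustration, the weighted Hamming distance can be sketched in Python; the list-of-pairs representation of an IF set and the function name are our own conventions, not the paper's:

```python
def hamming_ifs(A, B, w):
    """Weighted Hamming distance between IF sets A and B (Definition 3).

    A, B: lists of (mu, nu) pairs, one per element x_i.
    w: weights w_i with sum(w) == 1.
    """
    total = 0.0
    for (ma, na), (mb, nb), wi in zip(A, B, w):
        pa = 1 - ma - na  # hesitancy degree of x_i in A
        pb = 1 - mb - nb  # hesitancy degree of x_i in B
        total += wi * (abs(ma - mb) + abs(na - nb) + abs(pa - pb))
    return 0.5 * total

# Two completely opposite crisp judgments are at maximal distance:
print(hamming_ifs([(1.0, 0.0)], [(0.0, 1.0)], [1.0]))  # 1.0
```

The factor $\frac12$ keeps the distance in $[0,1]$ even though three absolute differences are summed per element.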
Definition 4
([26]). A map $E: IFSs(X)\to[0,1]$ is called an IF entropy if it satisfies the following properties:
(1) $E(A)=0$ if and only if $A$ is a crisp set;
(2) $E(A)=1$ if and only if $\mu_A(x_i)=\nu_A(x_i)$ for all $x_i\in X$;
(3) $E(A)=E(A^C)$;
(4) if $A\preceq B$, then $E(A)\le E(B)$.
Definition 5
([37]). Let $z_{ij}\ (i=1,2,\ldots,m;\ j=1,2,\ldots,n)$ be a collection of IFNs; then the matrix $Z=(z_{ij})_{m\times n}$ is called an IF matrix.
Definition 6
([37]). Let $\psi: IFSs(X)\times IFSs(X)\to IFNs$ and $C_1, C_2, C_3$ be three IF sets. $\psi(C_1,C_2)$ is called an IF similarity measure of $C_1$ and $C_2$ if it satisfies the following properties:
(1) $\psi(C_1,C_2)$ is an IFN;
(2) $\psi(C_1,C_2)=\langle 1,0\rangle$ if and only if $C_1=C_2$;
(3) $\psi(C_1,C_2)=\psi(C_2,C_1)$;
(4) if $C_1\subseteq C_2\subseteq C_3$, then $\psi(C_1,C_3)\le\psi(C_1,C_2)$ and $\psi(C_1,C_3)\le\psi(C_2,C_3)$.
Definition 7
([42]). The membership degree $\mu_i(x_j)$ is written $\mu_{ij}$, and the non-membership degree $\nu_i(x_j)$ is written $\nu_{ij}$. If an IF matrix $Z=(a_{ij})_{m\times n}$ with $a_{ij}=\langle\mu_{ij},\nu_{ij}\rangle$ satisfies the following conditions:
(1) reflexivity: $a_{ii}=\langle 1,0\rangle,\ i=1,2,\ldots,m$;
(2) symmetry: $a_{ij}=a_{ji},\ i=1,2,\ldots,m,\ j=1,2,\ldots,n$;
then $Z$ is called an IF similarity matrix.
In order to compare the magnitudes of two IF sets, Xu and Yager [43] introduced the score and accuracy functions for IF sets and gave a simple comparison law as follows:
Definition 8
([43]). Let $A=\langle\mu,\nu\rangle$ be an IFN; the score function $M(A)$ and accuracy function $\Delta(A)$ of $A$ are defined, respectively, as follows:
$$M(A)=\mu-\nu,\qquad \Delta(A)=\mu+\nu.$$
Obviously, $M(A)\in[-1,1]$ and $\Delta(A)\in[0,1]$.
Based on the score and accuracy functions, a comparison law for IF set is introduced as below:
Let $A_j$ and $A_k$ be two IF sets, $M(A_j)$ and $M(A_k)$ be their scores, and $\Delta(A_j)$ and $\Delta(A_k)$ be their accuracy degrees, respectively; then,
(1) if $M(A_j)>M(A_k)$, then $A_j>A_k$;
(2) if $M(A_j)=M(A_k)$, then
$$\begin{cases}\Delta(A_j)=\Delta(A_k)\Rightarrow A_j=A_k,\\ \Delta(A_j)<\Delta(A_k)\Rightarrow A_j<A_k,\\ \Delta(A_j)>\Delta(A_k)\Rightarrow A_j>A_k.\end{cases}$$
The weighted aggregation operator for an IF set developed by Xu and Yager [43] is presented as follows:
Definition 9
([43]). Let $A_j=\langle\mu_j,\nu_j\rangle\ (j=1,2,\ldots,n)$ be a collection of IFNs, and let $\omega=(\omega_1,\omega_2,\ldots,\omega_n)^T$ be the weight vector of the $A_j\ (j=1,2,\ldots,n)$, where $\omega_j$ indicates the importance degree of $A_j$, satisfying $\omega_j\ge 0\ (j=1,2,\ldots,n)$ and $\sum_{j=1}^n\omega_j=1$, and let $f_\omega^A: F^n\to F$. If
$$f_\omega^A(A_1,A_2,\ldots,A_n)=\sum_{j=1}^n\omega_j A_j=\Big\langle\, 1-\prod_{j=1}^n(1-\mu_j)^{\omega_j},\ \prod_{j=1}^n\nu_j^{\omega_j}\Big\rangle,$$
then the function f ω A is called the IF weighted aggregation operator.
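The aggregation operator and the score-based comparison of Definition 8 can be sketched together in Python; the function names and the pair representation of an IFN are illustrative assumptions of ours:

```python
def ifwa(ifns, weights):
    """IF weighted aggregation operator (Definition 9):
    <1 - prod(1 - mu_j)^w_j, prod(nu_j)^w_j>."""
    mu_part, nu_part = 1.0, 1.0
    for (mu, nu), w in zip(ifns, weights):
        mu_part *= (1.0 - mu) ** w
        nu_part *= nu ** w
    return (1.0 - mu_part, nu_part)

def score(a):
    """Score function M(A) = mu - nu (Definition 8)."""
    return a[0] - a[1]

def accuracy(a):
    """Accuracy function Delta(A) = mu + nu (Definition 8)."""
    return a[0] + a[1]

# Aggregating identical IFNs returns that same IFN (idempotency):
agg = ifwa([(0.5, 0.5), (0.5, 0.5)], [0.5, 0.5])
```

Since the weights sum to one, $1-\prod_j(1-\mu)^{\omega_j}=\mu$ when all inputs are equal, which is the idempotency checked above.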

4. Our Proposed Intelligent Expert Combination Weighting Scheme

4.1. A New IF Entropy

The uncertainty of IF sets is embodied in fuzziness and intuitionism. Fuzziness is determined by the difference between membership and non-membership; intuitionism is determined by the hesitancy. Therefore, when entropy is used as a tool to describe the uncertainty of IF sets, the difference between membership and non-membership and the hesitancy should be considered at the same time; only in this way can the degree of uncertainty be reflected fully. Next, we classify the existing entropy formulae according to whether they describe the fuzziness and the intuitionism of IF sets. In addition, the motivation behind fuzzy and non-standard fuzzy models is their intimacy with human thinking; therefore, if an entropy measure does not meet some cognitive aspect, we call it a counterintuitive case.
In this section, suppose that A = { < x i , μ A ( x i ) , ν A ( x i ) > | x i X , i = 1 , 2 , , n } is an IF set.
(1) The entropy measure only describes the fuzziness of IF sets. For example, the IF entropy measure of Ye [27] is
$$E_Y(A)=\frac{1}{n}\sum_{i=1}^n\Big[\Big(\sqrt{2}\cos\frac{\mu_A(x_i)-\nu_A(x_i)}{4}\pi-1\Big)\times\frac{1}{\sqrt{2}-1}\Big].$$
The IF entropy measure of Zeng and Li [28] is
$$E_Z(A)=1-\frac{1}{n}\sum_{i=1}^n|\mu_A(x_i)-\nu_A(x_i)|.$$
The IF entropy measure of Zhang and Jiang [29] is
$$E_{ZJ}(A)=-\frac{1}{n}\sum_{i=1}^n\Big[\frac{\mu_A(x_i)+1-\nu_A(x_i)}{2}\log_2\frac{\mu_A(x_i)+1-\nu_A(x_i)}{2}+\frac{\nu_A(x_i)+1-\mu_A(x_i)}{2}\log_2\frac{\nu_A(x_i)+1-\mu_A(x_i)}{2}\Big].$$
The exponential IF entropy measure of Verma and Sharma [30] is
$$E_{VS}(A)=\frac{1}{n(\sqrt{e}-1)}\sum_{i=1}^n\Big[\frac{\mu_A(x_i)+1-\nu_A(x_i)}{2}\,e^{1-\frac{\mu_A(x_i)+1-\nu_A(x_i)}{2}}+\frac{\nu_A(x_i)+1-\mu_A(x_i)}{2}\,e^{1-\frac{\nu_A(x_i)+1-\mu_A(x_i)}{2}}-1\Big].$$
Example 1.
Let $A_1=\{\langle x,0.3,0.4\rangle\mid x\in X\}$ and $A_2=\{\langle x,0.2,0.3\rangle\mid x\in X\}$ be two IF sets. Calculate the entropy of $A_1$ and $A_2$ with the entropy formulae $E_Y$, $E_Z$, $E_{ZJ}$ and $E_{VS}$.
According to the above formulae, the results are as follows:
$$E_Y(A_1)=E_Y(A_2)=0.9895,\qquad E_Z(A_1)=E_Z(A_2)=0.9,$$
$$E_{ZJ}(A_1)=E_{ZJ}(A_2)=0.9928,\qquad E_{VS}(A_1)=E_{VS}(A_2)=0.9905.$$
For the element $x$ in $A_1$ and $A_2$, the absolute deviations between membership and non-membership are equal, but the hesitancy degree of $A_2$ is larger, so the uncertainty of $A_1$ should be smaller than that of $A_2$. However, the formulae $E_Y$, $E_Z$, $E_{ZJ}$ and $E_{VS}$ assign the two IF sets equal entropy. In fact, for any IF sets $\tilde A=\{\langle x_i,\mu_{\tilde A}(x_i),\nu_{\tilde A}(x_i)\rangle\mid x_i\in X\}$ and $\tilde B=\{\langle x_i,\mu_{\tilde B}(x_i),\nu_{\tilde B}(x_i)\rangle\mid x_i\in X\}$, if $\mu_{\tilde A}(x_i)-\nu_{\tilde A}(x_i)=\mu_{\tilde B}(x_i)-\nu_{\tilde B}(x_i)$ for all $x_i\in X$, then each of the above entropy formulae $E$ gives $E(\tilde A)=E(\tilde B)$. These are counterintuitive situations.
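The counterintuitive behaviour in Example 1 is easy to reproduce. Below is a small Python sketch of two of the fuzziness-only measures, $E_Z$ and $E_Y$; the function names are ours, and $E_Y$ is written from the reconstructed cosine form above, so its constants should be treated as an assumption:

```python
import math

def e_z(A):
    """Zeng-Li entropy: 1 - (1/n) * sum |mu - nu|."""
    return 1.0 - sum(abs(mu - nu) for mu, nu in A) / len(A)

def e_y(A):
    """Ye's cosine entropy (reconstructed form)."""
    c = 1.0 / (math.sqrt(2.0) - 1.0)
    return sum((math.sqrt(2.0) * math.cos((mu - nu) / 4.0 * math.pi) - 1.0) * c
               for mu, nu in A) / len(A)

A1, A2 = [(0.3, 0.4)], [(0.2, 0.3)]
# Both measures depend only on mu - nu, so A1 and A2 are indistinguishable:
print(round(e_z(A1), 6), round(e_z(A2), 6))  # 0.9 0.9
```

Any measure that is a function of $\mu-\nu$ alone will collapse these two sets, regardless of how the hesitancy differs.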
(2) The entropy measure only describes the intuitionism of IF sets.
For example, we show the IF entropy measure of Burillo and Bustince [31]:
$$E_{B1}(A)=\sum_{i=1}^n[1-(\mu_A(x_i)+\nu_A(x_i))]=\sum_{i=1}^n\pi_A(x_i);$$
$$E_{B2}(A)=\sum_{i=1}^n[1-(\mu_A(x_i)+\nu_A(x_i))^\lambda],\quad\lambda=2,3,\ldots;$$
$$E_{B3}(A)=\sum_{i=1}^n[1-(\mu_A(x_i)+\nu_A(x_i))]\,e^{[1-(\mu_A(x_i)+\nu_A(x_i))]};$$
$$E_{B4}(A)=\sum_{i=1}^n[1-(\mu_A(x_i)+\nu_A(x_i))]\sin\Big(\frac{\pi}{2}(\mu_A(x_i)+\nu_A(x_i))\Big).$$
Example 2.
Let $A_3=\{\langle x,0.09,0.41\rangle\mid x\in X\}$ and $A_4=\{\langle x,0.18,0.32\rangle\mid x\in X\}$ be two IF sets. Calculate the entropy of $A_3$ and $A_4$ with the entropy formula $E_{B1}$.
From formula $E_{B1}$, we get $E_{B1}(A_3)=E_{B1}(A_4)=0.5$. For the IF sets $A_3$ and $A_4$, the hesitancy degrees of the element $x$ are equal, but the absolute deviation between membership and non-membership is greater for $A_3$ than for $A_4$, so the uncertainty of $A_3$ is obviously smaller than that of $A_4$. However, the formulae $E_{B1}$, $E_{B2}$, $E_{B3}$ and $E_{B4}$ assign the two IF sets equal entropy, which is inconsistent with intuition. In fact, for any IF sets $\tilde A=\{\langle x_i,\mu_{\tilde A}(x_i),\nu_{\tilde A}(x_i)\rangle\mid x_i\in X\}$ and $\tilde B=\{\langle x_i,\mu_{\tilde B}(x_i),\nu_{\tilde B}(x_i)\rangle\mid x_i\in X\}$, if $\mu_{\tilde A}(x_i)+\nu_{\tilde A}(x_i)=\mu_{\tilde B}(x_i)+\nu_{\tilde B}(x_i)$ for all $x_i\in X$, then each of the above entropy formulae $E$ gives $E(\tilde A)=E(\tilde B)$.
(3) The entropy measure includes both the fuzziness and intuitionism of IF sets. However, some situations cannot be well distinguished.
For example, we show the IF entropy measure of Wang and Wang [32]:
$$E_W(A)=\frac{1}{n}\sum_{i=1}^n\cot\Big(\frac{\pi}{4}+\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4(1+\pi_A(x_i))}\pi\Big).$$
The IF entropy measure of Wei et al. [33] is the following:
$$E_{WG}(A)=\frac{1}{n}\sum_{i=1}^n\cos\Big(\frac{\mu_A(x_i)-\nu_A(x_i)}{2(1+\pi_A(x_i))}\pi\Big).$$
Example 3.
Let $A_5=\{\langle x,0.2,0.5\rangle\mid x\in X\}$ and $A_6=\{\langle x,0.4,0.04\rangle\mid x\in X\}$ be two IF sets. Obviously, the fuzziness of $A_5$ is greater than that of $A_6$. Calculate the entropies of $A_5$ and $A_6$ with the entropy formulae $E_W$ and $E_{WG}$.
We can get the following results:
$$E_W(A_5)=E_W(A_6)=0.6903,\qquad E_{WG}(A_5)=E_{WG}(A_6)=0.9350,$$
which are counterintuitive.
For example, the IF entropy measure of Liu and Ren [34] is
$$E_{LR}(A)=\frac{1}{n}\sum_{i=1}^n\cos\frac{\mu_A^2(x_i)-\nu_A^2(x_i)}{2}\pi.$$
Example 4.
Let $A_7=\{\langle x,0.2,0.4\rangle\mid x\in X\}$ and $A_8=\{\langle x,0.4272,0.25\rangle\mid x\in X\}$ be two IF sets. Obviously, the fuzziness values of $A_7$ and $A_8$ are not equal. However, calculating the entropy of $A_7$ and $A_8$ with the entropy formula $E_{LR}$, we have $E_{LR}(A_7)=E_{LR}(A_8)=0.9823$.
Motivation: we can see that some existing cosine- and cotangent-function-based entropy measures cannot discriminate certain IF sets, and counterintuitive phenomena arise, as in Examples 1 to 4. In this paper, we are also devoted to the development of IF entropy measures. We propose a new intuitionistic fuzzy entropy based on a cotangent function, which is an improvement of Wang's entropy [32], as follows:
$$E_{RZ}(A)=\frac{1}{n}\sum_{i=1}^n\cot\Big(\frac{\pi}{4}+\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4+\pi_A(x_i)}\pi\Big),$$
which not only considers the deviation between the membership and non-membership degrees, $\mu_A(x_i)-\nu_A(x_i)$, but also considers the hesitancy degree $\pi_A(x_i)$ of the IF set.
Theorem 1.
The measure given by Equation (3) is an IF entropy.
Proof. 
To prove that the measure $E_{RZ}(A)$ given by Equation (3) is an IF entropy, we only need to prove that it satisfies the properties in Definition 4. Obviously, for every $x_i$ we have
$$0\le\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4+\pi_A(x_i)}\pi\le\frac{\pi}{4},$$
and then
$$0\le\cot\Big(\frac{\pi}{4}+\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4+\pi_A(x_i)}\pi\Big)\le 1.$$
Thus, we have $0\le E_{RZ}(A)\le 1$.
(i) Let $A$ be a crisp set, i.e., for each $x_i\in X$ we have $\mu_A(x_i)=1,\ \nu_A(x_i)=0$ or $\mu_A(x_i)=0,\ \nu_A(x_i)=1$. It is obvious that $E_{RZ}(A)=0$.
Conversely, if $E_{RZ}(A)=0$, then, since every summand is non-negative, for each $x_i\in X$ we have $\cot\big(\frac{\pi}{4}+\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4+\pi_A(x_i)}\pi\big)=0$. Thus $\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4+\pi_A(x_i)}=\frac14$, and then $\mu_A(x_i)=1,\ \nu_A(x_i)=0$ or $\mu_A(x_i)=0,\ \nu_A(x_i)=1$. Therefore, $A$ is a crisp set.
(ii) Let $\mu_A(x_i)=\nu_A(x_i)$ for all $x_i\in X$; according to Equation (3), we have $E_{RZ}(A)=\frac1n\sum_{i=1}^n\cot(\frac{\pi}{4})=1$.
Now assume $E_{RZ}(A)=1$; since each summand is at most 1, for all $x_i\in X$ we have $\cot\big(\frac{\pi}{4}+\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4+\pi_A(x_i)}\pi\big)=1$, so $|\mu_A(x_i)-\nu_A(x_i)|=0$, and we obtain $\mu_A(x_i)=\nu_A(x_i)$ for all $x_i\in X$.
(iii) By $A^C=\{\langle x_i,\nu_A(x_i),\mu_A(x_i)\rangle\mid x_i\in X\}$ and Equation (3), we have
$$E_{RZ}(A^C)=\frac1n\sum_{i=1}^n\cot\Big(\frac{\pi}{4}+\frac{|\nu_A(x_i)-\mu_A(x_i)|}{4+\pi_A(x_i)}\pi\Big)=E_{RZ}(A).$$
(iv) Noting that $4+\pi_A(x_i)=5-(\mu_A(x_i)+\nu_A(x_i))$, construct the function
$$f(x,y)=\cot\Big(\frac{\pi}{4}+\frac{|x-y|}{5-(x+y)}\pi\Big),\quad x,y\in[0,1].$$
When $x\le y$, we have $f(x,y)=\cot\big(\frac{\pi}{4}+\frac{y-x}{5-(x+y)}\pi\big)$; we need to prove that $f(x,y)$ is increasing in $x$ and decreasing in $y$. The partial derivatives of $f(x,y)$ with respect to $x$ and $y$ are
$$\frac{\partial f}{\partial x}=\csc^2\Big(\frac{\pi}{4}+\frac{y-x}{5-(x+y)}\pi\Big)\cdot\frac{(5-2y)\pi}{[5-(x+y)]^2},$$
$$\frac{\partial f}{\partial y}=-\csc^2\Big(\frac{\pi}{4}+\frac{y-x}{5-(x+y)}\pi\Big)\cdot\frac{(5-2x)\pi}{[5-(x+y)]^2}.$$
When $x\le y$, since $x,y\in[0,1]$ we have $\frac{\partial f}{\partial x}\ge 0$ and $\frac{\partial f}{\partial y}\le 0$, so $f(x,y)$ is increasing in $x$ and decreasing in $y$; thus, when $\mu_B(x_i)\le\nu_B(x_i)$ and $\mu_A(x_i)\le\mu_B(x_i)$, $\nu_A(x_i)\ge\nu_B(x_i)$ are satisfied, we have $f(\mu_A(x_i),\nu_A(x_i))\le f(\mu_B(x_i),\nu_B(x_i))$, i.e.,
$$\cot\Big(\frac{\pi}{4}+\frac{|\mu_A(x_i)-\nu_A(x_i)|}{4+\pi_A(x_i)}\pi\Big)\le\cot\Big(\frac{\pi}{4}+\frac{|\mu_B(x_i)-\nu_B(x_i)|}{4+\pi_B(x_i)}\pi\Big).$$
Similarly, when $x\ge y$, we have $\frac{\partial f}{\partial x}\le 0$ and $\frac{\partial f}{\partial y}\ge 0$, so $f(x,y)$ is decreasing in $x$ and increasing in $y$; thus, when $\mu_B(x_i)\ge\nu_B(x_i)$ and $\mu_A(x_i)\ge\mu_B(x_i)$, $\nu_A(x_i)\le\nu_B(x_i)$ are satisfied, we again have $f(\mu_A(x_i),\nu_A(x_i))\le f(\mu_B(x_i),\nu_B(x_i))$.
Therefore, if $A\preceq B$, we have $\frac1n\sum_{i=1}^n f(\mu_A(x_i),\nu_A(x_i))\le\frac1n\sum_{i=1}^n f(\mu_B(x_i),\nu_B(x_i))$, i.e., $E_{RZ}(A)\le E_{RZ}(B)$. □
From Equation (3), the entropies of $A_1, A_2, \ldots, A_8$ in Examples 1 to 4 can be obtained as follows:
$$E_{RZ}(A_1)=0.8634,\quad E_{RZ}(A_2)=0.8694,\quad E_{RZ}(A_1)<E_{RZ}(A_2);$$
$$E_{RZ}(A_3)=0.6298,\quad E_{RZ}(A_4)=0.8215,\quad E_{RZ}(A_3)<E_{RZ}(A_4);$$
$$E_{RZ}(A_5)=0.6356,\quad E_{RZ}(A_6)=0.5959,\quad E_{RZ}(A_5)>E_{RZ}(A_6);$$
$$E_{RZ}(A_7)=0.7486,\quad E_{RZ}(A_8)=0.7707,\quad E_{RZ}(A_7)<E_{RZ}(A_8).$$
The calculation results are in agreement with our intuition.
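The proposed entropy is straightforward to implement; the sketch below (our own Python rendering of Equation (3)) reproduces the orderings reported for Examples 1 to 4:

```python
import math

def e_rz(A):
    """Proposed cotangent entropy:
    (1/n) * sum cot(pi/4 + |mu - nu| / (4 + pi_A) * pi)."""
    total = 0.0
    for mu, nu in A:
        hes = 1.0 - mu - nu  # hesitancy degree pi_A(x_i)
        ang = math.pi / 4.0 + abs(mu - nu) / (4.0 + hes) * math.pi
        total += 1.0 / math.tan(ang)  # cot(ang)
    return total / len(A)

# Orderings match the text: E(A1) < E(A2), E(A3) < E(A4),
# E(A5) > E(A6), E(A7) < E(A8).
for name, mn in [("A1", (0.3, 0.4)), ("A2", (0.2, 0.3)),
                 ("A5", (0.2, 0.5)), ("A6", (0.4, 0.04))]:
    print(name, round(e_rz([mn]), 4))
```

The boundary cases also behave as the axioms require: a crisp set gives entropy 0 and $\mu=\nu$ gives entropy 1.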
According to the above examples, the proposed entropy measure performs better than the entropy measures $E_Y$, $E_Z$, $E_{ZJ}$, $E_{VS}$, $E_{B1}$, $E_W$, $E_{WG}$ and $E_{LR}$. Furthermore, the new measure considers both aspects of an IF set (i.e., the uncertainty depicted by the deviation between membership and non-membership, and the intuitionism reflected by the hesitancy degree), and thus it is a good entropy measure for IF sets.

4.2. Clustering Method of Group Decision Experts

For group decision-making problems, suppose that $X=\{x_1,x_2,\ldots,x_m\}$ is a set of $m$ schemes and $O=\{O_1,O_2,\ldots,O_n\}$ is a set of $n$ decision makers. The evaluation value of decision maker $O_j\in O$ for scheme $x_k\in X$ is expressed by the IF number $\langle\mu_j(x_k),\nu_j(x_k)\rangle$, where $\mu_j(x_k)$ and $\nu_j(x_k)$ are the membership (satisfaction) and non-membership (dissatisfaction) degrees of decision maker $O_j$ toward scheme $x_k$ with respect to the fuzzy concept, satisfying $0\le\mu_j(x_k)\le 1$, $0\le\nu_j(x_k)\le 1$ and $0\le\mu_j(x_k)+\nu_j(x_k)\le 1$ ($j=1,2,\ldots,n$; $k=1,2,\ldots,m$).
Thus, a group decision-making problem can be expressed by the decision matrix $O=[\langle\mu_{kj},\nu_{kj}\rangle]_{m\times n}$, whose rows correspond to the schemes $x_1,\ldots,x_m$ and whose columns correspond to the decision makers $O_1,\ldots,O_n$:
$$O=[\langle\mu_{kj},\nu_{kj}\rangle]_{m\times n}=\begin{bmatrix}\langle\mu_{11},\nu_{11}\rangle&\langle\mu_{12},\nu_{12}\rangle&\cdots&\langle\mu_{1n},\nu_{1n}\rangle\\\langle\mu_{21},\nu_{21}\rangle&\langle\mu_{22},\nu_{22}\rangle&\cdots&\langle\mu_{2n},\nu_{2n}\rangle\\\vdots&\vdots&\ddots&\vdots\\\langle\mu_{m1},\nu_{m1}\rangle&\langle\mu_{m2},\nu_{m2}\rangle&\cdots&\langle\mu_{mn},\nu_{mn}\rangle\end{bmatrix}$$

4.2.1. A New IF Similarity Measure

Measuring the similarity between any form of data is an important topic [44,45]. A measure used to find the resemblance between data is called a similarity measure. It has various applications in classification, medical diagnosis, pattern recognition, data mining, clustering [46], decision making and image processing. Khan et al. [47] proposed a new similarity measure for q-rung orthopair fuzzy sets based on cosine and cotangent functions. Chen and Chang [48] proposed a new similarity measure between Atanassov's intuitionistic fuzzy sets (AIFSs) based on transformation techniques and applied it to pattern recognition problems. Beliakov et al. [49] presented a new approach for defining similarity measures for AIFSs and applied it to image segmentation. Lohani et al. [50] presented a novel probabilistic similarity measure (PSM) for AIFSs and developed a probabilistic λ-cutting algorithm for clustering. Liu et al. [51] proposed a new intuitionistic fuzzy similarity measure, introduced it into an intuitionistic fuzzy decision system and proposed a three-way decision method based on intuitionistic fuzzy similarity. Mei [52] constructed a similarity model between intuitionistic fuzzy sets and applied it to dynamic intuitionistic fuzzy multi-attribute decision making.
At present, most of the existing similarity measures are expressed in real numbers, which is not in line with the characteristics of intuitionistic fuzzy sets. In this section, we define a new IF similarity measure whose value is an IF number.
For any two experts $O_j$ and $O_k$, let
$$X_p(O_j,O_k)=\sqrt[p]{\sum_{i=1}^m w_i|\nu_{ij}-\nu_{ik}|^p}\quad\text{and}\quad M_p(O_j,O_k)=\sqrt[p]{\sum_{i=1}^m w_i|\mu_{ij}-\mu_{ik}|^p},$$
where $w_i$ is the weight of scheme $x_i$ for all $i\in\{1,2,\ldots,m\}$ with $\sum_{i=1}^m w_i=1$, and $p\ge 1$ is a parameter.
Let
$$\bar\mu_{jk}=1-\max\{X_p(O_j,O_k),M_p(O_j,O_k)\},$$
$$\bar\nu_{jk}=\min\{X_p(O_j,O_k),M_p(O_j,O_k)\}.$$
Theorem 2.
Let $O_j$ and $O_k$ be two IF sets; then,
$$\psi(O_j,O_k)=\langle\bar\mu_{jk},\bar\nu_{jk}\rangle$$
is the IF similarity measure of $O_j$ and $O_k$.
Proof. 
To prove the measure given by Equation (4) is an IF similarity measure of O j and O k , we only need to prove that it satisfies the properties in Definition 6.
First, we prove that ψ ( O j , O k ) is the form of an IFN.
Because 0 X p ( O j , O k ) = i = 1 m w i ( ν i j ν i k ) p p 1 and 0 M p ( O j , O k ) = i = 1 m w i ( μ i j μ i k ) p p 1 , so 0 1 max { X p ( O j , O k ) , M p ( O j , O k ) } 1 , 0 min { X p ( O j , O k ) , M p ( O j , O k ) } 1 and μ ¯ j k + ν ¯ j k 1 . This proves that ψ ( O j , O k ) is the form of an IFN.
Let $\psi(O_j, O_k) = \langle \bar{\mu}_{jk}, \bar{\nu}_{jk} \rangle = \langle 1, 0 \rangle$; then
$$\bar{\mu}_{jk} = 1 - \max\{X_p(O_j, O_k), M_p(O_j, O_k)\} = 1$$
and $\bar{\nu}_{jk} = \min\{X_p(O_j, O_k), M_p(O_j, O_k)\} = 0$, so $X_p(O_j, O_k) = M_p(O_j, O_k) = 0$. Because of the arbitrariness of $w_i$, we get $\mu_{ij} = \mu_{ik}$ and $\nu_{ij} = \nu_{ik}$ for all $i \in \{1, 2, \ldots, m\}$, that is, $O_j = O_k$.
Conversely, assume that $O_j = O_k$; then for all $i \in \{1, 2, \ldots, m\}$ we have $\mu_{ij} = \mu_{ik}$ and $\nu_{ij} = \nu_{ik}$, so $X_p(O_j, O_k) = M_p(O_j, O_k) = 0$, and hence $\bar{\mu}_{jk} = 1 - \max\{X_p(O_j, O_k), M_p(O_j, O_k)\} = 1$ and $\bar{\nu}_{jk} = \min\{X_p(O_j, O_k), M_p(O_j, O_k)\} = 0$, that is, $\psi(O_j, O_k) = \langle 1, 0 \rangle$.
Property 3 clearly holds.
If $O_1 \subseteq O_2 \subseteq O_3$, i.e., $\mu_{i1} \leq \mu_{i2} \leq \mu_{i3}$ and $\nu_{i1} \geq \nu_{i2} \geq \nu_{i3}$ for all $i \in \{1, 2, \ldots, m\}$, then $|\mu_{i1} - \mu_{i2}|^p \leq |\mu_{i1} - \mu_{i3}|^p$ and $|\nu_{i1} - \nu_{i2}|^p \leq |\nu_{i1} - \nu_{i3}|^p$ for all $i \in \{1, 2, \ldots, m\}$.
We have $X_p(O_1, O_2) \leq X_p(O_1, O_3)$ and $M_p(O_1, O_2) \leq M_p(O_1, O_3)$; therefore, $\bar{\mu}_{12} \geq \bar{\mu}_{13}$ and $\bar{\nu}_{12} \leq \bar{\nu}_{13}$, that is, $\psi(O_1, O_3) \leq \psi(O_1, O_2)$. Similarly, it can be proved that $\psi(O_1, O_3) \leq \psi(O_2, O_3)$.
This theorem is proved. □
For the IF similarity measure of Equation (4), since all schemes are treated equally, this paper takes $p = 2$ and $w_i = \frac{1}{m}$ for all $i \in \{1, 2, \ldots, m\}$. Using this formula, the IF decision matrix $O = [\langle \mu_{kj}, \nu_{kj} \rangle]_{m \times n}$ can be transformed into the IF similarity matrix $Z = (z_{jk})_{n \times n}$, where $z_{jk} = \psi(O_j, O_k) = \langle \bar{\mu}_{jk}, \bar{\nu}_{jk} \rangle$ is an IFN.
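As an illustration, the similarity measure of Equation (4) with $p = 2$ and equal scheme weights can be sketched as follows (Python; the function name and the use of absolute differences are our own choices, not from the paper):

```python
def if_similarity(Oj, Ok, p=2):
    """IF similarity psi(Oj, Ok) = <mu_bar, nu_bar> of Equation (4).

    Oj and Ok are expert evaluation vectors given as lists of (mu, nu)
    pairs; equal scheme weights w_i = 1/m are assumed, as in the paper.
    """
    m = len(Oj)
    w = 1.0 / m
    # M_p aggregates deviations in membership, X_p in non-membership
    Mp = sum(w * abs(mj - mk) ** p for (mj, _), (mk, _) in zip(Oj, Ok)) ** (1.0 / p)
    Xp = sum(w * abs(nj - nk) ** p for (_, nj), (_, nk) in zip(Oj, Ok)) ** (1.0 / p)
    return 1.0 - max(Xp, Mp), min(Xp, Mp)   # (mu_bar, nu_bar)
```

For identical evaluation vectors the function returns (1.0, 0.0), matching the second property in the proof of Theorem 2.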
Attitudes toward risk vary from person to person. Let $\beta \in [0, 1]$ be the risk factor; then the IF similarity matrix $Z = (z_{jk})_{n \times n}$ can be transformed into a real matrix $R = (r_{jk})_{n \times n}$, where $r_{jk} = \bar{\mu}_{jk} + \beta (1 - \bar{\mu}_{jk} - \bar{\nu}_{jk})$.
$$R = (r_{jk})_{n \times n} = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{n1} & r_{n2} & \cdots & r_{nn} \end{bmatrix}$$
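A minimal sketch of this defuzzification step (Python; the function name is ours):

```python
def to_real_matrix(Z, beta=0.5):
    """r_jk = mu_bar + beta * (1 - mu_bar - nu_bar): the risk factor beta
    decides how much of the hesitancy 1 - mu_bar - nu_bar is credited to
    similarity (beta = 0 pessimistic, beta = 1 optimistic).
    Z is a matrix of (mu_bar, nu_bar) pairs."""
    return [[mu + beta * (1.0 - mu - nu) for (mu, nu) in row] for row in Z]
```

For example, with beta = 0.5 the entry <0.805, 0.152> of the similarity matrix in Section 5 maps to 0.805 + 0.5 · 0.043 ≈ 0.827, agreeing with the real matrix R given there.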

4.2.2. Threshold Change Rate Analysis Method

The method of Zhou et al. [40] is adopted in this section.
Let the clustering threshold be $\theta = \theta_t$, where $\theta_t \in [0, 1]$. If
$$r_{jk} \geq \theta_t, \quad j \neq k,$$
then elements O k and O j are considered to have the same properties. The closer the threshold is to 1, the finer the classification is.
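This thresholding step can be sketched as follows (Python; the union-find formulation and the function name are our own choices, not from the paper). Objects are grouped whenever their similarity reaches the threshold, with membership propagated transitively:

```python
def cluster_by_threshold(R, theta):
    """Group objects j, k whose similarity r_jk >= theta; connectivity is
    propagated transitively via a simple union-find structure."""
    n = len(R)
    parent = list(range(n))

    def find(x):
        # Find the representative of x with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for j in range(n):
        for k in range(j + 1, n):
            if R[j][k] >= theta:
                parent[find(j)] = find(k)

    groups = {}
    for x in range(n):
        groups.setdefault(find(x), []).append(x)
    return sorted(groups.values())
```

Only the entries $r_{jk}$ with $j < k$ are read, so an upper-triangular $R$ suffices; raising theta toward 1 yields a finer partition, as noted above.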
In Zhou et al. [40], the optimal clustering threshold $\theta_i$ can be determined by analyzing its change rate $C_i$, given as follows:
$$C_i = \frac{\theta_{i-1} - \theta_i}{n_i - n_{i-1}}$$
where $i$ indexes the clusterings of $\theta$ from large to small, $n_i$ and $n_{i-1}$ are the numbers of objects in the $i$-th and $(i-1)$-th clustering, respectively, and $\theta_i$ and $\theta_{i-1}$ are the thresholds for the $i$-th and $(i-1)$-th clustering, respectively. If
$$C_i = \max_j \{C_j\},$$
then the threshold of the $i$-th clustering is optimal.
It can be seen from Equation (5) that the greater the change rate $C_i$ of the clustering threshold $\theta$ is, the greater the difference between the corresponding two clusterings and the more obvious the boundary between classes. When $C_i$ attains its maximum, the corresponding $\theta$ is the optimal clustering threshold, which makes the difference between the clusters obtained by the $i$-th clustering the largest, thus realizing the purpose of the classification.
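The change-rate selection above can be sketched as follows (Python; names are ours). Here `thresholds[i]` and `counts[i]` hold $\theta_i$ and $n_i$ for the $i$-th clustering, with $\theta_0 = 1$ and $n_0 = 0$ prepended, as in the worked example of Section 5:

```python
def change_rates(thresholds, counts):
    """C_i = (theta_{i-1} - theta_i) / (n_i - n_{i-1}) for i = 1, 2, ..."""
    return [(thresholds[i - 1] - thresholds[i]) / (counts[i] - counts[i - 1])
            for i in range(1, len(thresholds))]

def best_threshold(thresholds, counts):
    """Return the threshold whose change rate is maximal (Equation (6))."""
    C = change_rates(thresholds, counts)
    i_best = max(range(len(C)), key=lambda i: C[i]) + 1
    return thresholds[i_best]
```

In practice the trivial last clustering (every expert merged into one class) would be excluded before taking the maximum, as is done in Section 5.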

4.3. Analysis of Group Decision Making Expert Group Weighting

In group decision-making problems, because each expert has a different specialty, experience and preference, their evaluation information should be treated differently. In order to reflect the status and importance of each expert in decision making, it is of great significance to determine the expert weight reasonably.
Two aspects need to be considered in the expert weight: the weight between categories and the weight within categories. The weight between categories mainly reflects the number of experts in each category. For a category with large capacity, the evaluation results given by its experts represent the opinions of most experts, so the category should be given a larger weight, reflecting the principle that the minority is subordinate to the majority, while a category with smaller capacity should be given a smaller weight.
Suppose that $n$ experts are divided into $t$ categories and the number of experts in the $i$-th category is $\varphi_i$ ($\varphi_i \leq n$); then the weights between the expert categories $\lambda_i$ are as follows:
$$\lambda_i = \frac{\varphi_i^2}{\sum_{k=1}^{t} \varphi_k^2}, \quad i = 1, 2, \ldots, t.$$
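A sketch of this between-category weighting (Python; the function name is ours):

```python
def category_weights(sizes):
    """lambda_i = phi_i**2 / sum_k phi_k**2 for category sizes phi_i.
    Squaring the sizes emphasizes larger categories, i.e. majority opinions."""
    total = sum(phi ** 2 for phi in sizes)
    return [phi ** 2 / total for phi in sizes]
```

For category sizes (3, 3, 2, 1), as in the example of Section 5, this gives (9/23, 9/23, 4/23, 1/23) ≈ (0.3913, 0.3913, 0.1739, 0.0435).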
The weight of experts within the category can be measured by the information contained in an IF evaluation value given by experts. Entropy is a measure of information uncertainty and information quantity. If the entropy of the evaluation information given by an expert is smaller, the uncertainty of the evaluation information is smaller, which means that the logic of the expert is clearer; the amount of information provided is greater; and the role of the expert in the comprehensive evaluation is greater, so the expert should be given more weight. Therefore, the weight of experts within the category can be measured by IF entropy.
The evaluation vector of expert k is O k = ( < μ k ( x 1 ) , ν k ( x 1 ) > , , < μ k ( x 5 ) , ν k ( x 5 ) > ) .
The IF entropy corresponding to Equation (1) is expressed as follows:
$$E(k) = \frac{1}{5} \sum_{i=1}^{5} \cot\left( \frac{\pi}{4} + \frac{\left| \mu_k(x_i) - \nu_k(x_i) \right| \pi}{4 + \pi_k(x_i)} \right)$$
where $\pi_k(x_i) = 1 - \mu_k(x_i) - \nu_k(x_i)$ is the hesitancy degree.
The internal weight $a_{ik}$ of the $k$-th expert in category $i$ is as follows:
$$a_{ik} = \frac{1 - E(k)}{\sum_{j \in \text{category } i} \left[ 1 - E(j) \right]}$$
By linearly weighting $\lambda_i$ and $a_{ik}$, the total weight $\omega_k$ of expert $k$ is obtained:
ω k = λ i · a i k , k = 1 , 2 , , n .
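These two steps can be sketched together (Python; the names are ours, and we read Equation (9) as dividing $|\mu - \nu| \pi$ by $4 + \pi_k(x_i)$, which reproduces the entropy values reported in Section 5 to within rounding):

```python
import math

def if_entropy(evals):
    """Cotangent-based IF entropy of an evaluation vector (Equation (9));
    evals is a list of (mu, nu) pairs, hesitancy pi = 1 - mu - nu."""
    s = sum(1.0 / math.tan(math.pi / 4
                           + abs(mu - nu) * math.pi / (4 + (1.0 - mu - nu)))
            for mu, nu in evals)
    return s / len(evals)

def within_category_weights(entropies):
    """a_ik = (1 - E(k)) / sum_j (1 - E(j)) over one category (Equation (10))."""
    denom = sum(1.0 - e for e in entropies)
    return [(1.0 - e) / denom for e in entropies]
```

For category 1 of the example in Section 5 (entropies 0.6868, 0.7364, 0.7159) this reproduces the weights 0.3638, 0.3062, 0.3300 of Table 2, and the total weight of expert 1 is $\lambda_1 \cdot a_{11} = 0.3913 \cdot 0.3638 \approx 0.1424$.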

4.4. Intelligent Expert Combination Weighting Algorithm

A cluster analysis method is often used to realize the expert weighting in group decision making. The basic principle of expert cluster analysis is to measure the similarity degree of expert evaluation opinions according to certain standards and cluster experts based on the similarity degree. In short, Figure 4 shows the general scheme of the expert clustering method.
To sum up, this paper proposes an expert combination weighting scheme for group decision making, and obtains the following algorithm, which we call the intelligent expert combination weighting algorithm (see Algorithm 1).
Algorithm 1. Intelligent expert combination weighting algorithm
          Input the IF decision matrix O = [ < μ i j , ν i j > ] n × m given by experts where
           I = { 1 , 2 , , n }   and J = { 1 , 2 , , m } .
          1: For j ∈ I do.
          2: For k ∈ I do.
          3: For i ∈ J do.
          4: The IF similarity measure between experts ψ ( O j , O k ) = < μ ¯ j k , ν ¯ j k > is calculated according to formula (4).
          5: End for
          6: Let z j k = ψ ( O j , O k ) = < μ ¯ j k , ν ¯ j k > .
          7: End for
          8: End for
          9: The IF decision matrix O = [ < μ i j , ν i j > ] n × m is transformed into the similarity matrix Z = ( z j k ) n × n .
          10: By selecting the risk factor β, the IF similarity matrix Z = ( z j k ) n × n is transformed into the real matrix R = ( r j k ) n × n   .
          11: According to the real matrix R = ( r j k ) n × n , the dynamic clustering graph is drawn, and the optimal clustering threshold is determined by Formulae (6) and (7). According to this threshold, experts are classified into L categories.
          12: For l ∈ {1, 2, …, L} do.
          13: Using Formula (8), the between-category weight λl of each category is determined.
          14: For k ∈ I do.
          15: Using Formula (10), the within-category weight alk of each expert is determined.
          16: Formula (11) is used to determine the total weight ωk of each expert.
          17: End for.
          18: End for.
          19: For i ∈ J do.
          20: For k ∈ I do.
          21: The weighted operator (2) of IF sets is used to aggregate expert IF group decision-making information.
          22: End for.
          23: According to Definition 8, the scores and accuracy values of each scheme xi are obtained.
          24: End for.
          25: return The results of the ranking of schemes xi.

5. Performance Analysis

The railway is an important national infrastructure and livelihood project, and a resource-saving and environment-friendly mode of transportation. In recent years, China’s railway development has made remarkable achievements, but compared with the needs of economic and social development, other modes of transportation and advanced foreign railway technology, the railway in China is still a weak part of the whole transportation system [53,54]. In order to further accelerate railway construction, expand the scale of the railway network and improve its layout structure and quality, the state promulgated the Medium- and Long-Term Railway Network Plan, which puts forward a series of railway plans, including the plan for railway reconstruction.
The railway reconstruction project is carried out through a series of communication, coordination and cooperation efforts, and the complex work is arranged in a limited work area, so it faces many unexpected challenges: carelessness or inadequate planning may lead to accidents and cause significant damage to life, assets, the environment and society. According to the literature [55], we can conclude that there are about seven types of risks in railway reconstruction projects: financial and economic risks, contract and legal risks, subcontractor-related risks, operation and safety risks, political and social risks, design risks and force majeure risks.
It is assumed that nine experts $O_i$ ($i = 1, 2, \ldots, 9$) form a decision-making group to rank five alternatives $x_j$ ($j = 1, 2, 3, 4, 5$) against the seven evaluation attributes above. Evaluations of alternatives always involve ambiguity and diversity of meaning, and for qualitative attributes human assessment is subjective and therefore imprecise. In this case, an IF set is very advantageous, as it can describe the decision process more accurately; IF sets are therefore used in this study. After expert investigation and statistical analysis, we obtain the satisfaction degree $\mu_{ij}$ and dissatisfaction degree $\nu_{ij}$ given by each expert $O_i$ ($i = 1, 2, \ldots, 9$) for each scheme $x_j$ ($j = 1, 2, 3, 4, 5$). The specific data are given in Table 1.
The calculation steps of the proposed method are given as follows:
Step 1. According to Equation (4), the IF similarity matrix Z is obtained as follows:
$$Z = \begin{bmatrix}
\langle 1,0 \rangle & \langle 0.805,0.152 \rangle & \langle 0.675,0.304 \rangle & \langle 0.953,0.033 \rangle & \langle 0.672,0.327 \rangle & \langle 0.746,0.253 \rangle & \langle 0.668,0.311 \rangle & \langle 0.945,0.023 \rangle & \langle 0.752,0.235 \rangle \\
 & \langle 1,0 \rangle & \langle 0.685,0.261 \rangle & \langle 0.821,0.125 \rangle & \langle 0.685,0.274 \rangle & \langle 0.758,0.211 \rangle & \langle 0.706,0.246 \rangle & \langle 0.816,0.133 \rangle & \langle 0.876,0.075 \rangle \\
 & & \langle 1,0 \rangle & \langle 0.694,0.271 \rangle & \langle 0.946,0.051 \rangle & \langle 0.751,0.245 \rangle & \langle 0.938,0.060 \rangle & \langle 0.691,0.276 \rangle & \langle 0.689,0.255 \rangle \\
 & & & \langle 1,0 \rangle & \langle 0.689,0.294 \rangle & \langle 0.762,0.229 \rangle & \langle 0.688,0.276 \rangle & \langle 0.966,0.022 \rangle & \langle 0.768,0.210 \rangle \\
 & & & & \langle 1,0 \rangle & \langle 0.722,0.277 \rangle & \langle 0.954,0.038 \rangle & \langle 0.686,0.298 \rangle & \langle 0.698,0.245 \rangle \\
 & & & & & \langle 1,0 \rangle & \langle 0.745,0.250 \rangle & \langle 0.755,0.216 \rangle & \langle 0.705,0.286 \rangle \\
 & & & & & & \langle 1,0 \rangle & \langle 0.683,0.279 \rangle & \langle 0.720,0.220 \rangle \\
 & & & & & & & \langle 1,0 \rangle & \langle 0.763,0.220 \rangle \\
 & & & & & & & & \langle 1,0 \rangle
\end{bmatrix}$$
(only the upper triangle is shown; $Z$ is symmetric).
Step 2. By selecting the risk factor β = 0.5 , i.e., moderate risk, the real matrix R is obtained.
$$R = \begin{bmatrix}
1 & 0.827 & 0.686 & 0.96 & 0.673 & 0.747 & 0.679 & 0.961 & 0.759 \\
 & 1 & 0.712 & 0.848 & 0.706 & 0.774 & 0.73 & 0.842 & 0.901 \\
 & & 1 & 0.712 & 0.948 & 0.753 & 0.939 & 0.708 & 0.717 \\
 & & & 1 & 0.700 & 0.767 & 0.706 & 0.972 & 0.779 \\
 & & & & 1 & 0.723 & 0.958 & 0.694 & 0.727 \\
 & & & & & 1 & 0.748 & 0.770 & 0.710 \\
 & & & & & & 1 & 0.702 & 0.75 \\
 & & & & & & & 1 & 0.772 \\
 & & & & & & & & 1
\end{bmatrix}$$
(only the upper triangle is shown; $R$ is symmetric).
Step 3. According to Equation (5), let $i$ take all its values in turn to obtain a series of classifications, and then draw the dynamic clustering graph according to Equations (5) and (6), as shown in Figure 5.
According to Equation (6), we have
$$C_1 = \frac{1 - 0.972}{2 - 0} = 0.014, \quad C_2 = \frac{0.972 - 0.961}{3 - 2} = 0.011, \quad C_3 = \frac{0.961 - 0.958}{5 - 3} = 0.0015,$$
$$C_4 = \frac{0.958 - 0.948}{6 - 5} = 0.01, \quad C_5 = \frac{0.948 - 0.901}{8 - 6} = 0.0235, \quad C_6 = \frac{0.901 - 0.770}{9 - 8} = 0.131.$$
Since it is meaningless for every expert to form its own category or for all experts to be merged into a single category, we do not consider $C_6$; then we have $C_5 = \max\{C_1, C_2, C_3, C_4, C_5\}$.
Therefore, taking $\theta = 0.901$ as the optimal clustering threshold, the clustering result is the most reasonable and consistent with the actual situation; the clustering results are shown in Figure 6. The corresponding clusters are as follows:
{O1, O4, O8}, {O3, O5, O7}, {O2, O9}, {O6}
Step 4. According to Equation (8), the weight of experts between categories is as follows:
λ 1 = 0.3913 , λ 2 = 0.3913 , λ 3 = 0.1739 , λ 4 = 0.0435 .
Step 5. According to Equation (9), the entropy vector of the expert group is obtained as follows:
(0.6868, 0.7405, 0.5538, 0.7364, 0.4995, 0.5935, 0.5507, 0.7159, 0.7339)
According to Equation (10), the weight of experts within the category is shown in Table 2.
Step 6. We weight λ i and a i k linearly to get the total weight vector ω k of experts as follows:
(0.1424, 0.0859, 0.1251, 0.1198, 0.1403, 0.0435, 0.1260, 0.1291, 0.0748).
Step 7. According to the total weight of nine experts, the weighted aggregation operator given by Equation (2) is used to aggregate the expert information, and the comprehensive evaluation vector is obtained as follows:
<0.3616, 0.4504>, <0.5226, 0.3878>, <0.5932, 0.3218>, <0.4749, 0.3853>, <0.4972, 0.3718>.
According to Equation (1), the scores and accuracy values of the comprehensive evaluation vector are calculated as follows:
$$M(x_1) = -0.089, \; M(x_2) = 0.1348, \; M(x_3) = 0.2714, \; M(x_4) = 0.0896, \; M(x_5) = 0.1254;$$
$$\Delta(x_1) = 0.812, \; \Delta(x_2) = 0.9104, \; \Delta(x_3) = 0.915, \; \Delta(x_4) = 0.8602, \; \Delta(x_5) = 0.869.$$
Therefore, the priority of the five alternatives is x 3 x 2 x 5 x 4 x 1 , and the optimal one is x 3 .
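As a check, the ranking can be reproduced from the aggregated vector (Python; we assume the standard score function $M = \mu - \nu$ and accuracy function $\Delta = \mu + \nu$, which match the reported values):

```python
# Aggregated IFNs <mu, nu> from Step 7 for alternatives x1..x5
vec = [(0.3616, 0.4504), (0.5226, 0.3878), (0.5932, 0.3218),
       (0.4749, 0.3853), (0.4972, 0.3718)]
scores = [mu - nu for mu, nu in vec]        # M(x_i) = mu - nu
accuracies = [mu + nu for mu, nu in vec]    # Delta(x_i) = mu + nu
# Rank alternatives by descending score (0-based indices of x1..x5)
ranking = sorted(range(len(vec)), key=lambda i: scores[i], reverse=True)
```

The resulting order of indices corresponds to $x_3 \succ x_2 \succ x_5 \succ x_4 \succ x_1$, in agreement with the conclusion above.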

6. Conclusions and Future Work

This article listed some counterintuitive phenomena of some existing intuitionistic fuzzy entropies. We defined an improved intuitionistic fuzzy entropy based on a cotangent function and a new IF similarity measure whose value is an IF number, applied them to the expert weight problem of group decision making and put forward the expert weight combination weighting scheme. Finally, this method was applied to a railway reconstruction case to illustrate the effectiveness of the method.
In the future, we will apply the expert weight combination weighting scheme proposed in this paper to situations in real life. We will also formulate this kind of entropy measure and similarity measure for interval-valued IF sets [56], Fermatean fuzzy sets, spherical fuzzy sets, t-spherical fuzzy sets, picture fuzzy sets, single-valued neutrosophic sets [55,57], Plithogenic sets [58] and linear Diophantine fuzzy sets.
While studying the theoretical method, this paper used numerical examples rather than actual production data, which is a limitation of this work. In future research, we will apply the proposed expert combination weighting scheme to practical production problems.

Author Contributions

L.Z. and H.R. designed the method and wrote the paper; T.Y. and N.X. analyzed the data. All authors have read and agreed to the published version of the manuscript.

Funding

This work was mainly supported by the National Natural Science Foundation of China (No. 71661012) and scientific research project of the Jiangxi Provincial Department of Education (No. GJJ210827).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xie, Y.; Kou, Y.; Jiang, M.; Yu, W. Development and technical prospect of China railway. High Speed Railway Technol. 2020, 11, 11–16. [Google Scholar]
  2. Li, Y.B. Research on the current situation and development direction of railway freight transportation in China. Intell. City 2019, 5, 133–134. [Google Scholar]
  3. Fu, Z.; Zhong, M.; Li, Z. Development and innovation of Chinese railways over past century. Chin. Rail. 2021, 7, 1–7. [Google Scholar]
  4. Li, Z.; Xie, R.; Sun, L.; Huang, T. A survey of mobile edge computing Telecommunications Science. Chin. Rail. 2014, 2, 9–13. [Google Scholar]
  5. Lu, S.-T.; Yu, S.-H.; Chang, D.-S. Using fuzzy multiple criteria decision-making approach for assessing the risk of railway reconstruction project in Taiwan. Sci. World J. 2014, 2014, 239793. [Google Scholar] [CrossRef] [Green Version]
  6. Wang, M.; Xie, H. Present situation analysis and discussion on development of Chinese railway construction market. J. Rail. Sci. Eng. 2008, 5, 63–67. [Google Scholar]
  7. Zhang, Z. Analysis on risks in construction of railway engineering projects and exploration for their prevention. Rail. Stan. Desi. 2010, 9, 51–52. [Google Scholar]
  8. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  9. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  10. Tao, P.; Liu, Z.; Cai, R.; Kang, H. A dynamic group MCDM model with intuitionistic fuzzy set: Perspective of alternative queuing method. Inf. Sci. 2021, 555, 85–103. [Google Scholar] [CrossRef]
  11. Singh, M.; Rathi, R.; Antony, J.; Garza-Reyes, J. Lean six sigma project selection in a manufacturing environment using hybrid methodology based on intuitionistic fuzzy MADM approach. IEEE Trans. Eng. Manag. 2021, 99, 1–15. [Google Scholar] [CrossRef]
  12. Chaira, T. An intuitionistic fuzzy clustering approach for detection of abnormal regions in mammogram images. J. Digit. Imaging 2021, 34, 428–439. [Google Scholar] [CrossRef] [PubMed]
  13. Jiang, H.B.; Hu, B.Q. A novel three-way group investment decision model under intuitionistic fuzzy multi-attribute group decision-making environment. Inf. Sci. 2021, 569, 557–581. [Google Scholar] [CrossRef]
  14. Wang, W.; Zhan, J.; Mi, J. A three-way decision approach with probabilistic dominance relations under intuitionistic fuzzy information. Inf. Sci. 2022, 582, 114–145. [Google Scholar] [CrossRef]
  15. Wan, S.; Dong, J. A novel extension of best-worst method with intuitionistic fuzzy reference comparisons. IEEE Trans. Fuzzy Syst. 2021, 99, 1. [Google Scholar] [CrossRef]
  16. Kumar, D.; Agrawal, R.; Kumar, P. Bias-corrected intuitionistic fuzzy c-means with spatial neighborhood information approach for human brain MRI image segmentation. IEEE Trans. Fuzzy Syst. 2020. [Google Scholar] [CrossRef]
  17. Senapati, T.; Yager, R. Fermatean fuzzy sets. J. Ambient Intell. Humaniz. Comput. 2020, 11, 663–674. [Google Scholar] [CrossRef]
  18. Senapati, T.; Yager, R. Fermatean fuzzy weighted averaging/geometric operators and its application in multi-criteria decision-making methods—Science Direct. Eng. Appl. Artif. Intell. 2019, 85, 112–121. [Google Scholar] [CrossRef]
  19. Senapati, T.; Yager, R. Some new operations over fermatean fuzzy numbers and application of fermatean fuzzy WPM in multiple criteria decision making. Informatica 2019, 2, 391–412. [Google Scholar] [CrossRef] [Green Version]
  20. Ashraf, S.; Abdullah, S.; Mahmood, T.; Ghani, F. Spherical fuzzy sets and their applications in multi-attribute decision making problems. J. Intell. Fuzzy Syst. 2019, 36, 2829–2844. [Google Scholar] [CrossRef]
  21. Khan, M.; Kumam, P.; Liu, P.; Kumam, W.; Rehman, H. An adjustable weighted soft discernibility matrix based on generalized picture fuzzy soft set and its applications in decision making. J. Intell. Fuzzy Syst. 2020, 38, 2103–2118. [Google Scholar] [CrossRef]
  22. Riaz, M.; Hashmi, M. Linear Diophantine fuzzy set and its applications towards multi-attribute decision-making problems. J. Intell. Fuzzy Syst. 2019, 37, 5417–5439. [Google Scholar] [CrossRef]
  23. Gao, K.; Han, F.; Dong, P.; Xiong, N.; Du, R. Connected vehicle as a mobile sensor for real time queue length at signalized intersections. Sensors 2019, 19, 2059. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Zhang, Q.; Zhou, C.; Tian, Y.; Xiong, N.; Qin, Y.; Hu, B. A fuzzy probability Bayesian network approach for dynamic cybersecurity risk assessment in industrial control systems. IEEE Trans. Ind. Inform. 2017, 14, 2497–2506. [Google Scholar] [CrossRef] [Green Version]
  25. De Luca, A.; Termini, S. A definition of a nonprobabilistic entropy in the setting of fuzzy sets theory. Inf. Control 1972, 3, 301–312. [Google Scholar] [CrossRef] [Green Version]
  26. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477. [Google Scholar] [CrossRef]
  27. Ye, J. Two effective measures of intuitionistic fuzzy entropy. Computing 2010, 87, 55–62. [Google Scholar] [CrossRef]
  28. Zeng, W.; Li, H. Relationship between similarity measure and entropy of interval valued fuzzy sets. Fuzzy Sets Syst. 2006, 157, 1477–1484. [Google Scholar] [CrossRef]
  29. Zhang, Q.; Jiang, S. A note on information entropy measures for vague sets and its applications. Inf. Sci. 2008, 178, 4184–4191. [Google Scholar] [CrossRef]
  30. Verma, R.; Sharma, B. Exponential entropy on intuitionistic fuzzy sets. Kybernetika 2013, 49, 114–127. [Google Scholar]
  31. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst. 1996, 78, 305–316. [Google Scholar] [CrossRef]
  32. Wang, J.; Wang, P. Intuitionistic linguistic fuzzy multi-criteria decision-making method based on intuitionistic fuzzy entropy. Control Decis. 2012, 27, 1694–1698. [Google Scholar]
  33. Wei, C.; Gao, Z.; Guo, T. An intuitionistic fuzzy entropy measure based on trigonometric function. Control Decis. 2012, 27, 571–574. [Google Scholar]
  34. Liu, M.; Ren, H. A new intuitionistic fuzzy entropy and application in multi-attribute decision making. Informatica 2014, 5, 587–601. [Google Scholar] [CrossRef] [Green Version]
  35. Ai, C.; Feng, F.; Li, J.; Liu, K. AHP method of subjective group decision-making based on interval number judgment matrix and fuzzy clustering analysis. Control Decis. 2019, 35, 41–45. [Google Scholar]
  36. He, Z.; Lei, Y. Research on intuitionistic fuzzy C-means clustering algorithm. Control Decis. 2011, 26, 847–850. [Google Scholar]
  37. Zhang, H.; Xu, Z.; Chen, Q. On clustering approach to intuitionistic fuzzy sets. Control Decis. 2007, 22, 882–888. [Google Scholar]
  38. He, Z.; Lei, Y.; Wang, G. Target recognition based on intuitionistic fuzzy clustering. J. Syst. Eng. Electron. 2011, 6, 1283–1286. [Google Scholar]
  39. Wang, Z.; Xu, Z.; Liu, S.; Tang, J. A netting clustering analysis method under intuitionistic fuzzy environment. Appl. Soft Comput. 2011, 11, 5558–5564. [Google Scholar] [CrossRef]
  40. Zhuo, X.; Zhang, F.; Hui, X.; Li, K. Method for determining experts’ weights based on entropy and cluster analysis. Control Decis. 2011, 26, 153–156. [Google Scholar]
  41. Xu, Z.; Chen, J. An overview of distance and similarity measures of intuitionistic fuzzy sets. Int. J. Uncertain. Fuzz. 2008, 16, 529–555. [Google Scholar] [CrossRef]
  42. Zhang, Z.; Chen, S.; Wang, C. Group decision making with incomplete intuitionistic multiplicative preference relations. Inf. Sci. 2020, 516, 560–571. [Google Scholar] [CrossRef]
  43. Xu, Z.S.; Yager, R.R. Some geometric aggregation operators based on intuitionistic fuzzy sets. Int. J. Gen. Syst. 2006, 35, 417–433. [Google Scholar] [CrossRef]
  44. Huang, S.; Liu, A.; Zhang, S.; Wang, T.; Xiong, N. BD-VTE: A novel baseline data based verifiable trust evaluation scheme for smart network systems. IEEE Trans. Netw. Sci. Eng. 2020, 8, 2087–2105. [Google Scholar] [CrossRef]
  45. Wu, M.; Tan, L.; Xiong, N. A structure fidelity approach for big data collection in wireless sensor networks. Sensors 2015, 15, 248–273. [Google Scholar] [CrossRef] [Green Version]
  46. Li, H.; Liu, J.; Wu, K.; Yang, Z.; Liu, R.; Xiong, N. Spatio-temporal vessel trajectory clustering based on data mapping and density. IEEE Access 2018, 6, 58939–58954. [Google Scholar] [CrossRef]
  47. Khan, M.; Kumam, P.; Alreshidi, N.; Kumam, W. Improved cosine and cotangent function-based similarity measures for q-rung orthopair fuzzy sets and TOPSIS method. Complex Intell. Syst. 2021, 7, 2679–2696. [Google Scholar] [CrossRef]
  48. Chen, S.; Chang, C. A novel similarity measure between Atanassov’s intuitionistic fuzzy sets based on transformation techniques with applications to pattern recognition. Inf. Sci. 2015, 291, 96–114. [Google Scholar] [CrossRef]
  49. Beliakov, G.; Pagola, M.; Wilkin, T. Vector valued similarity measures for Atanassov’s intuitionistic fuzzy sets. Inf. Sci. 2014, 280, 352–367. [Google Scholar] [CrossRef]
  50. Lohani, Q.; Solanki, R.; Muhuri, P. Novel adaptive clustering algorithms based on a probabilistic similarity measure over Atanassov intuitionistic fuzzy set. IEEE Trans. Fuzzy Syst. 2018, 6, 3715–3729. [Google Scholar] [CrossRef]
  51. Liu, J.; Zhou, X.; Li, H.; Huang, B.; Gu, P. An intuitionistic fuzzy three-way decision method based on intuitionistic fuzzy similarity degrees. Syst. Eng. Theory Pract. 2019, 39, 1550–1564. [Google Scholar]
  52. Mei, X. Dynamic intuitionistic fuzzy multi-attribute decision making method based on similarity. Stat. Decis. 2016, 15, 22–24. [Google Scholar]
  53. Tang, Y. Comparison and analysis of domestic and foreign railway energy consumption. Rail. Tran. Econ. 2018, 40, 97–103. [Google Scholar]
  54. Gao, F. A study on the current situation and development strategies of China’s railway restructuring. Railw. Freight Transport. 2020, 38, 15–19. [Google Scholar]
  55. Chai, J.S.; Selvachandran, G.; Smarandache, F.; Gerogiannis, V.C.; Son, L.H.; Bui, Q.-T.; Vo, B. New similarity measures for single-valued neutrosophic sets with applications in pattern recognition and medical diagnosis problems. Complex Intell. Syst. 2021, 7, 703–723. [Google Scholar] [CrossRef]
  56. Garg, H. Generalized intuitionistic fuzzy entropy-based approach for solving multi-attribute decision-making problems with unknown attribute weights. Proc. Natl. Acad. Sci. USA 2017, 89, 129–139. [Google Scholar] [CrossRef]
  57. Majumdar, P. On new measures of uncertainty for neutrosophic sets. Neutrosophic Sets Syst. 2017, 17, 50–57. [Google Scholar]
  58. Quek, S.G.; Selvachandran, G.; Smarandache, F.; Vimala, J.; Le, S.H.; Bui, Q.-T.; Gerogiannis, V.C. Entropy measures for Plithogenic sets and applications in multi-attribute decision making. Mathematics 2020, 8, 965. [Google Scholar] [CrossRef]
Figure 1. Business mileage of China’s railways.
Figure 2. Total railway freight volume in China.
Figure 3. China railway passenger volume.
Figure 4. The general scheme of expert clustering method.
Figure 5. Dynamic clustering graph.
Figure 6. Clustering results.
Table 1. Expert evaluation information on the program.
Expert    x1              x2              x3              x4              x5
O1        <0.43, 0.45>    <0.24, 0.70>    <0.57, 0.40>    <0.29, 0.55>    <0.25, 0.60>
O2        <0.58, 0.30>    <0.37, 0.52>    <0.30, 0.50>    <0.55, 0.35>    <0.35, 0.50>
O3        <0.31, 0.61>    <0.74, 0.22>    <0.70, 0.25>    <0.50, 0.40>    <0.70, 0.20>
O4        <0.44, 0.45>    <0.31, 0.60>    <0.56, 0.40>    <0.31, 0.52>    <0.24, 0.60>
O5        <0.31, 0.60>    <0.70, 0.20>    <0.75, 0.20>    <0.60, 0.30>    <0.68, 0.20>
O6        <0.70, 0.20>    <0.58, 0.32>    <0.52, 0.40>    <0.20, 0.70>    <0.60, 0.30>
O7        <0.38, 0.52>    <0.72, 0.21>    <0.68, 0.22>    <0.61, 0.30>    <0.70, 0.22>
O8        <0.41, 0.40>    <0.28, 0.60>    <0.55, 0.35>    <0.30, 0.55>    <0.26, 0.60>
O9        <0.56, 0.34>    <0.40, 0.50>    <0.30, 0.40>    <0.71, 0.10>    <0.38, 0.45>
Table 2. The weight of experts within the category.
Category      The Weight of Experts within the Category
Category 1 a 11 = 0.3638 , a 14 = 0.3062 , a 18 = 0.330
Category 2 a 23 = 0.3196 , a 25 = 0.3585 , a 27 = 0.3219
Category 3 a 32 = 0.4937 , a 39 = 0.4303
Category 4 a 46 = 1

Zeng, L.; Ren, H.; Yang, T.; Xiong, N. An Intelligent Expert Combination Weighting Scheme for Group Decision Making in Railway Reconstruction. Mathematics 2022, 10, 549. https://0-doi-org.brum.beds.ac.uk/10.3390/math10040549
