Article

Overlap Functions Based (Multi-Granulation) Fuzzy Rough Sets and Their Applications in MCDM

1 School of Mathematics and Data Science, Shaanxi University of Science and Technology, Xi’an Weiyang University Park, Xi’an 710021, China
2 Shaanxi Joint Laboratory of Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an Weiyang University Park, Xi’an 710021, China
* Author to whom correspondence should be addressed.
Submission received: 30 August 2021 / Revised: 15 September 2021 / Accepted: 18 September 2021 / Published: 24 September 2021

Abstract:
Through a combination of overlap functions (which are symmetric and continuous) and fuzzy β-covering fuzzy rough sets (FCFRS), a new class of FCFRS models is established, and the basic properties of the new fuzzy β-neighborhood lower and upper approximation operators are analyzed and studied. The model is then extended to the multi-granulation case, and the properties of the optimistic multi-granulation fuzzy rough set are mainly investigated. Building on the theoretical analysis of the fuzzy covering (multi-granulation) fuzzy rough sets, methods for multi-criteria decision-making (MCDM) and multi-criteria group decision-making (MCGDM) problems are constructed, respectively. Effective examples are developed to fully illustrate these methodologies. Comparing the method proposed in this paper with existing methods, we find that the proposed method is more suitable for solving decision-making problems than the traditional methods, and the results obtained are more helpful to decision makers.

1. Introduction

1.1. A Look Back at Fuzzy Rough Sets

Rough set theory, first proposed by the Polish scientist Pawlak in 1982, is a mathematical tool for dealing with uncertain knowledge [1]. Covering rough set theory is an important extension of classical rough set theory and an important theoretical method for processing data from incomplete information systems [2]. Classical rough set theory applies only to complete, discrete attribute data, from which a partition of the universe is derived; since the data arising in daily production are varied and rarely complete and discrete, the classical rough set model is often inapplicable, and a covering of the universe can be used instead to study such problems [3]. Therefore, it is of practical significance to study covering-based rough set models. As data are increasingly characterized by large volume, diversity, value and veracity, data processing using covering rough set methods has become a research hotspot in academic circles at home and abroad [4,5,6,7,8].
Attribute values in many problems are of symbolic and real-valued types [9], and rough set theory has limitations when processing real-valued data sets. Fuzzy set theory [10] was proposed by the famous cybernetics expert Zadeh in 1965. Fuzzy set theory deals effectively with fuzzy concepts and is very useful for overcoming the limitations of rough set theory on real-valued data sets. Owing to the complementarity and differences of the two theories, research combining rough set theory and fuzzy set theory has attracted wide attention over the past twenty years, and the idea of combining them can be found in different mathematical fields. The concept of the fuzzy rough set [11] was first proposed by Dubois and Prade by combining rough set theory and fuzzy set theory. On this basis, many researchers have used different fuzzy logic connectives to define various types of fuzzy rough set models [12,13,14,15,16,17], further expanding fuzzy rough set theory. On the other hand, as is well known, rough set theory has already been successfully applied to various practical issues, for instance, multi-attribute decision-making [18], knowledge discovery [19], granular machine learning [20], and data analysis [21].
To study fuzzy rough sets from another perspective, Zakowski first proposed, in 1983, replacing the binary relation with a covering to obtain covering-based rough sets [22]. Once developed, the idea aroused widespread discussion. Seeking greater generality, some scholars extended the covering-based rough set to a more general form, the fuzzy covering-based rough set model. For example, Li, Leung and Zhang proposed two pairs of generalized lower and upper approximation operators based on fuzzy coverings [23]. In addition, D’eer explored fuzzy neighborhood operators based on fuzzy neighborhood systems, fuzzy minimal descriptions and fuzzy maximal descriptions; however, research remained restricted by the definition of fuzzy covering [24,25,26]. In 2016, Ma suggested building a fuzzy β-covering, where β ∈ (0, 1], instead of a fuzzy covering; in the same year, with the help of the fuzzy β-covering, Ma defined another class of fuzzy β-covering-based fuzzy rough sets via the fuzzy β-neighborhood [27]. Models based on fuzzy covering rough sets have been widely applied to specific problems, such as decision problems [28,29,30]. On the basis of the fuzzy covering rough set model, changing the fuzzy operator extends the model and broadens its applicability: the fuzzy covering fuzzy rough set model based on the triangular norm has been applied to uncertain multi-attribute decision-making problems [31,32] and, compared with existing decision-making methods, yields better decision results. In 2010, Qian et al. proposed the multi-granulation fuzzy rough set and discussed it from optimistic and pessimistic perspectives, respectively, which enriched fuzzy rough set theory [33,34]. Since then, research on multi-granulation rough sets has become a hot topic.
Scholars have extended the multi-granulation rough set model in various directions, for example, by replacing the binary relation with a fuzzy covering or by using different approximation operators to construct new models [35]. Zhan et al. and Atef et al. [36,37] applied new extended models to solving MAGDM problems.

1.2. A Look Back at Overlap Functions

The overlap function (which is symmetric and continuous) is an extension of the continuous triangular norm and is widely used in image processing, data classification and multi-attribute decision making. In 2009, Bustince et al. first proposed the overlap function [38] (a kind of aggregation function and a not necessarily associative fuzzy logic connective), which originated from practical applications in image processing and data classification. In image processing, Bustince et al. used the overlap function to calculate the threshold value of an image and then carried out the research in [39]. In classification problems, Amo et al. used overlap functions to discuss (fuzzy) evaluations of classification results [40]. In 2013, Bedregal et al. studied some important properties of overlap functions [41], such as migrativity, homogeneity and idempotence. In 2015, Dimuro and Bedregal studied the basic properties of overlap functions and their residual implications [42]. At the same time, the authors of [43] studied the n-dimensional overlap function, its properties, and its application to classification problems based on fuzzy rules. In 2021, the concept of the intuitionistic overlap function was proposed for the first time and its properties were studied [44]. In recent years, researchers have combined overlap functions with fuzzy rough set theory [45,46] and found that the overlap function has broad application prospects.

1.3. Motivation of Our Research

On the one hand, the overlap function can be regarded as a new extension of the logical connective “and”, different from the usual fuzzy logical connective, the t-norm. Overlap functions can therefore be used, in place of the classical conjunction operators, to define upper fuzzy rough approximation operators in fuzzy rough sets, and correspondingly, lower fuzzy rough approximation operators can be defined by their induced residual implications. On the other hand, from the application point of view, basing our research on a combination of covering fuzzy rough sets and overlap functions not only enriches rough set theory but also expands the application scope of rough sets. Fuzzy rough sets based on overlap functions play a substantial role in practical applications: they give us another possible method for dealing with practical problems (e.g., multi-attribute decision making, data classification, etc.) and provide a broader theoretical basis than before, helping us handle practical problems more conveniently.
In addition, as far as we know, there has been no research on fuzzy covering fuzzy rough set models based on the overlap function, nor on multi-granulation fuzzy covering fuzzy rough set models based on the overlap function. Therefore, to fill this research gap, we propose two types of models: the fuzzy β-covering fuzzy rough set model based on the overlap function, and its extension to the multi-granulation case, i.e., the FCOMGFRS model and the FCPMGFRS model based on the overlap function.

1.4. The Relationship between Some Extension Models of Rough Sets

In the rough set model proposed by Pawlak, the binary relation is required to be an equivalence relation; this requirement is strong and limits the application of the rough set model. The extension of the rough set model is therefore an important research direction in rough set theory. One approach is constructive: taking a general fuzzy relation, partition, covering, or neighborhood, etc. as the basic element, the lower and upper rough approximation operators are further defined, and the rough set system is derived from them. Furthermore, rough sets based on different logical operators (such as the logical “and”, the t-norm, the overlap function, etc.) form another type of method for constructing extended rough set models (see Figure 1). Various extended forms of rough sets can thus be obtained by using different research methods from different viewpoints, and the research on extended models and their applications has become a new research hotspot. The two types of models proposed in this paper are extensions of existing models: on the basis of existing work on rough sets based on the triangular norm, the new models are obtained by replacing the logical operator, and their theoretical properties and practical applications are studied.

1.5. Outline of the Present Paper

We review some preliminary concepts and results in Section 2. Next, in Section 3, we establish the FCFRS model based on the overlap function, study the basic properties of this model and give some examples. In Section 4, we establish two multi-granulation fuzzy rough set models, the FCOMGFRS model and the FCPMGFRS model based on the overlap function, and mainly study the properties of the FCOMGFRS model. In Section 5, we put forward the MCDM problem method based on the FCFRS model with the overlap function. In Section 6, we give a concrete example and provide a comparison analysis with other methods. In Section 7, we present the MCGDM method and verify the validity of the model with an example. We conclude our work and outline future research in Section 8.

2. Preliminary Concepts and Results

In this section, let us review some basic concepts of overlap functions, fuzzy set theory, fuzzy covering rough set theory and multi-granulation rough sets.

2.1. Overlap Function

Definition 1
([16]). For a mapping $T:[0,1]\times[0,1]\to[0,1]$, if T is commutative and associative, is increasing in each argument, and satisfies the boundary condition $T(1,x)=x$ for each $x\in[0,1]$, then T is called a t-norm.
Definition 2
([33]). A bivariate function $O:[0,1]^2\to[0,1]$ is called an overlap function if, for each $x,y\in[0,1]$, the following conditions hold:
($O1$) $O(x,y)=O(y,x)$;
($O2$) $O(x,y)=0$ if and only if $xy=0$;
($O3$) $O(x,y)=1$ if and only if $xy=1$;
($O4$) $O$ is increasing;
($O5$) $O$ is continuous.
Example 1.
(1) Every continuous t-norm with no nontrivial zero divisors is an overlap function.
(2) For each $p>0$, the function $O_p:[0,1]^2\to[0,1]$ defined by
$$O_p(x,y)=x^p y^p$$
is an overlap function. In particular, when $p=1$ it is both an overlap function and a t-norm.
(3) The function $O_m:[0,1]^2\to[0,1]$ defined by
$$O_m(x,y)=\min\{x,y\}$$
is an overlap function.
(4) The function $O_{Mid}:[0,1]^2\to[0,1]$ defined by
$$O_{Mid}(x,y)=xy\,\frac{x+y}{2}$$
is an overlap function.
(5) The function $O_{mM}:[0,1]^2\to[0,1]$ defined by
$$O_{mM}(x,y)=\min\{x,y\}\max\{x^2,y^2\}$$
is an overlap function.
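These candidate functions can be written down directly and spot-checked numerically. The sketch below is ours (the function names mirror the notation of this example); it verifies the axioms ($O1$)–($O3$) on a finite grid, while ($O4$) and continuity ($O5$) are not testable this way.

```python
# Spot-checking Example 1: each candidate is symmetric (O1), vanishes exactly
# when xy = 0 (O2), and equals 1 exactly when xy = 1 (O3).

def O_p(x, y, p=2):
    # O_p(x, y) = x^p * y^p
    return (x ** p) * (y ** p)

def O_m(x, y):
    # O_m(x, y) = min{x, y}
    return min(x, y)

def O_mid(x, y):
    # O_Mid(x, y) = x * y * (x + y) / 2
    return x * y * (x + y) / 2

def O_mM(x, y):
    # O_mM(x, y) = min{x, y} * max{x^2, y^2}
    return min(x, y) * max(x * x, y * y)

def spot_check(O):
    grid = [0.0, 0.25, 0.5, 0.75, 1.0]
    for x in grid:
        for y in grid:
            assert abs(O(x, y) - O(y, x)) < 1e-12      # (O1) symmetry
            assert (O(x, y) == 0) == (x * y == 0)      # (O2) zero iff xy = 0
            assert (O(x, y) == 1) == (x * y == 1)      # (O3) one iff xy = 1
    return True

assert all(spot_check(O) for O in (O_p, O_m, O_mid, O_mM))
```

Note that only $O_p$ with $p=1$ and $O_m$ are associative; the others illustrate that an overlap function need not be a t-norm.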
Definition 3
([16]). For a mapping $\varphi:[0,1]\times[0,1]\to[0,1]$, if $\varphi$ satisfies $\varphi(0,0)=\varphi(0,1)=\varphi(1,1)=1$ and $\varphi(1,0)=0$, then $\varphi$ is a fuzzy implication operator. For each $x,y,z\in[0,1]$: if $x\le y$ implies $\varphi(x,z)\ge\varphi(y,z)$, then $\varphi$ is left monotonically decreasing; if $y\le z$ implies $\varphi(x,y)\le\varphi(x,z)$, then $\varphi$ is right monotonically increasing. When $\varphi$ is both left and right monotonic, $\varphi$ is called hybrid monotonic.
In the following, we show the definition of residual implication derived from the overlap function and some related examples.
Definition 4
([33]). Suppose $O:[0,1]^2\to[0,1]$ is an overlap function; then the bivariate function $R_O:[0,1]^2\to[0,1]$ defined, for each $x,y\in[0,1]$, by
$$R_O(x,y)=\max\{z\in[0,1]\mid O(x,z)\le y\}$$
is said to be the residual implication induced by the overlap function O.
Example 2.
Some residual implications derived from overlap functions:
(1) Let $O_2(x,y)=x^2y^2$; the residual implication derived from the overlap function $O_2$ is defined by
$$R_{O_2}(x,y)=\begin{cases}1, & x^2\le y,\\[2pt] \dfrac{\sqrt{y}}{x}, & x^2>y.\end{cases}$$
(2) Let $O_m(x,y)=\min\{\sqrt{x},\sqrt{y}\}$; the residual implication derived from the overlap function $O_m$ is defined by
$$R_{O_m}(x,y)=\begin{cases}1, & x\le y^2,\\[2pt] y^2, & x>y^2.\end{cases}$$
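A closed form such as the one in (1) can be checked against the defining maximum of Definition 4 by brute force on a grid. The following sketch is ours and assumes only Definition 4 and the formula $O_2(x,y)=x^2y^2$:

```python
# Approximate R_O(x, y) = max{ z in [0,1] : O(x, z) <= y } on a grid and
# compare it with the closed-form residual implication for O_2(x, y) = x^2 y^2.

def residual(O, x, y, steps=20000):
    """Grid approximation of max{z in [0,1] : O(x, z) <= y}."""
    best = 0.0
    for k in range(steps + 1):
        z = k / steps
        if O(x, z) <= y + 1e-12:
            best = z
    return best

def O2(x, y):
    return (x * x) * (y * y)

def R_O2(x, y):
    # 1 if x^2 <= y, else sqrt(y)/x
    return 1.0 if x * x <= y else (y ** 0.5) / x

for x, y in [(0.3, 0.5), (0.9, 0.2), (0.6, 0.6)]:
    assert abs(residual(O2, x, y) - R_O2(x, y)) < 1e-3
```

Because $O_2$ is increasing and continuous in $z$, the grid maximum converges to the true residual as the step size shrinks.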

2.2. Fuzzy Sets Theory and Fuzzy Covering Rough Theory

Definition 5
([10]). A fuzzy set (or fuzzy subset) A on a universe of discourse X is a mapping from X to $[0,1]$ (called the membership function):
$$\mu_A: X\to[0,1].$$
For every $x\in X$, $\mu_A(x)$ is called the membership degree of x with respect to A.
Now we proceed to review some basic knowledge of covering approximation spaces and neighborhoods:
Definition 6
([10]). Let U be a universe and $\mathbf{C}$ a family of subsets of U. If no element of $\mathbf{C}$ is empty and $\bigcup_{A\in\mathbf{C}}A=U$, then $\mathbf{C}$ is called a covering of U, and the pair $(U,\mathbf{C})$ is called a covering approximation space.
For each $x\in U$, the neighborhood of x is defined as:
$$N_x=\bigcap\{A\in\mathbf{C}: x\in A\}.$$
Definition 7
([34]). Let $(U,\mathbf{C})$ be a covering approximation space and $X\subseteq U$. The lower approximation $P_-(X)$ and the upper approximation $P_+(X)$ of X are defined by:
$$P_-(X)=\{x\in U: N_x\subseteq X\},\qquad P_+(X)=\{x\in U: N_x\cap X\neq\emptyset\}.$$
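On a toy covering of our own devising (not data from the paper), Definitions 6 and 7 can be computed directly:

```python
# Toy illustration of Definitions 6 and 7: N_x is the intersection of all
# blocks of the covering containing x; the lower/upper approximations test
# inclusion and overlap of N_x with X. The data below are hypothetical.

U = {1, 2, 3, 4}
C = [{1, 2}, {2, 3, 4}, {1, 4}]   # a covering of U: the blocks union to U

def N(x):
    out = set(U)
    for A in C:
        if x in A:
            out &= A
    return out

def lower(X):
    return {x for x in U if N(x) <= X}      # N_x subset of X

def upper(X):
    return {x for x in U if N(x) & X}       # N_x meets X

X = {2, 3}
assert N(3) == {2, 3, 4}                    # x = 3 lies only in {2, 3, 4}
assert lower(X) == {2} and upper(X) == {2, 3}
assert lower(X) <= X <= upper(X)            # the usual sandwich property
```

The final assertion illustrates the characteristic rough set inequality: the lower approximation is contained in X, which is contained in the upper approximation.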
Definition 8
([15]). Let U be a universe. A fuzzy subset $\tilde A$ of U is a function assigning to each x of U a value $\tilde A(x)\in[0,1]$. We denote by $F(U)$ the set of all fuzzy subsets of U.
For any $\tilde A,\tilde B\in F(U)$, we say that $\tilde A$ is contained in $\tilde B$, denoted $\tilde A\subseteq\tilde B$, if $\tilde A(x)\le\tilde B(x)$ for all $x\in U$; and $\tilde A=\tilde B$ iff $\tilde A\subseteq\tilde B$ and $\tilde B\subseteq\tilde A$. For all $x\in U$, the union of $\tilde A$ and $\tilde B$, denoted $\tilde A\cup\tilde B$, is defined by $(\tilde A\cup\tilde B)(x)=\tilde A(x)\vee\tilde B(x)$; the intersection of $\tilde A$ and $\tilde B$, denoted $\tilde A\cap\tilde B$, is defined by $(\tilde A\cap\tilde B)(x)=\tilde A(x)\wedge\tilde B(x)$.
The following defines the fuzzy β -covering and the fuzzy β -neighborhood.
Definition 9
([24]). Let U be a universe and $F(U)$ the fuzzy power set of U. For each $\beta\in(0,1]$, let $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ with $\tilde C_i\in F(U)$ ($i=1,2,\dots,n$) satisfy $\big(\bigcup_{i=1}^n\tilde C_i\big)(x)\ge\beta$ for every $x\in U$. Then $\tilde{\mathbf{C}}$ is a fuzzy β-covering of U, and the pair $(U,\tilde{\mathbf{C}})$ is a fuzzy β-covering approximation space (FCAS).
Definition 10
([23]). Suppose $(U,\tilde{\mathbf{C}})$ is an FCAS; then for each $x\in U$ the fuzzy β-neighborhood $\tilde N_x^\beta$ of x is:
$$\tilde N_x^\beta=\bigcap\{\tilde C_i\in\tilde{\mathbf{C}}: \tilde C_i(x)\ge\beta\}.$$
Proposition 1
([24]). Suppose $(U,\tilde{\mathbf{C}})$ is an FCAS; then the following hold:
(1) For each $x\in U$, $\tilde N_x^\beta(x)\ge\beta$;
(2) For all $x,y,z\in U$, if $\tilde N_x^\beta(y)\ge\beta$ and $\tilde N_y^\beta(z)\ge\beta$, then $\tilde N_x^\beta(z)\ge\beta$;
(3) For each $\beta\in(0,1]$, $\tilde N_x^\beta\subseteq\tilde C_i$ for every $\tilde C_i$ with $\tilde C_i(x)\ge\beta$, $x\in U$, where $i=1,2,\dots,n$;
(4) If $0<\beta_1\le\beta_2\le1$, then $\tilde N_x^{\beta_1}\subseteq\tilde N_x^{\beta_2}$ for all $x\in U$.
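Definition 10 and Proposition 1(1) can be illustrated with hypothetical membership values (the data below are ours, not Table 1 of the paper):

```python
# Fuzzy β-neighborhood (Definition 10): the pointwise minimum of those fuzzy
# sets C_i whose membership at x is at least β. All values are hypothetical.

U = ["x1", "x2", "x3"]
C = [
    {"x1": 0.7, "x2": 0.4, "x3": 0.6},
    {"x1": 0.5, "x2": 0.8, "x3": 0.3},
    {"x1": 0.2, "x2": 0.6, "x3": 0.9},
]

def neighborhood(x, beta):
    picked = [Ci for Ci in C if Ci[x] >= beta]
    return {y: min(Ci[y] for Ci in picked) for y in U}

beta = 0.5
# C is a fuzzy β-covering for β = 0.5: every point's maximal membership >= β
assert all(max(Ci[x] for Ci in C) >= beta for x in U)

N1 = neighborhood("x1", beta)   # intersects C_1 and C_2 only
assert N1 == {"x1": 0.5, "x2": 0.4, "x3": 0.3}
# Proposition 1(1): N_x^β(x) >= β for every x
assert all(neighborhood(x, beta)[x] >= beta for x in U)
```

Raising β to 0.6 here would drop $C_2$ from the intersection at $x_1$, enlarging the neighborhood, which is the monotonicity stated in Proposition 1(4).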
The following defines two rough set models on the basis of the fuzzy β-covering and the fuzzy β-neighborhood.
Definition 11
([27]). Suppose $(U,\tilde{\mathbf{C}})$ is an FCAS and, for $0<\beta\le1$, $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ is a fuzzy β-covering of U. For each $A\in F(U)$, the lower and upper approximation operators $\tilde P_-(A)$ and $\tilde P_+(A)$ of A are defined by:
$$\tilde P_-(A)(x)=\bigwedge_{y\in U}\big((1-\tilde N_x^\beta(y))\vee A(y)\big),$$
and
$$\tilde P_+(A)(x)=\bigvee_{y\in U}\big(\tilde N_x^\beta(y)\wedge A(y)\big).$$
If $\tilde P_-(A)\neq\tilde P_+(A)$, then A is called a fuzzy β-covering rough set (FCRS).
Definition 12
([31]). Suppose $(U,\tilde{\mathbf{C}})$ is an FCAS and, for $0<\beta\le1$, $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ is a fuzzy β-covering of U. Then the fuzzy β-covering lower and upper approximation operators $\tilde C_-(A)$ and $\tilde C_+(A)$ of $A\in F(U)$ are respectively constructed as follows: for each $x\in U$,
$$\tilde C_-(A)(x)=\bigwedge_{y\in U}\varphi(\tilde N_x^\beta(y),A(y)),$$
and
$$\tilde C_+(A)(x)=\bigvee_{y\in U}T(\tilde N_x^\beta(y),A(y)),$$
where T is a t-norm and $\varphi$ is an implicator derived from the t-norm T.
If $\tilde C_-(A)\neq\tilde C_+(A)$, we say that A is a fuzzy β-covering based $(\varphi,T)$-fuzzy rough set. If $\tilde C_-(A)=\tilde C_+(A)$, we say that A is a definable fuzzy β-covering based $(\varphi,T)$-fuzzy rough set.
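A minimal sketch of the ∧/∨ model of Definition 11 on hypothetical data follows; Definition 12 is obtained by swapping the Kleene-Dienes operators used below for a general implicator $\varphi$ and a t-norm T. Neighborhood values and A are ours, not from the paper.

```python
# Definition 11 on hypothetical data: the lower approximation aggregates
# (1 - N(y)) ∨ A(y) by inf over U, the upper aggregates N(y) ∧ A(y) by sup.

U = ["x1", "x2", "x3"]
N = {   # hypothetical fuzzy β-neighborhoods N_x^β
    "x1": {"x1": 0.5, "x2": 0.4, "x3": 0.3},
    "x2": {"x1": 0.2, "x2": 0.6, "x3": 0.3},
    "x3": {"x1": 0.2, "x2": 0.4, "x3": 0.6},
}
A = {"x1": 0.7, "x2": 0.2, "x3": 0.5}

def lower(A):
    return {x: min(max(1 - N[x][y], A[y]) for y in U) for x in U}

def upper(A):
    return {x: max(min(N[x][y], A[y]) for y in U) for x in U}

assert abs(lower(A)["x1"] - 0.6) < 1e-9   # infimum attained at y = x2
assert abs(upper(A)["x1"] - 0.5) < 1e-9   # supremum attained at y = x1
```

Note that, unlike the crisp case, the fuzzy lower approximation need not lie below the upper approximation pointwise for arbitrary neighborhoods, which is exactly why Definition 11 names A an FCRS only when the two operators differ.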

2.3. Multi-Granulation Rough Sets

In the classical rough set model, a set X on the universe U is represented approximately by knowledge granules derived from a single equivalence relation, and the concept is described by its lower and upper approximations. Based on a multi-granulation structure, the classical single-granulation rough set is extended to the multi-granulation rough set, and the optimistic and pessimistic multi-granulation rough sets are then defined.
Formally, an information system can be defined as a pair $S=(U,AT)$, where U is the set of all objects, called the universe, and AT is the set of all attributes.
Definition 13
([33,34]). Suppose $S=(U,AT)$ is a knowledge system and let $A_1,A_2,\dots,A_m\subseteq AT$ be m attribute sets in AT. For each $X\subseteq U$, its optimistic lower and upper approximations with respect to $A_1,A_2,\dots,A_m$ are defined as follows:
$$\underline{\Big(\sum_{i=1}^m A_i\Big)}^{o}(X)=\big\{x\in U: [x]_{A_1}\subseteq X\ \vee\ [x]_{A_2}\subseteq X\ \vee\ \cdots\ \vee\ [x]_{A_m}\subseteq X\big\},$$
$$\overline{\Big(\sum_{i=1}^m A_i\Big)}^{o}(X)=\;\sim\underline{\Big(\sum_{i=1}^m A_i\Big)}^{o}(\sim X),$$
where $\sim X$ is the complement of X, and $\big(\underline{(\sum_{i=1}^m A_i)}^{o},\overline{(\sum_{i=1}^m A_i)}^{o}\big)$ is called the optimistic multi-granulation rough set of X with respect to the attributes $A_1,A_2,\dots,A_m$.
Proposition 2
([33,34]). Suppose $S=(U,AT)$ is a knowledge system and let $A_1,A_2,\dots,A_m\subseteq AT$ be m attribute sets in AT. For each $X\subseteq U$, the optimistic upper approximation with respect to $A_1,A_2,\dots,A_m$ satisfies:
$$\overline{\Big(\sum_{i=1}^m A_i\Big)}^{o}(X)=\big\{x\in U: [x]_{A_1}\cap X\neq\emptyset\ \wedge\ [x]_{A_2}\cap X\neq\emptyset\ \wedge\ \cdots\ \wedge\ [x]_{A_m}\cap X\neq\emptyset\big\}.$$
Definition 14
([33,34]). Suppose $S=(U,AT)$ is a knowledge system and let $A_1,A_2,\dots,A_m\subseteq AT$ be m attribute sets in AT. For each $X\subseteq U$, its pessimistic lower and upper approximations with respect to $A_1,A_2,\dots,A_m$ are defined as follows:
$$\underline{\Big(\sum_{i=1}^m A_i\Big)}^{p}(X)=\big\{x\in U: [x]_{A_1}\subseteq X\ \wedge\ [x]_{A_2}\subseteq X\ \wedge\ \cdots\ \wedge\ [x]_{A_m}\subseteq X\big\},$$
$$\overline{\Big(\sum_{i=1}^m A_i\Big)}^{p}(X)=\;\sim\underline{\Big(\sum_{i=1}^m A_i\Big)}^{p}(\sim X),$$
where $\sim X$ is the complement of X, and $\big(\underline{(\sum_{i=1}^m A_i)}^{p},\overline{(\sum_{i=1}^m A_i)}^{p}\big)$ is called the pessimistic multi-granulation rough set of X with respect to the attributes $A_1,A_2,\dots,A_m$.
Proposition 3
([33,34]). Suppose $S=(U,AT)$ is a knowledge system and let $A_1,A_2,\dots,A_m\subseteq AT$ be m attribute sets in AT. For each $X\subseteq U$, the pessimistic upper approximation with respect to $A_1,A_2,\dots,A_m$ satisfies:
$$\overline{\Big(\sum_{i=1}^m A_i\Big)}^{p}(X)=\big\{x\in U: [x]_{A_1}\cap X\neq\emptyset\ \vee\ [x]_{A_2}\cap X\neq\emptyset\ \vee\ \cdots\ \vee\ [x]_{A_m}\cap X\neq\emptyset\big\}.$$
Definition 15
([35]). Suppose $S=(U,AT)$ is a knowledge system and let $A_1,A_2,\dots,A_m\subseteq AT$ be m attribute sets in AT. For each $X\in F(U)$, its optimistic lower and upper approximations with respect to $A_1,A_2,\dots,A_m$ are defined as follows: for each $x\in U$,
$$\underline{\Big(\sum_{i=1}^m A_i\Big)}^{o}(X)(x)=\bigvee_{i=1}^m\bigwedge\{X(y): y\in[x]_{A_i}\},$$
$$\overline{\Big(\sum_{i=1}^m A_i\Big)}^{o}(X)(x)=\bigwedge_{i=1}^m\bigvee\{X(y): y\in[x]_{A_i}\}.$$
If $\underline{(\sum_{i=1}^m A_i)}^{o}(X)\neq\overline{(\sum_{i=1}^m A_i)}^{o}(X)$, then X is called an optimistic multi-granulation fuzzy rough set (OMGFRS).
Definition 16
([32]). Let $S=(U,\tilde{\mathbf{C}},AT)$ be a fuzzy covering knowledge system. For $0<\beta\le1$, assume that $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ is a fuzzy β-covering of U. For each $\hat X\in F(U)$, the fuzzy β-covering optimistic multi-granulation fuzzy rough lower and upper approximations are defined as follows: for each $x\in U$,
$$P_-^{o}\sum_{i=1}^m(A_i)(\hat X)(x)=\bigvee_{i=1}^m\bigwedge_{y\in U}\big\{(1-(\tilde N_x^\beta)_i(y))\vee\hat X(y)\big\},$$
$$P_+^{o}\sum_{i=1}^m(A_i)(\hat X)(x)=\bigwedge_{i=1}^m\bigvee_{y\in U}\big\{(\tilde N_x^\beta)_i(y)\wedge\hat X(y)\big\}.$$
If $P_-^{o}\sum_{i=1}^m(A_i)(\hat X)\neq P_+^{o}\sum_{i=1}^m(A_i)(\hat X)$, then $\hat X$ is called a fuzzy β-covering based optimistic multi-granulation fuzzy rough set (FCOMGFRS).

3. FCFRS Based on Overlap Function

In this section, we propose a new kind of rough set model, namely the fuzzy β-covering fuzzy rough set (FCFRS) model based on the overlap function, and study its properties.
Definition 17.
Let $(U,\tilde{\mathbf{C}})$ be an FCAS and $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ a fuzzy β-covering of U for some $\beta\in(0,1]$. For each $A\in F(U)$, the lower and upper approximations $\tilde C_{\tilde N_x^\beta}^-(A)$ and $\tilde C_{\tilde N_x^\beta}^+(A)$ of A are respectively defined by: for each $x\in U$,
$$\tilde C_{\tilde N_x^\beta}^-(A)(x)=\inf_{y\in U}R_O(\tilde N_x^\beta(y),A(y)),$$
$$\tilde C_{\tilde N_x^\beta}^+(A)(x)=\sup_{y\in U}O(\tilde N_x^\beta(y),A(y)).$$
If $\tilde C_{\tilde N_x^\beta}^-(A)\neq\tilde C_{\tilde N_x^\beta}^+(A)$, we say that A is an FCFRS based on the overlap function O and its residual implication $R_O$. If $\tilde C_{\tilde N_x^\beta}^-(A)=\tilde C_{\tilde N_x^\beta}^+(A)$, we say that A is a definable FCFRS based on the overlap function O and its residual implication $R_O$.
Next, we give an example to illustrate Definition 17.
Example 3.
Let $U=\{x_1,x_2,x_3,x_4,x_5,x_6\}$ and let $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\tilde C_3,\tilde C_4,\tilde C_5\}$ be a family of fuzzy sets on U shown in Table 1.
Then $\tilde{\mathbf{C}}$ is a fuzzy β-covering of U. Let β = 0.5 and calculate the fuzzy β-neighborhood of each $x\in U$, shown in Table 2.
Let $A=\frac{0.5}{x_1}+\frac{0.3}{x_2}+\frac{0.7}{x_3}+\frac{0.6}{x_4}+\frac{0.2}{x_5}+\frac{0.3}{x_6}$. Suppose that $O(x,y)=\min\{\sqrt{x},\sqrt{y}\}$ and $R_O(x,y)=\begin{cases}1,&x\le y^2\\ y^2,&x>y^2\end{cases}$. By Definition 17:
$$\tilde C_{\tilde N_x^\beta}^-(A)=\frac{0.04}{x_1}+\frac{0.04}{x_2}+\frac{0.04}{x_3}+\frac{0.04}{x_4}+\frac{0.04}{x_5}+\frac{0.04}{x_6},$$
$$\tilde C_{\tilde N_x^\beta}^+(A)=\frac{0.71}{x_1}+\frac{0.63}{x_2}+\frac{0.71}{x_3}+\frac{0.71}{x_4}+\frac{0.84}{x_5}+\frac{0.71}{x_6}.$$
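Since Tables 1 and 2 are not reproduced here, the sketch below instead instantiates Definition 17 with the overlap function $O_2(x,y)=x^2y^2$ and its residual implication from Example 2, on neighborhood values of our own choosing; it shows the mechanics of the two operators rather than the table-specific numbers above.

```python
# Definition 17 with O_2(x, y) = x^2 y^2 and R_{O_2}(x, y) = 1 if x^2 <= y,
# else sqrt(y)/x. Neighborhoods N and the fuzzy set A are hypothetical.

U = ["x1", "x2", "x3"]
N = {
    "x1": {"x1": 0.9, "x2": 0.8, "x3": 0.6},
    "x2": {"x1": 0.3, "x2": 0.9, "x3": 0.5},
    "x3": {"x1": 0.4, "x2": 0.6, "x3": 0.9},
}
A = {"x1": 0.7, "x2": 0.2, "x3": 0.5}

def O(x, y):
    return (x * x) * (y * y)

def R_O(x, y):
    return 1.0 if x * x <= y else (y ** 0.5) / x

def lower(A):
    return {x: min(R_O(N[x][y], A[y]) for y in U) for x in U}

def upper(A):
    return {x: max(O(N[x][y], A[y]) for y in U) for x in U}

# e.g. at x1 the infimum comes from y = x2: R_O(0.8, 0.2) = sqrt(0.2)/0.8
assert abs(lower(A)["x1"] - (0.2 ** 0.5) / 0.8) < 1e-9
# and the supremum from y = x1: O(0.9, 0.7) = 0.81 * 0.49
assert abs(upper(A)["x1"] - 0.81 * 0.49) < 1e-9
```

Swapping in any other overlap function and its residual implication changes only the two one-line operator definitions.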
Now let us explore some properties of the fuzzy β -covering fuzzy rough set based on the overlap function:
Proposition 4.
Suppose that $(U,\tilde{\mathbf{C}})$ is an FCAS and $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ is a fuzzy β-covering of U for some $\beta\in(0,1]$. For all $A,B\in F(U)$ and every family $\{A_i\}_{i\in I}\subseteq F(U)$, the model satisfies the following properties:
(i) $\tilde C_{\tilde N_x^\beta}^-(A)\subseteq A\subseteq\tilde C_{\tilde N_x^\beta}^+(A)$;
(ii) $\tilde C_{\tilde N_x^\beta}^-(U)=U$;
(iii) $\tilde C_{\tilde N_x^\beta}^+(\emptyset)=\emptyset$;
(iv) if $A\subseteq B$, then $\tilde C_{\tilde N_x^\beta}^-(A)\subseteq\tilde C_{\tilde N_x^\beta}^-(B)$;
(v) if $A\subseteq B$, then $\tilde C_{\tilde N_x^\beta}^+(A)\subseteq\tilde C_{\tilde N_x^\beta}^+(B)$;
(vi) when $R_O$ is continuous and right monotonic, $\tilde C_{\tilde N_x^\beta}^-\big(\bigcup_{i\in I}A_i\big)\supseteq\bigcup_{i\in I}\tilde C_{\tilde N_x^\beta}^-(A_i)$ and $\tilde C_{\tilde N_x^\beta}^-\big(\bigcap_{i\in I}A_i\big)=\bigcap_{i\in I}\tilde C_{\tilde N_x^\beta}^-(A_i)$;
(vii) when O is continuous and monotonic, $\tilde C_{\tilde N_x^\beta}^+\big(\bigcup_{i\in I}A_i\big)=\bigcup_{i\in I}\tilde C_{\tilde N_x^\beta}^+(A_i)$ and $\tilde C_{\tilde N_x^\beta}^+\big(\bigcap_{i\in I}A_i\big)\subseteq\bigcap_{i\in I}\tilde C_{\tilde N_x^\beta}^+(A_i)$.
Proof. 
(i) This holds immediately from the definitions.
(ii) By the definition of residual implication, $R_O(x,1)=1$. For each $x\in U$,
$$\tilde C_{\tilde N_x^\beta}^-(U)(x)=\inf_{y\in U}R_O(\tilde N_x^\beta(y),U(y))=\inf_{y\in U}R_O(\tilde N_x^\beta(y),1)=1=U(x).$$
(iii) Since $O(x,0)=0$, for each $x\in U$,
$$\tilde C_{\tilde N_x^\beta}^+(\emptyset)(x)=\sup_{y\in U}O(\tilde N_x^\beta(y),\emptyset(y))=\sup_{y\in U}O(\tilde N_x^\beta(y),0)=0=\emptyset(x).$$
(iv) If $A\subseteq B$, then for each $x\in U$:
$$\tilde C_{\tilde N_x^\beta}^-(A)(x)=\inf_{y\in U}R_O(\tilde N_x^\beta(y),A(y))\le\inf_{y\in U}R_O(\tilde N_x^\beta(y),B(y))=\tilde C_{\tilde N_x^\beta}^-(B)(x).$$
(v) If $A\subseteq B$, then
$$\tilde C_{\tilde N_x^\beta}^+(A)(x)=\sup_{y\in U}O(\tilde N_x^\beta(y),A(y))\le\sup_{y\in U}O(\tilde N_x^\beta(y),B(y))=\tilde C_{\tilde N_x^\beta}^+(B)(x).$$
(vi) Because $R_O$ is continuous and right monotonic, for each $x\in U$ we have
$$\tilde C_{\tilde N_x^\beta}^-\Big(\bigcup_{i\in I}A_i\Big)(x)=\inf_{y\in U}R_O\Big(\tilde N_x^\beta(y),\sup_{i\in I}A_i(y)\Big)=\inf_{y\in U}\sup_{i\in I}R_O(\tilde N_x^\beta(y),A_i(y))\ge\sup_{i\in I}\inf_{y\in U}R_O(\tilde N_x^\beta(y),A_i(y))=\Big(\bigcup_{i\in I}\tilde C_{\tilde N_x^\beta}^-(A_i)\Big)(x),$$
and
$$\tilde C_{\tilde N_x^\beta}^-\Big(\bigcap_{i\in I}A_i\Big)(x)=\inf_{y\in U}R_O\Big(\tilde N_x^\beta(y),\inf_{i\in I}A_i(y)\Big)=\inf_{y\in U}\inf_{i\in I}R_O(\tilde N_x^\beta(y),A_i(y))=\inf_{i\in I}\inf_{y\in U}R_O(\tilde N_x^\beta(y),A_i(y))=\Big(\bigcap_{i\in I}\tilde C_{\tilde N_x^\beta}^-(A_i)\Big)(x).$$
Hence $\tilde C_{\tilde N_x^\beta}^-(\bigcup_{i\in I}A_i)\supseteq\bigcup_{i\in I}\tilde C_{\tilde N_x^\beta}^-(A_i)$ and $\tilde C_{\tilde N_x^\beta}^-(\bigcap_{i\in I}A_i)=\bigcap_{i\in I}\tilde C_{\tilde N_x^\beta}^-(A_i)$.
(vii) When O is continuous and monotonic, for each $x\in U$ we have
$$\tilde C_{\tilde N_x^\beta}^+\Big(\bigcup_{i\in I}A_i\Big)(x)=\sup_{y\in U}O\Big(\tilde N_x^\beta(y),\sup_{i\in I}A_i(y)\Big)=\sup_{y\in U}\sup_{i\in I}O(\tilde N_x^\beta(y),A_i(y))=\sup_{i\in I}\sup_{y\in U}O(\tilde N_x^\beta(y),A_i(y))=\Big(\bigcup_{i\in I}\tilde C_{\tilde N_x^\beta}^+(A_i)\Big)(x),$$
and
$$\tilde C_{\tilde N_x^\beta}^+\Big(\bigcap_{i\in I}A_i\Big)(x)=\sup_{y\in U}O\Big(\tilde N_x^\beta(y),\inf_{i\in I}A_i(y)\Big)=\sup_{y\in U}\inf_{i\in I}O(\tilde N_x^\beta(y),A_i(y))\le\inf_{i\in I}\sup_{y\in U}O(\tilde N_x^\beta(y),A_i(y))=\Big(\bigcap_{i\in I}\tilde C_{\tilde N_x^\beta}^+(A_i)\Big)(x).$$
Hence $\tilde C_{\tilde N_x^\beta}^+(\bigcup_{i\in I}A_i)=\bigcup_{i\in I}\tilde C_{\tilde N_x^\beta}^+(A_i)$ and $\tilde C_{\tilde N_x^\beta}^+(\bigcap_{i\in I}A_i)\subseteq\bigcap_{i\in I}\tilde C_{\tilde N_x^\beta}^+(A_i)$. □
In particular, we find that idempotence does not hold for the fuzzy β-covering fuzzy rough set model based on the overlap function, i.e., in general $\tilde C_{\tilde N_x^\beta}^-(\tilde C_{\tilde N_x^\beta}^-(A))\neq\tilde C_{\tilde N_x^\beta}^-(A)$ and $\tilde C_{\tilde N_x^\beta}^+(\tilde C_{\tilde N_x^\beta}^+(A))\neq\tilde C_{\tilde N_x^\beta}^+(A)$. We show this with a concrete example as follows:
Example 4
(Continuing Example 3). Let $U=\{x_1,x_2,x_3,x_4,x_5,x_6\}$ and let $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\tilde C_3,\tilde C_4,\tilde C_5\}$ be the family of fuzzy sets on U from Example 3. By calculation, we have:
$$\tilde C_{\tilde N_x^\beta}^+(A)=\frac{0.71}{x_1}+\frac{0.63}{x_2}+\frac{0.71}{x_3}+\frac{0.71}{x_4}+\frac{0.84}{x_5}+\frac{0.71}{x_6},$$
$$\tilde C_{\tilde N_x^\beta}^+(\tilde C_{\tilde N_x^\beta}^+(A))=\frac{0.77}{x_1}+\frac{0.77}{x_2}+\frac{0.71}{x_3}+\frac{0.71}{x_4}+\frac{0.84}{x_5}+\frac{0.71}{x_6}.$$
Obviously, $\tilde C_{\tilde N_x^\beta}^+(A)\neq\tilde C_{\tilde N_x^\beta}^+(\tilde C_{\tilde N_x^\beta}^+(A))$; in a similar way, one finds $\tilde C_{\tilde N_x^\beta}^-(A)\neq\tilde C_{\tilde N_x^\beta}^-(\tilde C_{\tilde N_x^\beta}^-(A))$. Thus, idempotence does not hold.

4. FCMGFRSs Based on Overlap Function

In this section, we introduce a fuzzy β-covering based optimistic multi-granulation fuzzy rough set based on the overlap function (briefly, FCOMGFRS based on the overlap function) and a fuzzy β-covering based pessimistic multi-granulation fuzzy rough set based on the overlap function (briefly, FCPMGFRS based on the overlap function), and study some properties of the FCOMGFRS model.
Definition 18.
Let $(U,\tilde{\mathbf{C}})$ be an FCAS and $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ a fuzzy β-covering of U for some $\beta\in(0,1]$. For each $A\in F(U)$, the lower and upper approximations $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)$ and $C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)$ of A are respectively defined by: for each $x\in U$,
$$C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)=\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),A(y)),$$
$$C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)=\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),A(y)).$$
If $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\neq C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)$, then A is called an FCOMGFRS based on the overlap function.
Definition 19.
Let $(U,\tilde{\mathbf{C}})$ be an FCAS and $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ a fuzzy β-covering of U for some $\beta\in(0,1]$. For each $A\in F(U)$, the lower and upper approximations $C_-^p\sum_{i=1}^m(\tilde N_x^\beta)_i(A)$ and $C_+^p\sum_{i=1}^m(\tilde N_x^\beta)_i(A)$ of A are respectively defined by: for each $x\in U$,
$$C_-^p\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)=\bigwedge_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),A(y)),$$
$$C_+^p\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)=\bigvee_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),A(y)).$$
If $C_-^p\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\neq C_+^p\sum_{i=1}^m(\tilde N_x^\beta)_i(A)$, then A is called an FCPMGFRS based on the overlap function.
Due to the similarity of the two models, in this paper we only discuss the FCOMGFRS model based on the overlap function. In the following, we use an example to illustrate the above concepts.
Example 5
(Continuing Example 3). $\tilde{\mathbf{C}}$ is a fuzzy β-covering of U; let β = 0.6 and calculate the fuzzy β-neighborhood of each $x\in U$, shown in Table 3.
It is not difficult to verify that both $\tilde N_{x_i}^{0.5}$ and $\tilde N_{x_i}^{0.6}$ are fuzzy β-neighborhoods of $x_i$. Therefore, we obtain the lower and upper approximations of the FCOMGFRS model based on the overlap function as follows:
$$C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)=\frac{0.04}{x_1}+\frac{0.04}{x_2}+\frac{0.04}{x_3}+\frac{0.04}{x_4}+\frac{0.04}{x_5}+\frac{0.04}{x_6};$$
$$C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)=\frac{0.71}{x_1}+\frac{0.63}{x_2}+\frac{0.71}{x_3}+\frac{0.71}{x_4}+\frac{0.84}{x_5}+\frac{0.71}{x_6}.$$
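The optimistic aggregation can be sketched in a few lines: the optimistic lower approximation takes the maximum of the single-granulation lower approximations, and the optimistic upper approximation takes the minimum of the uppers (the pessimistic model does the opposite). The granulations, A, and the operators $O(x,y)=\min\{\sqrt{x},\sqrt{y}\}$ with $R_O(x,y)=1$ if $x\le y^2$, else $y^2$, are assumptions of this sketch, not the paper's Table 3 data.

```python
# Optimistic multi-granulation approximations (Definition 18) on toy data:
# lower = max over granulations of the per-granulation lower approximation,
# upper = min over granulations of the per-granulation upper approximation.

U = ["x1", "x2"]
grans = [   # two hypothetical families of fuzzy β-neighborhoods
    {"x1": {"x1": 0.6, "x2": 0.3}, "x2": {"x1": 0.2, "x2": 0.7}},
    {"x1": {"x1": 0.5, "x2": 0.5}, "x2": {"x1": 0.4, "x2": 0.6}},
]
A = {"x1": 0.4, "x2": 0.8}

def O(x, y):
    return min(x ** 0.5, y ** 0.5)

def R_O(x, y):
    return 1.0 if x <= y * y else y * y

def single_lower(N):
    return {x: min(R_O(N[x][y], A[y]) for y in U) for x in U}

def single_upper(N):
    return {x: max(O(N[x][y], A[y]) for y in U) for x in U}

def opt_lower():
    return {x: max(single_lower(N)[x] for N in grans) for x in U}

def opt_upper():
    return {x: min(single_upper(N)[x] for N in grans) for x in U}

assert abs(opt_lower()["x1"] - 0.16) < 1e-9          # 0.4^2 from both granulations
assert abs(opt_upper()["x1"] - 0.4 ** 0.5) < 1e-9    # min(√0.4, √0.5) = √0.4
```

Changing `max` to `min` in `opt_lower` and `min` to `max` in `opt_upper` gives the pessimistic model of Definition 19.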
From the definition of the lower and upper approximation operators of the FCOMGFRS model based on the overlap function, it is possible to deduce the following properties.
Proposition 5.
Suppose that $(U,\tilde{\mathbf{C}})$ is an FCAS and $\tilde{\mathbf{C}}=\{\tilde C_1,\tilde C_2,\dots,\tilde C_n\}$ is a fuzzy β-covering of U for some $\beta\in(0,1]$. For all $A,B\in F(U)$, the model satisfies the following properties:
(i) $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\subseteq A\subseteq C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)$;
(ii) $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(U)=U$;
(iii) $C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(\emptyset)=\emptyset$;
(iv) if $A\subseteq B$, then $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\subseteq C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$;
(v) if $A\subseteq B$, then $C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\subseteq C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$;
(vi) when $R_O$ is continuous and right monotonic, $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cap B)\subseteq C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\cap C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$ and $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cup B)\supseteq C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\cup C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$;
(vii) when O is continuous and monotonic, $C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cup B)\supseteq C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\cup C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$ and $C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cap B)\subseteq C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\cap C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$.
Proof. 
(i) This holds immediately from the definitions.
(ii) By the definition of residual implication, $R_O(x,1)=1$. For each $x\in U$,
$$C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(U)(x)=\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),U(y))=\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),1)=1=U(x).$$
(iii) Since $O(x,0)=0$, for each $x\in U$,
$$C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(\emptyset)(x)=\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),\emptyset(y))=\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),0)=0=\emptyset(x).$$
(iv) If $A\subseteq B$, then for each $x\in U$:
$$C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)=\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),A(y))\le\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),B(y))=C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)(x).$$
(v) If $A\subseteq B$, then
$$C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)=\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),A(y))\le\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),B(y))=C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)(x).$$
(vi) Because $R_O$ is continuous and right monotonic, for each $x\in U$ we have
$$C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cap B)(x)=\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),(A\cap B)(y))=\bigvee_{i=1}^m\Big[\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),A(y))\wedge\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),B(y))\Big]$$
$$\le\Big[\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),A(y))\Big]\wedge\Big[\bigvee_{i=1}^m\inf_{y\in U}R_O((\tilde N_x^\beta)_i(y),B(y))\Big]=C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)\wedge C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)(x).$$
According to (iv), $C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cup B)\supseteq C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\cup C_-^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$ also holds.
(vii) Similarly,
$$C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cup B)(x)=\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),(A\cup B)(y))=\bigwedge_{i=1}^m\Big[\sup_{y\in U}O((\tilde N_x^\beta)_i(y),A(y))\vee\sup_{y\in U}O((\tilde N_x^\beta)_i(y),B(y))\Big]$$
$$\ge\Big[\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),A(y))\Big]\vee\Big[\bigwedge_{i=1}^m\sup_{y\in U}O((\tilde N_x^\beta)_i(y),B(y))\Big]=C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)(x)\vee C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)(x).$$
According to (v), $C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A\cap B)\subseteq C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(A)\cap C_+^o\sum_{i=1}^m(\tilde N_x^\beta)_i(B)$ also holds. □

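The boundary and monotonicity properties above can be checked numerically on a small finite universe. The following sketch is ours, not the paper's: it assumes the overlap function $O(x,y)=x^2y^2$ and its residual implication, and the neighborhood matrices and fuzzy sets are illustrative.

```python
import math

# Numerical check (ours) of Proposition 5, assuming O(x, y) = x^2 * y^2.
def O(x, y):
    return (x * y) ** 2

def R(x, y):
    # residual implication of O: R_O(x, y) = sup{z in [0,1] : x^2 z^2 <= y}
    return 1.0 if x * x <= y else math.sqrt(y) / x

def lower(neighborhoods, A):
    # optimistic model: join over granulations of inf_y R(N_i(x)(y), A(y))
    n = len(A)
    return [max(min(R(N[x][y], A[y]) for y in range(n)) for N in neighborhoods)
            for x in range(n)]

def upper(neighborhoods, A):
    # optimistic model: meet over granulations of sup_y O(N_i(x)(y), A(y))
    n = len(A)
    return [min(max(O(N[x][y], A[y]) for y in range(n)) for N in neighborhoods)
            for x in range(n)]

# illustrative fuzzy beta-neighborhood matrices for two granulations
N1 = [[0.8, 0.3, 0.2, 0.1], [0.3, 0.7, 0.4, 0.2],
      [0.2, 0.4, 0.9, 0.3], [0.1, 0.2, 0.3, 0.6]]
N2 = [[0.7, 0.2, 0.3, 0.4], [0.2, 0.8, 0.1, 0.3],
      [0.3, 0.1, 0.6, 0.2], [0.4, 0.3, 0.2, 0.9]]
A = [0.2, 0.5, 0.7, 0.4]
B = [0.3, 0.6, 0.8, 0.9]   # A is pointwise <= B

# (ii) and (iii): boundary conditions
assert lower([N1, N2], [1.0] * 4) == [1.0] * 4
assert upper([N1, N2], [0.0] * 4) == [0.0] * 4
# (iv) and (v): monotonicity
assert all(p <= q for p, q in zip(lower([N1, N2], A), lower([N1, N2], B)))
assert all(p <= q for p, q in zip(upper([N1, N2], A), upper([N1, N2], B)))
```

Swapping in a different overlap function and residual pair leaves the assertions valid, since the proof only uses monotonicity, $R_O(x,1)=1$ and $O(x,0)=0$.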
5. Method for Multi-Criteria Decision-Making (MCDM) Problem with Fuzzy Information

Combining multi-criteria decision-making (MCDM) problems with the fuzzy $\beta$-covering fuzzy rough set model based on the overlap function, in this section we propose a new class of problem-solving methods inspired by Zhang's article [23].

5.1. Background Description

Let $U=\{x_{i} : i=1,2,\ldots,n\}$ be a domain of discourse and $\tilde{C}=\{\tilde{C}_{j} : j=1,2,\ldots,m\}$ be a set of criteria, where $\tilde{C}_{j}(x_{i})$ represents the value of $x_{i}$ under criterion $\tilde{C}_{j}$. Let $\beta\in(0,1]$, and let $\tilde{C}$ be a fuzzy $\beta$-covering of $U$.
In the standard fuzzy TOPSIS method, for a fuzzy set $A$ we first obtain the fuzzy positive and negative ideal solutions $A^{+}$ and $A^{-}$ based on the set of criteria $\{\tilde{C}_{j} : j=1,2,\ldots,m\}$. Then, we calculate the positive and negative ideal fuzzy distances $d_{i}^{+}$ and $d_{i}^{-}$ of each $x_{i}$. Finally, we calculate their closeness coefficients and rank all alternatives $x_{i}$ ($i=1,2,\ldots,n$).

5.2. Decision-Making Method

Now we are prepared to expound the decision-making method, which rests on the fuzzy $\beta$-covering fuzzy rough set model induced by the overlap function.
Firstly, we obtain the decision-making matrix A with fuzzy information, which is formally expressed as follows:
$$A=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m}\\ a_{21} & a_{22} & \cdots & a_{2m}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}$$
where $a_{ij}$ is the decision-maker's evaluation value of $x_{i}$ for criterion $\tilde{C}_{j}$, i.e., $\tilde{C}_{j}(x_{i})=a_{ij}$.
For the criteria weights, let $W=\{w_{1},w_{2},\ldots,w_{m}\}$ with $w_{j}\in[0,1]$ for $j=1,2,\ldots,m$ and $\sum_{j=1}^{m}w_{j}=1$. From the matrix $A$, the following formulas provide the positive ideal fuzzy set $A^{+}$, the negative ideal fuzzy set $A^{-}$ and the integrated ideal fuzzy set $A$:
$A^{+}=\{(x_{1},\max_{1\le j\le m}a_{1j}),(x_{2},\max_{1\le j\le m}a_{2j}),\ldots,(x_{n},\max_{1\le j\le m}a_{nj})\}$;
$A^{-}=\{(x_{1},\min_{1\le j\le m}a_{1j}),(x_{2},\min_{1\le j\le m}a_{2j}),\ldots,(x_{n},\min_{1\le j\le m}a_{nj})\}$;
$A=\{(x_{1},\sum_{1\le j\le m}a_{1j}w_{j}),(x_{2},\sum_{1\le j\le m}a_{2j}w_{j}),\ldots,(x_{n},\sum_{1\le j\le m}a_{nj}w_{j})\}$.
Next, we calculate the lower and upper approximations of $A^{+}$, $A^{-}$ and $A$ by the fuzzy $\beta$-covering fuzzy rough set model based on the overlap function. Then, we calculate the positive and negative ideal fuzzy distances $d_{i}^{+}$ and $d_{i}^{-}$ of each $x_{i}$ as follows:
$d_{i}^{+}=\alpha\,d(\underline{\tilde{C}}(A^{+})(x_{i}),\underline{\tilde{C}}(A)(x_{i}))+(1-\alpha)\,d(\overline{\tilde{C}}(A^{+})(x_{i}),\overline{\tilde{C}}(A)(x_{i}))$;
$d_{i}^{-}=\alpha\,d(\underline{\tilde{C}}(A^{-})(x_{i}),\underline{\tilde{C}}(A)(x_{i}))+(1-\alpha)\,d(\overline{\tilde{C}}(A^{-})(x_{i}),\overline{\tilde{C}}(A)(x_{i}))$,
where $\alpha\in[0,1]$ is a controlling parameter and $d(a,b)=|a-b|$.
Lastly, we calculate the closeness coefficient associated with each alternative $x_{i}$ through the following formula:
$$\rho_{i}=\frac{d_{i}^{-}}{d_{i}^{+}+d_{i}^{-}},\qquad i=1,2,\ldots,n.$$
According to the values of $\rho_{i}$, the ranking of all alternatives is determined and we can select the best one.

5.3. Decision-Making Steps

Input MCDM with fuzzy information and β .
Output The ranking for all alternatives.
Step 1: Obtain the decision-making matrix $A$ with fuzzy information.
Step 2: Using the formulas, calculate the positive ideal fuzzy set $A^{+}$, the negative ideal fuzzy set $A^{-}$ and the integrated ideal fuzzy set $A$.
Step 3: Using the fuzzy $\beta$-covering fuzzy rough set based on the overlap function, calculate the lower and upper approximations of $A^{+}$, $A^{-}$ and $A$, respectively.
Step 4: Using the formulas, calculate the positive ideal fuzzy distance $d_{i}^{+}$ and the negative ideal fuzzy distance $d_{i}^{-}$ of each $x_{i}$.
Step 5: Using the formulas, calculate the closeness coefficient $\rho_{i}$ of each alternative $x_{i}$.
Step 6: Rank the alternatives, and choose the best element.
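The six steps above can be sketched in code. This is a minimal single-granulation illustration under assumed choices, namely the overlap function $O(x,y)=x^2y^2$, its residual implication, and $\alpha=0.5$; all function names and the sample data are ours, not the paper's.

```python
import math

# Assumed overlap function O(x, y) = x^2 * y^2 and its residual implication.
def O(x, y):
    return (x * y) ** 2

def R(x, y):
    # R_O(x, y) = sup{z in [0,1] : O(x, z) <= y}
    return 1.0 if x * x <= y else math.sqrt(y) / x

def rank_alternatives(A, weights, neigh, alpha=0.5):
    """A: n x m evaluation matrix; neigh: n x n fuzzy beta-neighborhood matrix."""
    n = len(A)
    # Step 2: positive, negative and integrated ideal fuzzy sets
    A_pos = [max(row) for row in A]
    A_neg = [min(row) for row in A]
    A_int = [sum(w * a for w, a in zip(weights, row)) for row in A]

    def lower(F):   # Step 3: inf_y R(N_x(y), F(y))
        return [min(R(neigh[x][y], F[y]) for y in range(n)) for x in range(n)]

    def upper(F):   # Step 3: sup_y O(N_x(y), F(y))
        return [max(O(neigh[x][y], F[y]) for y in range(n)) for x in range(n)]

    lp, ln_, li = lower(A_pos), lower(A_neg), lower(A_int)
    up, un, ui = upper(A_pos), upper(A_neg), upper(A_int)
    # Step 4: positive/negative ideal distances with d(a, b) = |a - b|
    d_pos = [alpha * abs(lp[i] - li[i]) + (1 - alpha) * abs(up[i] - ui[i])
             for i in range(n)]
    d_neg = [alpha * abs(ln_[i] - li[i]) + (1 - alpha) * abs(un[i] - ui[i])
             for i in range(n)]
    # Step 5: closeness coefficient rho_i = d_i^- / (d_i^+ + d_i^-)
    rho = [d_neg[i] / (d_pos[i] + d_neg[i]) if d_pos[i] + d_neg[i] > 0 else 0.0
           for i in range(n)]
    # Step 6: rank alternatives by decreasing closeness coefficient
    order = sorted(range(n), key=lambda i: -rho[i])
    return order, rho
```

Here `order[0]` is the index of the best alternative; replacing `O` and `R` switches the method between Case 1 and Case 2 of the next section.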

6. MCDM Problem with Fuzzy β -Covering Fuzzy Rough Set

In general, MCDM problems are characterized by incomplete information, so we turn to rough set theory, a mathematical theory that effectively handles inconsistent data. Accordingly, specific problems are proposed in this section, solved using different rough set models, and analyzed comprehensively.

6.1. Problem Description

Assume that a school is planning to hire teachers and evaluates six candidates on five aspects: personality, teaching ability, oral expression ability, teaching-plan writing ability and work experience. Let $U=\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}$ represent the group of six candidate teachers and $\tilde{C}=\{\tilde{C}_{1},\tilde{C}_{2},\tilde{C}_{3},\tilde{C}_{4},\tilde{C}_{5}\}$ represent the five criteria, as follows:
  • C ˜ 1 represents the personality;
  • C ˜ 2 represents the teaching ability;
  • C ˜ 3 represents the oral expression ability;
  • C ˜ 4 represents the teaching-plan writing ability;
  • C ˜ 5 represents the work experience.
The decision-making matrix $A$ with fuzzy information is shown in Table 1 (continuing Example 3). Then, we obtain the positive ideal $A^{+}$, negative ideal $A^{-}$ and integrated ideal $A$ shown in Table 4.
Case 1: $O_{m}(x,y)=\min\{x,y\}$ and $R_{O_{m}}(x,y)=\begin{cases}1, & x\le y\\ y, & x>y\end{cases}$; $\alpha=0.5$. The lower and upper approximation operators are denoted as:
$\underline{\tilde{C}}_{\tilde{N}^{\beta}_{x}}(A)(x)=\inf_{y\in U}R_{O_{m}}(\tilde{N}^{\beta}_{x}(y),A(y))$;
$\overline{\tilde{C}}_{\tilde{N}^{\beta}_{x}}(A)(x)=\sup_{y\in U}O_{m}(\tilde{N}^{\beta}_{x}(y),A(y))$.
Next, the lower and upper approximations of $A^{+}$, $A^{-}$ and $A$ under the fuzzy $\beta$-covering fuzzy rough set model based on the overlap function are calculated in Table 5.
The calculated positive ideal fuzzy distance $d_{i}^{+}$ and negative ideal fuzzy distance $d_{i}^{-}$ of each $x_{i}$ are shown in Table 6.
Next, we calculate the closeness coefficient $\rho_{i}$ of the alternatives, shown in Table 7.
By Table 7, we can rank all of the alternatives as $x_{4}\succ x_{6}\succ x_{1}\succ x_{5}\succ x_{3}\succ x_{2}$, and from this result we can see that candidate $x_{4}$ is the best option.
Case 2: $O_{2}(x,y)=x^{2}y^{2}$ and $R_{O_{2}}(x,y)=\begin{cases}1, & x^{2}\le y\\ \sqrt{y/x^{2}}, & x^{2}>y\end{cases}$; $\alpha=0.5$. The lower and upper approximation operators are denoted as:
$\underline{\tilde{C}}_{\tilde{N}^{\beta}_{x}}(A)(x)=\inf_{y\in U}R_{O_{2}}(\tilde{N}^{\beta}_{x}(y),A(y))$;
$\overline{\tilde{C}}_{\tilde{N}^{\beta}_{x}}(A)(x)=\sup_{y\in U}O_{2}(\tilde{N}^{\beta}_{x}(y),A(y))$.
Next, the lower and upper approximations of $A^{+}$, $A^{-}$ and $A$ under the fuzzy $\beta$-covering fuzzy rough set based on the overlap function are calculated in Table 8.
The calculated positive ideal fuzzy distance $d_{i}^{+}$ and negative ideal fuzzy distance $d_{i}^{-}$ of each $x_{i}$ are shown in Table 9.
Next, we calculate the closeness coefficient $\rho_{i}$ of the alternatives, as shown in Table 10.
By Table 10, we can rank all alternatives as $x_{4}\succ x_{1}\succ x_{6}\succ x_{2}\succ x_{3}\succ x_{5}$, and from this result we can see that candidate $x_{4}$ is the best option.
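As a sanity check on the Case 2 pair, one can verify numerically that $R_{O_{2}}$ is indeed the residual implication of $O_{2}$, i.e., that $O_{2}(x,z)\le y$ holds exactly when $z\le R_{O_{2}}(x,y)$. The brute-force grid check below is ours, not part of the paper:

```python
import math

# Adjunction check (ours): O2(x, z) <= y  iff  z <= R_O2(x, y) on a coarse grid.
def O2(x, y):
    return x * x * y * y

def R_O2(x, y):
    # residual implication of O2: sup{z in [0,1] : O2(x, z) <= y}
    return 1.0 if x * x <= y else math.sqrt(y) / x

grid = [i / 10 for i in range(11)]
for x in grid:
    for y in grid:
        for z in grid:
            # small tolerance guards against floating-point boundary cases
            assert (O2(x, z) <= y + 1e-12) == (z <= R_O2(x, y) + 1e-12)
```

The same check, with `O2` replaced by any other overlap function and its residual, distinguishes a correct residual pair from an ad hoc one.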

6.2. Sensitivity Analysis Based on Our Proposed Method

In this section, a sensitivity analysis of the multi-criteria decision-making methods is presented.
A comparative analysis among the WA and OWA methods, among others, together with our proposed method, is carried out through numerical examples. A drawback of the WA, OWA and TOPSIS methods is that they cannot reach the best decision in many situations; for example, Zhang's example in [23] shows cases where they do not make good decisions, and we find that our proposed method can make up for this. Next, using the numerical example, we compare our method with the other methods, as shown in Table 11.
Through Table 11, we find that although the specific rankings of the alternatives produced by some methods differ slightly, the best alternative is the same. This illustrates the effectiveness of our proposed method.
Then we summarize the rankings that we obtain under different customizations of the factors presented in Table 12.
Through Table 12, we find that different values of $\alpha$ yield the same best alternative, namely $x_{4}$. Moreover, whatever the value of $\alpha$, the first three positions of the ranking are unaffected.
The decision-making method proposed in this paper is based on an extension of the t-norm approach, which broadens its scope in specific applications and orders objects more effectively, so that decision makers can reach better judgments. Through specific examples, the comparison results of the two models are shown in Table 13.

6.3. Short Comparison of the Fuzzy β -Covering Fuzzy Rough Set Model Based on Overlap Function

In this part, in order to illustrate the results obtained in this paper more clearly, we show a short comparison of the fuzzy β -covering fuzzy rough set model based on the overlap function.
As another extension of the logical AND, the overlap function differs from the triangular norm in that it is not associative, which gives it advantages over the triangular norm in applications. For example, n-dimensional overlap functions have been successfully used in classification problems with n-dimensional inputs. On the other hand, the fuzzy rough set, an effective mathematical tool for dealing with uncertain and incomplete information, has already been successfully applied in various areas, for instance knowledge discovery [19], granular machine learning [20] and data analysis [21]. Therefore, using the overlap function and its residual implication to construct upper and lower fuzzy approximation operators not only provides another possible method for dealing with practical problems but also broadens the application scope of rough set theory.

7. An Approach to Multi-Criteria Group Decision-Making (MCGDM) Based on FCOMGFRS Model

In this section, we present a novel approach to multi-criteria group decision-making (MCGDM) based on FCOMGFRS models and give an MCGDM problem to illustrate the advantages of the extended model. Combining MCGDM problems with the FCOMGFRS model based on the overlap function, we propose a new class of problem-solving methods inspired by Atef's article [37].

7.1. Background Description

Let $U=\{x_{1},x_{2},\ldots,x_{n}\}$ be a set of $n$ alternatives and $D=\{d_{1},d_{2},\ldots,d_{m}\}$ be $m$ decision-makers. Let $\lambda_{i}$ be the weight of expert $i$, where $\lambda_{i}\ge 0$ for $i=1,2,\ldots,m$ and $\sum_{i=1}^{m}\lambda_{i}=1$. Then $F(A_{i})=(F(A_{i1}),F(A_{i2}),\ldots,F(A_{im}))$ is each expert's evaluation of a set of criteria. Each expert presents assessments of the alternatives with respect to the criteria $F(A_{i})$ using a family of mappings $g_{i}:U\times F(A_{i})\to[0,1]$, where $g_{i}(x_{i},F(A_{ij}))\in[0,1]$ for $i=1,2,\ldots,n$ and $j=1,2,\ldots,m$. In this way, we set up the MCGDM information system. Based on the proposed covering methods, we present a decision-making algorithm to detect the best alternative, proceeding through the following steps.
Step 1: Produce the best and worst decision-making objects of the universe with fuzzy $\beta$ information. Through the TOPSIS theory, we have:
$\overline{p_{m}}=\{\langle F(A_{ij}),\max_{1\le i\le n}(g_{l}(x_{i},F(A_{ij})))\rangle : j=1,2,\ldots,m,\ l=1,2,\ldots,t\}$;
and
$\underline{p_{m}}=\{\langle F(A_{ij}),\min_{1\le i\le n}(g_{l}(x_{i},F(A_{ij})))\rangle : j=1,2,\ldots,m,\ l=1,2,\ldots,t\}$.
Step 2: Calculate the respective distances $\overline{D_{m}}$ and $\underline{D_{m}}$ as follows:
$\overline{D_{m}}=d(F(A_{mj})(x_{i}),F(A_{mj})(\overline{p}))$;
and
$\underline{D_{m}}=d(F(A_{mj})(x_{i}),F(A_{mj})(\underline{p}))$,
where $d(\hat{X},\hat{Y})=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{X}(x_{i})-\hat{Y}(x_{i}))^{2}}$ and $n$ is the cardinality of $U$.
Step 3: Calculate the lower and upper approximations of the best and worst decision-making objects with fuzzy $\beta$ information, obtained from the community of experts with respect to the criteria set, by applying Definition 18 to the multi-criteria decision-making information under the precision parameter $\beta\in(0,1]$. They are listed as follows:
$\underline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\underline{D_{m}})(x)=\bigvee_{i=1}^{m}\inf_{y\in U}R_{O}((\tilde{N}^{\beta}_{x})_{i}(y),\underline{D_{m}}(y))$;
and
$\overline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\underline{D_{m}})(x)=\bigwedge_{i=1}^{m}\sup_{y\in U}O((\tilde{N}^{\beta}_{x})_{i}(y),\underline{D_{m}}(y))$.
$\underline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\overline{D_{m}})(x)=\bigvee_{i=1}^{m}\inf_{y\in U}R_{O}((\tilde{N}^{\beta}_{x})_{i}(y),\overline{D_{m}}(y))$;
and
$\overline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\overline{D_{m}})(x)=\bigwedge_{i=1}^{m}\sup_{y\in U}O((\tilde{N}^{\beta}_{x})_{i}(y),\overline{D_{m}}(y))$.
Step 4: Calculate the closeness coefficient degree by:
$$R_{m}(x_{i})=\frac{\underline{W_{m}}(x_{i})}{\underline{W_{m}}(x_{i})+\overline{W_{m}}(x_{i})},$$
where $\underline{W_{m}}(x_{i})=\varphi(\underline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\underline{D_{m}})(x_{i}),\overline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\underline{D_{m}})(x_{i}))$ and $\overline{W_{m}}(x_{i})=\varphi(\underline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\overline{D_{m}})(x_{i}),\overline{C}^{o}_{\sum_{i=1}^{m}(\tilde{N}^{\beta}_{x})_{i}}(\overline{D_{m}})(x_{i}))$, with $\varphi(x,y)=x+y-xy$, are the scores of expert $m$ for candidate $x_{i}$ with respect to the worst and the best decision-making objects, and $0\le\underline{W_{m}}(x_{i}),\overline{W_{m}}(x_{i})\le 1$.
Step 5: Calculate the group ranking function by the following equation and rank the alternatives.
$$R(x_{i})=\sum_{m=1}^{t}\lambda_{m}R_{m}(x_{i}).$$
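Steps 2, 4 and 5 reduce to elementary computations once the approximations are available. The sketch below is ours: the function names are illustrative, and `distance` averages each alternative's criterion-wise deviation from the ideal vector, an assumption made for a self-contained example.

```python
import math

# Sketch (ours) of the per-expert distance, the combining function
# phi(x, y) = x + y - x*y, and the weighted group aggregation.
def distance(eval_matrix, ideal):
    # per-alternative root-mean-square deviation from the ideal vector
    m = len(ideal)
    return [math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(m)) / m)
            for row in eval_matrix]

def phi(x, y):
    # combines a lower and an upper approximation value into one score
    return x + y - x * y

def group_ranking(scores_per_expert, expert_weights):
    # R(x_i) = sum over experts m of lambda_m * R_m(x_i)
    n = len(scores_per_expert[0])
    return [sum(lam * Rm[i] for lam, Rm in zip(expert_weights, scores_per_expert))
            for i in range(n)]
```

Ranking the entries of `group_ranking(...)` in decreasing order then yields the final order of the alternatives.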

7.2. Illustrative Example

In this section, we will use a specific example to illustrate the above steps.
Example 6.
Assume that a school is planning to hire teachers and evaluates six candidates $U=\{x_{1},x_{2},\ldots,x_{6}\}$ on five aspects $C=\{c_{1},c_{2},c_{3},c_{4},c_{5}\}$: personality, teaching ability, oral expression ability, teaching-plan writing ability and work experience. The weights of the three experts are $\lambda_{1}=0.4$, $\lambda_{2}=0.1$, $\lambda_{3}=0.5$.
Step 1:Each expert’s assessment of each teacher is shown in Table 14, Table 15 and Table 16.
Step 2:According to the importance of these five attributes, we give the following results for three experts:
$\overline{p_{1}}=\{(F(c_{11}),0.82),(F(c_{12}),0.76),(F(c_{13}),0.74),(F(c_{14}),0.78),(F(c_{15}),0.91)\}$;
and
$\underline{p_{1}}=\{(F(c_{11}),0.28),(F(c_{12}),0.32),(F(c_{13}),0.36),(F(c_{14}),0.45),(F(c_{15}),0.43)\}$.
$\overline{p_{2}}=\{(F(c_{21}),0.85),(F(c_{22}),0.77),(F(c_{23}),0.79),(F(c_{24}),0.81),(F(c_{25}),0.71)\}$;
and
$\underline{p_{2}}=\{(F(c_{21}),0.35),(F(c_{22}),0.34),(F(c_{23}),0.46),(F(c_{24}),0.26),(F(c_{25}),0.43)\}$.
$\overline{p_{3}}=\{(F(c_{31}),0.84),(F(c_{32}),0.75),(F(c_{33}),0.74),(F(c_{34}),0.69),(F(c_{35}),0.78)\}$;
and
$\underline{p_{3}}=\{(F(c_{31}),0.37),(F(c_{32}),0.36),(F(c_{33}),0.35),(F(c_{34}),0.42),(F(c_{35}),0.48)\}$.
Step 3: Let the consistency consensus threshold be $\beta=0.6$, which produces $\tilde{N}^{0.6}_{x_{i}}$ as displayed in Table 17, Table 18 and Table 19.
Step 4: Calculate the distances $\overline{D_{m}}$ and $\underline{D_{m}}$ as follows:
$\overline{D_{1}}=0.248/x_{1}+0.269/x_{2}+0.315/x_{3}+0.189/x_{4}+0.261/x_{5}+0.306/x_{6}$;
and
$\underline{D_{1}}=0.307/x_{1}+0.307/x_{2}+0.241/x_{3}+0.317/x_{4}+0.247/x_{5}+0.259/x_{6}$.
$\overline{D_{2}}=0.276/x_{1}+0.276/x_{2}+0.293/x_{3}+0.146/x_{4}+0.261/x_{5}+0.228/x_{6}$;
and
$\underline{D_{2}}=0.246/x_{1}+0.278/x_{2}+0.209/x_{3}+0.352/x_{4}+0.293/x_{5}+0.278/x_{6}$.
$\overline{D_{3}}=0.243/x_{1}+0.233/x_{2}+0.210/x_{3}+0.219/x_{4}+0.184/x_{5}+0.259/x_{6}$;
and
$\underline{D_{3}}=0.225/x_{1}+0.231/x_{2}+0.250/x_{3}+0.237/x_{4}+0.232/x_{5}+0.212/x_{6}$.
Situation 1.
Step 5: When $O_{2}(x,y)=x^{2}y^{2}$, calculate the lower and upper approximations of the best and worst decision-making objects as follows:
(E1)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{1}})(x)=0.0276/x_{1}+0.0305/x_{2}+0.0419/x_{3}+0.0165/x_{4}+0.0294/x_{5}+0.0408/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{1}})(x)=0.743/x_{1}+0.798/x_{2}+0.863/x_{3}+0.639/x_{4}+0.699/x_{5}+0.727/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{1}})(x)=0.0421/x_{1}+0.0396/x_{2}+0.0285/x_{3}+0.0399/x_{4}+0.0282/x_{5}+0.0292/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{1}})(x)=0.78/x_{1}+0.803/x_{2}+0.721/x_{3}+0.893/x_{4}+0.764/x_{5}+0.771/x_{6}$.
(E2)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{2}})(x)=0.0342/x_{1}+0.0321/x_{2}+0.0363/x_{3}+0.0146/x_{4}+0.0288/x_{5}+0.0259/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{2}})(x)=0.739/x_{1}+0.773/x_{2}+0.833/x_{3}+0.607/x_{4}+0.721/x_{5}+0.721/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{2}})(x)=0.0305/x_{1}+0.0348/x_{2}+0.0211/x_{3}+0.0492/x_{4}+0.0365/x_{5}+0.0345/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{2}})(x)=0.74/x_{1}+0.811/x_{2}+0.703/x_{3}+0.941/x_{4}+0.832/x_{5}+0.798/x_{6}$.
(E3)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{3}})(x)=0.0265/x_{1}+0.0229/x_{2}+0.0186/x_{3}+0.019/x_{4}+0.0180/x_{5}+0.0292/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{3}})(x)=0.735/x_{1}+0.742/x_{2}+0.705/x_{3}+0.742/x_{4}+0.659/x_{5}+0.771/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{3}})(x)=0.0227/x_{1}+0.0225/x_{2}+0.0153/x_{3}+0.0223/x_{4}+0.0227/x_{5}+0.0196/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{3}})(x)=0.707/x_{1}+0.739/x_{2}+0.862/x_{3}+0.772/x_{4}+0.741/x_{5}+0.697/x_{6}$.
Step 6: By the formula, we obtain the worst and the best decision-making objects as follows:
$\overline{W_{1}}=0.75/x_{1}+0.81/x_{2}+0.87/x_{3}+0.64/x_{4}+0.71/x_{5}+0.74/x_{6}$, $\underline{W_{1}}=0.79/x_{1}+0.81/x_{2}+0.73/x_{3}+0.89/x_{4}+0.77/x_{5}+0.78/x_{6}$;
$\overline{W_{2}}=0.75/x_{1}+0.78/x_{2}+0.84/x_{3}+0.61/x_{4}+0.73/x_{5}+0.73/x_{6}$, $\underline{W_{2}}=0.75/x_{1}+0.82/x_{2}+0.71/x_{3}+0.94/x_{4}+0.84/x_{5}+0.81/x_{6}$;
$\overline{W_{3}}=0.74/x_{1}+0.75/x_{2}+0.71/x_{3}+0.74/x_{4}+0.67/x_{5}+0.78/x_{6}$, $\underline{W_{3}}=0.71/x_{1}+0.74/x_{2}+0.86/x_{3}+0.78/x_{4}+0.75/x_{5}+0.71/x_{6}$.
Thus, we evaluate the closeness coefficients as follows:
$R_{1}=0.5127/x_{1}+0.5021/x_{2}+0.4563/x_{3}+0.5818/x_{4}+0.5212/x_{5}+0.5130/x_{6}$
$R_{2}=0.5/x_{1}+0.5117/x_{2}+0.4581/x_{3}+0.6064/x_{4}+0.5348/x_{5}+0.5250/x_{6}$
$R_{3}=0.4902/x_{1}+0.4989/x_{2}+0.5488/x_{3}+0.5099/x_{4}+0.5289/x_{5}+0.4747/x_{6}$.
Step 7: Based on these results, we calculate the group optimal index as follows:
$R=0.5002/x_{1}+0.5014/x_{2}+0.5027/x_{3}+0.5483/x_{4}+0.5264/x_{5}+0.4951/x_{6}$,
and hence obtain the ranking order:
$x_{4}\succ x_{5}\succ x_{3}\succ x_{2}\succ x_{1}\succ x_{6}$.
According to this order, we conclude that the fourth candidate is the best.
Situation 2.
Step 5: When $O_{mM}(x,y)=\min\{x,y\}\max\{x^{2},y^{2}\}$, the lower and upper approximations of the decision-making objects are as follows:
(E1)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{1}})(x)=0.1132/x_{1}+0.1136/x_{2}+0.1331/x_{3}+0.0750/x_{4}+0.1103/x_{5}+0.1333/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{1}})(x)=0.6084/x_{1}+0.6433/x_{2}+0.7456/x_{3}+0.5477/x_{4}+0.6337/x_{5}+0.6728/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{1}})(x)=0.1378/x_{1}+0.1297/x_{2}+0.1018/x_{3}+0.1258/x_{4}+0.1044/x_{5}+0.1128/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{1}})(x)=0.6839/x_{1}+0.7266/x_{2}+0.6089/x_{3}+0.7986/x_{4}+0.6164/x_{5}+0.6264/x_{6}$.
(E2)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{2}})(x)=0.1238/x_{1}+0.1166/x_{2}+0.1237/x_{3}+0.0601/x_{4}+0.1103/x_{5}+0.0993/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{2}})(x)=0.6234/x_{1}+0.6371/x_{2}+0.6934/x_{3}+0.4814/x_{4}+0.5248/x_{5}+0.5667/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{2}})(x)=0.1104/x_{1}+0.1174/x_{2}+0.0886/x_{3}+0.1397/x_{4}+0.1237/x_{5}+0.1211/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{2}})(x)=0.6059/x_{1}+0.6579/x_{2}+0.5670/x_{3}+0.8868/x_{4}+0.7612/x_{5}+0.6490/x_{6}$.
(E3)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{3}})(x)=0.1091/x_{1}+0.0984/x_{2}+0.0887/x_{3}+0.0869/x_{4}+0.0801/x_{5}+0.1128/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{3}})(x)=0.6022/x_{1}+0.5987/x_{2}+0.5684/x_{3}+0.5895/x_{4}+0.5320/x_{5}+0.6083/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{3}})(x)=0.1010/x_{1}+0.0976/x_{2}+0.1056/x_{3}+0.0941/x_{4}+0.0980/x_{5}+0.0923/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{3}})(x)=0.5795/x_{1}+0.5961/x_{2}+0.6201/x_{3}+0.6133/x_{4}+0.5974/x_{5}+0.5667/x_{6}$.
Step 6: By the formula, we obtain the worst and the best decision-making objects as follows:
$\overline{W_{1}}=0.65/x_{1}+0.68/x_{2}+0.78/x_{3}+0.58/x_{4}+0.67/x_{5}+0.72/x_{6}$, $\underline{W_{1}}=0.73/x_{1}+0.76/x_{2}+0.65/x_{3}+0.82/x_{4}+0.66/x_{5}+0.67/x_{6}$;
$\overline{W_{2}}=0.67/x_{1}+0.68/x_{2}+0.73/x_{3}+0.51/x_{4}+0.58/x_{5}+0.61/x_{6}$, $\underline{W_{2}}=0.65/x_{1}+0.69/x_{2}+0.61/x_{3}+0.90/x_{4}+0.79/x_{5}+0.69/x_{6}$;
$\overline{W_{3}}=0.65/x_{1}+0.64/x_{2}+0.61/x_{3}+0.63/x_{4}+0.57/x_{5}+0.65/x_{6}$, $\underline{W_{3}}=0.62/x_{1}+0.64/x_{2}+0.66/x_{3}+0.65/x_{4}+0.64/x_{5}+0.61/x_{6}$.
Thus, we evaluate the closeness coefficients as follows:
$R_{1}=0.5271/x_{1}+0.5071/x_{2}+0.4542/x_{3}+0.5862/x_{4}+0.4934/x_{5}+0.4827/x_{6}$
$R_{2}=0.4922/x_{1}+0.5067/x_{2}+0.4529/x_{3}+0.6378/x_{4}+0.5780/x_{5}+0.5314/x_{6}$
$R_{3}=0.4906/x_{1}+0.4989/x_{2}+0.5211/x_{3}+0.5096/x_{4}+0.5279/x_{5}+0.4818/x_{6}$.
Step 7: Based on these results, we calculate the group optimal index as follows:
$R=0.5054/x_{1}+0.5109/x_{2}+0.4875/x_{3}+0.5531/x_{4}+0.5191/x_{5}+0.4871/x_{6}$,
and hence obtain the ranking order:
$x_{4}\succ x_{5}\succ x_{2}\succ x_{1}\succ x_{3}\succ x_{6}$.
From the calculations, we conclude that the fourth candidate is the best alternative among the others.
Situation 3.
Step 5: When $O_{3}(x,y)=x^{3}y^{3}$, calculate the lower and upper approximations of the best and worst decision-making objects as follows:
(E1)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{1}})(x)=0.0046/x_{1}+0.0053/x_{2}+0.0086/x_{3}+0.0021/x_{4}+0.0051/x_{5}+0.0082/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{1}})(x)=0.9377/x_{1}+0.9931/x_{2}+1/x_{3}+0.9109/x_{4}+0.9831/x_{5}+1/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{1}})(x)=0.0087/x_{1}+0.0079/x_{2}+0.0048/x_{3}+0.0079/x_{4}+0.0047/x_{5}+0.0049/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{1}})(x)=1/x_{1}+1/x_{2}+0.9573/x_{3}+1/x_{4}+0.9652/x_{5}+0.9658/x_{6}$.
(E2)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{2}})(x)=0.0063/x_{1}+0.0058/x_{2}+0.0069/x_{3}+0.0017/x_{4}+0.0049/x_{5}+0.0042/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{2}})(x)=0.9717/x_{1}+0.9935/x_{2}+1/x_{3}+0.8358/x_{4}+0.9831/x_{5}+0.9256/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{2}})(x)=0.0053/x_{1}+0.0065/x_{2}+0.0041/x_{3}+0.0109/x_{4}+0.0069/x_{5}+0.0064/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{2}})(x)=0.9351/x_{1}+1/x_{2}+0.9129/x_{3}+1/x_{4}+1/x_{5}+0.9888/x_{6}$.
(E3)
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{3}})(x)=0.0043/x_{1}+0.0035/x_{2}+0.0025/x_{3}+0.0026/x_{4}+0.0024/x_{5}+0.0049/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\overline{D_{3}})(x)=0.9313/x_{1}+0.9466/x_{2}+0.9144/x_{3}+0.9567/x_{4}+0.8750/x_{5}+0.9658/x_{6}$
$\overline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{3}})(x)=0.0034/x_{1}+0.0036/x_{2}+0.0042/x_{3}+0.0033/x_{4}+0.0034/x_{5}+0.0027/x_{6}$
$\underline{C}^{o}_{\sum_{i=1}^{m}(X_{i})}(\underline{D_{3}})(x)=0.9077/x_{1}+0.9439/x_{2}+0.9691/x_{3}+0.9822/x_{4}+0.9453/x_{5}+0.9034/x_{6}$.
Step 6: By the formula, we obtain the worst and the best decision-making objects as follows:
$\overline{W_{1}}=0.94/x_{1}+0.99/x_{2}+1/x_{3}+0.91/x_{4}+0.98/x_{5}+1/x_{6}$, $\underline{W_{1}}=1/x_{1}+1/x_{2}+0.95/x_{3}+1/x_{4}+0.96/x_{5}+0.96/x_{6}$;
$\overline{W_{2}}=0.97/x_{1}+0.99/x_{2}+1/x_{3}+0.83/x_{4}+0.98/x_{5}+0.92/x_{6}$, $\underline{W_{2}}=0.93/x_{1}+1/x_{2}+0.91/x_{3}+1/x_{4}+1/x_{5}+0.98/x_{6}$;
$\overline{W_{3}}=0.93/x_{1}+0.94/x_{2}+0.91/x_{3}+0.95/x_{4}+0.87/x_{5}+0.96/x_{6}$, $\underline{W_{3}}=0.91/x_{1}+0.94/x_{2}+0.96/x_{3}+0.98/x_{4}+0.94/x_{5}+0.91/x_{6}$.
Thus, we evaluate the closeness coefficients as follows:
$R_{1}=0.5159/x_{1}+0.5021/x_{2}+0.4891/x_{3}+0.5232/x_{4}+0.4954/x_{5}+0.4913/x_{6}$
$R_{2}=0.4905/x_{1}+0.5016/x_{2}+0.4770/x_{3}+0.5446/x_{4}+0.5042/x_{5}+0.5164/x_{6}$
$R_{3}=0.4936/x_{1}+0.4992/x_{2}+0.5142/x_{3}+0.5066/x_{4}+0.5192/x_{5}+0.4833/x_{6}$.
Step 7: Based on these results, we calculate the group optimal index as follows:
$R=0.5022/x_{1}+0.5006/x_{2}+0.5004/x_{3}+0.5171/x_{4}+0.5082/x_{5}+0.4899/x_{6}$,
and hence obtain the ranking order:
$x_{4}\succ x_{5}\succ x_{1}\succ x_{2}\succ x_{3}\succ x_{6}$.
According to this ranking, the fourth candidate is the best.

7.3. Comparative Analysis

One of the main limitations of the traditional approach to MCGDM problems with fuzzy information is that it relies on continuous generalized aggregation operators defined under fuzzy binary relations, such as triangular norms, to integrate the preference evaluations of different decision makers; the key difficulty is how to define these aggregation operators effectively and precisely. First, covering-based fuzzy rough set models overcome this problem because they rely not on a fuzzy binary relation itself but on a generalization of such a relation; in particular, the multi-granulation fuzzy rough set based on fuzzy $\beta$-covering has strong advantages in dealing with such problems. Second, as an aggregation operator that need not be associative, the overlap function has a wide range of applications in data classification, image processing and multi-attribute decision making, and can effectively handle the discontinuous data that arise in real life. The extended MGFRS model based on the overlap function presented in this paper can better handle the MCGDM problem and allows decision makers to obtain effective and clear ranking information. We have made a preliminary attempt to study techniques and models based on the FCOMGFRS theory, which applies to MCGDM problems with fuzzy information.
Here, we give a new approach to solve MCGDM problems according to the FCOMGFRS model. Table 20 shows the comparisons between our approach and the previous ones.
According to Table 20, the same optimal solution is obtained by the different methods. First, this shows that the method proposed in this paper is feasible and effective. Second, the ranking obtained by our method contains no tied items, so compared with the existing methods it gives decision makers a clearer ranking order, indicating that the proposed method has advantages in dealing with such problems.

8. Conclusions

Inspired by the literature [4,5,31], this paper presents the fuzzy $\beta$-covering fuzzy rough set model based on the overlap function, an extended form of the fuzzy $\beta$-covering based $(\varphi,T)$-fuzzy rough set model. As a class of non-associative fuzzy logic connectives, overlap functions generalize continuous triangular norms. Replacing the triangular norm with the overlap function not only retains the important properties of the original fuzzy $\beta$-covering fuzzy rough set model but also provides a broader theoretical basis, relaxing the associativity requirement in practical applications and enlarging the application range of fuzzy rough set models. To solve multi-criteria decision-making problems more effectively, a new method based on the FCFRS model with overlap functions is proposed and compared with existing methods, and the feasibility, effectiveness and decision advantages of the method are verified. We then extend the fuzzy $\beta$-covering fuzzy rough set model based on the overlap function to the multi-granulation case, give two kinds of multi-granulation fuzzy rough set models, the FCOMGFRS and FCPMGFRS models, and apply them to the MCGDM problem; compared with the existing methods, the results obtained are more helpful to decision makers. As a further research topic, the variable precision fuzzy $\beta$-covering fuzzy rough set based on the overlap function will be discussed in subsequent work [47,48,49] and applied to data mining and knowledge discovery, among other areas.

Author Contributions

The idea of this whole paper was put forward by X.Z., and he also completed the preparatory work of the paper. X.W. analyzed the existing work of overlap function-based (multi-granulation) fuzzy rough sets and wrote the paper. Both authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61976130).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  2. Yao, Y.Y.; Yao, B.X. Covering based rough set approximations. Inf. Sci. 2012, 200, 91–107. [Google Scholar] [CrossRef]
  3. Couso, I.; Dubois, D. Rough sets, covering and incomplete information. Fundam. Inform. 2011, 108, 223–247. [Google Scholar] [CrossRef]
  4. Ma, L.W. On some types of neighborhood-related covering rough sets. Int. J. Approx. Reason. 2012, 53, 901–911. [Google Scholar] [CrossRef] [Green Version]
  5. Ma, L.W. Classification of coverings in the finite approximation spaces. Inf. Sci. 2014, 279, 31–41. [Google Scholar] [CrossRef]
  6. Zhu, W. Topological approaches to covering rough sets. Inf. Sci. 2007, 177, 1499–1508. [Google Scholar] [CrossRef]
  7. Zhu, W.; Wang, F.Y. On three types of covering rough sets. IEEE Trans. Knowl. Data Eng. 2007, 19, 1131–1141. [Google Scholar] [CrossRef]
  8. Zhu, W.; Wang, F.Y. Reduction and axiomization of covering generalized rough sets. Inf. Sci. 2003, 152, 217–230. [Google Scholar] [CrossRef]
  9. Jensen, R.; Shen, Q. Fuzzy-rough attribute reduction with application to web categorization. Fuzzy Sets Syst. 2004, 141, 469–485. [Google Scholar] [CrossRef] [Green Version]
  10. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  11. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209. [Google Scholar] [CrossRef]
  12. Pawlak, Z. Rough sets and fuzzy sets. Fuzzy Sets Syst. 1985, 18, 99–102. [Google Scholar] [CrossRef]
  13. Wu, W.Z.; Yee, L.; Mi, J.S. On characterizations of (I,T)-fuzzy rough approximation operators. Fuzzy Sets Syst. 2005, 154, 76–102. [Google Scholar] [CrossRef]
  14. Sun, B.Z.; Gong, Z.T.; Chen, D.G. Fuzzy rough set theory for the interval-valued fuzzy information systems. Inf. Sci. 2008, 178, 2794–2815. [Google Scholar] [CrossRef]
  15. Klir, G.J.; Yuan, B. Fuzzy sets and fuzzy logic: Theory and Applications. Kybernetika 1996, 32, 207–208. [Google Scholar]
  16. Radzikowska, A.M.; Keere, E. A comparative study of fuzzy rough sets. Fuzzy Sets Syst. 2002, 126, 137–155. [Google Scholar] [CrossRef]
  17. Yager, R. On ordered weighted averaging aggregation operators in multicriteria decisioningmaking. IEEE Trans. Syst. Man Cybern. 1998, 18, 183–190. [Google Scholar] [CrossRef]
  18. Hwang, C.; Yoon, K. Multiple Attribuate Decision Making: Methods and Applications; Springer: Berlin, Germany; London, UK; New York, NY, USA, 2011. [Google Scholar]
  19. Polkowski, L. Rough Sets in Knowledge Discovery 2: Applications, Case Studies and Software Systems. Physica 2013, 19, 137–155. [Google Scholar]
  20. Moshkov, M.; Zielosko, B. Combinatorial Machine Learning: A Rough Set Approach; Springer Science and Business Media: Berlin, Germany, 2011. [Google Scholar]
  21. Pawlak, Z. Rough set theory and its applications to data analysis. Cybern. Syst. 1998, 29, 661–688. [Google Scholar] [CrossRef]
  22. Zakowski, W. Approximations in the (U,∏)-space. Demonstr. Math. 1983, 16, 761–769. [Google Scholar]
  23. Li, T.J.; Leung, Y.; Zhang, W.X. Generalized fuzzy rough approximation operators based on fuzzy covering. Int. J. Approach Reason. 2008, 48, 836–856. [Google Scholar] [CrossRef] [Green Version]
  24. D’eer, L.; Cornelis, C.; Yao, Y.Y. A semantically sound approach to Pawlak rough sets and covering based rough sets. Int. J. Approach Reason. 2016, 78, 62–72. [Google Scholar] [CrossRef]
  25. D’eer, L.; Cornelis, C. A comprehensive study of fuzzy covering- based rough set models: Definitions, properties and interrelationships. Fuzzy Sets Syst. 2018, 336, 1–26. [Google Scholar] [CrossRef]
  26. D’eer, L.; Cornelis, C.; Godo, L. Fuzzy neighborhood operators based on fuzzy coverings. Fuzzy Sets Syst. 2017, 312, 17–35. [Google Scholar] [CrossRef] [Green Version]
  27. Ma, L.W. Two fuzzy covering rough set models and their generalizations over fuzzy lattices. Fuzzy Sets Syst. 2016, 294, 1–17. [Google Scholar] [CrossRef]
  28. Zhan, J.M.; Zhang, X.H.; Yao, Y.Y. Covering based multigranulation fuzzy rough sets and corresponding applications. Artif. Intell. Rev. 2020, 53, 1093–1126. [Google Scholar] [CrossRef]
  29. Zhan, J.M.; Alcantud, J.C.R. A novel type of soft rough covering and its application to multicriteria group decision making. Artif. Intell. Rev. 2019, 52, 2381–2480. [Google Scholar] [CrossRef]
  30. Zhan, J.M.; Alcantud, J.C.R. A survey of parameter reduction of soft sets and corresponding algorithms. Artif. Intell. Rev. 2019, 52, 1839–1872. [Google Scholar] [CrossRef]
  31. Zhang, K.; Zhan, J.; Wu, W.; Alcantud, J.C.R. Fuzzy β-covering based (φ,T)-fuzzy rough set models and applications to multi-attribute decision-making. Comput. Ind. Eng. 2019, 128, 605–621. [Google Scholar] [CrossRef]
  32. Zhan, J.M.; Sun, B.; Alcantud, J.C.R. Covering based multigranulation (I,T)-fuzzy rough sets models and applications in multi-attribute group decision-making. Inf. Sci. 2019, 476, 290–318. [Google Scholar] [CrossRef]
  33. Qian, Y.H.; Liang, J.Y. Rough set method based on multigranulations. In Proceedings of the 5th IEEE International Conference on Cognitive Informatics, Beijing, China, 17–19 July 2006; Volume 90, pp. 297–304. [Google Scholar]
  34. Qian, Y.; Liang, J.; Yao, Y.; Dang, C. MGRS: A multi-granulation rough set. Inf. Sci. 2010, 180, 949–970. [Google Scholar] [CrossRef]
  35. Liu, C.; Pedrycz, W. Covering-based multi-granulation fuzzy rough sets. J. Intell. Fuzzy Syst. 2016, 30, 303–318. [Google Scholar] [CrossRef]
  36. Zhang, L.; Zhan, J.M.; Xu, Z.S.; Alcantud, J.C.R. Covering-based general multigranulation intuitionistic fuzzy rough sets and corresponding applications to multi-attribute group decision-making. Inf. Sci. 2019, 494, 114–140. [Google Scholar] [CrossRef]
  37. Atef, M.; Ali, M.I.; Al-Shami, T.M. Fuzzy soft covering based multi-granulation fuzzy rough sets and their applications. Comput. Appl. Math. 2021, 40, 115. [Google Scholar] [CrossRef]
  38. Bustince, H.; Fernández, J.; Mesiar, R.; Montero, J.; Orduna, R. Overlap Index, Overlap Functions and Migrativity. In Proceedings of the Joint 2009 International Fuzzy Systems Association World Congress and 2009 European Society of Fuzzy Logic and Technology Conference, Lisbon, Portugal, 20–24 July 2009; Volume 65, pp. 300–305. [Google Scholar]
  39. Bustince, H.; Barrenechea, E.; Pagola, M. Image thresholding using restricted equivalence functions and maximizing the measures of similarity. Fuzzy Sets Syst. 2007, 158, 496–516. [Google Scholar] [CrossRef]
  40. Amo, A.D.; Montero, J.; Biging, G.; Cutello, V. Fuzzy classification systems. Eur. J. Oper. Res. 2004, 156, 495–507. [Google Scholar] [CrossRef]
  41. Bedregal, B.; Dimuro, G.P.; Bustince, H.; Barrenechea, E. New results on overlap and grouping functions. Inf. Sci. 2013, 249, 148–170. [Google Scholar] [CrossRef]
  42. Dimuro, G.P.; Bedregal, B. On residual implications derived from overlap functions. Inf. Sci. 2015, 312, 78–88. [Google Scholar] [CrossRef]
  43. Elkano, M.; Galar, M.; Sanz, J.A.; Fernández, A.; Barrenechea, E.; Herrera, F.; Bustince, H. Enhancing multiclass classification in FARC-HD fuzzy classifier: On the synergy between n-dimensional overlap functions and decomposition strategies. IEEE Trans. Fuzzy Syst. 2015, 23, 1562–1580. [Google Scholar] [CrossRef] [Green Version]
  44. Wen, X.F.; Zhang, X.H.; Lei, T. Intuitionistic fuzzy (IF) overlap functions and IF-rough sets with applications. Symmetry 2021, 13, 1494. [Google Scholar]
  45. Qiao, J.S. On (IO,O)-fuzzy rough sets based on overlap functions. Int. J. Approx. Reason. 2021, 132, 26–48. [Google Scholar] [CrossRef]
  46. Wen, X.F.; Zhang, X.H.; Wang, J.Q.; Lei, T. Fuzzy rough set based on overlapping functions and its application. J. Shaanxi Norm. Univ. (Nat. Sci. Ed.) 2021. [Google Scholar]
  47. Jiang, H.; Zhan, J.; Sun, B.; Alcantud, J.C. An MADM approach to covering-based variable precision fuzzy rough sets: An application to medical diagnosis. Int. J. Mach. Learn. Cybern. 2020, 11, 2181–2207. [Google Scholar] [CrossRef]
  48. Zhang, X.H.; Wang, J.Q. Fuzzy β-covering approximation space. Int. J. Approx. Reason. 2020, 126, 27–47. [Google Scholar] [CrossRef]
  49. Zhang, X.H.; Wang, J.Q.; Zhan, J.M.; Dai, J.H. Fuzzy measures and Choquet integrals based on fuzzy covering rough sets. IEEE Trans. Fuzzy Syst. 2021. [Google Scholar] [CrossRef]
Figure 1. The extension process of rough sets.
Table 1. A family of fuzzy sets on U.

C̃      x1     x2     x3     x4     x5     x6
C̃1    0.7    0.6    0.4    0.5    0.1    0.6
C̃2    0.5    0.3    0.3    0.7    0.4    0.8
C̃3    0.4    0.3    0.5    0.5    0.2    0.4
C̃4    0.3    0.7    0.8    0.2    0.6    0.1
C̃5    0.2    0.3    0.5    0.6    0.1    0.5
Table 2. Fuzzy β-neighborhood of x ∈ U (β = 0.5).

           x1     x2     x3     x4     x5     x6
Ñx1^0.5   0.5    0.3    0.3    0.5    0.1    0.6
Ñx2^0.5   0.3    0.6    0.4    0.2    0.1    0.1
Ñx3^0.5   0.2    0.3    0.5    0.2    0.1    0.1
Ñx4^0.5   0.2    0.3    0.3    0.5    0.1    0.4
Ñx5^0.5   0.3    0.7    0.8    0.2    0.6    0.1
Ñx6^0.5   0.2    0.3    0.3    0.5    0.1    0.5
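The entries of Table 2 are consistent with the widely used definition of a fuzzy β-neighborhood, Ñx^β(y) = min{C̃(y) : C̃ in the covering, C̃(x) ≥ β}; whether the paper uses exactly this definition should be checked against its preliminaries, so the sketch below is illustrative and all names are ours.

```python
# Fuzzy beta-neighborhoods from the fuzzy covering of Table 1 (minimal sketch).
# Assumed definition: N_x^beta = pointwise min of all C with C(x) >= beta.
cover = [
    [0.7, 0.6, 0.4, 0.5, 0.1, 0.6],  # C1 over x1..x6
    [0.5, 0.3, 0.3, 0.7, 0.4, 0.8],  # C2
    [0.4, 0.3, 0.5, 0.5, 0.2, 0.4],  # C3
    [0.3, 0.7, 0.8, 0.2, 0.6, 0.1],  # C4
    [0.2, 0.3, 0.5, 0.6, 0.1, 0.5],  # C5
]

def beta_neighborhood(cover, i, beta):
    """Pointwise min of all fuzzy sets whose membership at x_i is >= beta."""
    selected = [c for c in cover if c[i] >= beta]
    return [min(col) for col in zip(*selected)]

print(beta_neighborhood(cover, 0, 0.5))  # → [0.5, 0.3, 0.3, 0.5, 0.1, 0.6]
```

With β = 0.5 this reproduces every row of Table 2 (e.g. the printed first row Ñx1^0.5).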
Table 3. Fuzzy β-neighborhood of x ∈ U (β = 0.6).

           x1     x2     x3     x4     x5     x6
Ñx1^0.6   0.7    0.6    0.4    0.5    0.1    0.6
Ñx2^0.6   0.3    0.6    0.4    0.2    0.1    0.1
Ñx3^0.6   0.3    0.7    0.8    0.2    0.6    0.1
Ñx4^0.6   0.2    0.3    0.4    0.5    0.1    0.5
Ñx5^0.6   0.3    0.7    0.8    0.2    0.6    0.1
Ñx6^0.6   0.5    0.3    0.3    0.5    0.1    0.6
Table 4. The positive ideal A+, negative ideal A− and integrated ideal A.

       x1     x2     x3     x4     x5     x6
A+    0.7    0.7    0.8    0.7    0.6    0.8
A−    0.2    0.3    0.3    0.2    0.1    0.1
A     0.41   0.39   0.48   0.54   0.29   0.52
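The ideals in Table 4 are consistent with taking pointwise maxima and minima over the fuzzy sets of Table 1 for A+ and A−, respectively; the integrated ideal A additionally involves criterion weights that are not reproduced in this excerpt, so it is not recomputed here. A minimal check, with variable names of our choosing:

```python
# Positive/negative ideal fuzzy sets derived from the covering of Table 1.
cover = [
    [0.7, 0.6, 0.4, 0.5, 0.1, 0.6],
    [0.5, 0.3, 0.3, 0.7, 0.4, 0.8],
    [0.4, 0.3, 0.5, 0.5, 0.2, 0.4],
    [0.3, 0.7, 0.8, 0.2, 0.6, 0.1],
    [0.2, 0.3, 0.5, 0.6, 0.1, 0.5],
]

# A+ takes the best (largest) membership per alternative, A- the worst.
a_plus = [max(col) for col in zip(*cover)]
a_minus = [min(col) for col in zip(*cover)]

print(a_plus)   # → [0.7, 0.7, 0.8, 0.7, 0.6, 0.8], the A+ row of Table 4
print(a_minus)  # → [0.2, 0.3, 0.3, 0.2, 0.1, 0.1], the A- row of Table 4
```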
Table 5. The lower and upper approximations of A+, A− and A.

                x1     x2     x3     x4     x5     x6
C̃Ñx^β+(A+)    0.77   0.77   0.71   0.72   0.89   0.71
C̃Ñx^β(A+)     0.49   0.49   0.71   0.49   0.36   0.49
C̃Ñx^β+(A−)    0.55   0.55   0.55   0.55   0.55   0.55
C̃Ñx^β(A−)     0.01   0.01   0.01   0.01   0.01   0.01
C̃Ñx^β+(A)     0.72   0.63   0.69   0.72   0.68   0.71
C̃Ñx^β(A)      0.08   0.08   0.08   0.08   0.08   0.08
Table 6. The positive ideal fuzzy distance and negative ideal fuzzy distance of x_i.

       x1     x2     x3     x4     x5     x6
d_i+  0.23   0.275  0.325  0.21   0.245  0.205
d_i−  0.12   0.075  0.105  0.12   0.1    0.115
Table 7. Closeness coefficient of x_i.

      x1     x2     x3     x4     x5     x6
ρ    0.343  0.214  0.244  0.365  0.289  0.359
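The closeness coefficients of Table 7 follow from the distances in Table 6 via the TOPSIS-style relative closeness ρ_i = d_i− / (d_i+ + d_i−), agreeing with the printed values up to the paper's rounding (e.g. x4 gives 0.3636 vs. the reported 0.365). A short sketch, with names of our choosing:

```python
# Relative closeness from positive/negative ideal distances (Table 6).
d_plus = [0.23, 0.275, 0.325, 0.21, 0.245, 0.205]
d_minus = [0.12, 0.075, 0.105, 0.12, 0.10, 0.115]

# rho_i = d_i^- / (d_i^+ + d_i^-): larger means closer to the positive ideal.
rho = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]

# Rank alternatives best-first.
ranking = sorted(range(len(rho)), key=lambda i: -rho[i])
print([f"x{i+1}" for i in ranking])  # → ['x4', 'x6', 'x1', 'x5', 'x3', 'x2']
```

The resulting order x4 ≻ x6 ≻ x1 ≻ x5 ≻ x3 ≻ x2 matches the multi-granulation row of Table 11.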
Table 8. The lower and upper approximations of A+, A− and A.

                x1      x2      x3      x4      x5      x6
C̃Ñx^β+(A+)    0.2304  0.1764  0.16    0.1344  0.4096  0.21
C̃Ñx^β(A+)     1       1       1       1       1       1
C̃Ñx^β+(A−)    0.01    0.0324  0.0225  0.01    0.0576  0.01
C̃Ñx^β(A−)     0.53    0.91    1       0.41    0.53    0.63
C̃Ñx^β+(A)     0.0973  0.0548  0.0576  0.0729  0.1476  0.0676
C̃Ñx^β(A)      1       1       1       1       0.8660  1
Table 9. The positive ideal fuzzy distance and negative ideal fuzzy distance of x_i.

       x1      x2      x3      x4      x5      x6
d_i+  0.0666  0.0608  0.0512  0.0308  0.762   0.0712
d_i−  0.2787  0.0562  0.0176  0.3265  0.213   0.2138
Table 10. Closeness coefficient of x_i.

      x1      x2      x3      x4      x5      x6
ρ    0.8071  0.4803  0.2558  0.9138  0.2185  0.7502
Table 11. Comparison of different methods.

Methods                             Ranking of Alternatives
WA method [17]                      x4 ≻ x6 ≻ x3 ≻ x1 ≻ x2 ≻ x5
OWA method [17]                     x4 ≻ x3 ≻ x6 ≻ x2 ≻ x1 ≻ x5
TOPSIS method [18]                  x4 ≻ x6 ≻ x3 ≻ x1 ≻ x2 ≻ x5
Method based on (φ,T) [31]          x4 ≻ x6 ≻ x3 ≻ x5 ≻ x1 ≻ x2
Method based on (C̃R^Om, C̃^Om)      x4 ≻ x6 ≻ x1 ≻ x5 ≻ x3 ≻ x2
Method based on (C̃R^O2, C̃^O2)      x4 ≻ x1 ≻ x6 ≻ x2 ≻ x3 ≻ x5
Table 12. Comprehensive comparison of all decision making results based on the (C̃R^O2, C̃^O2) model.

Different Value of α    Ranking of Alternatives
α = 0.1                 x4 ≻ x1 ≻ x6 ≻ x2 ≻ x5 ≻ x3
α = 0.2                 x4 ≻ x1 ≻ x6 ≻ x2 ≻ x5 ≻ x3
α = 0.3                 x4 ≻ x1 ≻ x6 ≻ x2 ≻ x5 ≻ x3
α = 0.4                 x4 ≻ x1 ≻ x6 ≻ x2 ≻ x5 ≻ x3
α = 0.5                 x4 ≻ x1 ≻ x6 ≻ x2 ≻ x3 ≻ x5
α = 0.6                 x4 ≻ x1 ≻ x6 ≻ x5 ≻ x2 ≻ x3
α = 0.7                 x4 ≻ x1 ≻ x6 ≻ x5 ≻ x2 ≻ x3
α = 0.8                 x4 ≻ x1 ≻ x6 ≻ x5 ≻ x2 ≻ x3
α = 0.9                 x4 ≻ x1 ≻ x6 ≻ x5 ≻ x2 ≻ x3
Table 13. Comprehensive comparison of all decision making results.

Different Value of α    Methods                           Ranking of Alternatives
α = 1                   method based on (φ,T)             x4 ≻ x6 ≻ x2 ≻ x1 ≻ x3 ≻ x5
α = 1                   method based on (φ,T)             x4 ≻ x6 ≻ x3 ≻ x1 ≻ x5 ≻ x2
α = 1                   method based on (C̃R^O2, C̃^O2)    x4 ≻ x1 ≻ x6 ≻ x5 ≻ x3 ≻ x2
Table 14. Table for expert 1.

U     c11    c12    c13    c14    c15
x1   0.82   0.71   0.46   0.55   0.52
x2   0.73   0.32   0.65   0.58   0.84
x3   0.56   0.68   0.36   0.78   0.44
x4   0.53   0.48   0.74   0.65   0.91
x5   0.66   0.53   0.57   0.72   0.43
x6   0.28   0.76   0.52   0.45   0.77
Table 15. Table for expert 2.

U     c21    c22    c23    c24    c25
x1   0.78   0.56   0.67   0.26   0.59
x2   0.35   0.77   0.49   0.69   0.55
x3   0.51   0.37   0.79   0.42   0.67
x4   0.85   0.68   0.57   0.75   0.48
x5   0.58   0.34   0.73   0.81   0.43
x6   0.53   0.75   0.46   0.59   0.71
Table 16. Table for expert 3.

U     c31    c32    c33    c34    c35
x1   0.56   0.75   0.39   0.67   0.48
x2   0.76   0.36   0.68   0.45   0.55
x3   0.84   0.55   0.35   0.58   0.65
x4   0.43   0.53   0.74   0.69   0.63
x5   0.59   0.71   0.65   0.48   0.55
x6   0.37   0.66   0.56   0.42   0.78
Table 17. Table for Ñxi^0.6 of expert 1.

           x1     x2     x3     x4     x5     x6
Ñx1^0.6   0.71   0.32   0.56   0.48   0.53   0.28
Ñx2^0.6   0.46   0.65   0.36   0.53   0.43   0.28
Ñx3^0.6   0.55   0.32   0.68   0.48   0.53   0.45
Ñx4^0.6   0.46   0.58   0.36   0.65   0.43   0.45
Ñx5^0.6   0.55   0.58   0.56   0.53   0.66   0.28
Ñx6^0.6   0.52   0.32   0.44   0.48   0.43   0.76
Table 18. Table for Ñxi^0.6 of expert 2.

           x1     x2     x3     x4     x5     x6
Ñx1^0.6   0.67   0.35   0.51   0.57   0.58   0.46
Ñx2^0.6   0.26   0.69   0.37   0.68   0.34   0.59
Ñx3^0.6   0.59   0.49   0.67   0.48   0.43   0.46
Ñx4^0.6   0.26   0.35   0.37   0.68   0.34   0.53
Ñx5^0.6   0.26   0.49   0.42   0.57   0.73   0.46
Ñx6^0.6   0.56   0.55   0.37   0.48   0.34   0.71
Table 19. Table for Ñxi^0.6 of expert 3.

           x1     x2     x3     x4     x5     x6
Ñx1^0.6   0.67   0.36   0.55   0.53   0.48   0.42
Ñx2^0.6   0.39   0.68   0.35   0.43   0.59   0.37
Ñx3^0.6   0.48   0.55   0.65   0.43   0.55   0.37
Ñx4^0.6   0.39   0.45   0.35   0.63   0.48   0.42
Ñx5^0.6   0.39   0.36   0.35   0.53   0.65   0.56
Ñx6^0.6   0.48   0.36   0.55   0.53   0.55   0.66
Table 20. Comparison of different methods.

Methods                            Ranking of Alternatives
Zhan's method [47] (t-norm I)      x4 ≻ x1 ≻ x6 ≻ x2 ≻ x3 ≻ x5
Zhan's method [47] (t-norm II)     x4 ≻ x1 ≻ x5 ≻ x2 ≻ x3 ≻ x6
Zhan's method [47] (t-norm III)    x4 ≻ x2 ≻ x1 ≻ x3 ≻ x5 ≻ x6
Atef's method [48]                 x4 ≻ x1 ≻ x6 ≻ x2 = x3 = x5
Our method based on O2             x4 ≻ x5 ≻ x3 ≻ x2 ≻ x1 ≻ x6
Our method based on O3             x4 ≻ x5 ≻ x1 ≻ x2 ≻ x3 ≻ x6
Our method based on OmM            x4 ≻ x5 ≻ x2 ≻ x1 ≻ x3 ≻ x6
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wen, X.; Zhang, X. Overlap Functions Based (Multi-Granulation) Fuzzy Rough Sets and Their Applications in MCDM. Symmetry 2021, 13, 1779. https://0-doi-org.brum.beds.ac.uk/10.3390/sym13101779
