Review

A Proposition for Combining Rough Sets, Fuzzy Logic and FRAM to Address Methodological Challenges in Safety Management: A Discussion Paper

Department of Mechanical Engineering, École de Technologie Supérieure (ÉTS), 1100 Notre-Dame St W, Montreal, QC H3C 1K3, Canada
*
Author to whom correspondence should be addressed.
Submission received: 24 September 2020 / Revised: 2 November 2020 / Accepted: 7 November 2020 / Published: 9 November 2020

Abstract

In recent years, the focus in safety management has shifted from failure-based analysis towards a more systemic perspective, redefining a successful or failed performance as a complex and emergent event rather than as the conclusion of singular errors or root causes. This paradigm shift has also necessitated the introduction of innovative tools capable of capturing the complex and dynamic nature of modern sociotechnical systems. In our research, we argued at previous stages for adopting a more systemic and human-centric perspective to evaluate the context of aircraft de-icing operations. The Functional Resonance Analysis Method (FRAM) was applied in the first stage for this purpose. Subsequently, fuzzy logic was combined with FRAM in the second stage to provide a quantified representation of performance variability. Fuzzy logic was used as a quantification tool suitable for computing with natural language. Several limitations were found in the data collection and rule generation process for the first prototype. In the third phase, the model was further improved by integrating rough sets as a data-mining tool to generate and reduce the size of the rule base and classify outcomes. In this paper, we reflect on the three stages of the project and discuss in a qualitative manner the challenges and limitations faced in the development and application of the models. A summary of the advantages and disadvantages of the three models, as experienced in our case, is presented at the end. The objective is to present an outlook for future studies to address methodological limitations in the study of complex sociotechnical systems.

1. Introduction

Driven by the human need to ensure safety and to be free from harm, significant research efforts have always been directed towards improving the performance of sociotechnical systems. Because of this drive, the perspective on what constitutes safe and adequate performance has evolved over the years since the introduction of the concepts of risk and safety management, reshaping the views of practitioners and researchers and redefining how safety, risk and performance are understood. In the early days of reliability assessment, the focus was mainly directed towards evaluating systems by examining the performance of their parts, adopting a more mechanistic perspective and focusing on the technological aspect of things [1]. The classical view considered any system decomposable into its parts, well defined and understood [1]. Operators performed assigned tasks as required by instructions and procedures, and the design phase was expected to account for every possible contingency and then implement barriers and protection mechanisms to prevent the occurrence of any adversity. However, since systemic performance cannot be decoupled from human performance, even the simplest and most closed systems rely largely on human interactions, whether related to maintenance, inspection, repair work or general interactions with the environment. It was therefore soon realized that the assessment of systemic performance had to include the human factor, which ushered in the age of human error and prompted the introduction of new tools to comply with the newly emerging requirements. Later, the distinction between collective and individual human performance was made, and a new understanding of systemic performance led to the inclusion of the organizational factor as well. This directed research efforts towards new concepts and highlighted the need to innovate and provide modern and adequate tools. During that era, the Man–Technology–Organization (MTO) classification [2] was introduced, opening the door for the emergence of several methods to provide the needed solutions. In recent years, new tools have been introduced, building on the years-long progression of systems’ assessment to provide a much-needed systemic perspective, such as the Systems Theoretic Accident Model and Process (STAMP) [3], the Functional Resonance Analysis Method (FRAM) [4] and the Resilience Analysis Grid (RAG) [5]. Consequently, the discipline of resilience engineering emerged and proposed a new outlook and new solutions to address the challenges facing both researchers and practitioners. Resilience engineering has been a popular research topic in recent years, initiating numerous studies addressing the need to innovate and leading to the formation of a significant body of literature and research projects [6,7].
This research project started with the objective of evaluating the performance of aircraft de-icing operations from a systemic and human-centric perspective. Generally, in the aviation sector, de-icing operations are carried out by a highly reliable and complex system to ensure compliance with strict procedures. Despite the high dynamicity and the extreme working conditions, the system in place in most countries, and especially in Canada, performs very well. De-icing-related incidents are rare and tend to involve smaller airports and aircraft types rather than larger ones [8]. Although larger airports have a high traffic volume, the strict procedures and implemented safety nets ensure that the operations are carried out safely and reliably. Accordingly, the statistics on de-icing incidents might be insufficient for statistical analyses and probabilistic methods, given the rarity of such events. Such incidents are unique in their development, which makes generalizations to other contexts difficult. The accidents that may evolve in such systems are therefore very complex by nature and would have to be the result of combinations of highly improbable events and performance conditions. The severity of such accidents is significant and the costs in human lives and material damage could be very high, which is the main reason why these systems undergo such scrutiny to ensure high levels of safety, security and performance. Nonetheless, the aircraft de-icing industry faces several challenges in the coming years, which should be addressed in future research projects to ensure that these systems continue to perform as desired. The volume of air traffic prior to the COVID-19 pandemic increased worldwide by 5–6% annually [9] and is expected to continue its growth after the resolution of the pandemic. Consequently, there is a continuous requirement for larger airports with higher capacities and for more innovation and technological advancements. The gradually increasing utilization of centralized de-icing pads to facilitate the de-icing of a larger number of airplanes simultaneously requires precise coordination and clear communication between the flight crew, Air Traffic Control (ATC), the de-icing team and the de-icing tower [10]. Significant research and development projects aim to introduce novel technologies to meet the rising demand, such as the Ground Ice Detection System (GIDS), the application of drones, or modern and automated de-icing apparatus. With the advent of the fourth industrial revolution and its inevitable expansion to most sectors and domains, it becomes imperative to innovate and develop adequate assessment tools to be well equipped and prepared for the implementation of novel technologies. The very nature of such systems is to evolve and grow in complexity. Such systems become intractable and difficult to comprehend in their entirety, and the resulting gaps in knowledge of systemic behaviour can lead to the formation of loopholes in the safety barriers and defense mechanisms, from which adversity could emerge.
In this paper, we aim to reflect on the progress achieved so far in our research and summarize in a qualitative manner the advantages and limitations experienced in the previous stages of our project. To the best of our knowledge, projects concerned with the evaluation and development of aircraft ground de-icing operations from a systemic and humanistic perspective are rare. So far, we have argued for the need to adopt a systemic perspective in complex systems’ analyses generally, and more specifically in the context of aircraft de-icing. Eventually, we proposed a modified model of FRAM combining fuzzy logic, at first as a quantification tool, and then rough sets as a data classification tool. This paper looks back at and discusses the combined model, starting with a brief overview of the different disciplines in the next section. The overview presents the three axes of the project and defines their relation, providing the background for each approach in the following section. The three models are then described briefly in the Methods section and discussed to present the main findings that could be generalized to other contexts. This should allow for drawing helpful conclusions and provide an outlook for future studies.

2. Background and Motivation

2.1. The Evolution of Systems’ Analysis: An Argument for Adopting a Systemic Perspective

Traditionally, classical safety analysis methods, starting with Heinrich’s Domino Model of accident causation in 1931, focused on causal and linear relationships [11]. Heinrich’s Domino Model introduced a paradigm shift in safety analysis, moving the focus from unsafe conditions to human error. Accidents and incidents were described mainly as a chain of discrete events initiated by a root cause and occurring consecutively, leading to undesirable outcomes [1]. Preventive measures therefore focused on breaking the chain of events and avoiding errors and malfunctions that could set the chain in motion. This approach was later carried over to other methods such as the Fault Tree Analysis (FTA) or the Failure Modes and Effects Analysis (FMEA), which are still popular tools among analysts and decision makers to this day. These tools have served well in securing systems and ensuring their safety and adequate performance. They have become established and acknowledged in various industrial contexts due to their years-long tradition of application.
As systems started to gain complexity in the second half of the 20th century with the introduction of digital technology and the information revolution, new challenges and types of accidents emerged over time. The reliability of technology was no longer the main challenge facing safety analysts; rather, it was the social component. It was realized that as soon as animate objects become part of the system, the behaviour of the system is no longer easily predictable [12]. The evaluation of any system decoupled from its human aspects could therefore result in insufficient knowledge about its behaviour and the evolution of adverse outcomes. As stated by Adriaensen et al., the majority of, if not all, traditional safety analysis methods require a combination of different types of methods “to cover all technical and human performance-related hazards and risks” [13]. The analysis of such issues in isolation from each other might produce insufficient results about their integration on a higher-order level of human–machine interaction [13]. Complex nonlinear interactions could be practically invisible to such tools or not immediately comprehensible [14]. Complex systems are tightly coupled [14], more rigid and time-dependent, and require more precision. The systemic components cannot be easily substituted, and the failure of one component reflects significantly on the rest of the system. Multiple factors could combine in complex ways, leading to failures and accidents. Since most accidents can be attributed to the human factor, a purely technological perspective is not sufficient. The human factor adds to the uncertainty, and performance variability becomes inevitable. In reality, work is always underspecified and never carried out as imagined or prescribed [15]. This variability in performance is a natural characteristic and is even necessary at times to ensure successful outcomes. Consequently, there is always a difference between work-as-imagined (WAI) and work-as-done (WAD), between theory and practice [15]. While the inclusion of the human factor was recognized from the start, it was viewed mainly in terms of unsafe acts, human errors or cognitive shortcomings.
Epidemiological models emerged in the 1980s as a consequence of the continuous search to improve and redefine the understanding of safety. These models adopted a more complex view of adverse events and explained them as a combination of several influential factors and conditions, which can be active or latent. The human factor was hereby considered more deeply, dividing it into two main sets of conditions: individual human factors, which cause active failures at the sharp end of operations, and organizational factors, which reflect the impact of latent organizational and cultural influences [16,17]. Combined with the performance or environmental conditions present at the time of execution, organizational factors can lead to active human failure if no adequate protection layers and barriers are in place [1]. Reason’s perspective on accident causation is best represented in the Swiss Cheese Model, which illustrates accidents as emergent events resulting from loopholes in the defense mechanisms and barriers of the system [16]. However, the philosophy adopted in epidemiological models was still focused on sequential cause–effect and linear relationships [4]. The Swiss Cheese Model presented a static snapshot of a mainly complex and dynamic context, which could result in overlooking safety loopholes [1].
Rasmussen argued in the 1990s that a systems approach based on functional abstraction rather than structural decomposition is needed to model and design safer sociotechnical systems [18]. Rasmussen classified six levels of a sociotechnical system: Government, Regulators/Associations, Company, Management, Staff and Work [18]. An adequate analysis of dynamic working environments cannot rely on traditional task analysis and should rather assess the issues on several systemic levels in an interdisciplinary manner [18]. To provide adaptive systems, it is imperative that all systemic levels are interactive. The implications of implementing change on one level should be assessed in relation to all remaining levels [18]. The hierarchical classification presented by Rasmussen was analyzed by Leveson, who noted that the model focuses solely on the operational aspects [3]. Influential factors are still modeled at each level as an event chain, which is then linked to the event chain on the lower level and so on. This approach, as stated by Leveson, still assumes the existence of a root cause for accidents and adverse outcomes and describes events in terms of causal relationships. Leveson concludes that causality models are no longer adequate to describe modern sociotechnical systems, which must be modelled as a whole [3]. Leveson also considers safety to be a control problem, i.e., the emergent properties of a system are to be controlled by imposing constraints on the behaviour of that system and the interactions between its components during its design and operation [3].
The perspective of Rasmussen is shared by Erik Hollnagel, who argues as well that systems cannot be defined as a collection of separate parts only, but as “a set of coupled or mutually dependent functions” [19]. Starting with the framework of Cognitive Systems Engineering, Hollnagel and Woods suggested a new paradigm and introduced a gradual shift towards the principle of SAFETY-II. Hollnagel considered the Cognitive Reliability and Error Analysis Method (CREAM) to be a short step to FRAM, which provides a functional view of the system and focuses on performance variability and its combinations. The representation of systems in terms of functional couplings forms the basis upon which Hollnagel introduced the Functional Resonance Analysis Method (FRAM), which served as the main framework applied in our project. In FRAM, functions are divided into three categories: technological, organizational and human, in accordance with the MTO classification method [19]. FRAM functions are not merely hierarchical; rather, they are objective- or task-oriented and can accordingly expand across systemic hierarchical levels. The difference between FRAM and other systemic tools such as STAMP is that, in addition to negative outcomes, FRAM adopts a more resilience-focused perspective considering both negative and positive outcomes [7].
Thus, SAFETY-II looks at what goes right in addition to what goes wrong. This is especially helpful for drawing conclusions when faced with a scarcity of data and statistics, since what goes right is the norm and constitutes most outcomes. Modern critical systems such as aviation and nuclear power generation are high-reliability systems due to the severe and possibly disastrous consequences of incidents in such systems. It was therefore imperative to direct more attention to securing these systems through research and development and implementing regulations intended to eliminate any possibility for the occurrence of accidents. Consequently, the occurrence of accidents in such domains has become rare to a degree that providing sufficient statistics and databases has become difficult. The rarity and uniqueness of such events have made it difficult even to draw meaningful and generalizable conclusions. In high-reliability systems, it would therefore make more sense to adopt a SAFETY-II approach considering both the negative and positive factors. This generates more data for analysis and helps strengthen the system by maintaining the performance conditions that ensure successful outcomes, thus making the system more resilient. The resilience of any given system can be defined as its intrinsic ability to adapt and adjust its performance to fluctuations and unexpected disturbances to maintain its output within acceptable margins [20]. Resilience Engineering focuses on the full spectrum of outcomes, from extremely positive to extremely negative. Performance variability, whether individual or collective, is a natural and inherent characteristic of any sociotechnical system and can be beneficial for coping with these disturbances [15]. The roots of Resilience Engineering lie in Human Factors and Safety Management, and the principle of performance variability is strongly linked to the human factor, considering the human as the main component of any sociotechnical system. Resilience Engineering can therefore be described with the following four principles [20]:
  • There is always an underspecification of actual performance conditions, i.e., the difference between WAI and WAD.
  • The principle of performance variability is the main reason why outcomes deviate from the norm or expectations.
  • Retrospective analyses are not sufficient, and proactive assessments are needed to anticipate and be well prepared for adversity.
  • The drive to maintain highly efficient and productive systems cannot be decoupled from safety, which should be incorporated into the business planning and core processes as a prerequisite for productivity.
The continuous drive to make systems function more efficiently and reliably necessitates innovation and thus the integration of smart technologies into different types of industrial environments. The continuously increasing interconnection of systems and the widening of networks, the backup of data into clouds, the internet of things, the generation of large data sets and the gradually increasing reliance on human–machine interfaces all lead to increased human–machine interaction, complexity and intractability [21]. Considering the emergent properties of complex systems, all of this could introduce new types of risks and performance-impairing factors that are difficult to anticipate in the design phase. The objective of risk and safety assessment models (Figure 1) is to evaluate and conceptualize the characteristics of accidents and incidents to provide an explanation for their development. They are therefore useful in all phases from system development to system deployment and operation. Applying classical tools in a retrospective manner and focusing solely on lessons learned from previous events and experiences will eventually result in missing loopholes, from which accidents can emerge [22]. Retrospective analysis is usually easier to conduct than proactive or predictive analyses since it deals with defined conditions and well-known events. The issue with retrospective analysis is that it limits the scope to what has already transpired, drawing a single definitive path for the development of an accident and ignoring the possibilities for alternative outcomes [13]. Looking at events in retrospect might give rise to confirmation bias and lead the analyst to believe a one-sided explanation of the event in question [13]. This could result in overlooking other explanations for the development of the event, since the sequence of events provides only one narrative. In a proactive analysis, however, assumptions must be formulated to anticipate possible sources of risks and adversity [22]. Innovative systemic models have therefore become necessary to understand the new structures and better anticipate the behaviour of complex sociotechnical systems [3].

2.2. Quantitative and Qualitative Methods: Why Fuzzy Logic?

From a methodological perspective, research activities adopt mainly either qualitative or quantitative methods [23]. Mathematical analyses and quantification tools offer appropriate means to measure phenomena more objectively and provide intersubjective results. Experimental methods using quantitative measures generally aim at verifying formulated hypotheses over specific cases to deduce generalizations that can be applicable to different contexts [23]. The aim hereby is to quantify the observed instances objectively and express them in terms of numbers. This is usually achieved through the reduction of the whole into its parts to simplify the system and define linear relationships that can facilitate the analysis. However, linearizing systemic relationships and relying solely on quantitative measures could result in ignoring certain characteristics of the system of interest, such as the significance of the measured values and their meanings for decision makers and analysts. Reducing a system to its parts might be efficient and produce more straightforward results that can be more intersubjective and comprehensible for analysts and stakeholders. However, it can also prevent the analyst from understanding complex and dynamic relationships, which are emergent by nature and only visible in a more holistic approach. Additionally, not all phenomena and contexts are easily quantifiable [12]. The determination of a precise magnitude for qualitative and uncertain variables is difficult and sometimes not even possible in terms of discrete mathematics. The problem then becomes the inability to collect precise data and provide quantifiable analysis results [12].
Qualitative assessments can be more suitable for understanding complex relationships, considering that even purely quantitative results are meaningless without defining how they relate to their context and what interpretations and conclusions can be drawn [23]. Using a qualitative approach allows for a more descriptive evaluation over time to clarify dependencies among variables and describe dynamic and more complex relationships. However, here as well, several shortcomings can be noticed, which might limit the significance of purely qualitative results for some decision makers and practitioners. In comparison, since quantitative approaches generally define simple relationships, the data collection process can be easier and more efficient, such as measuring distance or temperature. The same cannot be claimed for qualitative measures, which can be vague by nature and whose interpretation requires a deeper understanding of systemic functionality. The interpretation of the results depends on the context in question, and different people can perceive the magnitude or meaning of words differently. Linguistic descriptors or scales can only capture the relative but not the precise magnitude of the measured value [24].
The use of quantitative methods in safety analysis has so far been limited. This might have several reasons, such as the lack of an internal causation model in such methods [13]. Another limitation is the inability to properly quantify complex variables in a reliable manner, which has led to a focus on simple systems and micro-level applications in ergonomics and safety management [13]. Another aspect is the lack of sufficient data and statistics in some contexts, which makes the application of probabilistic approaches difficult and in some cases impossible. Despite these limitations, several propositions to combine FRAM with quantification mechanisms were presented in recent years [7], most notably by Patriarca et al. (2017), who presented a semi-quantitative approach combining Monte Carlo simulation and FRAM [25]. The Monte Carlo simulation has been used to assign probabilities of dependencies within the framework of complexity-thinking models [25]. Bayesian networks, on the other hand, allow for a better representation of complex relationships than logical operators and are therefore suitable for evaluating tightly coupled systems [26]. However, Bayesian networks still rely mainly on causality and tractability [13]. The degree of uncertainty in the model depends on the quality of the provided data and the degree of knowledge about the present performance conditions.
Applied in a proper manner, the combination of qualitative and quantitative methods is promising and can generate more representative and reliable results. Combinations of the two methods are already being applied in the aviation sector, supporting qualitative methods such as the FMEA with quantification tools [13]. Qualitative analysis methods such as FRAM and STAMP are descriptive by nature and rely mainly on the use of linguistic ordinal scales to assess the system of interest. A solution that combines such approaches with quantitative methods while maintaining most of their advantages could be reached through the integration of fuzzy logic [27]. The adoption of fuzzy logic into safety analysis and management as a possible way to handle uncertainty remains an understudied topic to be addressed in future research. Fuzzy logic resembles human reasoning and can therefore be suitable for quantifying such variables and vague contexts [27]. In contrast to theoretical, idealistic concepts, real-life processes are characterized by ambiguity and vagueness. They are never as imagined and are therefore underspecified in the theoretical model. From an operational perspective, even in the most precise applications of procedures and regulations, the operational execution in practice always deviates from the norms and defined standards (WAI and WAD). This deviation is not exceptional or abnormal; rather, it is an inherent characteristic of real-life applications and is even necessary and beneficial in many cases to ensure the resilience of the system.
In 1965, Lotfi Zadeh presented a formalized definition of many-valued logic and laid the foundations of Fuzzy Set Theory and Fuzzy Logic in his article “Fuzzy Sets” [27]. Since then, positions on fuzzy set theory in the scientific community have been divided. The theory was faced with harsh criticism, and the advantages of fuzzy logic over conventional methods in addressing uncertainty were doubted in the early years [28,29]. The notion that “fuzziness” represented a type of uncertainty distinct from probability was rejected, considering that probability theory provides complete and optimal tools to solve problems and manage uncertainty [30]. Early criticism of fuzzy logic also rejected the new concept for compromising with vagueness and radically and unjustifiably shifting from traditional formalism, sacrificing precise, formal rules of inference and consistency of results [28]. Furthermore, it was argued that fuzzy logic did not simplify things or avoid the complexities of regimentation; rather, it added more methodological complexity itself [28].
While the adversaries of fuzzy logic believed it to be overrated and questioned its validity, its advocates considered the theory to be ground-breaking and a more accurate representation of reality [31,32]. Zadeh explained in detail that such views were mainly derived from a misunderstanding of fuzzy logic and from a failure to realize the importance of the concept of linguistic variables [33]. In classical set theory, binary logic defines any element as either a member or not a member of any given set; a statement is either true or false [34]. Fuzzy logic, as a generalization of classical set theory, extends this definition, and elements can thus belong to more than one fuzzy set with a certain degree of truth through the concept of the membership function [24]. The rationale for this generalization is based on two main principles: the ability to construct better models of reality and the ability to exploit the tolerance for imprecision and replace numbers with words [35]. The ability to better represent reality derives from the fact that information granules are in reality fuzzy, i.e., truth is not black or white but grey. The second principle allows for the replacement of numbers with words through the concept of linguistic variables, which are labels of fuzzy sets with specified membership values [35]. This allows for the design of more cost-efficient and simplified systems relying on comprehensible natural language [35]. The concept of fuzzy granulation and the use of linguistic variables are unique features of fuzzy logic [35,36]. Methods and approaches that rely on crisp logic to handle uncertainty, such as rough set theory or the Dempster–Shafer theory, fail “to reflect the fact that in much, perhaps most, of human reasoning and concept formation the granules are fuzzy (f-granular) rather than crisp” [36].
The application of conventional quantitative methods for system analyses is inappropriate in the case of humanistic systems due to the principle of incompatibility: whenever the complexity of any given system increases, the ability to precisely understand its behaviour decreases, i.e., it becomes more intractable [12]. Human reasoning relies on linguistic variables rather than on numbers. Linguistic variables are labels of fuzzy sets, which are classes of objects in which the transition from membership to non-membership occurs gradually rather than abruptly [12]. Human reasoning does not resemble machine processing; it relies on indiscrete logic with continuous functions and approximates and summarizes information in the form of labels, words and sentences. Fuzzy logic therefore provides a method to model vague contexts accurately enough to deliver reliable and representative results. It is easy to implement and has therefore been preferred and applied in many fields over the past decades. Fuzzy set theory is consistent, and despite the distinctions, its relationship to probability is present in the form of possibility theory [37]. The three concepts of probability theory, rough set theory and fuzzy set theory are non-contradictory and present three different approaches to managing different types of uncertainty [38].
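To make the notions of linguistic variables and graded membership discussed above more concrete, the short sketch below maps a single crisp score to degrees of membership in overlapping fuzzy sets. The set labels, the 0–10 scale and the triangular breakpoints are illustrative assumptions only, not the partitions used in our models.

```python
# A minimal sketch (hypothetical sets and breakpoints) of how a crisp score is
# mapped to degrees of membership in overlapping fuzzy sets (a linguistic variable).

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic variable "condition quality" on a 0-10 scale (assumed partition).
quality_sets = {
    "inadequate": lambda x: triangular(x, -1.0, 0.0, 5.0),
    "acceptable": lambda x: triangular(x, 2.5, 5.0, 7.5),
    "adequate":   lambda x: triangular(x, 5.0, 10.0, 11.0),
}

score = 6.0  # a crisp expert rating
memberships = {label: round(mf(score), 2) for label, mf in quality_sets.items()}
print(memberships)  # {'inadequate': 0.0, 'acceptable': 0.6, 'adequate': 0.2}
```

The same score thus belongs partially to two neighbouring sets, which is precisely the gradual transition from membership to non-membership described above.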

2.3. Rough Sets: An Approach for Data Classification

For the development of a predictive assessment model, several techniques and methods have been applied over the years, mostly probability-based multivariate statistical methods such as logistic regression and discriminant analysis, which can be used to predict the probability of a specific outcome for a given set of input variables. While such methods can be helpful in data analysis given a large set of quantitative data, the qualitative judgement of the analyst is still required to understand the influential relationships among variables and interpret the provided results. An entirely objective approach to data analysis is therefore impossible, since subjectivity and biases are still present in the judgement provided by the experts. The objective is therefore to minimize the subjectivity and bias in handling the input data as adequately as possible and to work towards a more objective and standardized framework. Rough Set Theory (RST) provides a mathematical framework adequate for the classification of imperfect and uncertain information by discovering patterns and relationships in archived and historical data [39,40,41]. Through the application of several search algorithms to analyze input data provided by experts, RST is capable of automatically and objectively identifying patterns and filtering and classifying data (whether quantitative or qualitative) [40]. The subjectivity is therefore limited to the input data provided by the experts and to the selection and characterization of the classification method, not to the classification process itself.
RST was proposed by Zdzislaw Pawlak in 1982 and has since been applied in several domains and fields, in which it has proven to be very useful in filtering and classifying large data sets. The main idea in RST is the assumption that some information can be associated with every object in the universe of discourse [42]. Objects characterized by the same information, i.e., the same values of the considered attributes, are indiscernible; when indiscernible objects belong to different decision classes, their class cannot be determined from those attributes alone. In contrast to crisp or classical sets, which are precise and have clear boundaries, a rough set has boundary-line cases, in which objects cannot be classified with certainty as members of the set or of its complement given the provided information [42]. Therefore, the main set in RST is defined by a pair of precise sets, called the lower and the upper approximation [43]. The lower approximation consists of all objects that surely belong to the main set, while the upper approximation contains the objects that possibly belong to the main set. The difference between the upper and the lower approximation is then defined as the boundary region [44]. A set in RST is called rough if the boundary region is not empty [44]. The indiscernibility relation and the principle of approximation form the mathematical foundation of rough set theory [44]. Through the provision of efficient algorithms and the principle of indiscernibility, data sets can be scanned to identify hidden patterns, classified and reduced to eventually generate a minimal but accurate and efficient rule base [40]. The generated rule base offers easily understandable and straightforward results.
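As a minimal illustration of these definitions, the sketch below computes the lower and upper approximations and the boundary region for a toy decision table. The attribute names and values are invented for illustration and are not taken from our de-icing dataset.

```python
# A minimal sketch (toy data, hypothetical attributes) of rough set lower and
# upper approximations built from the indiscernibility relation.

from collections import defaultdict

# Each object: (condition attribute values, decision class)
table = [
    ({"weather": "harsh", "training": "adequate"},   "non-variable"),
    ({"weather": "harsh", "training": "adequate"},   "variable"),      # conflicts with object 0
    ({"weather": "mild",  "training": "adequate"},   "non-variable"),
    ({"weather": "harsh", "training": "inadequate"}, "variable"),
]

# Group objects into indiscernibility classes (identical attribute values).
blocks = defaultdict(list)
for idx, (attrs, _) in enumerate(table):
    blocks[tuple(sorted(attrs.items()))].append(idx)

target = {i for i, (_, d) in enumerate(table) if d == "non-variable"}

lower, upper = set(), set()
for members in blocks.values():
    block = set(members)
    if block <= target:   # every object in the block belongs to the target set
        lower |= block
    if block & target:    # at least one object in the block belongs to the target set
        upper |= block

print("lower approximation:", sorted(lower))       # [2]
print("upper approximation:", sorted(upper))       # [0, 1, 2]
print("boundary region:", sorted(upper - lower))   # [0, 1]
```

The two conflicting objects end up in the boundary region, so the set of “non-variable” cases is rough with respect to the available attributes.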
The time at which RST was introduced in the 1980s witnessed a rising interest in Machine Learning, Artificial Intelligence and Expert Systems [45]. To produce applicable frameworks within those fields, comprehensive theoretical foundations were needed, which were based either upon classical logic or upon intuitive algorithms and qualitative methods. From a practical perspective, approaches based on classical logic were rigid and difficult to apply in real-world settings [45]. On the other hand, intuition-based approaches lacked a standardized theoretical foundation and a unified representation to produce reliable results [45]. RST was perceived as a possible solution, providing a framework with the clarity and algebraic completeness of classical set theory. This initiated research projects at the time aiming at developing algorithms and models based on the RST framework to address said limitations. It was noticed thereafter that the RST approach was limited in several respects when it came to real-world applications, such as inconsistencies in outcomes due to inconsistencies in input data [46], and the restrictiveness of RST approximations in empirical applications in complex systems involving real-world data. This prompted several research efforts to overcome these limitations, resulting in the proposition of several extensions and modifications to the RST framework [46]. Several algorithms were consequently proposed over the years to help improve the RST data-mining process and allow for better pattern recognition and data classification. Additionally, the combination of RST with fuzzy logic and with probability theory was sought to generate models capable of handling practical applications, in which probabilistic inferences and statistics were provided and could be helpful [47]. The combination of fuzzy logic and rough sets was especially interesting for our application scenario, since we were facing the problem of collecting sufficiently representative and valid data due to the rarity of significant events in the studied context and the vague nature of the collected data. Other limitations, such as issues with the discretization and partitioning of the numerical range of values for input data and incomplete information or missing values, also present challenges for the RST method. The discretization process is best provided by domain experts, who rely on their expertise and years-long experience and knowledge to define the partitions of the universe of discourse. Such limitations represent a real challenge for RST in finding more practical implementations and real-world applications. A jump from theoretical research and academic studies into industrial and real-world implementations is necessary, which would also require an expansion of research projects pursuing this objective. The majority of conducted studies and designed models are of a theoretical nature and are limited when it comes to real-world applications [45].
The relation between fuzzy logic and rough set theory has been discussed in several studies. Many of those studies addressed the mathematical and theoretical issues of both concepts and aimed at either comparing or combining the two methods. The scope of this paper is not to conduct an in-depth review of the theoretical foundations of fuzzy logic and rough sets. From a more practical point of view, it is more interesting to evaluate the usefulness of both methods and their ability to provide helpful results in addressing some of the limitations faced in traditional approaches. Fuzzy logic and rough sets are two different approaches to addressing information uncertainty. Fuzzy set theory is older (1965) and has developed extensively over more than fifty years to become established and find applications in various fields [35]. The theory of rough sets relies on the principle of indiscernibility, i.e., on objects characterized by the same information or attributes [48]. The principle of approximation is fundamental to rough set theory for handling and processing uncertainty, in contrast to fuzzy logic, which relies on the membership function and numerical values in [0, 1] to define the degree of truth or membership [49]. However, while fuzzy logic is more suitable for addressing fuzziness and vagueness of data, rough sets are better suited for data classification and for addressing inconsistency and incompleteness issues in data [50]. This does not, however, mean that both methods are only applicable in a restricted or specific way. Similar to Boolean logic and classical set theory, fuzzy logic and rough set theory are two mathematical constructs whose advantages and limitations depend greatly on the direction and form of application. The two methods do not conflict with each other; rather, they complement each other and have been combined to address both vagueness and incompleteness in the form of rough fuzzy sets and fuzzy rough sets [51,52]. Mixed frameworks of the two methods have been proposed in several studies as well to benefit from the strengths of both in handling the issues at hand [52]. It remains for the analyst to decide which methodology is more adequate for the context in question and in which form it should be applied.

3. Methods

In the previous sections, the goal was to present an overview of the development of safety and risk assessment tools and to discuss two possibly promising approaches to overcome their limitations. In this section, a brief overview of the models proposed in each phase is presented. The objective here is not to present a detailed and systematic explanation of each model, but rather to explain the process qualitatively. For a more detailed description on the proposed methodology, the reader might consult the previously published papers [53,54,55].
As mentioned earlier, accidents related to de-icing operations are rare due to the adequacy of implemented procedures and the high reliability of the system in place. Especially in a country such as Canada, these operations are executed very efficiently and precisely due to the harsh weather conditions and the continuous need for proper de-icing technologies. However, despite the rarity of de-icing-related accidents nowadays, they can still occur, with severe consequences, due to the very nature of complex sociotechnical systems as explained above. The need to develop and improve applied technologies and procedures does not diminish. Innovation is inevitable; otherwise, one risks falling behind and at some point becoming obsolete. Looking at the aviation context in general and the de-icing context specifically, we are faced with a complex working environment consisting of many technological components (planes, trucks, de-icing equipment, communication, control centers, computers, etc.) and human components in the form of operators, engineers, flight crew, de-icing personnel, etc. These components interact with each other under the guidance of an organizational structure that specifies policies, procedures and regulations. The work is often conducted under extreme weather conditions with tight time schedules and high requirements on adequacy and reliability. All these aspects make the de-icing working environment a very complex and dynamic system in need of continuous assessment and improvement. The expansion of Industry 4.0 into several fields and the introduction of new technologies, such as the move from gate de-icing to centralized de-icing, future innovations such as the Ground Ice Detection System (GIDS), and possibly the application of drones and the internet of things, are all challenges that require further research from a safety angle. Such scenarios require an understanding of the performance of de-icing operations from a systemic perspective, taking into consideration all aspects of the system, whether human, organizational, environmental or technological. Research on de-icing from a systemic perspective is rare and, to the best of our knowledge, our research team is the only team pursuing such a project. While research on and the application of systemic tools in aviation have gained in popularity and significant studies have been published, the same cannot be said about de-icing operations.
Since the start of this project, we aimed at accomplishing three objectives to eventually realize a basic theoretical framework and provide indicators for future research. The first objective was to analyze the working environment of aircraft de-icing operations adopting a complex systemic perspective and to identify influential contextual factors that can possibly affect performance. The Functional Resonance Analysis Method (FRAM), which allows for a functional representation of the studied system, was chosen for this task. The second objective was to conduct a predictive assessment and to provide quantified and more intersubjective results through the integration of fuzzy logic as a quantification tool. The third objective was to address the lack of databases and the issues related to the classification of uncertain and vague information. To this end, the Rough Set Theory (RST) was used to classify input data and generate the rule base for our analysis. In the following subsections, a brief overview of the three phases will be presented.

3.1. Phase I: Basic FRAM as a Complex Systemic Assessment Tool

FRAM in its basic form, using the two phenotypes of timing and precision, was applied to study the working environment of aircraft de-icing operations. For a more detailed explanation of the methodology and the obtained results, the reader might consult Slim et al. (2018) [53]. The followed methodology consisted of the following steps:
  • The objective of the first application was to select an analysis scenario based on a well-known de-icing-related accident. The Scandinavian Airlines flight SK751 crash at Gottröra, Sweden, in 1991 was chosen [56]. The accident was investigated and well defined in the official accident report, which allowed for an easier characterization of the functions and their outputs. While the conditions and events leading to the accident were listed in a detailed manner in the report, the objective was to verify whether a FRAM application could add to the findings and present a different perspective. This perspective relies on the four distinguishing principles of FRAM: equivalence of success and failure, approximate adjustments, emergence of outcome and functional resonance [19] (Figure 2).
  • The second step was to characterize the working environment of aircraft de-icing operations and create a functional representation of the context by identifying the functions that constitute the system in question. The functional characterization in FRAM describes how the various tasks are related and how the variability of an output can resonate and affect performance negatively or positively. Consequently, a list of representative functions was identified, limiting the scope of the analysis to the functions needed to execute the de-icing operations. The chosen analysis scenario would then specifically depict the de-icing operation and takeoff process of SK751. Each function can be characterized by six aspects: Input (I), Output (O), Preconditions (P), Resources (R), Time (T) and Control (C) [19]. However, not all aspects need to be provided; this depends on the function in question. The boundaries of the analysis are formed by the background functions, which only provide outputs and are invariable, as they are not the focus of the analysis. The functions were described in the form of a table listing all their respective characteristics, which were derived from the events and data provided in the official accident report published by the Board of Accident Investigation [56]. Additionally, three types of functions could be defined: organizational, technological and human functions (a minimal sketch of such a function record is given after this list).
  • The third step was then to identify sources of performance variability within the designed setup. As mentioned above, variability is characterized in terms of two phenotypes: timing (early, on time, too late and omission) and precision (imprecise, acceptable and precise) [19].
  • Finally, the influential relationships among functions were identified to construct a visual map of the system and illustrate how functional resonance can affect the outputs of the functions. This could explain how performance variability combined and resonated to eventually lead to the crash, and what lessons or new findings this analysis could provide.
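The following sketch shows one way such a function record could be held in code, with the six aspects and the two Phase I phenotypes. The function name, couplings and phenotype values are hypothetical and serve only to illustrate the structure of the characterization table.

```python
# A minimal sketch (hypothetical function and couplings) of a FRAM function
# record with its six aspects and the two Phase I variability phenotypes.

from dataclasses import dataclass, field
from typing import List

TIMING = ("early", "on time", "too late", "omission")
PRECISION = ("imprecise", "acceptable", "precise")

@dataclass
class FramFunction:
    name: str
    kind: str                                                # "human", "technological" or "organizational"
    input: List[str] = field(default_factory=list)           # I
    output: str = ""                                          # O
    preconditions: List[str] = field(default_factory=list)   # P
    resources: List[str] = field(default_factory=list)       # R
    time: str = ""                                            # T
    control: List[str] = field(default_factory=list)         # C
    timing: str = "on time"      # timing phenotype of the output
    precision: str = "precise"   # precision phenotype of the output

# Example foreground function; names and values are assumptions for illustration.
deice = FramFunction(
    name="Perform de-icing",
    kind="human",
    input=["Aircraft positioned at de-icing pad"],
    output="Aircraft de-iced and anti-iced",
    preconditions=["Fluid type and mixture selected"],
    resources=["De-icing truck", "Type I/IV fluids"],
    time="Within holdover time",
    control=["De-icing procedures", "Clearance from de-icing tower"],
    timing="too late",
    precision="acceptable",
)

assert deice.timing in TIMING and deice.precision in PRECISION
print(f"{deice.name}: timing={deice.timing}, precision={deice.precision}")
```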

3.2. Phase II: A Predictive Assessment with Quantified Results

The basic FRAM is qualitative and relies on linguistic scales to characterize performance variability. This is advantageous in the case of uncertain data and inherently complex factors, which can be hard to measure numerically. The application of FRAM in phase I was straightforward given the well-defined events and conditions in the official accident report. Moving on to a more generic scenario and attempting to conduct a predictive assessment proved to be more difficult in our case. It was not always possible to identify for each function how a variable input would reflect on the output and to what extent. The classification of inputs and outputs was not always easy and was in some cases not possible, even after understanding the mechanisms of the functions in question. The magnitude and type of variability were not easily deducible in many cases. An additional issue was that people have different perceptions and tend to associate different meanings with words. What one person might define as unacceptable, for example, another could perceive as adequate, and so on. To ensure conformity and provide more intersubjective results, fuzzy logic [27] was integrated into the framework of FRAM in phase II as a quantification tool. Fuzzy logic resembles human reasoning and allows for a mathematical representation of natural language. Through the integration of fuzzy logic into FRAM and the design of the FRAM functions as rule-based Fuzzy Inference Systems (FIS), the advantages of both methods could be utilized to provide more representative and comprehensible results (Figure 3).
The five steps of FRAM provide a guideline for defining the context of analysis and decomposing the system into functions. We can distinguish between two types of variability: internal, from within the function, and external, through functional couplings. Each function is then defined as a hierarchical fuzzy inference system (Figure 4) with an internal FIS to account for the Internal Variability Factor (IVF) and a higher-order FIS to account for the combined variability of both the IVF and the external variability through the couplings with upstream functions. The list of common performance conditions (CPCs) was used as evaluation parameters to anticipate the potential for internal variability. The analyst assigns a quality score on a scale between zero and ten to each performance condition. A rule-based fuzzy inference system is then constructed to fuzzify the scores of all respective performance conditions and generate an aggregated quantifier for the potential internal variability of each function. The IVF is then linked to the higher-order fuzzy inference system, in addition to the other incoming aspects from upstream functions, to generate a numerical output. The impact of the timing and precision phenotypes is combined and simplified into three classes: highly variable, variable and non-variable. The numerical outcome represents an indicator of possible variability, whether negative or positive, in the function’s output. On a spectrum between 0 and 1.5, 1 represents a non-variable output. Any value below 1 represents negative variability, while any value above 1 represents positive variability.
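The simplified sketch below conveys the flavour of this computation for the internal FIS only: CPC scores on a 0–10 scale are fuzzified, a tiny rule base is evaluated with min/max operators, and a weighted average of assumed class centres yields a crisp value on the 0–1.5 spectrum. The membership breakpoints, rules and centres are illustrative assumptions and are far simpler than the full hierarchical FIS used in the study.

```python
# A minimal sketch (assumed memberships, rules and class centres) of fuzzifying
# CPC scores and aggregating them into a crisp internal variability indicator.

def adequate(x: float) -> float:     # membership of a CPC score (0-10) in "adequate"
    return max(0.0, min(1.0, (x - 4.0) / 4.0))

def inadequate(x: float) -> float:   # membership of a CPC score (0-10) in "inadequate"
    return max(0.0, min(1.0, (6.0 - x) / 4.0))

def ivf(cpc_scores):
    """Aggregate CPC scores into a crisp indicator on the 0-1.5 spectrum
    (assumed centres: 1.2 non-variable, 0.8 variable, 0.4 highly variable)."""
    adeq = [adequate(s) for s in cpc_scores]
    inad = [inadequate(s) for s in cpc_scores]

    r_non = min(adeq)                                   # rule 1: all CPCs adequate (AND = min)
    r_var = max(inad)                                   # rule 2: any CPC inadequate (OR = max)
    r_high = sorted(inad, reverse=True)[1] if len(inad) > 1 else 0.0  # rule 3: >= 2 inadequate

    centres = {"non-variable": 1.2, "variable": 0.8, "highly variable": 0.4}
    weights = {"non-variable": r_non, "variable": r_var, "highly variable": r_high}
    total = sum(weights.values()) or 1.0
    return sum(centres[c] * w for c, w in weights.items()) / total  # weighted-average defuzzification

# Example: four performance conditions rated by the analyst (0-10).
print(round(ivf([8, 7, 3, 9]), 2))  # 0.8 -> leaning towards negative variability
```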
An application scenario for aircraft de-icing operations was constructed, inspired by two de-icing-related accidents, namely the Scandinavian Airlines flight 751 crash in 1991 [56] and the Air Maroc accident at Mirabel in 1995 [57]. The work environment of centralized de-icing was characterized in terms of functions, which describe the set of activities necessary to successfully perform the de-icing of airplanes. A total of 17 functions were defined: four background functions (non-variable) and 13 foreground functions (possibly variable). To induce variability into the defined setup, assumptions about prevailing performance conditions, such as inadequate airliner guidelines, extreme weather conditions, inadequate training of the flight crew and high temporal stress, were formulated.
Based on the formulated assumptions in our simulation, we recorded possible positive variability for functions with an output quality of one or more and possible negative variability for functions with scores below one. This was especially noticeable for the function “Post De-icing Inspection”, whose score was remarkably low due to the principle of functional resonance and the impact of combined variability from upstream functions. The scores of the IVF and the output were then plotted on the graphical representation generated in the FRAM Model Visualizer (FMV) to provide an illustrative map of the relationships within the studied system. For a more detailed explanation of the methodology and the obtained results, the reader might consult Slim and Nadeau (2019) [54].

3.3. Phase III: Rough Sets to Classify Input Data

In the third phase of the project, we aimed to address several limitations faced in the prototype model combining fuzzy logic and FRAM. Despite the many advantages provided by the addition of fuzzy logic, the model was still limited in ways that made the realization of such an analysis somewhat difficult. The number of input variables and associated phenotypes and classes was kept at a minimum to avoid the “rule explosion” problem. A higher number of inputs could result in an exhaustive and large rule base, which would make the creation of the fuzzy inference system (FIS) difficult and in many cases unfeasible due to the required effort and resources. Secondly, the output class was not always easily identifiable in the rule base relying on the provided qualitative scales. The output was in many cases variable and vague in nature due to the vagueness or incompleteness of the input information. Additionally, the assignment of a quantitative domain and the partitioning of the universe of discourse were still determined in a subjective manner relying on the expertise of the analyst or the consulted experts. Therefore, to address the above-mentioned limitations, Rough Set Theory (RST) was integrated into the prototype model to classify input data and generate a more efficient rule base.
The main advantage of using RST is its capability to filter and identify patterns in data tables in order to classify outcomes and provide a decision based on historical information. This can be achieved by applying the principle of approximation and the indiscernibility relation to classify recorded information in the form of data tables or information systems. The RST approach then allows for the automatic generation of a reduced and efficient rule base, which can be migrated into the FIS instead of writing the rule base manually, rule by rule.
Basically, FRAM functions can be seen as data tables that list the functional aspects as attributes. Thus, the functions can be defined as rough information systems, considering each iteration or recorded instance of the function as an object, the functional aspects as attributes and the output as the resulting decision class. Each recorded instance of the function is an object with respective values for the attributes and a resulting decision class. From the data table, a discernibility matrix is constructed, which helps to identify indiscernible objects, i.e., rows sharing the same attribute values, and inconsistencies, i.e., indiscernible rows with different decision classes. Different types of algorithms can be applied to scan the constructed information system, compute the reducts and generate the rule base. The accuracy of the rule base depends on the size and accuracy of the provided dataset. The characterization step is important for determining the type of data and parameters needed in the data collection step to perform predictive or proactive assessments.
The second aspect to consider is the characterization of performance variability. To determine the internal variability of each function, the CPCs can be defined as attributes and the decision class would then be the IVF. It is important to mention here that the choice of relevant performance conditions can be re-evaluated and modified as required by the studied context. The same settings and classes were kept as in the prototype model, i.e., each CPC was classified as either “Adequate” or “Inadequate”. The decision classes for the IVF were kept the same as well: “Non-variable”, “Variable” or “Highly Variable” (Table 1). The external variability factor (EVF) is determined following the same process as the IVF, however using a three-class scale for each incoming aspect or attribute. The functional aspects, in addition to the IVF, serve this time as attributes in the RST table (Table 2) with three possible values or classes: “Non-variable”, “Variable” and “Highly Variable”. The provided dataset for each function is then split randomly into a training set and a testing set. The chosen algorithm scans the training set to identify the reducts, which are then used to generate a reduced rule base. The generated rule base can be validated using the testing set, which helps to determine adequate rules with acceptable levels of coverage, support and accuracy. The final rule base can then be migrated into the FIS and used to run specific instantiations of the model and generate the numerical outcomes for the output (Figure 5). The same analysis scenario was kept as in the previous stage to sustain the same settings for a better comparison of the two models. For a more detailed explanation of the methodology and the obtained results, the reader might consult Slim and Nadeau (2020) [55].
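As a rough illustration of this step, the sketch below extracts certain rules with support counts from a toy decision table by keeping only consistent attribute-value patterns. The records and labels are invented, and the brute-force pass shown here merely stands in for the reduct-based algorithms actually used; it only conveys how a FRAM function’s table can be turned into rules that are later migrated into the FIS.

```python
# A naive rule-extraction sketch (toy records, assumed labels): consistent
# attribute-value patterns in a FRAM function's decision table become rules.

from collections import Counter, defaultdict

# Each record: values of the incoming aspects plus the IVF, and the output class.
records = [
    ({"input": "non-variable", "time": "variable",     "ivf": "non-variable"}, "variable"),
    ({"input": "non-variable", "time": "variable",     "ivf": "non-variable"}, "variable"),
    ({"input": "non-variable", "time": "non-variable", "ivf": "non-variable"}, "non-variable"),
    ({"input": "variable",     "time": "variable",     "ivf": "variable"},     "highly variable"),
]

# Group identical condition patterns and keep only the consistent ones as certain rules.
patterns = defaultdict(Counter)
for attrs, decision in records:
    patterns[tuple(sorted(attrs.items()))][decision] += 1

rules = []
for pattern, outcomes in patterns.items():
    if len(outcomes) == 1:  # consistent pattern -> certain rule
        decision, support = next(iter(outcomes.items()))
        rules.append((dict(pattern), decision, support))

for conditions, decision, support in rules:
    print(f"IF {conditions} THEN output = {decision} (support={support})")
```

In practice, the splitting into training and testing sets and the coverage, support and accuracy checks described above decide which of such rules are retained.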

4. Discussion

While recognizing the significance and validity of the principles on which FRAM was founded, there was still uncertainty concerning its usefulness in the case of the SK751 crash. The official accident report published by the SHK in 1993 was used as the main source of information to construct the scenario for the simulation. The report provided a detailed list of the causes and circumstances leading to the accident. It is important to note here that the analysis scenario, or the instantiation of the model, is not the model itself; it can be seen as one iteration of the model with specific performance conditions and parameters. The accident was used as a case study to draw conclusions and identify implications for the context of aircraft de-icing operations in general. The model can be reused with different settings and a different set of data inspired by or recorded in real-world environments. The main advantage still lies in the principles of FRAM, which allow for examining what went right in addition to what went wrong. The perspective offered by FRAM can provide interesting results that are helpful and complementary to traditional assessment tools. FRAM in its basic form, relying on simple three-point qualitative scales, can capture the dynamic and complex nature of the dominant relationships within the studied system. The FRAM framework provides a functional representation of the system, which enables the analyst to zoom in on processes and tasks of interest without losing the holistic view. The behaviour of the system can be evaluated as a function of performance variability and local adjustments that can lead to unforeseen and unexpected deviations in the outcome. Performance variability, i.e., the difference between Work-As-Imagined (WAI) and Work-As-Done (WAD), is an inherent characteristic of any given system, which can be utilized and exploited to maintain resilience. Relying solely on the accident report, the focus was directed entirely to what went wrong, which is understandable given that the event was an airplane crash. Nonetheless, the crash could have had a far more severe outcome, and the actions of the flight crew led to the survival of all passengers. Applying a Safety-II approach such as FRAM can help to better manage variability and anticipate adversity that can be difficult to foresee with classical tools. This is especially true in the case of highly reliable systems such as aviation, in which adverse events are very rare and unique by nature.
More specifically, the FRAM analysis of the SK751 crash provided a better understanding of the development of the accident over time and of the relationships between the several variables that led to the outcome. The official accident report provided a detailed description of the events and aspects of the accident and presented a detailed list of findings describing its development. However, the data provided by the report were still limited and focused primarily on negative actions and performance conditions. The application of FRAM was helpful to connect the events and draw a map of the system and of the dominant relationships that played a significant role in producing the outcome. Characterizing the system as a set of interdependent functions helped to identify the couplings between them and to define how the performance variability of one function affected the output of the other functions, even the indirectly linked ones. This allows weak spots to be pinpointed, which could then be addressed or strengthened through preventive measures to better manage variability in the future. The application was limited and served mainly as an example, given that the accident report was not sufficient to determine how the influential factors effectively contributed to the accident. It was possible to identify variability in the execution of many functions; however, the extent and magnitude of each influential factor could not be easily determined using a simple three-class scale. Moving towards a proactive assessment model, it would not be easy to determine how variable inputs would affect the performance of the functions and how the variability of the output would manifest. For example, a “too late” input is vague by default and could therefore mean either a “too late” output or an “imprecise” output.
Therefore, a quantified output would provide a more intersubjective and precise representation of the perceived magnitude of performance variability. Additionally, the provision of a standardized framework to account for variability would facilitate a more precise classification of outputs. Moreover, it is not always possible in high-reliability systems to quantify certain complex variables without simplifying the systemic relationships and consequently sacrificing the systemic perspective. Furthermore, the lack of sufficient statistics, owing to the high reliability of those systems and the rarity and uniqueness of adverse events, makes the use of probabilistic tools difficult. The studied relationships are often vague and uncertain by nature and are therefore better represented with natural language, which fuzzy logic can utilize in the form of linguistic variables. The relationships between inputs and outputs can be characterized in a very comprehensible manner using conditional IF-THEN rules, enabling the analyst to assign different weights to the inputs or to the rules separately. By assigning a numerical scale or range of input values and determining the membership functions that best suit the function in question, a numerical output can be obtained, providing an aggregated and comprehensible representation of the output’s variability. The selection of a numerical scale and the partitioning are conducted in a relatively subjective manner, since these domains are defined by the analyst or the consulted experts. However, they still provide a more comprehensible and agreeable representation to most practitioners and decision makers.
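The following sketch illustrates, under assumed settings, the kind of fuzzy reasoning described here: two linguistic inputs on a hypothetical 0–10 variability scale, triangular membership functions, a toy Mamdani-style rule base and centroid defuzzification. The scale, terms, weights and rules are illustrative choices, not the membership functions or rule base used in our model.

```python
import numpy as np

def trimf(v, a, b, c):
    """Triangular membership with peak at b; a == b or b == c give shoulder shapes."""
    if v == b:
        return 1.0
    if v <= a or v >= c:
        return 0.0
    return (v - a) / (b - a) if v < b else (c - v) / (c - b)

# Assumed linguistic terms on a hypothetical 0-10 variability scale.
TERMS = {"low": (0, 0, 5), "medium": (0, 5, 10), "high": (5, 10, 10)}

def infer(time_var, resources_var):
    """Toy Mamdani-style FIS: fuzzify, fire IF-THEN rules (AND = min, OR = max),
    aggregate the clipped output sets and defuzzify with the centroid method."""
    t = {term: trimf(time_var, *p) for term, p in TERMS.items()}
    r = {term: trimf(resources_var, *p) for term, p in TERMS.items()}
    rules = [
        (min(t["low"], r["low"]), "low"),        # IF Time low AND Resources low THEN Output low
        (max(t["medium"], r["medium"]), "medium"),
        (max(t["high"], r["high"]), "high"),     # IF Time high OR Resources high THEN Output high
    ]
    xs = np.linspace(0, 10, 1001)
    agg = np.zeros_like(xs)
    for strength, term in rules:
        clipped = np.minimum(strength, [trimf(v, *TERMS[term]) for v in xs])
        agg = np.maximum(agg, clipped)
    return float(np.sum(xs * agg) / np.sum(agg))  # centroid defuzzification

print(round(infer(2.0, 3.0), 2))   # mostly adequate inputs   -> output towards the low end
print(round(infer(8.5, 7.0), 2))   # strongly variable inputs -> output towards the high end
```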
The designed fuzzy-FRAM model was applied in the second phase to analyze a case study inspired by two de-icing-related accidents, namely the SK751 crash in 1991 and the Mirabel accident in 1995. A more generic model of the de-icing context was created for this simulation. The number of functions and their respective inputs and outputs was limited in comparison to the original model to ensure, in this first step, an efficient simulation with modest computational requirements. The objective here was to perform a more predictive/proactive type of assessment using the assigned numerical scales and building on the initial characterization of functions, relationships and membership functions. Predictive studies lack certainty in comparison to retrospective studies, in which events are well known and defined. It might not always be possible to determine the quality of an output in a predictive assessment using natural language alone; this can be overcome through the integration of fuzzy logic into the framework of the classical FRAM. The analysis scenario was designed in a subjective manner, relying on literature findings, technical reports and the knowledge gained from our years-long project on the de-icing working environment. The defined functions described human (individual or organizational) activities that constitute the de-icing context and the relationships and couplings that affect the system’s performance on a wide scale. The preliminary results of the simulation in MATLAB were promising, and the model served primarily as a demonstration of such an application. Given the predefined settings of the model, calculating the numerical outcome was a straightforward task without the need to further hypothesize or assume possible outcomes. The FIS calculates the numerical output, providing an indicator of functional variability and of the dominant relationships between the different functions within the studied system. The aggregated output does not simply provide a linguistic label pinpointing the output as a member of a definite class; rather, it can be seen as a pointer to sources of positive or negative variability.
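Conceptually, such a simulation chains the defuzzified score of each function’s output into the aspects of the downstream functions. The sketch below illustrates this propagation with hypothetical function names and a trivial stand-in for the per-function FIS; in the full model, each callable would be the function’s actual fuzzy inference system.

```python
# Each FRAM function is reduced here to a callable that takes the crisp variability
# scores of its incoming aspects and returns the crisp variability of its output;
# in the full model this callable is the function's fuzzy inference system.
def fis_stub(*aspect_scores):
    return sum(aspect_scores) / len(aspect_scores)   # placeholder aggregation only

# Hypothetical fragment of the de-icing model: each entry maps a function to the
# upstream functions whose outputs feed it, plus its FIS. Names are illustrative.
model = {
    "communicate_holdover_time": ([], fis_stub),
    "apply_deicing_fluid":       (["communicate_holdover_time"], fis_stub),
    "perform_tactile_check":     (["apply_deicing_fluid"], fis_stub),
}

def run_instantiation(model, context_scores):
    """Evaluate the functions in coupling order; each function receives the outputs
    of its upstream functions plus its own context score (e.g., its IVF)."""
    outputs = {}
    for name, (upstream, fis) in model.items():      # assumes dependency order
        incoming = [outputs[u] for u in upstream] + [context_scores.get(name, 0.0)]
        outputs[name] = fis(*incoming)
    return outputs

print(run_instantiation(model, {"communicate_holdover_time": 6.0,
                                "apply_deicing_fluid": 2.0,
                                "perform_tactile_check": 1.0}))
# A highly variable upstream output (6.0) is damped but still visible downstream.
```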
Another contribution of the second-phase model was the distinction between the internal and external variability of each function. The basic FRAM does not offer a clear approach for distinguishing between the two in the characterization process, which is performed more qualitatively to preserve the complex nature of the described relationships. Considering the functional couplings alone would, in a practical sense, amount to characterizing variability as an external source, even if this were not intended. The internal variability of the function itself was accounted for by the analyst when determining the quality of the output, precisely because of this lack of a practical distinction between the two types of variability. Therefore, the impact of the inner mechanisms of the function on the output’s quality was characterized using the Common Performance Conditions (CPCs) originally proposed for CREAM [58]. The CPCs represent the influence of the context on the execution of the function, which could itself be considered an external source; however, they serve the purpose of providing indicators that can point to possible internal variability, which cannot be decoupled from external influences. As an example, consider the impact of noise on the hearing capability of an operator on the airport grounds. The noise can be characterized as an inadequate performance condition causing the operator to lose concentration or to fail to hear instructions over the radio. The variability might be considered in this case an internal factor arising from the state of the operator (psychological or physiological); however, it was amplified by the inadequate conditions. On the other hand, favourable conditions could help avoid such variability or even enhance performance despite a poor physiological state of the operator.
The influence of the CPCs and of the incoming aspects on the quality of each function’s output can be represented and weighted in the rule base of the FIS using natural language. However, one important limitation was encountered in the design process of the model. The number of input variables and respective classes was kept low (a maximum of seven inputs, each with three possible classes) to avoid the rule explosion problem. The rule generation process for the Mamdani-type FIS [59] would be very laborious if performed manually, especially if the rule base were to consist of thousands of rules. This simplification served the purpose of constructing a more efficient rule base to ensure a timely conclusion of the project and a feasible, less demanding model. Moreover, the selection of the membership functions and the partitioning of the universe of discourse are performed by the analyst in a relatively subjective manner. This usually necessitates a deep understanding of the system’s behaviour and requires expert knowledge for more reliable and valid results. The same limitation faced with the basic FRAM in classifying the final output was noticed again when classifying the consequent part of the rules: it was not always clear or sufficiently decisive which output class to assign relying on qualitative scales. The incompleteness and vagueness of the input values would in many cases result in vagueness in the rule generation and classification. In our case, we used an automatic generator to compute every possible combination of the input values, and the output of each rule was determined by assigning a numerical score or weight to the input classes in the antecedent part of that rule. The weights for the rules and for the different functional aspects were kept uniform to simplify the process further. Admittedly, this can be seen as a simplification of reality; however, the objective in these first steps was to lay a theoretical foundation and provide a demonstration, with the aim of moving on later to an application built on real-world settings and recorded data.
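A minimal sketch of such an automatic generator is shown below, assuming six three-class inputs and uniform class scores: it enumerates every antecedent combination (3^6 = 729 rules for a single function; with the seven inputs mentioned above this grows to 3^7 = 2187) and assigns the consequent from the mean antecedent score. The scores and thresholds are illustrative assumptions, not the weights used in our model.

```python
from itertools import product

# Assumed uniform numerical scores for the three classes (an illustrative choice).
CLASSES = {"Non-variable": 0, "Variable": 1, "Highly variable": 2}
ASPECTS = ["IVF", "Input", "Time", "Control", "Preconditions", "Resources"]

def consequent(antecedent, thresholds=(0.5, 1.5)):
    """Pick the output class from the mean antecedent score (illustrative thresholds)."""
    mean_score = sum(CLASSES[v] for v in antecedent) / len(antecedent)
    if mean_score < thresholds[0]:
        return "Non-variable"
    return "Variable" if mean_score < thresholds[1] else "Highly variable"

# Enumerate every combination of input classes: 3 ** 6 = 729 rules for this single function.
rule_base = [(dict(zip(ASPECTS, combo)), consequent(combo))
             for combo in product(CLASSES, repeat=len(ASPECTS))]

print(len(rule_base))   # 729
print(rule_base[0])     # all aspects "Non-variable"    -> "Non-variable"
print(rule_base[-1])    # all aspects "Highly variable" -> "Highly variable"
```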
These considerations led to a further modification of our model through the integration of RST [39]. RST was integrated into the model as a tool for data mining and filtration, rule generation and output classification. A more efficient rule base was consequently obtained and migrated into the FIS of each function to compute the numerical outputs. The same de-icing scenario was used again, keeping identical characterizations and settings to better compare the results of the two models. The numerical scores obtained using the RST-generated rule base were identical to those of the initial model, which represents maximum accuracy (an accuracy of 1.0) and an ideal outcome. Moreover, the generated rule base was significantly reduced in size in comparison to the previous model. This translates into a less demanding and more efficient model that is easier on computing machines for simulation. In addition to rule generation and reduction, RST can be useful in classifying outcomes. In the previous model using fuzzy logic, the classification of the consequent part of each rule was performed manually; with a high number of rules, this process would be extremely demanding and time consuming. We eventually managed to use Excel formulae to generate the data tables, and by assigning numerical scores to each class in the antecedent part, the decision class in the consequent part was determined. The integration of RST takes over this task and consequently allows for an automatic and efficient rule base construction. In the fuzzy-logic-based model, the decision class might not be deducible for each rule separately, and the rule generation might then be unfeasible; RST can be helpful in this case by providing a method for deriving rules and decisions from a limited set of recorded data.
Admittedly, the use of ideal datasets covering all possible combinations of input values resulted in maximum accuracy. This would not be possible in a real-world application, given that the provided input data would be imperfect and would suffer from inconsistencies. Therefore, the use of real-world data would be required to test how useful the RST-based model would be in practice. Nonetheless, in principle, this shows the validity of the RST approach for reducing data and generating minimal rule bases. The accuracy of the rule base in a real-world application would depend greatly on the quality and quantity of the collected data. RST as a data-mining tool would allow decisions to be derived from data recorded over time in the presence of uncertain and vague information. The choice of uniform scores or weights for classes and input variables must be reconsidered in real-world applications: depending on the nature of the function and the respective variables, the magnitude of each input value might differ and be unique to each variable. Furthermore, the implementation of RST requires additional effort and knowledge from the analyst; the model requires additional software, data acquisition tools and, ultimately, a more complicated design process.
Traditional statistical methods are desirable in most cases; however, as mentioned several times in this paper, it can be difficult to generate adequate databases in complex and highly reliable systems. Such tools require large datasets to provide helpful results, which can be difficult to achieve given the scarcity and uniqueness of events in such systems. Tools such as RST and fuzzy logic can be more helpful in such contexts through the provision of qualitative scales and the possibility of computing with natural language. The use of conditional rules is comprehensible and allows for a straightforward representation of the relationships between inputs and outputs. RST is especially helpful in the data treatment process by allowing meaningful relationships to be deduced even from a limited set of data. A degree of inaccuracy is still present in the results obtained using RST; however, that is the case in predictive assessments generally, and the results remain useful and more reliable than relying on plain heuristics, for example.
Table 3 provides a qualitative comparison of the three models. The list is not exhaustive; it summarizes the factors that we found most relevant to our application scenarios.

5. Conclusions and Outlook for Future Studies

The objective of this paper was to provide a qualitative summary and discussion of the main findings and challenges encountered within our research project. The three frameworks of FRAM, fuzzy logic and rough sets were briefly discussed to provide context, to clarify the advantages offered by these methods in resolving some of the issues facing safety management tools and to address some of the limitations yet to be overcome. Technological advancements, such as the imminent implementation of Industry 4.0 in the majority of industrial contexts, will inevitably add to the complexity of systems. The classical models originating from the era before digital technology have not been updated to keep pace with technological innovations. The adoption of a more complex and systemic perspective has become a requirement to be well equipped for the continuously changing and dynamic nature of modern systems. The human component in this context is a major contributor to success or failure. Technological systems have been evolving in recent years at a faster pace than regulations and guidelines, which makes the proposition of adequate solutions even more vital. FRAM as a systemic analysis tool allows complex relationships to be captured in order to construct a functional map of the system and identify the routes for negative and positive propagation of performance variability. Fuzzy logic combined with FRAM facilitates the quantification of natural language and provides a more systematic approach to account for variability. Finally, rough sets offer efficient algorithms to classify input data and generate reduced rule bases. These tools are by no means intended simply to replace traditional and well-established tools; rather, they can complement the classical tools to provide a more comprehensive and complete picture of the modern systems of today [60].
The initial results of this project are promising and provide a good starting point for further research investigating additional possibilities and improvements. The model at this stage is still a simulation and a simplification of reality and, from a technology-readiness perspective, remains in the theoretical development phase (TRL 4—technology development and validation in simulated environments). The purpose of this project so far has been to present an alternative approach for integrating a quantification method into FRAM without losing its properties and its significant advantages in handling complex systems. Sufficient real-world data and representative performance indicators to facilitate analyses of the de-icing system are still lacking, a gap that rough sets could possibly help to address. Future studies would be required to provide validation and optimization through the deployment of the model in a real-world setting using real data. Additionally, applying the model to new contexts and analysis scenarios could provide further insights to improve the proposed framework.

Author Contributions

Conceptualization and design of research, H.S. and S.N.; methodology, H.S.; software, H.S.; validation, H.S. and S.N.; formal analysis, H.S. and S.N.; resources, H.S. and S.N.; writing—original draft preparation, H.S.; writing—review and editing, H.S. and S.N.; supervision, S.N.; funding acquisition, H.S. and S.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Arbour Foundation (Montreal, QC), the Natural Sciences and Engineering Research Council of Canada (NSERC) and École de technologie supérieure (ÉTS).

Acknowledgments

The authors thank Rees Hill, the developer of the “FRAM Model Visualizer (FMV)”, and Aleksander Øhrn and the ROSETTA development team, whose software was used in this study to design the FRAM model.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qureshi, Z.H. A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems; Defence Science and Technology Organisation Edinburgh (Australia), Command Control Communications and Intelligence Div.: Edinburgh, Australia, 2008.
  2. Rollenhagen, C. MTO: En Introduktion: SAMBANDET Människa, Teknik och Organisation; Studentlitteratur: Lund, Sweden, 1995. [Google Scholar]
  3. Leveson, N.G. Engineering a Safer World: Systems Thinking Applied to Safety; The MIT Press: Cambridge, MA, USA, 2011; ISBN 978-0-262-29824-7. [Google Scholar]
  4. Hollnagel, E. Barriers and Accident Prevention; Ashgate Publishing: Aldershot, UK, 2004. [Google Scholar]
  5. Hollnagel, E. Epilogue: RAG–The Resilience Analysis Grid. In Resilience Engineering in Practice; Hollnagel, E., Pariès, J., Woods, D., Wreathall, J., Eds.; Ashgate Publishing, Ltd.: Surrey, UK, 2011. [Google Scholar]
  6. Patriarca, R.; Bergström, J.; di Gravio, G.; Costantino, F. Resilience engineering: Current status of the research and future challenges. Saf. Sci. 2018, 102, 79–100. [Google Scholar] [CrossRef]
  7. Patriarca, R.; di Gravio, G.; Woltjer, R.; Costantino, F.; Praetorius, G.; Ferreira, P.; Hollnagel, E. Framing the FRAM: A literature review on the functional resonance analysis method. Saf. Sci. 2020, 129, 104827. [Google Scholar] [CrossRef]
  8. Aventin, A.; Morency, F.; Nadeau, S. Statistical study of aircraft accidents and incidents related to de/anti-icing process in Canada between 2009 and 2014. In Proceedings of the 62nd CASI Aeronautics Conference and AGM 3rd GARDN Conference. Abstract book, Montreal, QC, Canada, 19–21 May 2015; pp. 201–209. [Google Scholar]
  9. International Civil Aviation Organization. Forecasts of Scheduled Passenger and Freight Traffic. Available online: http://www.icao.int/sustainability/pages/eap_fp_forecastmed.aspx (accessed on 15 August 2020).
  10. Günebak, S.; Nadeau, S.; Morency, F.; Sträeter, O. Aircraft ground de-icing as a complex sociotechnical system: Towards a safer and more efficient communication process for aircraft ground de-icing. In Proceedings of the 62nd CASI Aeronautics Conference and AGM 3rd GARDN Conference. Abstract book, Montreal, QC, Canada, 19–21 May 2015; pp. 189–199. [Google Scholar]
  11. Heinrich, H.W. Industrial Accident Prevention. A Scientific Approach; McGraw-Hill: New York, NY, USA, 1931. [Google Scholar]
  12. Zadeh, L.A. Outline of a new approach to the analysis of complex systems and decision processes. IEEE Trans. Syst. Man Cybern. 1973, 1, 28–44. [Google Scholar] [CrossRef] [Green Version]
  13. Adriaensen, A.; Decré, W.; Pintelon, L. Can complexity-thinking methods contribute to improving occupational safety in industry 4.0? A review of safety analysis methods and their concepts. Safety 2019, 5, 65. [Google Scholar] [CrossRef] [Green Version]
  14. Perrow, C. Normal Accidents: Living with High Risk Technologies, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 1984. [Google Scholar]
  15. Hollnagel, E. Safety-I and Safety-II: The Past and Future of Safety Management; Ashgate Publishing, Ltd.: Farnham, UK, 2014. [Google Scholar]
  16. Reason, J. Human Error, 1st ed.; Cambridge University Press: Cambridge, UK, 1990; ISBN 978-0-521-30669-0. [Google Scholar]
  17. Woods, D.D.; Johannesen, L.J.; Sarter, N.B. Behind Human Error: Cognitive Systems, Computers and Hindsight; SOAR Report 94-01; Wright-Patterson Air Force Base, CSERIAC: Dayton, OH, USA, 1994. [Google Scholar]
  18. Rasmussen, J. Risk management in a dynamic society: A modelling problem. Saf. Sci. 1997, 27, 183–213. [Google Scholar] [CrossRef]
  19. Hollnagel, E. FRAM, the Functional Resonance Analysis Method: Modeling Complex Socio-Technical Systems; Ashgate Publishing, Ltd.: Farnham, UK, 2012. [Google Scholar]
  20. Eurocontrol. A White Paper on Resilience Engineering for ATM; Report of the Project Resilience Engineering for ATM; Eurocontrol: Brussels, Belgium, 2009. [Google Scholar]
  21. Nadeau, S.; Landau, K. Utility, advantages and challenges of digital technologies in the manufacturing sector. Ergon. Int. J. 2018, 2. [Google Scholar] [CrossRef]
  22. Cacciabue, P.C. Human factors impact on risk analysis of complex systems. J. Hazard. Mater. 2000, 71, 101–116. [Google Scholar] [CrossRef]
  23. Amaratunga, D.; Baldry, D.; Sarshar, M.; Newton, R. Quantitative and qualitative research in the built environment: Application of “mixed” research approach. Work Study 2002, 51, 17–31. [Google Scholar] [CrossRef]
  24. Shepard, R.B. Quantifying Environmental Impact Assessments Using Fuzzy Logic; Springer Series on Environmental Management; Springer: New York, NY, USA, 2005; ISBN 978-0-387-24398-6. [Google Scholar]
  25. Patriarca, R.; di Gravio, G.; Costantino, F. A Monte Carlo evolution of the functional resonance analysis method (FRAM) to assess performance variability in complex systems. Saf. Sci. 2017, 91, 49–60. [Google Scholar] [CrossRef]
  26. Slater, D. Modelling, Monitoring, Manipulating and Managing? Modelling Process Flow in Complex Systems; CAMBRENSIS: Porthcawl, UK, 2017. [Google Scholar]
  27. Zadeh, L.A. Fuzzy Sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  28. Haack, S. Do we need “fuzzy logic”? Int. J. Man Mach. Stud. 1979, 11, 437–445. [Google Scholar] [CrossRef]
  29. Tribus, M. Comments on “Fuzzy sets, fuzzy algebra, and fuzzy statistics”. Proc. IEEE 1979, 67, 1168–1169. [Google Scholar] [CrossRef]
  30. Laviolette, M.; Seaman, J.W. The efficacy of fuzzy representations of uncertainty. IEEE Trans. Fuzzy Syst. 1994, 2, 4–15. [Google Scholar] [CrossRef]
  31. Pelletier, F.J. Metamathematics of fuzzy logic. Bull. Symb. Log. 2000, 6, 342–346. [Google Scholar] [CrossRef]
  32. Hájek, P. Metamathematics of Fuzzy Logic; Trends in Logic; Springer: Dordrecht, The Netherlands, 1998; Volume 4, ISBN 978-1-4020-0370-7. [Google Scholar]
  33. Zadeh, L.A. Is there a need for fuzzy logic? Inf. Sci. 2008, 178, 2751–2779. [Google Scholar] [CrossRef]
  34. Sivanandam, S.N.; Sumathi, S.; Deepa, S.N. Introduction to Fuzzy Logic using MATLAB; Springer: Berlin/Heidelberg, Germany, 2007; ISBN 978-3-540-35780-3. [Google Scholar]
  35. Zadeh, L.A. Fuzzy logic—A personal perspective. Fuzzy Sets Syst. 2015, 281, 4–20. [Google Scholar] [CrossRef]
  36. Zadeh, L.A. Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets Syst. 1997, 90, 111–127. [Google Scholar] [CrossRef]
  37. Dubois, D.; Prade, H. Fuzzy sets and probability: Misunderstandings, bridges and gaps. In Proceedings of the Second IEEE International Conference on Fuzzy Systems, San Francisco, CA, USA, 28 March–1 April 1993; pp. 1059–1068. [Google Scholar]
  38. Nurmi, H. Probability and fuzziness—echoes from 30 years back. In Views on Fuzzy Sets and Systems from Different Perspectives; Studies in Fuzziness and Soft Computing; Seising, R., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 243, pp. 161–174. ISBN 978-3-540-93801-9. [Google Scholar]
  39. Pawlak, Z. Rough Sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  40. Øhrn, A. Discernibility and Rough Sets in Medicine: Tools and Applications; Department of Computer and Information Science, Norwegian University of Science and Technology: Trondheim, Norway, 2000; p. 239. [Google Scholar]
  41. Alisantoso, D.; Khoo, L.P.; Lee, B.I.; Fok, S.C. A rough set approach to design concept analysis in a design chain. Int. J. Adv. Manuf. Technol. 2005, 26, 427–435. [Google Scholar] [CrossRef]
  42. Pawlak, Z. Some issues on rough sets. In Transactions on Rough Sets I; Springer: Berlin/Heidelberg, Germany, 2004; pp. 1–58. [Google Scholar]
  43. Hvidsten, T.R. A Tutorial-Based Guide to the ROSETTA System: A Rough Set Toolkit for Analysis of Data. Available online: http://www.trhvidsten.com/docs/ROSETTATutorial.pdf (accessed on 15 January 2020).
  44. Pawlak, Z. Rough set theory and its applications to data analysis. Cybern. Syst. 1998, 29, 661–688. [Google Scholar] [CrossRef]
  45. Ziarko, W. Rough sets: Trends, challenges, and prospects. In International Conference on Rough Sets and Current Trends in Computing; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–7. [Google Scholar]
  46. Greco, S.; Matarazzo, B.; Slowinski, R. Rough sets theory for multicriteria decision analysis. Eur. J. Oper. Res. 2001, 129, 1–47. [Google Scholar] [CrossRef]
  47. Wei, L.L.; Zhang, W.X. Probabilistic rough sets characterized by fuzzy sets. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2004, 12, 47–60. [Google Scholar] [CrossRef]
  48. Pawlak, Z. Imprecise categories, approximations and rough sets. In Rough Sets; Springer: Dordrecht, The Netherlands, 1991; pp. 9–32. ISBN 978-94-010-5564-2. [Google Scholar]
  49. Greco, S.; Matarazzo, B.; Slowinski, R. The use of rough sets and fuzzy sets in MCDM. In Multicriteria Decision Making; Gal, T., Stewart, T.J., Hanne, T., Eds.; International Series in Operations Research & Management Science; Springer: Boston, MA, USA, 1999; Volume 21, pp. 397–455. ISBN 978-1-4613-7283-7. [Google Scholar]
  50. Yao, Y. A comparative study of fuzzy sets and rough sets. Inf. Sci. 1998, 109, 227–242. [Google Scholar] [CrossRef]
  51. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209. [Google Scholar] [CrossRef]
  52. Anderson, G.T.; Zheng, J.; Wyeth, R.; Johnson, A.; Bissett, J.; The Posch Group. A rough set/fuzzy logic based decision making system for medical applications. Int. J. Gen. Syst. 2000, 29, 879–896. [Google Scholar] [CrossRef]
  53. Slim, H.; Nadeau, S.; Morency, F. FRAM: A complex system’s approach for the evaluation of aircraft on-ground de-icing operations. In Gesellschaft für Arbeitswissenschaft Coll. « Kongress der Gesellschaft für Arbeitswissenschaft »; GFA Press: Frankfurt, Germany, 2018; Volume 64. [Google Scholar]
  54. Slim, H.; Nadeau, S. A proposal for a predictive performance assessment model in complex sociotechnical systems combining fuzzy logic and the Functional Resonance Analysis Method (FRAM). Am. J. Ind. Bus. Manag. 2019, 9, 1345–1375. [Google Scholar] [CrossRef] [Green Version]
  55. Slim, H.; Nadeau, S. A mixed rough sets/fuzzy logic approach for modelling systemic performance variability with FRAM. Sustainability 2020, 12, 1918. [Google Scholar] [CrossRef] [Green Version]
  56. SHK Board of Accident Investigation. Report C 1993:57 Air Traffic Accident on 27 December 1991 at Gottröra; AB County Case L-124/91; SHK Board of Accident Investigation: Stockholm, Sweden, 1993; Available online: http://www.havkom.se/assets/reports/English/C1993_57e_Gottrora.pdf (accessed on 15 January 2020).
  57. Transportation Safety Board of Canada (TSB). Aviation Occurrence Report: Collision, Royal Air Maroc Boeing 747-400 CN-RGA, Mirabel International Airport, Québec, 21 January 1995; Report Number A95Q0015; TSB: Montreal, QC, Canada, 1995. Available online: http://www.tsb.gc.ca/eng/rapports-reports/aviation/1995/a95q0015/a95q0015.pdf (accessed on 15 January 2020).
  58. Hollnagel, E. Cognitive Reliability and Error Analysis Method (CREAM); Elsevier: Amsterdam, The Netherlands, 1998. [Google Scholar]
  59. Mamdani, E.H.; Assilian, S. An experiment in linguistic synthesis with a fuzzy logic controller. Int. J. Man Mach. Stud. 1975, 7, 1–13. [Google Scholar] [CrossRef]
  60. Melanson, A.; Nadeau, S. Managing OHS in complex and unpredictable manufacturing systems: Can FRAM bring agility? In Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future; Advances in Intelligent Systems and Computing; Schlick, C., Trzcieliński, S., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 490, pp. 341–348. ISBN 978-3-319-41696-0. [Google Scholar]
Figure 1. An overview of the development of accident and safety assessment methods [20].
Figure 2. The four principles of Functional Resonance Analysis Method (FRAM) [19].
Figure 3. The three steps of the Fuzzy Inference System (FIS).
Figure 4. A FRAM function represented as an FIS.
Figure 5. A flow diagram presenting an overview of the modified FRAM framework [55].
Table 1. A Rough Set Theory (RST) information system for determining the Internal Variability Factor (IVF).
Function   | CPC1       | CPC2       | ..... | CPCn       | IVF
Function 1 | Adequate   | Adequate   | ..... | Adequate   | Non-Variable
Function 2 | Inadequate | Adequate   | ..... | Adequate   | Non-Variable
Function 3 | Inadequate | Inadequate | ..... | Adequate   | Variable
...        | ...        | ...        | ..... | ...        | ...
Function m | Inadequate | Inadequate | ..... | Inadequate | Highly Variable
Table 2. An RST information system for determining the External Variability Factor (EVF), i.e., output’s variability.
Function   | IVF             | Input           | Time            | Control         | Preconditions   | Resources       | Output
Function 1 | Non-Variable    | Non-Variable    | Non-Variable    | Non-Variable    | Non-Variable    | Non-Variable    | Non-Variable
Function 2 | Non-Variable    | Non-Variable    | Variable        | Variable        | Variable        | Non-Variable    | Variable
Function 3 | Non-Variable    | Non-Variable    | Highly Variable | Non-Variable    | Non-Variable    | Non-Variable    | Variable
...        | ...             | ...             | ...             | ...             | ...             | ...             | ...
Function m | Highly Variable | Highly Variable | Highly Variable | Highly Variable | Highly Variable | Highly Variable | Highly Variable
Table 3. A short comparison and presentation of the advantages and disadvantages of each model.
Basic FRAM
Advantages:
  • Four principles of FRAM
  • Allows complex relationships to be characterized qualitatively
  • Use of simplified qualitative ordinal scales
  • Less demanding in comparison to the modified approaches
  • Requires no additional software other than the FMV
  • Easier for conducting expert elicitation
Limitations:
  • More subjective results
  • Lack of a standardized framework for characterizing internal and external variability
  • Results can be vague (magnitude) or interpreted differently
  • Inability at times to determine the output’s class relying on simplified scales

Fuzzy FRAM
Advantages:
  • More intersubjective results
  • Quantified results
  • A more precise representation of the magnitude of variability
  • Allows for an easier predictive assessment
  • Provision of a more standardized approach to characterize internal and external variability
Limitations:
  • Limitation on the number of input variables and respective classes
  • Resource demanding: computing, additional software, data acquisition, time, etc.
  • Requires additional knowledge and effort from the analyst
  • Subjective characterization of the universe of discourse, partitions, scales and weights

Rough-Fuzzy FRAM
Advantages:
  • Allows for a larger number of input variables and classes
  • Provision of reduced and more efficient rule bases
  • Data filtration and classification based on historical and archived data
  • Limited need for experts’ input
  • Automatic generation of rules
Limitations:
  • Requires additional effort from the analyst and knowledge in three disciplines
  • Resource demanding: computing, additional software, data acquisition, time, etc.
  • Lower accuracy using real data (dependent on the quality of provided data)
  • Subjective partitioning and scale assignment
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
