Article

Rock Strain Prediction Using Deep Neural Network and Hybrid Models of ANFIS and Meta-Heuristic Optimization Algorithms

Civil Engineering Department, National Institute of Technology (NIT) Patna, Patna 800 005, India
*
Author to whom correspondence should be addressed.
Submission received: 31 July 2021 / Revised: 1 September 2021 / Accepted: 2 September 2021 / Published: 7 September 2021
(This article belongs to the Special Issue Artificial Intelligence in Infrastructure Geotechnics)

Abstract

The majority of natural ground vibrations are caused by the release of strain energy accumulated in the rock strata. This strain is reflected in the formation of crack patterns and in the failure of rock strata. Rock strain prediction is therefore a significant task in assessing the failure of rock material. The purpose of this paper is to investigate the development of a new strain prediction approach for rock samples utilizing deep neural network (DNN) and hybrid ANFIS (adaptive neuro-fuzzy inference system) models. Four optimization algorithms, namely particle swarm optimization (PSO), the firefly algorithm (FF), the genetic algorithm (GA), and the grey wolf optimizer (GWO), were used to optimize the learning parameters of ANFIS, and ANFIS-PSO, ANFIS-FF, ANFIS-GA, and ANFIS-GWO were constructed. For this purpose, the necessary datasets were obtained from an experimental setup of an unconfined compression test of rocks in the lateral and longitudinal directions. Various statistical parameters were used to investigate the accuracy of the proposed prediction models. In addition, rank analysis was performed to select the most robust model for accurate rock strain prediction. Based on the experimental results, the constructed DNN shows great potential as a new alternative for assisting engineers in estimating rock strain in the design phase of many engineering projects.

1. Introduction

The earth’s crust is constantly pushed, pulled, and twisted by tectonic movement, which leads to deformations. The deformations cause strains, which result in the accumulation of stresses inside the earth’s rocky formations. Elastic strain, ductile strain, and fracture are the three forms of strain that rock can suffer as a result of stress. The study of rock strain is significant for predicting failure patterns in rock masses as well as in gravel materials. Permanent strain in gravel is one of the most essential concerns for pavement design, construction, and maintenance over the long run. Plastic strain plays a significant role in the collapse of rock material [1]. Researchers have compared the large and small strain theories at different deformation levels [2]. The strain on a rock mass is caused mostly by its weight and its mechanical interaction with surrounding materials [3,4]. Golosov et al. [5] implemented a data processing approach for experimental assessments of deformation in a rock sample exposed to uniaxial compression. Using a uniaxial compression test, Yang et al. [6] investigated the effect of fracture combination on the strength and deformation failure behavior of brittle marble samples; their research provides a deeper understanding of the fundamental nature of rock failure under uniaxial stress. Uniaxial compression tests on coarse crystal marble were performed at nine pre-specified static-to-quasi-static strain rates, and the strain rate dependency of rock strength, deformation, and strain energy conversion was explored in depth by Li et al. [7]. The results of the experimental work conducted to obtain the physical properties and stress–strain curves, as well as to correlate the various parameters and develop an analytical model that predicts the stress–strain curve of sandstones, were presented by Marques and Chastre [8], who observed good agreement between the analytical model and the experimental tests.
According to Zhao et al. [9], the fracture growth and coalescence process under uniaxial compression is highly related to the stress–strain curves of rock-like material with two flaws. Direct investigation of strain in tectonic rock is not possible. To resolve these concerns, more investigators are focusing on numerical and artificial intelligence (AI) based methods for determining rock characteristics [2,10,11,12,13,14,15,16]. Recently, soft computing techniques have been employed in a number of research works to solve science and engineering problems [17,18,19,20,21,22,23,24].
Machine learning has had a huge impact on our daily lives in recent decades, with its application expanding to efficient web search, self-driving vehicles, computer vision, and optical character recognition. Fuzzy logic, in its original form, lacks the learning ability to adapt to new settings, making it impossible to utilize for predictive modeling on its own [25]. Fuzzy logic’s ability can be improved by combining it with other AI techniques. The adaptive neuro-fuzzy inference system (ANFIS) blends the learning ability of neural networks with the interpretation capability of human reasoning to handle uncertainty [26]. Recently, meta-heuristic algorithms such as the firefly algorithm (FF), genetic algorithm (GA), grey wolf optimizer (GWO), and particle swarm optimization (PSO) have been hybridized with ANFIS to reduce errors in the prediction of hazard risks [27,28,29]. The firefly algorithm is endowed with a good balance of exploitation and exploration [30,31]. On the other hand, GA can easily search within a population, supports multi-objective optimization, and uses a probabilistic transition rule that makes it particularly suitable for mixed discrete or continuous problems [32]. The GWO method was found to have better exploitation of unimodal functions, and its performance on composite functions revealed a high level of local optima avoidance [33]. PSO is a form of meta-heuristic technique that can be used to address problems that are nonlinear and non-continuous. Similar to other hybrid ML-based meta-heuristic optimization techniques, it is a population-based search method that searches in parallel using a set of particles [34].
The research on artificial neural networks (ANNs) gave rise to the concept of deep learning [35]. Deep learning has recently made breakthroughs in a variety of fields, including forest cover projection, flood and typhoon forecasting, image recognition, traffic analysis, speech recognition, low-flow hydrological time series forecasting, weather forecasting, recommendation systems, and natural language processing [36,37,38,39,40,41,42]. Deep learning’s promising results have prompted the authors to investigate and utilize it as an AI methodology [43]. Significant numbers of technology giants are steadily gearing up to adopt deep learning expertise. To understand why, consider the advantages of using a deep learning technique: maximum exploitation of unstructured data, the capacity to give high-quality results, the elimination of superfluous costs, and the elimination of the need for data labeling are a few significant advantages of this technology [35].
The aim of this study is to develop and use deep learning and meta-heuristic-based hybrid ANFIS soft computing techniques to estimate the strain in a rock sample. A comprehensive experimental dataset obtained from uniaxial compression tests on rocks is used for this purpose. A description of the strain measurements for uniaxial compression tests on rocks can be found in the work of Isah et al. [44]. A laboratory uniaxial compression test was performed on the rock sample. The testing procedure was well equipped to measure the deformation of the rock sample at various points. A gradual load was applied along the longitudinal axis of the rock sample, and the deformation of the sample was simultaneously recorded in both the longitudinal and lateral directions. On the lateral surface of the rock sample, electronic strain gauges were placed along the height in various directions. The data from the acquisition system were retrieved after the experiment on the rock sample. Input and output data are required for the proposed prediction techniques. As input parameters, the angle of the strain gauge, the height of the strain gauge, and the stress in the rock sample were used. The strains in the rock sample’s longitudinal and lateral dimensions were used as output parameters. The current study estimates and compares the results obtained from the deep neural network (DNN) and hybrid ANFIS with FF, GA, GWO, and PSO for predicting lateral and longitudinal rock strains. The optimizer techniques have been used to improve the ANFIS performance in this case.

2. Data Collection

For the prediction of strain in the rock sample, data were obtained from the laboratory during an experimental test. The studies were carried out by Zavacky and Stefanak [45] on two types of rock in order to study the variability of rock strain behavior. The deformation of a cylindrical rock sample of 108 mm height and 40 mm diameter was measured through axial compression tests. A constant rate of force is applied along the sample’s longitudinal axis and monitored using a load cell. Deformation is measured using a series of electronic strain gauges placed around the perimeter of the rock sample (as shown in Figure 1a) at various heights and directions. The sample is equipped with a total of 24 strain gauges (sensors). The deformation of the rock sample is measured in both the lateral ( x ) and longitudinal ( y ) dimensions in each set. The details of the rock sample and experimental setup are shown in Figure 1. All of these deformation and load values are accumulated in the data acquisition system. Strain and stress are estimated from the gathered data based on the dimensions of the rock sample. As input parameters, the height of the strain gauge, the direction (angle) of the strain gauge, and the stress in the rock sample are used. The strains in the rock sample’s lateral and longitudinal dimensions are used as output parameters. Soft computing techniques are used to forecast the strain in the rock sample using these input and output data. In total, 2040 data points from the rock sample were collected for the present work.

3. Design and Details of Soft Computing Techniques

3.1. Deep Neural Network

Artificial neural networks with a multi-layered architecture are known as DNNs [46]. With several layers of abstraction, the layers between input and output can automatically learn non-linear patterns in data. Back-propagation (BP) learning techniques are used by DNNs to learn difficult patterns in datasets. To compute the representation of each layer from the representation of the previous layer, BP methods adjust the learning parameters of the DNN, fine-tuning the network weights by back-propagating the output errors. An input layer, a number of hidden layers, and an output layer are all included in DNNs. Network width versus depth, the neuron count in the hidden layers, the activation function at each layer, the optimizer, the batch size, the loss function, and the number of epochs are all hyper-parameters that affect a DNN’s architecture. Under-fitting and over-fitting are two problems that DNNs are prone to. Increased network capacity can remedy under-fitting, while regularization strategies such as weight decay, early stopping, dropout, and weight constraints may deal with over-fitting. The DNN’s two-layered design is shown in Figure 2.
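To make the forward pass and weight update concrete, the following is a minimal numpy sketch of a network with two hidden ReLU layers, three inputs (mirroring height, angle, and stress), and one output. The layer sizes, learning rate, and toy data are illustrative assumptions, not the architecture used in this study, and only the output layer is updated here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the three inputs (height, angle, stress) and one strain output.
X = rng.standard_normal((32, 3))
y = rng.standard_normal((32, 1))

# Two hidden layers of 16 units each (hypothetical sizes).
W1, b1 = rng.standard_normal((3, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((16, 16)) * 0.1, np.zeros(16)
W3, b3 = rng.standard_normal((16, 1)) * 0.1, np.zeros(1)

def forward(X):
    h1 = np.maximum(0.0, X @ W1 + b1)   # ReLU hidden layer 1
    h2 = np.maximum(0.0, h1 @ W2 + b2)  # ReLU hidden layer 2
    return h1, h2, h2 @ W3 + b3         # linear output for regression

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

# One back-propagation step, restricted to the output layer for brevity.
lr = 0.01
h1, h2, pred = forward(X)
grad_out = 2.0 * (pred - y) / len(X)    # dLoss/dpred for the MSE loss
W3 -= lr * h2.T @ grad_out
b3 -= lr * grad_out.sum(axis=0)

_, _, pred_after = forward(X)
```

In a full training loop, the same chain rule would propagate `grad_out` back through `W2` and `W1` as well, which is exactly what the BP methods described above automate.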

3.2. Adaptive Neuro-Fuzzy Inference System

Soft computing approaches such as neural networks and fuzzy set theory are used to create intelligent systems. For simplicity, the fuzzy inference system under study is considered to have two inputs and one output. The fuzzy inference system always includes fuzzy rules, which are made up of linguistic variables and fuzzy propositions and can be summarized as follows:
If x is A and y is B, then z = f(x, y)
In a fuzzy system, if a rule is invalid, it should be excluded; otherwise, it should be included in the calculation. In the antecedent, A and B are fuzzy sets, and in the consequent, z = f(x, y) is a crisp function. For the input variables x and y, f(x, y) is usually a polynomial function; however, it might be any other function that can roughly characterize the system’s output inside the fuzzy region defined by the antecedent. When f(x, y) is constant, a zero-order Sugeno fuzzy model emerges, which can be thought of as a special case of the Mamdani fuzzy inference system, in which each rule consequent is described by a fuzzy singleton. A first-order Sugeno fuzzy model is generated if f(x, y) is a first-order polynomial. The two rules of a first-order Sugeno fuzzy inference system are as follows:
Rule 1: If x is A1 and y is B1, then f1 = p1 x + q1 y + r1
Rule 2: If x is A2 and y is B2, then f2 = p2 x + q2 y + r2
Takagi and Sugeno [47] proposed a type-3 fuzzy inference system, which is applied here. Each rule’s output in this inference system is a linear combination of the input variables plus a constant term. The weighted average of each rule’s output is the ultimate result. Figure 3 shows the corresponding equivalent ANFIS structure.
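The two rules above can be evaluated as a firing-strength-weighted average of the rule outputs. The sketch below assumes Gaussian membership functions and hypothetical consequent coefficients p, q, r purely for illustration; ANFIS would tune all of these parameters from data:

```python
import math

def gauss(x, c, s):
    # Gaussian membership function (an assumed choice of MF).
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def sugeno_two_rules(x, y):
    # Rule firing strengths via the product t-norm.
    w1 = gauss(x, c=0.0, s=1.0) * gauss(y, c=0.0, s=1.0)  # "x is A1 and y is B1"
    w2 = gauss(x, c=2.0, s=1.0) * gauss(y, c=2.0, s=1.0)  # "x is A2 and y is B2"
    # First-order consequents f = p*x + q*y + r (hypothetical coefficients).
    f1 = 1.0 * x + 0.5 * y + 0.1
    f2 = -0.5 * x + 1.0 * y + 0.2
    # Output is the weighted average of the rule outputs.
    return (w1 * f1 + w2 * f2) / (w1 + w2)
```

Near (0, 0), rule 1 dominates and the output stays close to f1; near (2, 2), rule 2 takes over, which is the smooth interpolation between local linear models that makes the first-order Sugeno form attractive.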

3.3. Fireflies Algorithm

The algorithm, developed by Yang [48], is based on the social activity of fireflies in a tropical summer sky. There are around 2000 species of fireflies on the planet, and the majority of them produce brief, harmonious light flashes that they use to attract potential prey or mating partners. These flashes may also serve as a warning system. Because of two factors, most fireflies’ visibility is limited to a short distance. The first is the inverse square law, which relates the intensity of light from a source to the distance r: the relation I ∝ 1/r² implies that light intensity decreases as distance increases. The second factor is the property of air: as the distance between the firefly and the viewer increases, the light produced by the firefly is absorbed by the air, reducing the firefly’s visibility.
Although the benefits of this swarm intelligence-based algorithm are close to those of other swarm intelligence algorithms, FF can handle multimodality, which some other algorithms cannot. Attraction and attractiveness decrease with increasing distance. As a result, a population can be divided into subgroups, each of which is capable of swarming around a mode. These divisions aid fireflies in simultaneously establishing optimization conditions, particularly when the swarm size is larger than the number of modes. Mathematically, the term 1/γ limits the average distance over which a group of fireflies can be observed by neighboring groups. The algorithm mimics firefly mating and information-exchange processes based on light flashes. The action of artificial fireflies is defined using three idealized rules.
Since fireflies are unisex, they may attract each other regardless of gender.
The attractiveness of a firefly is directly proportional to the amount of light it produces. Low-intensity fireflies are drawn to high-intensity fireflies. If there are no fireflies with a brighter intensity in their vicinity, the degree of attractiveness decreases as the distance between them grows, and fireflies move randomly.
The landscape fitness function determines the intensity of light emitted by fireflies, which can be equal to the fitness value in the maximization problem. In the maximization problem, the brightness is proportional to the value of the firefly function. When a firefly i is attracted to the light of another firefly j , its movement can be calculated as
$x_i = x_i + \beta_0 e^{-\gamma r^2} (x_j - x_i) + \alpha\,(\mathrm{rand} - 0.5)$
where x_i is the current location of firefly i, rand is a random number between 0 and 1, α is the randomization parameter, typically chosen in [0, 1], and β0 is the attractiveness at zero distance, usually set to 1.
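The movement rule above can be turned into a compact firefly loop. The sketch below minimizes a toy sphere function, treating lower objective values as brighter fireflies; the population size, iteration count, and α-damping schedule are arbitrary illustrative choices, not values from this study:

```python
import math
import random

random.seed(1)

def sphere(x):
    # Toy objective to minimize (lower value = brighter firefly here).
    return sum(v * v for v in x)

def firefly(n=15, dim=2, iters=50, beta0=1.0, gamma=1.0, alpha=0.2):
    pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        fit = [sphere(p) for p in pop]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # firefly j is "brighter" (better)
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    fit[i] = sphere(pop[i])
        alpha *= 0.97  # gradually damp the random walk (a common refinement)
    return min(pop, key=sphere)

best = firefly()
```

Because attraction decays with e^(−γ r²), distant fireflies barely pull on each other, which is what lets subgroups form around separate modes as described above.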

3.4. Genetic Algorithm

GA is a general-purpose optimization method that finds the optimum of the objective function based on Darwin’s theory of evolution, which it does by simulating natural evolution. Many problems in a wide variety of fields have been solved using the genetic algorithm. In a nutshell, the genetic algorithm involves three steps: selection, crossover, and mutation. The algorithm starts with a set of solutions; the solution group for the problem under investigation is referred to as the population. Crossover is a technique for generating better offspring, or new generations, by merging different chromosomes to create a new solution. The mutation mechanism obtains a new solution by modifying a member of the population.

3.4.1. Initialization

Each gene represents a parameter (variable) in the solution. The chromosome is the collection of parameters that make up the solution, and the population is a collection of chromosomes. The order in which genes appear on a chromosome is significant. Most of the time, chromosomes are represented as binary 0s and 1s, although other encodings are also available.

3.4.2. Selection

A portion of the current population is chosen to breed a new generation during each successive generation. Individual solutions are chosen based on their fitness, with the fitter solutions being more likely to be chosen. Certain methods of selection score the fitness of each solution and choose the best solutions first. Since the former process may be time-consuming, other approaches only score a random sample of the population. The fitness function is a measure of the quality of the represented solution that is defined over the genetic representation. The fitness function is always dependent on a problem.

3.4.3. Crossover

Crossover, also known as recombination, is a genetic operator used to merge the genetic material of two parents to produce new offspring in evolutionary computation of genetic algorithms. It is one way to produce new solutions from an existing population stochastically, and it is similar to the crossover that occurs during biological reproduction. Cloning an existing solution, which is similar to asexual replication, may also be used to create new solutions. Before being applied to the population, newly created solutions are usually mutated.

3.4.4. Mutation

Another genetic operator called mutation is used to maintain genetic variation in a population of genetic algorithm chromosomes from one generation to the next. It is comparable to biological mutation. A mutation changes the value of one or more genes in a chromosome from its original state. The solution in mutation may differ significantly from the previous solution. As a result, using mutation, GA will find a better solution. During evolution, mutation occurs according to a user-defined mutation probability. This probability should be set to a low value. The search will devolve into a primitive random search if it is set too large.
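The four steps above (initialization, selection, crossover, and mutation) can be sketched on a toy one-max problem, where fitness is simply the number of 1-bits in a binary chromosome. Tournament selection and all parameter values here are illustrative assumptions:

```python
import random

random.seed(0)

def fitness(x):
    # Toy objective: maximize the number of 1-bits (one-max problem).
    return sum(x)

def tournament(pop, k=3):
    # Selection: the fittest of k randomly sampled individuals wins.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover producing a single offspring.
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(ind, p=0.01):
    # Flip each gene with a small mutation probability.
    return [1 - g if random.random() < p else g for g in ind]

def ga(n_bits=20, pop_size=30, gens=40):
    # Initialization: random binary chromosomes.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = ga()
```

Note the low mutation probability (p = 0.01): as the text warns, setting it too high would degrade the search into a primitive random walk.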

3.5. Grey Wolf Optimizer

In the last two decades, meta-heuristic optimization algorithms have been commonly used in a variety of engineering problems. Mirjalili et al. [33] suggested the grey wolf optimizer (GWO) algorithm, which was inspired by the social life of a grey wolf pack. The key features of the GWO are the hierarchical structure of grey wolf leadership and the hunting mechanism. Each wolf pack has four types of grey wolves representing the leadership hierarchy: alpha ( α ), beta ( β ), delta ( δ ), and omega ( ω ). The α wolves are the most responsible, while the ω wolves are the least responsible; the second and third ranks in the pack are occupied by the β and δ wolves.
The α wolf leads the hunt, with the β and δ wolves joining in on occasion, while the ω wolves encircle the prey based on the positions of the more experienced wolves. Each possible solution to the optimization problem in the GWO algorithm is represented by a grey wolf’s location. A grey wolf pack is a set of possible solutions in the GWO mathematical model, where the best solutions in each iteration, graded from highest to lowest, are the positions of the α ( P_α ), β ( P_β ), and δ ( P_δ ) grey wolves. With the best estimate of a grey wolf’s location, the α, β, and δ grey wolves use the following equation to update the position of an ω wolf ( P_ω^N ) in the pack:
$P_\omega^N = \frac{1}{3}\left(EP_\alpha + EP_\beta + EP_\delta\right)$
where E P α , E P β , and E P δ are the estimated positions of ω grey wolf by α , β , and δ grey wolves, respectively, and can be determined as follows:
$EP_\alpha = P_\alpha - A_\alpha \cdot D_\alpha, \quad A_\alpha = a \cdot (2 r_{1,\alpha} - 1), \quad D_\alpha = \left|2 r_{2,\alpha} \cdot P_\alpha - P_\omega\right|$
$EP_\beta = P_\beta - A_\beta \cdot D_\beta, \quad A_\beta = a \cdot (2 r_{1,\beta} - 1), \quad D_\beta = \left|2 r_{2,\beta} \cdot P_\beta - P_\omega\right|$
$EP_\delta = P_\delta - A_\delta \cdot D_\delta, \quad A_\delta = a \cdot (2 r_{1,\delta} - 1), \quad D_\delta = \left|2 r_{2,\delta} \cdot P_\delta - P_\omega\right|$
where P_ω denotes the ω grey wolf’s previous location, r_{1,i} and r_{2,i} are two random vectors with values ranging from 0 to 1, and a is a vector whose value decreases linearly from 2 to 0 as the number of iterations increases. The GWO’s exploration capability is ensured by a higher value of this parameter at the start of the estimation process, while its exploitation sufficiency is assured by a lower value.
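A one-dimensional sketch of these update equations is given below, minimizing a toy quadratic. The pack size and iteration count are arbitrary illustrative choices; the linear decay of a from 2 to 0 mirrors the exploration-to-exploitation transition described above:

```python
import random

random.seed(2)

def gwo_step(p_omega, p_alpha, p_beta, p_delta, a):
    # One position update of an omega wolf, following the equations above.
    def estimate(p_leader):
        r1, r2 = random.random(), random.random()
        A = a * (2 * r1 - 1)
        D = abs(2 * r2 * p_leader - p_omega)
        return p_leader - A * D
    # New position: average of the alpha, beta, and delta estimates.
    return (estimate(p_alpha) + estimate(p_beta) + estimate(p_delta)) / 3.0

def gwo(objective, iters=60, n=12, lo=-5.0, hi=5.0):
    wolves = [random.uniform(lo, hi) for _ in range(n)]
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                    # decreases linearly from 2 to 0
        leaders = sorted(wolves, key=objective)[:3]  # alpha, beta, delta
        wolves = [gwo_step(w, *leaders, a) for w in wolves]
    return min(wolves, key=objective)

best = gwo(lambda x: x * x)
```

Early on, a ≈ 2 lets A push wolves far from the leaders (exploration); near the end, a ≈ 0 collapses each wolf onto the average of the three leader estimates (exploitation).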

3.6. Particle Swarm Optimization

PSO is a meta-heuristic since it makes few to no assumptions about the problem to be solved and can search a large space of candidate solutions. Eberhart and Kennedy [49] introduced PSO in 1995 as a heuristic and evolutionary algorithm. Its basic principle is to mimic the predatory behavior of birds, with the idea that, through experience and interaction with the flock, birds may change their search path. Each particle represents a candidate solution and changes its flying distance and direction by adjusting its velocity while searching for a location in space. In the iterative process, each particle remembers its optimal location p_iD from its search history. The global optimal position p_gD is the best among the optimal positions of all particles. The equations and parameters for particle movement are as follows.
$v_{iD}^{j+1} = \omega v_{iD}^{j} + c_1 r_1 \left(p_{iD}^{j} - x_{iD}^{j}\right) + c_2 r_2 \left(p_{gD}^{j} - x_{iD}^{j}\right)$
$x_{iD}^{j+1} = x_{iD}^{j} + v_{iD}^{j+1}$
where i denotes the particle; j is the number of iterations completed so far; D is the dimension of the particle; x_{iD}^j and v_{iD}^j are the position and velocity in iteration j; the learning factors c_1 and c_2 determine how p_iD and p_gD affect the new velocity; the pseudo-random numbers r_1 and r_2 are uniformly distributed in the interval [0, 1]; and the inertia weight ω adjusts the searching capacity over the solution domain.
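The velocity and position updates above translate directly into code. The sketch below minimizes a toy sphere function with commonly used (but here assumed) values for ω, c1, and c2, not values taken from this study:

```python
import random

random.seed(3)

def pso(objective, dim=2, n=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                 # personal bests p_iD
    gbest = min(pbest, key=objective)[:]      # global best p_gD
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]            # position update
            if objective(X[i]) < objective(pbest[i]):
                pbest[i] = X[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))
```

The cognitive term (c1) pulls each particle back toward its own best, while the social term (c2) pulls it toward the swarm's best, which is how the particles search in parallel as described above.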

4. Statistical Parameter

The performance of the developed models was assessed using eight statistical parameters, namely, the determination coefficient (R2), root mean square error (RMSE), variance account for (VAF), performance index (PI), Willmott’s index of agreement (WI), mean absolute error (MAE), mean bias error (MBE), and mean absolute percentage error (MAPE) [50,51,52,53,54,55,56,57,58]. The mathematical expressions of the aforementioned indices can be given by:
$R^2 = \frac{\sum_{i=1}^{N}(d_i - d_{mean})^2 - \sum_{i=1}^{N}(d_i - y_i)^2}{\sum_{i=1}^{N}(d_i - d_{mean})^2}$
$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(d_i - y_i)^2}$
$VAF = \left(1 - \frac{\mathrm{var}(d_i - y_i)}{\mathrm{var}(d_i)}\right) \times 100$
$PI = \mathrm{adj.}R^2 + (0.01 \times VAF) - RMSE$
$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - d_i\right|$
$MBE = \frac{1}{N}\sum_{i=1}^{N}(y_i - d_i)$
$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{d_i - y_i}{d_i}\right| \times 100$
$WI = 1 - \left[\frac{\sum_{i=1}^{N}(d_i - y_i)^2}{\sum_{i=1}^{N}\left(\left|y_i - d_{mean}\right| + \left|d_i - d_{mean}\right|\right)^2}\right]$
where d_i is the ith observed value, y_i is the ith predicted value, d_mean is the average of the observed values, and N is the number of data samples.
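For reference, the indices above can be computed from paired observed/predicted lists as follows. Two simplifying assumptions are made in this sketch: the population variance is used for VAF, and plain R2 stands in for the adjusted R2 in PI, since the adjustment depends on the number of predictors:

```python
import math

def metrics(d, y):
    # d: observed values d_i, y: predicted values y_i.
    n = len(d)
    dm = sum(d) / n
    ss_tot = sum((di - dm) ** 2 for di in d)
    ss_res = sum((di - yi) ** 2 for di, yi in zip(d, y))
    resid = [di - yi for di, yi in zip(d, y)]
    rm = sum(resid) / n
    var_resid = sum((r - rm) ** 2 for r in resid) / n  # population variance
    var_d = ss_tot / n
    r2 = (ss_tot - ss_res) / ss_tot
    rmse = math.sqrt(ss_res / n)
    vaf = (1 - var_resid / var_d) * 100.0
    mae = sum(abs(yi - di) for di, yi in zip(d, y)) / n
    mbe = sum(yi - di for di, yi in zip(d, y)) / n
    mape = 100.0 / n * sum(abs((di - yi) / di) for di, yi in zip(d, y))
    wi = 1 - ss_res / sum((abs(yi - dm) + abs(di - dm)) ** 2
                          for di, yi in zip(d, y))
    pi = r2 + 0.01 * vaf - rmse  # plain R2 used in place of adj. R2 here
    return {"R2": r2, "RMSE": rmse, "VAF": vaf, "PI": pi,
            "MAE": mae, "MBE": mbe, "MAPE": mape, "WI": wi}
```

For a perfect prediction this returns the ideal values quoted later in the paper: R2 = 1, RMSE = MAE = MBE = MAPE = 0, VAF = 100, WI = 1, and PI = 2.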

5. Computational Processing and Data Analysis

Four optimization algorithms (FF, GA, GWO, and PSO) are separately combined with ANFIS, yielding four hybrid models: ANFIS-FF, ANFIS-GA, ANFIS-GWO, and ANFIS-PSO. A DNN was also used in the study. To determine the strain in the rock sample, an accurate evaluation is carried out over a collection of 2040 data points. These values are normalized between −1 and 1 in order to bring the numeric columns of the dataset onto a common scale:
$\text{Normalized variable} = \left(\frac{2\,(\text{Actual variable} - \text{Minimum variable})}{\text{Maximum variable} - \text{Minimum variable}}\right) - 1$
Following the normalization process, the data are divided into two parts, with the training dataset accounting for 70% of the total (1428 data points) and the testing dataset accounting for 30% (612 data points). The models are trained using the training dataset, and the testing dataset is used to assess the goodness of fit of the developed models.
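A minimal sketch of the normalization and the 70/30 split described above (the paper does not state whether the split is random or ordered, so a simple ordered split is assumed here):

```python
def normalize(values):
    # Min-max scaling to [-1, 1], per the equation above.
    lo, hi = min(values), max(values)
    return [2 * (v - lo) / (hi - lo) - 1 for v in values]

def split(data, train_frac=0.70):
    # Ordered 70/30 split; the sampling scheme is an assumption.
    k = round(len(data) * train_frac)
    return data[:k], data[k:]

norm = normalize([10.0, 15.0, 20.0])
train, test = split(list(range(2040)))  # 1428 training / 612 testing points
```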

6. Result and Discussion

6.1. Comparison of Stress Strain Curve

Strain is computed by using strain gauges to measure the deformation of the rock sample at various points. Rock is a mostly brittle substance that is strong in compression but weak in tension. As a result of the tensile stresses in the rock sample, there was increased tensile strain in the lateral ( x ) dimension. The initiation of a failure pattern in the rock sample is indicated by the maximum strain. This maximum strain is studied using the strain values over variations of height, angle, and dimension of the rock sample. From these data, we studied the strain values and concluded that the maximum values, 0.06347 in the lateral and 0.23227 in the longitudinal dimension, are obtained at a sample height of 81 mm and an angle of 90 degrees. Figure 4 and Figure 5 depict the strain in the rock sample’s lateral and longitudinal dimensions with respect to the strain gauge’s height and angle. The behavior of the rock sample was investigated using different strain gauges installed in the lateral and longitudinal directions. The ultimate stress and strain values of all models, namely DNN, ANFIS-FF, ANFIS-GA, ANFIS-GWO, and ANFIS-PSO, are compared to the actual curve, as shown in Figure 6 and Figure 7. From this comparative study, it is clear that DNN is the best model in both the lateral and longitudinal directions when compared to the hybrid models. When just the hybrid models are considered, ANFIS-PSO is the best model, followed by ANFIS-FF, ANFIS-GA, and ANFIS-GWO in both directions.

6.2. Actual vs. Predicted

For the training and testing datasets in the x and y directions, Figure 8, Figure 9, Figure 10 and Figure 11 show the predicted strain values calculated by the different generated models compared to the experimental results. The closer the points lie to the regression line, the better the developed model’s performance. Based on R2 values, it can be stated that the DNN model outperforms the other models, whilst ANFIS-GWO is the worst performing model.

6.3. Statistical and Score Analysis

Table 1 and Table 2 show the statistical evaluation and score of the proposed models for the lateral and longitudinal strains, respectively. Herein, the values of the performance indices are presented based on the normalized outputs. The models achieved good performance levels in all cases of the rock sample. RMSE, MAE, MAPE, and MBE should be close to 0, R2 should be near 1, and VAF should be close to 100 for the models to be considered highly efficient; on this basis, the good fit of all the models is verified. WI is a measure, ranging from 0 to 1, of the degree of error in model predictions, and 2 is considered the ideal value of PI. According to the limits and ranges of the parameters in these tables, all models attain acceptable values. A scoring system is utilized to compare the predictive models’ performance: for each of the five models, score values are calculated from the training and testing data. The number of models determines the range of the score values, from 1 to 5; each performance indicator is scored by comparison against its ideal value, and the comparatively best model receives a higher score. A model’s overall performance is calculated by adding all of its score values for the training and testing data.
$\text{Total score} = \sum_{i=1}^{m} S_i + \sum_{j=1}^{n} S_j$
where S is the score, i and j index the training and testing performance indicators, and m and n are the numbers of performance indicators in the training and testing datasets. DNN (78 and 74) has the highest score, followed by ANFIS-PSO (56 and 62), ANFIS-GA (48 and 43), ANFIS-FF (39 and 37), and ANFIS-GWO (18 and 24). As a result, when compared to the hybrid ANFIS models, DNN is the best performing model.

6.4. Error Diagram

The predicted error levels of the developed DNN, ANFIS-FF, ANFIS-GA, ANFIS-GWO, and ANFIS-PSO models for the testing dataset are shown in the error diagrams (Figure 12 and Figure 13). The error diagrams show the minimum, average, and maximum error values of each predicted model. The DNN average error value (0.000096) is much lower than those of the ANFIS hybrid models in the x direction. Among the ANFIS hybrid models, GWO (0.0122) has a slightly lower average value, followed by GA (0.0219), FF (0.0301), and PSO (0.0357), and the deviation of that value is also lower. Likewise, the DNN average error value (0.0047) in the y direction is lower than those of the other models. In the y direction, among the ANFIS hybrid models, FF is considered the best when compared to PSO (0.0247), GA (0.0268), and GWO (0.0637). Considering the depth of error in the ANFIS hybrid models, PSO (0.8331 and 0.5223) is followed by FF (1.0689), GA (1.1027), and GWO (1.1287) in the x direction, and by GA (0.5986), FF (0.7895), and GWO (1.2156) in the y direction. This shows that, compared to the ANFIS hybrid models, more than 95 percent of the errors in the x and y directions of the DNN model predictions for the testing data lie in a smaller interval, indicating that the error scatter is concentrated around zero. Furthermore, the number of outliers in the ANFIS hybrid models is higher than in the DNN model.

6.5. Taylor Diagram

Figure 14 and Figure 15 show the Taylor diagrams of all the best developed models for the training and testing datasets in the x and y directions, which illustrate how the predicted values correspond to the experimental results. The degree of agreement between the predicted and experimental values is determined using three performance parameters: standard deviation, RMSE, and correlation coefficient. The diagrams reveal that, compared to the other hybrid ANFIS models, the ANFIS model hybridized with the PSO algorithm provides the most accurate predictions. However, the accuracy of the ANFIS-PSO model is not as good as that of the DNN model in either direction.

6.6. Error Matrix

In this section, Figure 16 and Figure 17 show the amount of error associated with the models based on several performance parameters. This is a heat map matrix developed by comparing against the ideal values of the performance parameters. For example, R2 and RMSE have ideal values of one and zero, and the values of these indices for the DNN model (x direction) in the training phase are 0.9153 and 0.004, respectively. The mentioned model therefore attained an R2 error of 8% (1 − 0.9153 = 0.0847) and an RMSE error of 0% (0.004). Similarly, the value of PI obtained in the training phase of the DNN (x direction) is 1.8253, which means the DNN model achieved a 9% error (1 − (1.8253/2) = 0.09), as the ideal value of PI is two. In the same way, inferences about the correctness of the other indices are reached. In this matrix, the DNN achieves comparatively lower error in the training and testing of the x and y directions. At the same time, ANFIS-GWO has the worst result, owing to its larger error level in both training and testing in each direction. Among the ANFIS hybrid models, PSO is followed by GA and FF in both the training and testing of the x and y directions.

7. Summary and Conclusions

In this study, the results of strain prediction in the lateral and longitudinal dimensions of a rock sample have been presented. The strain in both dimensions was predicted using a dataset of 2040 records, with three influencing parameters, namely height, angle, and stress, considered as the inputs. Four hybrid ANFIS models and a DNN were utilized to predict the strain; 70% of the total dataset was used for training the models and the remaining 30% for testing. Based on the experimental results, the DNN (R2 = 0.9153; 0.8992 and 0.9925; 0.9927) was found to be more accurate than the hybrid ANFIS models, including ANFIS-PSO (R2 = 0.8753; 0.8773 and 0.9699; 0.9724), ANFIS-GA (R2 = 0.8720; 0.8756 and 0.9604; 0.9606), ANFIS-FF (R2 = 0.834; 0.8336 and 0.9343; 0.9390), and ANFIS-GWO (R2 = 0.6837; 0.6814 and 0.8458; 0.8524) in both training and testing for the x and y directions. Herein, the values of R2 are listed in the order of training and testing phases for the x and y directions, respectively. Apart from the performance indices, the employed models were also compared using rank analysis. Overall, the DNN was found to be the best performing model, followed by ANFIS-PSO, ANFIS-GA, ANFIS-FF, and ANFIS-GWO in both the x and y directions for predicting the rock strain, and the developed DNN model can therefore serve as a promising tool to predict rock strain from existing experimental datasets. Future work may include a detailed assessment of the developed DNN against hybrid models built with other optimization algorithms, artificial neural networks, and extreme learning machines, as well as a comparative assessment of several conventional soft computing models.
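The rank analysis referred to above can be illustrated with a short sketch. The scoring scheme is inferred from the score rows of Tables 1 and 2 (best of n models scores n, worst scores 1); `rank_scores` is an illustrative helper, not the authors' code:

```python
def rank_scores(values, higher_is_better=True):
    """Rank-analysis scoring: among n models the best performer on a
    parameter receives a score of n and the worst a score of 1.
    Per-parameter scores are summed into sub-totals and total ranks."""
    order = sorted(values, key=values.get, reverse=not higher_is_better)
    # position in ascending-performance order + 1 gives the score
    return {name: order.index(name) + 1 for name in values}

# Training-phase R2 values in the x direction (from Table 1)
r2_x_train = {"DNN": 0.9153, "ANFIS-PSO": 0.8753, "ANFIS-GA": 0.8720,
              "ANFIS-FF": 0.8341, "ANFIS-GWO": 0.6837}
scores = rank_scores(r2_x_train)
```

For error-type indices such as RMSE or MAPE, `higher_is_better=False` reverses the ordering so that the smallest error earns the highest score.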

Author Contributions

Conceptualization, A.B. (Abidhan Bardhan); methodology, T.P.; overall analysis, T.P.; manuscript writing, T.P.; overall review, A.B. (Avijit Burman) and P.S. All authors have read and agreed to the published version of the manuscript.

Funding

No funding has been received for this work.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset is available from the corresponding author and will be provided on request, subject to the approval of the funding agency.

Acknowledgments

The authors thank the Department of Science and Technology, Government of India, for funding the project titled "Radical Decrease in natural catastrophic disasters" (Ref. No. DST/IMRCD/BRICS/Pilotcall2/RDRNCD/2018 (G), dated March 2019).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. (a) Details of strain gauge location with uniaxial load of rock sample and (b) experimental setup.
Figure 2. Architecture of a deep neural network (DNN) for strain prediction.
Figure 3. A typical structure of ANFIS.
Figure 4. (a) Variation of strain (height) and (b) variation of strain (angle) in x direction.
Figure 5. (a) Variation of strain (height) and (b) variation of strain (angle) in y direction.
Figure 6. Stress-strain behavior of rock sample (x direction).
Figure 7. Stress-strain behavior of rock sample (y direction).
Figure 8. Illustration of actual and predicted strain in lateral (x) dimension (training phase).
Figure 9. Illustration of actual and predicted strain in lateral (x) dimension (testing phase).
Figure 10. Illustration of actual and predicted strain in longitudinal (y) dimension (training).
Figure 11. Illustration of actual and predicted strain in longitudinal (y) dimension (testing).
Figure 12. Error diagram in x direction.
Figure 13. Error diagram in y direction.
Figure 14. Taylor diagram in x direction (a) for training and (b) for testing phase.
Figure 15. Taylor diagram in y direction (a) for training and (b) for testing phase.
Figure 16. Error matrix for lateral (x) direction (TR for training and TS for testing).
Figure 17. Error matrix for longitudinal (y) direction (TR for training and TS for testing).
Table 1. Statistical parameters of all the models in the lateral (x) direction.

| Parameter | | DNN (Train) | DNN (Test) | ANFIS-FF (Train) | ANFIS-FF (Test) | ANFIS-GA (Train) | ANFIS-GA (Test) | ANFIS-GWO (Train) | ANFIS-GWO (Test) | ANFIS-PSO (Train) | ANFIS-PSO (Test) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R2 | Value | 0.9153 | 0.8992 | 0.8341 | 0.8336 | 0.8720 | 0.8757 | 0.6837 | 0.6814 | 0.8753 | 0.8774 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| RMSE | Value | 0.0040 | 0.0044 | 0.0055 | 0.0056 | 0.0049 | 0.0050 | 0.0080 | 0.0083 | 0.0048 | 0.0049 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| VAF | Value | 91.4161 | 89.7735 | 83.4162 | 83.3357 | 86.8917 | 87.3625 | 64.7175 | 63.5929 | 87.4250 | 87.6902 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| PI | Value | 1.8253 | 1.7919 | 1.6624 | 1.6604 | 1.7356 | 1.7437 | 1.3221 | 1.3073 | 1.7444 | 1.7487 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| MAPE | Value | 23.3885 | 28.8132 | 36.9813 | 38.9793 | 32.2192 | 32.1730 | 56.3826 | 59.7222 | 37.9168 | 38.6925 |
| | Score | 5 | 5 | 3 | 2 | 4 | 4 | 1 | 1 | 2 | 3 |
| WI | Value | 0.9784 | 0.9744 | 0.9546 | 0.9557 | 0.9617 | 0.9638 | 0.9100 | 0.9102 | 0.9654 | 0.9671 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| MAE | Value | 0.0028 | 0.0032 | 0.0041 | 0.0043 | 0.0036 | 0.0037 | 0.0061 | 0.0064 | 0.0037 | 0.0038 |
| | Score | 5 | 5 | 2 | 2 | 4 | 4 | 1 | 1 | 3 | 3 |
| MBE | Value | −0.0004 | −0.0004 | 8.3 × 10⁻⁶ | −0.0001 | −0.0008 | −0.0009 | 0.0006 | 0.0006 | 0.0006 | 0.0005 |
| | Score | 4 | 4 | 5 | 5 | 1 | 1 | 2 | 2 | 2 | 3 |
| Sub Total | | 39 | 39 | 20 | 19 | 24 | 24 | 9 | 9 | 27 | 29 |
| Total | | 78 | | 39 | | 48 | | 18 | | 56 | |
Table 2. Statistical parameters of all the models in the longitudinal (y) direction.

| Parameter | | DNN (Train) | DNN (Test) | ANFIS-FF (Train) | ANFIS-FF (Test) | ANFIS-GA (Train) | ANFIS-GA (Test) | ANFIS-GWO (Train) | ANFIS-GWO (Test) | ANFIS-PSO (Train) | ANFIS-PSO (Test) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R2 | Value | 0.9925 | 0.9927 | 0.9343 | 0.9391 | 0.9605 | 0.9606 | 0.8459 | 0.8525 | 0.9700 | 0.9724 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| RMSE | Value | 0.0054 | 0.0055 | 0.0149 | 0.0149 | 0.0120 | 0.0123 | 0.0232 | 0.0236 | 0.0104 | 0.0104 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| VAF | Value | 99.2325 | 99.2491 | 93.4307 | 93.8945 | 96.0017 | 95.9656 | 84.0775 | 84.6858 | 96.8645 | 97.0723 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| PI | Value | 1.9794 | 1.9797 | 1.8535 | 1.8628 | 1.9084 | 1.9077 | 1.6631 | 1.6750 | 1.9281 | 1.9326 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| MAPE | Value | 9.3529 | 10.4821 | 28.0462 | 28.2933 | 25.6443 | 28.6168 | 48.2624 | 53.1781 | 22.7195 | 24.4756 |
| | Score | 5 | 5 | 2 | 3 | 3 | 2 | 1 | 1 | 4 | 4 |
| WI | Value | 0.9979 | 0.9979 | 0.9831 | 0.9839 | 0.9890 | 0.9889 | 0.9531 | 0.9539 | 0.9916 | 0.9921 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| MAE | Value | 0.0035 | 0.0038 | 0.0105 | 0.0106 | 0.0091 | 0.0094 | 0.0177 | 0.0180 | 0.0077 | 0.0077 |
| | Score | 5 | 5 | 2 | 2 | 3 | 3 | 1 | 1 | 4 | 4 |
| MBE | Value | 0.0017 | 0.0018 | 0.0007 | 0.0009 | −0.0029 | −0.0023 | 0.0001 | 0.0002 | −0.0016 | −0.0013 |
| | Score | 2 | 2 | 4 | 4 | 1 | 1 | 5 | 5 | 3 | 3 |
| Sub Total | | 37 | 37 | 18 | 19 | 22 | 21 | 12 | 12 | 31 | 31 |
| Total | | 74 | | 37 | | 43 | | 24 | | 62 | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation

Pradeep, T.; Bardhan, A.; Burman, A.; Samui, P. Rock Strain Prediction Using Deep Neural Network and Hybrid Models of ANFIS and Meta-Heuristic Optimization Algorithms. Infrastructures 2021, 6, 129. https://0-doi-org.brum.beds.ac.uk/10.3390/infrastructures6090129
