Article

An Automated Glowworm Swarm Optimization with an Inception-Based Deep Convolutional Neural Network for COVID-19 Diagnosis and Classification

by Ibrahim Abunadi 1, Amani Abdulrahman Albraikan 2, Jaber S. Alzahrani 3, Majdy M. Eltahir 4, Anwer Mustafa Hilal 5,*, Mohamed I. Eldesouki 6, Abdelwahed Motwakel 5 and Ishfaq Yaseen 5
1 Department of Information Systems, College of Computer and Information Sciences, Prince Sultan University, Riyadh 12435, Saudi Arabia
2 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
3 Department of Industrial Engineering, College of Engineering at Alqunfudah, Umm Al-Qura University, Mecca 24382, Saudi Arabia
4 Department of Information Systems, College of Science & Art at Mahayil, King Khalid University, Abha 62529, Saudi Arabia
5 Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, AlKharj 16278, Saudi Arabia
6 Department of Information System, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj 16278, Saudi Arabia
* Author to whom correspondence should be addressed.
Submission received: 26 February 2022 / Revised: 29 March 2022 / Accepted: 31 March 2022 / Published: 8 April 2022

Abstract

Recently, the COVID-19 epidemic has had a major impact on the day-to-day life of people all over the globe, and it demands various kinds of screening tests to detect the coronavirus. The development of deep learning (DL) models combined with radiological images is useful for accurate detection and classification. DL models are full of hyperparameters, and identifying the optimal parameter configuration in such a high-dimensional space is not a trivial challenge. Since setting the hyperparameters requires expertise and extensive trial and error, metaheuristic algorithms can be employed. With this motivation, this paper presents an automated glowworm swarm optimization (GSO) with an inception-based deep convolutional neural network (IDCNN) for COVID-19 diagnosis and classification, called the GSO-IDCNN model. The presented model uses a Gaussian smoothening filter (GSF) to remove the noise in the radiological images. Additionally, an IDCNN-based feature extractor is utilized, which makes use of the Inception v4 model. To further enhance the performance of the IDCNN technique, its hyperparameters are optimally tuned using the GSO algorithm. Lastly, an adaptive neuro-fuzzy classifier (ANFC) is used for classifying the existence of COVID-19. The design of the GSO algorithm with the ANFC model for COVID-19 diagnosis shows the novelty of the work. For experimental validation, a series of simulations were performed on benchmark radiological imaging databases to highlight the superior outcome of the GSO-IDCNN technique. The experimental values pointed out that the GSO-IDCNN methodology demonstrated a proficient outcome, offering a maximal sensitivity of 0.9422, specificity of 0.9466, precision of 0.9494, accuracy of 0.9429, and F1-score of 0.9394.

1. Introduction

The 2019 novel coronavirus, named COVID-19, has become a major threat to human health across the globe. Earlier works reported that the Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) was transmitted to humans from civet cats and that the Middle East Respiratory Syndrome coronavirus (MERS-CoV) spread from Arabian camels to human beings. It is believed that COVID-19 started in bats and spread to humans. It infects the respiratory system easily and is rapidly transmitted to other people. It exhibits mild symptoms in about 82% of patients, while the remaining cases worsen to a critical stage [1]. In most cases, 95% of patients survive, while the remaining 5% progress to the advanced stage. It has also been observed that COVID-19 has affected more men than women, and children in the age group of 0–6 are at risk of infection.
Since March 2020, numerous openly accessible X-ray images of COVID-19-infected persons have become available. They offer a way to analyze medical images and identify patterns that may enable the automatic identification and classification of the disease. At present, the limited availability of COVID-19 diagnostic testing is causing stress globally. Because of the inadequate availability of COVID-19 rapid test kits, it has become essential to rely on other diagnostic techniques. Since the coronavirus damages the epithelial cells in the respiratory system, doctors use X-rays to examine the patient's lungs [2]. As hospitals commonly have X-ray imaging equipment, it is easy to test for COVID-19 using X-rays without specific test kits. Radiological imaging techniques have therefore become essential to detecting and classifying COVID-19. Although the infection appears as a rounded distribution in the images, it exhibits characteristics similar to other viral lung infections. Because the coronavirus continues to spread quickly, different varieties of examinations are performed.
Deep learning (DL) is an effective method involved in healthcare-based diagnostic processes. DL is a branch of machine learning (ML) and is majorly focused on automated feature extraction and classification [3,4]. ML and DL models are well-established tools to mine, examine, and identify the patterns that exist in images. As ever larger volumes of clinical data are produced, improving medical decision making and computer-aided diagnosis (CAD) has become non-trivial [5]. DL models, commonly in the form of deep CNNs (DCNNs), are utilized to automatically extract features through convolutional operations, with successive layers operating on nonlinear transformations of the data. Each layer transforms the data to a superior and more abstract level of representation. Usually, DL refers to novel deep networks that, unlike standard ML techniques, can exploit big data [6].
This paper presents an automated glowworm swarm optimization (GSO) with an inception-based deep convolutional neural network (IDCNN) for COVID-19 diagnosis and classification, called the GSO-IDCNN model. The presented model utilizes a Gaussian smoothening filter (GSF) to eliminate the noise in the radiological images. Moreover, an IDCNN-based feature extractor is utilized, which employs the Inception v4 method. To further boost the performance of the IDCNN model, its hyperparameters are optimally tuned using the GSO algorithm. Finally, an adaptive neuro-fuzzy classifier (ANFC) is used for classifying the existence of COVID-19. For experimental validation, a series of simulations were performed on benchmark radiological imaging databases to highlight the superior outcome of the GSO-IDCNN model. In short, the contributions of the paper are listed as follows:
  • To develop a new GSO-IDCNN model for COVID-19 detection and classification;
  • To present a new GSF model to eradicate the noise that exists in the radiological images;
  • To introduce a GSO model with an Inception v4-based feature extractor on radiological images;
  • To employ the ANFC classifier to assign proper class labels to the images;
  • To validate the performance of the GSO-IDCNN model on the benchmark dataset.

2. Related Works

ML algorithms fall under the topic of artificial intelligence (AI) and are commonly employed in healthcare applications for feature extraction and image examination. A classification model was developed for computing the dissimilarity amongst a collection of Regions of Interest (ROIs) [7,8], with the features classified by a standard vector-based classifier technique. Another computed tomography (CT)-based classification model was developed in [9], incorporating classical features such as grayscale values, shape, texture, and symmetric features, which were classified using an RBFNN. A comparative study of Jeffries–Matusita (J–M) distance and Karhunen–Loève transformation-based feature extraction techniques was developed in [10].
A new classifier model was proposed in [11] using the average grayscale value of images for multi-class image classification. A novel automatic classifier technique was developed in [12] for classifying breast cancer utilizing morphological features. However, it was noticed that performance decreases when the identical process is carried out on an alternative dataset. Additionally, handcrafted techniques require careful initialization, motivating the deployment of CNNs and automated feature extraction methods.
Ozyurt et al. [13] presented a hybridization technique known as fused perceptual hash, dependent on the CNN model, to decrease the diagnosis time of liver CT images while sustaining overall performance. Xu et al. [14] executed a deep transfer learning (DTL) technique to address the medical imaging imbalance problem. Lakshmanaprabu et al. [15] investigated CT lung images using an optimal DNN together with LDA. In [16], a transformation of original CT images into lower-attenuation images and rescaled higher-attenuation pattern images was carried out; the images were then resampled and classified using the CNN technique. A DL-based automatic segmentation of the lung and affected regions takes place in [17] using chest CT images. Wang et al. [18] relied on COVID-19 radiographical changes in CT images and designed a DL model for graphical feature extraction of COVID-19, offering a medical examination prior to pathogenic testing to avert deadly outcomes in patients. In [19], data mining (DM) techniques were applied to classify SARS and pneumonia using X-rays.
Although numerous techniques exist to diagnose COVID-19, there is still a need to analyze COVID-19 using chest X-ray images. X-ray machinery helps scan the body for damage, such as fractures, bone displacement, lung disease, pneumonia, and tumors. Compared with CT, X-ray scanning is easy, quick, cheap, and less harmful. Since the advanced stage of COVID-19 leads to serious illness, a proficient CAD model for COVID-19 diagnosis is essential. At the same time, most earlier works have concentrated on binary classification. Therefore, in this study, a multi-label classification process is designed for COVID-19 diagnosis.

3. The Proposed GSO-IDCNN Model

The working procedure contained in the GSO-IDCNN technique is showcased in Figure 1. As depicted, the noise that exists from the radiological images is discarded by the GSF technique. Then, the feature extraction process takes place using the IDCNN model, where the parameters involved in it are tuned by the GSO technique. Eventually, the classification process is executed by the ANFC model to allocate appropriate class labels to it.

3.1. GSF-Based Preprocessing

The 2D GSF is commonly employed to smoothen images and remove noise. Although it can demand considerable computational time, its effectiveness makes the design attractive.
Gaussian smoothing is attained by convolving the image with a Gaussian operator. The 1D Gaussian operator is represented by:
G_{1D}(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}
A good smoothing filter should be localized in both the spatial and frequency domains, thereby satisfying the uncertainty relation [20]:
\Delta x \, \Delta \omega \geq \frac{1}{2}.
The 2D Gaussian operator (circularly symmetric) can be represented by:
G_{2D}(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}},
where σ designates the standard deviation (SD) of the Gaussian function; the higher its value, the stronger the smoothing effect. (x, y) designates the Cartesian coordinates in the image, which define the window dimensions.
This filtering technique involves addition as well as multiplication between the image and the kernel. An image can be defined as a matrix with values of 0–255. The kernel is a normalized square matrix whose values lie within the range of zero to one and can be defined using a specific bit count.
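As a concrete sketch of this preprocessing step, the snippet below builds a normalized 2D Gaussian kernel from the operator above and convolves a stand-in image with it; the kernel size, σ, and the random test image are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sample G_2D(x, y) on a size x size grid and normalize it to sum to one."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

def smooth(image, kernel):
    """Direct 2D convolution of an image with the kernel (edge padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

kernel = gaussian_kernel(5, sigma=1.0)
image = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(float)
smoothed = smooth(image, kernel)
```

Because the kernel is normalized, a constant image passes through unchanged, while pixel-to-pixel noise is averaged away.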
The MSE is the cumulative squared error between the reconstructed and original images and can be represented by:
MSE = \frac{1}{M \times N} \sum_{i} \sum_{j} \left( O_{image}(i, j) - R_{image}(i, j) \right)^2,
where M × N indicates the image size, O_{image} implies the original image, and R_{image} denotes the restored image. PSNR is the peak value of the SNR and can be represented by the ratio of the maximum possible power of the pixel values to the power of the distorting noise. It reflects the actual quality of the image and is represented by:
PSNR = 10 \log_{10} \left[ \frac{255 \times 255}{MSE} \right],
where 255 × 255 is the square of the highest pixel value in the image, and the MSE is determined between the input and saved images of size M × N. Since the convolution process is dominated by multiplications, an inaccurate logarithm product degrades the result; an effective logarithm multiplier therefore improves the accuracy of the Gaussian filter.
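The two quality measures above can be computed directly; the helpers below are a minimal sketch, with the 8-bit peak value of 255 taken from the text.

```python
import numpy as np

def mse(original, restored):
    """Mean squared error between the original and restored images."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(restored, dtype=float)
    return float(np.mean((o - r) ** 2))

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(original, restored)
    return float("inf") if m == 0.0 else 10.0 * np.log10(peak**2 / m)
```

For example, an image offset by a constant 10 gray levels gives an MSE of 100 and a PSNR of 10·log10(255²/100) ≈ 28.13 dB.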

3.2. IDCNN-Based Feature Extraction Model

In this section, the features in the preprocessed image are extracted using the IDCNN model, which is based on Inception v4 [21]. The older Inception versions were trained as distinct blocks, with every repetitive block split into a number of subnetworks so that the whole model fit in memory. However, the Inception network is easily tuned, meaning that several modifications can be made to the filter counts in different layers without degrading the quality of the fully trained network. To optimize the training rate, the layer sizes need to be set carefully to reach an effective tradeoff between the distinct subnetworks and their processing cost. Figure 2 illustrates the network schema of Inception v4. By contrast, in TensorFlow, recent Inception models can be trained without partitioning the replicas.
For the residual version of the Inception network, cheaper Inception blocks than in regular Inception are used. Every Inception block is followed by a filter-expansion layer, which increases the dimensionality of the filter bank before the residual summation so as to match the input depth. A further variation between the residual and non-residual Inception methods is that batch normalization (BN) is applied only on top of the conventional layers, not on top of the residual summations. Although comprehensive use of BN could be expected to be beneficial, each BN instance in TensorFlow requires considerable memory, so it became essential to minimize the number of BN layers. Thus, BN is employed selectively.
It was found that if the filter count exceeds 1000, the residual version becomes unstable and the network "dies" early in training, meaning that the final layer before average pooling produces only zeros after a number of iterations. Therefore, scaling down the residuals before adding them to the preceding layer's activation keeps the training stable. Usually, a scaling factor in the interval [0.1, 0.3] is used to scale the residual before adding it to the accumulated layer activation.
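The filter-expansion and residual-scaling steps described above can be sketched in a few lines; the tensor shapes, the 1×1-convolution weights, and the 0.2 scale factor are illustrative assumptions (the text only prescribes a factor in [0.1, 0.3]).

```python
import numpy as np

def filter_expand(branch, w):
    """1x1 convolution: project the branch output back to the input depth.
    branch: (H, W, Cb) feature map; w: (Cb, C) expansion weights."""
    return np.tensordot(branch, w, axes=([-1], [0]))

def scaled_residual_sum(x, branch, w, scale=0.2):
    """Residual Inception summation: shortcut plus scaled, depth-matched branch."""
    return x + scale * filter_expand(branch, w)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 32))       # shortcut / block input
branch = rng.normal(size=(8, 8, 16))  # Inception-branch output
w = rng.normal(size=(16, 32))         # filter-expansion weights
out = scaled_residual_sum(x, branch, w)
```

With scale set to zero the block degenerates to the identity shortcut, which is what makes the scaled version a gentle perturbation of the input rather than a full replacement.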

3.3. GSO-Based Hyperparameter Optimization Model

To optimize the hyperparameters of the IDCNN technique, a collection of glowworms is initialized and arbitrarily distributed over the solution space. The intensity of the emitted light is linked to the amount of luciferin carried by each glowworm, and each glowworm has a dynamic decision range r_d^i(t) bounded by a spherical sensor range r_s (0 < r_d^i ≤ r_s). Initially, every glowworm carries an identical amount of luciferin, l_0. Based on the relative luciferin values, glowworm i selects a neighbor j with probability p_ij and moves toward it within its decision range, where the position of glowworm i is represented by x_i (x_i ∈ R^m, i = 1, 2, …, n) [22].
The luciferin update stage depends on the function value at the glowworm's position. The luciferin update rule can be defined as:
l_i(t + 1) = (1 - \rho) l_i(t) + \gamma J(x_i(t + 1)),
where l_i(t) denotes the luciferin level of glowworm i at time t, ρ refers to the luciferin decay constant (0 < ρ < 1), γ represents the luciferin enhancement constant, and J(x_i(t)) signifies the value of the objective function at agent i's position at time t.
During the movement phase of the GSO technique, glowworms are attracted to neighbors that glow brighter. Each glowworm therefore uses a probabilistic rule to move toward a neighbor with a higher luciferin intensity. For every glowworm i, the probability of moving toward a neighboring glowworm j can be represented as:
p_{ij}(t) = \frac{l_j(t) - l_i(t)}{\sum_{k \in N_i(t)} \left( l_k(t) - l_i(t) \right)},
where j ∈ N_i(t), N_i(t) = {j : d_{ij}(t) < r_d^i(t), l_i(t) < l_j(t)} denotes the set of neighbors of glowworm i at time t, d_{ij}(t) indicates the Euclidean distance between glowworms i and j at time t, and r_d^i(t) denotes the variable neighborhood range of glowworm i at time t, restricted by the radial sensor range (0 < r_d^i ≤ r_s).
The position of glowworm i is then updated as:
x_i(t + 1) = x_i(t) + s \left[ \frac{x_j(t) - x_i(t)}{\| x_j(t) - x_i(t) \|} \right],
where s (> 0) refers to the step size, and ‖·‖ implies the Euclidean norm operator. Moreover, x_i(t) ∈ R^m denotes the position of glowworm i at time t in the m-dimensional real space R^m. Afterward, let r_0 be the initial neighborhood range of all the glowworms (i.e., r_d^i(0) = r_0 for all i):
r_d^i(t + 1) = \min \left\{ r_s, \max \left\{ 0, r_d^i(t) + \beta \left( n_t - |N_i(t)| \right) \right\} \right\},
where β is a constant, and n_t is a parameter used to control the desired number of neighbors.
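Putting the three update rules together, a minimal GSO loop looks like the sketch below; the swarm size, iteration count, and the concrete values of ρ, γ, β, n_t, s, l_0, r_0, and r_s are illustrative assumptions, and the toy objective stands in for the (much more expensive) IDCNN validation score that the paper optimizes.

```python
import numpy as np

rng = np.random.default_rng(42)

def gso_maximize(objective, dim, n=30, iters=100, rho=0.4, gamma=0.6,
                 beta=0.08, n_t=5, s=0.03, l0=5.0, r0=1.0, rs=1.0,
                 bounds=(-2.0, 2.0)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n, dim))  # glowworm positions
    l = np.full(n, l0)                      # luciferin levels
    rd = np.full(n, r0)                     # decision ranges
    for _ in range(iters):
        # luciferin update: l_i <- (1 - rho) l_i + gamma * J(x_i)
        l = (1.0 - rho) * l + gamma * np.array([objective(xi) for xi in x])
        new_x = x.copy()
        for i in range(n):
            d = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((d < rd[i]) & (l > l[i]))[0]
            if nbrs.size:
                # move toward neighbor j chosen with probability p_ij
                p = (l[nbrs] - l[i]) / np.sum(l[nbrs] - l[i])
                j = rng.choice(nbrs, p=p)
                step = x[j] - x[i]
                new_x[i] = x[i] + s * step / np.linalg.norm(step)
            # neighborhood-range update with target neighbor count n_t
            rd[i] = min(rs, max(0.0, rd[i] + beta * (n_t - nbrs.size)))
        x = np.clip(new_x, lo, hi)
    return x[int(np.argmax(l))]  # position of the brightest glowworm

# toy usage: the peak of J(x) = -||x||^2 is at the origin
best = gso_maximize(lambda v: -np.sum(v**2), dim=2)
```

Because the brightest glowworm has no brighter neighbor, it stays put, while the rest of the swarm drifts toward it, which is what concentrates the search around the best region found so far.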

3.4. ANFC-Based Classification Model

The ANFIS-based classification model can be employed to determine the class labels of the input radiological images. For simplicity, a network with two inputs, u and v, and one output, f, is considered. ANFIS is a fuzzy Sugeno method. The ANFIS structure builds on two fuzzy if-then rules of the first-order Sugeno type, regarded as follows:
  • Rule 1: if u is A_1 and v is B_1, then f_1 = p_1 u + q_1 v + r_1;
  • Rule 2: if u is A_2 and v is B_2, then f_2 = p_2 u + q_2 v + r_2;
where u and v are the inputs, A_i and B_i are the fuzzy sets, f_i (i = 1, 2) are the outputs of the fuzzy model, and p_i, q_i, and r_i are the design parameters determined during training. The ANFIS structure applying these two rules is demonstrated in Figure 3 [23], where a circle refers to a fixed node and a square denotes an adaptive node. As shown in the figure, the ANFIS structure has five layers.
Layer 1: Every node in layer 1 is an adaptive node. The outputs of layer 1 are the fuzzified membership grades of the inputs and are provided as:
O_i^1 = \mu_{A_i}(u),  i = 1, 2,
O_i^1 = \mu_{B_{i-2}}(v),  i = 3, 4,
where u and v are the inputs to node i, A_i refers to the linguistic label, and \mu_{A_i}(u) and \mu_{B_{i-2}}(v) can adopt any fuzzy membership function (MF). In general, \mu_{A_i}(u) is chosen as the bell-shaped function:
\mu_{A_i}(u) = \frac{1}{1 + \left\{ \left[ (u - c_i)/a_i \right]^2 \right\}^{b_i}},
where a_i, b_i, and c_i are the parameters of the bell-shaped membership function.
Layer 2: Every node in this layer is labeled M, reflecting that it performs simple multiplication. The outputs of this layer are given as:
O_i^2 = w_i = \mu_{A_i}(u) \, \mu_{B_i}(v),  i = 1, 2.
Layer 3: This layer has fixed nodes that compute the ratio of each rule's firing strength to the sum of all firing strengths, as given below:
O_i^3 = \bar{w}_i = \frac{w_i}{w_1 + w_2},  i = 1, 2.
Layer 4: The nodes in this layer are adaptive nodes. The outputs of this layer are calculated by:
O_i^4 = \bar{w}_i f_i = \bar{w}_i (p_i u + q_i v + r_i),  i = 1, 2,
where w ¯ i is a normalized firing strength from layer 3.
Layer 5: The single node in this layer computes the summation of all incoming signals. Hence, the overall output of the model is given as:
O^5 = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}.
There are two adaptive layers in this ANFIS model, namely the first and fourth layers. In the first layer, there are three modifiable parameters {a_i, b_i, c_i} related to the input MFs; these are usually known as the premise parameters. The fourth layer also has three modifiable parameters {p_i, q_i, r_i} relating to the first-order polynomial; these are referred to as the consequent parameters [24].
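The five layers above can be traced end-to-end for the two-rule network; the bell-MF parameters and consequent coefficients below are illustrative assumptions chosen only to exercise the computation.

```python
import numpy as np

def bell_mf(u, a, b, c):
    """Generalized bell membership function from layer 1."""
    return 1.0 / (1.0 + (((u - c) / a) ** 2) ** b)

def anfis_forward(u, v, premise, consequent):
    """Forward pass of the two-rule Sugeno ANFIS (layers 1-5)."""
    w = np.array([
        bell_mf(u, *premise["A1"]) * bell_mf(v, *premise["B1"]),  # layer 2
        bell_mf(u, *premise["A2"]) * bell_mf(v, *premise["B2"]),
    ])
    w_bar = w / w.sum()                                           # layer 3
    f = np.array([p * u + q * v + r for p, q, r in consequent])   # layer 4
    return float(w_bar @ f)                                       # layer 5

premise = {"A1": (1.0, 2.0, 0.0), "A2": (1.0, 2.0, 2.0),
           "B1": (1.0, 2.0, 0.0), "B2": (1.0, 2.0, 2.0)}
consequent = [(1.0, 1.0, 0.0), (2.0, 0.5, 1.0)]  # (p_i, q_i, r_i) per rule
y = anfis_forward(0.5, 0.5, premise, consequent)
```

Since the normalized firing strengths sum to one, the output is always a convex combination of the two rule outputs; for the inputs above, rule 1 (centered at 0) dominates, so y lies close to f_1.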

3.5. Learning Algorithm of ANFIS

The learning technique for this model tunes all the modifiable parameters, namely {a_i, b_i, c_i} and {p_i, q_i, r_i}, to make the ANFIS output match the training data. When the premise parameters a_i, b_i, and c_i of the MFs are fixed, the output of the ANFIS method is expressed by:
f = \frac{w_1}{w_1 + w_2} f_1 + \frac{w_2}{w_1 + w_2} f_2.
By substituting the fuzzy if-then rules into this expression, it becomes:
f = \bar{w}_1 (p_1 u + q_1 v + r_1) + \bar{w}_2 (p_2 u + q_2 v + r_2),
where \bar{w}_1 and \bar{w}_2 are the normalized firing strengths. After rearrangement, the output is demonstrated by:
f = (\bar{w}_1 u) p_1 + (\bar{w}_1 v) q_1 + (\bar{w}_1) r_1 + (\bar{w}_2 u) p_2 + (\bar{w}_2 v) q_2 + (\bar{w}_2) r_2,
which is a linear combination of the modifiable consequent parameters p_1, q_1, r_1, p_2, q_2, and r_2. These parameters are updated in the forward pass of the learning technique using the least squares method. Let θ be an unknown vector comprising these six parameters. The relation can then be written in matrix form as:
f = A \theta.
When A is an invertible matrix, then:
\theta = A^{-1} f.
Otherwise, the pseudo-inverse is utilized to solve for θ as follows:
\theta = (A^T A)^{-1} A^T f.
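The pseudo-inverse solution can be checked numerically; the design matrix layout follows the linear form above, while the synthetic training data and the true parameter vector are illustrative assumptions.

```python
import numpy as np

def consequent_lstsq(U, V, Wbar, targets):
    """Least squares estimate of theta = (p1, q1, r1, p2, q2, r2),
    i.e., theta = (A^T A)^{-1} A^T f, via a numerically stable solver."""
    A = np.column_stack([
        Wbar[:, 0] * U, Wbar[:, 0] * V, Wbar[:, 0],
        Wbar[:, 1] * U, Wbar[:, 1] * V, Wbar[:, 1],
    ])
    theta, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return theta

rng = np.random.default_rng(1)
U, V = rng.normal(size=50), rng.normal(size=50)
Wraw = rng.uniform(0.1, 1.0, size=(50, 2))
Wbar = Wraw / Wraw.sum(axis=1, keepdims=True)      # normalized firing strengths
true_theta = np.array([1.0, -2.0, 0.5, 3.0, 0.0, -1.0])
A = np.column_stack([Wbar[:, 0] * U, Wbar[:, 0] * V, Wbar[:, 0],
                     Wbar[:, 1] * U, Wbar[:, 1] * V, Wbar[:, 1]])
targets = A @ true_theta                           # noise-free synthetic outputs
theta = consequent_lstsq(U, V, Wbar, targets)
```

With noise-free targets and a full-rank design matrix, the least squares solution recovers the generating parameters exactly (up to floating-point precision).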
During the backward pass, the error signals are propagated backward, and the premise parameters are updated with gradient descent:
\alpha_{new} = \alpha_{old} - \eta \frac{\partial E}{\partial \alpha},
where E is the MSE, α represents any premise parameter, and η is the learning rate. The chain rule is applied to calculate the partial derivative used for updating the MF parameters:
\frac{\partial E}{\partial \alpha} = \frac{\partial E}{\partial f} \frac{\partial f}{\partial f_j} \frac{\partial f_j}{\partial w_j} \frac{\partial w_j}{\partial \mu_i} \frac{\partial \mu_i}{\partial \alpha}.
By following the above expression and computing all the partial derivatives, the premise parameters {a_i, b_i, c_i} are updated.

4. Experimental Validation

To assess the classification performance of the GSO-IDCNN method, an extensive experimental validation process was carried out with a chest X-ray dataset [25]. It encompasses a set of 220 COVID-19 images, 27 normal images, 15 pneumocystis images, and 11 SARS images. Figure 4 showcases the sample images. The presented method was executed on an MSI L370 Apro PC with an 8th-generation Intel i5 CPU, 16 GB RAM, and an NVIDIA GeForce GTX 1050 Ti 4 GB GPU. For experimentation, Python 3.6.5 was utilized together with Pillow, pandas, sklearn, TensorFlow, Keras, OpenCV, seaborn, Matplotlib, and pycm. The training parameters were a batch size of 128, a learning rate of 0.001, an epoch count of 500, and a momentum of 0.2.
Table 1 and Figure 5, Figure 6 and Figure 7 investigate the classifier outcome analysis of the GSO-IDCNN model under several validation runs. The GSO-IDCNN model obtained effective diagnostic outcomes by offering higher performance. For the samples in validation 1, the GSO-IDCNN approach reached sensitivity, specificity, precision, accuracy, F1-score, and kappa values of 0.9324, 0.9380, 0.9389, 0.9365, 0.9310, and 0.9298, respectively.
In validation 2, the GSO-IDCNN method attained superior sensitivity, specificity, precision, accuracy, F1-score, and kappa values of 0.9389, 0.9456, 0.9490, 0.9427, 0.9354, and 0.9376, respectively. Moreover, in validation 3, the GSO-IDCNN approach gained increased sensitivity, specificity, precision, accuracy, F1-score, and kappa values of 0.9423, 0.9472, 0.9498, 0.9462, 0.9403, and 0.9421, respectively. Further, in validation 4, the GSO-IDCNN model gained maximal sensitivity, specificity, precision, accuracy, F1-score, and kappa values of 0.9492, 0.9490, 0.9515, 0.9408, 0.9472, and 0.9219, respectively. Furthermore, in validation 5, the GSO-IDCNN method achieved superior sensitivity, specificity, precision, accuracy, F1-score, and kappa values of 0.9481, 0.9532, 0.9576, 0.9482, 0.9431, and 0.9423, respectively.
Table 2 and Figure 8 and Figure 9 offer a detailed comparative analysis of the GSO-IDCNN technique with respect to distinct measures [26]. The sensitivity analysis of the GSO-IDCNN approach against existing algorithms displays that the ANN approach accomplished ineffective results with a lower sensitivity of 0.8745. Moreover, the Conv-NN system resulted in a somewhat increased sensitivity of 0.8773, whereas the ANFIS and Deep-TL models accomplished reasonably closer sensitivity values of 0.8848 and 0.8961, respectively. Eventually, the XGBoost algorithm demonstrated a reasonable outcome with a sensitivity of 0.92. Afterward, the MLP and LR approaches depicted considerably increased sensitivity values of 0.93 each. Though the FM-HCF-DLF methodology offered a slightly better sensitivity of 0.9361, the presented GSO-IDCNN technique achieved a maximum sensitivity of 0.9422.
The specificity analysis of the GSO-IDCNN approach against recent methodologies demonstrates that the ANN model accomplished ineffective outcomes with the minimal specificity of 0.8291. Additionally, the Conv-NN system resulted in a somewhat increased specificity of 0.8697, whereas the ANFIS model accomplished a moderate specificity of 0.8774. Next, the Deep-TL approach showcased reasonable outcomes with a specificity of 0.9203. Afterward, the FM-HCF-DLF model depicted a considerably increased specificity of 0.9456. However, the proposed GSO-IDCNN system gained a superior specificity of 0.9466.
The precision analysis of the GSO-IDCNN technique against the existing methods shows that the ANN methodology accomplished an ineffectual outcome with a minimum precision of 0.8259. In line with this, the Conv-NN model resulted in a slightly higher precision of 0.8741, whereas the ANFIS approach accomplished a moderate precision of 0.8808. Similarly, the LR and XGBoost models demonstrated an identical precision of 0.9200. In addition, the Deep-TL approach portrayed a reasonable outcome with a precision of 0.9259. Next, the MLP model depicted a considerably increased precision of 0.9300. Although the FM-HCF-DLF methodology offered a slightly better precision of 0.9485, the proposed GSO-IDCNN model achieved a higher precision of 0.9494.
The accuracy analysis of the GSO-IDCNN methodology against existing approaches exhibits that the ANN method accomplished ineffective results with a minimum accuracy of 0.8509. Similarly, the Conv-NN model resulted in a somewhat enhanced accuracy of 0.8736, whereas the ANFIS and Deep-TL systems accomplished reasonably closer accuracy values of 0.8811 and 0.9075, respectively. Following them, the XGBoost approach illustrated a reasonable outcome with an accuracy of 0.9157. Concurrently, the LR and MLP methodologies depicted considerably improved accuracy values of 0.9212 and 0.9313. Although the FM-HCF-DLF model offered a near-optimal accuracy of 0.9408, the projected GSO-IDCNN technique reached a superior accuracy of 0.9429. Finally, the F1-score analysis of the GSO-IDCNN approach against the existing methodologies displays that the LR and XGBoost methods accomplished ineffective results with the smallest F1-score of 0.9200. Additionally, the MLP system resulted in a somewhat higher F1-score of 0.9300. Eventually, the FM-HCF-DLF model produced reasonable results with an F1-score of 0.9320. However, the proposed GSO-IDCNN methodology achieved a superior F1-score of 0.9394.
From this experimental validation, we conclude that the GSO-IDCNN technique exhibited an effective diagnostic performance relative to the compared approaches, providing a maximal sensitivity of 0.9422, specificity of 0.9466, precision of 0.9494, accuracy of 0.9429, and F1-score of 0.9394. This is due to the GSO-based hyperparameter tuning of the IDCNN and the use of the ANFC model.

5. Conclusions

This paper has established a GSO-IDCNN approach for COVID-19 diagnosis and classification. Primarily, the noise in the radiological images is removed by the GSF technique. Then, the feature extraction process is carried out by the IDCNN model, whose parameters are tuned by the GSO technique. Eventually, the classification process is executed by the ANFC model to assign appropriate class labels. To validate the performance of the GSO-IDCNN method, extensive simulation analyses were carried out on benchmark radiological imaging databases to highlight its superior outcome. The experimental values pointed out that the GSO-IDCNN approach demonstrated a proficient outcome, offering a maximal sensitivity of 0.9422, specificity of 0.9466, precision of 0.9494, accuracy of 0.9429, and F1-score of 0.9394. In the future, the COVID-19 diagnostic performance could be improved by utilizing advanced end-to-end deep learning architectures.

Author Contributions

Conceptualization, A.M.H.; Data curation, A.A.A.; Formal analysis, A.A.A.; Investigation, J.S.A.; Methodology, I.A.; Project administration, M.M.E.; Resources, M.I.E.; Software, A.M.; Validation, I.Y.; Writing–original draft, I.A.; Writing–review & editing, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/158/43). Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R191), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR05. The authors would like to thank Prince Sultan University for its support in paying the Article Processing Charges.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated in the study.

Conflicts of Interest

The authors declare that they have no conflict of interest to report regarding the present study.

Figure 1. Working process of the GSO-IDCNN model.
Figure 2. Network schema of Inception v4.
Figure 3. ANFC structure.
Figure 4. Sample Images.
Figure 5. Result analysis of the GSO-IDCNN approach with respect to sensitivity and specificity.
Figure 6. Result analysis of the GSO-IDCNN technique with respect to precision and accuracy.
Figure 7. Result analysis of the GSO-IDCNN technique with respect to F1-score and kappa.
Figure 8. Comparative analysis of the GSO-IDCNN technique with different measures.
Figure 9. Comparative analysis of the GSO-IDCNN technique with respect to specificity.
Table 1. Result analysis of the presented GSO-IDCNN technique with respect to distinct measures.

Validation Run | Sensitivity | Specificity | Precision | Accuracy | F1-Score | Kappa
Validation 1 | 0.9324 | 0.9380 | 0.9389 | 0.9365 | 0.9310 | 0.9298
Validation 2 | 0.9389 | 0.9456 | 0.9490 | 0.9427 | 0.9354 | 0.9376
Validation 3 | 0.9423 | 0.9472 | 0.9498 | 0.9462 | 0.9403 | 0.9421
Validation 4 | 0.9492 | 0.9490 | 0.9515 | 0.9408 | 0.9472 | 0.9219
Validation 5 | 0.9481 | 0.9532 | 0.9576 | 0.9482 | 0.9431 | 0.9423
Average | 0.9422 | 0.9466 | 0.9494 | 0.9429 | 0.9394 | 0.9347
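As a sanity check (ours, not the authors'), the Average row of Table 1 can be reproduced directly from the five per-run values:

```python
# Reproduce Table 1's Average row from the five validation runs.
runs = {
    "sensitivity": [0.9324, 0.9389, 0.9423, 0.9492, 0.9481],
    "specificity": [0.9380, 0.9456, 0.9472, 0.9490, 0.9532],
    "precision":   [0.9389, 0.9490, 0.9498, 0.9515, 0.9576],
    "accuracy":    [0.9365, 0.9427, 0.9462, 0.9408, 0.9482],
    "F1-score":    [0.9310, 0.9354, 0.9403, 0.9472, 0.9431],
    "kappa":       [0.9298, 0.9376, 0.9421, 0.9219, 0.9423],
}
averages = {m: round(sum(v) / len(v), 4) for m, v in runs.items()}
print(averages)
# {'sensitivity': 0.9422, 'specificity': 0.9466, 'precision': 0.9494,
#  'accuracy': 0.9429, 'F1-score': 0.9394, 'kappa': 0.9347}
```

The computed means match the reported averages to four decimal places.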
Table 2. Comparative studies of the existing models with the presented GSO-IDCNN model.

Methods | Sensitivity | Specificity | Precision | Accuracy | F1-Score
GSO-IDCNN | 0.9422 | 0.9466 | 0.9494 | 0.9429 | 0.9394
FM-HCF-DLF | 0.9361 | 0.9456 | 0.9485 | 0.9408 | 0.9320
Conv-NN | 0.8773 | 0.8697 | 0.8741 | 0.8736 | -
Deep-TL | 0.8961 | 0.9203 | 0.9259 | 0.9075 | -
ANN | 0.8745 | 0.8291 | 0.8259 | 0.8509 | -
ANFIS | 0.8848 | 0.8774 | 0.8808 | 0.8811 | -
MLP | 0.9300 | - | 0.9300 | 0.9313 | 0.9300
LR | 0.9300 | - | 0.9200 | 0.9212 | 0.9200
XGBoost | 0.9200 | - | 0.9200 | 0.9157 | 0.9200
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Abunadi, I.; Albraikan, A.A.; Alzahrani, J.S.; Eltahir, M.M.; Hilal, A.M.; Eldesouki, M.I.; Motwakel, A.; Yaseen, I. An Automated Glowworm Swarm Optimization with an Inception-Based Deep Convolutional Neural Network for COVID-19 Diagnosis and Classification. Healthcare 2022, 10, 697. https://0-doi-org.brum.beds.ac.uk/10.3390/healthcare10040697