Article

An Efficient Multi-Level Convolutional Neural Network Approach for White Blood Cells Classification

1 Facultad de Ingeniería, Universidad Andres Bello, Viña del Mar 2531015, Chile
2 Escuela de Tecnología Médica, Universidad de Valparaíso, Viña del Mar 2540064, Chile
3 Centro de Investigación y Desarrollo en Ingeniería en Salud, Escuela de Ingeniería C. Biomédica, Universidad de Valparaíso, Valparaíso 2362905, Chile
4 Instituto Milenio Intelligent Healthcare Engineering, Valparaíso 2362905, Chile
* Author to whom correspondence should be addressed.
Submission received: 30 November 2021 / Revised: 17 December 2021 / Accepted: 28 December 2021 / Published: 20 January 2022
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

Abstract

The evaluation of white blood cells is essential to assess the quality of the human immune system; however, the assessment of the blood smear depends on the pathologist’s expertise. Most machine learning tools perform a one-level classification of white blood cells. This work presents a two-stage hybrid multi-level scheme that efficiently classifies four cell groups: lymphocytes and monocytes (mononuclear) and segmented neutrophils and eosinophils (polymorphonuclear). At the first level, a Faster R-CNN network is applied for the identification of the region of interest of white blood cells, together with the separation of mononuclear cells from polymorphonuclear cells. Once separated, two parallel convolutional neural networks with the MobileNet structure are used to recognize the subclasses in the second level. The results obtained using Monte Carlo cross-validation show that the proposed model has a performance metric of around 98.4% (accuracy, recall, precision, and F1-score). The proposed model represents a good alternative for computer-aided diagnosis (CAD) tools for supporting the pathologist in the clinical laboratory in assessing white blood cells from blood smear images.

1. Introduction

The peripheral blood smear is a routine laboratory test that provides the physician with a great deal of information about a patient’s general condition. It provides a qualitative and quantitative assessment of blood components, mainly cells and platelets. Blood cells can be divided into white blood cells (WBC) or leukocytes and red blood cells (RBC) or erythrocytes [1]. Leukocytes, in turn, comprise five types of nucleated cells: monocytes, basophils, eosinophils, neutrophils, and lymphocytes. The total WBC count, together with the percentage distribution among the subtypes, provides critical information in infectious diseases and in chronic processes such as anemia, leukemia, and malnutrition [2].
The manual WBC count is based on the microscopic observation of the blood smear by the analyst, who differentiates the subtypes mainly by the morphological characteristics of the cell nucleus and cytosol. However, this process strongly depends on the analyst’s time and experience, leading to errors if the analyst is not adequately trained [3]. In addition, since this hematological evaluation is a routine test, it is often in high demand in clinical laboratories, representing an increased workload that affects performance. Thus, computer-aided diagnosis (CAD) tools are required to assist diagnosis in the laboratory. For instance, CAD systems have been developed that use image processing techniques to perform the differential white blood cell count [4,5]. This automatic leukocyte classification allows a faster and more reproducible result to be generated while reducing bias and inter-observer variability.
On the other hand, due to the inherent complexity of interpreting medical examinations, their automation remains a challenge for researchers. Achieving precision comparable to that of a trained professional is critical: automated systems are prone to errors or biases when recognizing and classifying these images, which directly impact the diagnosis, increase treatment costs, and negatively affect patient recovery and survival.
The main component of the computer-aided system for WBC classification is the cell detection and segmentation algorithm. Based on image processing analysis, it will identify the different elements of interest considering various aspects associated with cell morphology: size, shape, texture, nucleus, etc. [4]. Commonly, cell segmentation is a complex task in tissue samples. However, this task is straightforward in cell smears, given the dark nucleus in leukocyte staining. The challenge is mainly in delineating cell borders, separating overlapping cells, and removing noise and artifacts in the image during acquisition [6]. Given the advantages of artificial intelligence in image processing, several machine learning (ML) alternatives have been evaluated to classify and segment leukocytes. These methods range from the support vector machine (SVM) [7,8] and Naïve Bayesian [9,10] to more complex algorithms such as deep learning (DL) models [11,12,13].
Within DL models, convolutional neural networks (CNNs) have shown exemplary performance in medical image classification [14,15,16,17]. CNNs are feed-forward neural networks that can be divided into two main parts: deep convolutional feature extraction and classification. The model extracts features by applying multiple convolutional and pooling layers that include linear operations (called kernels) emphasizing an input image’s characteristics. A fully connected dense layer then performs the classification using the extracted features [18,19]. Following this structure, several CNN models have been proposed for specific tasks: for classification, AlexNet [20], ResNet [21], VGG [22], and GoogLeNet [23]; for segmentation and detection, the fully convolutional network (FCN) [24], U-Net [25], and Faster R-CNN [26]. These have been applied to the processing of blood smear images for differential WBC counting, achieving good performance. Recently, an efficient network architecture called MobileNet was proposed as a small, lightweight, and low-latency model for mobile and embedded vision applications [27]. This model has been demonstrated to be effective when applied to various tasks.
Most researchers have used these methods as one-level designs or single models built on the entire dataset. In other fields, such single models may experience difficulty in handling increasingly complex data distributions [28] or large-scale visual recognition [29]. Given the characteristics of leukocytes, better performance in WBC classification could be obtained with a multi-level scheme. In the first level of this scheme, polymorphonuclear cells are separated from mononuclear cells. In the second level, the mononuclear cells are classified into monocytes and lymphocytes, while the polymorphonuclear cells are classified into neutrophils, eosinophils, and basophils. Thus, more features could be extracted from each cell image, and the classification performance could be increased.
On the other hand, medical datasets acquired from several institutions may have different quality, contrast, and acquisition mechanisms. Thus, the datasets are prone to an inherent bias caused by various confounding factors [30]. In machine learning, the dataset bias may lead to a difference between the estimated and the true value of desired model parameters. A possible solution to this problem is combining multi-source datasets such that the model will be robust to the unseen domains with better generalization performance.
Therefore, the main objective of this research is to develop a multi-level convolutional neural network (ML-CNN) model to improve the automatic detection and classification of individual white blood cells by mitigating the dataset bias with the combination of multi-source datasets. The main contribution of this work is twofold. First, a new multi-level deep learning algorithm separates the leukocyte detection and classification processes into two levels. In the first level, the Faster R-CNN network is applied to identify the region of interest of white blood cells, together with the separation of mononuclear cells from polymorphonuclear cells. Once separated, two parallel CNN models are used to classify the subclasses in the second level. Second, the ML-CNN proposal was implemented using the MobileNet architecture as the base model. It is an efficient model with an adequate balance between high performance and structural complexity, making its implementation in automated equipment such as a CAD system feasible. Furthermore, the MobileNet applies the depthwise separable convolution to extract relevant features from each channel, which better uses the information contained in the images to improve leukocyte classification.
The article is structured as follows. In Section 2, we present a brief revision of the state of the art. In Section 3, the proposed method is presented. Results and discussion are presented in Section 4. Finally, in Section 5, we give some concluding remarks, and we outline some future works.

2. State of the Art

Traditional machine learning (ML) and deep learning (DL) models have been extensively proposed as alternatives for the automatic classification of leukocytes [5,31,32]. Such is the case of Abou et al. [33], who developed a CNN model to identify WBC. Likewise, Togacar et al. [34] proposed a subclass separation of WBC images using the AlexNet model. In addition, Hegde et al. [13] proposed a deep learning approach for the classification of white blood cells in peripheral blood smear images. Wang et al. [35] applied deep convolutional networks to microscopy hyperspectral images to learn spectral and spatial features, thereby making full use of three-dimensional hyperspectral data for WBC classification. Basnet et al. [36] optimized CNN-based WBC classification by enhancing the loss function with regularization and weighted loss, decreasing processing time. Jiang et al. [37] constructed a new CNN model called WBCNet that can fully extract features of the WBC image by combining batch normalization, a residual convolution architecture, and an improved activation function. Other authors include steps to improve the feature extraction process. Thus, Yao et al. [38] proposed the two-module weighted optimized deformable convolutional neural network (TWO-DCNN) for white blood cell classification, characterized by two-module transfer learning and deformable convolutional layers for improved robustness.
A novel blood-cell classification framework named MGCNN, which combines a modulated Gabor wavelet with CNN kernels, was proposed by Huang et al. [39]. In this model, multi-scale, multi-orientation Gabor operators are combined via dot product with the initial CNN kernels of each convolutional layer. Experimental results showed that MGCNN outperformed traditional SVM and single CNN networks. Likewise, Khan et al. [40] presented a new model called MLANet-FS, which combined an AlexNet network with a feature selection strategy for WBC-type identification. Razzak et al. [41] proposed WBC segmentation and classification using a deep contour-aware CNN and an extreme learning machine (ELM).
Another interesting strategy is to apply hybrid methods, such as the proposal of Çınar and Tuncer [23]. They presented an approach combining AlexNet and GoogleNet to extract features from WBC images; these features are then concatenated and classified using SVM. In the same way, Özyurt [42] proposed a fused CNN model for WBC detection where the pre-trained architectures AlexNet, VGG-16, GoogleNet, and ResNet were used as feature extractors; the features obtained were combined and later classified using ELM. Patil et al. [43] presented a deep hybrid model combining a CNN and recurrent neural networks (RNN) with canonical correlation analysis to extract overlapping and multiple nuclei patches from blood cell images. As a novel proposal, Baydilli and Atila [44], to overcome the problem of small datasets, developed leukocyte classification via capsule networks, an enhanced deep learning approach. A capsule network consists of an encoder (responsible for extracting features and classifying the image) and a decoder (used to reconstruct the image).
Research presenting a multi-level scheme for WBC classification is scarce. Although Baghel et al. [45] do not implement a multi-level architecture, they propose a CNN classification model whose performance is evaluated in two phases: an initial binary discrimination between mononuclear and polymorphonuclear cells, and a second phase corresponding to the classification of subtypes. Tran et al. [46] presented a DL semantic segmentation between WBC and RBC as an initial step for classifying leukocytes. However, none of these methods follows a scheme similar to the one proposed in our research, so we consider that our algorithm is novel and can contribute efficiently to leukocyte classification.
Table 1 summarizes the main methods using deep learning models found in the literature. The main advantage of deep learning models is that they are highly efficient, since they extract the most important information available in the data to generate the prediction. The models presented above have shown excellent performance, higher than 90% [23,35,40], which makes them helpful tools for WBC classification. However, their main disadvantage is the high computational cost implied by complex architectures and large volumes of data. Simpler models that maintain good performance with fewer trainable parameters could be generated; however, such methods often require image processing techniques for feature extraction, making them slower [45,47]. Therefore, to implement deep learning models in a CAD system, methods should offer a reasonable trade-off between performance and complexity without affecting processing speed.

3. Materials and Methods

In this research, a multi-level convolutional neural network (ML-CNN) was developed to detect and classify individual WBCs obtained from blood smear images.

3.1. White Blood Cell Images Datasets

In this work, five different datasets were used. The description of these sources follows.
In these datasets, there was an insufficient number of basophils; therefore, the algorithm was developed using images containing monocytes, lymphocytes, segmented neutrophils, and eosinophils.
To develop the proposal, a human specialist used the LabelImg graphical annotation tool (https://github.com/tzutalin/labelImg, accessed 10 September 2019) to label the blood smear images. For the first level, a subset of images obtained from the KBC data set was selected, the cells were detected with the identification of the region of interest (ROI) using bounding boxes, and each cell was labeled as polymorphonuclear or mononuclear. A total of 365 labeled images with cell bounding boxes were used to train the Faster R-CNN of the first level. For the second level, the specialist labeled the individual cell images as lymphocytes, monocytes, segmented neutrophils, or eosinophils. On the one hand, a CNN model was trained with the mononuclear cells to classify between lymphocytes and monocytes, using a total of 2282 and 2134 images, respectively. On the other hand, a CNN model was trained with the polymorphonuclear cells to classify between segmented neutrophils and eosinophils, using a total of 2416 and 2477 images, respectively.
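As an illustration of how such annotations can be consumed, the sketch below reads one Pascal VOC XML file of the kind LabelImg exports and returns the class labels and bounding boxes. The file name and class names are hypothetical, and the snippet is a generic sketch rather than the published pipeline.

```python
# Minimal sketch: reading a LabelImg (Pascal VOC XML) annotation into labels and boxes.
# The file name and the two class names are illustrative assumptions, not the authors' actual files.
import xml.etree.ElementTree as ET

def load_voc_annotation(xml_path):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) tuples from a Pascal VOC file."""
    root = ET.parse(xml_path).getroot()
    cells = []
    for obj in root.iter("object"):
        label = obj.find("name").text              # e.g., "mononuclear" or "polymorphonuclear"
        box = obj.find("bndbox")
        coords = tuple(int(float(box.find(tag).text))
                       for tag in ("xmin", "ymin", "xmax", "ymax"))
        cells.append((label, coords))
    return cells

# Hypothetical usage on one annotated blood smear image:
# annotations = load_voc_annotation("smear_001.xml")
```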

3.2. A Multi-Level Convolutional Neural Network Approach

For WBC classification, a multi-level, hybrid scheme is proposed in which the first level corresponds to the detection and separation of leukocytes into mononuclear and polymorphonuclear cells. Subsequently, the subtypes (monocytes, lymphocytes, eosinophils, and neutrophils) are classified in the second level, as shown in Figure 1.
A Faster R-CNN allows individual white blood cells to be detected and extracted at the first level. This model is an object detection system that improves on Fast R-CNN by adding a region proposal network (RPN) to the CNN model [59]. Thus, the network is structured in two distinct modules: an RPN that proposes regions and a Fast R-CNN detector that uses the proposed regions (Figure 2).
In the Faster R-CNN, the RPN generates from an image a set of rectangular object proposals, each of which has an objectness score. These proposals are produced by sliding a small network over the feature map emitted by the last shared convolutional layer, considering an n × n spatial window mapped to a lower-dimensional feature. Multiple region proposals are predicted simultaneously at each sliding-window location, where the maximum number of proposals per location is denoted as k. Each of the k proposals is parameterized relative to k reference boxes, called anchors, each associated with a specific scale and aspect ratio. Each anchor is assigned a binary class label (object or not); the positive or negative labels depend on the largest overlap, measured as intersection over union with the ground-truth boxes, when minimizing the objective function. Finally, the Fast R-CNN network takes as input the evaluated image and the set of object proposals already obtained [60]. First, the whole image is processed with several convolutional and max pooling layers to produce a feature map. Then, for each object proposal, a region of interest (RoI) pooling layer extracts a fixed-length feature vector from the feature map. Each feature vector is fed into a sequence of fully connected layers that ultimately branch into two sibling output layers: one that produces softmax probability estimates over the K object classes plus a catch-all “background” class, and another that outputs four real-valued numbers for each of the K object classes. Each set of four values encodes the refined bounding-box coordinates for one of the K classes.
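To make the anchor-labeling criterion concrete, the following sketch computes the intersection over union between an anchor and the ground-truth boxes and assigns a label from it. The box format and the thresholds are typical values used for illustration, not parameters taken from the paper.

```python
# Sketch of the IoU measure used to label anchors as positive or negative in the RPN.
# Box format assumed: (xmin, ymin, xmax, ymax); thresholds below are typical, not from the paper.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus intersection
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_anchor(anchor, gt_boxes, pos_thr=0.7, neg_thr=0.3):
    """Assign 1 (object), 0 (background), or -1 (ignored) to an anchor by its best IoU."""
    best = max((iou(anchor, gt) for gt in gt_boxes), default=0.0)
    if best >= pos_thr:
        return 1
    if best < neg_thr:
        return 0
    return -1
```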
The softmax function takes an input vector z = [z_1, ..., z_K] and normalizes it into a probability distribution over the K classes:

\sigma(z)_i = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}
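For reference, a numerically stable version of this normalization can be written in a few lines; this is a generic sketch, not code from the authors' implementation.

```python
import numpy as np

def softmax(z):
    """Softmax over the last axis; subtracting the max keeps the exponentials numerically stable."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Example: raw scores for K = 3 classes turn into a probability distribution.
print(softmax([2.0, 1.0, 0.1]))  # -> approx. [0.659, 0.242, 0.099]
```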
In the proposed model, the Faster R-CNN is trained to detect leukocytes by separating these cells into two distinct classes according to the morphology of their nuclei, creating two labels at this stage:
  • Mononuclear (MN), whose nuclei show morphological unity, and includes lymphocytes and monocytes.
  • Polymorphonuclear (PMN), whose nuclei are divided, and includes segmented neutrophils and eosinophils.
After the first level, the dataset was separated into two cell groups according to the segmentation of the nucleus. In the second level of the proposal, two CNNs were developed: one for classifying mononuclear cells into lymphocytes and monocytes, and another for separating polymorphonuclear cells into segmented neutrophils and eosinophils. The CNNs implemented use the MobileNet architecture as a base model with transfer learning, where the weights were pretrained on ImageNet. The original output layer was discarded and replaced with a new fully connected classifier.
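A minimal Keras sketch of this second-level setup is shown below, assuming 128 × 128 RGB crops (consistent with the input size in Table 2) and a two-class head; the freezing strategy and layer names beyond those stated in the text are illustrative assumptions.

```python
# Sketch of one second-level classifier: a MobileNet base pretrained on ImageNet
# (width multiplier alpha = 0.5) with a new fully connected head for two cell subclasses.
# Input size and freezing strategy are assumptions for illustration.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3),
    alpha=0.5,                 # width multiplier, as used in this work
    include_top=False,         # discard the original ImageNet classifier
    weights="imagenet",
)
base.trainable = False         # keep the pretrained convolutional features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., lymphocyte vs. monocyte
])
```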
The MobileNet uses depthwise separable convolutions, which reduce the number of parameters compared to a conventional CNN. A depthwise separable convolution factorizes the convolution into a depthwise (dw) and a pointwise (pw) convolution (see Figure 3). On the one hand, the dw convolution applies a single filter to each input channel. On the other hand, the pw convolution applies a 1 × 1 convolution to combine the outputs of the dw convolution.
The filtered matrix G(x, y) is obtained by applying the classical convolution between the kernel \omega and the input matrix F(x, y) with the following equation:

G(x, y) = \omega * F(x, y) = \sum_{\delta_x=-k_i}^{k_i} \sum_{\delta_y=-k_j}^{k_j} \omega(\delta_x, \delta_y) \cdot F(x + \delta_x, y + \delta_y) + \omega_{bias}

where \omega_{bias} is the bias introduced to the convolution product.
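The factorization described above maps directly onto two standard Keras layers. The block below is a generic illustration of one depthwise separable step with batch normalization and ReLU, as in the MobileNet layers of Table 2; the channel count and stride arguments are placeholders, not a reproduction of the exact network.

```python
# One depthwise separable convolution step: a per-channel (dw) 3x3 filter followed by
# a 1x1 pointwise (pw) convolution that mixes channels, each with batch norm and ReLU.
# Channel count and stride are illustrative placeholders.
from tensorflow.keras import layers

def depthwise_separable_block(x, out_channels, stride=1):
    x = layers.DepthwiseConv2D(kernel_size=3, strides=stride, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(out_channels, kernel_size=1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    return x
```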
Table 2 schematically shows the architecture of the MobileNet. In this article, we set the width multiplier to \alpha = 0.5 and the resolution multiplier to \rho = 1; the width multiplier thus reduces the number of input and output channels by half. The first layer uses the classical convolution. The following layers are depthwise separable convolutions, each followed by batch normalization and a ReLU activation function. The ReLU activation function has the following equation:
\mathrm{ReLU}(x) = \max\{0, x\}
The stride is a parameter that controls how the filter moves over the image and induces downsampling of the feature maps. For instance, a stride of 2 (s2) reduces the feature map to half its original size. A final average pooling layer reduces the spatial resolution to 1 before the fully connected layer.
The model was trained using the backpropagation algorithm with the Adam optimizer and the binary cross-entropy loss function:
L_{\text{cross-entropy}}(y, \hat{y}) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]

where y_i and \hat{y}_i are the i-th target and prediction, respectively, and N is the total number of samples.
The hyper-parameters of the learning algorithm were set to learning_rate = 0.001, beta_1 = 0.9, beta_2 = 0.999, and epsilon = 10^{-7}.
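Putting the stated optimizer settings together, the training step might look like the hedged sketch below, reusing the model object sketched earlier in this section. Only the optimizer hyperparameters and the loss follow the values given above; the data variables, batch size, and epoch count are placeholders.

```python
# Training configuration as described: Adam with the stated hyperparameters and
# binary cross-entropy. Labels are assumed one-hot over the two subclasses;
# `train_images`, `train_labels`, and the epoch count are placeholders.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7
)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(train_images, train_labels, validation_split=0.1, epochs=30, batch_size=32)
```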
In this work, we have used a web-based virtual experimental environment. We have run the models using the Kaggle and Google Colab environment with GPU, and the models were implemented with Keras and TensorFlow.

3.3. Performance Metrics

The confusion matrix corresponds to a summary of the prediction results obtained with the machine learning model. Given n samples, the T P is the number of true positives; the T N is the number of true negatives; F N is the number of false negatives; and F P is the number of false positives.
We use the classification metrics obtained from the confusion matrix to evaluate the performance. These metrics are accuracy, precision, recall, and F1-score, and they are described below.
  • Accuracy: the accuracy is the proportion of all samples that are correctly classified, and the equation is given by

    Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \cdot 100\%
  • Recall: the recall (or sensitivity) is the percentage of positive cases that are correctly labeled by the model. The recall equation is given by

    Recall = \frac{TP}{TP + FN} \cdot 100\%
  • Precision: the precision is the percentage of predicted positives that are correct. This metric is defined with the following equation:

    Precision = \frac{TP}{TP + FP} \cdot 100\%
  • F_Score: the F_Score corresponds to the harmonic mean between precision and recall and gives a trade-off measure between the two:

    F\_Score = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}
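The sketch below computes these four metrics from raw confusion-matrix counts; it is a generic illustration of the definitions above rather than the authors' evaluation code, and the example counts are hypothetical.

```python
# Computing accuracy, recall, precision, and F1-score from confusion-matrix counts.
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    recall = tp / (tp + fn) * 100
    precision = tp / (tp + fp) * 100
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f_score

# Hypothetical counts for one class:
print(classification_metrics(tp=480, tn=470, fp=15, fn=10))
```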

4. Results and Discussion

As mentioned in the previous sections, a two-level hybrid model was run to classify white blood cells in blood smears. To evaluate the performance of the proposed model, we used Monte Carlo cross-validation with 10 repetitions, with a split of 70% for training and 30% for testing. The averages and standard deviations of the metrics computed on the test set are shown in Table 3. A good performance was obtained, with metrics ranging from roughly 96% up to nearly 100%. All the performance metrics averaged around 98.4%, highlighting the discriminatory power of the model developed for classifying leukocytes.
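A hedged sketch of this evaluation protocol (10 random 70/30 splits, averaging the test metrics) is given below using scikit-learn's ShuffleSplit; the data arrays, model builder, and epoch count are placeholders standing in for the actual experiment code.

```python
# Monte Carlo cross-validation: 10 random 70/30 train/test splits, averaging test accuracy.
# `images`, `labels`, and `build_model()` are placeholders for the real data and pipeline.
import numpy as np
from sklearn.model_selection import ShuffleSplit

splitter = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
scores = []
for train_idx, test_idx in splitter.split(images):
    model = build_model()                      # e.g., the MobileNet classifier sketched above
    model.fit(images[train_idx], labels[train_idx], epochs=30, verbose=0)
    _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
    scores.append(acc)

print(f"accuracy: {np.mean(scores):.4f} +/- {np.std(scores):.4f}")
```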
Table 4 shows a comparative analysis of the results against those obtained in the state of the art. Our proposed model achieved higher performance than those reported by Abou et al. [33], Baydilli [44], Banik et al. [47], Huang et al. [39], Jiang et al. [37], Kutlu et al. [49], Liang et al. [50], Özyurt [42], Patil et al. [43], Togacar et al. [34], Wang et al. [35], Yao et al. [38], and Yu et al. [51], who reported accuracies between 83% and 98%. However, it should be noted that the average performance of our proposal was lower than those reported by Baghel et al. [45] and Basnet et al. [36], who included image processing for feature extraction to enhance the prediction performance. Likewise, the works of Çınar et al. [23], Hegde et al. [48], and Khan et al. [40] have reported accuracy values higher than 99%. Nevertheless, the latter models have more complex structures and a larger number of trainable parameters, which represents a disadvantage in terms of computational cost.
It is essential to highlight some advantages in terms of the functionality of our proposal. The first is that the ML-CNN involves simpler models while obtaining comparable performance. This differentiates it, for example, from Wang’s proposal [35], which requires microscopy hyperspectral images and a 3D residual block architecture in its deep hyperspectral model. Likewise, Khan et al. [40] present a more complex model involving convolutional features followed by a selection strategy to identify cellular subtypes. Alternatively, Yao et al. [38] propose a model based on two modules involving transfer learning. All these models are more complex than our proposal. Therefore, the multi-level structure presented is simpler to implement at scale without affecting classification performance.
Another noteworthy aspect of the functionality of the proposed model is that the multi-level strategy includes an initial cell detection phase, in which the regions of interest in the images are identified to extract the white blood cells that are subsequently classified. With the exception of [41,49], none of the reviewed works focuses on this detection scheme. It should also be noted that the proposed multi-level scheme not only allows efficient classification of the cellular elements but also reduces processing times by running the second-stage CNN networks in parallel. This hybrid scheme is one of the differentiating aspects of the proposal compared to the methods discussed in the state of the art, which are primarily single-level, and makes our proposal functionally efficient for use in automated equipment as a CAD system.
Figure 4 and Figure 5 show examples of the operation of the ML-CNN for mononuclear and polymorphonuclear cells, respectively, presenting in both cases correctly and erroneously classified images. The excellent classification performance obtained for the mononuclear cells may be due to the morphological characteristics being well differentiated between monocytes and lymphocytes, where the former has an irregularly shaped nucleus and the latter a wholly rounded one [1]. If a partial comparison is made, it can be observed that the hybrid model developed obtained a better mononuclear classification accuracy than the proposals of Wang et al. [35], Baydilli and Atila [44], and Huang et al. [39]. Although it is ideal to have a model that efficiently achieves a global classification of leukocytes, a model that classifies mononuclear cells very well could help screen viral infections, since the count of these subclasses acquires greater relevance in such infections [2].
In the case of polymorphonuclear cells (Figure 5), a good classification of the cellular subtypes was also observed. However, in this group, the identification is more complex since the cellular characteristics are not as well differentiated as in the case of mononuclear cells. In this case, the differentiating aspect is not the nucleus but the cell cytoplasm. Therefore, the staining of the blood smear and the acquisition of the image for its correct identification play a fundamental role. Another aspect that could contribute to misclassification is the shape of the cell nucleus. Although mononuclear cells have a rounded core, in the case of polymorphonuclear cells, a slight indentation generates a lobule shape that is not always constant.
Another aspect to highlight is the image labeling process for the model developed. A medical technologist with extensive professional experience performed the manual labeling, thus minimizing the risk of using erroneous datasets. Likewise, we worked with a subset of random images from four databases that were subsequently labeled for the validation set. Thus, we have a new database whose labels are verified by a professional and that in the future will be available for use in other investigations.
A limitation of the developed model is that, on the one hand, basophils were not considered among the polymorphonuclear cells due to the small number of cases available in the databases. This issue is a challenge to be faced in future work, since basophils are part of a routine blood smear but appear at a very low frequency, which prevents gathering enough data to train a machine learning model. We initially considered using a generative network to augment the basophil data; however, the available cases were so scarce that this was ruled out in order not to bias the sample. On the other hand, it should be remembered that the proposed model was developed to classify mature leukocytes in blood smears, so it does not include the identification of immature cells that could indicate a pathology such as leukemia. The model would have to be extended to identify these cell subtypes, and we intend to include more images of this cellular subclass in the classification model in future research.
Regarding the images present in the blood smear, the type of examination on which this work focuses, there is the additional difficulty that each particulate component of the blood has its own shape, characteristics, internal arrangement, and even color, all of which are relevant for classification. Among these components, leukocytes (white blood cells), due to their structural complexity, pose a challenge for developing algorithms that accurately detect and classify each of the cell types that make up this group (lymphocytes, monocytes, eosinophils, segmented neutrophils, and basophils).

5. Conclusions

This paper presents a hybrid multi-level approach for automatically detecting and classifying white blood cells into mononuclear (lymphocytes and monocytes) and polymorphonuclear (segmented neutrophils and eosinophils) types from blood smear images. At the first level, a Faster R-CNN network is applied to identify the region of interest of white blood cells, together with the separation of mononuclear cells from polymorphonuclear cells. Once separated, two parallel convolutional neural networks with the MobileNet structure are used to recognize the subclasses. The model achieved good performance, with an average accuracy, precision, recall, and F-score of 98.4%, indicating that the proposal represents an excellent tool for clinical and diagnostic laboratories. Moreover, the proposed ML-CNN approach achieves good accuracy while optimizing the cost of computational resources, and it allows each neural algorithm to be created, evaluated, and retrained in isolation, without affecting those that already achieve the expected levels of performance.
For further work, on the one hand, we intend to use data augmentation tools to include basophil images in the polymorphonuclear group in training and extend the model for the classification of immature leukocytes. On the other hand, it is also intended to develop machine learning techniques that include expert knowledge to improve performance [61,62].

Author Contributions

Conceptualization, C.C. and R.T.; methodology, C.C., R.L., R.T. and R.S.; software, C.C., R.L. and R.S.; validation, C.C., R.L. and R.S.; investigation, C.C. and M.Q.; data curation, C.C.; writing—original draft preparation, C.C., M.Q. and R.L.; writing—review and editing, M.Q., R.S. and R.T.; visualization, R.L.; supervision, R.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable, as the data are publicly available and de-identified.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are openly available in public repositories.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Adewoyin, A. Peripheral blood film-a review. Ann. Ib. Postgrad. Med. 2014, 12, 71–79. [Google Scholar] [PubMed]
  2. Bonilla, M.; Menell, J. Chapter 13–Disorders of White Blood Cells. In Lanzkowsky’s Manual of Pediatric Hematology and Oncology; Elsevier: Amsterdam, The Netherlands, 2016; pp. 209–238. [Google Scholar]
  3. Gurcan, M.N.; Boucheron, L.E.; Can, A.; Madabhushi, A.; Rajpoot, N.M.; Yener, B. Histopathological image analysis: A review. IEEE Rev. Biomed. Eng. 2009, 2, 147–171. [Google Scholar] [CrossRef] [Green Version]
  4. Xing, F.; Yang, L. Robust nucleus/cell detection and segmentation in digital pathology and microscopy images: A comprehensive review. IEEE Rev. Biomed. Eng. 2016, 9, 234–263. [Google Scholar] [CrossRef]
  5. Saraswat, M.; Arya, K. Automated microscopic image analysis for leukocytes identification: A survey. Micron 2014, 65, 20–33. [Google Scholar] [CrossRef]
  6. Janowczyk, A.; Madabhushi, A. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. J. Pathol. Inform. 2016, 7, 29. [Google Scholar] [CrossRef]
  7. Su, M.C.; Cheng, C.Y.; Wang, P.C. A neural-network-based approach to white blood cell classification. Sci. World J. 2014, 2014, 796371. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Hegde, R.B.; Prasad, K.; Hebbar, H.; Singh, B.M.K.; Sandhya, I. Automated decision support system for detection of leukemia from peripheral blood smear images. J. Digit. Imaging 2020, 33, 361–374. [Google Scholar] [CrossRef] [PubMed]
  9. Prinyakupt, J.; Pluempitiwiriyawej, C. Segmentation of white blood cells and comparison of cell morphology by linear and naïve Bayes classifiers. Biomed. Eng. Online 2015, 14, 63. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Gautam, A.; Singh, P.; Raman, B.; Bhadauria, H. Automatic classification of leukocytes using morphological features and naïve Bayes classifier. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON), Singapore, 22–25 November 2016; pp. 1023–1027. [Google Scholar]
  11. Acevedo, A.; Alférez, S.; Merino, A.; Puigví, L.; Rodellar, J. Recognition of peripheral blood cell images using convolutional neural networks. Comput. Methods Programs Biomed. 2019, 180, 105020. [Google Scholar] [CrossRef] [PubMed]
  12. Tiwari, P.; Qian, J.; Li, Q.; Wang, B.; Gupta, D.; Khanna, A.; Rodrigues, J.J.; de Albuquerque, V.H.C. Detection of subtype blood cells using deep learning. Cogn. Syst. Res. 2018, 52, 1036–1044. [Google Scholar] [CrossRef]
  13. Hegde, R.B.; Prasad, K.; Hebbar, H.; Singh, B.M.K. Feature extraction using traditional image processing and convolutional neural network methods to classify white blood cells: A study. Australas. Phys. Eng. Sci. Med. 2019, 42, 627–638. [Google Scholar] [CrossRef]
  14. Ullah, A.; Muhammad, K.; Hussain, T.; Baik, S.W. Conflux LSTMs network: A novel approach for multi-view action recognition. Neurocomputing 2021, 435, 321–329. [Google Scholar] [CrossRef]
  15. Mellado, D.; Saavedra, C.; Chabert, S.; Torres, R.; Salas, R. Self-improving generative artificial neural network for pseudorehearsal incremental class learning. Algorithms 2019, 12, 206. [Google Scholar] [CrossRef] [Green Version]
  16. Castro, J.S.; Chabert, S.; Saavedra, C.; Salas, R.F. Convolutional neural networks for detection intracranial hemorrhage in CT images. CRoNe 2019, 2564, 37–43. [Google Scholar]
  17. Chabert, S.; Mardones, T.; Riveros, R.; Godoy, M.; Veloz, A.; Salas, R.; Cox, P. Applying machine learning and image feature extraction techniques to the problem of cerebral aneurysm rupture. Res. Ideas Outcomes 2017, 3, e11731. [Google Scholar] [CrossRef] [Green Version]
  18. Gao, J.; Yang, J.; Zhang, J.; Li, M. Natural scene recognition based on convolutional neural networks and deep Boltzmannn machines. In Proceedings of the 2015 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 2–5 August 2015; pp. 2369–2374. [Google Scholar]
  19. Mellado, D.; Saavedra, C.; Chabert, S.; Salas, R. Pseudorehearsal approach for incremental learning of deep convolutional neural networks. In Proceedings of the Computational Neuroscience: First Latin American Workshop, LAWCN 2017, Porto Alegre, Brazil, 22–24 November 2017; Springer: Cham, Switzerland, 2017; pp. 118–126. [Google Scholar]
  20. Yildirim, M.; Cinar, A.C. Classification of White Blood Cells by Deep Learning Methods for Diagnosing Disease. Rev. d’Intell. Artif. 2019, 33, 335–340. [Google Scholar] [CrossRef]
  21. Toğaçar, M.; Ergen, B.; Cömert, Z. Classification of white blood cells using deep features obtained from Convolutional Neural Network models based on the combination of feature selection methods. Appl. Soft Comput. 2020, 97, 106810. [Google Scholar] [CrossRef]
  22. Honnalgere, A.; Nayak, G. Classification of normal versus malignant cells in B-ALL white blood cancer microscopic images. In ISBI 2019 C-NMC Challenge: Classification in Cancer Cell Imaging; Springer: Singapore, 2019; pp. 1–12. [Google Scholar]
  23. Çınar, A.; Tuncer, S.A. Classification of lymphocytes, monocytes, eosinophils, and neutrophils on white blood cells using hybrid Alexnet-GoogleNet-SVM. SN Appl. Sci. 2021, 3, 503. [Google Scholar] [CrossRef]
  24. H Mohamed, E.; H El-Behaidy, W.; Khoriba, G.; Li, J. Improved White Blood Cells Classification based on Pre-trained Deep Learning Models. J. Commun. Softw. Syst. 2020, 16, 37–45. [Google Scholar] [CrossRef] [Green Version]
  25. Lu, Y.; Qin, X.; Fan, H.; Lai, T.; Li, Z. WBC-Net: A white blood cell segmentation network based on UNet++ and ResNet. Appl. Soft Comput. 2021, 101, 107006. [Google Scholar] [CrossRef]
  26. Khouani, A.; Daho, M.E.H.; Mahmoudi, S.A.; Chikh, M.A.; Benzineb, B. Automated recognition of white blood cells using deep learning. Biomed. Eng. Lett. 2020, 10, 359–367. [Google Scholar] [CrossRef] [PubMed]
  27. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  28. Zhong, W.; Gu, F. A multi-level deep learning system for malware detection. Expert Syst. Appl. 2019, 133, 151–162. [Google Scholar] [CrossRef]
  29. Kuang, Z.; Yu, J.; Li, Z.; Zhang, B.; Fan, J. Integrating multi-level deep learning and concept ontology for large-scale visual recognition. Pattern Recognit. 2018, 78, 198–214. [Google Scholar] [CrossRef]
  30. Zhang, Y.; Wu, H.; Liu, H.; Tong, L.; Wang, M.D. Improve Model Generalization and Robustness to Dataset Bias with Bias-regularized Learning and Domain-guided Augmentation. arXiv 2019, arXiv:1910.06745. [Google Scholar]
  31. Khan, S.; Sajjad, M.; Hussain, T.; Ullah, A.; Imran, A.S. A Review on Traditional Machine Learning and Deep Learning Models for WBCs Classification in Blood Smear Images. IEEE Access 2020, 9, 10657–10673. [Google Scholar] [CrossRef]
  32. Deshpande, N.M.; Gite, S.; Aluvalu, R. A review of microscopic analysis of blood cells for disease detection with AI perspective. PeerJ Comput. Sci. 2021, 7, e460. [Google Scholar] [CrossRef]
  33. Abou El-Seoud, S.; Siala, M.; McKee, G. Detection and Classification of White Blood Cells Through Deep Learning Techniques. Int. J. Online Biomed. Eng. (iJOE). 2020, 16, 15. [Google Scholar] [CrossRef]
  34. Togacar, M.; Ergen, B.; Sertkaya, M.E. Subclass separation of white blood cell images using convolutional neural network models. Elektron. Elektrotech. 2019, 25, 63–68. [Google Scholar] [CrossRef] [Green Version]
  35. Wang, Q.; Wang, J.; Zhou, M.; Li, Q.; Wen, Y.; Chu, J. A 3D attention networks for classification of white blood cells from microscopy hyperspectral images. Opt. Laser Technol. 2021, 139, 106931. [Google Scholar] [CrossRef]
  36. Basnet, J.; Alsadoon, A.; Prasad, P.; Al Aloussi, S.; Alsadoon, O.H. A novel solution of using deep learning for white blood cells classification: Enhanced loss function with regularization and weighted loss (ELFRWL). Neural Process. Lett. 2020, 52, 1517–1553. [Google Scholar] [CrossRef]
  37. Jiang, M.; Cheng, L.; Qin, F.; Du, L.; Zhang, M. White blood cells classification with deep convolutional neural networks. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1857006. [Google Scholar] [CrossRef]
  38. Yao, X.; Sun, K.; Bu, X.; Zhao, C.; Jin, Y. Classification of white blood cells using weighted optimized deformable convolutional neural networks. Artif. Cells Nanomed. Biotechnol. 2021, 49, 147–155. [Google Scholar] [CrossRef] [PubMed]
  39. Huang, Q.; Li, W.; Zhang, B.; Li, Q.; Tao, R.; Lovell, N.H. Blood cell classification based on hyperspectral imaging with modulated Gabor and CNN. IEEE J. Biomed. Health Inform. 2019, 24, 160–170. [Google Scholar] [CrossRef]
  40. Khan, A.; Eker, A.; Chefranov, A.; Demirel, H. White blood cell type identification using multi-layer convolutional features with an extreme-learning machine. Biomed. Signal Process. Control 2021, 69, 102932. [Google Scholar] [CrossRef]
  41. Imran Razzak, M.; Naz, S. Microscopic blood smear segmentation and classification using deep contour aware CNN and extreme machine learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; IEEE Press: Piscataway, NJ, USA, 2017; pp. 49–55. [Google Scholar]
  42. Özyurt, F. A fused CNN model for WBC detection with MRMR feature selection and extreme learning machine. Soft Comput. 2020, 24, 8163–8172. [Google Scholar] [CrossRef]
  43. Patil, A.; Patil, M.; Birajdar, G. White blood cells image classification using deep learning with canonical correlation analysis. IRBM 2021, 42, 378–389. [Google Scholar] [CrossRef]
  44. Baydilli, Y.Y.; Atila, Ü. Classification of white blood cells using capsule networks. Comput. Med. Imaging Graph. 2020, 80, 101699. [Google Scholar] [CrossRef]
  45. Baghel, N.; Verma, U.; Nagwanshi, K.K. WBCs-Net: Type identification of white blood cells using convolutional neural network. Multimed. Tools Appl. 2021, 1–17. [Google Scholar] [CrossRef]
  46. Tran, T.; Kwon, O.H.; Kwon, K.R.; Lee, S.H.; Kang, K.W. Blood cell images segmentation using deep learning semantic segmentation. In Proceedings of the 2018 IEEE International Conference on Electronics and Communication Engineering (ICECE), Xi’an, China, 10–12 December 2018; pp. 13–16. [Google Scholar]
  47. Banik, P.P.; Saha, R.; Kim, K.D. An automatic nucleus segmentation and CNN model based classification method of white blood cell. Expert Syst. Appl. 2020, 149, 113211. [Google Scholar] [CrossRef]
  48. Hegde, R.B.; Prasad, K.; Hebbar, H.; Singh, B.M.K. Comparison of traditional image processing and deep learning approaches for classification of white blood cells in peripheral blood smear images. Biocybern. Biomed. Eng. 2019, 39, 382–392. [Google Scholar] [CrossRef]
  49. Kutlu, H.; Avci, E.; Özyurt, F. White blood cells detection and classification based on regional convolutional neural networks. Med. Hypotheses 2020, 135, 109472. [Google Scholar] [CrossRef]
  50. Liang, G.; Hong, H.; Xie, W.; Zheng, L. Combining convolutional neural network with recursive neural network for blood cell image classification. IEEE Access 2018, 6, 36188–36197. [Google Scholar] [CrossRef]
  51. Yu, W.; Chang, J.; Yang, C.; Zhang, L.; Shen, H.; Xia, Y.; Sha, J. Automatic classification of leukocytes using deep neural network. In Proceedings of the 2017 IEEE 12th International Conference on ASIC (ASICON), Guiyang, China, 25–28 October 2017; pp. 1041–1044. [Google Scholar]
  52. Aslan, A. WBC & RBC Detection Dataset from Peripheral Blood Smears. 2020. Available online: https://github.com/draaslan/blood-cell-detection-dataset (accessed on 10 June 2020).
  53. Alam, M.M.; Islam, M.T. Machine learning approach of automatic identification and counting of blood cells. Healthc. Technol. Lett. 2019, 6, 103–108. [Google Scholar] [CrossRef]
  54. Alam, M.; Islam, M. Complete Blood Count (CBC) Dataset. 2019. Available online: https://github.com/MahmudulAlam/Complete-Blood-Cell-Count-Dataset (accessed on 10 June 2020).
  55. Zheng, X.; Wang, Y.; Wang, G.; Liu, J. Fast and Robust Segmentation of White Blood Cell Images by Self-supervised Learning. Micron 2018, 107, 55–71. [Google Scholar] [CrossRef]
  56. Zheng, X. Data for: Fast and Robust Segmentation of Cell Images by Self-Supervised Learning. Mendeley Data, V1. 2018. Available online: https://data.mendeley.com/datasets/w7cvnmn4c5/1 (accessed on 10 June 2020).
  57. Mooney, P. Blood Cell Images. 2018. Available online: https://www.kaggle.com/paultimothymooney/blood-cells (accessed on 10 June 2020).
  58. Rezatofighi, S.H.; Soltanian-Zadeh, H. Automatic recognition of five types of white blood cells in peripheral blood. Comput. Med. Imaging Graph. 2011, 35, 333–343. [Google Scholar] [CrossRef]
  59. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Girshick, R. Fast r-cnn. arXiv 2015, arXiv:1504.08083. [Google Scholar]
  61. Chabert, S.; Castro, J.S.; Muñoz, L.; Cox, P.; Riveros, R.; Vielma, J.; Huerta, G.; Querales, M.; Saavedra, C.; Veloz, A.; et al. Image Quality Assessment to Emulate Experts’ Perception in Lumbar MRI Using Machine Learning. Appl. Sci. 2021, 11, 6616. [Google Scholar] [CrossRef]
  62. Cantor, E.; Salas, R.; Rosas, H.; Guauque-Olarte, S. Biological knowledge-slanted random forest approach for the classification of calcified aortic valve stenosis. BioData Min. 2021, 14, 35. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Scheme of identification and classification of white blood cells by the proposed method.
Figure 2. Representation of Faster R-CNN segmentation.
Figure 3. Depthwise separable convolution of the MobileNet, which factorizes the convolution into depthwise and pointwise convolutions.
Figure 4. Mononuclear cells classified by the proposed multi-level convolutional neural network. (upper-left) Lymphocytes correctly classified; (upper-right) monocytes correctly classified; (lower-left) lymphocytes incorrectly classified as monocytes; (lower-right) monocytes incorrectly classified as lymphocytes.
Figure 5. Polymorphonuclear cells classified by the proposed multi-level convolutional neural network. (upper-left) Eosinophils correctly classified; (upper-right) neutrophils correctly classified; (lower-left) eosinophils incorrectly classified as neutrophils; (lower-right) neutrophils incorrectly classified as eosinophils.
Table 1. Summary of the state-of-the-art models for white blood cells classification.

| Authors | Model Description |
| Abou et al. [33] | CNN model with ad hoc structure. |
| Baghel et al. [45] | CNN model. |
| Banik et al. [47] | CNN with fusing features in the first and last convolutional layer. |
| Basnet et al. [36] | DCNN model with image pre-processing and a modified loss function. |
| Baydilli et al. [44] | WBC classification using a small dataset via capsule networks. |
| Çınar et al. [23] | Hybrid AlexNet, GoogleNet networks, and support vector machine. |
| Hegde et al. [48] | AlexNet and CNN model with ad hoc structure. |
| Huang et al. [39] | MGCNN with hyperspectral imaging and modulated Gabor wavelets. |
| Jiang et al. [37] | Residual convolution architecture. |
| Khan et al. [40] | AlexNet model with feature selection strategy and extreme learning machine (ELM). |
| Kutlu et al. [49] | Regional CNN with a Resnet50. |
| Liang et al. [50] | Combining Xception-LSTM. |
| Özyurt [42] | Ensemble of CNN models (AlexNet, VGG16, GoogleNet, ResNet) for feature extraction combined with the MRMR feature selection algorithm and ELM classifier. |
| Patil et al. [43] | Combining canonical correlation analysis CCANet and convolutional neural networks (Inception V3, VGG16, ResNet50, Xception) with recursive neural network (LSTM). |
| Razzak [41] | CNN combined with extreme learning machine (ELM). |
| Togacar et al. [34] | AlexNet with QDA. |
| Wang et al. [35] | Three-dimensional attention networks for hyperspectral images. |
| Yao et al. [38] | Two-module weighted optimized deformable convolutional neural networks. |
| Yu et al. [51] | Ensemble of CNN (Inception V3, Xception, VGG19, VGG16, ResNet50). |
| ML-CNN (Our proposal) | Multi-level convolutional neural network approach with multi-source datasets. Combines Faster R-CNN for cell detection with a MobileNet for type classification. |
Table 2. Architecture of the MobileNet with transfer learning.

| Layer | Layer Type | Stride | Kernel Size | Input Size | N° Parameters |
MobileNet Base Model:
| 1 | Conv. 2D | s2 | 3 × 3 × 3 × 16 | 128 × 128 × 3 | 496 |
| 2 | Conv. dw | s1 | 3 × 3 × 16 | 64 × 64 × 16 | 208 |
| 3 | Conv. pw | s1 | 1 × 1 × 16 × 32 | 64 × 64 × 16 | 640 |
| 4 | Conv. dw | s2 | 3 × 3 × 32 | 64 × 64 × 32 | 416 |
| 5 | Conv. pw | s1 | 1 × 1 × 32 × 64 | 32 × 32 × 32 | 2304 |
| 6 | Conv. dw | s1 | 3 × 3 × 64 | 32 × 32 × 64 | 832 |
| 7 | Conv. pw | s1 | 1 × 1 × 64 × 64 | 32 × 32 × 64 | 4352 |
| 8 | Conv. dw | s2 | 3 × 3 × 64 | 32 × 32 × 64 | 832 |
| 9 | Conv. pw | s1 | 1 × 1 × 64 × 128 | 16 × 16 × 64 | 8704 |
| 10 | Conv. dw | s1 | 3 × 3 × 128 | 16 × 16 × 128 | 1664 |
| 11 | Conv. pw | s1 | 1 × 1 × 128 × 128 | 16 × 16 × 128 | 16,896 |
| 12 | Conv. dw | s2 | 3 × 3 × 128 | 16 × 16 × 128 | 1664 |
| 13 | Conv. pw | s1 | 1 × 1 × 128 × 256 | 8 × 8 × 128 | 33,792 |
| 14–23 | 5 × Conv. dw | s1 | 3 × 3 × 256 | 8 × 8 × 256 | 5 × 3328 |
|       | 5 × Conv. pw | s1 | 1 × 1 × 256 × 256 | 8 × 8 × 256 | 5 × 66,560 |
| 24 | Conv. dw | s2 | 3 × 3 × 256 | 8 × 8 × 256 | 3328 |
| 25 | Conv. pw | s1 | 1 × 1 × 256 × 512 | 4 × 4 × 256 | 133,120 |
| 26 | Conv. dw | s1 | 3 × 3 × 512 | 4 × 4 × 512 | 6656 |
| 27 | Conv. pw | s1 | 1 × 1 × 512 × 512 | 4 × 4 × 512 | 264,192 |
Dense:
| – | Global Avg. Pool | s1 | Pool 4 × 4 | 4 × 4 × 512 | – |
| 28 | FC | – | – | 512 | 262,656 |
| – | Softmax (Output) | – | – | 2 | 1026 |
Total Parameters: 1,093,218
Trainable Parameters: 263,682
Table 3. Performance obtained in the classification model for each of the WBC cell types considered in the validation set.

| Cells | Classification Model | Accuracy | Recall | Precision | F_Score |
| Mononuclear | Lymphocytes | 99.92% ± 0.08 | 99.94% ± 0.08 | 99.91% ± 0.08 | 99.93% ± 0.07 |
| Mononuclear | Monocytes | 99.92% ± 0.08 | 99.91% ± 0.09 | 99.94% ± 0.09 | 99.92% ± 0.08 |
| Polymorphonuclear | Eosinophils | 96.80% ± 0.30 | 96.86% ± 1.17 | 96.85% ± 0.61 | 96.85% ± 0.32 |
| Polymorphonuclear | Segmented Neutrophils | 96.80% ± 0.30 | 96.75% ± 0.70 | 96.78% ± 1.14 | 96.76% ± 0.27 |
| Average | | 98.36% | 98.37% | 98.37% | 98.36% |
Table 4. Comparison of WBC classification results with models in the state of the art. (NI denotes not informed.)

| Authors | Accuracy (%) | Recall (%) | F-Score (%) | Layers | Parameters |
| Abou et al. [33] | 96.8 | NI | NI | 5 | NI |
| Baghel et al. [45] | 98.9 | 97.7 | 97.6 | 7 | 519,860 |
| Baydilli et al. [44] | 96.9 | 92.5 | 92.3 | 6 | 8,238,608 |
| Banik et al. [47] | 97.9 | 98.6 | 97.0 | 10 | 10^5 |
| Basnet et al. [36] | 98.9 | 97.8 | 97.7 | 4 | NI |
| Çınar et al. [23] | 99.7 | 99 | 99 | 8, 22 | 60·10^6 (AlexNet), 7·10^6 (GoogleNet) |
| Hegde et al. [48] | 98.7 | 99 | 99 | 8 | 60·10^6 (AlexNet) |
| Huang et al. [39] | 97.7 | NI | NI | 4 | NI |
| Jiang et al. [37] | 83.0 | NI | NI | 33 | NI |
| Khan et al. [40] | 99.1 | 99 | 99 | 8, 3 | 60·10^6 (AlexNet), 40·10^6 (ELM) |
| Kutlu et al. [49] | 97 | 99 | 98 | 50 | 26·10^6 (Resnet50) |
| Liang et al. [50] | 95.4 | 96.9 | 94 | 71 | 23·10^6 (Xception) |
| Özyurt [42] | 96.03 | NI | NI | 8, 22, 16, 50 | 60·10^6 (AlexNet), 7·10^6 (GoogleNet), 138·10^6 (VGG16), 26·10^6 (Resnet) |
| Patil et al. [43] | 95.9 | 95.8 | 95.8 | 71 | 23·10^6 (Xception) |
| Razzak et al. [41] | 98.8 | 95.9 | 96.4 | 3 | NI |
| Togacar et al. [34] | 97.8 | 95.7 | 95.6 | 8 | 60·10^6 (AlexNet) |
| Wang et al. [35] | 97.7 | NI | NI | 18 | 30·10^6 |
| Yao et al. [38] | 95.7 | 95.7 | 95.7 | 55 | 60·10^6 |
| Yu et al. [51] | 90.5 | 92.4 | 86.6 | 48, 71, 19, 50 | 23·10^6 (InceptionV3), 23·10^6 (Xception), 138·10^6 (VGG19), 26·10^6 (Resnet50) |
| ML-CNN (Our proposal) | 98.4 | 98.4 | 98.4 | 28 | 1·10^6 (MobileNet) |