Article

Presentation of Novel Architecture for Diagnosis and Identifying Breast Cancer Location Based on Ultrasound Images Using Machine Learning

by Yaghoub Pourasad 1,*, Esmaeil Zarouri 2, Mohammad Salemizadeh Parizi 3 and Amin Salih Mohammed 4,5
1 Department of Electrical Engineering, Urmia University of Technology (UUT), Urmia 57166-93188, Iran
2 School of Electrical Engineering, Electronic Engineering, Iran University of Science and Technology—IUST, Tehran 16846-13114, Iran
3 Department of Biomedical Engineering, University of Houston, Houston, TX 77204, USA
4 Department of Computer Engineering, College of Engineering and Computer Science, Lebanese French University, Erbil 44001, Iraq
5 Department of Software and Informatics Engineering, Salahaddin University, Erbil 44002, Iraq
* Author to whom correspondence should be addressed.
Submission received: 3 August 2021 / Revised: 2 October 2021 / Accepted: 3 October 2021 / Published: 11 October 2021
(This article belongs to the Special Issue Machine Learning in Breast Disease Diagnosis)

Abstract:
Breast cancer is one of the main causes of death among women worldwide. Early detection of this disease helps reduce the number of premature deaths. This research aims to design a method for identifying and diagnosing breast tumors based on ultrasound images. For this purpose, six techniques have been applied to detect and segment ultrasound images. Image features are extracted using the fractal method, and k-nearest neighbor, support vector machine, decision tree, and Naïve Bayes classification techniques are used to classify the images. A convolutional neural network (CNN) architecture is then designed to classify breast cancer directly from ultrasound images. The presented model achieves a training-set accuracy of 99.8%, and on the test set the diagnosis reaches 88.5% sensitivity. Based on these findings, the proposed high-potential CNN algorithm can be used to diagnose breast cancer from ultrasound images. A second CNN model is presented to identify the original location of the tumor; the results show that 92% of the images fall in the high-performance region with an AUC above 0.6. The proposed model can identify the tumor’s location and volume using morphological operations as a post-processing algorithm. These findings can also be used to monitor patients and prevent growth of the infected area.

1. Introduction

Ultrasound is a main procedure for breast cancer detection and for statistical analysis of results during a clinical investigation. Ultrasound screening shifts the detected pathology of breast cancer away from large lesions that are easily seen and clinically evident, and toward ever smaller and occasionally harmless tumors [1]. A breakthrough in systems’ capacity to apply machine learning (ML) approaches to a range of therapeutic scanning problems has occurred during the last decade. While straightforward computer-aided diagnosis (CAD) technologies have been used in ultrasound for several years, their value and effectiveness have typically been restricted. New deep learning (DL) approaches, on the other hand, have been shown to identify cancers on standard mammograms as well as, if not better than, professional physicians. Autonomous diagnosis remains a challenge: the possibility of intelligent monitoring systems diagnosing autonomously has not yet been demonstrated in a randomized controlled trial. The current emphasis is on ML systems assisting radiologists rather than functioning as standalone diagnosticians [2]. Medical imaging, part of the broader scope of testing, is the biggest and most promising channel through which DL may be utilized in healthcare [3,4]. To reach a diagnosis promptly, radiographic examinations, regardless of modality, require extensive interpretation by a professional clinician, and there is a rising need for diagnostic automation as the demands on existing radiologists increase [5,6]. Detecting malignancy in breast cancer images has previously been addressed using ML approaches. However, ML is limited in interpreting essential information in its raw state; the constraint arises from the requirement for domain professionals who can hand-craft features to feed a classifier.
On the other hand, DL, a branch of neural networks, learns several layers of description and abstraction autonomously, allowing for a more in-depth analysis of breast cancer images. Artificial neural networks have made significant advances in image processing [7]. The prevalence of false positives is one of the issues connected with ultrasound. In Europe, women between the ages of 50 and 69 who undergo biannual screening face a 20% chance of receiving a false positive. The statistics in the U.S. are even more worrisome, with every tested woman experiencing at least one false positive over her lifetime. False-positive findings affect women’s lives, particularly in terms of daily welfare and medical expenses. However, false positives are not ultrasound’s sole disadvantage [8]. Some researchers have studied nucleus analysis, extracting nucleus characteristics that can categorize cells as benign or malignant [9]. Likewise, grouping-based methods built on histogram equalization and various measurement characteristics have been used for nucleus recognition and classification. Nonetheless, effectiveness and efficiency suffer due to the complexity of traditional ML approaches such as filtering, separation, and edge detection. The recently evolved DL technique can solve standard ML problems; it can tackle image identification and object localization with remarkable dimensionality reduction. CNNs are the most common DL algorithms in the literature, and the CNN architecture is adapted to the 2D input-image structure [10,11]. Training a CNN requires a considerable amount of data, which is in short supply in the healthcare field, particularly for breast cancer. Using transfer learning (TL) from a natural-image database such as ImageNet, and fine-tuning the network, can address this problem.
In this paper, six analyses have been performed to detect and segment ultrasound images. Image features are selected using the fractal method. Then, k-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), and Naïve Bayes (NB) classification techniques are used to classify the images. Next, a convolutional neural network (CNN) architecture is designed to classify the ultrasound images directly. Finally, a second CNN model is presented to identify the location of the breast cancer lesion.

2. Literature Review

Data mining methods were utilized by Ganggayah et al. to create models for discovering and displaying key prognostic markers of the breast cancer survival ratio. There were 23 predictor factors in the database and one outcome variable, which referred to the participants’ survival state. For modeling with random forest, the data were grouped based on the receptor status of women with breast cancer, determined by immunohistochemistry. The discovered key prediction variables influencing breast cancer survival rates, confirmed by survival curves, are helpful and may be converted into medical diagnosis systems [12]. To classify the Wisconsin Breast Cancer (Basic) database, Bayrak et al. employed two of the most prominent machine learning algorithms and evaluated their recognition accuracy; the Support Vector Machine method produced the most remarkable results with minor errors [13]. Zeebaree et al. presented a technique for extracting the region of interest (ROI) for detecting breast cancer abnormalities. The suggested model was developed using a local scanning technique and a classification technique: in the learning phase, a model was trained by estimating statistics from both the ROI and the background, and in the testing step the ROI was detected by scanning the image with a fixed-size window. The suggested solution’s functionality was also compared to current techniques for segmenting specific inputs [14]. Using the WDBC database, Agarap compared six ML methods by assessing their diagnostic quality standards; the hyper-parameters used for all classifiers were manually tuned to construct the neural networks. According to the statistics, all the supervised learning models scored well on the created plan. With a test accuracy of 99.04 percent, the MLP strategy stands out among the models [15]. Ferroni et al. demonstrated the value of combining an ML-based recommender system with stochastic optimization to retrieve diagnoses from routinely gathered personal, clinical, and molecular data of breast cancer survivors. With a hazard ratio of 10.9, the algorithm could also stratify the test set into people diagnosed with low- or high-risk progression. Validation in prospective trials was required, as was careful planning for security issues connected to computerized e-health data. Furthermore, the findings revealed that incorporating ML methods and models into e-health data might aid in reaching therapeutic targets and could change the practice of personalized therapy [16]. Binder et al. demonstrated an easily interpretable machine-learning technique for comprehensively assessing phenotypic, biochemical, and clinical characteristics from breast cancer pathology. Initially, the method enabled the accurate detection of tumors and tissue lymphocytes in pathological images, with exact heat map representations that explained the classifier’s conclusions. Next, histology was used to identify molecular characteristics such as DNA methylation, gene expression, copy number changes, somatic mutations, and proteins. Eventually, using explainable AI, researchers determined the relationship between morphological and molecular cancer characteristics. Through a combined clinical score of histological, clinical, and molecular characteristics, the resulting statistical multiplex-histology model can help boost fundamental biomedical research and precision treatment [17]. There are also metaheuristic algorithms such as Harris hawk’s optimization [18], multi-swarm whale [19], moth–flame optimizer [20,21,22], grey wolf [23,24], fruit fly [25,26], bacterial foraging optimization [27], boosted binary Harris hawk’s optimizer [28], ant colony [29,30], biogeography-based whale optimization [31], and grasshopper optimization [32].
Souri et al. used ML to connect the activity of enzymes to overall survival and to categorize tumors into more or less aggressive prediction types, using breast cancer transcriptomics from numerous research projects. The proposed approach can categorize cancers into better-defined prognostic groupings without relying on knowledge of tumor volume, staging, or subtypes. The process helps improve prediction and enhance clinical decision-making and precision therapies, possibly reducing underdiagnosis of high-risk cancers and overtreatment of low-risk disease [33]. The efficiency of traditional ML- and DL-based techniques was tested by Boumaraf et al., who also helped categorize breast cancer in histological images by providing a visual explanation. In the ML-based approaches, three feature extractors are used to obtain several features, which are then fused to create a feature representation for training classical classifiers. For the DL-based approaches, they apply transfer learning to the VGG-19 network. They display the learned features after presenting the recognition accuracy of traditional ML and DL techniques, to better understand the differences in classification performance. The results revealed that DL outperformed the classical ML methods [34]. To tackle the classification problem, Saxena et al. developed a new ML model.
The suggested model used a pre-trained ResNet50 and a kernelized mixed deep neural network for CAD of breast cancer based on histology. The histological pictures of breast cancer were collected from massive databases. For the categorization of both minority- and majority-class cases, the suggested approach performed relatively well; in perspective, the experimental result improves on state-of-the-art ML models applied in prior research using the identical BreakHis learning ratio [35]. Wang et al. presented a prototype transfer generative adversarial network that combines generative adversarial networks and prototypical networks to categorize a vast group of observations, using a transfer learning classification model on a limited number of labeled input databases from a comparable domain. This strategy decreased the pixel-level distribution gap for breast histopathological images captured from different platforms with distinct styles, without requiring a large number of labeled samples: the generative adversarial network reduced the style difference between the source and the target. The pixel values learned by a prototype network were then embedded into the metric space, allowing discriminative information to be extracted into the neural network. They trained an algorithm to predict large quantities of target data using a specific “distance” in the subspace. The suggested approach for identifying benign and malignant tumors has an accuracy of almost 90%, according to the empirical results on the BreakHis sample. Shashaani et al. presented a new idea for detecting the effects of paclitaxel on normal and cancerous breast cells [36]. Nourbakhsh et al. demonstrated the effect of MDSC in autoimmunity and its therapeutic application [37]. Khayamian et al. investigated the increase in cancer cell permeability and material absorption [38].
This demonstrates the benefit of such techniques in offering a valuable tool for breast cancer multi-classification in healthcare settings while reducing the expense of complex annotation [39]. In addition, biological uses of computer vision are prevalent, for instance, the diagnosis of tuberculosis [40], thyroid nodules [41], Parkinson’s disease [42], and paraquat-poisoned patients [43] (see Table 1).

3. Materials and Methods

3.1. Feature Extraction

Feature extraction aims to reduce the number of resources needed to represent an extensive set of data correctly. One of the most significant issues in complex data analysis is the number of variables studied: a large number of variables requires substantial memory and storage capacity, and can cause a classifier to overfit the training examples and generalize poorly to new situations. Feature extraction is a broad phrase that refers to strategies for assembling a set of variables to tackle such high-dimensionality issues. Image analysis aims to develop a unique approach to represent the essential elements of images in a compact way. In the fractal technique, a gray-level vector is constructed to produce feature vectors; image characteristics are computed statistically from the light intensity of specified locations relative to one another in the image, and the frequency of intensity points (pixels) in each combination affects the statistics [53]. The fractal model is utilized to extract the features in this work. Feature selection has been used to reduce the dimensionality and to find fundamental characteristics that can sufficiently distinguish the different classes when dealing with high-dimensional input data [54].
The fractal technique was used with covariance analysis to create eigenvalues from the image and lower the dimension. The input images for the fractal method must be the same size; each image is treated as a two-dimensional matrix and reshaped into a single column vector. Grayscale images with a specified resolution are required. The images are stacked into an M × N matrix, where N is the number of pixels in each image and M is the number of images. To determine the distribution of the original images, the average image must first be computed; the covariance matrix is then calculated, and its eigenvalues and eigenvectors are produced. In the following, M is the number of training images, F is the mean image, and T_i denotes the i-th image. There are M images at first, each of size N × N. Each image may be represented in an N-dimensional space using Equations (1) and (2) for the averaging operations [55].
A = N \times N \times M  (1)
F = \frac{1}{M} \sum_{t=1}^{M} T_t  (2)
The fractal method treats the deviation of each image from the mean as a key quantity, computed using Equation (3); the covariance matrix follows in Equation (4).
\mathrm{Variance}_i = T_i - F  (3)
\mathrm{Cov} = A A^T  (4)
Here A = [\mathrm{Variance}_1, \mathrm{Variance}_2, \ldots, \mathrm{Variance}_M], so that A has size N^2 \times M and \mathrm{Cov} has size N^2 \times N^2. Because \mathrm{Cov} is extremely large, its eigenvectors are obtained indirectly: the eigenvectors V_i of the much smaller M \times M matrix A^T A are computed first, and the eigenvectors of \mathrm{Cov} follow from Equation (5).
U_i = A V_i  (5)
The total scatter (covariance) matrix and the projection that maximizes it are computed using Equations (6) and (7) [55].
S_T = \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^T  (6)
W_{\mathrm{Fractal}} = \arg\max_W \left| W^T S_T W \right| = [w_1\; w_2\; \cdots\; w_f]  (7)
Here, μ is the average of all data and {w_i | i = 1, 2, …, f} is the set of f eigenvectors of S_T associated with the largest eigenvalues.
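As a concrete illustration of Equations (1)–(5), the sketch below computes the mean image, the deviation matrix A, and the leading eigenvectors of the covariance via the much smaller M × M matrix A^T A. The function name and array shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fractal_eigen_features(images):
    """images: array of shape (M, N, N) -> eigenvalues and eigenvectors of Cov."""
    M = images.shape[0]
    A = images.reshape(M, -1).T.astype(float)   # N^2 x M data matrix
    F = A.mean(axis=1, keepdims=True)           # mean image, Eq. (2)
    A = A - F                                   # deviations from the mean, Eq. (3)
    # Use the small M x M matrix A^T A instead of the huge N^2 x N^2
    # covariance A A^T of Eq. (4); both share the same nonzero eigenvalues.
    small = A.T @ A
    eigvals, V = np.linalg.eigh(small)
    order = np.argsort(eigvals)[::-1]           # sort eigenvalues descending
    U = A @ V[:, order]                         # map back: U_i = A V_i, Eq. (5)
    U /= np.linalg.norm(U, axis=0) + 1e-12      # normalize eigenvector columns
    return eigvals[order], U

# Toy run on 10 random 16 x 16 "images".
vals, vecs = fractal_eigen_features(np.random.rand(10, 16, 16))
```

Because the images are centered, at most M − 1 eigenvalues are nonzero, which is why the indirect M × M computation loses nothing.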

3.2. Convolutional Neural Network (CNN)

The CNN technique is explained in this section. This type of neural network is a learning network inspired by the perceptron, and the deep network consists of an input layer, hidden deep layers, and an output layer. First, the images or data of the problem are labeled and fed into the method; the weights of the hidden and output layers can then take a variety of forms. If the algorithm’s output comprises a few quantitative elements, such as a binary label or a score, the method performs classification or recognition. If the output layer is a matrix the same size as the input image, matching ground-truth information, the method performs segmentation or identification. Convolutional neural networks (CNNs) are composed of convolutional, subsampling, and fully connected layers; the three main layer types are convolutional layers, pooling layers, and fully connected layers [56]. Each layer type has a different task: the convolutional and subsampling layers act as feature extractors [57,58].
In contrast, the fully connected layers classify the current data using the extracted features. A pooling layer limits the size of the feature maps while preserving predictive utility; because pooling computations consider nearby pixels, they are invariant to small translations. The network is trained using both forward and backward passes: the forward pass defines the output for the input image using the current parameters (weights and biases) [59,60].
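To make the three layer types concrete, the following self-contained NumPy sketch runs one forward pass through a convolution with ReLU, a max-pooling layer, and a fully connected softmax layer. The shapes and random weights are illustrative assumptions; this is not the paper's architecture.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution of image x with kernel k (feature extraction)."""
    H, W = x.shape
    h, w = k.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (local translation invariance)."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def forward(img, kernel, weights):
    feat = np.maximum(conv2d(img, kernel), 0)   # convolutional layer + ReLU
    pooled = max_pool(feat)                     # pooling layer
    logits = weights @ pooled.ravel()           # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                          # softmax class probabilities

rng = np.random.default_rng(0)
# 8 x 8 image, 3 x 3 kernel -> 6 x 6 features -> 3 x 3 pooled -> 3 classes.
probs = forward(rng.random((8, 8)), rng.random((3, 3)), rng.random((3, 9)))
```

Real frameworks fuse many such layers and learn the kernel and weights by backpropagation; the sketch only shows the forward pass described above.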

3.3. Performance Analysis Criteria

We examined the performance of a classifier on a separate collection of samples, called a test set. The accuracy rate is the standard evaluation metric in DL: accuracy is the percentage of test samples classified correctly, while the error rate is the ratio of incorrectly categorized test samples to the total number of test samples. Accuracy is therefore inappropriate for an imbalanced dataset, where one class has far more instances than another: a classifier that always predicts the majority class, regardless of the input, can still achieve high accuracy. We therefore use confusion-matrix-based criteria on the extracted features. A confusion matrix summarizes the outcomes of a predictor on the dataset. False positives are negative samples incorrectly predicted as positive, while true positives are positive samples correctly predicted as positive; true and false negatives are defined analogously. From the confusion matrix, several important criteria can be constructed [60]:
\mathrm{Sensitivity} = \frac{TP}{TP + FN}
\mathrm{Precision} = \frac{TP}{TP + FP}
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
Sensitivity is the fraction of actual positive samples predicted as positive, and specificity is the fraction of actual negative samples predicted as negative; precision is the fraction of predicted positives that are actually positive. A successful classifier has both high sensitivity and high specificity, or high precision and high sensitivity. Sensitivity and specificity are preferred when diagnosing diseases, whereas precision and recall are often favored in ML. The classification threshold is the probability above which a sample is categorized as positive, and it strikes a balance between sensitivity and specificity (or, equivalently, precision and recall): a low threshold makes the classifier prone to labeling samples as positive, yielding high sensitivity but also many false positives and thus low specificity, and vice versa for high thresholds. The precision of a classifier therefore differs from its recall as the threshold varies.
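The confusion-matrix criteria above translate directly into code. The counts below are arbitrary example values, not results from this study.

```python
def metrics(tp, fp, tn, fn):
    """Compute sensitivity, precision, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # recall on the positive class
    precision = tp / (tp + fp)                   # reliability of positive calls
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall fraction correct
    return sensitivity, precision, accuracy

# Example counts (hypothetical): 90 true positives, 10 false positives,
# 85 true negatives, 15 false negatives.
sens, prec, acc = metrics(tp=90, fp=10, tn=85, fn=15)
```

Note how a heavily imbalanced dataset can inflate accuracy while sensitivity stays low, which is why both are reported in the results below.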

4. Results and Discussion

4.1. Data Collection

The collected data comprise breast ultrasound images of women between the ages of 25 and 75, gathered in 2018 from 400 patients. The dataset contains 780 images with an average size of 500 by 500 pixels, stored in PNG format [36]. Images are classified into three classes: normal, benign, and malignant. In this research, the images were resized to 256 by 256 for classification and segmentation, to reduce processing complexity.
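The resizing step from 500 × 500 to 256 × 256 can be sketched as follows; a nearest-neighbour downsample in plain NumPy is assumed here for illustration, though any interpolation library would serve.

```python
import numpy as np

def resize_nearest(img, size=256):
    """Nearest-neighbour resize of a 2-D grayscale image to size x size."""
    H, W = img.shape
    rows = np.arange(size) * H // size   # source row index for each output row
    cols = np.arange(size) * W // size   # source column index for each output column
    return img[rows][:, cols]

# Resize a synthetic 500 x 500 image to the 256 x 256 size used in this study.
small = resize_nearest(np.random.rand(500, 500))
```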

4.2. Ground Truth Images

In this paper, segmentation of the images has been used to find the primary location of the tumor. Segmentation is not one of the main steps of the convolutional DL algorithm; it has been used to validate the results. Pixels with zero values are separated as the background, each pixel in the breast mass is set to the threshold value (255), and each remaining pixel of normal breast tissue is set to 127, as shown in Figure 1.
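The three-level ground-truth encoding (background 0, normal tissue 127, tumor 255) can be sketched as below; the boolean input masks and the function name are assumptions for illustration.

```python
import numpy as np

def encode_mask(tissue, tumor):
    """tissue, tumor: boolean arrays marking breast tissue and the lesion."""
    mask = np.zeros(tissue.shape, dtype=np.uint8)  # background stays 0
    mask[tissue] = 127                             # normal breast tissue
    mask[tumor] = 255                              # tumor pixels override tissue
    return mask

# Toy example: 4 x 4 image, all tissue, with a 2 x 2 tumor in the center.
tissue = np.ones((4, 4), dtype=bool)
tumor = np.zeros((4, 4), dtype=bool)
tumor[1:3, 1:3] = True
mask = encode_mask(tissue, tumor)
```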
Ultrasound has high image quality and is free of the marks and scan artifacts found on film; this allows the network to learn more specific features for segmentation. Using many images increases the model’s accuracy, since it enlarges the dataset and allows training on overlapping patches. Eighty percent of the images are randomly assigned to the training set and 20% to the test set for each split.

4.3. Feature Selection

Fractal features extracted from the ultrasound images are used for model classification. In the fractal method, the histogram of each image is extracted, as shown in Figure 2.
According to the diagram in Figure 2, each image is transformed into a histogram and modeled by the fractal method. The obtained model is represented by four normal distribution functions, and the characteristics of these distributions yield four numbers used as the features of each image. The features are stored in a matrix, ready for classification. In Figure 2, the blue line shows the image histogram and the red line shows the sum of the fitted functions; the image features are the parameters of these Gaussian functions. By examining the images used for modeling, four features with higher accuracy were selected. This process is applied to all images in the dataset, so the classification dataset is converted to a matrix with four features per image.
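The histogram-modeling step can be sketched as fitting a sum of Gaussian functions to the grey-level histogram and keeping the fitted parameters as features. A two-component fit on synthetic data is shown below for brevity (the paper uses four distributions); the function names and initial guesses are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_mix(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian functions evaluated at x."""
    g = lambda a, m, s: a * np.exp(-((x - m) ** 2) / (2 * s ** 2))
    return g(a1, m1, s1) + g(a2, m2, s2)

def histogram_features(img, bins=64):
    """Fit the grey-level histogram and return the Gaussian parameters."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0, 1), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    p0 = [hist.max(), 0.3, 0.1, hist.max(), 0.7, 0.1]  # rough initial guess
    params, _ = curve_fit(gauss_mix, centers, hist, p0=p0, maxfev=5000)
    return params  # amplitudes, means, widths used as classifier features

# Synthetic bimodal "image": two intensity populations at 0.3 and 0.7.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(0.3, 0.05, 5000), rng.normal(0.7, 0.05, 5000)])
feats = histogram_features(np.clip(img, 0, 1))
```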

4.4. Classification of Ultrasound Images by Traditional Methods

In this part, the results of classical classification on breast cancer ultrasound images are presented. The fractal method is one of the most potent feature-extraction methods for images, especially MRI, and cancer has been diagnosed by combining these features with well-known classifiers. The feature-extraction output for each image is four scalars, which are used as classification inputs. The output layer of all classifiers was labeled 0 for normal tissue, 1 for benign tumors, and 2 for malignant tumors. The proposed models are designed to diagnose cancer types, and the classification outputs are plotted as confusion matrices.
According to Figure 3, green cells show correct predictions, and red cells show the number of images with false results. Gray cells show sensitivity (horizontal) and precision (vertical) values, and the colored cell in the corner gives the total accuracy of each model. Four classical classification models, among the most powerful methods for diagnosis and classification, were selected to detect the tumor type. These classifiers usually give very good results for binary detection but have difficulty with multi-class problems (here, three classes). According to the results, the decision tree method can help diagnose cancer with acceptable accuracy. Of the 133 images with normal tissue, 122 (84.2%) were correctly diagnosed, and 111 images with benign tumors (83.5% of those patients) were correctly diagnosed. Among the benign-tumor images, 10 were misdiagnosed as healthy and 12 as malignant, a total of 22 false results. Overall, the accuracy of the decision tree method was 81%, with a 19% error rate, followed by KNN, SVM, and NB with 67.7%, 40.1%, and 44.9% accuracy, respectively. This level of accuracy cannot fully satisfy the classification task; a model that diagnoses the disease more accurately and sensitively is needed. Therefore, the next section presents the proposed model based on a convolutional neural network.
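The four-classifier comparison on the 4-feature matrix can be sketched with scikit-learn as below. The synthetic, well-separated features stand in for the real fractal features, so the scores here say nothing about the paper's reported accuracies.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic 4-feature data: three clusters for the three tissue classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (100, 4)) for c in (0, 1, 2)])
y = np.repeat([0, 1, 2], 100)   # 0 normal, 1 benign, 2 malignant
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

models = {"DT": DecisionTreeClassifier(), "KNN": KNeighborsClassifier(),
          "SVM": SVC(), "NB": GaussianNB()}
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
```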

4.5. Classification of Ultrasound Images Based on Presented CNN Method

In this section, the results of the CNN architecture on breast cancer ultrasound images are presented. There are three classes: benign images, malignant images, and healthy (normal) tissue. The CNN model is trained using the dataset images, and the evaluation criteria used to analyze the model are as follows.
Figure 4 shows the architecture of the proposed CNN method for diagnosing cancerous tumors. The network consists of 16 layers with three convolutional layers. Images are labeled 1 for patients with a benign tumor, 2 for malignant, and 0 for healthy individuals. Seventy percent of the image dataset is used for network training and 30% for model testing; the results are shown in the following section. Figure 5 shows the loss and accuracy as a function of training epochs, depicted during network training. The process ended after 3000 iterations, once the best result with the highest accuracy and lowest network loss was obtained. The black points show the cross-validation of the training process.
The final trained model is evaluated on both the training and test sets; the confusion matrices in Figure 6 show these predictions. As can be seen in the figure, the model correctly classified 305 of the 306 benign-tumor images (99.7%) in the training set, and the 147 malignant images and 93 healthy-tissue images were likewise predicted almost perfectly, giving a training-set accuracy of 99.8%. The results for the test group, which comprises 30% of the original data and did not participate in the modeling, are as follows.
Regarding the test results, 116 of the 131 benign test images were correctly detected; in other words, this diagnosis was associated with 88.5% sensitivity. In addition, 48 of the 63 malignant images (76.2%) were correctly diagnosed, while 15 images (23.8%) were misdiagnosed as benign, which are counted as false results. The model showed low sensitivity (35%) in identifying healthy tissues. Overall, the total accuracy of the proposed model on the validation set is 76.1%. The results of comparing the proposed models for cancer diagnosis are presented in Table 2 and Figure 7. According to Table 2, the presented CNN model diagnosed cancer with much higher accuracy, performing better than the other methods and providing a significant improvement. According to the receiver operating characteristic (ROC) diagram in Figure 7, the area under the ROC curve (AUC), another measure of classifier efficiency, reached 96% for the proposed model. Based on the findings of this study, it can be concluded that the proposed high-potential CNN algorithm can be used to diagnose breast cancer from ultrasound images. In the next section, we present a segmentation method for detecting tumor tissue.
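The ROC/AUC evaluation used here can be sketched with scikit-learn as below; the scores and labels are synthetic stand-ins, not this study's predictions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic binary split (e.g. benign vs. malignant): class-0 samples get
# lower scores, class-1 samples higher scores, with some overlap.
rng = np.random.default_rng(2)
y_true = np.repeat([0, 1], 50)
y_score = np.concatenate([rng.normal(0.3, 0.15, 50),
                          rng.normal(0.7, 0.15, 50)])
auc = roc_auc_score(y_true, y_score)   # area under the ROC curve
```

The AUC summarizes the whole ROC curve in one number: 0.5 means chance-level ranking and 1.0 means the classes are perfectly separated by the score.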

4.6. Segmentation of Ultrasound Images Using the Presented CNN

In this section, we present the results of the CNN segmentation algorithm. The architecture of the detection method is shown in Figure 8: it consists of 11 layers with three convolutional layers. The 256 × 256 input images are breast ultrasound images, and the output layer contains the ground-truth labeled images. In these images, the value 255 marks the tumor tissue, and all other points are set to zero. The proposed network is a kind of classification network in which the output image pixels, rather than a single image label, serve as the classification targets. In other words, the supervised segmentation in this study is a form of higher-dimensional classification that classifies pixels to detect the infected area. Supervised segmentation is naturally one of the most complex image-processing problems in deep learning and requires longer processing time.
In the input image in Figure 8, the gray area shows healthy tissue, with part of the image appearing darker. In classifying or segmenting images, the appearance of the input image matters to the model: many dark pixels can suggest to the model that those regions are also tumors, while the tumor itself appears as a round region. Accordingly, the images should allow the tumor diagnosis to be evaluated with higher accuracy; recognizing this area using computer image processing is challenging, because deep learning methods separate regions of different intensities more easily than regions of nearly uniform intensity. In this study, the infected area is first identified and labeled by a physician or an automated algorithm, so ground-truth images containing the tumor area form the output layer of the architecture. The algorithm was trained with 5000 iterations, with 70% of the data used for model training and 30% for testing the proposed model. The accuracy and loss of the CNN model training are shown in Figure 9.
The segmentation results are shown in Figure 10. The first and third columns show input images infected with a cancerous tumor, while the adjacent columns show the segmentation results; 70% of the images were used for network training and 30% for testing. The detected infected areas are shown in the second and fourth columns of Figure 10, and the resulting images should resemble the ground-truth images. According to the results, the outputs are very similar to the ground truth and correctly identify the location of the tumor. To improve the output, the minor scattered points in the results are connected with morphological operations: because the output points of the model identify the approximate location and size of the tumor, morphology relates the detections to the original extent of the cancer. Figure 11 shows the approximate location and size of the tumor after the morphological operations. The results show that the proposed architecture can correctly identify the contaminated area.
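The morphological post-processing described above can be sketched with SciPy: close small gaps between detected points, then keep the largest connected component as the tumor region. The 5 × 5 structuring element and the strategy of keeping only the largest component are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def postprocess(mask):
    """Connect scattered detections and keep the largest connected component."""
    closed = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    labels, n = ndimage.label(closed)
    if n == 0:
        return closed
    sizes = ndimage.sum(closed, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)   # largest component only

# Synthetic mask: a tumour blob with a small internal gap, plus one stray pixel.
m = np.zeros((64, 64), dtype=bool)
m[20:30, 20:30] = True
m[25, 28] = False      # gap inside the detected region
m[5, 5] = True         # isolated false detection
clean = postprocess(m)
```

The closing fills the internal gap, and the largest-component filter drops the isolated false detection, which mirrors the role of morphology as a post-processing step here.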
The performance of the segmentation method is evaluated with quantitative criteria. Segmentation criteria differ somewhat from the classification metrics shown in the previous section: confusion matrices are used for classification, but for segmentation a confusion matrix would have to be drawn for every image. Accordingly, the ROC curve, which plots true positive rate against false positive rate, is the most suitable criterion, and in the segmentation algorithm it is computed per image. An image is handled well when its curve shows a higher true positive rate at a lower false-positive rate, and the results show that most ROC curves lie in this high-efficiency region. To summarize the ROC curves in Figure 12 with a single value, we report the area under the curve (AUC), which quantifies the model's performance for each image. In this section, 400 images of benign breast tumors are fed to the model. According to Figure 13, 92% of the images fall in the high-performance region with an AUC above 0.6. In the graph, a high AUC indicates pixels segmented correctly, while a low AUC indicates pixels detected incorrectly. With morphological operations as a post-processing algorithm, the proposed model identified the tumor's location and volume.
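The per-image AUC described above can be computed directly from a ground-truth mask and a pixelwise score map. The sketch below uses the pairwise (Mann-Whitney) formulation of ROC AUC; the function name and the toy arrays are illustrative, not from the paper:

```python
import numpy as np

def pixel_auc(gt_mask: np.ndarray, score_map: np.ndarray) -> float:
    """Per-image ROC AUC: the probability that a randomly chosen tumor pixel
    outscores a randomly chosen background pixel (ties count 1/2).
    This is the Mann-Whitney formulation of the area under the ROC curve."""
    gt = gt_mask.ravel().astype(bool)
    s = score_map.ravel().astype(float)
    pos, neg = s[gt], s[~gt]
    if len(pos) == 0 or len(neg) == 0:
        return float("nan")  # AUC is undefined without both classes
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(wins + 0.5 * ties)

# Toy 2x2 example: top row is tumor, bottom row is background.
gt = np.array([[1, 1], [0, 0]])
perfect = np.array([[0.9, 0.8], [0.2, 0.1]])  # ranks all tumor pixels first
auc = pixel_auc(gt, perfect)                  # 1.0 for a perfect ranking
```

For realistically sized images, a rank-based implementation (e.g. `sklearn.metrics.roc_auc_score`) avoids the O(pos x neg) pairwise comparison used here for clarity.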

5. Conclusions

Breast cancer is one of the leading causes of death among women worldwide, and early detection helps reduce the number of premature deaths. This study examines medical ultrasound images of breast cancer. The breast ultrasound dataset is divided into three classes: normal, benign, and malignant. Combined with machine learning, breast ultrasound images can yield strong results in classifying and diagnosing breast cancer. This study applies six machine-learning methods to detect and segment ultrasound images of cancer patients and to diagnose the disease or tumor type. First, features of the images are extracted using the fractal method. KNN, SVM, DT, and NB classification techniques were then used to classify the patients' images, and a convolutional neural network (CNN) architecture was designed to classify patients directly from the ultrasound images. Traditional classifiers provide excellent results for binary recognition but face many problems in multi-class settings. Among them, the decision tree (DT) method yields results accurate enough to help diagnose cancer.
The accuracy of the decision tree method is 81%, with an error rate of 19%; KNN, SVM, and NB achieved 67.7%, 40.1%, and 44.9% accuracy, respectively. The final CNN model was evaluated on both the training and test sets. The presented model correctly classified 305 of 306 benign-tumor training images (99.7%), and the overall accuracy on the training set is 99.8%. On the test set, 116 of 131 benign images were correctly detected, corresponding to a sensitivity of 88.5%; the total validation accuracy of the proposed model is 76.1%. Based on these findings, it can be concluded that the proposed CNN algorithm has high potential for diagnosing breast cancer from ultrasound images. The second CNN model presented was able to identify the original location of the tumor: 92% of the images fall in the high-performance region with an AUC above 0.6, and morphological operations applied as post-processing identified the tumor's location and volume. These findings can also be used to monitor patients and prevent growth of the infected area. Much work is being done on classifying patients with artificial intelligence, for example in diagnosing brain tumors, breast cancer, and lung cancer, but implementing these approaches is not always convenient. The present methods could be used in a wearable monitoring system to diagnose the disease, monitor patients, and transfer results to their physicians. Given the state of machine learning in medical image processing, it is time to deploy artificial-intelligence methods in medicine to help physicians make better diagnoses as quickly as possible.
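The four classical classifiers compared above can be sketched with scikit-learn. Note this is a minimal illustration on synthetic feature vectors standing in for the fractal features; the dataset, hyperparameters, and accuracies are not those of the paper:

```python
# Hedged sketch: KNN, SVM, DT, and NB on synthetic 3-class data
# (normal / benign / malignant stand-ins), with a 70/30 split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic feature vectors; in the paper these would be fractal features.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
}
# Held-out accuracy per model, mirroring the comparison in Table 2.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```

The same `fit`/`score` interface lets all four models share one evaluation loop, which is how such multi-classifier comparisons are typically organized.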

Author Contributions

Conceptualization, writing—original draft preparation Y.P.; methodology, software, E.Z.; validation, investigation, M.S.P.; writing—review and editing, A.S.M. All authors have read and agreed to the published version of the manuscript.

Funding

The funding sources were not involved in the study design; the collection, analysis, or interpretation of data; the writing of the manuscript; or the decision to submit the manuscript for publication.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset of breast ultrasound images is available online at https://www.kaggle.com/aryashah2k/breast-ultrasound-images-dataset (accessed on 3 October 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Masud, M.; Hossain, M.S.; Alhumyani, H.; Alshamrani, S.S.; Cheikhrouhou, O.; Ibrahim, S.; Muhammad, G.; Rashed, A.E.E.; Gupta, B.B. Pre-trained convolutional neural networks for breast cancer detection using ultrasound images. ACM Trans. Internet Technol. 2021, 21, 1–17.
2. Bai, J.; Posner, R.; Wang, T.; Yang, C.; Nabavi, S. Applying deep learning in digital breast tomosynthesis for automatic breast cancer detection: A review. Med. Image Anal. 2021, 71, 102049.
3. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
4. Bluemke, D.A.; Moy, L.; Bredella, M.A.; Ertl-Wagner, B.B.; Fowler, K.J.; Goh, V.J.; Halpern, E.F.; Hess, C.P.; Schiebler, M.L.; Weiss, C.R. Assessing radiology research on artificial intelligence: A brief guide for authors, reviewers, and readers-from the Radiology Editorial Board. Radiology 2020, 294, 487–489.
5. Cossy-Gantner, A.; Germann, S.; Schwalbe, N.R.; Wahl, B. Artificial intelligence (AI) and global health: How can AI contribute to health in resource-poor settings? BMJ Glob. Health 2018, 3, e000798.
6. Zhang, L.; Wang, H.; Li, Q.; Zhao, M.H.; Zhan, Q.M. Big data and medical research in China. BMJ 2018, 360, j5910.
7. Sarode, V.; Chaudhari, A.; Barreto, F.T.R. A Review of Deep Learning Techniques Used in Breast Cancer Image Classification. Intell. Comput. Netw. 2021, 146, 177–186.
8. Mendes, J.; Matela, N. Breast cancer risk assessment: A review on mammography-based approaches. J. Imaging 2021, 7, 98.
9. Lotter, W.; Diab, A.R.; Haslam, B.; Kim, J.G.; Grisot, G.; Wu, E.; Wu, K.; Onieva, J.O.; Boyer, Y.; Boxerman, J.L.; et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat. Med. 2021, 27, 244–249.
10. Ahmadi, M.; Sharifi, A.; Jafarian Fard, M.; Soleimani, N. Detection of brain lesion location in MRI images using convolutional neural network and robust PCA. Int. J. Neurosci. 2021, 1–12.
11. Hassantabar, S.; Ahmadi, M.; Sharifi, A. Diagnosis and detection of infected tissue of COVID-19 patients based on lung X-ray image using convolutional neural network approaches. Chaos Solitons Fractals 2020, 140, 110170.
12. Ganggayah, M.D.; Taib, N.A.; Har, Y.C.; Lio, P.; Dhillon, S.K. Predicting factors for survival of breast cancer patients using machine learning techniques. BMC Med. Inform. Decis. Mak. 2019, 19, 48.
13. Dorosti, S.; Ghoushchi, S.J.; Sobhrakhshankhah, E.; Ahmadi, M.; Sharifi, A. Application of gene expression programming and sensitivity analyses in analyzing effective parameters in gastric cancer tumor size and location. Soft Comput. 2020, 24, 9943–9964.
14. Zeebaree, D.Q.; Haron, H.; Abdulazeez, A.M.; Zebari, D.A. Machine learning and region growing for breast cancer segmentation. In Proceedings of the 2019 International Conference on Advanced Science and Engineering (ICOASE 2019), Duhok, Iraq, 2–4 April 2019; pp. 88–93.
15. Agarap, A.F.M. On breast cancer detection: An application of machine learning algorithms on the Wisconsin diagnostic dataset. In Proceedings of the 2nd International Conference on Machine Learning and Soft Computing (ICMLSC ‘18), Phu Quoc Island, Vietnam, 2–4 February 2018; pp. 5–9.
16. Ferroni, P.; Zanzotto, F.M.; Riondino, S.; Scarpato, N.; Guadagni, F.; Roselli, M. Breast cancer prognosis using a machine learning approach. Cancers 2019, 11, 328.
17. Binder, A.; Bockmayr, M.; Hägele, M.; Wienert, S.; Heim, D.; Hellweg, K.; Ishii, M.; Stenzinger, A.; Hocke, A.; Denkert, C.; et al. Morphological and molecular breast cancer profiling through explainable machine learning. Nat. Mach. Intell. 2021, 3, 355–366.
18. Chen, H.; Heidari, A.A.; Chen, H.; Wang, M.; Pan, Z.; Gandomi, A.H. Multi-population differential evolution-assisted Harris hawks optimization: Framework and case studies. Futur. Gener. Comput. Syst. 2020, 111, 175–198.
19. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft Comput. J. 2020, 88, 105946.
20. Xu, Y.; Chen, H.; Luo, J.; Zhang, Q.; Jiao, S.; Zhang, X. Enhanced Moth-flame optimizer with mutation strategy for global optimization. Inf. Sci. 2019, 492, 181–203.
21. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.N.; Huang, H.; Tong, C. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 2017, 267, 69–84.
22. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl.-Based Syst. 2021, 214, 106728.
23. Zhao, X.; Zhang, X.; Cai, Z.; Tian, X.; Wang, X.; Huang, Y.; Chen, H.; Hu, L. Chaos enhanced grey wolf optimization wrapped ELM for diagnosis of paraquat-poisoned patients. Comput. Biol. Chem. 2019, 78, 481–490.
24. Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl.-Based Syst. 2021, 213, 106684.
25. Shen, L.; Chen, H.; Yu, Z.; Kang, W.; Zhang, B.; Li, H.; Yang, B.; Liu, D. Evolving support vector machines using fruit fly optimization for medical data classification. Knowl.-Based Syst. 2016, 96, 61–75.
26. Yu, H.; Li, W.; Chen, C.; Liang, J.; Gui, W.; Wang, M.; Chen, H. Dynamic Gaussian bare-bones fruit fly optimizers with abandonment mechanism: Method and analysis. Eng. Comput. 2020, 2020, 1–29.
27. Xu, X.; Chen, H.-L. Adaptive computational chemotaxis based on field in bacterial foraging optimization. Soft Comput. 2014, 18, 797–807.
28. Zhang, Y.; Liu, R.; Wang, X.; Chen, H.; Li, C. Boosted binary Harris hawks optimizer and feature selection. Eng. Comput. 2020, 37, 3741–3770.
29. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Liang, G.; Muhammad, K.; Chen, H. Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D Kapur entropy. Knowl.-Based Syst. 2021, 216, 106510.
30. Zhao, X.; Li, D.; Yang, B.; Ma, C.; Zhu, Y.; Chen, H. Feature selection based on improved ant colony optimization for online detection of foreign fiber in cotton. Appl. Soft Comput. J. 2014, 24, 585–596.
31. Tu, J.; Chen, H.; Liu, J.; Heidari, A.A.; Zhang, X.; Wang, M.; Ruby, R.; Pham, Q.V. Evolutionary biogeography-based whale optimization methods with communication structure: Towards measuring the balance. Knowl.-Based Syst. 2021, 212, 106642.
32. Yu, C.; Chen, M.; Cheng, K.; Zhao, X.; Ma, C.; Kuang, F.; Chen, H. SGOA: Annealing-behaved grasshopper optimizer for global tasks. Eng. Comput. 2021, 2021, 1–28.
33. Souri, E.A.; Chenoweth, A.; Cheung, A.; Karagiannis, S.N.; Tsoka, S. Cancer grade model: A multi-gene machine learning-based risk classification for improving prognosis in breast cancer. Br. J. Cancer 2021, 125, 748–758.
34. Boumaraf, S.; Liu, X.; Wan, Y.; Zheng, Z.; Ferkous, C.; Ma, X.; Li, Z.; Bardou, D. Conventional machine learning versus deep learning for magnification dependent histopathological breast cancer image classification: A comparative study with visual explanation. Diagnostics 2021, 11, 528.
35. Saxena, S.; Shukla, S.; Gyanchandani, M. Breast cancer histopathology image classification using kernelized weighted extreme learning machine. Int. J. Imaging Syst. Technol. 2021, 31, 168–179.
36. Shashaani, H.; Akbari, N.; Faramarzpour, M.; Salemizadeh Parizi, M.; Vanaei, S.; Khayamian, M.A.; Faranoush, M.; Anbiaee, R.; Abdolahad, M. Cyclic voltammetric biosensing of cellular ionic secretion based on silicon nanowires to detect the effect of paclitaxel on breast normal and cancer cells. Microelectron. Eng. 2021, 239, 111512.
37. Nourbakhsh, E.; Mohammadi, A.; Salemizadeh Parizi, M.; Mansouri, A.; Ebrahimzadeh, F. Role of Myeloid-derived suppressor cell (MDSC) in autoimmunity and its potential as a therapeutic target. Inflammopharmacology 2021, 1–9.
38. Khayamian, M.A.; Shalileh, S.; Vanaei, S.; Salemizadeh Parizi, M.; Ansaryan, S.; Saghafi, M.; Abbasvandi, F.; Ebadi, A.; Soltan Khamsi, P.; Abdolahad, M. Electrochemical generation of microbubbles by carbon nanotube interdigital electrodes to increase the permeability and material uptakes of cancer cells. Drug Deliv. 2019, 26, 928–934.
39. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2020, 28, 104863.
40. Li, C.; Hou, L.; Sharma, B.Y.; Li, H.; Chen, C.S.; Li, Y.; Zhao, X.; Huang, H.; Cai, Z.; Chen, H. Developing a new intelligent system for the diagnosis of tuberculous pleural effusion. Comput. Methods Programs Biomed. 2018, 153, 211–225.
41. Xia, J.; Chen, H.; Li, Q.; Zhou, M.; Chen, L.; Cai, Z.; Fang, Y.; Zhou, H. Ultrasound-based differentiation of malignant and benign thyroid Nodules: An extreme learning machine approach. Comput. Methods Programs Biomed. 2017, 147, 37–49.
42. Chen, H.L.; Wang, G.; Ma, C.; Cai, Z.N.; Liu, W.B.; Wang, S.J. An efficient hybrid kernel extreme learning machine approach for early diagnosis of Parkinson’s disease. Neurocomputing 2016, 184, 131–144.
43. Hu, L.; Hong, G.; Ma, J.; Wang, X.; Chen, H. An efficient machine learning approach for diagnosis of paraquat-poisoned patients. Comput. Biol. Med. 2015, 59, 116–124.
44. Yu, K.; Tan, L.; Lin, L.; Cheng, X.; Yi, Z.; Sato, T. Deep-learning-empowered breast cancer auxiliary diagnosis for 5GB remote E-health. IEEE Wirel. Commun. 2021, 28, 54–61.
45. Jiang, M.; Zhang, D.; Tang, S.C.; Luo, X.M.; Chuan, Z.R.; Lv, W.Z.; Jiang, F.; Ni, X.J.; Cui, X.W.; Dietrich, C.F. Deep learning with convolutional neural network in the assessment of breast cancer molecular subtypes based on US images: A multicenter retrospective study. Eur. Radiol. 2021, 31, 3673–3682.
46. Bychkov, D.; Linder, N.; Tiulpin, A.; Kücükel, H.; Lundin, M.; Nordling, S.; Sihto, H.; Isola, J.; Lehtimäki, T.; Kellokumpu-Lehtinen, P.L.; et al. Deep learning identifies morphological features in breast cancer predictive of cancer ERBB2 status and trastuzumab treatment efficacy. Sci. Rep. 2021, 11, 4037.
47. Saber, A.; Sakr, M.; Abo-Seida, O.M.; Keshk, A.; Chen, H. A novel deep-learning model for automatic detection and classification of breast cancer using the transfer-learning technique. IEEE Access 2021, 9, 71194–71209.
48. Lee, Y.W.; Huang, C.S.; Shih, C.C.; Chang, R.F. Axillary lymph node metastasis status prediction of early-stage breast cancer using convolutional neural networks. Comput. Biol. Med. 2021, 130, 104206.
49. Zhang, X.; Li, H.; Wang, C.; Cheng, W.; Zhu, Y.; Li, D.; Jing, H.; Li, S.; Hou, J.; Li, J.; et al. Evaluating the accuracy of breast cancer and molecular subtype diagnosis by ultrasound image deep learning model. Front. Oncol. 2021, 11, 606.
50. Zhou, L.Q.; Wu, X.L.; Huang, S.Y.; Wu, G.G.; Ye, H.R.; Wei, Q.; Bao, L.Y.; Deng, Y.; Bin Li, X.R.; Cui, X.W.; et al. Lymph node metastasis prediction from primary breast cancer US images using deep learning. Radiology 2020, 294, 19–28.
51. Sharma, S.; Mehra, R. Conventional machine learning and deep learning approach for multi-classification of breast cancer histopathology images—A comparative insight. J. Digit. Imaging 2020, 33, 632–654.
52. Hu, Q.; Whitney, H.M.; Giger, M.L. A deep learning methodology for improved breast cancer diagnosis using multiparametric MRI. Sci. Rep. 2020, 10, 10536.
53. Han, D.; Zhao, N.; Shi, P. Gear fault feature extraction and diagnosis method under different load excitation based on EMD, PSO-SVM and fractal box dimension. J. Mech. Sci. Technol. 2019, 33, 487–494.
54. Srinivasan, A.; Battacharjee, P.; Prasad, A.I.; Sanyal, G. Brain MR image analysis using discrete wavelet transform with fractal feature analysis. In Proceedings of the 2nd International Conference on Electronics, Communication and Aerospace Technology (ICECA 2018), Coimbatore, India, 29–31 March 2018; pp. 1660–1664.
55. Chaurasia, V.; Chaurasia, V. Statistical feature extraction based technique for fast fractal image compression. J. Vis. Commun. Image Represent. 2016, 41, 87–95.
56. Ahmadi, M.; Sharifi, A.; Hassantabar, S.; Enayati, S. QAIS-DSNN: Tumor area segmentation of MRI image with optimized quantum matched-filter technique and deep spiking neural network. Biomed Res. Int. 2021, 2021, 6653879.
57. Rezaei, M.; Farahanipad, F.; Dillhoff, A.; Elmasri, R.; Athitsos, V. Weakly-supervised hand part segmentation from depth images. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference (PETRA 2021), Corfu, Greece, 29 June–2 July 2021; pp. 218–225.
58. Artin, J.; Valizadeh, A.; Ahmadi, M.; Kumar, S.A.P.; Sharifi, A. Presentation of a novel method for prediction of traffic with climate condition based on ensemble learning of neural architecture search (NAS) and linear regression. Complexity 2021, 2021, 8500572.
59. Ahmadi, M.; Taghavirashidizadeh, A.; Javaheri, D.; Masoumian, A.; Ghoushchi, S.J.; Pourasad, Y. DQRE-SCnet: A novel hybrid approach for selecting users in federated learning with deep-q-reinforcement learning based on spectral clustering. J. King Saud Univ. Inf. Sci. 2021, in press.
60. Zhang, Y.; Liu, R.; Heidari, A.A.; Wang, X.; Chen, Y.; Wang, M.; Chen, H. Towards augmented kernel extreme learning models for bankruptcy prediction: Algorithmic behavior and comprehensive analysis. Neurocomputing 2021, 430, 185–212.
Figure 1. Ultrasound and clustered images of breast cancer for benign, malignant, and normal tissue conditions.
Figure 2. Histogram of features extracted from the ultrasound images. Blue: Histogram of the image, Red: Modeled histogram, Green: Gaussian Functions.
Figure 3. Confusion matrices for classifying or diagnosing tumor type and disease.
Figure 4. The architecture provided by CNN for classifying or diagnosing tumor type and disease.
Figure 5. The accuracy and loss of the CNN classification model during program execution.
Figure 6. The confusion matrix for the training and test set for the proposed CNN model.
Figure 7. ROC curve for the various classifications presented in the research.
Figure 8. Presented CNN network architecture for the cancer tumor segmentation.
Figure 9. The accuracy and loss of CNN network for cancer tumor segmentation.
Figure 10. Results of CNN network cancer tumor segmentation.
Figure 11. The results of CNN network cancer tumor morphology operations.
Figure 12. The ROC curve of a CNN network cancer tumor.
Figure 13. AUC value plot for images used on CNN.
Table 1. Summary of research for diagnosis of breast cancer based on DL approaches.
Author | Year | Type | Network | Results
Yu et al. [44] | 2021 | Auxiliary diagnosis | Inception-v3 | Breast cancer diagnosis accuracy in distant locations has improved.
Jiang et al. [45] | 2021 | Assessment of molecular subtypes | DCNN | The DL algorithm uses pretreatment ultrasound images of breast cancer to identify molecular subtypes with excellent diagnosis accuracy.
Bychkov et al. [46] | 2021 | Identifying morphological features | DNN | The success of adjuvant anti-ERBB2 therapy was linked to ERBB2-associated morphology, which might help predict treatment outcomes in breast cancer.
Saber et al. [47] | 2021 | Automatic detection and classification | ResNet50, VGG-16, Inception-V2 ResNet | Overall accuracy is 98.96%.
Boumaraf et al. [34] | 2021 | Magnification-dependent classification of histopathological breast cancer images | VGG-19 | Autonomous DL techniques can serve pathologists as a legitimate and credible support tool for breast cancer detection.
Lee et al. [48] | 2021 | Prediction of axillary lymph node metastases | CNN | The suggested CAP paradigm, which includes primary tumor and peritumoral cells to determine ALN status in women with symptomatic breast cancer, is reliable for predicting the ALN condition.
Zhang et al. [49] | 2021 | Molecular subtype diagnosis | Optimized DL model | The model's prediction capacity for molecular subtypes was good, which has therapeutic implications.
Zhou et al. [50] | 2020 | Lymph node metastasis prediction | Inception V3, Inception-ResNet V2, and ResNet-101 | Using ultrasound images from patients with initial breast cancer, DL algorithms can accurately predict clinically negative axillary lymph node metastases.
Sharma and Mehra [51] | 2020 | Histopathology classification | VGG16, VGG19, and ResNet50 | For all magnification factors, the benign and malignant classes are the most complicated.
Hu et al. [52] | 2020 | Breast cancer diagnosis from multiparametric MRI | CNN | The multilayer perceptron transfer-learning technique for MRI may boost predictive value in breast imaging interpretation by lowering the false positive rate while maintaining a high accuracy rate.
Table 2. Comparison of different classification models.
Model | AUC | Error | Accuracy
Presented CNN | 0.96 | 0.20% | 99.80%
DT | 0.87 | 19% | 81%
KNN | 0.66 | 32.30% | 67.70%
SVM | 0.6 | 59.90% | 40.10%
NB | 0.6 | 55.10% | 44.90%

Share and Cite

MDPI and ACS Style

Pourasad, Y.; Zarouri, E.; Salemizadeh Parizi, M.; Salih Mohammed, A. Presentation of Novel Architecture for Diagnosis and Identifying Breast Cancer Location Based on Ultrasound Images Using Machine Learning. Diagnostics 2021, 11, 1870. https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11101870

