Article

Buckwheat Disease Recognition Based on Convolution Neural Network

1 College of Computer Science, Chongqing University, Chongqing 400044, China
2 College of Computer and Information Science, Southwest University, Chongqing 400715, China
3 College of Agronomy and Biotechnology, Southwest University, Chongqing 400715, China
* Author to whom correspondence should be addressed.
Submission received: 25 March 2022 / Revised: 29 April 2022 / Accepted: 30 April 2022 / Published: 9 May 2022
(This article belongs to the Special Issue Applications of Computer Science in Agricultural Engineering)

Abstract

Buckwheat is an important cereal crop with high nutritional and health value. Buckwheat diseases greatly affect its quality and yield, so real-time disease monitoring is an essential part of developing the buckwheat industry. In this work, we propose an automated way to identify buckwheat diseases by integrating a convolutional neural network (CNN) with image processing technology. First, the proposed approach detects the diseased area accurately. Then, to improve classification accuracy, a two-level inception structure is added to the traditional convolutional neural network for precise feature extraction; it also helps to handle low-quality images caused by complex imaging environments, crossing leaves in the sampled buckwheat images, and so on. At the same time, a convolution based on cosine similarity is adopted in place of the traditional convolution to reduce the influence of uneven illumination during imaging. Experiments confirmed that the revised convolution extracts better features from samples with uneven illumination. Finally, the experimental results show that the precision, recall, and F1-measure of disease detection reached 97.54, 96.38, and 97.82%, respectively. For identifying disease categories, the mean values of precision, recall, and F1-measure were 84.86, 85.78, and 85.4%. Our method provides important technical support for the automatic recognition of buckwheat diseases.

1. Introduction

1.1. The Significance of Buckwheat Disease Identification

Buckwheat is an important grain rich in nutrients, containing protein, cellulose, sugars, and the antioxidant rutin, all of which are beneficial to human health. Moreover, buckwheat is a high-quality crop with development potential owing to its strong planting adaptability, cold tolerance, and tolerance of poor soils. In China, buckwheat is mainly grown in high-altitude, cold mountainous areas in the northwest, northeast, and southwest, where it is the main food and economic crop [1]. Globally, buckwheat is mainly distributed in Canada, India, Japan, and other countries [2].
However, buckwheat diseases greatly reduce the yield and quality of buckwheat and degrade its nutritional value. Buckwheat disease is among the most critical agricultural natural disasters in the world: the diseases are of many types, have a great impact, and often cause disasters [3]. In recent years, the buckwheat planting area has gradually expanded, and cultivation has shifted from traditional small-scale planting to modern mechanized production; accordingly, the requirements for disease control are also increasing. Accurate and timely diagnosis of buckwheat disease is an important means of disease prevention and control [4].

1.2. Disease Recognition Based on Deep Learning in Agriculture

Conventional disease recognition is limited by slow speed, strong subjectivity, a high misjudgment rate, and inefficiency; it therefore no longer meets the needs of modern agricultural production. Buckwheat mostly grows in mountainous areas where agricultural technicians are scarce, so the optimal period for disease control is often missed [5]. Fortunately, with the development of machine learning and pattern recognition, it has become feasible to classify and identify buckwheat pests and diseases automatically. In recent years, many new pattern recognition methods have emerged for image classification, detection, and recognition. These technologies help improve identification efficiency and accuracy, reduce cost, and ease the burden on experts [6,7,8]. Researchers have built classifiers for agricultural pests and diseases using support vector machines [9], K-means clustering [10], radial basis functions [11], genetic algorithms [12], Bayesian classification [13], ensemble learning [14], filter segmentation [15], and so on, and have achieved good results. With the rise of deep learning, deep architectures such as convolutional neural networks and recurrent neural networks have made constant progress, and much attention has been paid to automatic identification of pests and diseases using deep learning techniques.
In the above research, crop disease recognition mainly focuses on field crops. Field crops are planted extensively, samples are relatively easy to collect and select, and the established sample databases are standard: diseased and healthy plants are clearly distinguishable and image quality is good, so current deep learning frameworks achieve good recognition performance on these crops. However, buckwheat, as a minor cereal crop, has no standard image database and is mostly planted in mountainous areas, which imposes many limitations on sampling: the illumination of samples is uneven, leaves overlap, and so on. As a consequence, buckwheat images are of low quality and contain substantial noise.
In this paper, a two-level inception structure is inserted into the basic framework of the convolutional neural network to accurately extract the features of buckwheat images and improve classification accuracy. The proposed approach can deal with overlapping leaves and low-quality images. Meanwhile, to reduce the influence of light during sampling, a convolution based on cosine similarity is adopted instead of the traditional convolution operation, so that better features can be extracted from samples with uneven illumination and, finally, buckwheat disease can be recognized accurately.
In agricultural production, automatic classification has been widely applied to crop disease images and is a key technology for pesticide selection and spraying in precision agriculture [16]. Feature extraction is central to identifying crop diseases. However, traditional classification algorithms based on hand-crafted feature extraction are inflexible: they demand extensive professional knowledge, have high time complexity, and struggle to extract high-quality features [17]. Deep learning can obtain multi-scale features of crop diseases and express the characteristics of different diseases more accurately, which benefits accurate identification [18]. Tongke Fan et al. presented a deblurring method for locally blurred images based on deep learning; by training convolutional neural network models with different structures, a normalized segmentation algorithm based on spectral theory was used to segment images of plant diseases and insect pests. The method shows good robustness, generalization, and accuracy for segmenting pests and diseases in agricultural disease image recognition [19]. Qiang Dai et al. proposed a generative adversarial network with dual-attention and topology-fusion mechanisms (DATFGAN), which can effectively transform unclear images into clear, high-resolution ones; its weight-sharing scheme significantly reduces the number of parameters. On the processed images, DATFGAN outperforms other methods and is sufficiently robust [20].
In the field of deep learning, many learning frameworks have been developed for image feature extraction and applied to agricultural pest and disease detection. Rahman et al. proposed a deep-learning-based detection method for rice pests [21]: they used the VGG16 and Inception V3 architectures and fine-tuned them to detect and identify rice pests and diseases. Junde Chen et al. used a transfer learning mechanism, pre-training a DenseNet with an Inception module on the ImageNet database, to identify rice pests and diseases; compared with other methods, theirs has better recognition performance and lower training cost [22]. Dengshan Li et al. proposed a deep-learning-based video detection architecture to detect plant diseases and insect pests in video [23]; they also proposed a set of video detection evaluation indexes based on machine learning classifiers, which can effectively evaluate the quality of video detection. Compared with VGG16, ResNet-50, ResNet-101, and YOLOv3, their network is better suited to detecting diseases and pests in untrained rice videos. In [24], a CNN is likewise used to identify maize leaf diseases, recognizing three major maize diseases in southern Africa: maize leaf blight, common rust (sorghum rust), and gray spot (brown spot). Krishnaswamy et al. used VGG16 as a feature extractor up to the eighth convolutional layer with a multi-class support vector machine (MSVM) to classify diseases and pests in eggplant, achieving good results [25]. After analyzing state-of-the-art convolutional neural networks (AlexNet, GoogleNet, Inception V3, ResNet18, and ResNet50), Valeria et al. used an improved GoogleNet model to identify tomato pests and diseases [26].
Furthermore, Fuentes et al. combined visual object recognition and language generation models to generate detailed information about abnormal plant symptoms and scene interactions; on the task of identifying tomato pests and diseases, the method achieved 92.5% accuracy on a tomato plant anomaly dataset [27]. Qingxin Xiao et al. used the Apriori algorithm to find association rules between climatic factors and the occurrence of cotton pests and diseases, cast pest and disease prediction as time-series prediction, and presented a prediction method based on LSTM (long short-term memory); the results show that suitable temperature and humidity, low rainfall, low wind speed, suitable sunshine duration, and low evaporation are the main factors causing cotton pests and diseases [28]. Verma, S. et al. used a capsule network to classify potato diseases and achieved good results [29].
At present, crop disease detection is mostly carried out on standard crop sample databases. Standard sample databases, screened by experts, have obvious disease characteristics, and their imaging conditions and image quality are reliable. Existing deep learning methods for identifying crop diseases and insect pests do not consider the complexity of disease images collected in the field, such as leaf overlap, uneven lighting, and shadow coverage, which greatly affect recognition. Therefore, how to adapt to a complex imaging environment and accurately extract key features is an important direction of crop disease recognition research.
Moreover, the form of buckwheat plants, especially the leaf shape, affects the light distribution within the canopy: canopy leaves tend to block the middle and lower leaves, and during image collection the shaded areas formed by occlusion often interfere greatly with disease discrimination. In this paper, buckwheat disease recognition comprises two processes: detection of the disease area and recognition of the disease. First, the MSER (Maximally Stable Extremal Regions) method is used to detect candidate disease areas, and an improved AlexNet network then refines the detection. Finally, the inception structure and cosine similarity convolution are used to discriminate specific diseases.
The rest of the paper is organized as follows: the network structure of the CNN is discussed in Section 2; materials and data processing are introduced in Section 3; the method of buckwheat disease recognition is presented in Section 4; the experimental results and related discussion are presented in Section 5; finally, Section 6 draws conclusions and outlines future directions.

2. Network Structure of CNN

A CNN is a feedforward neural network with a deep structure that includes convolution operations. Its basic structure comprises an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer (classifier) [30]. A typical CNN structure is shown in Figure 1.
In a convolutional network, the transformation between layers is a process of feature extraction. Each layer is composed of several two-dimensional planes, each representing a feature map (FM). The input layer is the original image, and each feature extraction layer (convolutional layer) in the network is followed by a secondary extraction layer (pooling layer). This two-stage feature extraction gives the convolutional network a certain tolerance to large deformations of the input data. Generally there are several convolutional and pooling layers. The specific operations are as follows (a minimal code sketch follows this list):
(1) Convolution process: the input image (or the FM of the previous layer) is convolved with a trainable filter and a bias $b_x$ is added, giving the convolutional layer $C_x$.
(2) Pooling process: pixels in a neighborhood are averaged to a single pixel, which is weighted by a scalar $W_{x+1}$ and offset by a bias $b_{x+1}$; passing the result through an activation function yields a reduced feature map $S_{x+1}$.
(3) The fully connected layer: equivalent to the hidden layer in a multilayer perceptron (MLP), it is fully connected to the previous layer. Its computation multiplies the output of the previous layer by a weight vector, adds a bias, and passes the result to a sigmoid function.
(4) The output layer (classification layer): it consists of Euclidean radial basis function units, one per category. The output layer uses a classifier to compute the probability that the input sample belongs to each category.
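To make the pipeline above concrete, the following minimal sketch (in PyTorch, a framework choice of ours; the paper does not prescribe one) chains the four stages. Layer sizes are illustrative assumptions, not the paper's network, and plain average pooling stands in for the weighted, biased subsampling described in step (2).

```python
import torch
import torch.nn as nn

class TypicalCNN(nn.Module):
    """Minimal CNN mirroring Section 2: conv -> pool -> conv -> pool -> FC -> output."""
    def __init__(self, num_classes=9):              # e.g., 8 diseases + disease-free
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5),        # convolution layer C_x (filter + bias b_x)
            nn.ReLU(),
            nn.AvgPool2d(2),                        # pooling layer S_{x+1} (plain averaging
                                                    # in place of the scaled, biased form)
            nn.Conv2d(16, 32, kernel_size=5),
            nn.ReLU(),
            nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 128),           # fully connected "hidden" layer
            nn.Sigmoid(),                           # sigmoid, as in step (3)
            nn.Linear(128, num_classes),            # output (classification) layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A 64x64 RGB input shrinks as 64 -> 60 -> 30 -> 26 -> 13, hence 32*13*13 flat features.
logits = TypicalCNN()(torch.randn(1, 3, 64, 64))
```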

3. Materials and Data Processing

We established a disease image database for buckwheat. The disease images were taken in rural farmland with a Canon EOS 90D digital camera from 2017 to 2019. Buckwheat diseases mainly occur on the leaves, and their occurrence depends on many factors, such as temperature, humidity, rainfall, variety, season, and nutrition. The Chongqing buckwheat industry innovation team of China has carried out extensive research activities in Chongqing and Sichuan Province. In Chongqing, the sampling areas were mainly Weituo farm in Hechuan District, Xiema farm in Beibei District, Banxi farm in Youyang District, Fengan farm in Wanzhou District, and Zhongyi farm in Shizhu District; in Sichuan, the sampling areas were mainly in Liangshan, covering Jinqu Township in Zhaojue County, Shaluo Township in Butuo County, Lami Township in Leibo County, and Nanwa Township in Jinyang County. From February 2017 to November 2019, 7230 buckwheat disease images were collected from buckwheat fields. Images with different backgrounds were collected in real scenes; to make the set as representative as possible, images were collected in both spring and autumn under weather conditions including "slightly cloudy", "cloudy", and "overcast". These steps improve the robustness of the model.
The database contains eight types of disease images: buckwheat spot blight, sclerotinia, seedling blight, ring spot, downy mildew, brown spot, virus disease, and white mold. A sample image of each class is shown in Figure 2a–h. A total of 5000 images were selected for this work, comprising 500 images per disease and 1000 disease-free images. The training set consists of 400 images per disease and 800 disease-free images; the remainder were used as the test set. Five-fold cross-validation was used for evaluation.
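As a concrete illustration of the split just described, here is a small Python sketch of the 5-fold evaluation; the directory layout (data/<class_name>/*.jpg) and the use of scikit-learn are assumptions of ours, since the paper does not describe its tooling.

```python
from pathlib import Path
from sklearn.model_selection import StratifiedKFold

# Hypothetical layout: data/<class_name>/*.jpg, eight disease classes plus "healthy".
samples = [(p, p.parent.name) for p in Path("data").glob("*/*.jpg")]
paths, labels = map(list, zip(*samples))

# Five stratified folds reproduce the 4:1 train/test split per fold while
# preserving the class proportions (500 per disease, 1000 disease-free).
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(paths, labels)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```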

4. Buckwheat Disease Recognition Based on Convolution Neural Network

4.1. Detection of Disease Area

Buckwheat pests and diseases often occur on the leaves [31]. Usually, the whole leaf is used as the input to train a crop disease recognition model, a practice originating from early applications of deep learning in image classification; its advantages are a simple model structure and efficient training and recognition. However, the whole image contains much feature information that is not highly relevant to the recognition task and interferes with model performance. There are many kinds of buckwheat diseases, and some appear similar on the leaves, which easily produces misidentification; for example, spot blight and brown spot are easily confused. If we can locate the disease area, make the recognition model focus only on that area, and accurately extract the most important disease features, recognition accuracy across disease types will improve. Our task is to identify the categories of buckwheat diseases, so the disease region must be detected accurately in the image and its features extracted; based on these regional features, different diseases can be identified effectively to ensure classification accuracy. Region detection separates disease regions from non-disease regions in the images, after which the disease regions are fed to the network for training and recognition. For this purpose, this paper proposes a method combining Maximally Stable Extremal Regions (MSER) [32] with a CNN to detect buckwheat disease regions. The detailed steps are as follows:
Step 1: Use MSER to detect candidate disease areas of buckwheat. The MSER algorithm is as follows:
(1) The buckwheat disease image is converted to grayscale, and the gray image is binarized with 256 different thresholds over the gray range (0–255). Let $Q_t$ denote a connected region in the binary image corresponding to binarization threshold $t$. When the threshold changes from $t$ to $t + \Delta$ and $t - \Delta$, where $\Delta$ is the change value, the connected region $Q_t$ becomes $Q_{t+\Delta}$ and $Q_{t-\Delta}$, respectively.
(2) Compute the area ratio at threshold $t$:

$$q(t) = \frac{|Q_{t+\Delta}| - |Q_{t-\Delta}|}{|Q_t|}$$

where $|Q_t|$ is the area of the connected region $Q_t$, and $|Q_{t+\Delta}| - |Q_{t-\Delta}|$ is the area remaining after subtracting $Q_{t-\Delta}$ from $Q_{t+\Delta}$. When the area of $Q_t$ changes only slightly with the binarization threshold $t$, i.e., $q(t)$ is a local minimum, $Q_t$ is a maximally stable extremal region.
During MSER detection, some large rectangular boxes may contain small ones, so these regions should be merged and the small rectangular boxes removed. For the merging of two regions, let the parameters of connected region 1 be $(\beta_1, \chi_1, \delta_1, \varepsilon_1)$ and those of connected region 2 be $(\beta_2, \chi_2, \delta_2, \varepsilon_2)$, where $\chi$ and $\beta$ are the minimum and maximum of the minimum circumscribed rectangle in the y-axis direction of the connected region, and $\delta$ and $\varepsilon$ are the minimum and maximum in the x-axis direction. Connected region 1 contains connected region 2 when Equation (1) holds:

$$\chi_1 \le \chi_2, \quad \beta_1 \ge \beta_2, \quad \delta_1 \le \delta_2, \quad \varepsilon_1 \ge \varepsilon_2 \tag{1}$$
Through the above steps, candidate disease areas are selected. However, as Figures 3 and 4 show, there is still overlap and erroneous detection between disease and non-disease areas: fuzzy areas in the background and leaf edge regions are incorrectly detected as disease areas.
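Before moving to step 2, the following sketch shows how step 1 might look with OpenCV's MSER implementation; the default MSER parameters and the expression of the Equation (1) containment test on (x, y, w, h) boxes are our assumptions.

```python
import cv2

def detect_candidate_regions(bgr_image):
    """Step 1 sketch: MSER candidates plus the Equation (1) containment filter."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()                 # the threshold step corresponds to MSER's delta
    _, boxes = mser.detectRegions(gray)      # boxes come back as (x, y, w, h)

    def contains(a, b):
        # Equation (1) on box coordinates: region a contains region b when
        # chi1 <= chi2, beta1 >= beta2, delta1 <= delta2, epsilon1 >= epsilon2.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax <= bx and ay <= by and ax + aw >= bx + bw and ay + ah >= by + bh

    # Merge step: drop any box contained in a larger one.
    kept = [b for b in boxes
            if not any(contains(a, b) for a in boxes if tuple(a) != tuple(b))]
    return kept
```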
Step 2: To further distinguish disease areas from non-disease areas and avoid overlapping detection boxes and false detections, we designed a CNN binary classifier based on AlexNet [33]; its structure is shown in Figure 5. The network has two convolutional layers and two pooling layers, and the final fully connected layer is a binary classifier for disease versus non-disease. First, a 128 × 128 image is input, and 16 3 × 3 convolution kernels extract features from it, yielding a 32 × 32 × 16 convolutional layer. Next, a 2 × 2 max pooling layer reduces the data dimension, giving a 16 × 16 × 16 pooling layer. Then, 32 5 × 5 convolution kernels extract higher-level features, and a 2 × 2 max pooling produces an 8 × 8 × 32 output. All output features are connected in a fully connected layer, the weights of the output features are computed from the feature vector, and the probabilities of the two categories are output, locating the disease regions of the input image.
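A sketch of this binary classifier in PyTorch follows. The stated shapes (a 128 × 128 input becoming 32 × 32 × 16 after 16 3 × 3 kernels) imply a stride of 4 in the first convolution, which we assume here; the padding that keeps the second convolution's maps at 16 × 16 is likewise an assumption made to reproduce the reported sizes.

```python
import torch
import torch.nn as nn

class DiseaseAreaClassifier(nn.Module):
    """AlexNet-style binary classifier for disease vs. non-disease regions.
    Stride and padding are assumptions chosen to reproduce the reported maps:
    128x128 -> 32x32x16 -> 16x16x16 -> 16x16x32 -> 8x8x32 -> 2 classes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=4),    # 128 -> 32, 16 maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32 -> 16
            nn.Conv2d(16, 32, kernel_size=5, padding=2),  # 16 -> 16, 32 maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16 -> 8
            nn.Flatten(),
            nn.Linear(8 * 8 * 32, 2),                     # disease / non-disease
        )

    def forward(self, x):
        return self.net(x)

# Softmax over the two logits gives the per-class probabilities used to keep
# or discard each candidate region.
probs = torch.softmax(DiseaseAreaClassifier()(torch.randn(1, 3, 128, 128)), dim=1)
```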

4.2. Convolutional Neural Network Structure of Buckwheat Disease Recognition

An appropriate CNN framework is the key to identifying buckwheat diseases. Generally, performance improvements in convolutional neural networks come from increasing network depth and width, that is, the number of hidden layers and of neurons per layer. This leads to many more parameters, easier overfitting, and greater demand for computing resources; moreover, the deeper the network, the more easily gradients vanish, making optimization difficult. To address these challenges, full connections are replaced with sparse connections, including in the convolutional layers. However, computing asymmetric sparsity is inefficient, so we need an optimal local sparse structure that can be approximated by the convolutional network; the inception structure was introduced to tackle this issue.

4.2.1. Improved Network Based on Inception Structure

Inception is a local topology network that performs multiple convolution or pooling operations on the input image in parallel and concatenates all the outputs into a very deep feature map. Because convolutions of different sizes (1 × 1, 3 × 3, or 5 × 5) and pooling operations capture different information from a buckwheat disease image, processing these operations in parallel and combining all the results yields a better characterization of the disease.
The convolutional neural network used in this article is shown in Figure 6. The network adds two inception blocks to the traditional structure. The specific processing is as follows:
(1) First, the network input is a 64 × 64 buckwheat disease image, which is convolved with 136 9 × 9 convolution kernels to obtain 136 56 × 56 feature maps.
(2) The feature maps are sent to the inception 1 structure; all inception structures in the network use "same" convolutions, i.e., the feature map size is not changed. The pooling window in all subsequent pooling layers is 2 × 2, so the feature map size becomes 28 × 28 after pooling.
(3) Then 200 5 × 5 convolution kernels are applied to obtain 200 24 × 24 feature maps. These are sent to the inception 2 structure, after which 264 5 × 5 convolution kernels are used to obtain 264 8 × 8 feature maps after pooling.
(4) After pooling, the last convolutional layer follows, with 520 kernels of size 3 × 3, yielding 520 2 × 2 feature maps. Finally, the results are processed by the fully connected layer and the classification output layer.
In the first two convolutional layers of the network, receptive fields of 9 × 9 and 5 × 5 are used, respectively. To extract more multi-scale feature information within a smaller receptive field and to expand the width and depth of the network, we added the inception 1 and inception 2 structures to these two convolutional layers; their structures are shown in Figures 7 and 8. In inception 1, convolution kernels of four scales (1 × 1, 3 × 3, 1 × 5, and 5 × 1) perform multi-channel feature extraction, and the channels are fused. The top 1 × 1 convolution effectively reduces the number of channels of the input feature map and the computational cost of the network; the bottom 1 × 1 convolution restores the channel count, keeping the numbers of input and output channels consistent. In inception 2, the input feature map is smaller, so the 1 × 5 and 5 × 1 convolutions of inception 1 are replaced with 1 × 3 and 3 × 1 convolutions; and since the second convolutional layer has more input feature maps than the first, the number of channels in the structure is increased from 30 to 40. A sketch of the block appears after the figure legend below.
Conv represents the convolution operation according to the size of the convolution kernel; the number of channels represents the number of convolution channels.
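The following PyTorch sketch shows one plausible reading of the inception 1 block: a 1 × 1 reduction, four parallel "same" convolutions, channel concatenation, and a 1 × 1 restoration. The per-branch width of 30 follows the text; the reduction width and the activation placement are our assumptions.

```python
import torch
import torch.nn as nn

class Inception1(nn.Module):
    """Sketch of inception 1: 1x1 reduce -> parallel 1x1 / 3x3 / 1x5 / 5x1
    'same' convolutions -> concat -> 1x1 restore to the input channel count."""
    def __init__(self, channels, branch_width=30, reduce_width=30):
        super().__init__()
        self.reduce = nn.Conv2d(channels, reduce_width, kernel_size=1)
        self.branches = nn.ModuleList([
            nn.Conv2d(reduce_width, branch_width, kernel_size=1),
            nn.Conv2d(reduce_width, branch_width, kernel_size=3, padding=1),
            nn.Conv2d(reduce_width, branch_width, kernel_size=(1, 5), padding=(0, 2)),
            nn.Conv2d(reduce_width, branch_width, kernel_size=(5, 1), padding=(2, 0)),
        ])
        self.restore = nn.Conv2d(4 * branch_width, channels, kernel_size=1)

    def forward(self, x):
        x = torch.relu(self.reduce(x))
        x = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.restore(x)

# Inception 2 would swap the 1x5/5x1 branches for 1x3/3x1 and widen to 40 channels.
out = Inception1(136)(torch.randn(1, 136, 56, 56))    # spatial size is preserved
```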

4.2.2. Convolution Based on Cosine Similarity

Buckwheat disease data are collected in the field, where sampling conditions are limited and noise interferes. For the convolution to produce high activation only at positions in the feature map whose characteristics resemble the convolution kernel, we must reduce the differences between features and avoid the interference of sample noise with feature extraction. In this paper, the idea of cosine similarity is introduced into the convolutional layer [34]: the input feature map and the convolution kernel are regarded as two vectors, and the correlation between them is calculated.
In a traditional convolutional neural network, the output of the $J$-th feature map of the $l$-th convolutional layer can be expressed as:

$$x_J^l = g\left(\sum_{I \in M} x_I^{l-1} * W_{IJ}^l + B_J^l\right) \tag{2}$$

where $g(\cdot)$ is the activation function, $M$ is the set of input feature maps, $W_{IJ}^l$ is the convolution kernel between the $I$-th input feature map and the $J$-th output feature map, and $B_J^l$ is the bias.
Cosine similarity measures the similarity between two vectors via the cosine of the angle between them: the smaller the angle, the higher the correlation and the larger the cosine, with values in (−1, 1). For vectors $X$ and $Y$ of dimension $n$:

$$\cos(X, Y) = \frac{\sum_{i=1}^{n} x_i y_i}{\sqrt{\sum_{i=1}^{n} x_i^2} \cdot \sqrt{\sum_{i=1}^{n} y_i^2}} = \frac{\langle X, Y \rangle}{\|X\| \, \|Y\|} \tag{3}$$
Let $F_I(X_I^{l-1}, W_{IJ}^l)$ denote the similarity function between the input feature map of the $l$-th convolutional layer and the convolution kernel, where $X$ is the input feature map vector. The convolution operation based on cosine similarity can be expressed as:

$$W_{r \times z} \cdot X_{r \times z} = \frac{\sum_{i=1}^{r} \sum_{j=1}^{z} w_{ij} x_{ij}}{\sqrt{\sum_{i=1}^{r} \sum_{j=1}^{z} w_{ij}^2} \cdot \sqrt{\sum_{i=1}^{r} \sum_{j=1}^{z} x_{ij}^2}} \tag{4}$$
In Equation (4), $r \times z$ is the size of the convolution kernel, and $w_{ij}$ and $x_{ij}$ are the coefficients of the convolution kernel and the feature map, respectively. The similarity function can be expressed as:

$$F_I\left(X_I^{l-1}, W_{IJ}^l\right) = \sum_{I \in M} X_I^{l-1} \cdot W_{IJ}^l + B_J^l \tag{5}$$
Therefore, in the $l$-th convolutional layer, the output based on the cosine similarity operation is expressed as Equation (6):

$$x_J^l = g\left(F_I\right), \quad W_{IJ}^l = \left(W_{I1}^l, W_{I2}^l, \ldots, W_{In}^l\right)^{T}, \quad X = \left(X_1^{l-1}, X_2^{l-1}, \ldots, X_n^{l-1}\right)^{T} \tag{6}$$

where $g(\cdot)$ is the activation function. The higher the similarity between the input feature map and the convolution kernel $W_{IJ}^l$, the greater the output value of the convolutional layer.
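Equations (3)–(6) amount to normalizing each convolution response by the norms of the kernel and of the input patch. A minimal PyTorch sketch of such a cosine similarity convolution follows (bias is omitted for clarity, and the epsilon guard against division by zero is our addition).

```python
import torch
import torch.nn.functional as F

def cosine_conv2d(x, weight, eps=1e-8):
    """Convolution whose response is the cosine similarity of Equation (4)
    between each input patch and each kernel, so outputs lie in [-1, 1].
    x: (N, C, H, W); weight: (K, C, r, z)."""
    # Numerator: the ordinary convolution <w, x_patch>.
    num = F.conv2d(x, weight)
    # ||w|| per output filter (one scalar per kernel).
    w_norm = weight.flatten(1).norm(dim=1).clamp_min(eps)        # shape (K,)
    # ||x_patch|| via convolving the squared input with an all-ones kernel.
    ones = torch.ones(1, x.shape[1], *weight.shape[2:], device=x.device)
    patch_norm = F.conv2d(x * x, ones).clamp_min(eps).sqrt()     # (N, 1, H', W')
    return num / (patch_norm * w_norm.view(1, -1, 1, 1))
```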

5. Results

Experimental environment: CPU: Intel(R) Core(TM) i7-7700; memory: 8 GB DDR4; GPU: NVIDIA GeForce RTX 2080 SUPER (base clock 1650 MHz, boost clock 1815 MHz; 8 GB GDDR6 video memory, 256-bit, memory frequency 15.5 GHz, memory bandwidth 496 GB/s).

5.1. Disease Area Detection and Analysis

In training the CNN model, Adam was used as the optimization algorithm, the learning rate was set to 0.001, the multiplier for learning rate decay was set to 0.1, and cross-entropy was selected as the loss function [35]. The training samples were obtained by cropping the original images: positive samples were disease areas and negative samples were non-disease areas. One hundred twenty-four buckwheat disease images were selected to construct a dataset containing only cropped patches, as shown in Figure 9a,b; the positive samples comprised 792 disease regions and the negative samples 1135 non-disease regions. Before training, the positive and negative samples were thoroughly mixed and randomly divided at a ratio of 4:1 into training and test sets. The input images were then standardized per channel with means of 0.471, 0.452, and 0.412 and variances of 0.282, 0.267, and 0.231. A sketch of this training configuration follows.
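The stated configuration might look as follows in PyTorch/torchvision (assumed tooling). Two assumptions are worth flagging: the paper gives the decay multiplier 0.1 but not the decay schedule, so a step schedule is used here; and torchvision's Normalize expects standard deviations, so the reported "variance" values are passed as std directly.

```python
import torch
import torch.nn as nn
from torchvision import transforms

model = DiseaseAreaClassifier()        # the binary classifier sketched in Section 4.1
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Decay multiplier 0.1 as stated; step_size=10 epochs is an assumed schedule.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
criterion = nn.CrossEntropyLoss()      # the cross-entropy loss of [35]

# Per-channel standardization with the reported statistics.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.471, 0.452, 0.412], std=[0.282, 0.267, 0.231]),
])
```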
Figure 10 shows the relationship between the crop-patch size and the average accuracy of CNN prediction. The experimental results show that the average prediction accuracy stabilizes during the 20th to 30th training rounds. Comparing patch sizes of 24 × 16, 24 × 24, 32 × 24, 32 × 32, and 48 × 32, we find that 32 × 32 yields higher average prediction accuracy than the other sizes, with accuracy stabilizing after 20 iterations. We therefore chose 32 × 32 patches as the training data, and the candidate regions produced by the algorithm are uniformly resized to 32 × 32 images for classification.
In this work, 500 buckwheat disease images were tested; the number of diseased spots on a single leaf ranged from 2 to 16. After testing and analysis, the spots in 135 images were fully detected, the detection rate of 117 images ranged from 90% to 100%, and 134 images had a detection rate below 50%. As shown in Figure 11, the horizontal axis is the percentage of spots detected in a single image and the vertical axis is the number of disease images. Overall, 455 images had a disease area detection rate above 70%, indicating a good detection effect.
Figures 12 and 13 show the disease areas obtained after CNN classification. The detection results are more accurate: overlapping boxes and false detections of disease areas are eliminated. This method can therefore accurately separate the disease and non-disease areas of buckwheat.

5.2. The Performance Analysis of Inception Module

To evaluate the performance of the inception module, 1200 samples were selected across the eight disease types (buckwheat spot blight, sclerotinia, seedling blight, ring spot, downy mildew, brown spot, virus disease, and white mold), and 800 images of disease-free buckwheat were added. The training and test sets were divided at a ratio of 4:1, and controlled comparisons were run with and without inception 1 and inception 2. The experimental results are shown in Table 1. The number of convolutional layers, number of convolution kernels, learning rate, and batch size were tuned; the optimal settings were 4 convolutional layers, convolution kernels (136, 200, 264, 520), learning rate 0.02, and batch size 100, with the epoch count set to 160. The experiments show that more epochs could not improve performance, and further increases result in overfitting.
From Table 1, adding inception 1 or inception 2 improves the recognition accuracy of the network within the same number of iterations. With inception 1 and inception 2 together, the accuracy reaches 91.51%, higher than the other structures; thus, the inception structure proposed in this paper is effective.

5.3. The Performance Analysis of Cosine Similarity Convolution

We compared the recognition performance of CNNs based on traditional convolution, on other similarity functions, and on cosine similarity. As before, 1200 samples were selected across the eight disease types, 800 disease-free buckwheat images were added, and the training and test sets were divided at a ratio of 4:1. We used the network structure determined in Section 4.2.1. For disease versus non-disease identification, five experiments were performed with each convolution; the results are shown in Table 2.
From Table 2, the recognition accuracy of the convolution based on cosine similarity is improved over the traditional convolution, with the average accuracy increasing by 4.14%. However, the convolutional networks based on other similarity functions are less accurate than the traditional convolutional network, indicating that those similarity functions amplify the differences between features within a sample, so sample noise greatly interferes with feature extraction. Cosine similarity limits the output to between −1 and 1, which minimizes the impact of noise on feature extraction.
To further analyze the performance of the cosine similarity convolutional network, three experiments were run; the loss and accuracy curves of the traditional convolutional network and the cosine similarity convolutional network are shown in Figures 14 and 15.
Figures 14 and 15 show that the traditional convolutional network gradually converges after about 6000 iterations, while the cosine similarity convolution converges after 4000 iterations. The network based on cosine similarity convolution therefore converges faster and finds the global optimum more easily.
The convolution based on cosine similarity better evaluates the relevance between the convolution kernel and the features of the input feature map: larger activation values arise at locations similar to the kernel features, while the influence of noise on feature extraction is avoided. Our study finds that the cosine similarity convolution characterizes buckwheat disease more accurately than the other convolutions, is insensitive to noise, and achieves higher recognition accuracy. At the same time, the cosine similarity convolution computes quickly and needs fewer iterations, so the network runs in less time.
Due to the sampling period and environment, many buckwheat disease images show uneven illumination. In such cases the gray values of the image change markedly, so the activation values obtained by the traditional convolution operation also change abruptly. Figure 16 compares the outputs of traditional convolution and cosine similarity convolution.
Assume the values in Figure 16 represent pixel gray values whose differences are mainly caused by uneven illumination; the convolution kernel in Figure 16a and the input feature map in Figure 16b are used for traditional convolution and for cosine-similarity-based convolution, respectively. The output of the traditional convolution shows obvious differences, and its feature extraction ability is correspondingly weakened, which is not the result we expect. The output of the cosine similarity convolution is uniform, showing that the method adapts better to uneven illumination and is more conducive to feature extraction.
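A toy check makes the point: if uneven illumination is modeled as a multiplicative brightness factor on a patch, the plain dot-product response scales with the factor while the cosine response is unchanged, matching the uniform output in Figure 16d. The numbers below are illustrative, not taken from the figure.

```python
import numpy as np

w = np.array([1.0, 2.0, 1.0, 0.0])        # a flattened convolution kernel
patch = np.array([0.2, 0.4, 0.2, 0.1])    # a well-lit image patch
dim = 0.5 * patch                          # the same patch under weaker light

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(w @ patch, w @ dim)                  # 1.2 vs 0.6: the dot product halves
print(cos_sim(w, patch), cos_sim(w, dim))  # identical cosine responses
```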
Figure 17 shows the output feature maps of buckwheat leaves under different lighting conditions after the convolution operation based on cosine similarity. In the first convolutional layer's feature maps, besides the sample contour, there are some obvious boundaries caused by uneven illumination; after the second convolution operation, the feature maps are less sensitive to uneven illumination; by the third convolutional layer, the feature maps have largely eliminated the effect of illumination; and in the fourth and fifth convolutional layers, the feature maps retain only the high-level features of the buckwheat leaves. The tone of the feature maps obtained by cosine similarity convolution is therefore relatively balanced, indicating that the method reduces the impact of uneven illumination and has better feature extraction capabilities.

5.4. The Recognition of Buckwheat Diseases

The inception structure of Section 4.2.1 and the CNN with cosine similarity convolution were adopted to recognize buckwheat disease. We compared the final recognition accuracy and used precision, recall, and F1-measure to evaluate the recognition effect [36]; a sketch of the metric computation follows.
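For reference, here is a short sketch of how these metrics could be computed with scikit-learn (assumed tooling); macro averaging is our assumption for the per-class means reported in the abstract.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def report(y_true, y_pred):
    """y_true / y_pred: hypothetical label arrays over the nine classes
    (eight diseases plus disease-free)."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "f1":        f1_score(y_true, y_pred, average="macro"),
    }
```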
The experimental results are shown in Table 3. In addition to mainstream CNN models (AlexNet, VGG, GoogleNet, ResNet, LeNet, Faster R-CNN, R-FCN, FPN, YOLOv3, ZFNet) [36], we also tested well-studied face recognition models on the buckwheat disease dataset to demonstrate the performance of the proposed method. The results show that the accuracy, precision, recall, and F1-measure of our method reached 96.43, 96.82, 95.62, and 96.71%, respectively. Compared with AlexNet, VGG, GoogleNet, ResNet, Faster R-CNN, R-FCN, YOLOv3, LeNet, and ZFNet, the optimal performance of our method was improved by 1.47, 2.1, 1.17, and 3.03%, respectively. The FPS (average number of images processed per second) of our method is 5.19, so its processing speed is also at a high level. ROC curves of selected models on the validation and test sets are shown in Figure 18; the curves on the two sets are basically consistent, so our model is stable and its performance is optimal.
To further study the model's effect on buckwheat disease recognition once the disease areas of buckwheat leaves are detected, we tested the pipeline with and without area detection. With the disease area detection method of Section 4.1, the recognition effect improved significantly, as shown in Table 4: the accuracy, precision, recall, and F1-measure of our method reached 98.13, 97.54, 96.38, and 97.82%, respectively, increases of 1.7, 0.72, 1.76, and 1.11% over the pipeline without disease area detection. Combined with Table 3, after disease area detection is adopted, the average accuracy, precision, recall, and F1-measure of the recognition models increased by 1.3, 1.51, 1.69, and 1.48%, respectively. However, area detection introduces additional computational overhead, so processing speed drops slightly: the FPS of all models decreased by an average of 0.46, with little effect on overall performance. To verify the advantages of our structure, the cosine similarity convolution was also added to ResNet, Faster R-CNN, R-FCN, FPN, YOLOv3, LeNet, and ZFNet; the results, shown in Figure 19, indicate that the structure presented in this paper discriminates buckwheat diseases better than the other networks. ROC curves on the validation and test sets are shown in Figure 20; model performance is similar on both, and in fact performance on the test set is slightly better than on the validation set.
We also classified the eight diseases individually: buckwheat spot blight, sclerotinia, seedling blight, ring spot, downy mildew, brown spot, virus disease, and white mold. Table 5 shows the per-disease classification results using the proposed identification framework after disease area detection. The recognition effect for specific diseases is significantly lower, especially for buckwheat downy mildew. Discriminating whether buckwheat is diseased at all is a binary classification problem and comparatively simple, so its classification effect is better; identifying specific disease types becomes harder as the number of target classes increases. Because the photographic equipment and environments differ, imaging quality varies considerably; moreover, recognition accuracy depends largely on model training and sample counts. Our samples come from field collection under limited conditions, with only 500 samples per disease, so model training is insufficient and the recognition effect is strongly affected. Nevertheless, the accuracy, precision, recall, and F1-measure for buckwheat spot blight and buckwheat ring spot still exceed 90%, because the edge contours of these two diseases on buckwheat leaves are clear: the convolutional neural network can extract their disease feature maps accurately, so classification accuracy is higher. To analyze the recognition accuracy of our method in detail, confusion matrices represent the classification results: in Figure 21a the diagonal gives the number of correct classifications per class, and in Figure 21b the diagonal gives the recognition accuracy per class. Although recognition accuracy differs across classes, on the whole, spot-type diseases are recognized better than fungal diseases: the regional features of leaf spot diseases are distinct and highly recognizable, so they can be extracted accurately, whereas fungal lesions are scattered over the leaves with fuzzy edges, making their features difficult to extract.

6. Conclusions and Future Directions

The main contributions of this article are accurate disease area detection by MSER and a CNN, and a two-level inception structure added to improve classification accuracy. Furthermore, to eliminate the interference in disease identification caused by illumination imbalance, cosine similarity is introduced into the convolution process. In this work, the CNN structure based on inception combined with cosine similarity outperformed current recognition frameworks for buckwheat diseases: compared with frameworks such as AlexNet, VGG, GoogleNet, and ResNet, our method consistently achieved better accuracy, precision, recall, and F1-measure. In particular, the proposed method is robust when the light is uneven and the leaves cross and overlap. Because our method detects the disease area, processing time increases slightly, but overall performance is not affected. Due to the similar symptoms of buckwheat downy mildew and buckwheat seedling blight, the recognition accuracy for these two diseases is not high; in future work, we will further improve their identification accuracy.

Author Contributions

X.L. completed theoretical analysis and model design; S.Z. completed the data analysis; S.C. carried out implementation and data tests on the model; Z.Y. provides basic data and disease identification; R.Y. completed the program; H.P. sorted out the papers and completed the drawings. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by NSFC Grant Nos. 61701060 and 61801067, Guangxi Colleges and Universities Key Laboratory of Intelligent Processing of Computer Images and Graphics Project No. GIIP1806, and the Science and Technology Research Project of Higher Education of Hebei Province (Grant No. QN2019069), and Chongqing Key Lab of Computer Network and Communication Technology (CY-CNCL-2017-02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to acknowledge the support for our work from the researchers of Chongqing Key Lab of Computer Network and Communication Technology.

Conflicts of Interest

There are no conflicts of interest in this work.

References

1. Qi, Y.; Chen, Z.; Li, Z.; Liu, H.; Wang, L.; Li, C. Progress in research on the diseases of buckwheat. Pratacult. Sci. 2020, 37, 75–86.
2. Demchenko, O.; Shevchuk, V.; Yuzvenko, L.; Boyko, O.; Boyko, A. Investigation of the resistance of different varieties of buckwheat to infectious diseases after the pre-sowing treatment of seeds and vegetating plants with biological preparations. Agrobiology 2016, 124, 631–635.
3. Dong, X.; Tang, Y.; Ding, M.; Li, W.; Li, J.; Wu, Y.; Shao, J.; Zhou, M. Chinese buckwheat germplasm resources and their feeding value. Pratacult. Sci. 2017, 34, 378–388.
4. Lu, W.J.; Luo, S.G.; Li, C.H.L.; Wang, Y.Q.; He, C.X.; Sun, D.W.; Yin, G.F.; Wang, L.H. Biological characteristics of buckwheat ring rot pathogen. J. Anhui Agric. Univ. 2016, 43, 799–803.
5. Chun, L.; Dao, S.; Cheng, H.; Jo, H.; Zhao, G.; Yun, W.; Wen, L.; Yan, W.; Gui, Y.; Li, W. Effects of Different Bio-organic Fertilizers on Disease and Yield of Buckwheat. Chin. Agric. Sci. Bull. 2018, 34, 1–4.
6. Lu, W.J.; Su, D.W.; He, C.X.; Li, C.H.; Wang, Y.Q.; Yin, G.F.; Wang, L.H. Occurrence and identification of the pathogen of buckwheat ring rot in Yunnan Province. Chin. Agric. Sci. Bull. 2017, 33, 154–158.
7. Tian, X.; Li, C.H. Pathogen of peyronellaea leaf spot in buckwheat. Acta Agric. Boreali-Occident. Sin. 2017, 26, 1544–1549.
8. Atole, R.R.; Park, D. A multiclass deep convolutional neural network classifier for detection of common rice plant anomalies. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 66–70.
9. Zhang, D.Y.; Chen, G.; Zhang, H.H.; Jin, N.; Gu, C.Y.; Weng, S.Z.; Wang, Q.; Chen, Y. Integration of spectroscopy and image for identifying fusarium damage in wheat. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2020, 236, 118–126.
10. Wang, Z.B.; Wang, K.Y.; Pan, S.H.; Han, H.Y. Segmentation of crop disease images with an improved k-means clustering algorithm. Appl. Eng. Agric. 2018, 34, 277–289.
11. Ghaffari, R.; Laothawornkitkul, J.; Iliescu, D.; Hines, E.; Leeson, M.; Napier, N.; Moore, J.P.; Paul, N.D.; Hewitt, C.N.; Taylor, J.E. Plant pest and disease diagnosis using electronic nose and support vector machine approach. J. Plant Dis. Prot. 2012, 119, 200–207.
12. Zhang, C.L.; Zhang, S.W.; Yang, J.C.; Shi, Y.C.; Chen, J. Apple leaf disease identification using genetic algorithm and correlation based feature selection method. Int. J. Agric. Biol. Eng. 2017, 10, 74–83.
13. Bi, C.G.; Chen, G.F. Bayesian Networks Modeling for Crop Diseases. In Proceedings of the International Conference on Computer and Computing Technologies in Agriculture, Jiangxi, China, 29–31 October 2010; pp. 312–320.
14. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318.
15. Kamath, R.; Balachandra, M.; Prabhu, S. Crop and weed discrimination using Laws' texture masks. Int. J. Agric. Biol. Eng. 2020, 13, 191–197.
16. Saedi, S.I.; Khosravi, H. A deep neural network approach towards real-time on-branch fruit recognition for precision horticulture. Expert Syst. Appl. 2020, 159, 113594.
17. Jung, D.E. Distributed feature selection for multi-class classification using ADMM. IEEE Control Syst. Lett. 2020, 5, 821–826.
18. Huang, Z.H.; Xu, X.; Zhu, H.H.; Zhou, M.C. An efficient group recommendation model with multiattention-based neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4461–4474.
19. Fan, T.K.; Xu, J. Image classification of crop diseases and pests based on deep learning and fuzzy system. Int. J. Data Warehous. Min. 2020, 16, 34–47.
20. Dai, Q.; Cheng, X.; Qiao, Y.; Zhang, Y.H. Crop leaf disease image super-resolution and identification with dual attention and topology fusion generative adversarial network. IEEE Access 2020, 8, 55724–55735.
21. Rahman, C.R.; Arko, P.S.; Ali, M.E.; Khan, M.A.I.; Wasif, A.; Jani, M.R.; Kabir, M.S. Identification and recognition of rice diseases and pests using convolutional neural networks. Biosyst. Eng. 2020, 194, 112–120.
22. Chen, J.; Zhang, D.; Nanehkaran, Y.A.; Li, D. Detection of rice plant diseases based on deep transfer learning. J. Sci. Food Agric. 2020, 100, 3246–3256.
23. Li, D.S.; Wang, R.J.; Xie, C.J.; Liu, L.; Zhang, J.; Li, R.; Wang, F.Y.; Zhou, M.; Liu, W.C. A recognition method for rice plant diseases and pests video detection based on deep convolutional neural network. Sensors 2020, 20, 578.
24. Sibiya, M.; Sumbwanyambe, M. A computational procedure for the recognition and classification of maize leaf diseases out of healthy leaves using convolutional neural networks. AgriEngineering 2019, 1, 9.
25. Rangarajan, A.K.; Purushothaman, R. Disease classification in eggplant using pre-trained VGG16 and MSVM. Sci. Rep. 2020, 10, 2322.
26. Maeda-Gutiérrez, V.; Galván-Tejada, C.E.; Zanella-Calzada, L.A.; Celaya-Padilla, J.M.; Olvera-Olvera, C.A. Comparison of convolutional neural network architectures for classification of tomato plant diseases. Appl. Sci. 2020, 10, 1245.
27. Fuentes, A.; Yoon, S.; Park, D.S. Deep learning-based phenotyping system with glocal description of plant anomalies and symptoms. Front. Plant Sci. 2019, 10, 1321.
28. Xiao, Q.; Li, W.; Kai, Y.; Chen, P.; Zhang, J.; Wang, B. Occurrence prediction of pests and diseases in cotton on the basis of weather factors by long short term memory network. BMC Bioinform. 2019, 20, 688.
29. Verma, S.; Chug, A.; Singh, A.P. Exploring capsule networks for disease classification in plants. J. Stat. Manag. Syst. 2020, 23, 307–315.
30. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; pp. 326–366.
31. Lu, W.J.; Li, C.H.; Wang, Y.Q.; Su, D.W.; He, C.X.; Wang, L.H. Resistance identification method on buckwheat ring rot and screening of disease resistance germplasm resources. Chin. Agric. Sci. Bull. 2017, 33, 98–102.
32. Hassan, S.A.; Sayed Abdalla, M.S.; Abdalla, M.I.; Rashwan, M.A. Detection of breast cancer mass using MSER detector and features matching. Multimed. Tools Appl. 2019, 78, 20239–20262.
33. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012; pp. 1097–1105.
34. Zhou, L.; Xiao, Y.; Chen, W. Imaging Through Turbid Media With Vague Concentrations Based on Cosine Similarity and Convolutional Neural Network. IEEE Photonics J. 2019, 11, 1–15.
35. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
36. Wu, W.; Kan, M.; Liu, X.; Yang, Y.; Shan, S.; Chen, X. Recursive Spatial Transformer (ReST) for Alignment-Free Face Recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 3772–3780.
Figure 1. Typical convolution neural network structure.
Figure 2. A sample image of each detected class. (a) Spot blight; (b) downy mildew; (c) virus disease; (d) seedling blight; (e) sclerotinia; (f) ring spot; (g) brown spot; (h) white mold.
Figure 3. Detection effect of buckwheat brown spot disease.
Figure 4. Detection effect of buckwheat ring rot disease.
Figure 5. The network structure of disease area detection.
Figure 6. The structure of the convolutional neural network for buckwheat disease identification.
Figure 7. Inception 1 structure.
Figure 8. Inception 2 structure.
Figure 9. Training samples. (a) Positive training sample; (b) negative training sample.
Figure 10. The relationship between the size of the clipping sample and the average accuracy of CNN prediction.
Figure 11. The statistics of disease area detection.
Figure 12. Final detection effect of buckwheat brown spot disease.
Figure 13. Final detection effect of buckwheat ring rot disease.
Figure 14. Loss curve for training.
Figure 15. Accuracy curve for training.
Figure 16. Comparison of two convolution operations. (a) Convolution kernel; (b) input feature map; (c) traditional convolution output; (d) convolution output based on cosine similarity.
Figure 17. Convolution feature map of buckwheat leaf under uneven illumination.
Figure 18. ROC curves for different models. (a) Test sets; (b) validation sets.
Figure 19. The cosine similarity convolution added to other networks.
Figure 20. ROC curves for different models after region detection. (a) Test sets; (b) validation sets.
Figure 21. Buckwheat disease classification. (a) Confusion matrix; (b) normalized confusion matrix.
Table 1. Performance evaluation of the inception module.

| Network Structure | Iterations | Accuracy Rate (%) |
| --- | --- | --- |
| Without the inception | 3500 | 87.47 |
| Only inception 1 | 3500 | 90.28 |
| Only inception 2 | 3500 | 90.64 |
| Inception 1 and inception 2 | 3500 | 91.51 |
Table 2. Recognition accuracy (%) for different convolution methods.

| Convolution Method | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Traditional convolution | 93.25 | 94.85 | 94.14 | 93.48 | 94.71 | 94.07 |
| Euclidean distance convolution | 92.31 | 92.67 | 92.71 | 92.19 | 92.49 | 92.47 |
| Chebyshev distance convolution | 94.18 | 94.58 | 93.82 | 94.17 | 94.38 | 94.23 |
| Manhattan distance convolution | 92.67 | 92.13 | 93.28 | 92.35 | 93.16 | 92.72 |
| Cosine similarity convolution | 98.13 | 98.28 | 98.42 | 98.63 | 98.38 | 98.37 |
Table 3. Results of buckwheat disease identification.

| Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Measure (%) | AUC (%) | FPS (images/s) |
| --- | --- | --- | --- | --- | --- | --- |
| AlexNet | 87.31 | 84.92 | 88.57 | 90.34 | 96.13 | 3.15 |
| Vgg-16 | 89.27 | 88.42 | 90.35 | 89.53 | 94.12 | 3.18 |
| GoogleNet | 90.53 | 92.84 | 91.47 | 90.53 | 95.35 | 4.15 |
| ResNet | 94.75 | 93.72 | 92.37 | 93.41 | 97.18 | 4.17 |
| Faster R-CNN | 93.24 | 92.34 | 93.72 | 91.58 | 95.36 | 5.37 |
| R-FCN | 94.22 | 94.31 | 94.36 | 92.15 | 95.31 | 4.85 |
| FPN | 94.16 | 94.83 | 92.51 | 93.87 | 97.78 | 4.27 |
| YOLOv3 | 95.03 | 94.72 | 94.51 | 92.67 | 96.46 | 4.53 |
| LeNet | 94.57 | 94.23 | 92.59 | 91.88 | 95.12 | 5.21 |
| ZFNet | 94.21 | 94.17 | 93.23 | 93.42 | 97.43 | 5.33 |
| Ours (inception + cosine similarity convolution) | 96.43 | 96.82 | 95.62 | 96.71 | 98.21 | 5.19 |
| DeepFace | 91.53 | 90.31 | 90.73 | 89.35 | 94.37 | 5.18 |
| VGGFace | 91.82 | 91.24 | 89.42 | 90.17 | 95.52 | 5.16 |
| FaceNet | 90.47 | 89.27 | 90.33 | 90.16 | 96.29 | 4.21 |
| DeepID2+ | 93.82 | 93.57 | 92.49 | 90.38 | 96.17 | 4.17 |
| WST Fusion | 92.73 | 92.47 | 93.21 | 92.54 | 97.05 | 3.18 |
| SphereFace | 93.68 | 94.38 | 92.86 | 92.67 | 97.46 | 3.23 |
| RangeLoss | 91.79 | 92.45 | 91.65 | 90.78 | 95.37 | 3.21 |
| HiReST-9+ | 88.36 | 88.67 | 89.59 | 89.76 | 94.88 | 5.79 |
Table 4. Identification results of buckwheat diseases after region detection.

| Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Measure (%) | AUC (%) | FPS (images/s) |
| --- | --- | --- | --- | --- | --- | --- |
| AlexNet | 87.93 | 86.37 | 88.92 | 92.34 | 97.12 | 3.12 |
| Vgg-16 | 90.17 | 90.17 | 91.83 | 90.27 | 95.73 | 2.87 |
| GoogleNet | 93.42 | 94.21 | 93.84 | 92.67 | 96.59 | 3.87 |
| ResNet | 93.78 | 94.92 | 94.42 | 94.89 | 97.37 | 3.72 |
| Faster R-CNN | 95.16 | 95.25 | 94.57 | 92.54 | 96.86 | 3.81 |
| R-FCN | 95.26 | 96.13 | 95.38 | 92.87 | 97.12 | 3.85 |
| FPN | 96.37 | 96.21 | 94.36 | 95.17 | 97.84 | 3.57 |
| YOLOv3 | 95.57 | 95.32 | 96.83 | 95.37 | 98.03 | 4.12 |
| LeNet | 94.87 | 95.17 | 93.27 | 92.68 | 95.68 | 4.27 |
| ZFNet | 95.78 | 94.35 | 95.12 | 93.71 | 96.73 | 4.03 |
| Ours (inception + cosine similarity convolution) | 98.13 | 97.54 | 97.38 | 97.82 | 98.89 | 4.31 |
| DeepFace | 94.73 | 93.28 | 92.67 | 90.48 | 95.42 | 4.43 |
| VGGFace | 93.02 | 92.83 | 91.61 | 92.35 | 96.74 | 4.32 |
| FaceNet | 92.57 | 91.67 | 92.57 | 92.67 | 96.81 | 3.49 |
| DeepID2+ | 94.24 | 94.38 | 93.87 | 92.49 | 97.13 | 3.53 |
| WST Fusion | 93.86 | 94.35 | 95.61 | 93.49 | 97.41 | 3.02 |
| SphereFace | 94.56 | 94.85 | 93.72 | 93.57 | 97.58 | 3.13 |
| RangeLoss | 92.71 | 93.73 | 92.71 | 92.18 | 96.93 | 2.97 |
| HiReST-9+ | 90.73 | 90.32 | 91.49 | 91.33 | 95.93 | 4.27 |
Table 5. Identification results for different buckwheat diseases.

| Disease Type | Accuracy (%) | Precision (%) | Recall (%) | F1-Measure (%) | AUC (%) |
| --- | --- | --- | --- | --- | --- |
| Spot blight | 90.37 | 90.51 | 91.35 | 92.92 | 96.78 |
| Sclerotinia | 83.53 | 82.48 | 83.41 | 80.54 | 87.32 |
| Seedling blight | 79.18 | 78.38 | 80.72 | 80.21 | 86.75 |
| Ring spot | 91.43 | 91.57 | 92.61 | 92.39 | 96.41 |
| Downy mildew | 78.31 | 78.45 | 75.28 | 77.38 | 82.13 |
| Brown spot | 86.93 | 85.91 | 88.47 | 86.52 | 89.57 |
| Virus disease | 87.42 | 86.19 | 87.43 | 86.52 | 88.46 |
| White mold | 85.48 | 85.36 | 86.92 | 86.74 | 87.59 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Liu, X.; Zhou, S.; Chen, S.; Yi, Z.; Pan, H.; Yao, R. Buckwheat Disease Recognition Based on Convolution Neural Network. Appl. Sci. 2022, 12, 4795. https://0-doi-org.brum.beds.ac.uk/10.3390/app12094795
