Article

A Fast Deployable Instance Elimination Segmentation Algorithm Based on Watershed Transform for Dense Cereal Grain Images

1 School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2 School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
3 Technical Center for Animal Plant and Food Inspection and Quarantine of Shanghai Customs, Shanghai 200002, China
4 Network & Information Center, Shanghai Jiao Tong University, Shanghai 200240, China
* Authors to whom correspondence should be addressed.
Submission received: 28 July 2022 / Revised: 9 September 2022 / Accepted: 14 September 2022 / Published: 16 September 2022
(This article belongs to the Section Digital Agriculture)

Abstract

Cereal grains are a vital part of the human diet. The appearance quality and size distribution of cereal grains are key indicators of market acceptability, storage stability, and breeding value. Computer vision is widely used for quality assessment and size analysis, in which accurate instance segmentation is a key step. This study proposes a fast deployable instance segmentation method based on a generative marker-based watershed segmentation algorithm, which combines two strategies (one for optimizing kernel areas and another for comprehensive segmentation) to overcome over-segmentation and under-segmentation in images with dense, small targets. Results show that the average segmentation accuracy of our method reaches 98.73%, significantly higher than that of the marker-based watershed segmentation algorithm (82.98%). To further verify the engineering practicality of our method, we count the size distribution of the segmented cereal grains; the results are highly consistent with the manually sketched ground truth. Moreover, the proposed algorithm framework can serve as a useful reference for other dense-target segmentation tasks.

1. Introduction

In the food industry, the physical size and appearance quality of cereal grains are directly related to their market value [1,2,3,4]. For instance, broken grains retain only 60~80% of the value of intact grains [1], and moldy grains lose almost all edible value [4]. Advanced computer vision technologies have been applied to assess the quality and measure the size distribution of cereal grain samples, in which accurate instance segmentation of the cereal grain image is an essential step [5,6]. Using the instance segmentation results, we can accomplish intelligent classification of imperfect cereal grains with a fine-grained network [7,8]. In addition, it becomes easy to measure the size distribution of the sampled cereal grains, such as the area, perimeter, and long–short axis ratio.
Several instance segmentation methods are based on the convolutional neural network (CNN); they are data-driven and achieve excellent performance through supervised learning [9,10,11,12]. These CNN-based methods all perform pixel-level classification of objects, which means the training dataset must be labeled pixel by pixel. Moreover, the training and test data of a neural network should be independent and identically distributed; in practice, this means the training data and future prediction data must follow the same distribution. It is therefore necessary to construct a dedicated dataset for each specific instance segmentation task, such as cereal grain image segmentation. However, labeling a pixel-level dataset is laborious and time-consuming, especially for images with dense, small targets. In other words, this restrictive form of CNN-based supervision limits generality and usability. For a fast, deployable engineering application, traditional image segmentation algorithms may be an excellent solution for the instance segmentation of cereal grain images.
Image segmentation algorithms based on the watershed transformation are extremely sensitive to boundary contours [13,14,15]. They perform well on dense, small targets with indistinct outlines, which is attributed to their topography-like representation of image intensity [13,14]. The watershed segmentation algorithm treats an image as an undulating landscape with peaks and valleys, built on three basic notions: “minima”, “catchment basins”, and “watershed lines” [16,17]. The “minima” represent the kernel points of all targets; the “catchment basins” represent different target areas; and the “watershed lines” represent the dividing lines between neighboring areas with gray-level differences. However, this very sensitivity to contours is also the weakness of the watershed method: inconspicuous noise on the surface of a target frequently causes over-segmentation [15]. Researchers have proposed many improvements to cope with the over-segmentation problem, involving either postprocessing or preprocessing of the watershed steps [18,19,20,21,22,23,24]. Postprocessing typically merges over-segmented areas together [20,24], but for cereal images this is time-consuming and a suitable merging criterion is hard to find. Preprocessing leads to the marker-based watershed segmentation method (MWS), which utilizes prior knowledge to mark instances as kernel areas instead of the “minima” before executing watershed segmentation [15,18]. It guarantees that each kernel area corresponds to one target to be divided. However, in the actual application of our instance segmentation task, it is challenging to mark each target correctly since the targets are small and densely packed, as in Figure 1. This causes imperfect segmentation of targets.
For instance, if a target is not marked, under-segmentation occurs (adjacent targets are merged into one; examples in the red boxes of the middle column of Figure 1) or missing segmentation occurs (small targets are lost; example in the yellow box of the middle column of Figure 1); if a target is marked more than once, over-segmentation occurs (a whole target is marked two or more times; examples in the green boxes of the middle column of Figure 1). Therefore, finding a reasonable way to accurately mark instance kernel areas is pivotal for our segmentation task.
In this study, we propose two improved strategies on the basis of MWS to optimize the marking process, aiming to solve the problem of mis-segmentation. For over-segmentation, we focus on generating exactly one kernel area per target; in this respect, we propose an improved marker-based watershed segmentation algorithm with a self-selected single channel image and an improved binary image (MWS[c,b]). For under-segmentation and missing segmentation, we propose a marker-based watershed elimination segmentation algorithm (MWES) built on an embedded iterative idea. It is worth stating that the two proposed strategies are integrated to simultaneously alleviate over-segmentation and under-segmentation, and the combination is called MWES[c,b]. Two simple examples of segmentation results by MWES[c,b] are shown in the last column of Figure 1, where the grains mis-segmented under MWS have been eliminated. To verify the engineering practicality of our method, we not only assess the segmentation accuracy for cereal grain images by qualitative and quantitative methods, but also evaluate the accuracy of the statistical size distribution (including perimeter, area, and long–short axis ratio) of cereal grains. The results are highly consistent with the manually sketched ground truth. The proposed strategies can be applied to segmenting cereal grain images into single instances so as to evaluate their appearance quality or count size distributions. In addition, the proposed idea of processing small, dense targets can serve as a useful reference in other domains, such as medical cell images.
The main contributions are summarized as follows:
  • We propose an instance segmentation method based on a generative marker-based watershed segmentation algorithm, which overcomes the problems of over-segmentation and under-segmentation for images with dense, small targets.
  • The proposed method is extensively evaluated by qualitative and quantitative measures. The results demonstrate the effectiveness and robustness of our method.
  • We verify the engineering practicality of our method by counting the size distribution of segmented cereal grains. The results are highly consistent with the manually sketched ground truth.
  • The method proposed in this study reduces the reliance on data-driven deep learning algorithms in instance segmentation tasks, and can be regarded as an image processing framework with promising application and rapid deployment in further fields.

2. Materials and Methods

2.1. Cereal Samples and Image Capturing

The cereal samples used in this study were provided by the Technical Center for Animal Plant and Food Inspection and Quarantine of Shanghai Customs, P.R. China. A camera (4700 × 3600 resolution) from Nikon Inc. was used for capturing images. The computing resource in our experiments was a desktop (Intel® Core™ i7-10700F CPU @ 2.90 GHz, 16 GB RAM, ASUS, Taiwan, China). A fixed platform was used to hold samples, and the camera was fixed 18 cm above it. Four 25 cm-long strip Light Emitting Diode (LED) light sources were fixed on four sides about 10 cm above the shooting plane. To reduce the effect of ambient light, we packaged all modules into a closed system. An electric actuator automatically fed samples onto the platform for image capturing.
In practical engineering applications, we have to guarantee sampling efficiency. At the same time, the image resolution of a single sample should be sufficient for the quality analysis of cereals. Sampling standards are not uniform across cereal types. According to the National Standard of the People’s Republic of China GB/T 5494-2019 [25], medium-sized seeds (such as wheat and grain sorghum) are sampled in 50 g batches, which amounts to approximately 1200 grains. Considering the above factors, we chose a 245 mm × 185 mm container to hold the cereal samples, and each batch was divided into two portions for imaging.
In the sampling process, we poured 25 g of cereals into the container and shook it manually until the cereals lay flat on its surface, making sure there was no accumulation among the grains. We then captured an image. After shaking the container again, we captured the next image. Repeating this procedure five times yielded five images, which we defined as a group. We then replaced the cereals with another 25 g and repeated the whole process until five groups of images had been captured, 25 images in total.

2.2. Watershed Segmentation Algorithm

Owing to its superior segmentation performance, the MWS algorithm has been widely used in various fields, especially in the processing of medical and remote sensing images [24,26,27]. However, when the algorithm is applied directly to our cereal images, obvious over-segmentation and under-segmentation occur. In this paper, an improved MWS algorithm is proposed to overcome these two problems.

2.2.1. Marker-Based Watershed Segmentation Algorithm

The MWS method was developed on the basis of the traditional watershed segmentation method to alleviate the problem of image over-segmentation [15,18]. It utilizes prior knowledge to mark instances as kernel areas. According to the marked kernels and the watershed rules, the MWS method sketches the contours of targets in the image [13]. The operating flow is described in Figure 2: ① presents a raw image. ② presents the grayscale image of the raw image. ③ presents the binary image, obtained by processing the gray image with the Otsu threshold method [28]. ④ is obtained by dilating the binary image twice, which aims at finding the definite background area (black). ⑤ presents the definite foreground image (white). It uses the distance transformation function [29] to extract the skeleton of the binary image, setting pixels above a threshold to 255 and those below to 0. The threshold is an empirical value determined according to Equation (1). In effect, the binary image is deliberately over-eroded to obtain the definite foreground image, with the empirical parameter controlling the degree of erosion. ⑥ presents the pending contour areas, obtained by subtracting the definite foreground image from the definite background image. ⑦ presents the labeling (kernel) image, obtained by marking the definite foreground image with different gray values, which denote different targets of the raw image. ⑧ presents the segmentation image, obtained by processing the raw image and the labeling image according to the watershed rules. To better visualize the result, we fill the output outlines with different colors. Note that procedures ①–⑤ constitute the process of marking instance kernels.
T = 0.45 × max(pixel values[skeleton])        (1)
Here T represents the calculated threshold; 0.45 is an empirical value, which can be changed for different tasks. pixel values[skeleton] represents the pixel values of the obtained skeleton image, and max() takes the maximum value. It is worth pointing out that, in this study, we treat 0.45 as an initial value and then dynamically update the parameter by iteration. Therefore, our method has a wider range of dynamic adaptation and is no longer limited by a traditional fixed parameter.

2.2.2. An Example of Existing Over-Segmentation and Its Improvements

The MWS algorithm (Section 2.2.1) accomplishes accurate segmentation for most targets. However, over-segmentation is still common, manifesting as one target being split into two or more parts. The direct reason is that the kernel area belonging to one target grain is separated into two kernels; the underlying cause is the non-uniformity of grain surfaces. Figure 3 shows an example of over-segmentation and its improved result. The left displays a raw image, in which the wheat grain in the red marker box serves as a demonstration. The right parts represent the processes (the middle four columns) and results (the right dotted box) for the demonstrated grain; above the dotted line is the MWS method and below it is MWS[c,b]. Over-segmentation clearly occurs when using the MWS method. It is precisely the grayscale non-uniformity on the surface of the wheat grain (Figure 3a) that results in an incomplete binary image (Figure 3b). Through the distance transformation, the incomplete binary image forms two kernel areas (Figure 3c), and consequently two outlines are generated on a single wheat grain (Figure 3d). In view of this, our method improves on MWS. The detailed procedure is as follows.
As shown in Figure 4, the middle dotted box presents the procedure of MWS, consistent with Figure 2. In our improved algorithm, two solutions are proposed to acquire better kernel areas. Firstly, an optimal single channel image is applied to replace the gray image in the MWS procedure. We roughly separate the foreground and background of a raw image using the Otsu threshold method [28]. Then we calculate the histograms of the R, G, and B channels of the foreground, depicted in red, green, and blue, respectively, and the histogram of the background, depicted in black. After that, the pixel level corresponding to the envelope peak of each curve is identified, named the Peak Pixel. We calculate the differences in Peak Pixel between the background curve and the three single channel curves, then select the channel with the largest difference as the optimal single channel to replace the gray image in MWS. For instance, the R channel image is chosen in Figure 4. When the gray image in MWS is replaced by the selected single channel image in the red box, we call the procedure marker-based watershed segmentation with a self-selected single channel image (MWS[c]).
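The Peak Pixel comparison can be sketched as follows. This is a minimal illustration under our own assumptions: the background peak is taken on the grayscale intensity of background pixels, and the foreground/background split (`fg_mask`) is assumed to come from a prior Otsu pass.

```python
import numpy as np

def select_channel(raw_rgb, fg_mask):
    """Pick the single channel whose foreground histogram peak is
    farthest from the background peak (sketch of the MWS[c] step)."""
    bg_mask = ~fg_mask
    # Background Peak Pixel: mode of the background grayscale intensities.
    gray = raw_rgb.mean(axis=2).astype(np.uint8)
    bg_peak = int(np.argmax(np.bincount(gray[bg_mask].ravel(), minlength=256)))
    diffs = []
    for c in range(3):  # R, G, B channels of the foreground
        hist = np.bincount(raw_rgb[..., c][fg_mask].ravel(), minlength=256)
        diffs.append(abs(int(np.argmax(hist)) - bg_peak))
    best = int(np.argmax(diffs))   # channel with the largest peak difference
    return best, raw_rgb[..., best]
```

For a reddish foreground on a neutral gray background, the R channel is selected, mirroring the example in Figure 4.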
In addition, we further optimize the acquisition of the binary image, as shown in the blue box of Figure 4. Differing from MWS, we acquire an optimal binary image in three steps: first, we apply the Otsu method to obtain a crude binary image; then we use the morphological close operation to fill small holes; finally, we set the pixels of small connected domains to 255. These three steps yield a new binary image. When the binary image in MWS is replaced by the new binary image in the blue box, the procedure is called marker-based watershed segmentation with an optimal binary image (MWS[b]).

2.2.3. An Example of Existing Under-Segmentation and Its Improvement

Besides the over-segmentation problem in Section 2.2.2, under-segmentation is also common in actual applications, as shown in Figure 5. Figure 5A represents a raw image. Figure 5B represents the image segmented with MWS. Different colors show different segmented areas, in which the grains numbered 10 and 12 are under-segmented. The direct reason is that adjacent targets are marked into one kernel; the fundamental reason is that the empirical parameter in MWS that controls target separation cannot separate all adjacent targets. If the parameter is set too large, small objects are eroded and lost (missing segmentation occurs); if it is too small, the situation in Figure 5B arises and adjacent targets cannot be separated. To address this problem, we propose an improved strategy, called the marker-based watershed elimination segmentation (MWES) algorithm (shown in Figure 6). The procedure is: input a raw image and complete watershed segmentation with an initial parameter; extract the correctly segmented objects and save them to a separate folder; subtract the saved objects from the input image; update the parameter iteratively; repeat the above procedure until all the objects are saved or a certain iteration number is reached. The effect can be seen in Figure 5C–E. Figure 5C represents the correctly segmented objects, which are saved to a separate folder. Figure 5D represents the repartition result after updating the parameter; the grains numbered 10 and 12 in Figure 5B are now segmented correctly. Figure 5E represents the objects saved from Figure 5D. The results of Figure 5C,E together form the segmentation of the raw image.
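The elimination loop can be sketched as follows. Everything here is illustrative: `segment_once` stands in for one marker-based watershed pass, and the parameter-update rule (`step`) and iteration cap are assumed values, not the paper's.

```python
import numpy as np

def mwes(image, segment_once, ratio=0.45, step=-0.05, max_iter=5):
    """Sketch of the MWES elimination-segmentation loop.
    `segment_once(work, ratio)` must return a list of (mask, is_correct)
    pairs for the current working image."""
    saved = []                  # accepted instances ("separate folder")
    work = image.copy()
    for _ in range(max_iter):
        instances = segment_once(work, ratio)
        accepted = [m for m, ok in instances if ok]
        if not accepted:
            break
        saved.extend(accepted)
        for m in accepted:      # eliminate saved objects from the input
            work[m] = 0
        if not work.any():      # everything has been segmented
            break
        ratio += step           # update the parameter for the next pass
    return saved
```

With a dummy segmenter that accepts one grain per pass, two grains are collected over two iterations, matching the batch-by-batch behavior in Figure 6.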

3. Results

In this study, we propose two improved strategies (MWS[c] and MWS[b]) to overcome the problem of image over-segmentation, referring to Section 2.2.2 for a detailed description. Moreover, we propose an MWES strategy to ameliorate the problem of image under-segmentation, referring to Section 2.2.3 for a detailed description. It is notable that these strategies can be combined freely. For instance, MWS[c] and MWES together are called MWES[c]; the combination of MWS[b] and MWES is called MWES[b]; the three strategies (MWS[c], MWS[b], and MWES) are combined together to be called MWES[c,b]. In order to test whether the three strategies worked in our procedure, in the qualitative evaluation, we compared the visual segmentation effect of MWES and MWES[c,b], and in the quantitative evaluation, we counted the number error and the accuracy of segmentation with MWES, MWES[c], MWES[b], and MWES[c,b].

3.1. Qualitative Evaluation Results

Figures S1 and S2 display the dynamic processing of the same raw image using MWES and MWES[c,b], respectively (see Supplementary Materials). The upper half shows the segmented targets saved to a separate folder during each iteration, and the lower half shows the images after each elimination operation. The results indicate that all of the targets can be segmented one by one using either MWES or MWES[c,b], while the segmented targets are more intact with MWES[c,b] than with MWES, demonstrating the effectiveness of MWES[c,b] in dealing with both over-segmentation and under-segmentation.

3.2. Quantitative Evaluation Results

Figure 7A,B compare the average number error and the average segmentation accuracy over 25 wheat grain images using MWES, MWES[c], MWES[b], and MWES[c,b], respectively. Number error is defined as the discrepancy between the number of segmented targets and the actual number of samples in an image. Accuracy is calculated as the percentage of correctly segmented targets among all segmented targets. Here we compare each saved target with the manually sketched result, and if the coincidence reaches 95%, we consider it a correctly segmented object. From Figure 7A, the average number error is the lowest for MWES[c,b] (1.88) compared with MWES (67.68), MWES[c] (13.16), and MWES[b] (16.32); it is the highest for MWES, with the other two in between. There is a significant difference between any two methods (p < 0.01). These results indicate that the counted target number is the most reliable with MWES[c,b]. From Figure 7B, the average segmentation accuracy with MWES[c,b] is 98.73%, the best of the four methods. The segmentation accuracy of MWES is 82.98%, the worst of the four. MWES[c] (95.75%) and MWES[b] (94.30%) lie in between. There is a significant difference between any two methods (p < 0.001). These results demonstrate that each of our proposed strategies is effective in improving segmentation accuracy, and combining them maximizes the effect.
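The two metrics can be sketched as below. Note one assumption: the paper's 95% "coincidence" measure is interpreted here as intersection-over-union between a predicted instance mask and a ground-truth mask.

```python
import numpy as np

def evaluate(pred_masks, gt_masks, coincide=0.95):
    """Number error and segmentation accuracy, as defined above.
    A predicted instance counts as correct if its overlap (IoU, an
    assumed coincidence measure) with some ground-truth instance
    reaches `coincide`."""
    number_error = abs(len(pred_masks) - len(gt_masks))
    correct = 0
    for p in pred_masks:
        for g in gt_masks:
            inter = np.logical_and(p, g).sum()
            union = np.logical_or(p, g).sum()
            if union and inter / union >= coincide:
                correct += 1
                break
    accuracy = correct / len(pred_masks) if pred_masks else 0.0
    return number_error, accuracy
```

For two ground-truth grains, two exact predictions, and one spurious prediction, this yields a number error of 1 and an accuracy of 2/3.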
In addition, we randomly chose one image from each group (five images in total) to compare the number error and accuracy of the four methods, as shown in Figure 7C,D. From Figure 7C, the number error is the highest with MWES in every one of the five images, MWES[c,b] performs the best, and MWES[c] and MWES[b] lie in the middle. Similarly, from Figure 7D, the results for each group are consistent with those in Figure 7B. These results indicate that our proposed strategies are robust for the segmentation of wheat images.
Table 1 displays the time costs of our proposed methods and the traditional MWS method. Our single channel improvement costs almost no additional time (0.88 s to 0.89 s), while the main time cost lies in the elimination segmentation process (×n), whose embedded iteration multiplies the time cost with each additional iteration. The other time-consuming step is the optimization of the binary image (0.88 s to 9.90 s), in which the filling of small connected domains takes most of the time.

4. Application

The physical size of cereal grains determines, to a certain extent, the milling yield and market value of cereals. Generally, grains with a larger average size may contribute to a higher milling yield, and grains with better size consistency may be more popular among customers [1]. In this research, we count the size distribution of cereal grains (including perimeters, areas, and long–short axis ratios) based on our segmented instances and the manually sketched instances. On the one hand, this tests the robustness of our instance segmentation algorithm in actual application; on the other hand, it verifies the accuracy of our segmentation results. Figure 8 shows the size distribution results for a wheat image (Figure 8A), a rice image (Figure 8B), and a sunflower seed image (Figure 8C). Figure 8(A_1,A_3,A_5) represent the area, perimeter, and long–short axis ratio distributions of our segmented instances in the wheat image, respectively; Figure 8(A_2,A_4,A_6) represent the corresponding distributions for the manually sketched ground truth instances. Figure 8B,C are analogous to Figure 8A. In all of the distribution histograms, the horizontal axis represents the distribution interval, and the vertical axis represents counts. There is little discrepancy between the distribution results of our segmented instances and those of the manually sketched instances.
Similarly, Table 2 lists the average sizes of our segmented instances and the manually sketched instances in a wheat image, a rice image, and a sunflower seed image, respectively. The differences are tiny. These results indicate that our proposed algorithm performs well in the segmentation of granular cereal images, and the size distribution results are useful references for assessing the market value of cereals.

5. Discussion

5.1. How the Proposed Strategies Solve Over-Segmentation and Under-Segmentation

Owing to its sensitivity to object borderlines, the MWS algorithm is widely used in numerous complex image segmentation fields, such as medical images [30,31], traffic images [32,33], and even satellite images [34,35]. The segmentation effect depends closely on the marked kernels, which are obtained by manual tagging or automated generation. Manual tagging is clearly impractical for our cereal grain images, as it is time-consuming. In this study, we construct the kernel areas following the MWS method. However, the MWS method is imperfect: it often divides a target into two kernels, causing over-segmentation, or merges multiple targets into one kernel, causing under-segmentation. We propose two solutions to these problems. For over-segmentation, we observe that an incomplete binary image is the main cause of mistaken kernel areas, so we focus on improving the quality of the generated binary image. First, we use an optimal single channel image in place of the gray image, which stretches the contrast between the background and the foreground. Then, we fill small connected areas with a morphological method, which makes the binary image more intact. This series of operations makes the generated kernels more accurate and alleviates over-segmentation.
For the under-segmentation problem of cereal grain images, we propose the MWES method, which relies on an embedded iterative idea to extract instances batch by batch. Since the grains are dense and inhomogeneous in size and intensity, it is difficult to divide every instance in a single pass. MWES handles this well: it adjusts the parameter dynamically during segmentation, saving the segmented objects batch by batch, as shown in Figures S1 and S2. Through multiple iterations, all of the objects are saved, which alleviates under-segmentation.

5.2. Advantages and Application Prospects of the Proposed Algorithm

Essentially, instance segmentation is one of the basic visual tasks in computer vision, in which CNN methods perform outstandingly. However, they rely on strongly supervised learning, which requires a mass of data and time-consuming labeling, while our method is free of this data-driven dependence and is therefore more convenient to deploy in engineering practice.
As a critical step in digital image preprocessing, image instance segmentation has very practical application value. In addition to calculating grain size distributions for evaluating market value, an accurate instance segmentation result can be used for fine-grained quality detection of cereals. Our proposed instance segmentation algorithm can be broken down into three steps: (1) mark kernel areas for targets; (2) draw contours by watershed transformation [15]; (3) perform elimination segmentation. For step one, this study uses a series of operations, such as morphological processing (erosion, dilation, opening, and distance transformation), image enhancement, and binary image optimization, to acquire kernel areas that are as accurate as possible. Notably, the way of acquiring kernels is not fixed and can be adapted to specific tasks if the algorithm is applied in other fields in the future. Step two is a mature operation [15]. For step three, this study employs an iterative idea to segment dense targets batch by batch, ensuring that grains with different degrees of adhesion can be effectively separated by tuning parameters. In summary, our research provides a complete framework for segmenting images of dense, small cereal grains, and it shows great performance on the cereal grain segmentation task. Beyond the food industry, it is ready to be generalized to other fields, e.g., medical cell image processing.

5.3. Limitations

Our proposed MWES[c,b] performs significantly better than the traditional MWS method in segmentation effect (see Section 3 for details). However, this comes at the expense of increased time (see Table 1). On the one hand, the elimination segmentation process relies on an embedded iterative idea that greatly increases the time cost; on the other hand, filling the small connected domains when optimizing the binary image is also time-consuming. In future research, we expect to optimize our method to decrease the time cost. We could port our algorithm to an embedded system, such as a DSP, to accelerate computation, or explore another suitable binary image optimization method to reduce the time cost.
Additionally, our research objects in this study are dense, small targets of the same category, which is a specialized setting in the agriculture field. Owing to the diversity of natural images, extending our method poses challenges; if it is applied to natural image processing with complicated backgrounds, additional preprocessing measures, such as image denoising and salient object detection, will be necessary.

6. Conclusions

An improved watershed segmentation algorithm is proposed for the precise instance segmentation of cereal images. The algorithm builds on the MWS algorithm: it utilizes an optimal single channel image and an improved binary image in place of the gray image and the binary image of the original procedure, greatly reducing over-segmentation. Meanwhile, inspired by the idea of iteration, we propose an elimination segmentation method, which reduces under-segmentation. We verify the proposed algorithm by qualitative and quantitative evaluation, showing a great improvement in visualization and segmentation accuracy. Our method can serve as a critical preprocessing step in engineering practice, such as detecting imperfect particles among segmented objects, counting the number of objects in a batch of samples, and displaying the size distribution of objects. The algorithm could even be integrated into the public MATLAB or OpenCV libraries as a toolkit for direct use in the future.

Supplementary Materials

The following supporting information can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/agriculture12091486/s1, Figure S1: The dynamic processing and preservation processes of a wheat image using the MWES method; Figure S2: The dynamic processing and preservation processes of a wheat image using the MWES[c,b] method.

Author Contributions

Conceptualization, X.C. and H.L.; methodology, J.L.; software, J.C. and J.L.; validation, M.Z. and F.X.; formal analysis, Z.Z.; investigation, J.L.; resources, L.Y. and F.X.; data curation, M.Z. and F.X.; writing—original draft preparation, J.C., J.L. and Z.Z.; writing—review and editing, H.L., J.L., L.Y. and F.X.; visualization, J.L.; supervision, X.C. and H.L.; project administration, H.L.; funding acquisition, H.L. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by National key R&D program of China grant number [2021YFD1400102]; National Natural Science Foundation of China grant numbers [62103269] and [62073221]; the project was funded by China Postdoctoral Science Foundation grant number [2019M661509]; Med-X Research Fund of Shanghai Jiao Tong University grant number [YG2022QN077]; Scientific Research Project of Customs grant number [2019HK023]; and the APC was funded by [2021YFD1400102].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request from researchers who meet the eligibility criteria. Kindly contact the first author by e-mail.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, S.K.; Vidyarthi, S.K.; Tiwari, R. Machine learnt image processing to predict weight and size of rice kernels. J. Food Eng. 2020, 274, 109828. [Google Scholar] [CrossRef]
  2. Zhou, H.; Yun, P.; He, Y. Rice appearance quality. In Rice; AACC International: Washington, DC, USA, 2019; pp. 371–383. [Google Scholar] [CrossRef]
  3. Sharma, N.; Khanna, R. Rice grain quality: Current developments and future prospects. In Recent Advances in Grain Crops Research; Intech Open: London, UK, 2019; p. 105. [Google Scholar] [CrossRef]
  4. Groote, H.D.; Narrod, C.S.; Kimenju, C.; Bett, C.; Scott, R.P.B.; Tiongco, M.M.; Gitonga, Z. Measuring rural consumers’ willingness to pay for quality labels using experimental auctions: The case of aflatoxin-free maize in Kenya. Agric. Econ. 2016, 47, 33–45. [Google Scholar] [CrossRef]
  5. Vithu, P.; Moses, J. Machine vision system for food grain quality evaluation: A review. Trends Food Sci. Technol. 2016, 56, 13–20. [Google Scholar] [CrossRef]
  6. Patrício, D.I.; Rieder, R. Computer vision and artificial intelligence in precision agriculture for grain crops: A systematic review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef]
  7. Chen, Y.; Bai, Y.; Zhang, W.; Mei, T. Destruction and construction learning for fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5157–5166. [Google Scholar] [CrossRef]
  8. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021. [Google Scholar] [CrossRef]
  9. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar] [CrossRef]
  10. Huang, Z.; Huang, L.; Gong, Y.; Huang, C.; Wang, X. Mask scoring r-cnn. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6409–6418. [Google Scholar] [CrossRef]
  11. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact: Real-time instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Long Beach, CA, USA, 15–20 June 2019; pp. 9157–9166. [Google Scholar] [CrossRef]
  12. Hafiz, A.M.; Bhat, G.M. A survey on instance segmentation: State of the art. Int. J. Multimed. Inf. Retr. 2020, 9, 171–189. [Google Scholar] [CrossRef]
  13. Zhang, H.; Tang, Z.; Xie, Y.; Gao, X.; Chen, Q. A watershed segmentation algorithm based on an optimal marker for bubble size measurement. Measurement 2019, 138, 182–193. [Google Scholar] [CrossRef]
  14. Ji, X.; Li, Y.; Cheng, J.; Yu, Y.; Wang, M. Cell image segmentation based on an improved watershed algorithm. In Proceedings of the 2015 8th International Congress on Image and Signal Processing, Shenyang, China, 14–16 October 2015; pp. 433–437. [Google Scholar] [CrossRef]
  15. Hamarneh, G.; Li, X. Watershed segmentation using prior shape and appearance knowledge. Image Vis. Comput. 2009, 27, 59–68. [Google Scholar] [CrossRef]
  16. Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598. [Google Scholar] [CrossRef]
  17. Meyer, F. Topographic distance and watershed lines. Signal Processing 1994, 38, 113–125. [Google Scholar] [CrossRef]
  18. Gao, H.; Xue, P.; Lin, W. A new marker-based watershed algorithm. In Proceedings of the 2004 IEEE International Symposium on Circuits and Systems, Vancouver, BC, Canada, 23–26 May 2004. [Google Scholar] [CrossRef]
  19. Zhang, X.; Jia, F.; Luo, S.; Liu, G.; Hu, Q. A marker-based watershed method for X-ray image segmentation. Comput. Methods Programs Biomed. 2014, 113, 894–903. [Google Scholar] [CrossRef] [PubMed]
  20. Ng, H.; Ong, S.; Foong, K.; Liu, G.; Hu, Q. Medical image segmentation using k-means clustering and improved watershed algorithm. In Proceedings of the 2006 IEEE Southwest Symposium on Image Analysis and Interpretation, Denver, CO, USA, 26–28 March 2006; pp. 61–65. [Google Scholar] [CrossRef]
  21. Oo, S.Z.; Khaing, A.S. Brain tumor detection and segmentation using watershed segmentation and morphological operation. Int. J. Res. Eng. Technol. 2014, 3, 367–374. [Google Scholar] [CrossRef]
  22. Avinash, S.; Manjunath, K.; Kumar, S.S. An improved image processing analysis for the detection of lung cancer using Gabor filters and watershed segmentation technique. In Proceedings of the 2016 International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 26–27 August 2016; pp. 1–6. [Google Scholar] [CrossRef]
  23. Li, G.; Wan, Y. Improved watershed segmentation with optimal scale based on ordered dither halftone and mutual information. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 9–11 July 2010; pp. 296–300. [Google Scholar] [CrossRef]
  24. Genitha, C.H.; Sowmya, M.; Sri, T. Comparative Analysis for the Detection of Marine Vessels from Satellite Images Using FCM and Marker-Controlled Watershed Segmentation Algorithm. J. Indian Soc. Remote Sens. 2020, 48, 1207–1214. [Google Scholar] [CrossRef]
  25. National Standard Disclosure System. Available online: https://openstd.samr.gov.cn/bzgk/gb/newGbInfo?hcno=EB37F2E3E8B0C26EBB3A329D6C0E390E (accessed on 8 September 2022).
  26. Atwood, R.; Jones, J.; Lee, P.; Hench, L. Analysis of pore interconnectivity in bioactive glass foams using X-ray microtomography. Scr. Mater. 2004, 51, 1029–1033. [Google Scholar] [CrossRef]
  27. Wong, C.F.; Yeo, J.Y.; Gan, S.K.E. APD Colony Counter App: Using Watershed algorithm for improved colony counting. Nat. Methods 2016, 5, 1–3. [Google Scholar] [CrossRef]
  28. Jiao, S.; Li, X.; Lu, X. An improved Ostu method for image segmentation. In Proceedings of the 2006 8th international Conference on Signal Processing, Guilin, China, 16–20 November 2006. [Google Scholar] [CrossRef]
  29. Shih, F.C.; Mitchell, O.R. A mathematical morphology approach to Euclidean distance transformation. IEEE Trans. Image Processing 1992, 1, 197–204. [Google Scholar] [CrossRef] [PubMed]
  30. Norouzi, A.; Rahim, M.S.; Altameem, A.; Saba, T.; Rad, A.E.; Rehman, A.; Uddin, M. Medical image segmentation methods, algorithms, and applications. IETE Tech. Rev. 2014, 31, 199–213. [Google Scholar] [CrossRef]
  31. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision, Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar] [CrossRef]
  32. Fritsch, J.; Kuehnl, T.; Geiger, A. A new performance measure and evaluation benchmark for road detection algorithms. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems, The Hague, The Netherlands, 6–9 October 2013; pp. 1693–1700. [Google Scholar] [CrossRef]
  33. Menze, M.; Geiger, A. Object scene flow for autonomous vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3061–3070. [Google Scholar] [CrossRef]
  34. Deepika, N.; Vishnu, K. Different techniques for satellite image segmentation. In Proceedings of the 2015 Online International Conference on Green Engineering and Technologies, Coimbatore, India, 27 November 2015; pp. 1–6. [Google Scholar] [CrossRef]
  35. Su, B.; Noguchi, N. Discrimination of land use patterns in remote sensing image data using minimum distance algorithm and watershed algorithm. Eng. Agric. Environ. Food 2013, 6, 48–53. [Google Scholar] [CrossRef]
Figure 1. Segmentation examples of cereal grain images with MWS and MWES[c,b].
Figure 2. The operating flow by MWS algorithm.
Figure 3. An example of over-segmentation using the MWS method, and the result using our improved method. MWS represents the marker-based watershed segmentation algorithm, and MWS[c,b] represents our improved method. (a–d) represent a gray image, a binary image, an identified-foreground image, and a segmented image produced by MWS, respectively; (e–h) represent the corresponding images produced by our proposed method.
Figure 4. Flow diagrams of MWS and the improved watershed segmentation algorithms (MWS[b] and MWS[c]). The middle dotted box presents the procedure of MWS, in which the acquisition of the gray image is replaced by the right dotted box representing MWS[c], and the acquisition of the binary image is replaced by the left dotted box representing MWS[b]. R, G, and B represent the red, green, and blue channels of the input image, respectively.
Figure 5. An example of under-segmentation using MWS and our improved method. (A) represents a raw image; (B) represents the segmented image; (C) represents the correctly segmented images; (D) represents the resegmented image; (E) represents the correctly resegmented images.
Figure 6. The flow diagram of MWES algorithm.
Figure 7. Statistical results of the number error and the segmentation accuracy. (A) represents the average number errors of 25 images in four methods. (B) represents the average segmentation accuracies of 25 images in four methods. (C) represents number errors of five different images in four methods. (D) represents segmentation accuracies of five different images in four methods. Error bars represent the standard error of the mean (SEM) (*** p < 0.001; ** p < 0.01; paired two-sample t-test).
Figure 8. The size distribution results of a wheat image (A), a rice image (B), and a sunflower seed image (C). For each result, the top row represents the perimeter distribution, the area distribution, and the long–short axis ratio distribution of our segmented instances, respectively; the bottom row represents the same distributions for the manually sketched instances.
Table 1. Time complexity analysis.
MWS (s)    MWES (s)    MWES[c] (s)    MWES[b] (s)    MWES[c,b] (s)
0.88       0.88 × n    0.89 × n       9.90 × n       10.05 × n
Here n represents the number of iterations; we take n = 9 in our experiments.
Table 2. The average size distributions of our segmented instances and the manually sketched instances.
                               Mean Perimeter (mm)   Mean Area (mm²)   Mean Long–Short Axis Ratio   Total Grains
Wheat           Ours           19.80 ± 1.25          24.04 ± 3.05      1.95 ± 0.17                  601
                Ground truth   19.78 ± 1.23          24.04 ± 3.04      1.95 ± 0.16                  600
Rice            Ours           17.08 ± 1.91          16.40 ± 2.63      2.11 ± 0.39                  746
                Ground truth   16.80 ± 1.69          16.35 ± 2.45      2.11 ± 0.36                  750
Sunflower seed  Ours           41.75 ± 3.10          81.02 ± 12.61     2.60 ± 0.36                  124
                Ground truth   41.52 ± 2.99          81.23 ± 12.32     2.60 ± 0.36                  126
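The mean ± standard deviation entries of Table 2 can be reproduced from per-instance measurements with a short helper. The sketch below assumes the per-grain values (perimeter, area, or axis ratio) have already been extracted from the segmented masks; the sample ratios are hypothetical, for illustration only.

```python
from math import sqrt

def summarize(values):
    """Return (mean, population standard deviation) of per-grain
    measurements, matching the 'mean +/- std' entries of Table 2."""
    n = len(values)
    mean = sum(values) / n
    std = sqrt(sum((v - mean) ** 2 for v in values) / n)
    return mean, std

# Example: long-short axis ratios from a few hypothetical grains,
# each given as (major axis, minor axis) in mm
ratios = [major / minor for major, minor in [(4.2, 2.0), (3.9, 2.1), (4.4, 2.2)]]
```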
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liang, J.; Li, H.; Xu, F.; Chen, J.; Zhou, M.; Yin, L.; Zhai, Z.; Chai, X. A Fast Deployable Instance Elimination Segmentation Algorithm Based on Watershed Transform for Dense Cereal Grain Images. Agriculture 2022, 12, 1486. https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture12091486

