Article

Cow Rump Identification Based on Lightweight Convolutional Neural Networks

1 School of Computer Science, Harbin Finance University, Harbin 150030, China
2 College of Electrical and Information, Northeast Agricultural University, Harbin 150030, China
3 Department of Science and Technology, Northeast Agricultural University, Harbin 150030, China
* Author to whom correspondence should be addressed.
Submission received: 23 June 2021 / Revised: 21 August 2021 / Accepted: 23 August 2021 / Published: 2 September 2021

Abstract

Individual identification of dairy cows based on computer vision shows strong performance and practicality. Accurate identification of each dairy cow is a prerequisite for applying artificial intelligence in smart animal husbandry. Like the back and the head, the rump of each dairy cow carries many features that are important for individual recognition. In this paper, we propose a non-contact cow rump identification method based on convolutional neural networks. First, rump image sequences of the cows were collected while they were feeding. Then, an object detection model was applied to detect the cow rump in each frame. Finally, a fine-tuned convolutional neural network model was trained to identify cow rumps. An image dataset containing 195 different cows was created to validate the proposed method. The method achieved an identification accuracy of 99.76%, outperforming other related methods and showing good potential for the actual production environment of cow husbandry; moreover, the model is light enough to be deployed on an edge-computing device.

1. Introduction

Individual identification is a tool that can be used to manage the development and diseases of dairy cows [1]. In modern precision dairy farming, attention has shifted from the herd to the individual cow. Moreover, automatic individual cow identification is a fundamental building block for fields such as intelligent milking and automatic behavior and health monitoring [2,3]. In this paper, we propose a cow identification method focused on the rump, which can be applied to intelligent analysis and individualized behavior detection with less labor, such as lameness detection, body condition scoring, and individual localization [4,5,6]. Furthermore, cow identification systems based on other viewpoints can use it as a reference.
In general, animal identification can be accomplished by numerous methods, which can be divided into mechanical, electronic, and biometric [7]. Among mechanical methods, consider the ear brand, a traditional example: its surface information can be read easily by people, but the process is slow and far from automatic, and brands tend to be stolen, removed, or duplicated, which causes inevitable issues [8]. Electronic methods, such as sensor-based systems, have become widespread on farms; they include small passive RFID ear tags [9,10,11], active RFID ear tags [12], and wired, wireless, or hybrid digital device networks, radar, etc. [13]. These methods have gained popularity over the past few years, but they present some restrictions. For example, ear tags may cause stress on the cows, may be lost or damaged over time, and have a limited reading distance [14]. The development cost of a radar-based local position measurement system [15] is too high. To overcome these restrictions, computer-vision-based technology has drawn many researchers' attention due to its low cost and non-contact nature, so it is natural to apply computer vision to individual cow identification.
Nowadays, with computer vision technology growing rapidly, most tasks in dairy farming have become much more automated than before. In [16], an imaging system based on deep learning was developed to detect the feeding behaviors of dairy cows. The authors of [17] proposed an improved single shot multi-box detector method to score the body condition of cows. In [18], the authors achieved lameness detection based on YOLOv3 and a relative step size characteristic vector. These studies show that non-contact methods have improved the automation and accuracy of precision dairy farming.
With regard to the identification of dairy cows, current research focuses on the following aspects: muzzle, face, iris, body and gait, as shown in Figure 1a–f. The authors of [19] came up with vision animal biometric systems based on muzzle point image patterns; a Gaussian pyramid was applied to filter the noise from the muzzle images, SIFT and SURF were used as feature extraction and representation algorithms, and a matching similarity score based on the key points of the muzzle point image was used to evaluate the identification accuracy of cattle. The authors of [20] proposed a cow face representation model based on local binary pattern (LBP) texture features. After the cow face images were obtained, they were divided into multiple regions; each region was described using a local binary pattern, and these descriptors were combined into a histogram of the facial image to realize cow identification. The authors of [21] developed a cow identification system based on iris analysis that includes iris imaging, iris detection, and identification. First, a clear iris was selected by comparing the captured iris sequence images, the image was segmented by edge detection, and the contour of the iris was fitted to an elliptical shape. Then, the iris image was normalized, and local and global characteristics were extracted to complete the individual identification of dairy cows from iris images. In [22], a vision system to extract body images and identify cows was proposed; the FAST, SIFT and FLANN methods were used for feature extraction, description, and matching. However, in a large herd, extracting only the body image of the side of the cow is not enough to ensure identification accuracy. The authors of [23] proposed a cow identification method based on the L component of the Lαβ color space to identify the cow's side.
In [24], the authors proposed a cow identification method based on three-dimensional video analysis using RGB-D cameras. First, the ICP algorithm was used to align the 3D point cloud data; then gait information was extracted from the average gait contour; finally, the extracted texture features of the cow coat were linearly combined to identify individual cows. However, cows perform non-periodic head movements while walking, which reduces identification accuracy.
In actual application, using the side of the cow imposes strict requirements on the shooting angle, and occlusion may affect the results; using the back of the cow makes the deployment of the experimental device more complicated; using the rump of the cow reduces the complexity of image collection. Additionally, subsequent tasks such as body condition scoring and type classification use characteristics of the overall or partial rump area, so individual identification based on the rump can serve as a basis for future research. Therefore, in this paper, we implement individual cow identification from cow rump images through a fine-tuned convolutional neural network. First, we use a camera placed behind the cow to obtain images of the upright standing cow rump, then we use the SSD [25] model to perform real-time cow detection, and finally we fine-tune a convolutional neural network to achieve individual cow identification.
The sections of this paper are arranged as follows. Section 2 describes the data collection and the detailed process of the method we adopted. The experimental results are summarized and discussed in Section 3. Section 4 gives the final conclusion.

2. Materials and Methods

2.1. Image Acquisition

In this experiment, rump image sequences of the cows were collected. The image acquisition was performed in a relatively natural environment; other objects, such as walls and iron railings, may cause difficulties in detection and identification. The images reflect many common characteristics of dairy cows' actual breeding environment, so they allow the identification performance to be evaluated objectively.
Experimental image collection was performed in the cowsheds of Shanghe Ranch and the Nestle Dairy Cow Breeding Training Center in Harbin, Heilongjiang, in July 2018 and September 2018, respectively. The experimental subjects were Holstein cows. The cows were housed in barns with sand beds, with a fan and a sprayer to reduce the surrounding temperature. Self-locking neck clips were installed at the feeding line. The layout of the barn is shown in Figure 2; the "Δ" marks the initial position of the camera, 3.5 m away from the feeding line. The neck clip clamps the cow's neck while it is eating. An Intel RealSense D435 camera was used to collect the image sequences of the cow rump along the camera movement route shown in this figure.

2.2. Experimental Data

In the process of collecting experimental data, each cow appeared in the field of view for about 4 s, at a frame rate of about 4 fps. The collected image sequences of the cow rump were saved in .png format at a resolution of 1280 (horizontal) × 720 (vertical) pixels. In total, image sequences of 195 dairy cows were collected for subsequent object detection and individual identification. First, the collected RGB image sequences were input into the object detection model as the images to be detected; then the detection results were assigned to the corresponding cow categories, forming the rump images to be identified.
After detecting the rump objects, our experiment extracted 3057 rump images of the 195 cows as input for subsequent individual identification. To simplify the experiment, the 195 cows were numbered from 0 to 194. The rump images of each cow were randomly divided into a training set and a validation set at a ratio of 7:3, yielding 2140 images in the training set and 917 images in the validation set.
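As an illustrative sketch (not the authors' released code), the per-cow 7:3 split described above could be implemented as follows; the file-name scheme is hypothetical:

```python
import random
from collections import defaultdict

def split_by_cow(image_paths, labels, train_ratio=0.7, seed=42):
    """Randomly split each cow's rump images into train/validation sets (7:3)."""
    per_cow = defaultdict(list)
    for path, cow_id in zip(image_paths, labels):
        per_cow[cow_id].append(path)
    rng = random.Random(seed)
    train, val = [], []
    for cow_id, paths in per_cow.items():
        rng.shuffle(paths)
        cut = round(len(paths) * train_ratio)  # 70% of this cow's images
        train += [(p, cow_id) for p in paths[:cut]]
        val += [(p, cow_id) for p in paths[cut:]]
    return train, val
```

Splitting per cow rather than over the pooled images guarantees every cow is represented in both sets, which a 195-way classifier requires.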

2.3. Individual Identification

Recently, convolutional neural networks (CNNs) have achieved great success in visual recognition and classification tasks by learning deep features of the original images [26,27,28,29]. Since the rump of a cow carries relatively little texture information, it is difficult to manually define discriminative features from these images using traditional algorithms. Therefore, to take advantage of the deep features of cow rump images, this paper proposes an individual cow identification method based on convolutional neural networks. The flowchart of the method is shown in Figure 3. The SSD object detection model was used to extract the cow rump in each frame as the image to be identified, and a lightweight convolutional neural network model was then trained on the resulting image dataset to complete the individual identification of dairy cows. The method is detailed below.

2.3.1. Object Detection

In order to detect the cow in each frame, we used the SSD object detection model, a deep learning framework that converts the two stages of proposal-region selection and classification into a single-stage regression problem; its efficient backbone outputs sparse detection results, achieving real-time detection of the trained object categories. The SSD model we used is pre-trained on the cow category and meets the real-time detection requirements of the cow object in actual production. Therefore, we used SSD to detect the cow rump, and Equation (1) to select the rump images of interest as the images to be identified.
COW = { det_i | A_det_i > A_t, R_min < R_det_i < R_max, label_det_i = cow; i = 1, 2, …, n }    (1)
where COW denotes the set of images to be identified, A_det_i denotes the area of detection result det_i, A_t denotes an area threshold, R_det_i represents the width-to-height ratio of det_i, R_min and R_max denote the minimum and maximum aspect-ratio thresholds, respectively, label_det_i denotes the object name of det_i, and n denotes the number of detection results for the entire image sequence. During the experiment, we found that when A_t was set to 0.1 × 1280 × 720 and R_min and R_max were set, based on experience, to 0.35 and 0.7, respectively, the prominent rumps of interest in each image were extracted. Our experiment kept the detection results labeled cow whose area was larger than A_t and whose aspect ratio R_det_i lay between R_min and R_max as the images to be identified; object detection on a single image takes about 20 ms.
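A minimal sketch of the filtering rule in Equation (1); representing each detection as a dict with width, height, and label fields is our assumption, not the paper's data structure:

```python
def select_rump_detections(detections, image_area=1280 * 720,
                           area_frac_t=0.1, r_min=0.35, r_max=0.7):
    """Keep detections labeled 'cow' whose area exceeds A_t and whose
    width/height ratio lies in (R_min, R_max), per Equation (1)."""
    a_t = area_frac_t * image_area  # A_t = 0.1 * 1280 * 720
    selected = []
    for det in detections:
        w, h = det["width"], det["height"]
        if det["label"] == "cow" and w * h > a_t and r_min < w / h < r_max:
            selected.append(det)
    return selected
```

The aspect-ratio bound is what rejects the merged two-cow detection discussed below: two adjacent cows produce a box that is too wide relative to its height.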
Figure 4 shows an example of rump object detection. From this figure, it can be seen that the method detects the position of the rump accurately. False detections were filtered by Equation (1): in Figure 4a, the two cows on the left were mistakenly detected as one cow object due to the camera angle, and such detection results can be eliminated using the aspect ratio.

2.3.2. Cow Identification Model Based on Convolutional Neural Networks

For the base network of the individual identification model, the Mobilenet v2 [30] network, which gave the best balance of identification performance and model size in the subsequent identification experiments, was selected. Mobilenet v2 is a typical lightweight convolutional neural network model that also achieves high classification performance on ImageNet. In the proposed method, the Mobilenet v2 network was fine-tuned on the deep features of the cow rump: the rump images were used as the network input, the weights pre-trained on ImageNet were taken as the initial parameters, and the new model was fine-tuned by updating the parameters of the last two layers of Mobilenet v2. The Mobilenet v2 model structure is shown in Table 1. Layer 1 was a convolution layer, and Layers 2 to 7 were bottleneck depth-separable convolution layers (shown in Figure 5), with a ReLU6 activation function after each bottleneck layer. FC8 was a fully connected layer, Layer 9 was an average pooling layer, and the last layer was the final fully connected layer.
To explore the influence of over-fitting or under-fitting on the results, a control group with data augmentation was used alongside the original dataset. Horizontal flipping and random cropping were applied, enlarging the original dataset 20-fold. For model training, the images in the dataset were input into the convolutional neural network, each converted into a 224 × 224 RGB image. After multiple convolution and pooling operations, the prediction was generated by the last fully connected layer of the network. Since the dataset contains 195 cows and the goal is to identify all 195, the last fully connected layer of the network consists of 195 neurons, whose outputs feed a 195-dimensional softmax layer that produces a distribution over the 195 category labels.
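The flip-and-crop augmentation could be sketched as follows; the crop fraction of 0.9 is an assumed value, since the paper does not state the crop size:

```python
import numpy as np

def augment(image, rng, crop_frac=0.9):
    """One augmented sample: a 50% chance of horizontal flip, then a random
    crop covering crop_frac of each dimension (crop_frac is assumed)."""
    out = image[:, ::-1, :] if rng.random() < 0.5 else image  # horizontal flip
    h, w = out.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    y = rng.integers(0, h - ch + 1)  # random top-left corner
    x = rng.integers(0, w - cw + 1)
    return out[y:y + ch, x:x + cw, :]
```

Calling this 20 times per image (before resizing to 224 × 224) would reproduce the 20-fold enlargement described above.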
Mobilenet v2 uses inverted residuals and linear bottlenecks to greatly reduce the number of model parameters. First, the rump image and the corresponding cow category label were input to the network and passed to the first convolution layer, conv1. This layer has 32 convolution kernels of size 3 × 3, applied with a stride of 2 pixels, so the original 224 × 224 × 3 image was converted into a 112 × 112 × 32 feature map; the ReLU6 activation function was then applied to increase the nonlinearity of the model. After this, 17 bottleneck depth-separable convolution layers were applied, yielding a feature map of size 7 × 7 × 320. A 1 × 1 convolution then expanded the features to 1280 channels, followed by an average pooling layer and, finally, the 195-dimensional softmax layer. The output represents the probability that the input cow belongs to each category, completing the preliminary prediction for the input image.
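The shape arithmetic above (and in Table 1) can be checked with a short bookkeeping script; this traces sizes only and is not the training code. Each stride halves the spatial size in the first block of a repeated group:

```python
def mobilenet_v2_shapes(input_size=224, num_classes=195):
    """Trace spatial size and channels through the operators of Table 1.
    Each tuple: (operator, stride, output channels)."""
    layers = [
        ("conv2d 3x3", 2, 32),
        ("bottleneck", 1, 16),
        ("bottleneck", 2, 24),
        ("bottleneck", 2, 32),
        ("bottleneck", 2, 64),
        ("bottleneck", 1, 96),
        ("bottleneck", 2, 160),
        ("bottleneck", 1, 320),
        ("conv2d 1x1", 1, 1280),
    ]
    size, shapes = input_size, []
    for op, stride, channels in layers:
        size //= stride  # downsampling happens in the first block of a group
        shapes.append((op, size, channels))
    shapes.append(("avgpool 7x7", 1, 1280))
    shapes.append(("conv2d 1x1", 1, num_classes))
    return shapes
```

Running it reproduces the 224 → 112 → 56 → 28 → 14 → 7 progression and the final 7 × 7 × 320 feature map stated in the text.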
Next, after the error between the prediction and the actual category was calculated, stochastic gradient descent was used to minimize the loss function through back-propagation of the error, updating the parameters of the last two layers and completing the fine-tuning of the network. After the model converged, the optimal network parameters were obtained; these constitute the trained model.
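One stochastic-gradient step on the final softmax layer can be illustrated with a minimal NumPy sketch (the dimensions in the usage test are toy values; in the paper the layer maps 1280 features to 195 classes):

```python
import numpy as np

def sgd_step(W, b, features, label, lr=0.001):
    """One SGD update of the final classification layer: softmax forward pass,
    cross-entropy gradient, in-place parameter update. Returns the loss."""
    logits = features @ W + b
    exp = np.exp(logits - logits.max())      # stable softmax
    probs = exp / exp.sum()
    grad = probs.copy()
    grad[label] -= 1.0                       # d(loss)/d(logits) for cross-entropy
    W -= lr * np.outer(features, grad)       # back-propagate into weights
    b -= lr * grad
    return -np.log(probs[label])
```

Repeating the step on the same sample should lower the loss, mirroring the convergence described above; only the last layers' parameters are touched, matching the fine-tuning scheme.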
Finally, in the validation stage, the cow images in the validation set were input into the model for prediction, and the final identification result was obtained.

3. Experimental Results and Analysis

The hardware configuration for the experiment is as follows: the operating system is Windows 10, the CPU is Intel Core i7-7800X 3.5 GHz, the memory is 64 GB, and the graphics card is NVIDIA GTX 1080Ti 11 GB. The code is implemented using Python based on TensorFlow [31].
In this experiment, the two datasets were used to fine-tune the convolutional neural network to complete end-to-end identification of the cow rump. Accuracy, the percentage of correctly predicted samples in the validation set, was used as the evaluation indicator for individual identification. For the experimental parameters, the model was trained with a fixed learning rate of 0.001 and a batch size of 30. For base network selection, the experiment compared five typical convolutional neural network models; the identification accuracy obtained by fine-tuning the different base networks on the two datasets is shown in Table 2.
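The evaluation indicator, validation accuracy as a percentage, amounts to:

```python
def accuracy(predictions, labels):
    """Percentage of validation samples whose predicted cow ID is correct."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)
```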
From the experimental results, it can be seen that the accuracies of the base networks are relatively close even without data augmentation. As the amount of training data increases, the identification accuracy of every network rises, and the accuracies remain similar, while the model trained with Mobilenet v2 is clearly smaller than the others; one likely reason is that Mobilenet v2 uses far fewer parameters. In the end, the Mobilenet v2 model trained on the 20-fold augmented dataset achieved an identification accuracy of 99.76%. Inference on a single image takes 31.17 ms, which is comparable to the other networks thanks to its network structure; therefore, this study can achieve real-time individual identification. Although the accuracy is considerable, a small number of samples were still misidentified, mostly because of images blurred or degraded by lighting. Some misidentified cases are shown in Figure 6: (a) shows an image under glare, and (b) shows a blurred image.
To the best of our knowledge, there is almost no prior work using the cow rump for individual identification. Therefore, the proposed rump-based method was compared with methods using other related views; the identification accuracy comparison is shown in Table 3.
Table 3 shows that the numbers of object categories in the other related methods are relatively small. The work of [21], based on iris analysis, achieved an identification accuracy of 98.33%, but iris image acquisition is difficult; furthermore, that method was evaluated on only six cows, and the accuracy would be affected as the amount of experimental data increased. The authors of [23] proposed a SIFT-based method to identify the cow's side, which also achieved an identification accuracy of 98.33%. However, this traditional SIFT-based method is greatly affected by the environment, and its heavy, time-consuming computation makes real-time identification difficult in actual production environments. Moreover, in our previous research [32], we proposed a cow identification method based on the fusion of deep part features of the cow's side, which achieved an identification accuracy of 98.36% on a dataset of 93 cows, 0.03% higher than the work of [23]. The proposed cow rump identification method achieved improvements of 4.46% and 3.04% over the face-based and body-based work, respectively, showing the advantage of high accuracy.

4. Conclusions

In this work, a non-contact cow rump identification method based on convolutional neural networks was proposed. The SSD object detection model was first applied to detect the cow rump in rump image sequences collected while the cows were feeding, and a lightweight convolutional neural network model was then trained to identify the rumps. To validate the method, an image dataset containing 195 different cows was created, and individual identification experiments were performed on it. The proposed method achieved an accuracy of 99.76%, higher than methods based on other related views. Moreover, the model can detect and classify 120 images per second, enabling real-time detection and identification; at the same time, the model is light enough to be deployed on an edge-computing device. The experimental results also demonstrate the potential for our method to be applied to individualized behavior detection and intelligent analysis of dairy cows.

Author Contributions

The specific division of labor in the study is as follows: Conceptualization, H.H.; Data curation, W.S. (Wei Shi) and J.G.; Formal analysis, W.S. (Wei Shi) and J.G.; Funding acquisition, H.H. and W.S. (Weizheng Shen); Investigation, H.H., W.S. (Weizheng Shen) and S.K.; Methodology, H.H.; Project administration, H.H., W.S. (Weizheng Shen) and S.K.; Resources, H.H.; Software, W.S. (Wei Shi), J.G. and Z.Z.; Validation, W.S. (Weizheng Shen); Visualization, W.S. (Wei Shi); Writing—original draft, H.H.; Writing—review & editing, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Program of China, grant number 2019YFE0125600; National Natural Science Foundation of China, grant number 32072788; Agriculture Research System of China, grant number CARS-36; Natural Science Foundation of Heilongjiang Province, grant number F2016020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported in part by the Natural Science Foundation of Heilongjiang Province (F2016020), the National Key Research and Development Program of China (2019YFE0125600), the National Natural Science Foundation of China (32072788) and the China Agriculture Research System (CARS-36).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adell, N.; Puig, P.; Rojas-Olivares, A.; Caja, G.; Carné, S.; Salama, A.A.K. A bivariate model for retinal image identification in lambs. Comput. Electron. Agric. 2012, 87, 108–112. [Google Scholar] [CrossRef]
  2. Kumar, S.; Pandey, A.; Satwik, K.S.R.; Kumar, S.; Singh, S.K.; Singh, A.K.; Mohan, A. Deep learning framework for identification of cattle using muzzle point image pattern. Measurement 2018, 116, 1–17. [Google Scholar] [CrossRef]
  3. Zin, T.T.; Phyo, C.N.; Tin, P.; Hama, H.; Kobayashi, I. Image Technology Based Cow Identification System using Deep Learning. In Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, China, 14–16 March 2018; p. 1. [Google Scholar]
  4. Li, W.; Ji, Z.; Wang, L.; Sun, C.; Yang, X. Automatic individual identification of Holstein dairy cows using tailhead images. Comput. Electron. Agric. 2017, 142, 622–631. [Google Scholar] [CrossRef]
  5. Drach, U.; Halachmi, I.; Pnini, T.; Izhaki, I.; Degani, A. Automatic herding reduces labour and increases milking frequency in robotic milking. Biosys. Eng. 2017, 155, 134–141. [Google Scholar] [CrossRef]
  6. Phyo, C.N.; Zin, T.T.; Hama, H.; Kobayashi, I. A Hybrid Rolling Skew Histogram-Neural Network Approach to Dairy Cow Identification System. In Proceedings of the 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand, 19–21 November 2018; pp. 1–5. [Google Scholar]
  7. Gaber, T.; Tharwat, A.; Hassanien, A.E.; Snasel, V. Biometric cattle identification approach based on Weber’s Local Descriptor and AdaBoost classifier. Comput. Electron. Agric. 2016, 122, 55–66. [Google Scholar] [CrossRef] [Green Version]
  8. Wei, G.; Dongping, Q. Techniques of Radio Frequency Identification and Anti-collision in Digital Feeding Dairy Cattle. In Proceedings of the 2009 Second International Conference on Information and Computing Science, Manchester, UK, 21–22 May 2009; Volume 1, pp. 216–219. [Google Scholar]
  9. Awad, A.I. From classical methods to animal biometrics: A review on cattle identification and tracking. Comput. Electron. Agric. 2016, 123, 423–435. [Google Scholar] [CrossRef]
  10. Ng, M.L.; Leong, K.S.; Hall, D.M.; Cole, P.H. A small passive UHF RFID tag for livestock identification. In Proceedings of the IEEE International Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications, Beijing, China, 8–12 August 2005; Volume 1, pp. 67–70. [Google Scholar]
  11. Tikhov, Y.; Kim, Y.; Min, Y.H. A novel small antenna for passive RFID transponder. In Proceedings of the 2005 European Microwave Conference, Paris, France, 4–6 October 2005; Volume 1, p. 4. [Google Scholar]
  12. Jin, G.; Lu, X.; Park, M.S. An indoor localization mechanism using active RFID tag. In Proceedings of the IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC’06), Taichung, Taiwan, 5–7 June 2006; Volume 1, p. 4. [Google Scholar]
  13. Trevarthen, A.; Michael, K. The RFID-enabled dairy farm, towards total farm management. In Proceedings of the 2008 7th International Conference on Mobile Business, Barcelona, Spain, 7–8 July 2008; pp. 241–250. [Google Scholar]
  14. Voulodimos, A.S.; Patrikakis, C.Z.; Sideridis, A.B.; Ntafis, V.A.; Xylouri, E.M. A complete farm management system based on animal identification using RFID technology. Comput Electron. Agric. 2010, 70, 380–388. [Google Scholar] [CrossRef]
  15. Gygax, L.; Neisen, G.; Bollhalder, H. Accuracy and validation of a radar-based automatic local position measurement system for tracking dairy cows in free-stall barns. Comput. Electron. Agric. 2007, 56, 23–33. [Google Scholar] [CrossRef]
  16. Kuan, C.Y.; Tsai, Y.C.; Hsu, J.T.; Ding, S.T.; Te Lin, T. An Imaging System Based on Deep Learning for Monitoring the Feeding Behavior of Dairy Cows. In Proceedings of the 2019 ASABE Annual International Meeting, American Society of Agricultural and Biological Engineers. Boston, MA, USA, 7–10 July 2019; p. 1. [Google Scholar]
  17. Kuan, C.Y.; Tsai, Y.C.; Hsu, J.T.; Ding, S.T.; Te Lin, T. An Improved Single Shot Multibox Detector Method Applied in Body Condition Score for Dairy Cows. Animals 2019, 9, 470. [Google Scholar]
  18. Wu, D.; Wu, Q.; Yin, X.; Jiang, B.; Wang, H.; He, D.; Song, H. Lameness detection of dairy cows based on the YOLOv3 deep learning algorithm and a relative step size characteristic vector. Biosyst. Eng. 2020, 189, 150–163. [Google Scholar] [CrossRef]
  19. Kumar, S.; Singh, S.K. Cattle identification, A New Frontier in Visual Animal Biometrics Research. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2019, 90, 689–708. [Google Scholar] [CrossRef]
  20. Cai, C.; Li, J. Cattle face identification using local binary pattern descriptor. In Proceedings of the 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference. Kaohsiung, Taiwan, 29 October–1 November 2013; pp. 1–4. [Google Scholar]
  21. Lu, Y.; He, X.; Wen, Y.; Wang, P.S. A new cow identification system based on iris analysis and recognition. Int. J. Biomet. 2014, 6, 18. [Google Scholar] [CrossRef]
  22. Zhao, K.; Jin, X.; Ji, J.; Wang, J.; Ma, H.; Zhu, X. Individual identification of Holstein dairy cows based on detecting and matching feature points in body images. Biosyst. Eng. 2019, 181, 128–139. [Google Scholar] [CrossRef]
  23. Lv, F.; Zhang, C.; Lv, C. Image identification of individual cow based on SIFT in Lαβ color space. Proc. MATEC Web Conf. EDP Sci. 2018, 176, 01023. [Google Scholar] [CrossRef]
  24. Okura, F.; Ikuma, S.; Makihara, Y.; Muramatsu, D.; Nakada, K.; Yagi, Y. RGB-D video-based individual identification of dairy cows using gait and texture analyses. Comput. Electron. Agric. 2019, 165, 104944. [Google Scholar] [CrossRef]
  25. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2015. [Google Scholar]
  26. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In CVPR; IEEE Computer Society: Los Alamitos, CA, USA, 2017; Volume 1, p. 3. [Google Scholar]
  27. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  28. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  29. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  30. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2, Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  31. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow, Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
  32. Hu, H.; Dai, B.; Shen, W.; Wei, X. Cow identification based on fusion of deep parts features. Biosyst. Eng. 2020, 192, 245–256. [Google Scholar] [CrossRef]
Figure 1. Some research focus for cow identification. (a) Muzzle, (b) face, (c) iris, (d) body, (e) cow’s side and (f) gait.
Figure 2. Layout of the barn.
Figure 3. Flowchart of the cow rump identification method.
Figure 4. An example of rump object detection.
Figure 5. Bottleneck depth-separable convolution structure.
Figure 6. Incorrect detection cases.
Table 1. Structure of Mobilenet v2.

| Input | Operator | t | c | n | s |
|---|---|---|---|---|---|
| 224² × 3 | Conv2d | - | 32 | 1 | 2 |
| 112² × 32 | Bottleneck | 1 | 16 | 1 | 1 |
| 112² × 16 | Bottleneck | 6 | 24 | 2 | 2 |
| 56² × 24 | Bottleneck | 6 | 32 | 3 | 2 |
| 28² × 32 | Bottleneck | 6 | 64 | 4 | 2 |
| 14² × 64 | Bottleneck | 6 | 96 | 3 | 1 |
| 14² × 96 | Bottleneck | 6 | 160 | 3 | 2 |
| 7² × 160 | Bottleneck | 6 | 320 | 1 | 1 |
| 7² × 320 | Conv2d 1 × 1 | - | 1280 | 1 | 1 |
| 7² × 1280 | Avgpool 7 × 7 | - | - | 1 | - |
| 1 × 1 × 1280 | Conv2d 1 × 1 | - | 195 | - | - |
Table 2. Identification accuracy comparison of cow rump.

| Base Network | Accuracy (%), Original Dataset | Accuracy (%), 20× Augmented Dataset | Model Size (M) | Reasoning Time (Images/s) |
|---|---|---|---|---|
| Mobilenet v2 | 97.28 | 99.76 | 9.25 | 193 |
| AlexNet | 96.85 | 99.60 | 226.02 | 92 |
| GoogLeNet | 95.29 | 99.68 | 40.97 | 95 |
| VGG-16 | 97.70 | 99.80 | 519.10 | 88 |
| ResNet-50 | 95.22 | 98.89 | 18.01 | 109 |
Table 3. Identification accuracy comparison of cow rump and other related views.

| Method | Accuracy (%) | Object Categories | Region of Interest (ROI) |
|---|---|---|---|
| Cheng Cai and Jianqiao Li (2013) [20] | 95.30 | 30 | Face |
| Lu Y et al. (2016) [21] | 98.33 | 6 | Iris |
| Zhao, Jin, Ji, Wang, Ma, Zhu (2019) [22] | 96.72 | 66 | Body |
| Feng Lv, Chunmei Zhang, and Changwei Lv (2018) [23] | 98.33 | 60 | Cow's side |
| CNN for cow rump (ours) | 99.76 | 195 | Rump |

Hou, H.; Shi, W.; Guo, J.; Zhang, Z.; Shen, W.; Kou, S. Cow Rump Identification Based on Lightweight Convolutional Neural Networks. Information 2021, 12, 361. https://0-doi-org.brum.beds.ac.uk/10.3390/info12090361