Article

Deep Learning Applied to Defect Detection in Powder Spreading Process of Magnetic Material Additive Manufacturing

Hsin-Yu Chen, Ching-Chih Lin, Ming-Huwi Horng, Lien-Kai Chang, Jian-Han Hsu, Tsung-Wei Chang, Jhih-Chen Hung, Rong-Mao Lee and Mi-Ching Tsai
1 Department of Mechanical Engineering, National Cheng Kung University, Tainan 701, Taiwan
2 Department of Computer Science and Information, National Pingtung University, Pingtung 900, Taiwan
3 Department of Intelligent Robotics, National Pingtung University, Pingtung 900, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 29 June 2022 / Revised: 5 August 2022 / Accepted: 15 August 2022 / Published: 17 August 2022

Abstract

Due to its advantages of high customization and rapid production, metal additive manufacturing (MAM) has been widely applied in the medical, manufacturing, aerospace and boutique industries in recent years. However, defects can result from thermal stress or hardware failure during the selective laser melting (SLM) manufacturing process. To improve product quality, defect detection during manufacturing is necessary. This study uses the process images recorded by powder bed fusion equipment to develop a detection method based on convolutional neural networks, covering three powder-spreading defect types: powder uneven, powder uncovered and recoater scratches. A two-stage convolutional neural network (CNN) model performs the detection and segmentation of defects: the first stage uses EfficientNet B7 to classify images as with or without defects, and the second stage locates the defects by evaluating three different instance segmentation networks. Experimental results show that the accuracy and Dice measurement of the Mask R-CNN network with a ResNet 152 backbone reach 0.9272 and 0.9438, and the computation for one image takes only approximately 0.2197 s. The CNN model thus meets the requirement of detecting defects early in the SLM manufacturing process.

1. Introduction

Metal additive manufacturing is a technique that stacks materials layer by layer to manufacture a wide variety of products. It exploits the properties of metal powder and the diversity of manufacturing processes to produce complex workpieces. This paper mainly discusses the selective laser melting (SLM) process, in which a high-power laser heats the metal powder in the processing area to its melting point; the powder melts and solidifies almost instantly, and the melted layers are superimposed into a solid to form a three-dimensional part. Nowadays, SLM produces products of excellent quality; the process is complex, however, and the quality of the finished product is affected by many external factors, including laser parameters, scanning speed, environmental conditions, and material selection [1,2,3,4].
In this study, a comparative analysis between FeSiCr and silicon steel sheets, a material popularly used in electric motor construction, shows that FeSiCr has lower core loss and eddy-current loss at high frequency. However, FeSiCr powder in the SLM process is prone to uneven powder distribution, agglomeration and over-melting, so defect detection is required to avoid errors and improve the process yield.
For these reasons, it is necessary to monitor the SLM process. Traditional flaw detection mainly integrates external devices rather than computer vision. Craeghs uses a powder-spreading device to detect defects from the difference in speed between general metal powder and melted, deformed material [5]. Craeghs also uses the gray-scale values of powder bed fusion (PBF) images to identify and evaluate possible recoater damage [6]. Li uses different molten-pool parameters to determine the current processing situation [7]. In other methods, Jacobsmuhlen uses X-rays for defect detection [8], while Kirka uses thermal signals as the basis of his defect detection method [9]. Cao uses light-emitting diode irradiation to obtain images at different incident angles for defect detection [10]. The objective of using optical irradiation is to obtain spectral images as well as the profile and porosity of the powder bed fusion [11,12].
Traditional detection methods usually lack efficiency and specificity; detecting two or more types of defects requires extra sensing devices such as X-ray or ultrasonic instruments. Lin uses traditional image processing for defect image segmentation, combined with a multilayer perceptron (MLP) and a support vector machine (SVM) for defect detection [13]. This work focuses on image classification with increased-impurity and scraper-damage defects, reporting classification accuracies of 98.33% and 97.5% for the MLP [14] and SVM [15], respectively. Scime uses a multi-feature-layer convolutional neural network for defect detection, classifying large, medium, and small defects [16]. Later, the same authors improved this method in [17] by adding a Unet model [18] to the multi-feature classification proposed in [16], which improved defect classification through image segmentation. In [16,17], six powder bed fusion anomaly classes are chosen: recoater hopping, recoater streaking, debris, super-elevation, part damage, and incomplete spreading. The resulting classification accuracy is 75% [16]; the test true-positive rate is 0.84 [17]. These articles are limited to image classification of different defect types, without further discussion of defect localization and segmentation.
In this paper, a two-stage deep learning model is proposed to detect three different types of defects. The first stage classifies images into with-defect and without-defect groups using the EfficientNet B7 model [19]. If an image belongs to the defective group, it is further segmented and the defect type is classified. The second stage applies Mask R-CNN [20], YOLO [21,22] and YOLACT [23] to instance segmentation of the defect area pixel by pixel, and then selects the best model. The experimental results, as well as comparisons with other methods, reveal that the Mask R-CNN used in the second stage is more precise and faster.

2. Materials and Methods

2.1. Experiment Setup

The images used in this study are recorded by the CCDs of the ITRI AM100 Laser-PBF equipment (Industrial Technology Research Institute (ITRI), Taiwan). The image recording device in the AM100 is a GS3-PGE-91S6 camera (Teledyne FLIR LLC, Wilsonville, OR, USA) with an image size of 3376 × 2704 pixels. An equivalent pixel size of 0.0374 mm/pixel is obtained through camera calibration and perspective transformation. Image acquisition takes 0.078 s.

2.1.1. Materials

The powder sample in this paper is made from a commercial FeSiCr magnetic material. The powder's chemical composition and particle-size distribution are listed in Table 1. The soft magnetic composite FeSiCr material is analyzed for its beneficial electromagnetic properties and unique 3D-forming capacity [24]. A microscopic and spectroscopic experiment performed on FeSiCr after the SLM process showed that FeO is the oxide layer responsible for the unique advantages of this material.

2.1.2. Camera Calibration and Perspective Transformation

The aim of camera calibration is to establish the correspondence between world and image coordinates. More precisely, the camera's intrinsic matrix, extrinsic matrix, and distortion parameters need to be determined. In general, the extrinsic matrix converts world coordinates to camera coordinates, and the intrinsic matrix converts camera coordinates to image coordinates.
The association between image coordinates and world coordinates given by camera calibration is defined in Equation (1) [25].
$$\begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} = \begin{pmatrix} \alpha & \gamma & x_0 \\ 0 & \beta & y_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} R_{2\times2} & t_{2\times1} \\ 0 & 1 \end{pmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}, \quad (1)$$
where the first and second matrices on the right-hand side of Equation (1) are the intrinsic and extrinsic matrices, respectively. The intrinsic matrix is composed of camera parameters: $x_w, y_w$ are world coordinates; $x_c, y_c$ are camera coordinates; $\alpha$ and $\beta$ are the scale factors along the image $x$ and $y$ axes, respectively; and $\gamma$ describes the skewness of the two image axes. The extrinsic matrix is composed of the rotation and displacement required to convert world coordinates into camera coordinates, where $R_{2\times2}$ is a $2 \times 2$ rotation matrix and $t_{2\times1}$ is a displacement vector.
Moreover, two kinds of distortion significantly affect the images. One is radial distortion, which causes straight lines to appear curved and is defined in Equation (2). The other is tangential distortion, which occurs because the lens is not perfectly parallel to the imaging plane; it is defined in Equation (3).
$$\begin{cases} x_{rad\_distorted} = x(1 + k_1 r^2 + k_2 r^4 + k_5 r^6) \\ y_{rad\_distorted} = y(1 + k_1 r^2 + k_2 r^4 + k_5 r^6) \end{cases}, \quad (2)$$
$$\begin{cases} x_{tan\_distorted} = x + \left[2 k_3 xy + k_4 (r^2 + 2x^2)\right] \\ y_{tan\_distorted} = y + \left[k_3 (r^2 + 2y^2) + 2 k_4 xy\right] \end{cases}, \quad (3)$$
where $k_1, k_2, k_3, k_4, k_5$ are distortion parameters influenced by the extrinsic and intrinsic matrices, and $r^2 = x^2 + y^2$.
In this paper, we use a checkerboard as the calibration board, shown in Figure 1b; the original image is shown in Figure 1a, and the calibrated image in Figure 1c. As Figure 1c shows, the calibrated image is still not flat. Therefore, we need to convert it with a transformation matrix, given in Equation (4).
$$\begin{bmatrix} x \\ y \end{bmatrix} = M \begin{bmatrix} x_c \\ y_c \end{bmatrix}, \quad (4)$$
where $x, y$ are the corrected coordinates, $M$ is the perspective transformation matrix, and $x_c, y_c$ denote the original coordinates. The effect of the perspective transformation is shown in Figure 1d.
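The full correction pipeline of Equations (1)–(4) can be prototyped with OpenCV. The following is a minimal sketch under assumed inputs: the file names, the 9 × 6 checkerboard pattern, and the powder-bed corner points are illustrative, not values from the AM100 setup. Note that OpenCV estimates the five distortion coefficients of Equations (2) and (3) in the order (k1, k2, p1, p2, k3), with p1, p2 the tangential terms.

```python
import cv2
import numpy as np

# Assumed 9 x 6 inner-corner checkerboard; world points lie on the z = 0 plane.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib_01.png", "calib_02.png"]:      # hypothetical calibration shots
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and distortion coefficients, cf. Equations (1)-(3).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort a layer image (Figure 1c), then flatten the remaining perspective (Eq. (4)).
img = cv2.undistort(cv2.imread("layer_0001.png"), K, dist)
src = np.float32([[100, 80], [3280, 90], [3300, 2620], [90, 2600]])  # assumed bed corners
dst = np.float32([[0, 0], [3376, 0], [3376, 2704], [0, 2704]])
M = cv2.getPerspectiveTransform(src, dst)
flat = cv2.warpPerspective(img, M, (3376, 2704))                     # Figure 1d
```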

2.1.3. Defect Definition

Three major elements affect SLM processing: the laser beam, the metal powder, and the recoater. Improper laser-beam parameters cause residual thermal stress, which further results in warping, deformation, or breaking of the workpiece. These situations produce several kinds of defects during powder spreading. Areas left bare of powder are defined as the powder uncover class. Areas with uneven powder coverage during spreading are defined as the powder uneven class. A large powder uncover defect can damage the recoater, generating vertical scratches; these scratches are grouped into the recoater scratch class. To sum up, three types of defects are defined: powder uncover, shown in Figure 2a; powder uneven, shown in Figure 2b; and recoater scratch, shown in Figure 2c.

2.2. Used Deep Learning Models

Two consecutive deep learning models are used to detect and locate defects. The first stage determines whether an acquired image contains defects. An image without defects is ignored (see Figure 2d); otherwise, it is fed into the next stage, which locates the three different types of defects. The complete procedure is shown in Figure 3 and sketched in the code below.
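A minimal sketch of this two-stage flow follows; `classifier` and `segmenter` are placeholder callables standing in for the trained EfficientNet B7 and the chosen instance segmentation network, and the 0.5 threshold is an assumption for illustration.

```python
def detect_defects(image, classifier, segmenter, threshold=0.5):
    """Two-stage detection: reject flawless layers early, segment the rest."""
    p_defect = classifier(image)   # stage 1: probability that the layer is defective
    if p_defect < threshold:
        return None                # flawless layer (Figure 2d): skip stage 2
    return segmenter(image)        # stage 2: per-pixel masks and defect classes
```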

2.2.1. EfficientNet

EfficientNet [19] is used as the first-stage model for classifying images as with or without defects. Past deep learning models modified only a single dimension of depth, width or resolution, which tends to lead to a performance bottleneck on the ImageNet dataset. EfficientNet uses Equation (5) to find the optimal depth, width and resolution of the neural network by scaling all three dimensions jointly to achieve the best performance.
$$N(d, w, r) = \bigodot_{i=1}^{s} F_i^{L_i}\left(X_{\langle H_i, W_i, C_i \rangle}\right), \quad (5)$$
where $F_i^{L_i}$ denotes layer $F_i$ repeated $L_i$ times in stage $i$, and $\langle H_i, W_i, C_i \rangle$ denotes the shape of the input tensor $X$ of that layer. The convolutional network $N$ is represented as a composition of layers, $N = F_k \odot \cdots \odot F_2 \odot F_1$, and $s$ denotes the number of stages. Equation (5) is used to find the $H_i$, $W_i$, $C_i$ that yield maximum accuracy.
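A two-class EfficientNet B7 of this kind can be instantiated as below. This sketch assumes the torchvision implementation (the paper does not state which implementation was used), whose B7 head produces a 2560-dimensional feature vector.

```python
import torch
import torch.nn as nn
from torchvision import models

# EfficientNet B7 pretrained on ImageNet, with the 1000-way head replaced
# by a 2-way with/without-defect classifier.
model = models.efficientnet_b7(weights=models.EfficientNet_B7_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(in_features=2560, out_features=2)

logits = model(torch.randn(1, 3, 600, 600))  # 600 x 600 is B7's native input size
probs = torch.softmax(logits, dim=-1)        # [P(flawless), P(defect)]
```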

2.2.2. Mask R-CNN

Mask R-CNN is a two-stage detection network that includes a feature extraction model, a region proposal network, and a prediction head, as shown in Figure 4. In the first stage, feature extraction is implemented with a feature pyramid network (FPN) [26] on a ResNet 101 or ResNet 152 backbone [27]. In the second stage, the region proposal network (RPN) generates candidate objects from the features; region of interest (ROI) alignment matches the candidate objects proposed by the RPN with the multi-scale features of the backbone. Finally, the classification and regression branches of the prediction head infer the object position and class, and the mask branch determines the object mask within the corresponding bounding box.
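Since the experiments run on Detectron2 [28], a configuration along the following lines reproduces the described architecture. This is a sketch: the model zoo ships an R-101 FPN Mask R-CNN, a ResNet 152 backbone would have to be registered separately, and the weight-file path is hypothetical.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# FPN-based Mask R-CNN with a ResNet 101 backbone from the Detectron2 model zoo.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3    # powder uneven, powder uncover, recoater scratch
cfg.MODEL.WEIGHTS = "model_final.pth"  # hypothetical path to trained weights

predictor = DefaultPredictor(cfg)
image = cv2.imread("layer_0001.png")     # calibrated layer image (BGR)
outputs = predictor(image)["instances"]  # boxes, classes, and per-pixel masks
```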

2.2.3. Loss Functions for Mask R-CNN

The commonly used loss functions for the classification, regression and mask branches of Mask R-CNN are defined in Equations (6)–(9).
$$L_{box} = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases} \quad (6)$$
$$L_{cls} = -\sum_{i=1}^{n} y_i \log(\hat{y}_i) \quad (7)$$
$$L_{mask} = -\sum_{i=1}^{n} \left[\hat{y}_i \log y_i + (1 - \hat{y}_i) \log(1 - y_i)\right] \quad (8)$$
$$L_{MaskRCNN} = L_{box} + L_{cls} + L_{mask} \quad (9)$$
where $L_{box}$ and $L_{cls}$ are the same as in Faster R-CNN: $L_{box}$ is the loss for bounding-box regression, and $L_{cls}$ is the loss for classification. $L_{mask}$ is the binary cross-entropy used to classify whether a pixel is defective, where $y_i$ is the target value and $\hat{y}_i$ is the predicted value.
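The three terms map directly onto standard PyTorch primitives; the following is a sketch of Equations (6)–(9), not the Detectron2 internals.

```python
import torch.nn.functional as F

def mask_rcnn_loss(box_pred, box_gt, cls_logits, cls_gt, mask_logits, mask_gt):
    """Total loss of Eq. (9) as the sum of the three branch losses."""
    l_box = F.smooth_l1_loss(box_pred, box_gt)    # Eq. (6): smooth L1, beta = 1
    l_cls = F.cross_entropy(cls_logits, cls_gt)   # Eq. (7): classification loss
    l_mask = F.binary_cross_entropy_with_logits(  # Eq. (8): per-pixel binary CE
        mask_logits, mask_gt)
    return l_box + l_cls + l_mask                 # Eq. (9)
```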

2.2.4. Metrics

In this paper, the mean average precision (mAP) and the Dice coefficient are employed to quantify the performance of the algorithms. The precision, recall, average precision (AP) and mAP are defined in Equations (10)–(13), respectively.
$$precision = \frac{TP}{TP + FP} \quad (10)$$
$$recall = \frac{TP}{TP + FN} \quad (11)$$
$$AP = \int_0^1 P(R)\, dR \quad (12)$$
$$mAP = \frac{1}{C} \sum_{i=1}^{C} AP_i \quad (13)$$
where TP (true positive) denotes the number of defects that are correctly detected, FP (false positive) denotes the number of normal objects that are identified as defects, FN (false negative) denotes the number of defects that are classified as normal, and C is the number of classes to be detected.
Next, we define the Dice coefficient in Equation (14); expressed with a binary confusion matrix, it is equivalent to the F1 score, shown in Equation (15).
$$Dice\ Coefficient = \frac{2\,|pred \cap gt|}{|pred| + |gt|}, \quad (14)$$
where pred denotes the prediction mask and gt denotes the ground truth.
$$F1 = \frac{2TP}{2TP + FP + FN} \quad (15)$$
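Both quantities are straightforward to compute from binary masks; a NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Eq. (14): 2|pred AND gt| / (|pred| + |gt|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def f1_score(tp, fp, fn):
    """Eq. (15): identical to the Dice coefficient on a binary confusion matrix."""
    return 2.0 * tp / (2.0 * tp + fp + fn)
```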

3. Experimental Results

In this section, three important system requirements are addressed as follows:
  • The classification accuracy for images with/without defects must exceed 0.95.
  • The computation time for detecting the three types of defects must be below 2.0 s.
  • The defect segmentation performance, measured by the Dice coefficient between ground-truth and predicted defects, must be at least 0.90.

3.1. Environment and Data

The training environment is described as follows:
  • Software: Linux Ubuntu 20.04 LTS, Python 3.9, PyTorch 1.10 with Detectron2 [28];
  • Hardware: a personal computer with an Intel Core i9-10900K CPU, 64 GB of RAM and an Nvidia GeForce RTX 3090 GPU.
The whole dataset retrieved from ITRI is shown in Table 2. The proposed model described in Section 2 has two stages: (1) classification of the image as with or without flaws, and (2) segmentation of the corresponding defect areas.

3.2. Defect Image Classification

The model used is EfficientNet B7; the dataset split is shown in Table 3. In this stage, we train for about 50 epochs using Adam as the optimizer. The learning rate is set to 0.001 initially and drops to 0.0001 after 20 epochs; this schedule is sketched in the code below. In total, EfficientNet B7 uses 66,365,975 parameters. The confusion matrix is given in Table 4 and other performance metrics are listed in Table 5; the corresponding precision-recall (PR) curve is shown in Figure 5. These results meet the system's requirements.
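The stated schedule corresponds to a standard step decay. A sketch of the loop follows, reusing the `model` from the EfficientNet sketch in Section 2.2.1; `train_loader` is an assumed PyTorch DataLoader over the training set of Table 3.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Drop the learning rate from 0.001 to 0.0001 after 20 epochs, as stated above.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20], gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(50):
    for images, labels in train_loader:   # train_loader: assumed DataLoader
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```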

3.3. Defect Segmentation

This study uses four-fold cross-validation on the dataset (see Table 6) to measure defect detection and segmentation. We train for about 60,000 iterations and define anchors with IoU scores greater than 0.43 as positive. The SGD method is used as the optimizer, with an initial learning rate of 0.001 that decays by one tenth every 10,000 iterations (see the configuration sketch at the end of this subsection). The anchor sizes are set to [4, 8, 16, 48, 96, 216, 480, 720, 860]; the aspect ratios are [0.1, 0.2, 0.5, 1, 2, 5, 10, 25, 50, 60, 70]. Four different CNN models are used to segment the defects: Mask R-CNN with a ResNet 101 backbone, Mask R-CNN with a ResNet 152 backbone, YOLACT, and YOLOv3+Unet. The first three are instance segmentation models; the last, YOLOv3+Unet, uses YOLOv3 to detect the defect areas and then segments the defects with the Unet model. Table 7, Table 8, Table 9 and Table 10 show the performance indices of defect detection and segmentation. In these tables, the Dice coefficient measures the defect segmentation; mAP, APuneven, APuncover and APscratch are the mean precision over all defects and the average precisions for powder uneven, powder uncover, and recoater scratch, respectively. The Dice coefficients and mAPs of the four methods are 0.8934 and 0.8627 (Mask R-CNN with ResNet 101 backbone), 0.9438 and 0.9272 (Mask R-CNN with ResNet 152 backbone), 0.8475 and 0.8544 (YOLACT), and 0.9342 and 0.9187 (YOLOv3+Unet). The Mask R-CNN with ResNet 152 backbone is superior to the other three methods. Figure 6 shows sample results of the Mask R-CNN predictions.
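In Detectron2 terms, the solver and anchor settings above translate roughly as follows, continuing from the configuration sketch in Section 2.2.2. This is a sketch: the exact keys the authors set are not given in the paper, and the 0.3 negative IoU threshold is Detectron2's default, since the text only states the positive threshold.

```python
# Solver: SGD, 60,000 iterations, learning rate 0.001 decaying 10x every 10,000 iterations.
cfg.SOLVER.BASE_LR = 0.001
cfg.SOLVER.MAX_ITER = 60000
cfg.SOLVER.STEPS = (10000, 20000, 30000, 40000, 50000)
cfg.SOLVER.GAMMA = 0.1

# Anchors: a single size/ratio list is broadcast to every FPN level.
cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[4, 8, 16, 48, 96, 216, 480, 720, 860]]
cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [
    [0.1, 0.2, 0.5, 1, 2, 5, 10, 25, 50, 60, 70]]
cfg.MODEL.RPN.IOU_THRESHOLDS = [0.3, 0.43]  # [negative, positive]: IoU > 0.43 is positive
```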

4. Discussion and Conclusions

As shown in Table 5 and Table 8, the accuracy and Dice coefficient of Mask R-CNN with the ResNet 152 backbone meet the system's requirements. Moreover, the frame rate of 8.61 FPS is much faster than the system's expectation. On average, the computation for one image, including image classification and defect segmentation, takes approximately 0.130 s (about 7.80 FPS). Additionally, image acquisition and data transmission take about 0.0897 s. In total, the average computation time for one image is about 0.2197 s.
Additionally, Table 7, Table 8, Table 9 and Table 10 show that APuncover does not perform as well as the other two types of defects. The powder uncover defects, which result from improper residual thermal stress, are small (less than 32 pixels² in area). Although we use small anchors with sizes of 4 and 8 to address the small-defect problem, APuncover only reaches 0.9046.
Compared with [13], our method can be applied directly, without adjusting a brightness threshold and without morphological dilation and erosion. Compared with [16,17], our method has better performance and efficiency, regardless of the yield of the pre-processing procedures.
In the future, the proposed method can be extended to implement a corrective procedure once defects have been detected during the MAM process. Although the Mask R-CNN with ResNet 152 backbone met the system's requirements for computation time and detection accuracy, applying other CNN models such as EfficientDet [29] and CenterNet [30] to develop a more powerful model for the powder-spreading process remains an interesting avenue for further research.

Author Contributions

Conceptualization, C.-C.L. and M.-H.H.; methodology, M.-H.H. and T.-W.C.; software, H.-Y.C., J.-C.H., R.-M.L. and J.-H.H.; validation, M.-C.T.; formal analysis, M.-C.T.; investigation, M.-C.T.; resources, C.-C.L.; data curation, L.-K.C.; writing—original draft preparation, H.-Y.C. and M.-H.H.; writing—review and editing, M.-C.T.; visualization, H.-Y.C.; supervision, M.-C.T.; project administration, M.-H.H.; funding acquisition, M.-C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Science and Technology Council (NSTC), Taiwan, under Grants MOST 109-2221-E-006-087-MY3, MOST 110-2218-E-006-028 and MOST 111-2218-E-006-013.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, S.; Yang, W.; Shi, X.; Li, B.; Duan, S.; Guo, H.; Guo, J. Influence of laser process parameters on the densification, microstructure, and mechanical properties of a selective laser melted AZ61 magnesium alloy. J. Alloys Compd. 2019, 808, 151160.
  2. Giganto, S.; Zapico, P.; Castro-Sastre, Á.; Martínez-Pellitero, S.; Leo, P.; Perulli, P. Influence of the scanning strategy parameters upon the quality of the SLM parts. Procedia Manuf. 2019, 41, 698–705.
  3. Abe, F.; Santos, E.C.; Kitamura, Y.; Osakada, K.; Shiomi, M. Influence of forming conditions on the titanium model in rapid prototyping with the selective laser melting process. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2003, 217, 119–126.
  4. Reinarz, B.; Witt, G. Process monitoring in the laser beam melting process-Reduction of process breakdowns and defective parts. Proc. Mater. Sci. Technol. 2012, 2012, 9–15.
  5. Craeghs, T.; Clijsters, S.; Yasa, E.; Kruth, J. Online quality control of selective laser melting. In Proceedings of the 20th Solid Freeform Fabrication (SFF) Symposium, Austin, TX, USA, 8–10 August 2011.
  6. Li, Z.; Liu, X. In situ 3D monitoring of geometric signatures in the powder-bed-fusion additive manufacturing process via vision sensing methods. Sensors 2018, 18, 1180.
  7. Jacobsmuhlen, J.Z.; Kleszczynski, S.; Witt, G.; Merhof, D. Detection of elevated regions in surface images from laser beam melting processes. In Proceedings of the IECON 2015-41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 1270–1275.
  8. Kirka, M.; Rose, D.; Halsey, W.; Ziabari, A.; Paquit, V.; Dehoff, R.; Brackman, P. Analysis of data streams for qualification and certification of Inconel 738LC airfoils processed through electron beam melting. ASTM Int. 2019, 492–501.
  9. Hu, Y.N.; Wu, S.C.; Withers, P.J.; Zhang, J.; Bao, H.Y.X.; Fu, Y.N.; Kang, G.Z. The effect of manufacturing defects on the fatigue life of selective laser melted Ti-6Al-4V structures. Mater. Des. 2020, 192, 108708.
  10. Cao, L.; Zhou, Q.; Han, Y.; Song, B.; Nie, Z.; Xiong, Y.; Xia, L. Review on intelligent monitoring and process control of defects in laser selective melting additive manufacturing. Acta Aeronaut. Astronaut. Sin. 2021, 42, 1–37.
  11. Wu, S.; Dou, W.; Yang, Y. Research progress of detection technology for laser selective melting metal additive manufacturing. Precis. Form. Eng. 2019, 37–50.
  12. Xiao, G.; Li, Y.; Xia, Q.; Cheng, X.; Chen, W. Research on the on-line dimensional accuracy measurement method of conical spun workpieces based on machine vision technology. Measurement 2019, 148, 106881.
  13. Lin, Z.; Lai, Y.; Pan, T.; Zhang, W.; Zheng, J.; Ge, X. A new method for automatic detection of defects in selective laser melting based on machine vision. Materials 2021, 14, 4175.
  14. Singh, G.; Sachan, M. Multi-layer perceptron (MLP) neural network technique for offline handwritten Gurmukhi character recognition. In Proceedings of the IEEE International Conference on Computational Intelligence and Computing Research, Coimbatore, India, 7 September 2015.
  15. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Appl. 1998, 13, 18–28.
  16. Scime, L.; Beuth, J. A multi-scale convolutional neural network for autonomous anomaly detection and classification in a laser powder bed fusion additive manufacturing process. Addit. Manuf. 2018, 24, 273–286.
  17. Scime, L.; Siddel, D.; Baird, S.; Paquit, V. Layer-wise anomaly detection and classification for powder bed additive manufacturing processes: A machine-agnostic algorithm for real-time pixel-wise semantic segmentation. Addit. Manuf. 2020, 36, 101453.
  18. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
  19. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019.
  20. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
  21. Lawal, M.O. Tomato detection based on modified YOLOv3 framework. Sci. Rep. 2021, 11, 1447.
  22. Roy, A.M.; Bose, R.; Bhaduri, J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Comput. Appl. 2022, 34, 3895–3921.
  23. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. YOLACT: Real-time instance segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 9156–9165.
  24. Jhong, K.-J.; Chang, T.-W.; Lee, W.-H.; Tsai, M.-C.; Jiang, I.-H. Characteristic of high frequency Fe-Si-Cr material for motor application by selective laser melting. AIP Adv. 2019, 9, 035317.
  25. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  26. Lin, T.-Y.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 936–944.
  27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  28. Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.-Y.; Girshick, R. Detectron2. 2019. Available online: https://github.com/facebookresearch/detectron2#citing-detectron (accessed on 10 January 2021).
  29. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
  30. Wang, R.; Cheung, C.-F. CenterNet-based defect detection for additive manufacturing. Expert Syst. Appl. 2022, 188, 116000.
Figure 1. Camera calibration: (a) original image, (b) checkerboard for calibration, (c) image fetched from the calibrated camera, (d) image after perspective transformation.
Figure 2. Three kinds of defects: (a) powder uncovered, (b) powder uneven, (c) recoater scratch and (d) flawless image. The flawless image is a normal case in the powder-spreading process.
Figure 3. Flow chart of the defect detection system.
Figure 4. Mask R-CNN architecture.
Figure 5. PR curve for image classification.
Figure 6. Samples of Mask R-CNN predictions.
Table 1. Chemical composition and particle-size distribution of FeSiCr powder [24].

Material: Fe-Si3.5-Cr4.5
Chemical composition: Si 3.47 wt%; Cr 4.44 wt%; O 2440 ppm
Particle-size distribution: D10 = 15 μm; D50 = 34 μm; D90 = 84 μm
Table 2. Statistics of all data.

Type                 Total
Flawless (normal)    900
Powder Uncover       426
Powder Uneven        229
Recoater Scratch     239
Table 3. Dataset distribution for classification.

Type                 Total   Training Set   Validation Set   Test Set
Flawless (normal)    900     540            180              180
Defect               894     538            178              178
Table 4. Confusion matrix for image classification.

                           Ground Truth
                           Normal   Defects
Prediction   Normal        178      1
             Defects       2        177
Table 5. Performance metrics of the proposed methodology.

Metrics   Accuracy   TP      Precision   Recall   F1 Score   FPS
Value     99.16      98.89   99.45       98.91    99.16      71.91
Table 6. Cross-validation dataset distribution for segmentation.

Fold      Fold 1   Fold 2   Fold 3   Fold 4
Images    245      245      245      244
Table 7. Four-fold cross-validation results of Mask R-CNN with the ResNet 101 backbone, where FPS denotes frames per second. In total, the Mask R-CNN uses 69,188,563 parameters.

Fold      Dice (%)   mAP (%)   AP50 (%)   AP75 (%)   APuneven (%)   APuncover (%)   APscratch (%)   FPS
Fold 1    91.24      88.47     96.65      94.81      92.17          79.94           97.87           9.27
Fold 2    88.32      84.48     96.83      93.63      91.67          78.47           98.47           9.27
Fold 3    90.42      87.64     97.68      96.51      90.47          77.46           96.52           9.27
Fold 4    87.38      84.49     96.48      92.49      92.05          78.85           98.14           9.27
Average   89.34      86.27     96.91      94.36      91.59          78.68           97.75           9.27
Table 8. Four-fold cross-validation results of Mask R-CNN with the ResNet 152 backbone, where FPS denotes frames per second. In total, the model uses 101,188,563 parameters.

Fold      Dice (%)   mAP (%)   AP50 (%)   AP75 (%)   APuneven (%)   APuncover (%)   APscratch (%)   FPS
Fold 1    95.78      93.84     98.20      97.34      93.79          91.25           98.91           8.61
Fold 2    93.13      92.89     98.75      95.72      92.46          89.98           97.92           8.61
Fold 3    93.98      92.37     99.67      98.26      94.12          90.98           98.17           8.61
Fold 4    94.93      91.78     89.60      96.60      94.51          89.63           96.92           8.61
Average   94.38      92.72     98.97      96.98      93.72          90.46           97.98           8.61
Table 9. Four-fold cross-validation results of the YOLACT model, where FPS denotes frames per second. In total, the model uses 43,286,432 parameters.

Fold      Dice (%)   mAP (%)   AP50 (%)   AP75 (%)   APuneven (%)   APuncover (%)   APscratch (%)   FPS
Fold 1    86.18      87.50     94.20      94.78      85.71          83.76           92.87           19.94
Fold 2    83.22      83.90     92.71      88.42      82.36          80.35           89.57           19.94
Fold 3    87.16      85.37     94.97      93.56      87.62          81.72           91.77           19.94
Fold 4    82.44      80.99     93.93      93.88      86.15          80.01           88.87           19.94
Average   84.75      84.44     93.81      92.67      85.46          81.46           90.43           19.94
Table 10. Four-fold cross-validation results of the YOLOv3+Unet model, where FPS denotes frames per second. In total, the model uses 69,537,100 parameters. The YOLOv3 detects the defect areas, and the Unet then segments the defects.

Fold      Dice (%)   mAP (%)   AP50 (%)   AP75 (%)   APuneven (%)   APuncover (%)   APscratch (%)   FPS
Fold 1    94.21      92.84     99.01      94.79      92.69          89.67           97.87           16.6
Fold 2    93.67      91.67     98.98      94.01      91.67          89.99           98.00           16.6
Fold 3    93.19      90.62     97.94      95.67      90.78          90.47           96.14           16.6
Fold 4    92.61      92.35     98.79      92.93      91.91          89.35           98.35           16.6
Average   93.42      91.87     98.68      94.35      91.76          89.87           97.59           16.6
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
