Article

An Automatic Surface Defect Inspection System for Automobiles Using Machine Vision Methods

1
State Key Laboratory of Mechanics and Control of Mechanical Structures, College of Aerospace Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2
COMAC ShangHai Aircraft Design and Research Institute, Shanghai 201210, China
3
China National Aeronautical Radio Electronics Research Institute, Shanghai 200241, China
*
Author to whom correspondence should be addressed.
Submission received: 4 January 2019 / Revised: 28 January 2019 / Accepted: 31 January 2019 / Published: 4 February 2019
(This article belongs to the Section Physical Sensors)

Abstract:
Automobile surface defects such as scratches or dents occur during manufacturing and cross-border transportation. They affect consumers' first impressions and the service life of the car itself. In most automobile industries worldwide, the inspection process is still performed mainly by human vision, which is unstable and insufficient. The combination of artificial intelligence and the automobile industry therefore shows promise. However, inspecting such defects with a computer system is challenging because of imbalanced illumination, specular highlight reflection, varying reflection modes and limited defect features. This paper presents the design and implementation of a novel automatic inspection system (AIS) for automobile surface defects that are located in or close to style lines, edges and handles. The system consists of image acquisition and image processing devices, operating in a closed environment and in a noncontact way with four LED light sources. Specifically, we use five plane-array charge-coupled device (CCD) cameras to collect images of the five sides of the automobile synchronously. The AIS then extracts candidate defect regions from the vehicle body image with a multi-scale Hessian matrix fusion method. Finally, candidate defect regions are classified into pseudo-defects, dents and scratches by feature extraction (shape, size, statistics and divergence features) and a support vector machine algorithm. Experimental results demonstrate that the automatic inspection system can effectively reduce false detections of pseudo-defects produced by image noise and achieves accuracies of 95.6% on dent defects and 97.1% on scratch defects, which makes it suitable for customs inspection of imported vehicles.

1. Introduction

The external appearance of a vehicle is of utmost importance, as it gives consumers their first impression [1]. However, surface defects (small scratches or dents) that occur during manufacturing and cross-border transportation of imported vehicles can bring huge economic losses to automobile import agencies and imported-vehicle buyers. Automobile agencies therefore arrange for professional human checkers [2] to conduct pipelined manual inspections of imported vehicles after each shipment, even for minor defects (as small as 0.5 mm in diameter). Unqualified vehicles with surface defects are picked out and returned to the factory for repair. The traditional checking procedure is a random-sampling manual inspection in the product pipeline: a checker judges whether the product is qualified by observing differences in its surface appearance with the naked eye and removes unqualified products manually. This detection method cannot meet the needs of modern industrial production, because the current manual inspection process has the following issues:
  • The criteria for human vision are not quantified but depend on subjective evaluation, so the criteria by which individual checkers judge product quality are not the same.
  • Due to the large product output and the large number of inspection items, inspectors experience ocular fatigue from high-intensity repetitive work on the assembly line, which makes defect detection less reliable, so product quality cannot be fully guaranteed.
  • When a surface defect is not obvious, external aids (such as a strong lighting environment) are needed to detect it. It is difficult to identify such surface defects with the human eye alone, and a continuous, stable workflow is impossible, resulting in greatly reduced production efficiency.
Automatic visual inspection systems (AISs), however, have gained great popularity with the development of programmable hardware processors and built-in image processing techniques. They offer fast speed, high precision, low cost and nondestructive operation, improve productivity and quality management, and provide a competitive advantage to industries that employ this technology [3]. This paper focuses on the core techniques of AIS.
Vehicle surface defects are broadly divided into two categories: scratches and dents. A scratch occurs when a vehicle rubs against a hard raised metal object during loading into a carrier or during container transportation. Dents on the vehicle body are usually caused by small stones propelled from the road surface hitting the surface of a vehicle traveling at high speed. These defects are difficult to detect with automatic visual inspection systems (and even for human eyes) because of various factors:
  • Imbalanced illumination: Vehicle surface images are captured by cameras installed at slide rails under the irradiation of four LED lights. The light mode determines that the vehicle body surface cannot achieve full uniform illumination, therefore global features or threshold parameters are not suitable for defect positioning and recognition.
  • Specular highlight reflection: The dark-field illumination technique (Figure 1) is utilized to enhance the contrast of defect samples. Even under dark-field illumination, gaps in the captured vehicle frame may still cause specular highlights where the surface normal is oriented precisely halfway between the direction of the incoming light and the viewer (the so-called half-angle direction, because it bisects the angle between them).
  • Variations of reflection modes for different vehicle colors: Vehicle body surfaces do not share the same reflection modes. Taking the comparison between a black body and a white body as an example, defects on a black vehicle present gray-values higher than the body background, whereas those on a white vehicle show lower values because of diffuse reflection under different reflection coefficients.
  • Limited features for defect recognition: Defects on the vehicle surface do not share common textures or shape features. Further, the pixel area of defects is even less than 20 pixels for tiny scratches or dents. The most reliable feature to distinguish surface defects is local gray-level information. The limitation of visual features makes inspection systems exclude most object recognition methodologies that are based on sophisticated texture and shape features [4].
This paper presents an online AIS for vehicle body surface defects. The AIS comprises an image acquisition subsystem (IAS) and an image processing subsystem. The IAS acquires gray images of the surface of a vehicle body. The automobile image is then processed and possible defects are detected. In this paper, we focus on three key techniques of AIS: image preprocessing, defect binarization, and pseudo-defect removal and classification. We propose image-preprocessing methods that remove specular highlight pixels and enhance the distinction between defects and background in a vehicle image, considering the imbalanced illumination and specular reflection properties of vehicle surfaces. In addition, we put forward a multi-scale defect binarization based on the Hessian matrix (DBHM) algorithm, which identifies possible defects by constructing a defect filter based on the Hessian matrix over each acquired detection area. Finally, we propose pseudo-defect removal and classification to account for the noise-sensitive characteristics of Hessian filters and the existence of pseudo-defects. The AIS has the following advantages:
1)
Image preprocessing is a data fusion and image enhancement step. Therefore, it is able to eliminate the specular highlight reflection of raw images. In addition, it can greatly reduce the variation in the background of a vehicle image.
2)
DBHM combined with pseudo-defect removal and classification is robust to noise and can easily identify tiny scratches or dents with a width of 0.5 mm, because DBHM relies on the local distribution and features of intensities rather than the global characteristics used in most thresholding algorithms [5,6,7,8].
3)
The AIS is fast, precise, low-cost and nondestructive. Under our experimental setup, the AIS runs in real time at a speed of 1 min 50 s per vehicle, much faster than manual inspection by a human checker (10–15 min).
The rest of the paper is organized as follows: in Section 2, related work on automatic defect inspection of vehicles is introduced. The overall system and the proposed approach are then described in detail in Section 3 and Section 4, respectively. Afterwards, Section 5 presents the experimental results. Finally, discussion and conclusions are provided in Section 6.

2. Related Work

Defect detection on car bodies usually adopts an inspection program flow, which can perform image acquisition, fusion by collecting the images of car bodies, image processing and classification to detect and distinguish millimeter-sized defects. Various vehicle inspection systems and techniques have been introduced with the aim to detect surface defects.
A commercial and very successful system was developed and installed in Ford Motor Company factories around the world. This system, described in [9], uses a moving structure made up of several light bars (high-frequency fluorescent tubes) and a set of cameras in fixed positions around the stationary car body. It is able to detect millimetric defects of 0.3 mm diameter or greater with various shapes that were previously very hard to detect, thus improving on the quality of the manual inspection carried out until then. Later, Molina et al. [2] introduced a novel approach using deflectometry- and vision-based technologies in an industrial automatic quality control system named QEyeTunnel on the production line of the Mercedes-Benz factory in Vitoria, Spain. The authors report that the inspection system satisfies cycle-time production requirements (around 30 s per car). A new image-fusion pre-processing algorithm is proposed to enhance the contrast of pixels with a low level of intensity (indicating the presence of defects), based on the rendering equation [10,11] from computer graphics. A post-processing step then combines image background extraction based on a local directional blurring method with a modified image contrast enhancement, which enables detection of larger defects in the entire illuminated area using a multi-level structure in which each level identifies defects of a different size.
Fan et al. [12] demonstrated a closed indirect diffusion-lighting system with pattern-light and full-light modes, using a semi-permeable lighting membrane to overcome the strong reflections of the car body. A smooth band-pass filter based on frequency-domain analysis is then applied to extract defect information from the panels of used cars. To improve accuracy and remove pseudo-defects, the authors use region area as the parameter to filter out noise and segment flaws, on the grounds that noise, unlike flaws, consists of isolated points occupying only a few pixels; however, filtering noise by area information alone (an important feature, but not a comprehensive one) may mistakenly remove small regions that are in fact defects. Kamani et al. [13,14] located defect regions using a joint distribution of local binary pattern (LBP) and rotation-invariant measure of local variance (VAR) operators, and then classified the detected defects into various defect types using Bayesian and support vector machine classifiers. Chung et al. [15] implemented an optical 3D scanner and visualization system to find defect regions in the curvature map of unpainted car-body outer panels. Leon et al. [16] presented new strategies to inspect specular and painted surfaces. Their method classified defects into three classes: tolerable defects, removable defects and defects that lead to rejection of the inspected car-body part, with 100% classification accuracy. Borsu et al. [17] used 3D reconstruction of the panel's surface profile to detect deformations; the positions of the surface defects are provided to a robotic marking station that handles pose and motion estimation of the part on an assembly line using passive vision. Xiong et al.
[18] proposed a 3D laser profiling system that integrates a laser scanner, odometer, inertial measurement unit and global positioning system to capture rail surface profile data. A candidate defective area is located by comparing the deviation between the registered rail surface profile and the standard model against a given threshold. The candidate defect points are then merged into candidate defect regions using K-means clustering, and the candidate defect regions are classified by a decision tree classifier.
In addition to the traditional detection methods mentioned above, learning-based methods have also been investigated for surface crack detection. Qu et al. [19] proposed a bridge crack detection algorithm using a modified active contour model and a greedy search-based support vector machine. Zhang et al. [20] used a deep convolutional neural network to conduct automatic detection of pavement cracks on a data set of 500 images collected with a low-cost smartphone. Fully convolutional neural networks were studied by Schmugge et al. [21] to infer cracks in nuclear power plants from multi-view images. Recently, Zou et al. [22] proposed a novel end-to-end trainable deep convolutional neural network called DeepCrack for automatic pavement crack detection, which learns high-level features for crack representation. In this method, multi-scale deep convolutional features learned at hierarchical convolutional stages are fused together to capture the line structures. Many other methods have also been proposed for crack detection, e.g., the saliency detection method [23] and the structure analysis methods using the minimal spanning tree [24] and the random structure forest [25].

3. Overall System

In order to eliminate the influence of natural light on the vehicle body, the detection process is carried out in the closed laboratory environment shown in Figure 2. Five plane-array CCD cameras are mounted on motor-driven slide guides to achieve fixed-point acquisition of vehicle body images. A set of images is acquired as the cameras sweep the vehicle body, ensuring complete coverage of the object to be inspected (see Figure 2b).
The number of images depends on the size of the vehicle and the resolution of the images acquired by the camera. These images are transmitted to the control and defect analysis room for further detection and classification. This room is also used to control the light sources, the camera shooting and the operation of the guide-rail motors.

3.1. AIS

The overall architecture of the vehicle automatic inspection system is depicted in Figure 3. The automatic inspection system for automobile appearance defects mainly includes two parts: the image acquisition subsystem (hardware) and the image processing subsystem (software). The hardware mainly includes the light sources, industrial cameras, camera lenses, slide rails, servo controller, servo driver and servo motors; the slide rails, servo controller, servo driver and servo motors constitute the motion control system. The software mainly includes the image acquisition, vehicle model database, image processing algorithm, damage detection result display and motor control program modules.

3.2. Camera Resolution

The key component of the proposed system is a GCP4241 GigE Vision plane-array CCD camera (SMARTEK Vision, Cakovec, Croatia) with 12 megapixels (4240 × 2824) at a frame rate of 9 fps, which is well suited for moving-object inspection, low-light and outdoor applications, and high-speed automation with very short exposure times. The full Gigabit Ethernet bandwidth of the proven GigE Vision standard data interface is utilized to support high-resolution image transmission. With a C-10MP-4/3-12mm lens (Foshan HuaGuo Optical Co., Ltd., Guangdong, China) at a shooting distance of 1.5 m, the resolution of the camera reaches 72 dpi (0.35 mm/pixel), which meets the accuracy requirements for detecting defects above 0.5 mm in size.
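As a quick consistency check on these figures, the snippet below (illustrative only, using just the values stated above) converts the 0.35 mm/pixel ground-sample distance to dpi and to the physical width covered per frame:

```python
# Back-of-envelope check of the spatial resolution quoted for the camera setup.
# Values from the text: 4240 x 2824 pixels, 0.35 mm/pixel at a 1.5 m distance.
MM_PER_INCH = 25.4

def dpi_from_mm_per_pixel(mm_per_pixel: float) -> float:
    """Convert a ground-sample distance (mm/pixel) to dots per inch."""
    return MM_PER_INCH / mm_per_pixel

def field_of_view_mm(pixels: int, mm_per_pixel: float) -> float:
    """Physical width covered by a row of pixels."""
    return pixels * mm_per_pixel

dpi = dpi_from_mm_per_pixel(0.35)          # ~72.6 dpi, matching the quoted 72 dpi
width_mm = field_of_view_mm(4240, 0.35)    # ~1484 mm covered per frame horizontally
pixels_per_defect = 0.5 / 0.35             # a 0.5 mm defect spans ~1.4 pixels
```

The last line also explains why 0.5 mm is the detection floor: smaller defects would occupy barely a single pixel at this ground-sample distance.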

3.3. Framework of Image Processing Subsystem

After a vehicle body image is generated by the IAS, an image processing subsystem is proposed to detect possible defects in the image. This subsystem includes three main modules: image preprocessing, defect binarization, and pseudo-defect removal and classification. Figure 4 shows a diagram of the subsystem:
1)
Image preprocessing: Defects are easily hidden or confused in vehicle images because of illumination inequality and variations in the reflection properties of vehicle surfaces, so image preprocessing is a necessary procedure to highlight defect areas against the image background. An effective image preprocessing process improves the efficiency and classification accuracy of the subsequent algorithms.
2)
Defect binarization: Defect binarization extracts candidate defect areas from the vehicle body image. The second-order information of a defect region can be extracted by the DBHM method, and the detection rate of micro-defects can be improved by binarizing the possible defect regions.
3)
Feature extraction and classification: A defect binarization image comprises three types of pixels: defects, background and pseudo-defects generated by noise in the preprocessed image. Owing to the influence of noise and image quality, pseudo-defects remain after defect binarization. Through feature extraction over the candidate regions, we can classify defects while excluding pseudo-defects, thereby improving the detection accuracy of the AIS.

4. Proposed Approach

4.1. Image Preprocessing

The image formation process consists of two steps: optical reflection and photoelectric conversion. In the first step, the illumination model converts the physical properties such as the reflectivity of the vehicle body surface into the reflected light intensity. The second step converts the reflected light intensity into a digital image of the vehicle body through a camera lens and an analog-to-digital converter (ADC or A/D). Based on the understanding of the physical properties of light, the Phong illumination model (PIM) is a widely used illumination physics model. PIM was first proposed by Phong [26] and improved by Whitted [27]. Its main structure consists of ambient term, diffuse term and specular term:
$$R_p = I_p^{amb} + \sum_{m \in \text{lights}} \left( k_d^{dif}\, (L_m \cdot N)\, I_{m,dl} + k_d^{spe}\, (R_m \cdot V)\, I_{m,dl} \right)$$
where
  • $R_p$: reflected intensity at position p;
  • $I_p^{amb}$: reflection intensity of the ambient light at position p;
  • $m$: serial number of the light source;
  • $I_{m,dl}$: light intensity of the m-th light source;
  • $k_d^{dif}$: diffuse reflection coefficient;
  • $k_d^{spe}$: specular reflection coefficient;
  • $N$: normal direction at position p;
  • $V$: direction of view (usually the direction of the camera);
  • $L_m$: direction of the m-th light source;
  • $R_m$: perfect mirror direction.
For multiple point light sources, the PIM sums up diffuse term and specular term simultaneously for each individual light source [26]. In the second step, the photoelectric process converts the reflected light intensity Rp into a digital image Ip of the vehicle body through a camera lens and an analog-to-digital converter (ADC or A/D). This transformation can be expressed by a linear formula:
$$I_p = \alpha \cdot R_p + \beta$$
where α and β are constant parameters determined by camera setup [27].
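The two formation steps can be written directly in code. The sketch below is purely illustrative: the single-light setup, vectors and coefficient values are invented for the example and are not the calibrated parameters of the actual system.

```python
import numpy as np

def phong_intensity(ambient, lights, normal, view, k_dif, k_spe):
    """Reflected intensity R_p at a surface point under multiple point lights.

    lights: iterable of (L_m, R_m, I_m) tuples -- light direction, perfect
    mirror direction and intensity of the m-th source (unit vectors)."""
    r_p = ambient                                              # ambient term I_p^amb
    for l_m, r_m, i_m in lights:
        r_p += k_dif * max(np.dot(l_m, normal), 0.0) * i_m     # diffuse term
        r_p += k_spe * max(np.dot(r_m, view), 0.0) * i_m       # specular term
    return r_p

def camera_response(r_p, alpha=1.0, beta=0.0):
    """Photoelectric conversion: reflected intensity -> digital gray value."""
    return alpha * r_p + beta

# One light shining along the surface normal, camera looking straight down:
n = v = np.array([0.0, 0.0, 1.0])
r_p = phong_intensity(0.1, [(n, n, 1.0)], n, v, k_dif=0.5, k_spe=0.3)
gray = camera_response(r_p, alpha=2.0, beta=1.0)
```

The `max(·, 0.0)` clamps mirror the physical fact that surfaces facing away from a light receive no direct contribution from it.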
The quality of automobile body image acquisition determines the results of defect detection in subsequent image processing. The two formulas of the body imaging model provide guidance for automotive appearance defect detection experiments:
1)
Using an indirect diffused light pattern and increasing the incident angle of the light source can reduce or remove the specular reflection term in the body image, yielding a uniform image of the vehicle body;
2)
The method of image superposition can effectively remove the Gaussian noise in the initial image (this experiment uses the output image after superimposing six consecutive image frames as the actual image to be detected).
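The effect of the superposition step can be demonstrated on synthetic data: averaging N frames of a static scene attenuates zero-mean Gaussian sensor noise by roughly a factor of sqrt(N). The scene, noise level and frame counts below are illustrative assumptions, not measurements from the system.

```python
import numpy as np

# Frame superposition: averaging N frames of a static scene reduces the
# standard deviation of zero-mean Gaussian noise by about sqrt(N).
rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0)                      # ideal static body surface

def superimpose(scene, n_frames, noise_sigma=8.0):
    """Average n_frames noisy captures of the same scene."""
    frames = scene + rng.normal(0.0, noise_sigma, (n_frames,) + scene.shape)
    return frames.mean(axis=0)

residual_1 = np.std(superimpose(scene, 1) - scene)    # ~ sigma
residual_6 = np.std(superimpose(scene, 6) - scene)    # ~ sigma / sqrt(6)
```

This is the rationale for using six frames: the residual noise is already cut to roughly 40% of a single exposure, while the acquisition and averaging cost stays low compared to 20 frames.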

4.2. Candidate Defect Binarization

The Hessian matrix is a common tool for extracting image features from high-order differentials [28]. The Hessian method regards the direction of the second-order directional derivative with the largest magnitude as perpendicular to the image feature, and the direction perpendicular to that as parallel to the image feature. A linear structure with a Gaussian profile can thus be characterized by a second derivative of large absolute value orthogonal to the line and a second derivative of small absolute value along the line; this is exactly the geometric meaning of the two eigenvalues of the two-dimensional Hessian matrix. In the binarization of candidate defects, we treat defect binarization as a filtering process that searches for geometric structures that can be regarded as point-like or linear defects. Because defect sizes differ, it is important to introduce measurement scales that vary within a certain range.
The local behavior of an image I can be approximated to its second-order Taylor expansion [29]. Thus, image I in the neighborhood of point xo can be expressed as:
$$I(x_o + \delta x_o, s) \approx I(x_o, s) + \delta x_o^T \nabla_{o,s} + \delta x_o^T H_{o,s}\, \delta x_o$$
This expansion approximates the structure of the image up to second order. $\nabla_{o,s}$ and $H_{o,s}$ are, respectively, the gradient vector and the Hessian matrix of the image at position $x_o$ and scale s. Following linear scale-space theory, differentiation is defined as a convolution with derivatives of Gaussians:
$$\frac{\partial}{\partial x} I(x, s) = I(x) * \frac{\partial}{\partial x} G(x, s)$$
where the two-dimensional Gaussian is defined as:
$$G(x, s) = \frac{1}{2 \pi s^2}\, e^{-\frac{x^2 + y^2}{2 s^2}}$$
Assume that the Hessian matrix is expressed as $H_{o,s} = \begin{bmatrix} H_{xx} & H_{xy} \\ H_{yx} & H_{yy} \end{bmatrix}$ with $H_{xy} = H_{yx}$. The eigenvalues of the Hessian matrix can be calculated as:
$$\lambda_1 = \frac{1}{2}\left( H_{xx} + H_{yy} + \sqrt{(H_{xx} - H_{yy})^2 + 4 H_{xy}^2} \right)$$
$$\lambda_2 = \frac{1}{2}\left( H_{xx} + H_{yy} - \sqrt{(H_{xx} - H_{yy})^2 + 4 H_{xy}^2} \right)$$
At locations where a body defect exists, the eigenvalues of the Hessian matrix satisfy $|\lambda_1| \approx 0,\ |\lambda_2| \gg 0$ (linear defects) or $|\lambda_1| \approx |\lambda_2|,\ |\lambda_2| \gg 0$ (point defects). Thus, a two-dimensional candidate defect enhanced image is defined as:
$$e = \begin{cases} 0, & f(x,y) < f_L \ \text{or} \ f(x,y) > f_H \\ \frac{1}{2}\left( H_{xx} + H_{yy} + \sqrt{(H_{xx} - H_{yy})^2 + 4 H_{xy}^2} \right) - th, & \text{otherwise} \end{cases}$$
where th is a constant threshold value used to enhance the possibly defective regions. The background noise in the vehicle body image is directly removed by the conditions f(x,y) < f_L and f(x,y) > f_H, which also reduces the interference of the background with the subsequent mask operation.
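To make the filter concrete, the following sketch (our own minimal NumPy reimplementation for illustration, not the system's production code) smooths the image with a Gaussian at scale s, forms the Hessian entries by finite differences, and returns the eigenvalue $\lambda_1$ used in the enhancement:

```python
import numpy as np

def gaussian_kernel1d(s, radius=None):
    """Normalized 1-D Gaussian kernel at scale s."""
    radius = radius or int(3 * s) + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * s * s))
    return g / g.sum()

def smooth(img, s):
    """Separable Gaussian smoothing at scale s (rows, then columns)."""
    k = gaussian_kernel1d(s)
    out = np.apply_along_axis(np.convolve, 1, img.astype(float), k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

def defect_response(img, s):
    """lambda_1 = (Hxx + Hyy + sqrt((Hxx - Hyy)^2 + 4*Hxy^2)) / 2 at scale s."""
    sm = smooth(img, s)
    hy, hx = np.gradient(sm)           # d/dy (axis 0), d/dx (axis 1)
    hxx = np.gradient(hx, axis=1)
    hyy = np.gradient(hy, axis=0)
    hxy = np.gradient(hx, axis=0)
    root = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    return 0.5 * (hxx + hyy + root)

# A dark, dent-like spot on a bright uniform body surface:
img = np.full((33, 33), 200.0)
img[15:18, 15:18] = 50.0
lam1 = defect_response(img, s=1.0)     # strong positive response at the dent
```

Because a dark defect is a local intensity minimum, both principal curvatures are positive there, so $\lambda_1$ peaks at the defect while staying near zero on the uniform background.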
For body defect structural elements, the output of a defect region in the enhanced image is maximized only when the scale factor s best matches the actual width of the defect, so we introduce a three-level scale detection method as shown in Figure 5. By globally thresholding the enhanced images $s_1e$, $s_2e$, $s_3e$ at the different scales $s_1$, $s_2$, $s_3$ (usually $s_1 = 1$), the intermediate result images $s_1r \in \{0,1\}^X$, $s_2r \in \{0,1\}^Y$, $s_3r \in \{0,1\}^Z$ are obtained, where $X = \mathbb{Z}_m \times \mathbb{Z}_n = \{x := (x,y) \in \mathbb{Z}^2 \mid 0 \le x \le m-1,\ 0 \le y \le n-1\}$ with m and n the numbers of columns and rows of the image, $Y = \mathbb{Z}_{[m \cdot s_1 / s_2]} \times \mathbb{Z}_{[n \cdot s_1 / s_2]}$, and $Z = \mathbb{Z}_{[m \cdot s_1 / s_3]} \times \mathbb{Z}_{[n \cdot s_1 / s_3]}$. The binary pattern images are obtained as follows:
$$s_1r = \eta(s_1e), \quad s_2r = \eta(s_2e), \quad s_3r = \eta(s_3e)$$
where $\eta(e) = \begin{cases} 1 & \text{if } e > 0 \\ 0 & \text{if } e \le 0 \end{cases}$. By iterating over the scale factor, images at different scales are obtained, and the union of the defect regions is taken as the final candidate defect image:
O U T : = s 1 r χ 1 ( s 2 r ) χ 2 ( s 3 r )
where $\chi_1 : \mathbb{R}^Y \to \mathbb{R}^X$ and $\chi_2 : \mathbb{R}^Z \to \mathbb{R}^X$ are up-scaling operators with $Z \subset Y \subset X$, and $\|$ is the bitwise-or operator.
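A minimal sketch of this fusion rule follows, with nearest-neighbour repetition standing in for the up-scaling operators $\chi_1$, $\chi_2$ and tiny synthetic response maps in place of real enhanced images (all sizes and responses below are invented for the example):

```python
import numpy as np

def eta(e):
    """Global thresholding eta: 1 where the enhanced response is positive."""
    return (e > 0).astype(np.uint8)

def upscale(binary, factor):
    """Nearest-neighbour up-scaling operator (stand-in for chi_1, chi_2)."""
    return np.kron(binary, np.ones((factor, factor), dtype=np.uint8))

e1 = np.zeros((8, 8)); e1[1, 1] = 1.0      # fine-scale response (full size)
e2 = np.zeros((4, 4)); e2[3, 3] = 1.0      # mid-scale response (half size)
e3 = np.zeros((2, 2)); e3[0, 0] = 1.0      # coarse-scale response (quarter size)

# OUT := s1r || chi_1(s2r) || chi_2(s3r): union of defect regions at all scales
out = eta(e1) | upscale(eta(e2), 2) | upscale(eta(e3), 4)
```

Each coarse-scale detection thus marks a whole block of full-resolution pixels, so defects whose width matches any one of the three scales survive into the final candidate map.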

4.3. Feature Extraction of Candidate Defects

Since the candidate defects filtered by the Hessian matrix are easily affected by image noise, features of the defective region such as shape, size and divergence are introduced to generate feature vectors for training and to remove pseudo-defects produced by image noise. To quantify the shape and size of a defect region, five common image features are utilized: area, major length, minor length, aspect ratio of the minimum bounding rectangle, and thinness ratio. To distinguish real defects from pseudo-defect regions, five statistical characteristics of the candidate region are utilized: mean pixel value, standard deviation, mean of the regional divergence, standard deviation of the regional divergence, and maximum of the regional divergence. The defect features are described in detail as follows.
Area is the actual number of pixels in the candidate defect region, which is proportional to the actual area of the defect on the vehicle body.
The major length and minor length of minimum-area-rectangle (MAR) specify the major and minor side length (in pixels) of the minimum rotational circumscribed rectangle that contains all the point sets in the defective region. The aspect ratio of MAR is the ratio of the major length and minor length of the smallest circumscribed rectangle in the defect region.
Thinness ratio [30] describes the similarity between the defect area and the circle which can be calculated from the perimeter and area. The equation for thinness ratio is defined as follows:
$$T = \frac{4 \pi A}{P^2}$$
where A is the area and P is the perimeter of the defect, defined as the total number of pixels constituting the defect's edge. As the defect becomes thinner, the perimeter grows relative to the area, so the ratio decreases and the roundness decreases. As the shape of the defect approaches a circle, the ratio approaches 1.
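The formula can be checked on synthetic regions. The sketch below is an illustrative NumPy implementation (counting the perimeter as edge pixels, per the definition above); it shows that a rasterized disk scores much closer to 1 than a thin, scratch-like line:

```python
import numpy as np

def thinness_ratio(mask):
    """T = 4*pi*A / P^2, with P counted as the region's edge pixels."""
    mask = mask.astype(bool)
    area = mask.sum()
    padded = np.pad(mask, 1)
    # interior pixels: all four 4-neighbours also belong to the region
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]) & mask
    perimeter = (mask & ~interior).sum()
    return 4.0 * np.pi * area / perimeter ** 2

yy, xx = np.mgrid[-12:13, -12:13]
disk = (xx**2 + yy**2) <= 100          # dent-like round region, radius 10
line = np.ones((2, 30), dtype=bool)    # thin scratch-like region

t_disk = thinness_ratio(disk)          # close to 1 (round)
t_line = thinness_ratio(line)          # much smaller (elongated)
```

Note that with a discrete pixel-count perimeter the ratio is only approximate near 1, but the ordering (round regions score higher than elongated ones) is what the classifier exploits.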
Mean pixel value and standard deviation of the candidate defect regions are utilized to eliminate pseudo-defect areas. In addition to the mean and standard deviation, we also introduce the concept of divergence into feature extraction. From the point of view of energy diffusion, the divergence of the gradient image (a two-dimensional vector field) measures the extent to which the vector field behaves like a source at a given point. It is a local measure of its "outgoingness": the extent to which more of some quantity exits an infinitesimal region of space than enters it [31]. If the divergence is positive, the vector field is scattering outward; if it is negative, the vector field is concentrating inward. In other words, divergence indicates the direction of flow of the vector field, and the stronger the flow, the greater the corresponding divergence value. The divergence of a vector field is defined as follows:
div F : = lim V 0 1 V V F · d S
In the two-dimensional Cartesian coordinate system, the divergence of the image gradient field grad(I) = Ui + Vj is defined as the scalar-valued function:
$$\operatorname{div}(\operatorname{grad}(I)) := \nabla \cdot \operatorname{grad}(I) = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \right) \cdot (U, V) = \frac{\partial U}{\partial x} + \frac{\partial V}{\partial y}$$
where i and j are the corresponding unit basis vectors and grad(I) is the gradient of image I. Although expressed in coordinates, the result is invariant under rotations, as the physical interpretation suggests. Thus, the mean, standard deviation and maximum of the regional divergence also possess translation and rotation invariance.
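On an image, div(grad(I)) is simply the Laplacian. The sketch below (a finite-difference illustration with a synthetic bright spot, not the system's code) computes it with `np.gradient` and collects the three divergence statistics used as features:

```python
import numpy as np

def gradient_divergence(img):
    """div(grad(I)) = dU/dx + dV/dy, i.e. the Laplacian of the image."""
    V, U = np.gradient(img.astype(float))   # V = dI/dy (axis 0), U = dI/dx (axis 1)
    dU_dx = np.gradient(U, axis=1)
    dV_dy = np.gradient(V, axis=0)
    return dU_dx + dV_dy

img = np.zeros((21, 21))
img[10, 10] = 100.0                          # bright, defect-like spot
div = gradient_divergence(img)
# the three divergence features of a candidate region:
features = (div.mean(), div.std(), div.max())
```

As expected from the sign convention above, the bright spot (a local maximum, where the gradient field flows outward and then back in) yields a strongly negative divergence at its center and positive values in the surrounding ring.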

4.4. Defect Classification

After the features of the candidate defects have been extracted, we employ an SVM [32] to classify the candidate defects into three types: pseudo-defects, dents and scratches. The SVM first projects the data points into a feature space via a kernel function and then performs linear classification in that space. A typical mathematical formulation solves the following convex optimization problem:
$$\min_{\omega, b, \xi_i} \ \frac{1}{2} \omega^T \omega + C \sum_i \xi_i \quad \text{s.t.} \ \ y_i(\omega^T \phi(x_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0$$
where $x_i$ are the data samples, $y_i$ are the labels, $\phi(\cdot)$ is the feature mapping, and C is the penalty coefficient. Common kernel functions include the linear kernel $\phi(x_i)^T \phi(x_j) = x_i^T x_j$ and the radial basis function (RBF) kernel $\phi(x_i)^T \phi(x_j) = \exp(-\gamma \|x_i - x_j\|_2^2)$, where $\gamma$ is a positive hyperparameter. The candidate defects are divided into training and test sets, and LIBSVM [33] is used to perform training and testing.

5. Experiments and Results

The AIS was jointly developed by the Nanjing University of Aeronautics and Astronautics (Nanjing, China) and the Shanghai Together International Logistics Co., Ltd. (Shanghai, China). All the experiments in this paper are built with the OpenCV C++ module on the Windows 7 operating system. PC desktop hardware configuration: dual-core Xeon SP CPU, 32 GB memory and an 8 GB P4000 discrete graphics card. The body images were collected by a GCP4241C industrial camera from the Smartek Vision Company (Cakovec, Croatia). The images have a 4240 × 2824 size in 8-bit BMP format and 0.35 mm × 0.35 mm resolution. A data set of 96 images is constructed to evaluate the system performance, where each image contains at least five defects. As described in Section 3 and Section 4, the defect detection algorithm proposed in this paper consists of three phases: an image preprocessing step, in which image superposition and region-of-interest extraction are performed; a candidate defect binarization step, in which the multi-scale fusion of the largest eigenvalue of the Hessian matrix is proposed; and a feature extraction and classification step, in which the features of the candidate defects are extracted and the candidates are classified into scratches, dents and pseudo-defects.

5.1. Preprocessing Step: Image Superposition and Region of Interest Extraction

In Figure 6, the red one-dimensional signal is the average of multiple images. By comparing Figure 6a,b, we can conclude that the superposition of six images is similar to that of 20 images and effectively reduces the Gaussian noise in the image, while the computational cost of superimposing six images is lower than that of 20. Therefore, all follow-up experiments use six superimposed images.
Although the photoelectric distance sensor installed on the boom can accurately determine the position of the camera, the manual parking procedure may cause a certain deviation in the spatial position of the vehicle in the image. Therefore, before candidate defects are extracted, the region of interest (ROI) is extracted using image registration. Image registration proceeds by feature point extraction, feature point description and feature point matching, from which the spatial transformation matrix between two similar images is estimated. Compared with the traditional SIFT method [34], the SURF method [35] uses less data and has a shorter computation time, so the SURF registration method is adopted in this paper.
When a new body is ready to be introduced into the inspection system, the checker manually selects the test vehicle type on the PC software detection interface, and the database provides all the necessary body-related data (e.g., model, color, template, mask, etc.). The complete body defect detection process is illustrated in Figure 7. Firstly, the spatial transformation matrix H is estimated from the experimental image and the template image by the registration procedure (steps ① and ②). Secondly, the template mask corresponding to the template image is read from the database, and the mask of the image to be detected, i.e., the area to be detected (ROI) of the experimental image, is obtained by warping it with the spatial transformation matrix H (steps ③ and ④). Finally, the actual defect area is obtained by the proposed defect detection algorithm and marked with a green box on the experimental image (steps ⑤ and ⑥).
To improve the accuracy of image registration, we introduce three steps before registration: camera calibration (Figure 8), edge detection and area opening. This means that the registration for ROI extraction is not performed directly on the gray-scale vehicle images, but on the edge images after area opening operations (which exclude noise pixels). The lens distortion of the camera has a great influence on the accuracy of image registration. Through experiments, we find that the edge pixel error of the calibrated camera decreases from more than a dozen pixels to 2–5 pixels compared with the uncalibrated camera, which meets the requirement of body detection area extraction.

5.2. Candidate Defect Binarization: Case Study and Multi-Scale Fusion Method

Figure 9 shows an example of defect detection for a dent with a diameter of 1.4 mm (only occupying nine pixels) near the style line on the fuel tank cap. From the 3D representation of the detection region in Figure 9b,c, it can be seen that the location of the defect is very close to the style line and the edge of the fuel tank cap.
With a global threshold (T = 16) set by the binarization filter in Equations (8) and (9), we can distinguish noise from the defect and thus identify the defect on the surface, as shown in Figure 9d.
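A minimal sketch of this binarization, assuming the 2 × 2 image Hessian is built from central finite differences and the pixel is kept when the magnitude of its dominant eigenvalue exceeds T (the function names are our own; Equations (8) and (9) appear earlier in the paper):

```cpp
#include <cmath>
#include <vector>

// Largest-magnitude eigenvalue of the 2x2 Hessian at interior pixel (x, y),
// with second derivatives approximated by central finite differences.
double hessianEig(const std::vector<std::vector<double>>& I, int x, int y) {
    double Ixx = I[y][x - 1] - 2 * I[y][x] + I[y][x + 1];
    double Iyy = I[y - 1][x] - 2 * I[y][x] + I[y + 1][x];
    double Ixy = (I[y + 1][x + 1] - I[y + 1][x - 1]
                - I[y - 1][x + 1] + I[y - 1][x - 1]) / 4.0;
    double tr   = Ixx + Iyy;
    double disc = std::sqrt((Ixx - Iyy) * (Ixx - Iyy) + 4 * Ixy * Ixy);
    double l1 = (tr + disc) / 2.0, l2 = (tr - disc) / 2.0;
    return std::fabs(l1) >= std::fabs(l2) ? l1 : l2;
}

// Binarize interior pixels with a global threshold T on |eigenvalue|.
std::vector<std::vector<int>> binarize(const std::vector<std::vector<double>>& I,
                                       double T) {
    std::size_t h = I.size(), w = I[0].size();
    std::vector<std::vector<int>> B(h, std::vector<int>(w, 0));
    for (std::size_t y = 1; y + 1 < h; ++y)
        for (std::size_t x = 1; x + 1 < w; ++x)
            B[y][x] = std::fabs(hessianEig(I, (int)x, (int)y)) > T ? 1 : 0;
    return B;
}
```

The intuition matches the case study: a flat painted surface has near-zero second derivatives, while even a nine-pixel dent produces a sharp curvature spike, so a single global threshold separates the two.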
In Figure 10, the image used for the experiment is taken from QEyeTunnel, an automated vision-based quality control inspection system described in Molina's paper [2]. Their inspection system consists of two parts: an external fixed structure, on which a number of cameras are optimally placed so as to view the entire surface of the body under analysis, and a moving internal structure, similar to a scanning machine, which houses curved screens known as 'sectors' that act as light sources projecting illumination patterns over the body surface. Figure 10a shows nine defects on the automobile surface after paint spraying on the production line. This vehicle body image contains punctiform pits, linear defects and uneven paint spray that are difficult for the human eye to distinguish.
In our method, the most obvious line and point scratches are extracted at the first scale level of detection (s1 = 1). Defects of uneven paint spray that are difficult for the human eye to distinguish are detected at the second and third scale levels (s2 = 0.5, s3 = 0.25). After the second- and third-level images are rescaled to the scale of the first-level detection result, the three rescaled results are merged with a bitwise OR operator, producing a final image that contains all types of defects.
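The fusion step can be sketched as follows: each per-scale binary map is rescaled (here with nearest-neighbour interpolation, as one plausible choice) to the first-level resolution and combined with a pixel-wise OR. The helper names are our own:

```cpp
#include <vector>

// Nearest-neighbour rescale of a binary map to an H x W grid.
std::vector<std::vector<int>> rescale(const std::vector<std::vector<int>>& B,
                                      std::size_t H, std::size_t W) {
    std::vector<std::vector<int>> R(H, std::vector<int>(W, 0));
    for (std::size_t y = 0; y < H; ++y)
        for (std::size_t x = 0; x < W; ++x)
            R[y][x] = B[y * B.size() / H][x * B[0].size() / W];
    return R;
}

// Fuse per-scale binary detections with a pixel-wise OR at full resolution,
// so a defect found at any scale survives into the final map.
std::vector<std::vector<int>> fuse(
        const std::vector<std::vector<std::vector<int>>>& maps,
        std::size_t H, std::size_t W) {
    std::vector<std::vector<int>> F(H, std::vector<int>(W, 0));
    for (const auto& m : maps) {
        auto R = rescale(m, H, W);
        for (std::size_t y = 0; y < H; ++y)
            for (std::size_t x = 0; x < W; ++x)
                F[y][x] |= R[y][x];
    }
    return F;
}
```

The OR is deliberately permissive: it trades a few extra candidate regions for recall, and the later feature-based classification stage filters the resulting pseudo-defects.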
By comparing Figure 10b,e, we can conclude that our proposed multi-scale Hessian eigenvalue method still detects the spot-like, linear and unevenly sprayed surface defects that are difficult for the human eye to detect.

5.3. Feature Extraction and Classification of Candidate Defect Region

We extract 96 images from the image acquisition system. We then search for connected components in the fused binary pattern of candidate defect regions and label the defect types manually. The data are split into a training subset containing 70% of the samples and a test subset containing the remainder. Grid search with fivefold cross-validation is performed on the training subset, and generalization performance is reported on the test subset. Specifically, for the linear kernel, we try C = 2⁻⁵, 2⁻⁴, …, 2¹⁵. The best hyperparameters for the linear kernel are shown in Table 1.
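The paper's classifier is a linear SVM trained with LIBSVM [33]; the sketch below substitutes a tiny Pegasos-style subgradient solver so that the fivefold cross-validated grid search over C is fully self-contained. All names and the toy solver are our own, not the paper's implementation:

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Sample { std::vector<double> x; int y; };  // label y in {-1, +1}

// Train a linear SVM with the Pegasos subgradient method; the bias is
// folded in as a constant feature, and lambda = 1 / (n * C) relates the
// regularization to the usual C parameterization.
std::vector<double> trainSVM(const std::vector<Sample>& data, double C,
                             int epochs = 200) {
    std::size_t d = data[0].x.size();
    double lambda = 1.0 / (data.size() * C);
    std::vector<double> w(d, 0.0);
    std::mt19937 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);
    std::size_t T = (std::size_t)epochs * data.size();
    for (std::size_t t = 1; t <= T; ++t) {
        const Sample& s = data[pick(rng)];
        double eta = 1.0 / (lambda * t), margin = 0.0;
        for (std::size_t j = 0; j < d; ++j) margin += w[j] * s.x[j];
        for (std::size_t j = 0; j < d; ++j) {
            w[j] *= (1.0 - eta * lambda);            // shrink (regularizer)
            if (s.y * margin < 1.0)                  // hinge-loss violation
                w[j] += eta * s.y * s.x[j];
        }
    }
    return w;
}

double accuracy(const std::vector<double>& w, const std::vector<Sample>& data) {
    int ok = 0;
    for (const auto& s : data) {
        double m = 0.0;
        for (std::size_t j = 0; j < w.size(); ++j) m += w[j] * s.x[j];
        if ((m >= 0 ? 1 : -1) == s.y) ++ok;
    }
    return double(ok) / data.size();
}

// Fivefold cross-validated grid search over C = 2^-5 .. 2^15, returning
// the C with the highest mean validation accuracy.
double gridSearchC(const std::vector<Sample>& data) {
    double bestC = 0.0, bestAcc = -1.0;
    for (int e = -5; e <= 15; ++e) {
        double C = std::pow(2.0, e), acc = 0.0;
        for (int f = 0; f < 5; ++f) {
            std::vector<Sample> train, val;
            for (std::size_t i = 0; i < data.size(); ++i)
                (i % 5 == (std::size_t)f ? val : train).push_back(data[i]);
            acc += accuracy(trainSVM(train, C), val);
        }
        acc /= 5.0;
        if (acc > bestAcc) { bestAcc = acc; bestC = C; }
    }
    return bestC;
}
```

For a linear kernel, C is the only hyperparameter, which is why the search in Table 1 reduces to a one-dimensional sweep over powers of two.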
From Table 1, we can see that the classification accuracy for scratches is high, up to 97.1%, while the accuracy for dents, 95.6%, is relatively low. A possible reason is that the average pixel area of a dent usually occupies only a single-digit number of pixels. The misclassified dent defects may be caused by dust falling on the surface of the vehicle body during transportation and by pseudo-defects produced by image noise.
To demonstrate the performance of the defect detection algorithm presented in this paper, a selection of results obtained on the production line with different parts and car bodies is shown in Figure 11. In some of these, detections are observed close to or on style lines and surface edges, as in Figure 11a–c. In addition, it is possible to see detections carried out in concave areas such as the handle, Figure 11d.

6. Conclusions and Future Work

In this paper, we have developed an AIS for the detection of discrete surface defects on vehicle bodies. The proposed AIS is able to detect defects located in or close to style lines, edges and handles, demonstrating the detection power of our system. In particular, we put forward a three-level-scale DBHM detection method for extracting candidate defects on the vehicle surface. This method localizes defects and binarizes the candidates at three different scales by comparing the second-order features of the background with those of the image defects. This allows defect detection in concave areas or in areas with abrupt surface changes near style lines, edges and corners, provided that the area is well lit. The three-level-scale detection process extracts defects that are hard for the human eye to distinguish by fusing the rescaled results of the three levels. In addition, we propose a feature extraction method for candidate defects and divide them into three categories (pseudo-defects, dents, scratches) with a linear SVM classifier. The experimental results demonstrate that the AIS detects dent defects with an accuracy of 95.6% and scratch defects with an accuracy of 97.1%. Furthermore, the AIS is fast, processing a vehicle in 1 min 50 s, which is quicker than manual inspection by a check-man.
Although AIS is validated to be an effective system for detecting vehicle surface defects, it can be further improved in three aspects:
1) Deflectometry techniques [36] show promise for enhancing image defect contrast on specular surfaces. Funding permitting, we will try this lighting method to collect the body images.
2) The precision of the AIS could be improved if the selection of the parameter th for the multi-scale DBHM method were more intelligent, although this risks slowing down detection. We will investigate adaptive selection approaches.
3) The detection speed of the AIS could be further increased through parallel computing techniques, such as general-purpose computation on graphics processing units. Since the five plane-array CCD cameras collect images of the five sides of the automobile synchronously, the AIS has the potential to be parallelized and accelerated.

Author Contributions

Conceptualization, R.C. and Q.Z.; methodology, Q.Z. and B.H.; software, Q.Z. and J.Y.; validation, Q.Z., C.L. and X.Y.; formal analysis, J.Y.; investigation, Q.Z., C.L. and X.Y.; resources, R.C.; data curation, Q.Z. and X.Y.; writing—original draft preparation, Q.Z.; writing—review and editing, Q.Z.; visualization, C.L. and X.Y.; supervision, R.C.; project administration, R.C.

Funding

This research was funded by National Natural Science Foundation of China (Grant No.51675265), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) and the China Scholarship Council.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was supported by Professor Huixiang Wu and Shanghai Together International Logistics Co., Ltd. (TIL). The authors gratefully acknowledge these supports.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karbacher, S.; Babst, J.; Häusler, G.; Laboureux, X. Visualization and detection of small defects on car-bodies. Mode Vis. 1999, 99, 1–8. [Google Scholar]
  2. Molina, J.; Solanes, J.E.; Arnal, L.; Tornero, J. On the detection of defects on specular car body surfaces. Robot. Comput. Integr. Manuf. 2017, 48, 263–278. [Google Scholar] [CrossRef]
  3. Malamas, E.N.; Petrakis, E.G.; Zervakis, M.; Petit, L.; Legat, J.-D. A survey on industrial vision systems, applications and tools. Image Vis. Comput. 2003, 21, 171–188. [Google Scholar] [CrossRef]
  4. Li, Q.; Ren, S. A real-time visual inspection system for discrete surface defects of rail heads. IEEE Trans. Instrum. Meas. 2012, 61, 2189–2199. [Google Scholar] [CrossRef]
  5. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  6. Liao, P.-S.; Chen, T.-S.; Chung, P.-C. A fast algorithm for multilevel thresholding. J. Inf. Sci. Eng. 2001, 17, 713–727. [Google Scholar]
  7. dos Anjos, A.; Shahbazkia, H.R. Bi-level image thresholding. BioSignals 2008, 2, 70–76. [Google Scholar]
  8. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–166. [Google Scholar]
  9. Armesto, L.; Tornero, J.; Herraez, A.; Asensio, J. Inspection system based on artificial vision for paint defects detection on cars bodies. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar]
  10. Immel, D.S.; Cohen, M.F.; Greenberg, D.P. A radiosity method for non-diffuse environments. In ACM Siggraph Computer Graphics; ACM: New York, NY, USA, 1986; pp. 133–142. [Google Scholar]
  11. Kajiya, J.T. The rendering equation. In ACM Siggraph Computer Graphics; ACM: New York, NY, USA, 1986; pp. 143–150. [Google Scholar]
  12. Fan, W.; Lu, C.; Tsujino, K. An automatic machine vision method for the flaw detection on car’s body. In Proceedings of the 2015 IEEE 7th International Conference on Awareness Science and Technology (iCAST), Qinhuangdao, China, 22–24 September 2015; pp. 13–18. [Google Scholar]
  13. Kamani, P.; Noursadeghi, E.; Afshar, A.; Towhidkhah, F. Automatic paint defect detection and classification of car body. In Proceedings of the 2011 7th Iranian Conference on Machine Vision and Image Processing, Tehran, Iran, 16–17 November 2011; pp. 1–6. [Google Scholar]
  14. Kamani, P.; Afshar, A.; Towhidkhah, F.; Roghani, E. Car body paint defect inspection using rotation invariant measure of the local variance and one-against-all support vector machine. In Proceedings of the 2011 First International Conference on Informatics and Computational Intelligence, Bandung, Indonesia, 12–14 December 2011; pp. 244–249. [Google Scholar]
  15. Chung, Y.C.; Chang, M. Visualization of subtle defects of car body outer panels. In Proceedings of the SICE-ICASE International Joint Conference, Busan, Korea, 18–21 October 2006; pp. 4639–4642. [Google Scholar]
  16. Leon, F.P.; Kammel, S. Inspection of specular and painted surfaces with centralized fusion techniques. Measurement 2006, 39, 536–546. [Google Scholar] [CrossRef]
  17. Borsu, V.; Yogeswaran, A.; Payeur, P. Automated surface deformations detection and marking on automotive body panels. In Proceedings of the 2010 IEEE Conference on Automation Science and Engineering (CASE), Toronto, ON, Canada, 21–24 August 2010; pp. 551–556. [Google Scholar]
  18. Xiong, Z.; Li, Q.; Mao, Q.; Zou, Q. A 3D laser profiling system for rail surface defect detection. Sensors 2017, 17, 1791. [Google Scholar] [CrossRef] [PubMed]
  19. Qu, Z.; Bai, L.; An, S.-Q.; Ju, F.-R.; Liu, L. Lining seam elimination algorithm and surface crack detection in concrete tunnel lining. J. Electron. Imaging 2016, 25, 063004. [Google Scholar] [CrossRef]
  20. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712. [Google Scholar]
  21. Schmugge, S.J.; Rice, L.; Lindberg, J.; Grizziy, R.; Joffey, C.; Shin, M.C. Crack Segmentation by Leveraging Multiple Frames of Varying Illumination. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 24–31 March 2017; pp. 1045–1053. [Google Scholar]
  22. Zou, Q.; Zhang, Z.; Li, Q.; Qi, X.; Wang, Q.; Wang, S. DeepCrack: Learning Hierarchical Convolutional Features for Crack Detection. IEEE Trans. Image Process. 2019, 28, 1498–1512. [Google Scholar] [CrossRef] [PubMed]
  23. Xu, W.; Tang, Z.; Zhou, J.; Ding, J. Pavement crack detection based on saliency and statistical features. In Proceedings of the 2013 20th IEEE International Conference on Image Processing (ICIP), Melbourne, Australia, 15–18 September 2013; pp. 4093–4097. [Google Scholar]
  24. Zou, Q.; Cao, Y.; Li, Q.; Mao, Q.; Wang, S. CrackTree: Automatic crack detection from pavement images. Pattern Recognit. Lett. 2012, 33, 227–238. [Google Scholar] [CrossRef]
  25. Shi, Y.; Cui, L.; Qi, Z.; Meng, F.; Chen, Z. Automatic road crack detection using random structured forests. IEEE Trans. Intell. Transp. Syst. 2016, 17, 3434–3445. [Google Scholar] [CrossRef]
  26. Phong, B.T. Illumination for computer generated pictures. Commun. ACM 1975, 18, 311–317. [Google Scholar] [CrossRef] [Green Version]
  27. Whitted, T. An improved illumination model for shaded display. In Proceedings of the ACM Siggraph 2005 Courses, Los Angeles, CA, USA, 31 July–4 August 2005. [Google Scholar]
  28. Lorenz, C.; Carlsen, I.-C.; Buzug, T.M.; Fassnacht, C.; Weese, J. A multi-scale line filter with automatic scale selection based on the Hessian matrix for medical image segmentation. In Proceedings of the International Conference on Scale-Space Theories in Computer Vision, Utrecht, The Netherlands, 2–4 July 1997; pp. 152–163. [Google Scholar]
  29. Frangi, A.F.; Niessen, W.J.; Vincken, K.L.; Viergever, M.A. Multiscale vessel enhancement filtering. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Cambridge, MA, USA, 11–13 October 1998; pp. 130–137. [Google Scholar]
  30. Rehkugler, G.; Throop, J. Apple sorting with machine vision. Trans. ASAE 1986, 29, 1388–1397. [Google Scholar] [CrossRef]
  31. Available online: https://en.wikipedia.org/w/index.php?title=Divergence&oldid=863835077 (accessed on 13 October 2018).
  32. Vapnik, V. Statistical Learning Theory; Wiley: New York, NY, USA, 1998; Volume 3. [Google Scholar]
  33. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intel. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  34. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  35. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 404–417. [Google Scholar]
  36. Prior, M.A.; Simon, J.; Herraez, A.; Asensio, J.M.; Tornero, J.; Ruescas, A.V.; Armesto, L. Inspection System and Method of Defect Detection on Specular Surfaces. U.S. Patent US20130057678A1, 7 March 2013. [Google Scholar]
Figure 1. Sketch of dark field illumination.
Figure 2. Laboratory of the vehicle automatic inspection system. (a) Sketch of the vehicle inspection laboratory. (b) The actual experimental scene. The camera is installed on 3-axis robot aluminum arms which are motor-driven along slide rails.
Figure 3. Overall architecture of the vehicle automatic inspection system.
Figure 4. Diagram of image processing subsystem.
Figure 5. Three level scales DBHM binary fusion method.
Figure 6. Image superposition with different numbers of images: (a) superposition of six vehicle images; (b) superposition of 20 vehicle images.
Figure 7. Example of the ROI extraction and defect detection procedure.
Figure 8. Camera calibration using a 6 × 9 chessboard with 100 mm squares.
Figure 9. Case study: evaluation of the proposed algorithm on a defect close to the fuel tank cap. (a) Detection region on the fuel tank cap; (b) 3D representation of the magnified region (view 1); (c) 3D representation of the magnified region (view 2); (d) 3D map of λ1 in DBHM.
Figure 10. Illustration of the three-level-scale detection process: (a) ground truth of surface defects; (b) detection result of the enhancement background subtraction three-level method proposed by Jaime Molina [2]; (c) our first-scale-level detection, represented by a small solid-line box; (d) our second-scale-level detection, represented by a broken-line box; (e) our third-scale-level detection, represented by a large dotted-line box; (f) our final fused candidate defect image.
Figure 11. Set of detection examples showing defects at different positions on car bodies: (a) one defect near the edge of the trunk cover on the roof of the car; (b) two defects near the edge of the fender surface; (c) three defects, one on the fuel tank cover and two near the edge of the fender surface; (d) two defects, one on the handle and one near the style line.
Table 1. Optimal hyperparameters and accuracy for the linear kernel.

Defect Type    | C        | Training Accuracy ± SD | Testing Accuracy ± SD
Pseudo-defects | 0.015625 | 98.2% ± 0.9%           | 96.0% ± 1.8%
Dents          | 0.015625 | 96.5% ± 2.6%           | 95.6% ± 3.4%
Scratches      | 0.015625 | 98.6% ± 1.3%           | 97.1% ± 1.6%

Zhou, Q.; Chen, R.; Huang, B.; Liu, C.; Yu, J.; Yu, X. An Automatic Surface Defect Inspection System for Automobiles Using Machine Vision Methods. Sensors 2019, 19, 644. https://0-doi-org.brum.beds.ac.uk/10.3390/s19030644