Article

Estimating the Volume of Oil Tanks Based on High-Resolution Remote Sensing Images

1 Navigation College, Dalian Maritime University, Dalian 116026, China
2 Environmental Information Institute, Dalian Maritime University, Dalian 116026, China
* Author to whom correspondence should be addressed.
Submission received: 1 March 2019 / Revised: 31 March 2019 / Accepted: 1 April 2019 / Published: 3 April 2019
(This article belongs to the Special Issue Image Optimization in Remote Sensing)

Abstract

The purpose of this study is to estimate oil tank volumes from high-resolution satellite imagery to meet the need to measure oil tank volumes globally. A preprocessed remote sensing HSV image is used to extract the shadow of the oil tank by Otsu thresholding, shadow area thresholding, and morphological closing. The oil tank shadow is crescent-shaped; hence, a median method based on sub-pixel subdivision positioning is used to calculate the shadow length of the oil tank and then determine its height with high precision. The top of the tank and its radius in the image are identified using the Hough transform, and the final tank volume is calculated from the height and radius. A high-resolution Gaofen 2 optical remote sensing image is used to evaluate the proposed method. The actual height and volume of the tested tanks are 21.8 m and 109,532 m³, respectively. The experimental results show that the mean absolute error of the tank height calculated by the median method is 0.238 m, the relative error is within 1.15%, and the RMSE is 0.23 m; these results are better than those of previous work. The absolute error between the calculated and actual tank volumes ranges from 416 to 3050 m³, and the relative error ranges from 0.38% to 2.78%. These results indicate that the proposed method can calculate the volume of oil tanks with high precision and sufficient accuracy for practical applications.

1. Introduction

For many years, the status and production of oil have been highly valued. In oil strategy, estimating current global oil reserves and predicting changes in production are key to determining oil procurement and transportation measures, and they are also important factors in economic development. If the volume of an oil tank can be calculated, it is possible to estimate the storage capacity of an oil storage base. In recent years, with the development and improvement of satellite remote sensing technology, high-resolution image information from any part of the world can be obtained easily [1]. Researchers have used shadow information in aerial photogrammetry to estimate building heights since 1989 [2]. However, most of the buildings considered in the literature are houses, whose shadows are approximately rectangular, whereas the shadow of a tank in a remote sensing image is approximately crescent-shaped. There are, hence, few methods for calculating the actual length of a tank shadow.
The HSV color space was first proposed by Smith in 1978. In the HSV color space, hue, saturation, and value are used to express the color of the image [3]. Because oil tank storage areas generally contain neither deep blue water nor dark green vegetation, which are easily confused with shadows, this study detects shadows in the HSV color space. In this color space, shadows have low brightness, high saturation, and increased hue. To take advantage of this characteristic, a hue-to-value ratio image can be obtained, which enhances the shadow regions with their low value and high hue [4]. Using this image, the method proposed in this paper applies a series of segmentation and optimization steps to obtain the tank shadow.
The sub-pixel edge detection method can improve the accuracy of edge detection [5]. The proposed method uses the idea of sub-pixel subdivision positioning to improve the accuracy of shadow length calculation. The tank height is calculated based on the geometric relationship between the length of the shadow and the actual height of the tank. Finally, the tank volume is calculated using the tank radius obtained by the Hough transform algorithm.
The rest of this paper is organized as follows: Section 2 presents a summary of work related to shadow extraction and building height calculation. Section 3 details the proposed approach for calculating tank volume based on high-resolution optical remote sensing. The image preprocessing, oil tank shadow extraction, and oil tank volume calculation are described in detail. The experimental results are presented and discussed in Section 4. Finally, the paper concludes in Section 5 with some recommendations for further work.

2. Related Work

2.1. Remote Sensing Image Shadow Detection

Many approaches for shadow detection have been reported in the literature; they are often categorized into property-based and model-based methods [6]. However, model-based algorithms have significant drawbacks [7]: they rely on the geometry of the occluding object, digital surface model data, the solar illumination direction, sensor parameters, and other related information, which is difficult to obtain in practice and limits their applicability.
Property-based methods rely on shadow properties that are generally deduced directly from the image data, including both radiometric attributes derived from spectral features and textural attributes derived from spatial features. Examples of methods in this class are simple histogram thresholding and invariant color models [8]. Histogram thresholding is the simplest shadow detection tool, but it is also the most popular in the literature. For example, Nagao et al. [9] used a linear combination of RGB and NIR channels. The search for a threshold in the histogram is then based on various assumptions about the histogram, such as a bimodal distribution [10], an underlying Gaussian mixture model [11], or the number of peaks and valleys in the histogram [12]. The threshold can also be chosen arbitrarily by visual inspection [13]. The main drawback of these methods is the misclassification of sunlit dark objects (such as water) as shadows, as well as the misclassification of shadowed bright objects as sunlit regions.
Invariant color model methods exploit the shadow chromaticity expressed in a 2D image. Shadows have a high hue value, high saturation in the blue–violet channels, and low intensity. Polidorio et al. [14] subtracted the luminance component I from the saturation component S in the HSI color space to separate shaded from non-shaded regions. Tsai [4] developed a spectral ratio between hue and intensity and showed that his method outperforms those of Polidorio et al. [14] and Huang et al. [15]. Chung et al. [16] subsequently improved Tsai's spectral ratio and showed that their method has better accuracy. Luo et al. [17] based their method on the properties of each component of the HSI color space combined with the maximum inter-class variance threshold method to separate shadow regions from non-shadow regions.

2.2. Shadow-Based Building Height Calculation

In the past 20 years, the technique of inferring building heights from building shadows in remote sensing images has become relatively mature [2]. Cheng and Thiel established a simple calculation model of building shadows and their heights, and obtained a mean-square error of 3.69 m over 42 building height measurements [18]. Since the average height of the 42 buildings in [18] is about 20 m, this error is relatively large. Hartl and Cheng [19] extended the work of Cheng and Thiel; they tested 77 buildings and obtained a root-mean-square error of 6.19 m. Massalabi [20] provided a formula for determining building heights based on shadow length, solar elevation, solar azimuth, satellite elevation, and satellite azimuth to simplify the calculation. Izadi [21] and Wang [22] respectively deduced formulas for building height based on the sine and cosine theorems using shadow length, solar elevation, satellite elevation, and the difference between the solar azimuth and the satellite azimuth. To determine building heights from single satellite images only, Qi [23] presented a new method named the corner–shadow–length ratio (CSLR) method. Cheng and Thiel [18] and Turker and Sumer [24] used building shadows to calculate the heights of buildings that collapsed in an earthquake. More recently, Qi [25] estimated building heights from Google Earth images by first calculating the ratio of building height to shadow length for buildings of known height, and then using this ratio to obtain the heights of other buildings with unknown heights. Liasis and Stavrou [26] developed a custom filter for enhancing shadows and reducing the spectral heterogeneity of the regions of interest (ROIs) to form an optimized contour model for estimating building height from the shadow length and the solar elevation angle. Liu [27] vectorized the extracted shadows and used parallel lines to measure the actual length of a building's shadow, thereby calculating the height of the building. Zhang [28] extracted the shadow area by multi-band spectral differencing, calculated the shadow length using the pixel number method, and then obtained the height of the building.

3. Proposed Approach

3.1. Overview of the Approach

A flowchart of the proposed approach is illustrated in Figure 1. Image preprocessing is performed on the original Gaofen 2 satellite image data. The shadow area of the oil tank is obtained by converting the image to the HSV color space and applying Otsu's maximum inter-class variance threshold, an area threshold, and morphological operators. The actual length of the tank shadow is calculated using a median method based on sub-pixel subdivision positioning, and the height of the tank is determined from the geometric relationship between the length of the shadow and the height of the tank. Then, the Hough transform is used to identify the top of the tank and calculate its radius. Finally, the volume of the tank is obtained from the formula for the volume of a cylinder.

3.2. Image Preprocessing

To ensure the accuracy of shadow extraction, the original remote sensing image data need to be preprocessed. The main steps in the proposed method include radiometric correction, geometric correction, image fusion, and image cropping.
Radiometric correction is divided into two key steps: radiometric calibration and atmospheric correction. Radiometric calibration establishes a quantitative relationship between the brightness value of an image pixel and the actual radiance value. Atmospheric correction eliminates the influence of atmospheric and illumination factors on the reflection of ground objects, so that physical parameters such as the reflectivity, emissivity, and surface temperature of the object can be obtained. The original Gaofen 2 remote sensing image data have already been radiometrically calibrated on orbit [29,30,31,32]. In the proposed method, atmospheric correction is performed using a radiative transfer model, which is more accurate. General geometric correction was then performed, and pan-sharpening was employed to fuse the images. As an example, the 0.81-m panchromatic image is shown in Figure 2a and the 3.2-m multispectral image in Figure 2b; the merged image is shown in Figure 3. The remote sensing images were cropped to extract the areas of interest for the experiments in this paper. The cropped images are shown in Figure 4.

3.3. Oil Tank Shadow Extraction

In the HSV color space, hue, saturation, and value are used to express the color of the image. In shadow areas, the hue values are relatively consistent and high, whereas non-shadow areas have relatively low hue values, as shown in Figure 5. In contrast, as Figure 6 shows, the saturation of the shaded areas and that of the surrounding ground are difficult to distinguish. The value of the shadow area is smaller than that of the non-shadow area. We therefore use the (H + 1)/(V + 1) ratio image to enhance the low-value, high-hue shadow regions, where H is the hue channel image and V is the value channel image.
After preprocessing the satellite remote sensing image, the image of the experimental region in the RGB color space is obtained by cropping and then converted into an HSV image, as shown in Figure 7a. The ratio image is then computed, as shown in Figure 7b, and thresholded using Otsu's method (maximum inter-class variance) to obtain the image shown in Figure 7c. This image mainly consists of the tank shadows and other small speckle-noise regions. The average area of the connected components is calculated and used as an area threshold to obtain Figure 7d. Finally, a morphological closing operation with a 3 × 3 structuring element (SE) of ones is applied to obtain the tank shadows, as shown in Figure 7e.
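As an illustration of this pipeline, the following Python/OpenCV sketch reproduces the steps above under stated assumptions: the input and output file names are placeholders, OpenCV's 8-bit HSV conversion (H in 0–179, V in 0–255) is used for the ratio image, and connected components smaller than the mean component area are discarded, as described in the text.

import cv2
import numpy as np

# Sketch of the shadow-extraction pipeline of Section 3.3 (file names are placeholders).
bgr = cv2.imread("tank_crop.png")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Ratio image (H + 1)/(V + 1), rescaled to 8 bits so Otsu thresholding can be applied.
ratio = (h.astype(np.float32) + 1.0) / (v.astype(np.float32) + 1.0)
ratio8 = cv2.normalize(ratio, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Otsu (maximum inter-class variance) segmentation of the ratio image.
_, binary = cv2.threshold(ratio8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Area threshold: keep connected components at least as large as the mean component area.
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
areas = stats[1:, cv2.CC_STAT_AREA]            # label 0 is the background
mean_area = areas.mean() if areas.size else 0
mask = np.zeros_like(binary)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] >= mean_area:
        mask[labels == i] = 255

# Morphological closing with a 3 x 3 structuring element of ones.
se = np.ones((3, 3), np.uint8)
shadow = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, se)
cv2.imwrite("tank_shadow.png", shadow)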

3.4. Oil Tank Volume Calculation

3.4.1. Shadow Length Calculation

Most of the shadows used in the literature to calculate building height are the shadows of rectangular buildings. In contrast, the shadows of the tanks extracted in Section 3.3 are similar to a crescent, as shown in Figure 8.
If a line segment parallel to the sun's rays is obtained, the distance between its two endpoints can be computed as the Euclidean distance:
D(p, q) = [(s − x)² + (t − y)²]^(1/2)  (1)
where (s, t) and (x, y) are the coordinates of the two points, in pixel units. The shadow length S is then calculated as:
S = D × k  (2)
where k is the pixel size in meters.
Figure 9 is the interpretation diagram of shadow length calculation. The detailed steps for calculating the shadow length are as follows:
First, obtain the coordinates of the upper and lower edges. The boundary coordinates of the tank shadow are extracted with the findContours() function in OpenCV. The key parameters of the function are CV_RETR_EXTERNAL (only the outer contour is detected) and CV_CHAIN_APPROX_NONE (all the points on the contour are stored). The contour points are stored starting from the point with the smallest y value; if there is more than one point with the smallest y value, the point among them with the smallest x value is taken as the first position of the storage list. In addition, all the contour points are sorted in counterclockwise order. The right and left endpoints are the point with the largest x value and the point with the largest y value, respectively. Once the coordinates of all the contour points are stored and the left and right endpoints are identified, the points are classified as follows. Starting from the first position of the list, the y value of each coordinate is compared with ymax. If y is less than ymax, the coordinate is classified as part of the upper edge. If more than one y value equals ymax, the coordinate of the last ymax is the left endpoint, and the coordinates of the remaining ymax points are classified as part of the upper edge. Starting from the left endpoint, the x value of each remaining coordinate is compared with xmax. If x is less than xmax, the coordinate is classified as part of the lower edge. If more than one x value equals xmax, the coordinate of the last xmax is the right endpoint, and the coordinates of the remaining xmax points are classified as part of the lower edge. The remaining coordinates in the contour point set are then classified as part of the upper edge. The coordinate values of each point are the pixel center-point coordinates.
Convert the obtained shadow edge pixel center-point coordinate values into floating point values. Take a pixel center point on the lower edge of the shadow (e.g., point O shown in Figure 10) and traverse the upper edge of the shadow to find angle γ. Select the coordinates of the center point of the upper edge of the shadow such that the absolute value of the difference between angle γ and supplementary angle θ of the solar azimuth is less than 0.5°.
According to sub-pixel subdivision positioning [33], subdivide the upper edge of the selected pixel. Find the point such that |γ − θ| ≤ 0.05°. The distance between the selected point and the center point of the pixel at the lower edge of the shadow is calculated according to Equation (1). The bottom pixel of the shadow may indicate the edge pixel of the tank, so the center point of the pixel at the bottom edge of the shadow is used.
The above steps are used to calculate the lengths of all such segments in the tank shadow. The median, maximum, and average of these values are each used to calculate the actual length of the shadow according to Equation (2); a simplified code sketch of the procedure is given below.
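The following Python/OpenCV sketch illustrates the pixel-level core of these steps under simplifying assumptions: the binary shadow mask comes from the pipeline of Section 3.3, the contour is split into upper and lower edges by its median row rather than by the endpoint rule above, the sub-pixel subdivision step is omitted (so only the 0.5° pixel-level tolerance is used), the scene constants (pixel size 0.81 m, solar azimuth 150.66°) are taken from Section 4.1, and the shadow is assumed to point into the north-west image quadrant, as it does in this scene.

import cv2
import numpy as np

PIXEL_SIZE_M = 0.81               # pixel size k (m), Table 1
SUN_AZIMUTH_DEG = 150.66          # solar azimuth of the scene, Section 4.1
THETA = 180.0 - SUN_AZIMUTH_DEG   # supplementary angle of the solar azimuth

def median_shadow_length(mask):
    """Median shadow length (m) of the largest shadow in a binary mask.

    Pixel-level sketch only: the upper/lower edge split uses the median row and
    the sub-pixel subdivision of [33] is omitted.
    """
    # Outer contour only, all contour points kept; [-2] works for OpenCV 3.x and 4.x.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_NONE)[-2]
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)

    y_mid = np.median(pts[:, 1])
    upper = pts[pts[:, 1] < y_mid]    # far (upper) edge of the crescent
    lower = pts[pts[:, 1] >= y_mid]   # edge adjacent to the tank base
    if upper.size == 0 or lower.size == 0:
        return 0.0

    lengths = []
    for ox, oy in lower:
        # Angle (deg) of each candidate segment west of image north; valid for
        # shadows pointing into the north-west quadrant.
        gamma = np.degrees(np.arctan2(ox - upper[:, 0], oy - upper[:, 1]))
        best = int(np.argmin(np.abs(gamma - THETA)))
        if abs(gamma[best] - THETA) < 0.5:                          # 0.5 deg tolerance
            d = np.hypot(upper[best, 0] - ox, upper[best, 1] - oy)  # Equation (1)
            lengths.append(d * PIXEL_SIZE_M)                        # Equation (2)
    return float(np.median(lengths)) if lengths else 0.0

# Example: median_shadow_length(cv2.imread("tank_shadow.png", cv2.IMREAD_GRAYSCALE))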

3.4.2. Oil Tank Height Calculation

The formation of building shadows in high-resolution satellite remote sensing images is related to the size of the building itself, the solar elevation angle, the solar azimuth, the satellite azimuth, and the satellite elevation angle. The experiments in this study consider the oil tanks of a national oil reserve base, where the terrain has no large undulations that would affect the tank shadows on the ground, and the tanks are perpendicular to the ground surface. There is also a certain spacing between the tanks, and in the selected satellite images the tank shadows fall completely on the ground between them. Under these conditions, following the traditional method, the height of a tank can be obtained from the positional relationships among the building, the satellite azimuth, the solar azimuth, and the solar elevation angle. The traditional positional relationships between the height of a building and its shadow on a remote sensing image can be divided into three cases: (i) the satellite azimuth and the solar azimuth differ by approximately 180°; (ii) the satellite azimuth is equal to the solar azimuth; and (iii) the satellite azimuth is not equal to the solar azimuth and the difference is well below 180°. Since our experimental data fall into the first case, we only cover this case in detail.
When a high-resolution remote sensing satellite image is acquired, if the satellite azimuth and the solar azimuth differ by approximately 180°, i.e., the satellite and the sun are on opposite sides of the building, then the entire shadow of the building can be observed in the satellite image, as shown in Figure 11.
The geometry for this case is shown in Figure 12, where α is the solar elevation angle, h is the height of the building, and S is the length of the shadow projected by the building on the ground.
In this case, when the remote sensing satellite takes an image, the geometric relationship between the shadow length of the building and the height of the building itself is given by:
h = S × tan α  (3)
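As an illustrative check with the scene parameters used later in Section 4.1 (solar elevation angle α = 57.96°): a shadow length of S = 13.48 m, typical of the median-method lengths in Table 3, gives h = 13.48 × tan 57.96° ≈ 21.5 m, close to the 21.8 m actual height of the tanks in the test area.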

3.4.3. Oil Tank Top Radius Calculation

The top of the tank is assumed to be circular in the two-dimensional plane. When the tank height is known, the radius of the top of the tank is required to calculate the volume. In the proposed method, the Hough transform is used to detect the oil tank and identify its top radius. The Hough transform is suitable for detecting targets with specific shapes. It uses point–line duality to determine a specific shape in the image by a voting mechanism. The analytical geometry equation of a circle is:
(x − c1)² + (y − c2)² = r²  (4)
where (c1, c2) is the center of the circle and r is its radius. Any point (x, y) on the circle in the two-dimensional image space corresponds to a conical surface in the parameter space C1C2R. In other words, the points on the same circle in the image space map to a cluster of intersecting conical surfaces in the parameter space. As shown in Figure 13, this cluster of conical surfaces intersects at the point (c1, c2, r) in parameter space.
The specific algorithm for detecting circles using the Hough transform is as follows:
Compute the gradient at the image points and form a set of edge points by thresholding the gradient magnitude. Initialize a voting matrix A(c1, c2, r) = 0 in the parameter space C1C2R. For each edge point, take all possible values of c1 and c2 and obtain the corresponding value of r. Accumulate the matrix according to each obtained triple (c1, c2, r), that is, A(c1, c2, r) = A(c1, c2, r) + 1. Finally, detect local maxima in A(c1, c2, r) to obtain the circle parameters (c1, c2, r).
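OpenCV's HoughCircles() function implements this gradient-based voting scheme and is what the proposed method uses (Section 4.3.3). The sketch below applies it with the parameter values reported there; the input file name, the median pre-filtering, and the minDist spacing between detected centres are illustrative assumptions.

import cv2

PIXEL_SIZE_M = 0.81   # pixel size of the fused image (m)

gray = cv2.imread("tank_area.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
gray = cv2.medianBlur(gray, 5)   # light smoothing before the internal edge-detection stage

# param1, param2, minRadius, and maxRadius are the values reported in Section 4.3.3;
# minDist (minimum centre spacing, in pixels) is an assumed value on the order of one tank diameter.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=80,
                           param1=132.55, param2=20, minRadius=48, maxRadius=51)
if circles is not None:
    for cx, cy, r_px in circles[0]:
        print("centre = (%.1f, %.1f) px, radius = %.2f m" % (cx, cy, r_px * PIXEL_SIZE_M))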

4. Experiment

4.1. Experimental Data

The experimental data used in this paper are from China's independently developed Gaofen 2 satellite. The Gaofen 2 satellite was successfully launched on August 19, 2014, and has a spatial resolution of 0.81 m at the sub-satellite point. Its imaging spectrum includes four multispectral bands and one panchromatic band. The detailed technical specifications and parameters of Gaofen 2 are shown in Table 1.
The remote sensing images were acquired at 10:51 on April 16, 2015, and consist of panchromatic image data at a resolution of 0.81 m and blue, green, red, and near-infrared multispectral image data at a resolution of 3.20 m. The solar elevation angle is 57.96°, the solar azimuth is 150.66°, and the satellite azimuth is 310.48°. The satellite image was preprocessed to form a pan-sharpened multispectral image, eliminating the effects of the solar and satellite azimuths. The solar azimuth and the satellite azimuth differ by 159.82°, which is close to 180°; hence, Equation (3) was used to calculate the heights of the tanks. We calculated the heights of 24 oil tanks. The actual height of the tanks in the experimental area is 21.8 m [34].

4.2. Evaluation Metrics

4.2.1. Evaluation Metrics of Extracted Shadows

The evaluation metrics proposed by Shufelt were used to evaluate the accuracy of the detected shadow areas. The recall CF and the shadow detection accuracy CA are calculated as follows [35]:
CF = TP / (TP + FN)  (5)
CA = TP / (TP + FN + FP)  (6)
Here, TP is the number of shadow pixels that are correctly detected, FP is the number of non-shadow pixels detected as shadows, and FN is the number of shadow pixels that are detected as non-shadow pixels.
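For reference, these two pixel-count metrics can be computed directly from the confusion counts; a minimal sketch:

def shadow_metrics(tp, fp, fn):
    """Recall CF and shadow detection accuracy CA, Equations (5) and (6)."""
    cf = tp / (tp + fn)
    ca = tp / (tp + fn + fp)
    return cf, ca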

4.2.2. Evaluation Metrics of Oil Tank Height, Radius, and Volume

We used the absolute error and the relative error to evaluate the accuracy of the estimated oil tank height, radius, and volume. The absolute error is the absolute value of the difference between the calculated value and the actual value, and the relative error is the ratio of the absolute error to the actual value.
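A minimal sketch of these error measures, together with the RMSE reported in Section 4.3.2, assuming the calculated values are gathered in a sequence (the function and variable names are illustrative):

import numpy as np

def error_metrics(calculated, actual):
    """Absolute errors, relative errors (%), and RMSE against the true value."""
    calculated = np.asarray(calculated, dtype=float)
    abs_err = np.abs(calculated - actual)
    rel_err = abs_err / actual * 100.0
    rmse = float(np.sqrt(np.mean((calculated - actual) ** 2)))
    return abs_err, rel_err, rmse

# Example: error_metrics(tank_heights, 21.8) for the 24 calculated tank heights.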

4.3. Results and Evaluation

4.3.1. Shadow Extraction

Images were preprocessed on the ENVI (The Environment for Visualizing Images) 5.3 software platform, which is a product of Exelis Visual Information Solutions, USA. On the Eclipse Neon 3.2 platform, which is managed by the Eclipse Foundation, Python 3.6.3 and OpenCV 3.3.1 were used to implement the tank shadow extraction. To verify the effectiveness of the algorithm, the traditional histogram threshold and morphological methods were used to extract the oil tank shadow in the experimental images. These two algorithms are compared with the proposed algorithm, as shown in Figure 14.
The results are shown in Table 2.

4.3.2. Oil Tank Height Calculation

The length of the oil tank shadow is calculated using the traditional pixel-number-based method, the direct manual measurement method (averaged over repeated measurements), and the methods proposed in this paper, namely the maximum, median, and average methods, all based on the idea of sub-pixel subdivision positioning. The specific procedures of the pixel number method and the direct measurement method are given in Appendix A. The shadow areas are numbered as shown in Figure 15.
The results are shown in Table 3.
Figure 16 compares the heights of the tanks estimated by the five methods. In addition, Figure 17 compares the mean absolute error, and Figure 18 compares the mean relative error for these methods. The root mean square errors (RMSE) of the pixel number method, direct measurement method, median method, maximum method, and mean method are 5.58, 0.53, 0.23, 3.63, and 0.90, respectively.
The results of oil tank height calculation are shown in Table 4.

4.3.3. Tank Top Radius

The proposed method uses the HoughCircles() function in OpenCV to identify and detect oil tanks in remote sensing images. After many experiments, when the function parameter param1 is 132.55, param2 is 20, minRadius is 48 or 49, and maxRadius is 51, the recognition rate of the tanks in the image is the best, as shown in Figure 19.
As Figure 19 shows, 23 and 22 tanks are identified for minimum radius parameter values of 48 and 49, respectively. In this case, the number of tanks whose top radius was correctly identified in the experiment was taken as 22, as shown in Table 5.

4.3.4. Tank Volume

The results of tank volume estimation for the 22 identified oil tanks are shown in Table 6. In addition, Figure 20 shows the absolute error of the calculated volume and Figure 21 shows the relative error of the calculated volume.
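As a consistency check using the cylinder volume formula V = πr²h: the calculated top radius of tank 1 (39.72 m, Table 5) and its median-method height (21.56 m, Table 4) give V = π × 39.72² × 21.56 ≈ 1.07 × 10⁵ m³, in line with the calculated volume of 10.68 × 10⁴ m³ reported in Table 6.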

4.4. Discussion

This paper used a spectral ratio technique in the HSV color space to detect tank shadows. This method was compared with two classical algorithms, a histogram threshold algorithm and a mathematical morphology algorithm. All three algorithms can detect the shadow of the tank. However, the traditional histogram threshold and mathematical morphology algorithms erroneously detect dark-colored parts of the surface that resemble shadows. The outline of the oil tank shadow extracted by the proposed algorithm is basically consistent with the shadow area in the original image, and there is less interference. The accuracy of the detected shadow areas was evaluated using the metrics proposed by Shufelt. Table 2 shows that the recall and precision of the shadow detection algorithm proposed in this article are both above 96% and better than those of the comparison methods. The other two algorithms also have high recall, but their precision is low. The oil tank shadows extracted by the algorithm in this paper are better than those of the other methods, which lays a good foundation for the subsequent calculation of the tank height.
The accuracy of the shadow length calculation determines the accuracy of the tank height calculation. The shape of tank shadows in remote sensing images is similar to a crescent, in contrast to the shadows of houses, which resemble rectangles. In this paper, following the idea of sub-pixel subdivision positioning, the lengths of all the line segments that meet the conditions in the shadow region are obtained, and the median, maximum, and mean values of these lengths are computed separately. We also used the classic pixel number method and direct measurement of the shadow length in the ENVI software for comparison. As can be seen from Figure 16, the heights of the tanks obtained from the averaging and median methods are very close to the actual heights of the tanks. Figure 17 and Figure 18 show that the median method has the smallest absolute and relative errors. The mean absolute error of the tank height calculated by the median method is within 0.25 m, and the relative error is within 1.15%, which is better than the result in [36], where the smallest absolute error of the calculated tank height is 0.5 m and the smallest relative error is 2.85%. Additionally, the overall accuracy of our approach is compared against previous work on image-based building height estimation in Table 7, where "-" means no relevant data.
We used the Hough transform to find the tank radius. Once the height and radius are obtained, we can calculate the volume of the tank. The absolute error of the calculated tank volume ranges from 416 to 3050 m³, and the relative error ranges from 0.38% to 2.78%. The estimation error of most tank volumes is within 3000 m³. Therefore, the proposed method meets the need to estimate the actual volumes of large tanks.

5. Conclusions

In the method proposed in this article, we converted remote sensing images into HSV images, and the ratio image was obtained to emphasize the shadows. The ratio image was thresholded using the maximum inter-class variance of the levels, then thresholded according to area and processed using morphological operators to obtain the shadow of the oil tank. The accuracy of the proposed shadow extraction was verified by comparing it with the traditional method. The calculation of the length of the tank shadow was carried out according to the traditional pixel number method, manual direct measurement method, and proposed method (using the maximum, median, and average methods). The heights of the tanks calculated by these five methods were compared with the actual heights of the tanks. The final results showed that the median method proposed in this paper outperformed the other methods. Combined with the Hough transform to detect the radius of the top of the tank, the volume of the tank can be estimated with sufficient accuracy.
In the future, we will further study the nature of shadows in other color spaces and combine the texture features of shadows to extract tank shadows from a wider range of images. In this study, the Hough transform did not identify all oil tanks. In the future, the Hough transform and machine-learning-related algorithms should be further combined to improve the accuracy of the tank identification rate and estimation of the radius of the tank.

Author Contributions

Funding acquisition, Y.L. (Ying Li); Investigation, S.Y.; Methodology, S.Y.; Supervision, Y.L. (Ying Li) and Y.L. (Yu Liu); Writing—original draft, T.W.; Writing—review & editing, T.W.

Acknowledgments

This article was supported by the National Marine Public Welfare Project (grant no. 201305002), the National Natural Science Foundation of China (grant no. 41571336), and the National Key Research and Development Program of China (grant no. 2017YFC0211904).

Conflicts of Interest

The authors declare that they have no competing interests.

Appendix A

Appendix A.1. Calculation of the Shadow Length from the Number of Pixels

In high-resolution remote sensing images, the horizontal direction of the image is generally east–west and the vertical direction is north–south. As the sun rises and sets, the changing solar azimuth causes the direction of a building's shadow to change constantly; therefore, the shadow direction in the image generally does not coincide with the directions in which the image pixels are arranged. The geometric relationship between pixels and shadows is shown in Figure A1. Here, α is the azimuth of the sun and S is the shadow length of the building. According to the side and angle relationships of a right triangle, the length is calculated as follows [40]:
S = |Y / cos(180° − α)|  (7)
S = |X / sin(180° − α)|  (8)
Here, X is the component of the building shadow length in the horizontal direction, and Y is the component of the building shadow length in the vertical direction. The calculation of X and Y is:
X = i × k,  Y = j × k
where i is the number of pixels in the horizontal direction, j is the number of pixels in the vertical direction, and k is the pixel size in meters of the remote sensing image.
The solar azimuth is 150.66° in the image used in the experiments. In this case, it is better to select the number of pixels in the north–south direction [36]. The shape of the tank shadow is similar to a crescent; hence, the midpoint of the lower edge of the shadow is selected, the number of pixels in the vertical direction is counted, and the shadow length of the tank is then calculated according to Equations (7) and (8).
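For illustration, assuming a count of j = 17 pixels in the vertical direction, k = 0.81 m, and the scene's solar azimuth α = 150.66°, Equation (7) gives S = |17 × 0.81 / cos(180° − 150.66°)| = 13.77 / cos 29.34° ≈ 15.8 m, which is of the same order as the pixel-number-method lengths in Table 3.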
Figure A1. Geometry of the shadow and actual image pixel coordinates.

Appendix A.2. Direct Measurement of Shadow Length

ENVI software is a commercial software package for processing remote sensing satellite images. This software was used to directly measure the tank shadows in the direction of the sun’s rays [29], as shown in Figure A2. However, in the process of manual measurement, there will be measurement errors. To reduce such errors, the average of multiple measurements was taken.
Figure A2. Direct manual measurement of shadow length.

References

  1. Navulur, K.; Pacifici, F.; Baugh, B. Trends in optical commercial remote sensing industry. IEEE Geosci. Remote Sens. Mag. 2013, 1, 57–64.
  2. Irvin, R.B.; McKeown, D.M. Methods for exploiting the relationship between buildings and their shadows in aerial imagery. IEEE Trans. Syst. Man Cybern. 1989, 19, 1564–1575.
  3. Skoneczny, S. Nonlinear image sharpening in the HSV color space. Prz. Elektrotech. 2012, 88, 140–144.
  4. Tsai, V.J.D. A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1661–1671.
  5. Hueckel, M.F. A local visual operator which recognize edge and lines. J. ACM 1971, 18, 113–125.
  6. Arévalo, V.; González, J.; Ambrosio, G. Shadow detection in color high resolution satellite images. Int. J. Remote Sens. 2008, 29, 1945–1963.
  7. Yan, L.; Sasagawa, T.; Peng, G. A system of the shadow detection and shadow removal for high resolution city aerial photo. In Proceedings of the 20th ISPRS Congress, Beijing, China, 3–11 July 2008; pp. 378–382.
  8. Adeline, K.R.M.; Chen, M.; Briottet, X.; Pang, S.K.; Paparoditis, N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS J. Photogramm. Remote Sens. 2013, 80, 21–38.
  9. Nagao, M.; Matsutyama, T.; Ikeda, Y. Region extraction and shape analysis in aerial photos. Comput. Graph. Image Process. 1979, 10, 195–223.
  10. Dare, P.M. Shadow analysis in high-resolution satellite imagery of urban areas. Photogramm. Eng. Remote Sens. 2005, 71, 169–177.
  11. Otsu, N. A threshold selection method from gray level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  12. Chen, Y.; Wen, D.; Jing, L.; Shi, P. Shadow information recovery in urban areas from very high resolution satellite imagery. Int. J. Remote Sens. 2007, 28, 3249–3254.
  13. Yamazaki, F.; Liu, W.; Takasaki, M. Characteristics of shadow and removal of its effects for remote sensing imagery. In Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS, Cape Town, South Africa, 12–17 July 2009; pp. 426–429.
  14. Polidorio, A.M.; Flores, F.C.; Imai, N.N.; Tommaselli, A.M.G.; Franco, C. Automatic shadow segmentation in aerial color images. In Proceedings of the XVI Brazilian Symposium on Computer Graphics and Image Processing, São Carlos, Brazil, 12–15 October 2003; pp. 270–277.
  15. Huang, J.; Xie, W.; Tang, L. Detection of and compensation for shadows in colored urban aerial images. In Proceedings of the 5th World Congress on Intelligent Control and Automation, WCICA 2004, Hangzhou, China, 15–19 June 2004; pp. 3098–3100.
  16. Chung, K.L.; Lin, Y.R.; Huang, Y.H. Efficient shadow detection of color aerial images based on successive thresholding scheme. IEEE Trans. Geosci. Remote Sens. 2009, 47, 671–682.
  17. Luo, H.; Shao, Z. A shadow detection method from urban high resolution remote sensing image based on color features of shadow. In Proceedings of the International Symposium on Information Science and Engineering, Paris, France, 28–29 October 2013; pp. 48–51.
  18. Cheng, F.; Thiel, K.-H. Delimiting the building height in a city from the shadow in a panchromatic SPOT image: Part 1: Test of forty-two buildings. Int. J. Remote Sens. 1995, 16, 409–415.
  19. Hartl, P.; Cheng, F. Delimiting the building heights in a city from the shadow on a panchromatic SPOT-image: Part 2: Test of a complete city. Int. J. Remote Sens. 1995, 16, 2829–2842.
  20. Massalabi, D.C.; Bénié, G.B.; Beaudry, E. Detecting information under and from shadow in panchromatic Ikonos images of the city of Sherbrooke. In Proceedings of the Geoscience and Remote Sensing Symposium, IEEE, Anchorage, AK, USA, 20–24 September 2004; pp. 2000–2003.
  21. Izadi, M.; Saeedi, P. Three-dimensional polygonal building model estimation from single satellite images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2254–2272.
  22. Wang, J.L.; Wang, X.H. Information extraction of building height and density based on quick bird image in Kunming, China. In Proceedings of the 2009 Joint Urban Remote Sensing Event, Shanghai, China, 20–22 May 2009; pp. 1–8.
  23. Qi, F.; Wang, Y.X. A new calculation method for shape coefficient of residential building using Google Earth. Energy Build. 2014, 76, 72–80.
  24. Turker, M.; Sumer, E. Building-based damage detection due to earthquake using the watershed segmentation of the post-event aerial images. Int. J. Rem. Sens. 2008, 29, 3073–3089.
  25. Qi, F.; Zhai, J.Z.; Dang, G. Building height estimation using Google Earth. Energy Build. 2016, 118, 123–132.
  26. Liasis, G.; Stavrou, S. Satellite images analysis for shadow detection and building height estimation. ISPRS J. Photogram. Remote Sens. 2016, 119, 437–450.
  27. Liu, X.; Su, Y. The QuickBird RemoteSensing image extracting building shadow Information method improvement. J. Hebei United Univ. 2013, 35, 64–67.
  28. Zhang, H. Research on Buildings Shadow Detection Method and Height Inversion with Hight Resolution Sensed Image. Master's Thesis, Southwest Jiaotong University, Chengdu, China, May 2017.
  29. Chander, G.; Hewison, T.J.; Fox, N.; Wu, X.; Xiong, X.; Blackwell, W.J. Overview of intercalibration of satellite instruments. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1056–1080.
  30. Markham, B.L.; Helder, D.L. Forty-year calibrated record of earth-reflected radiance from Landsat: A review. Remote Sens. Environ. 2012, 122, 30–40.
  31. Mishra, N.; Helder, D.; Angal, A.; Choi, J.; Xiong, X. Absolute calibration of optical satellite sensors using Libya 4 pseudo invariant calibration site. Remote Sens. 2014, 6, 1327–1346.
  32. Karlsson, K.G.; Johansson, E. Multi-sensor calibration studies of AVHRR-heritage channel radiances using the simultaneous nadir observation approach. Remote Sens. 2014, 6, 1845–1862.
  33. Xu, G.-S. Sub-pixel edge detection based on curve fitting. In Proceedings of the 2009 Second International Conference on Information and Computing Science, IEEE, Manchester, UK, 21–22 May 2009; pp. 373–375.
  34. Lin, Q. Oil Reservoir Safety Evaluation and Risk Control Research of Port of Dalian. Master's Thesis, Dalian Maritime University, Dalian, China, March 2013.
  35. Shufelt, J.A. Performance Evaluation and Analysis of Monocular Building Extraction from Aerial Imagery. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 311–326.
  36. Kadhim, N.; Mourshed, M. A shadow-overlapping algorithm for estimating building heights from VHR satellite images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 8–12.
  37. Kim, T.; Javzandulam, T.; Lee, T.-Y. Semiautomatic reconstruction of building height and footprints from single satellite images. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 4737–4740.
  38. Shao, Y.; Taff, G.N.; Walsh, S.J. Shadow detection and building height estimation using IKONOS data. Int. J. Remote Sens. 2011, 32, 6929–6944.
  39. Lee, T.; Kim, T. Automatic building height extraction by volumetric shadow analysis of monoscopic imagery. Int. J. Remote Sens. 2013, 34, 5834–5850.
  40. Wang, F.; Lin, Q. The study of the extraction of buildings' height on a large scale with shadows. In Proceedings of the International Conference on Remote Sensing (ICRS), Nanchang, China, 2010.
Figure 1. Flowchart of the proposed approach.
Figure 2. Original image: (a) panchromatic image; and (b) multispectral images.
Figure 3. Merged image.
Figure 4. Cropped images of tanks.
Figure 5. Tank image: (a) original image; (b) hue channel image.
Figure 6. Tank image: (a) original image; and (b) saturation channel image.
Figure 7. Shadow extraction process: (a) cropped HSV image; (b) ratio image; (c) Otsu-thresholded image; (d) area-thresholded image; and (e) final image of tank shadows.
Figure 8. Shadow of a tank.
Figure 9. Interpretation diagram of shadow length calculation.
Figure 10. Geometry used in the algorithm.
Figure 11. Image in which the satellite and sun are on the opposite sides of the building.
Figure 12. Geometry of the case shown in Figure 10.
Figure 13. Conical surfaces in parameter space.
Figure 14. Extraction results: (a) histogram threshold method; (b) morphological method; and (c) proposed method.
Figure 15. Tank shadow numbering.
Figure 16. Comparison of tank heights obtained by various methods.
Figure 17. Mean of the absolute error of the tank height calculated by different methods.
Figure 18. Mean of the relative error of the tank height calculated by different methods.
Figure 19. Hough transform circle detection: (a) minimum radius parameter = 48; and (b) minimum radius parameter = 49.
Figure 20. Absolute error of the estimated volumes of the tanks.
Figure 21. Relative errors of the estimated volumes of the oil tanks.
Table 1. Gaofen 2 satellite specifications.
Parameter | Camera | Value
Spectral range | Panchromatic | 0.45–0.90 μm
 | Multispectral | 0.45–0.52 μm
 | | 0.52–0.59 μm
 | | 0.63–0.69 μm
 | | 0.77–0.89 μm
Spatial resolution | Panchromatic | 0.81 m
 | Multispectral | 3.20 m
Amplitude | | 45 km (two camera combinations)
Revisit cycle (side swing) | | 5 days
Coverage period (not side) | | 69 days
Table 2. Comparison of the shadow detection results.
Detection Method | Recall (%) | Precision (%)
Histogram threshold method | 95.73 | 60.07
Morphological method | 96.20 | 54.02
Proposed method | 96.95 | 96.05
Table 3. Shadow length results.
Shadow Number | Pixel Number Method (m) | Direct Measurement Method (m) | Median Method (m) | Maximum Method (m) | Mean Method (m)
1 | 15.8276 | 13.2891 | 13.4757 | 13.4797 | 13.3476
2 | 15.8276 | 13.2891 | 13.4757 | 13.4797 | 13.3819
3 | 15.8276 | 13.2891 | 13.4777 | 15.3389 | 13.4697
4 | 15.8276 | 13.2891 | 13.4757 | 13.4797 | 13.3383
5 | 15.8276 | 13.2891 | 13.4757 | 13.4797 | 13.2034
6 | 15.8276 | 13.2891 | 13.4757 | 13.4797 | 13.0895
7 | 15.8276 | 13.2891 | 13.4757 | 13.4797 | 13.3634
8 | 15.8276 | 13.2891 | 13.4757 | 13.4797 | 13.0940
9 | 16.7586 | 13.2779 | 13.4757 | 13.4797 | 13.0120
10 | 16.7586 | 13.2779 | 13.4757 | 13.4797 | 13.2622
11 | 16.7586 | 13.2779 | 13.4757 | 13.4797 | 13.0253
12 | 16.7586 | 13.2779 | 13.4757 | 13.4797 | 13.0914
13 | 16.7586 | 13.2779 | 13.4757 | 13.4797 | 13.4041
14 | 16.7586 | 13.2779 | 13.4777 | 17.1981 | 14.0023
15 | 16.7586 | 13.2779 | 13.4757 | 13.4797 | 13.4074
16 | 16.7586 | 13.2779 | 13.4757 | 13.4797 | 13.3108
17 | 15.8276 | 13.2736 | 13.4757 | 13.4797 | 13.1256
18 | 15.8276 | 13.2736 | 13.4757 | 13.4797 | 13.3260
19 | 15.8276 | 13.2736 | 13.4777 | 15.3389 | 13.4682
20 | 15.8276 | 13.2736 | 13.4757 | 13.4797 | 13.3260
21 | 15.8276 | 13.2736 | 13.4757 | 13.4797 | 13.3890
22 | 15.8276 | 13.2736 | 13.4777 | 23.7044 | 15.7199
23 | 15.8276 | 13.2736 | 13.4777 | 15.3389 | 13.4979
24 | 15.8276 | 13.2736 | 13.4757 | 13.4797 | 13.3180
Table 4. Tank heights obtained by various methods.
Shadow Number | Pixel Number Method (m) | Direct Measurement Method (m) | Median Method (m) | Maximum Method (m) | Mean Method (m)
1 | 25.3242 | 21.2626 | 21.5611 | 21.5675 | 21.3562
2 | 26.8138 | 21.2947 | 21.5611 | 21.5675 | 21.4110
3 | 26.8138 | 21.2469 | 21.5643 | 24.5422 | 21.5515
4 | 25.3242 | 21.2819 | 21.5611 | 21.5675 | 21.3413
5 | 23.8346 | 21.3010 | 21.5611 | 21.5675 | 21.1254
6 | 23.8346 | 21.2653 | 21.5611 | 21.5675 | 20.9432
7 | 25.3242 | 21.2962 | 21.5611 | 21.5675 | 21.3814
8 | 25.3242 | 21.2547 | 21.5611 | 21.5675 | 20.9504
9 | 26.8138 | 21.2446 | 21.5611 | 21.5675 | 20.8192
10 | 25.3242 | 21.2710 | 21.5611 | 21.5675 | 21.2195
11 | 25.3242 | 21.2570 | 21.5611 | 21.5675 | 20.8405
12 | 25.3242 | 21.2674 | 21.5611 | 21.5675 | 20.9462
13 | 26.8138 | 21.2818 | 21.5611 | 21.5675 | 21.4466
14 | 25.3242 | 21.2667 | 21.5643 | 27.5170 | 22.4037
15 | 26.8138 | 21.2963 | 21.5611 | 21.5675 | 21.4518
16 | 26.8138 | 21.2870 | 21.5611 | 21.5675 | 21.2973
17 | 25.3242 | 21.2378 | 21.5611 | 21.5675 | 21.0010
18 | 25.3242 | 21.2725 | 21.5611 | 21.5675 | 21.3216
19 | 26.8138 | 21.2838 | 21.5643 | 24.5422 | 21.5491
20 | 26.8138 | 21.2878 | 21.5611 | 21.5675 | 21.3216
21 | 26.8138 | 21.2722 | 21.5611 | 21.5675 | 21.4224
22 | 40.2206 | 21.2539 | 21.5643 | 37.9270 | 25.1518
23 | 26.8138 | 21.2702 | 21.5643 | 24.5422 | 21.5966
24 | 26.8138 | 21.2875 | 21.5611 | 21.5675 | 21.3088
Table 5. Estimated tank radiuses.
Shadow Number | Calculated Radius Length (m) | Actual Radius Length (m) | Absolute Error (m) | Relative Error (%)
1 | 39.7177 | 40.0000 | 0.2823 | 0.71
2 | 39.7584 | 40.0000 | 0.2416 | 0.60
3 | 39.8245 | 40.0000 | 0.1755 | 0.44
4 | 39.7335 | 40.0000 | 0.2665 | 0.67
6 | 40.1134 | 40.0000 | 0.1134 | 0.28
7 | 39.6921 | 40.0000 | 0.3079 | 0.77
8 | 39.6844 | 40.0000 | 0.3156 | 0.79
9 | 39.7004 | 40.0000 | 0.2996 | 0.75
11 | 39.6587 | 40.0000 | 0.3413 | 0.85
12 | 39.7253 | 40.0000 | 0.2747 | 0.69
13 | 39.6668 | 40.0000 | 0.3332 | 0.83
14 | 39.6587 | 40.0000 | 0.3413 | 0.85
15 | 39.7168 | 40.0000 | 0.2832 | 0.71
16 | 39.7166 | 40.0000 | 0.2834 | 0.71
17 | 39.6834 | 40.0000 | 0.3166 | 0.79
18 | 39.7171 | 40.0000 | 0.2829 | 0.71
19 | 39.7168 | 40.0000 | 0.2832 | 0.71
20 | 40.1461 | 40.0000 | 0.1461 | 0.37
21 | 39.7335 | 40.0000 | 0.2665 | 0.67
22 | 39.6752 | 40.0000 | 0.3248 | 0.81
23 | 39.7255 | 40.0000 | 0.2745 | 0.69
24 | 39.6668 | 40.0000 | 0.3332 | 0.83
Mean | 39.7469 | | 0.2767 | 0.69
Table 6. Calculated volumes of the oil tanks.
Shadow Number | Calculated Tank Volume (10⁴ m³) | Actual Tank Volume (10⁴ m³) | Absolute Error (10⁴ m³) | Relative Error (%)
1 | 10.6799 | 10.9532 | 0.2733 | 2.50
2 | 10.7018 | 10.9532 | 0.2514 | 2.30
3 | 10.7390 | 10.9532 | 0.2142 | 1.96
4 | 10.6884 | 10.9532 | 0.2648 | 2.42
6 | 10.8938 | 10.9532 | 0.0594 | 0.54
7 | 10.6662 | 10.9532 | 0.2870 | 2.62
8 | 10.6620 | 10.9532 | 0.2912 | 2.66
9 | 10.6706 | 10.9532 | 0.2826 | 2.58
11 | 10.6482 | 10.9532 | 0.3050 | 2.78
12 | 10.6840 | 10.9532 | 0.2692 | 2.46
13 | 10.6526 | 10.9532 | 0.3006 | 2.74
14 | 10.6498 | 10.9532 | 0.3034 | 2.77
15 | 10.6795 | 10.9532 | 0.2737 | 2.50
16 | 10.6793 | 10.9532 | 0.2739 | 2.50
17 | 10.6615 | 10.9532 | 0.2917 | 2.66
18 | 10.6796 | 10.9532 | 0.2736 | 2.50
19 | 10.6810 | 10.9532 | 0.2722 | 2.49
20 | 10.9116 | 10.9532 | 0.0416 | 0.38
21 | 10.6884 | 10.9532 | 0.2648 | 2.42
22 | 10.6587 | 10.9532 | 0.2945 | 2.69
23 | 10.6857 | 10.9532 | 0.2675 | 2.44
24 | 10.6526 | 10.9532 | 0.3006 | 2.74
Mean | 10.6961 | | 0.2571 | 2.35
Table 7. Algorithm performance against previous works.
Ref | Year | VHR Satellite Imagery Source | RMSE | Mean Error (m)
[37] | 2007 | Panchromatic IKONOS | 1.86 | 1.34
[38] | 2011 | Panchromatic IKONOS | 12.99 | -
[21] | 2012 | QuickBird | 1.38 | 1.14
[39] | 2013 | Panchromatic IKONOS | 1.34 | -
 | | QuickBird | 1.71 | -
 | | KOMPSAT2 | 1.67 | -
 | | WorldView1 (WV1) | 1.88 | -
[25] | 2016 | Google Earth | 0.98 | 0.82
[26] | 2016 | Google Earth | 22.66 | -
[36] | 2018 | WorldView3 (WV3) | 1.22 | 0.65
This paper | | Gaofen-2 | 0.23 | 0.24

Citation

Wang, T.; Li, Y.; Yu, S.; Liu, Y. Estimating the Volume of Oil Tanks Based on High-Resolution Remote Sensing Images. Remote Sens. 2019, 11, 793. https://0-doi-org.brum.beds.ac.uk/10.3390/rs11070793