Article

Cloud Observation and Cloud Cover Calculation at Nighttime Using the Automatic Cloud Observation System (ACOS) Package

Convergence Meteorological Research Department, National Institute of Meteorological Sciences, Seogwipo, Jeju 63568, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(14), 2314; https://doi.org/10.3390/rs12142314
Submission received: 27 June 2020 / Revised: 16 July 2020 / Accepted: 17 July 2020 / Published: 18 July 2020
(This article belongs to the Section Atmospheric Remote Sensing)

Abstract

An Automatic Cloud Observation System (ACOS) and a cloud cover calculation algorithm were developed to calculate cloud cover at night, and the results were compared with the cloud cover data of a manned observatory (Daejeon Regional Office of Meteorology, DROM) that records human observations. Annual and seasonal analyses were conducted using the 1900–0600 local standard time (LST) hourly data from January to December 2019. Before calculating the cloud cover of ACOS, pre-processing was performed to remove surrounding obstacles and to correct the distortion caused by the fish-eye lens. In addition, the red–blue ratio (RBR) threshold was determined according to the image characteristics (RBR and luminance), using the red, green, and blue (RGB) brightness values of the area in which the solar zenith angle (SZA) was less than 80°, to calculate the cloud cover. The calculated cloud cover exhibited a bias of −0.28 tenths, a root mean square error (RMSE) of 1.78 tenths, and a correlation coefficient of 0.91 relative to DROM across all cases. The frequency of cases in which the observed and calculated cloud cover differed by less than 1 tenth was 46.82%, and the frequency of cases with differences of less than 2 tenths was 87.79%.


1. Introduction

Clouds, which cover approximately 70% of the Earth's surface, continually alter the balance of radiative energy, resulting in meteorological changes that affect the hydrologic cycle, local weather, and climate. Their interaction with aerosols produces global cooling as well as a greenhouse effect through changes in their physical characteristics. Such interactions affect precipitation efficiency, which in turn affects vegetation and water resource security [1,2,3]. In addition, cloud cover and cloud type are meteorological elements that increase uncertainty in predicting climate and meteorological phenomena, because clouds both reflect incoming solar radiation and absorb the terrestrial radiation emitted from the ground surface [4,5]. Therefore, cloud observations, including high-frequency, high-accuracy cloud cover and cloud type, are required as input data for radiation, weather forecast, and climate models [6,7].
Although cloud cover is a key observation for global weather and climate, it is one of the meteorological variables for which ground-based automatic observation has not yet been implemented in several countries, including South Korea. In other words, cloud cover (in octas or tenths) has only been recorded through the eyes of a human observer and is therefore based on the observer's subjective judgment [8,9]. As a result, cloud cover data often vary considerably from observer to observer. In particular, data recorded by an unskilled observer can contain errors that degrade the quality of the observation record [2,6]; that is, the data lack objectivity. During the day, human cloud observation is performed at 1-h intervals; at night, however, it is performed only at 1–3 h intervals, depending on the observer's work cycle and the weather conditions. To compensate for the lack of objectivity and the coarse observation interval, which are the major shortcomings of human observation, meteorological satellites and ground-based remote sensing data can be used [4,10]. Geostationary satellites have short observation intervals (2–10 min), but their coarse spatial resolution (2 km × 2 km) makes it difficult to identify the horizontal distribution or shape of clouds. Polar-orbiting satellites offer relatively high spatial resolution (250 m × 250 m, e.g., MODIS) but can observe the same point only twice a day [3,11,12]. Ground-based remote sensing equipment includes ceilometers, lidar, and camera-based imagers (e.g., Skyviewer (PREDE Inc. [6,13]), Whole Sky Imager (WSI) (Marine Physical Laboratory [14,15]), and Total Sky Imager (TSI) (Yankee Environmental System (YES) Inc. [16])). Ceilometers and lidar, which irradiate an object directly with a laser and detect the return signal, cannot determine cloud information for the entire sky because they sample only part of it [10]. Camera-based devices, on the other hand, offer a number of benefits for cloud detection, such as wide coverage, high image resolution, and a short observation interval [6,17,18].
Camera-based devices for cloud observation can view the entire sky of a hemisphere, as a human observer does, because they are equipped with a fish-eye lens (180° field of view (FOV)) [19]. The observation interval can be set to one minute, and 24-h continuous observation can be performed, depending on the camera performance and operational method. Many previous studies have detected cloud cover during the day, when a light source (the sun) is present [6,7,9,20,21]; that is, clouds were distinguished from the sky using the characteristics of sunlight scattered in the atmosphere [12,22]. At night, however, it is difficult to distinguish clouds from the sky because there is no light source such as the sun. When all surrounding light sources are blocked, nighttime clouds are difficult to detect with visible-channel information (i.e., RGB brightness) alone; in practice, clouds are detected through light scattered from artificial sources (such as street lamps around the fields or lights from buildings). For this reason, many studies [23,24,25,26] calculated nighttime cloud cover by mounting an infrared (IR) sensor or IR filter, or by detecting the radiance emitted from clouds and the sky using a sky scanner. In other words, additional devices that can detect the IR region are required instead of conventional cameras [27]. In this study, a system capable of observing clouds at night using a conventional digital camera, which images the visible region, and a nighttime cloud cover calculation algorithm were developed. Details of the system are explained in Section 2, and the nighttime cloud cover calculation algorithm is described in Section 3.

2. Data and Research Method

To calculate the nighttime cloud cover in this study, an Automatic Cloud Observation System (ACOS) based on a digital camera (Canon EOS 6D) was developed by the Korea Meteorological Administration (KMA) National Institute of Meteorological Sciences (NIMS), as shown in Figure 1. The system was installed at the Daejeon Regional Office of Meteorology (DROM, 36.37°N, 127.37°E), a manned observatory that performs human observations, and has been in operation since 30 June 2018. Table 1 provides the detailed specifications of ACOS. The complementary metal oxide semiconductor (CMOS) sensor of this camera responds sensitively to the luminance changes of objects detected at night by supporting an International Organization for Standardization (ISO) sensitivity of up to 25,600. In addition, the camera offers shutter speeds of up to five seconds, which makes it possible to capture dark scenes with a large depth of field (a large area in focus, including a distant background) through a long exposure at an F-stop of F11 [10,28]. The camera lens was installed at a height of approximately 1.8 m to approximate the view from human eye level. A fish-eye lens (EF8-15 F/4L Fisheye USM) was installed to capture images of surrounding objects, including the sky and clouds, within a 180° FOV in the same manner as human observation. In addition, heating and ventilation devices were installed inside the system so that it could perform 24-h continuous observation without manual maintenance. When the ambient temperature dropped below −2 °C, heating was operated automatically to remove frost and snow from the glass dome and moisture from the lens. The ventilation device maintained air flow between the glass dome and the body of the system to remove foreign debris, such as snow, water droplets, leaves, and dust, since these objects contaminate the images (Figure 1a).
The images captured by ACOS were processed by converting the red, green, and blue (RGB) channels of each pixel into a digital brightness number between 0 and 255. Although the FOV of the camera extends to 180°, only data within an 80° solar zenith angle (SZA) (160° FOV) were used for analysis, owing to the permanent blockage of the horizon by surrounding objects (e.g., buildings, trees, and equipment) [11,29,30]. For the analysis, nighttime observations from 1 January to 31 December 2019 (1900–0600 local standard time (LST) hourly data) were collected and compared with the DROM human observation data. The nighttime human observations at DROM were performed every hour during inclement weather (e.g., rain and snow) and every three hours otherwise, and the nighttime observation hours differed by season. A total of 3097 images were collected, excluding cases missing from ACOS. ACOS was installed on the Automated Synoptic Observing System (ASOS) observation field of DROM, an optimal location for comparison with the human observation data; in other words, there was no difference in cloud cover caused by the distance between the observation points [6,31]. The cloud cover of ACOS was calculated as a percentage (0–100%), while the human-observed cloud cover of DROM was recorded as an integer between 0 and 10. Therefore, the percentage cloud cover was converted to the tenth scale for comparison, as shown in Table 2. For the frequency analysis of all cases and seasons, the accuracy of the calculated cloud cover was evaluated using the bias, root mean square error (RMSE), and correlation coefficient (R), as shown in Equations (1)–(3).
$$\mathrm{bias} = \frac{1}{N}\sum_{i=1}^{N}\left(M_i - O_i\right) \tag{1}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(M_i - O_i\right)^2} \tag{2}$$

$$R = \frac{\sum_{i=1}^{N}\left(M_i - \overline{M}\right)\left(O_i - \overline{O}\right)}{\sqrt{\sum_{i=1}^{N}\left(M_i - \overline{M}\right)^2}\sqrt{\sum_{i=1}^{N}\left(O_i - \overline{O}\right)^2}} \tag{3}$$
Here, M represents the cloud cover calculated by the algorithm proposed in this study, O represents the cloud cover observed by the human observer at DROM, and N represents the total number of cases (3097).
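For illustration, Equations (1)–(3) can be computed as in the following Python sketch (NumPy is assumed; the paired arrays `acos_tenth` and `drom_tenth` in the usage comment are hypothetical names for the ACOS and DROM series):

```python
import numpy as np

def verification_stats(m, o):
    """Bias, RMSE, and correlation coefficient R per Equations (1)-(3).

    m: cloud cover calculated by the algorithm (ACOS), in tenths.
    o: cloud cover observed by the human observer (DROM), in tenths.
    Both are paired 1-D arrays of equal length N.
    """
    m, o = np.asarray(m, dtype=float), np.asarray(o, dtype=float)
    bias = np.mean(m - o)                      # Equation (1)
    rmse = np.sqrt(np.mean((m - o) ** 2))      # Equation (2)
    r = np.corrcoef(m, o)[0, 1]                # Equation (3), Pearson R
    return bias, rmse, r

# Hypothetical usage with the 3097 paired nighttime cases:
# bias, rmse, r = verification_stats(acos_tenth, drom_tenth)
```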

3. ACOS Cloud Cover Calculation Method

The daytime sky is blue except at sunrise (dawn), at sunset (twilight), and on cloudy days. This is because Rayleigh scattering of sunlight, whose intensity is proportional to the inverse fourth power of the wavelength (λ−4), is strongest in the short, blue wavelengths of the visible region [12,22]. Clouds, on the other hand, appear white because they scatter all visible wavelengths. Therefore, when the sky is imaged with a digital camera operating in the visible region, the brightness of the B channel is the highest among the RGB channels, whereas for clouds the brightness of all the RGB channels is high [2]. Consequently, the red–blue ratio (RBR), which has been used in many studies to detect clouds, is smaller than 1 in sky pixels (e.g., 0.6–0.8 [11,18]) and close to 1 in cloud pixels, so clouds can be distinguished from the sky using the RBR [6,7,8]. At night, however, it is more difficult to distinguish objects than during the day because there is no light source such as the sun [32]. Therefore, Y, which represents the luminance of an image in the International Commission on Illumination (CIE) 1931 XYZ color space, was used in addition to the RBR information. In this instance, Y = 0.2126R + 0.7152G + 0.0722B was applied [33,34].
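As a minimal sketch of these two quantities, the per-pixel RBR and CIE 1931 luminance Y can be derived from an 8-bit RGB image as follows (the epsilon guarding division by zero in dark pixels is our addition, not part of the paper):

```python
import numpy as np

def rbr_and_luminance(img):
    """Per-pixel red-blue ratio (RBR) and CIE 1931 luminance Y.

    img: uint8 array of shape (H, W, 3), channels in RGB order.
    """
    rgb = img.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rbr = r / np.maximum(b, 1e-6)              # sky ~0.6-0.8, cloud ~1
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Y = 0.2126R + 0.7152G + 0.0722B
    return rbr, y
```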
Images captured using a fish-eye lens contain the sky, clouds, and surrounding obstacles (e.g., buildings, trees, and equipment) within the wide FOV of a hemisphere, but they are stored as distorted two-dimensional (2D) images. In other words, when the three-dimensional (3D) sky dome is stored as a 2D image, the photographed objects are distorted, so the apparent cloud coverage may differ from that of the actual scene [11]; the difference is particularly large in the edge region of the image. Therefore, the cloud cover must be calculated after removing blocked areas from the images and correcting the distortion [3,18,21]: pixels in the blocked areas cannot be classified as sky or cloud, and the distorted geometry can cause large differences from the actual scene, so both error factors must be handled by the algorithm before the cloud cover is calculated. Accordingly, cloud cover was calculated by first performing pre-processing to remove obstacles from the images and to correct the distortion, as described in Section 3.1, and then applying the nighttime cloud cover calculation algorithm, as described in Section 3.2.

3.1. Obstacle Removal and Distortion Correction

Images captured using a fish-eye lens contain obstacles, such as surrounding buildings, trees, and equipment. Therefore, pixels that cannot be used to distinguish clouds from the sky must be eliminated first; that is, obstacles captured within the 80° SZA set in this study were removed [6,9]. A clear daytime image, in which surrounding objects are relatively distinct, was used for this purpose: pixels for which the average brightness of the RGB channels was less than 60, as well as pixels for which the standard deviation (std) of B within the surrounding 5 × 5 pixel window was 10 or more, were classified as obstacles and removed from the images. For the orthogonal-projection distortion correction of the sky images obtained after removing the obstacle pixels from all pixels within the 80° SZA, the x and y axes were corrected using Equations (4)–(8) [3,18,35,36,37].
$$r = \sqrt{(x - c_x)^2 + (y - c_y)^2} \tag{4}$$

$$\theta = \arcsin\left(r / radi\right) \tag{5}$$

$$\phi = \arcsin\left((y - c_y) / r\right) \tag{6}$$

$$x' = c_x + r\,\theta\cos(\phi) \tag{7}$$

$$y' = c_y + r\,\theta\sin(\phi) \tag{8}$$

where $r$ is the distance between the center pixel $(c_x, c_y)$ of the original image and each pixel, $\theta$ is the SZA, $radi$ is the radius of the image, and $\phi$ is the azimuth; $x'$ and $y'$ are the coordinates of each pixel after distortion correction.
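The pre-processing could be sketched as follows. The obstacle mask uses the thresholds stated above (average RGB brightness below 60, or a 5 × 5 standard deviation of B of 10 or more, in a clear daytime reference image), and the remapping applies Equations (4)–(8); since the paper does not specify the interpolation scheme or how the arcsin quadrant ambiguity in Equation (6) is resolved, nearest-neighbour sampling, a simple quadrant fix, and clipping to the frame are our assumptions:

```python
import numpy as np

def obstacle_mask(day_img):
    """Mask permanently blocked pixels using a clear daytime reference image."""
    rgb = day_img.astype(float)
    mean_rgb = rgb.mean(axis=2)                       # average RGB brightness
    b = rgb[..., 2]
    pad = np.pad(b, 2, mode="edge")                   # 5x5 std of the B channel
    win = np.lib.stride_tricks.sliding_window_view(pad, (5, 5))
    std_b = win.std(axis=(-2, -1))
    return (mean_rgb < 60) | (std_b >= 10)            # True where obstacles block the sky

def undistort(img, radi):
    """Remap a fish-eye image following Equations (4)-(8)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    r = np.hypot(xx - cx, yy - cy)                                        # Eq. (4)
    theta = np.arcsin(np.clip(r / radi, -1.0, 1.0))                       # Eq. (5)
    phi = np.arcsin(np.clip((yy - cy) / np.maximum(r, 1e-6), -1.0, 1.0))  # Eq. (6)
    phi = np.where(xx < cx, np.pi - phi, phi)         # quadrant fix (our assumption)
    xs = cx + r * theta * np.cos(phi)                                     # Eq. (7)
    ys = cy + r * theta * np.sin(phi)                                     # Eq. (8)
    xs = np.clip(np.round(xs), 0, w - 1).astype(int)
    ys = np.clip(np.round(ys), 0, h - 1).astype(int)
    return img[ys, xs]                                # nearest-neighbour gather
```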
Figure 2 shows example images before and after obstacle removal and distortion correction. In the area with SZA less than 80°, which was used for the cloud cover calculation, obstacles (objects such as surrounding buildings, trees, and antennas) were removed, as shown by the green area in Figure 2b. This area is permanently blocked in the image, and the corresponding pixels were not considered during the cloud cover calculation. As shown in Figure 2a, the distortion of the image makes objects close to the image center look larger and those farther from the image center look smaller [3,38,39]. Thus, the clouds at the center of the image were corrected to be smaller and those at the edge were corrected to be larger, as seen for the clouds around A and B in Figure 2. Therefore, when images captured with a fish-eye lens are used, the relative size of clouds must be corrected through distortion correction [18,19].

3.2. Nighttime Cloud Cover Calculation Algorithm

Figure 3 shows the nighttime cloud cover calculation algorithm developed in this study. Images captured by ACOS were converted into the brightness of each RGB channel and were subjected to obstacle removal (primary obstacle pixel classification) and distortion correction, following the methods described in Section 3.1. Subsequently, variable obstacle areas in which the average brightness of the RGB channels was 240 or higher, or in which the standard deviation of B in the 5 × 5 pixel region was 10 or higher (moonlight and ambient artificial light), were removed (secondary obstacle pixel classification). In nighttime images, the brightness of the R channel is relatively high compared with daytime images, because the longwave radiation emitted by the earth into the night sky affects the R channel, which has the longest wavelength among the RGB channels [28,40]. Therefore, when an RBR threshold is applied to an image captured at night, the boundary between the sky and clouds becomes obscured, because the RBR becomes large in the sky area as well as in the cloud area. To address this, the brightness of the R channel was adjusted for all pixels (an adjustment value of 25 was found to be suitable), and different RBR thresholds were then set depending on the luminance (Y) characteristics of the image [41,42]. The adjustment value of 25 is the average R brightness within 10° of the center of the clearest sky image collected (the image with the smallest average RGB brightness, captured at 0300 UTC on 3 January 2019).
In the edge region of the image, the luminance is large even in cloud-free areas; it can be increased by atmospheric turbidity (aerosols, haze, etc.) as well as by light pollution (street lamps around the fields or lights from buildings) near the observation equipment [8,22,40,43]. Figure 4 shows examples of classifying images and determining the RBR thresholds according to the luminance and RBR characteristics of the collected images, using the algorithm of this study (Figure 3). Figure 4b shows a hazy, cloud-like shape in the edge region of the image despite a clear, cloudless sky; this region exhibits high luminance, as shown in Figure 4e. Therefore, for images similar to this case, the RBR threshold was set to a high value of 1 so that hazy regions were not detected as clouds; that is, only pixels exceeding an RBR of 1 were classified as cloud. Figure 4c is an overcast sky image in which the edge region is darker than the center, so the luminance of similar images is lower at the edges than at the center. In this case, the RBR threshold was set to 0.8 times the average RBR so that clouds in dark regions could still be detected. Figure 4a shows high luminance in the edge region of the image due to cloud; when the average RBR of the image center ( $\overline{RBR}_c$ ) was greater than 0.4, the RBR threshold was set to a low value of 0.75 to detect the cloud. In all cases, the RBR threshold was determined for each image based on its classification. To test the sensitivity of these choices, the image classification conditions ( $\overline{Y}_e > \overline{Y}_c$ and $\overline{RBR}_c > 0.4$ ) and RBR thresholds (T = 0.75, 1.00, $\overline{RBR} \times 0.8$) were varied within ±10% at 1% intervals. The image classification results changed in at most approximately 9.85% of all cases, while the remaining cases produced the same results. In that worst case, the bias between DROM and ACOS was −0.53 tenths, the RMSE was 2.11 tenths, and the correlation coefficient was 0.87, which do not differ significantly from the results in Section 4. In other words, the classification conditions and thresholds determined in this study are sufficient to classify the images and calculate the cloud cover.
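Reading the branching off Figure 3 and the examples above, the per-image threshold selection can be summarized in the sketch below; the exact ordering of branches in the authors' flow chart may differ, so this is a reconstruction from the text, not the reference implementation:

```python
def select_rbr_threshold(y_edge_mean, y_center_mean, rbr_center_mean, rbr_mean):
    """Choose the RBR threshold T from the image classification of Figure 3.

    y_edge_mean:     average luminance of the edge region (SZA 60-80 deg)
    y_center_mean:   average luminance of the center region (SZA 0-20 deg)
    rbr_center_mean: average RBR of the center region
    rbr_mean:        average RBR of the whole unmasked image
    """
    if y_edge_mean > y_center_mean:          # bright edge region
        if rbr_center_mean > 0.4:
            return 0.75                      # cloud over the image (Figure 4a)
        return 1.00                          # clear sky with hazy edges (Figure 4b)
    return 0.8 * rbr_mean                    # overcast, dark edges (Figure 4c)
```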
After defining the total number of pixels as $N_{total}$, the number of pixels that exceed the RBR threshold as $N_{cloud}$, and the number of mask pixels as $N_{mask}$, the cloud cover was calculated as

$$\mathrm{cloud\ cover} = \frac{N_{cloud}}{N_{total} - N_{mask}} \times 100\ (\%)$$
In this case, Figure 4a shows that the cloud cover calculated based on the RBR threshold (T) of 0.75 (Figure 4g) was 91.37%. This equates to a cloud cover of 9 tenths, while the human observation of cloud cover at DROM was 8 tenths. Figure 4b shows that the cloud cover calculated based on the RBR threshold of 1 (Figure 4h) was 0.63%. This equates to a cloud cover of 0 tenths, while the human observation of cloud cover was also 0 tenths. Figure 4c shows that the cloud cover calculated based on the RBR threshold of 0.47 (Figure 4i) was 99.99%, equating to 10 tenths, while the human observation of cloud cover was 10 tenths.
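Combining the threshold with the pixel masks, the final computation and the conversion to the tenth scale of Table 2 might look like the following sketch (`rbr`, `mask`, and `t` are assumed to come from the previous steps; the assignment of values falling exactly on a class boundary is not specified in Table 2, so each class is taken here as closed on its upper bound):

```python
import numpy as np

def cloud_cover_percent(rbr, mask, t):
    """Cloud cover (%) = N_cloud / (N_total - N_mask) x 100 over valid pixels."""
    valid = ~mask                                    # exclude obstacle/mask pixels
    n_cloud = np.count_nonzero((rbr > t) & valid)    # pixels exceeding the threshold
    return 100.0 * n_cloud / np.count_nonzero(valid)

def percent_to_tenth(pct):
    """Convert percent cloud cover to tenths per Table 2 (<=5% -> 0, >95% -> 10)."""
    upper_bounds = [5, 15, 25, 35, 45, 55, 65, 75, 85, 95]
    for tenth, ub in enumerate(upper_bounds):
        if pct <= ub:
            return tenth
    return 10
```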

4. Results

Figure 5 shows the tenth-scale cloud cover scatterplot of ACOS and DROM for all cases in 2019. The scatterplot represents the agreement of the cloud cover between the two observation methods as a percentage frequency; boxes closer to red indicate higher frequencies. When the cloud cover was 0 tenths, an agreement of 27.83%, the highest frequency, was observed; when the cloud cover was 10 tenths, an agreement of 12.23% was observed. Thus, 40.07% of all cases agreed at cloud covers of 0 and 10 tenths, and 46.82% of all cases lay on the 1:1 line. Agreement close to the 1:1 line was generally observed for cases with high cloud cover, but ACOS generally underestimated cases with low cloud cover. The average cloud covers of ACOS and DROM were 4.58 and 4.86 tenths, a difference of −0.28 tenths; the RMSE was 1.78 tenths, below 2 tenths, and the correlation coefficient (R) between the two cloud covers was 0.91. For reference, when comparing the day- and nighttime monthly average data of MODIS on the Terra/Aqua satellites [44] with DROM (www.weather.go.kr), the average cloud covers of MODIS and DROM were 6.61 and 5.18 tenths, a difference of 1.42 tenths; the RMSE was 1.50 tenths, and the correlation coefficient was 0.88. For daytime cloud cover calculated using ground-based imagers, previous studies [6,18] reported that the bias between imager-calculated and human-observed cloud cover did not exceed 0.5 tenths.
Figure 6 shows the results of Figure 5 decomposed by season. The seasonal scatterplots also exhibit high frequencies at 0 and 10 tenths in all seasons. In summer, when the rainy season is concentrated under the influence of the North Pacific air mass, the frequency of 10 tenths was the highest among all seasons (20.77%) and that of 0 tenths was the lowest (9.70%), because cloudy weather persisted for long periods. In addition, cloud cover changed more than in the other seasons, resulting in the largest error (RMSE of 2.04 tenths) and the lowest correlation coefficient (0.87). In winter, on the other hand, the frequency of 0 tenths was the highest (41.67%), because clear weather persisted under the influence of the Siberian high [6]. Accordingly, in winter, when the cloud cover changed little, the smallest error was observed (RMSE of 1.21 tenths) and the correlation coefficient was high (0.96). In summer and autumn, the biases were −0.46 and −0.59 tenths, indicating that the ACOS cloud cover was approximately 0.5 tenths lower than the human-observed cloud cover. In both seasons, the ACOS cloud cover agreed relatively well with the human observations for cases with high cloud cover (>5 tenths), but there were large differences for cases with low cloud cover. This is because large calculation errors occurred in the sunrise and sunset images, which appeared frequently in summer and autumn, when the longer days cause sunrise and sunset to fall within the 1900–0600 LST window used for the nighttime cloud cover calculation [45,46]. In these images, clouds were mistaken for sky because the cloud regions could not sufficiently scatter sunlight, while the brightness of the B channel increased (and the RBR decreased) due to Rayleigh scattering [2,7]. Conversely, sky with increased brightness can be wrongly detected as cloud on the eastern edge of a sunrise image and on the western edge of a sunset image, so further algorithm improvement is required (Figure 7) [7,8]. When the 139 cases with these issues were excluded, the average cloud covers of ACOS and DROM were 4.66 and 4.84 tenths, with a bias of −0.17 tenths, an RMSE of 1.52 tenths, and a correlation coefficient of 0.94. As it is difficult to distinguish clouds from the sky in such images, the characteristics of sunrise and sunset images need to be analyzed to enable continuous cloud cover calculation during both day and night.
Figure 8 shows the frequency distribution of the cloud cover differences between ACOS and DROM for each season. In this distribution, a higher frequency of 0 tenth differences indicates better agreement between the cloud cover calculated by ACOS and the human-observed cloud cover of DROM. The frequency of a 0 tenth difference was highest in winter (59.27%) and lowest in summer (31.23%), and was similar in spring (49.12%) and autumn (48.51%). The RMSE for all cases in this study was 1.78 tenths. When errors within 2 tenths were allowed, the seasonal agreement shown in Table 3 was observed: a high agreement of 94.86% in winter, and 82.17% even in summer, when the RMSE was relatively high and the correlation coefficient was low. Across all cases, the agreement was 87.79% when errors within 2 tenths were allowed.

5. Summary and Conclusions

In this study, ACOS, based on a digital camera, and a cloud cover calculation algorithm were developed for nighttime cloud cover calculation. ACOS can continuously capture images of the sky during day and night by varying the ISO, F-stop, and shutter speed settings. During the day, it is relatively easy to distinguish between clouds and the sky owing to the scattering characteristics of sunlight in the visible region [12,22]. At night, however, it is difficult to distinguish objects in the absence of sunlight; a light-sensitive camera sensor and a long exposure are therefore required to observe clouds at night. An image captured with a fish-eye lens is distorted, making objects close to the image center look larger and those at the edge look smaller. In addition, obstacles such as surrounding buildings, trees, and equipment are included in the images because of the wide FOV, and they cause errors in the cloud cover calculation. Therefore, to produce sky images similar to human observation, obstacles were removed from the images and the distortion was corrected as pre-processing before the cloud cover calculation. The calculated ACOS cloud cover was compared with the nighttime cloud cover data of DROM, obtained through human observation during 1900–0600 LST in 2019. As ACOS was installed on the observation field of DROM, there was no difference in cloud cover caused by the distance between the observation points.
The RBR threshold is commonly used for cloud cover calculation [7,18,23]. At night, however, the brightness of the R channel is high because of the longwave radiation emitted from the ground [28,40]. Therefore, in this study, the brightness of the R channel was corrected, and different RBR thresholds were set depending on the image characteristics, using a color space that provides luminance. The bias between ACOS and DROM was −0.28 tenths for all cases in 2019, the RMSE was 1.78 tenths, and the correlation coefficient was 0.91. The frequency of cases with differences of 0 tenths was 46.82%, and that of cases with differences within 2 tenths was 87.79%. By season, in winter, when clear weather generally persisted, the bias was 0.04 tenths, the RMSE was 1.21 tenths, and R was 0.96, the highest correlation among the seasons; the frequency of cases with differences of 0 tenths was 59.27% and that of cases with differences within 2 tenths was 94.86%, the highest frequencies. In summer, when the rainy season persisted, cloud cover changed substantially; the bias with respect to the DROM human observations was −0.46 tenths, the RMSE was 2.04 tenths, and the correlation coefficient was 0.87, the lowest value. The frequency of cases with differences of 0 tenths was as low as 31.23%, but that of cases with differences within 2 tenths was 82.17%, which still indicates substantial agreement. In summer and autumn, when the days are longer, there were large differences between the human-observed and calculated cloud cover in the sunrise and sunset images. These differences must be addressed in the future for continuous cloud cover calculation at night or throughout the day and night.
Cloud characteristics, such as cloud cover, cloud height, and cloud type, have traditionally been observed by the human eye and recorded in approximate numbers [30]. These data lack objectivity due to subjective judgment and have low observational periodicity, even though they are highly useful for meteorological and climate research [2,6,9]. Moreover, clouds are not observed at unmanned observatories, such as Automatic Weather System (AWS) and ASOS stations. At night, in particular, the observation method differs because of the absence of sunlight, and continuous observation is difficult because of the nature of the work, unlike during the daytime. Many studies [23,26,35,47,48] have investigated nighttime cloud detection using infrared (IR) sensors; however, such sensors are relatively expensive, which makes them economically inefficient replacements for human observers. For this reason, equipment and cloud cover calculation algorithms based on relatively inexpensive digital cameras have been developed [17,19,49]. Nevertheless, studies on nighttime cloud cover remain scarce, and further research on cloud detection equipment and algorithms that can replace human observation is required.

Author Contributions

Conceptualization, B.-Y.K. and J.W.C.; methodology, B.-Y.K.; software, B.-Y.K.; validation, B.-Y.K.; investigation, B.-Y.K. and J.W.C.; writing—original draft preparation, B.-Y.K.; writing—review and editing, B.-Y.K. and J.W.C.; visualization, B.-Y.K.; supervision, J.W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Korea Meteorological Administration (KMA), grant number KMA2018-00222.

Acknowledgments

This work was funded by the Korea Meteorological Administration Research and Development Program “Development of Application Technology on Atmospheric Research Aircraft” under Grant (KMA2018-00222).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Heinle, A.; Macke, A.; Srivastav, A. Automatic cloud classification of whole sky images. Atmos. Meas. Technol. 2010, 3, 557–567. [Google Scholar] [CrossRef] [Green Version]
  2. Liu, S.; Zhang, L.; Zhang, Z.; Wang, C.; Xiao, B. Automatic cloud detection for all-sky images using superpixel segmentation. IEEE Geosci. Remote Sens. Lett. 2015, 12, 354–358. [Google Scholar]
  3. Yang, J.; Min, Q.; Lu, W.; Yao, W.; Ma, Y.; Du, J.; Lu, T. An automated cloud detection method based on green channel of total sky visible images. Atmos. Meas. Technol. 2015, 8, 4671–4679. [Google Scholar] [CrossRef] [Green Version]
  4. Calbó, J.; Sabburg, J. Feature extraction from whole-sky ground-based images for cloud-type recognition. J. Atmos. Ocean. Technol. 2008, 25, 3–14. [Google Scholar] [CrossRef] [Green Version]
  5. Solomon, S. Climate Change 2007-The Physical Science Basis: Working Group I Contribution to the Fourth Assessment Report of the IPCC; Cambridge University Press: Cambridge, UK, 2007; Volume 4, pp. 1–996. [Google Scholar]
  6. Kim, B.Y.; Jee, J.B.; Zo, I.S.; Lee, K.T. Cloud cover retrieved from skyviewer: A validation with human observations. Asia-Pac. J. Atmos. Sci. 2016, 52, 1–10. [Google Scholar] [CrossRef]
  7. Li, X.; Lu, Z.; Zhou, Q.; Xu, Z. A Cloud Detection Algorithm with Reduction of Sunlight Interference in Ground-Based Sky Images. Atmosphere 2019, 10, 640. [Google Scholar] [CrossRef] [Green Version]
  8. Huo, J.; Lu, D. Cloud determination of all-sky images under low-visibility conditions. J. Atmos. Ocean. Technol. 2009, 26, 2172–2181. [Google Scholar] [CrossRef]
  9. Yun, H.K.; Whang, S.M. Development of a cloud cover reader from whole sky images. Int. J. Eng. Technol. 2018, 7, 33. [Google Scholar] [CrossRef]
  10. Peng, Z.; Yu, D.; Huang, D.; Heiser, J.; Yoo, S.; Kalb, P. 3D cloud detection and tracking system for solar forecast using multiple sky imagers. Sol. Energy 2015, 118, 496–519. [Google Scholar] [CrossRef] [Green Version]
  11. Long, C.N.; Sabburg, J.M.; Calbó, J.; Pages, D. Retrieving cloud characteristics from ground-based daytime color all-sky images. J. Atmos. Ocean. Technol. 2006, 23, 633–652. [Google Scholar] [CrossRef] [Green Version]
  12. Ghonima, M.S.; Urquhart, B.; Chow, C.W.; Shields, J.E.; Cazorla, A.; Kleissl, J. A method for cloud detection and opacity classification based on ground based sky imagery. Atmos. Meas. Technol. Discuss. 2012, 5, 4535–4569. [Google Scholar] [CrossRef]
  13. Yabuki, M.; Shiobara, M.; Nishinaka, K.; Kuji, M. Development of a cloud detection method from whole-sky color images. Polar Sci. 2014, 8, 315–326. [Google Scholar] [CrossRef] [Green Version]
  14. Shields, J.E.; Johnson, R.W.; Karr, M.E.; Weymouth, R.A.; Sauer, D.S. Delivery and Development of a Day/Night Whole Sky Imager with Enhanced Angular Alignment for Full 24 Hour Cloud Distribution Assessment; Final Report; Marine Physical Laboratory, Scripps Institution of Oceanography, University of California: San Diego, CA, USA, 1997; pp. 1–19. [Google Scholar]
  15. Shields, J.E.; Karr, M.E.; Johnson, R.W.; Burden, A.R. Day/night whole sky imagers for 24-h cloud and sky assessment: History and overview. Appl. Opt. 2013, 52, 1605–1616. [Google Scholar] [CrossRef]
  16. Long, C.N.; Slater, D.W.; Tooman, T. Total Sky Imager Model 880 Status and Testing Results; ARM Technical Report ARM TR-006; Department of Energy: Washington, DC, USA, 2001; pp. 1–36.
  17. Dev, S.; Savoy, F.M.; Lee, Y.H.; Winkler, S. Estimation of solar irradiance using ground-based whole sky imagers. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 7236–7239. [Google Scholar]
  18. Lothon, M.; Barnéoud, P.; Gabella, O.; Lohou, F.; Derrien, S.; Rondi, S.; Chiriaco, M.; Bastin, S.; Dupont, J.-C.; Haeffelin, M.; et al. ELIFAN, an algorithm for the estimation of cloud cover from sky imagers. Atmos. Meas. Technol. 2019, 12, 5519–5534. [Google Scholar] [CrossRef] [Green Version]
  19. Dev, S.; Savoy, F.M.; Lee, Y.H.; Winkler, S. WAHRSIS: A low-cost high-resolution whole sky imager with near-infrared capabilities. In Proceedings of the SPIE—The International Society for Optical Engineering, Baltimore, MD, USA, 5–9 May 2014; pp. 1–10. [Google Scholar]
  20. Dev, S.; Lee, Y.H.; Winkler, S. Systematic study of color spaces and components for the segmentation of sky/cloud images. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 5102–5106. [Google Scholar]
  21. Chauvin, R.; Nou, J.; Thil, S.; Grieu, S. Modelling the clear-sky intensity distribution using a sky imager. Sol. Energy 2015, 119, 1–17. [Google Scholar] [CrossRef] [Green Version]
  22. Hosek, L.; Wilkie, A. An analytic model for full spectral sky-dome radiance. ACM Trans. Gr. (TOG) 2012, 31, 1–9. [Google Scholar] [CrossRef]
  23. Shields, J.E.; Karr, M.E.; Burden, A.R.; Mikuls, V.W.; Streeter, J.R.; Johnson, R.W.; Hodgkiss, W.S. Whole Sky Imager Characterization of Sky Obscuration by Clouds for the Starfire Optical Range; Scientific Report for AFRL Contract FA9451-008-C-0226, Technical Note 275, ADA556222; Marine Physical Laboratory, Scripps Institution of Oceanography, University of California: San Diego, CA, USA, 2010; pp. 1–102. [Google Scholar]
  24. Kreuter, A.; Blumthaler, M. Feasibility of polarized all-sky imaging for aerosol characterization. Atmos. Meas. Technol. 2013, 6, 1845. [Google Scholar] [CrossRef] [Green Version]
  25. Klebe, D.I.; Blatherwick, R.D.; Morris, V.R. Ground-based all-sky mid-infrared and visible imagery for purposes of characterizing cloud properties. Atmos. Meas. Technol. 2014, 7, 637. [Google Scholar] [CrossRef] [Green Version]
  26. Shields, J.E.; Karr, M.E. Radiometric calibration methods for day/night whole sky imagers and extinction imagers. Appl. Opt. 2019, 58, 5663–5673. [Google Scholar] [CrossRef]
  27. Shields, J.E.; Burden, A.R.; Karr, M.E. Atmospheric cloud algorithms for day/night whole sky imagers. Appl. Opt. 2019, 58, 7050–7062. [Google Scholar] [CrossRef]
  28. Dev, S.; Savoy, F.M.; Lee, Y.H.; Winkler, S. Nighttime sky/cloud image segmentation. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 345–349. [Google Scholar]
  29. Neto, M.; Luiz, S.; von Wangenheim, A.; Pereira, E.B.; Comunello, E. The use of Euclidean geometric distance on RGB color space for the classification of sky and cloud patterns. J. Atmos. Ocean. Technol. 2010, 27, 1504–1517. [Google Scholar] [CrossRef] [Green Version]
  30. Kazantzidis, A.; Tzoumanikas, P.; Bais, A.F.; Fotopoulos, S.; Economou, G. Cloud detection and classification with the use of whole-sky ground-based images. Atmos. Res. 2012, 113, 80–88. [Google Scholar] [CrossRef]
  31. Kim, B.Y.; Jee, J.B.; Jeong, M.J.; Zo, I.S.; Lee, K.T. Estimation of total cloud amount from skyviewer image data. J. Korean Earth Sci. Soc. 2015, 36, 330–340. [Google Scholar] [CrossRef]
  32. Li, Q.; Lu, W.; Yang, J.; Wang, J.Z. Thin cloud detection of all-sky images using Markov random fields. IEEE Geosci. Remote Sens. Lett. 2011, 9, 417–421. [Google Scholar] [CrossRef]
  33. Sazzad, T.S.; Islam, S.; Mamun, M.M.R.K.; Hasan, M.Z. Establishment of an efficient color model from existing models for better gamma encoding in image processing. Int. J Image Process. (IJIP) 2013, 7, 90. [Google Scholar]
  34. Shimoji, N.; Aoyama, R.; Hasegawa, W. Spatial variability of correlated color temperature of lightning channels. Result. Phys. 2016, 6, 161–162. [Google Scholar] [CrossRef] [Green Version]
  35. Cazorla, A.; Shields, J.; Karr, M.; Olmo, F.; Burden, A.; Alados-Arboledas, L. Determination of aerosol optical properties by a calibrated sky imager. Atmos. Chem. Phys. 2009, 9, 6417–6427. [Google Scholar] [CrossRef] [Green Version]
  36. Hughes, C.; Denny, P.; Jones, E.; Glavin, M. Accuracy of fish-eye lens models. Appl. Opt. 2010, 49, 3338–3347. [Google Scholar] [CrossRef] [Green Version]
  37. Cłapa, J.; Błasiński, H.; Grabowski, K.; Sękalski, P. A fisheye distortion correction algorithm optimized for hardware implementations. In Proceedings of the 21st International Conference Mixed Design of Integrated Circuits and Systems (MIXDES), Lublin, Poland, 19–21 June 2014; pp. 415–419. [Google Scholar]
  38. Inanici, M. Evaluation of high dynamic range image-based sky models in lighting simulation. Leukos 2010, 7, 69–84. [Google Scholar] [CrossRef]
  39. Marquez, R.; Coimbra, C.F. Intra-hour DNI forecasting based on cloud tracking image analysis. Sol. Energy 2013, 91, 327–336. [Google Scholar] [CrossRef]
  40. Kyba, C.C.M.; Ruhtz, T.; Fischer, J.; Hölker, F. Red is the new black: How the colour of urban skyglow varies with cloud cover. Mon. Not. R. Astron. Soc. 2012, 425, 701–708. [Google Scholar] [CrossRef] [Green Version]
  41. Cauwerts, C.; Piderit, M.B. Application of High-Dynamic Range Imaging Techniques in Architecture: A Step toward High-Quality Daylit Interiors? J. Imaging 2018, 4, 19. [Google Scholar] [CrossRef] [Green Version]
  42. Yun, S.I.; Kim, K.S. Sky Luminance Measurements Using CCD Camera and Comparisons with Calculation Models for Predicting Indoor Illuminance. Sustainability 2018, 10, 1556. [Google Scholar] [CrossRef] [Green Version]
  43. Jechow, A.; Kolláth, Z.; Ribas, S.J.; Spoelstra, H.; Hölker, F.; Kyba, C.C. Imaging and mapping the impact of clouds on skyglow with all-sky photometry. Sci. Rep. 2017, 7, 1–10. [Google Scholar] [CrossRef] [PubMed]
  44. Ackerman, S.; Frey, R.; Strabala, K.; Liu, Y.; Gumley, L.; Baum, B.; Menzel, P. Discriminating Clear-Sky from Cloud with MODIS Algorithm Theoretical Basis Document (MOD35); ATBD-MOD-06, Version 6.1; Cooperative Institute for Meteorological Satellite Studies, University of Wisconsin—Madison: Madison, WI, USA, 2010; pp. 1–121. [Google Scholar]
  45. Lalonde, J.F.; Narasimhan, S.G.; Efros, A.A. What do the sun and the sky tell us about the camera? Int. J. Comput. Vis. 2010, 88, 24–51. [Google Scholar] [CrossRef]
  46. Alonso, J.; Batlles, F.J.; López, G.; Ternero, A. Sky camera imagery processing based on a sky classification using radiometric data. Energy 2014, 68, 599–608. [Google Scholar] [CrossRef]
  47. Feister, U.; Shields, J.; Karr, M.; Johnson, R.; Dehne, K.; Woldt, M. Ground-based cloud images and sky radiances in the visible and near infrared region from whole sky imager measurements. In Proceedings of the EUMETSAT Satellite Application Facility Workshop, German Weather Service and World Meteorological Organization, Dresden, German, 20–22 November 2000; pp. 79–88. [Google Scholar]
  48. Linfoot, A.; Alliss, R.J. A cloud detection algorithm applied to a whole sky imager instrument using neural networks. In Proceedings of the American Meteorological Society, 88th Annual Meeting, Chantilly, VA, USA, 20–25 January 2008; pp. 1–13. [Google Scholar]
  49. Dev, S.; Savoy, F.M.; Lee, Y.H.; Winkler, S. Design of low-cost, compact and weather-proof whole sky imagers for High-Dynamic-Range captures. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 5359–5362. [Google Scholar]
Figure 1. Exterior of Automatic Cloud Observation System (ACOS) (a) and its installation environment (b).
Figure 2. Images (2100 LST, 23 June 2019) before (a) and after (b) obstacle (green area) and distortion correction.
Figure 3. Flow chart of the nighttime cloud cover calculation algorithm of ACOS. The average red, green, and blue (RGB) brightness and red–blue ratio (RBR) are $\overline{RGB}$ and $\overline{RBR}$, respectively. The average luminance of the edge (SZA 60–80°) and center (SZA 0–20°) regions is $\overline{Y}_e$ and $\overline{Y}_c$, respectively. $\overline{RBR}_c$ is the average RBR of the center region. T is the RBR threshold.
Figure 4. Corrected images (a–c) and luminance (d–f) for each nighttime cloud cover calculation algorithm, and cloud cover classification (g–i) and RBR threshold (T). Green areas represent the mask, blue areas the sky, and red areas clouds. The average luminance of the edge (SZA 60–80°) and center (SZA 0–20°) regions is $\overline{Y}_e$ and $\overline{Y}_c$, respectively. $\overline{RBR}_c$ is the average RBR of the center region.
Figure 5. Tenth cloud cover scatterplot of ACOS and DROM for all the cases in 2019. The dotted line represents the 1:1 line and the solid line the regression line. R is the correlation coefficient. N is the number of cases.
Figure 6. Tenth cloud cover scatterplot of ACOS and DROM for each season (a)–(d) in 2019. The dotted line represents the 1:1 line and the solid line the regression line. N is the number of cases, and R is the correlation coefficient.
Figure 7. Distortion correction and RBR distribution images of the sunrise (a,c) and sunset (b,d) cases. For the sunrise case, the human observation cloud cover of DROM was 6 tenths while the cloud cover of ACOS was calculated to be 0 tenths. For the sunset case, the human observation cloud cover of DROM was 9 tenths while the cloud cover of ACOS was calculated to be 0 tenths.
Figure 8. Difference frequency distribution of DROM and ACOS by cloud cover for each season in 2019.
Table 1. Detailed specifications of ACOS.

Function               Description
Size                   264 mm (L) × 264 mm (W) × 250 mm (H), 6.5 kg
Pixels                 2432 × 2432
Focal length           8 mm, 180° fish-eye lens
Sensor                 CMOS
Aperture               F8 (daytime) – F11 (nighttime)
Shutter speed          1/1000 s (daytime) – 5 s (nighttime)
ISO                    100 (daytime) – 25,600 (nighttime)
Observation periods    24-h operation, hourly observation for 10 min
Etc.                   Automatic heating (below −2 °C), 24-h ventilation
Table 2. Conversion table between the ACOS cloud cover (%) and Daejeon Regional Office of Meteorology (DROM) human observation cloud cover (tenth).

%       ≤5    5–15   15–25   25–35   35–45   45–55   55–65   65–75   75–85   85–95   >95
Tenth    0     1      2       3       4       5       6       7       8       9      10
Table 3. Agreement frequency (%) according to the seasonal and annual cloud cover difference (Diff.) between ACOS and DROM in 2019.

Diff.         Spring   Summer   Fall    Winter   Annual
±0 tenths     49.12    31.23    48.51   59.27    46.82
±2 tenths     88.19    82.17    86.38   94.86    87.79
