Article

Mathematical Modeling and Accuracy Testing of WorldView-2 Level-1B Stereo Pairs without Ground Control Points

1 Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 610031, China
2 College of Earth Sciences, Chengdu University of Technology, Chengdu 610059, China
* Author to whom correspondence should be addressed.
Submission received: 23 June 2017 / Revised: 23 June 2017 / Accepted: 13 July 2017 / Published: 17 July 2017

Abstract

With very high resolution satellite (VHRS) imagery at 0.5 m, WorldView-2 (WV02) satellite images have been widely used in the field of surveying and mapping. However, detailed research on and explanation of the geometric orientation model specific to WV02 imagery are lacking. This paper elaborates the construction process of the WV02 satellite rigorous sensor model (RSM), which accounts for velocity aberration, optical path delay, and atmospheric refraction. We create a new physical inverse model based on a double-iterative method. Through this inverse method, we establish a virtual control grid in object space to calculate the rational function model (RFM) coefficients. In the RFM coefficient calculation, we apply both the correcting characteristic value method (CCVM) and the least squares (LS) method and compare their accuracies. We apply two stereo pairs of WV02 Level-1B products in Qinghai, China to verify the algorithm and test the image positioning accuracy. Without ground control, the monolithic horizontal root mean square error (RMSE) of the rational polynomial coefficient (RPC) model is 3.8 m. This result is 13.7% better than the positioning accuracy of the original RPCs provided by the commercial vendor. The stereo pair horizontal positioning accuracy of both the physical and RPC models is 5.0 m circular error at 90% confidence (CE90). This result is in accordance with the nominal positioning accuracy of WV02 satellite images. This paper provides a new method to improve the positioning accuracy of the WV02 satellite image RPC model without GCPs.


1. Introduction

Currently, the rapid technological development of very high resolution satellites (VHRS) has increased commercial high-resolution optical satellite imagery resolution from 1 m to 0.3 m, showing enormous value in the field of surveying and mapping. Using a VHRS image, one can quickly obtain fundamental geographic information, such as digital elevation models (DEM), digital orthophoto maps (DOM), and digital line graphics (DLG), which have become some of the primary products of data collection. Before using the image for various applications, we must first build the correct geometric model based on the imaging principle of the satellite sensor. Currently, optical VHRS often use linear charge-coupled device (CCD) push-broom sensors to obtain images. Satellite linear array push-broom imaging differs from aerial imaging, with special characteristics such as high flight altitude, a narrow photographic beam, and a small viewing angle [1]. There are two types of satellite geometric positioning models: the rigorous sensor model (RSM), also called the physical or camera model, and the rational function model (RFM).
Each scanning line of a linear array CCD image is a central projection, and the RSM reconstructs the photographic light beam based on this characteristic, establishing the collinearity condition and formulas [2] for geometric positioning. Toutin [3] proposed a 3D positioning method for a variety of sensors, such as the SPOT series, EROS A, QuickBird, and IKONOS, and has successfully tested this model, which has been integrated into commercial software (PCI Geomatics). Crespi and Giannone [4,5,6,7] developed a physical model that was integrated into the SISAR software package and successfully used for EROS A, QuickBird, and other satellite data. The BAE Systems software used by De Venecia et al. [8] applies a physical model to process OrbView-3 data. We can summarize the characteristics of these models as follows: all are based on collinearity formulas; all require the sensor parameters; the model parameters have physical meaning; and solving for all parameters requires a certain number of ground control points. Since each VHRS metadata format and definition is different, there is no uniform physical model.
After the IKONOS satellite launched in 1999, the RFM became the mainstream image-processing model. Early RFM research focused primarily on the structure of the rational function model, methods of solving for the rational polynomial coefficients (RPCs), and RFM precision. Tao and Hu [9,10,11] used aerial imagery, SPOT images, and IKONOS images to compare the collinearity formulas, two-dimensional and three-dimensional polynomials, perspective projection, and RFM correction methods. They found that the RFM correction method requires neither sensor information nor azimuth parameters, with precision only slightly worse than the collinearity formulas. Cheng and Toutin [12] compared a simple polynomial model, the rational function model, and the physical model on IKONOS Geo-level images; their experiments show that the positioning accuracy of the physical model is better than that of the other two. Yang's [13] experiments on SPOT images show that an RFM with a third-order, or even second-order, denominator can replace the physical model. Grodecki and Dial's [14,15,16] experiments on IKONOS images show that the RFM can be an alternative to the physical model, with accuracy reaching the sub-pixel level even under poor geometric conditions, verifying the RFM's capability for mapping and three-dimensional reconstruction in urban or mountainous areas.
Tao and Hu [9,10] applied the least squares principle to calculate the RPCs and found that correlation among the RPCs makes the normal matrix nearly singular. The regularization method can improve the condition number of the design matrix and avoid numerical instability in the least squares adjustment. They fit the physical model with different forms of the RFM and used least squares and ridge estimation to calculate the RPC parameters. The experimental results show that least squares estimation has the highest fitting accuracy, while ridge estimation can obtain RPC parameters with good structure (high-order terms close to zero) [17,18]. Qin et al. [19] proposed an algorithm for solving the RPC parameters based on a global DEM without initial values, using a physical model to calculate the RPC parameters; the algorithm has been successfully tested with SPOT imagery.
In October 2009, DigitalGlobe (DG) launched the second WorldView series satellite, WorldView-2 (WV02). WV02 performance improved significantly compared with WorldView-1, adding four standard multi-spectral bands (R, G, B, and NIR) and four new bands (Coastal, Yellow, Red Edge, and NIR2). DG was the world's first company to provide an eight-band multi-spectral commercial remote sensing satellite. WV02 operates at 770 km in a sun-synchronous orbit, with a panchromatic resolution of 0.46 m and a multi-spectral resolution of 1.85 m at nadir. The specified horizontal geolocation accuracy (CE90, circular error at 90% confidence) and vertical accuracy (LE90, linear error at 90% confidence) are both 5 m, corresponding to a root mean square error (RMSE) of 2.3 m [20]. This accuracy refers to direct sensor orientation without using any ground control points (GCPs), with an off-nadir angle of less than 30°, and excludes terrain effects [21]. Recent studies have shown that the actual positioning accuracy of the WV02 satellite, at about 1–2 m, is better than the declared accuracy. Readers can find more detailed information in Aguilar et al. [22,23] and Shean et al. [24].
The WV02 sensor uses a push-broom scanner, and the Basic Level-1B (L1B) image products provide two sets of orientation parameters, for a physical model and for the RFM; however, the commercial vendor does not disclose its specific method of applying the physical model. If users have an accurate physical model, the image metadata may be used to improve RPC accuracy without GCPs. Research on this aspect is currently inadequate. Therefore, it is necessary to invert the exact physical model according to the characteristics of the L1B image support document (ISD) and to explore methods of improving the RFM accuracy under uncontrolled conditions.
In this paper, we focus on the construction of the physical model of the L1B image of the WV02 satellite under uncontrolled conditions and the calculation method of the coefficient of the RFM, which includes three main aspects:
  • Based on the characteristics of the WV02 ISD file, we develop the precise physical model of the WV02 satellite.
  • We establish a new inverse transform method from the physical model, calculate the object virtual control points, and convert them into image coordinates.
  • To overcome the ill-conditioning of the normal equations when solving for the RPC coefficients, we iterate using the correcting characteristic value method and compare these results with the least squares results.

2. Description of the WV02 L1B Dataset and Study Area

2.1. Characteristics of the WV02 L1B Image Products

Due to CCD manufacturing size constraints, push-broom satellite detectors usually stitch together multiple CCDs to obtain a larger scan width. The WV02 satellite sensor uses time delay integration (TDI) technology. The panchromatic focal plane uses 50 panchromatic sub-sensors in the center of the detector, and the multi-spectral focal planes installed on both sides of the panchromatic focal plane each contain more than 10 multi-spectral sub-sensors [25]. According to photogrammetry theory, to determine the position of an object, one must first obtain the line of sight (LOS) direction, which involves the interior orientation (IO) and exterior orientation (EO) parameters. The relationship between each CCD probe and the sensor focal plane inside the camera, and the positional relationship between the satellite sensor and the satellite body, are obtained from ground and in-flight calibrations; that is, the orientation elements of the sensor are provided by a geometric calibration file (*.GEO). The exterior orientation elements of each image scan line are obtained by star tracker cameras and on-board GPS receivers and can be interpolated from the ephemeris (*.EPH) and attitude (*.ATT) files. By smoothing the ephemeris and attitude data and generating regular virtual scan lines, the original image can be resampled into an image conforming to an idealized imaging geometry. The process by which the satellite acquires the raw image and transforms it into the L1B image is shown in Figure 1. The L1B image has no map projection; only radiometric and sensor corrections are applied to the original image, making the L1B product the closest to the raw image. Because the satellite attitude and position change slowly, the LOS direction also changes slowly, so the ground sample distance (GSD) differs from pixel to pixel after resampling.
In fact, during image acquisition each CCD probe has a rigorous imaging relationship; resampling breaks the collinearity between the original image and the object points. Users must understand the correspondence between the original image and the L1B image, or the resampling method, to re-establish a rigorous L1B physical model. For example, SPOT imagery provides a conversion polynomial from the level-1A image to the level-1B image. However, DG provides neither the original image nor the algorithm used to establish its physical model. Therefore, it is necessary to model the WV02 satellite imaging geometry based on the content of the ISD files and the push-broom scanning rule.

2.2. Study Area and Dataset

The study area is located in western China, on the Qinghai-Tibet Plateau, on the northern slope of the Kunlun Mountains, and covers approximately 400 km² (Figure 2). The coordinates of the study area are in UTM zone 46N on the WGS84 ellipsoid. The terrain is complex and dangerous, with steep mountains, elevations between 3000 m and 4300 m, and relative height differences of 300–1000 m. The test area image data are two along-track stereo pairs of WV02 L1B products. Table 1 lists the image parameters.
To verify the image geolocation accuracy, we surveyed 75 ground control points by GPS in the study area and used them as independent checkpoints (ICPs). The ICPs are distinct, well-defined points, selected at locations such as road intersections, houses, or other independent objects, that can be accurately identified both in the images and on the ground. The ICP distribution is shown in Figure 3. The survey error of these control points is less than 5 cm in both the horizontal and vertical directions, and the image pointing error is less than 1 pixel (0.5 m).

3. The Physical Model of WV02

3.1. Calculation Principle

For linear array CCD sensors, the direct positioning algorithm and process of the rigorous geometric model are almost identical across sensors. We can describe the satellite sensor imaging process as a series of spatial coordinate transformations of the LOS direction to determine the location of a point in Earth-centered, Earth-fixed (ECEF) coordinates. Figure 4 shows the spatial coordinate transformation process.
The physical model recovers the LOS based on the principle of the collinearity formulas and supports two positioning methods: monolithic and stereoscopic. Monolithic positioning requires high-precision DEM support. The positioning process first calculates the line scan time from the image coordinates. Using this time value, the ephemeris and attitude data are interpolated to obtain the exterior orientation elements, which are combined with the sensor's interior orientation elements. Finally, the LOS direction is recovered to determine the intersection of the LOS with the Earth's ellipsoid surface. Figure 5a shows the positioning principle and the calculation process. In the direct positioning solution, one must read the basic motion vectors, attitude parameters, geometric calibration parameters, and other data from the image metadata files. These image support data are provided by DG for Basic imagery. The direct location process to calculate point P(r, c), using only a single image, is shown in Figure 5b.
We can summarize the general physical model for most linear array push-broom satellites as follows:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{WGS84} = \begin{bmatrix} X_{GPS} \\ Y_{GPS} \\ Z_{GPS} \end{bmatrix}_{WGS84} + R_{J2000}^{WGS84} R_{body}^{J2000} \left( \begin{bmatrix} D_X \\ D_Y \\ D_Z \end{bmatrix} + \begin{bmatrix} d_X \\ d_Y \\ d_Z \end{bmatrix} + \lambda R_{camera}^{body} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{camera} \right) \quad (1)$$

where $[X\ Y\ Z]_{WGS84}^T$ is P's ECEF (WGS84) coordinate and $[X_{GPS}\ Y_{GPS}\ Z_{GPS}]_{WGS84}^T$ is the GPS antenna phase center. $[d_X\ d_Y\ d_Z]^T$ is the origin translation from the camera coordinate system to the satellite body system, and $[X\ Y\ Z]_{camera}^T$ is the coordinate of the ground point P in the camera system. $R_{J2000}^{WGS84}$ is the rotation matrix from the J2000 coordinate system to ECEF (WGS84), $R_{body}^{J2000}$ is the rotation matrix from the satellite body system to the J2000 coordinate system, and $R_{camera}^{body}$ is the rotation matrix from the satellite camera system to the satellite body system.

3.2. Simplification of the Physical Model

Based on the characteristics of the WV02 image support file, we can further simplify Equation (1). First, in the camera system:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{camera} = R_{focal\_plane}^{detector} \begin{bmatrix} x_0 \\ -y_i + y_0 \\ -f \end{bmatrix} \quad (2)$$

where

$$R_{focal\_plane}^{detector} = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (3)$$

and $R_{focal\_plane}^{detector}$ is the rotation matrix from the linear array detector system to the camera system. The angle between the detector and the camera system z-axis is γ, the "detRotAngle" parameter in the GEO file. $[x_0\ \ -y_i + y_0\ \ -f]^T$ is the ground point P coordinate in the instantaneous image system, where $(x_0, y_0)$ is the offset between the center of the first pixel (0, 0) and the origin of the camera coordinate system. We can obtain these data from the GEO file. In Equation (1), $R_{J2000}^{WGS84} R_{body}^{J2000}$ represents the conversion from the satellite body system to the J2000 system and from the J2000 system to the WGS84 system. Then, we can write the following:
$$R_{J2000}^{WGS84} R_{body}^{J2000} = R_{body}^{WGS84} \quad (4)$$
After geometric correction and resampling of the WV02 raw image, we can consider the GPS antenna phase center coincident with the satellite body center, and the camera center coincident with the satellite body center. Therefore, $[D_X\ D_Y\ D_Z]^T = 0$ and $[d_X\ d_Y\ d_Z]^T = 0$, and we can simplify Equation (1) as the following:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{WGS84} = \begin{bmatrix} X_{GPS} \\ Y_{GPS} \\ Z_{GPS} \end{bmatrix}_{WGS84} + \lambda R_{body}^{WGS84} R_{camera}^{body} R_{detector}^{camera} \begin{bmatrix} x_0 \\ -y_i + y_0 \\ -f \end{bmatrix} \quad (5)$$
In Equation (5), $[X_{GPS}\ Y_{GPS}\ Z_{GPS}]_{WGS84}^T$ is obtained by interpolating the ephemeris file (*.EPH), and λ is the unsolved scaling factor. $R_{body}^{WGS84}$ is the rotation matrix from the satellite body system to the WGS84 system. We can construct this rotation matrix as Equation (6).
$$R = \begin{bmatrix} q_1^2 - q_2^2 - q_3^2 + q_4^2 & 2(q_1 q_2 - q_3 q_4) & 2(q_1 q_3 + q_2 q_4) \\ 2(q_1 q_2 + q_3 q_4) & -q_1^2 + q_2^2 - q_3^2 + q_4^2 & 2(q_2 q_3 - q_1 q_4) \\ 2(q_1 q_3 - q_2 q_4) & 2(q_2 q_3 + q_1 q_4) & -q_1^2 - q_2^2 + q_3^2 + q_4^2 \end{bmatrix} \quad (6)$$
where $(q_1, q_2, q_3, q_4)$ is the quaternion, which can be interpolated from the *.ATT file.
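To make the construction concrete, the following Python sketch builds the rotation matrix of Equation (6) from an interpolated attitude quaternion. The (q1, q2, q3, q4) ordering, with the scalar part last, is assumed to match the *.ATT file convention.

```python
import numpy as np

def quat_to_rotation(q1, q2, q3, q4):
    """Rotation matrix of Equation (6) from a unit quaternion.

    A minimal sketch assuming (q1, q2, q3) is the vector part and
    q4 the scalar part, as recorded in the *.ATT file.
    """
    return np.array([
        [q1*q1 - q2*q2 - q3*q3 + q4*q4, 2*(q1*q2 - q3*q4),              2*(q1*q3 + q2*q4)],
        [2*(q1*q2 + q3*q4),             -q1*q1 + q2*q2 - q3*q3 + q4*q4, 2*(q2*q3 - q1*q4)],
        [2*(q1*q3 - q2*q4),             2*(q2*q3 + q1*q4),              -q1*q1 - q2*q2 + q3*q3 + q4*q4],
    ])
```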
In Equation (5), the rotation matrix between the camera system and the body system is constructed from the image geometry correction file (*.GEO). This rotation matrix is stable for a long period after satellite calibration, so we can treat it as a constant matrix. Let the combined rotation matrix be $R = [R_{detector}^{camera}]^T [R_{camera}^{body}]^T [R_{body}^{WGS84}]^T$; then Equation (5) can be rewritten as Equation (7).
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{WGS84} = \begin{bmatrix} X_{GPS} \\ Y_{GPS} \\ Z_{GPS} \end{bmatrix}_{WGS84} + \lambda R^T \begin{bmatrix} x_0 \\ -y_i + y_0 \\ -f \end{bmatrix} \quad (7)$$
Let $P_E(t) = [X\ Y\ Z]_{WGS84}^T$ and $S_E(t) = [X_{GPS}\ Y_{GPS}\ Z_{GPS}]_{WGS84}^T$, and let

$$u = \begin{bmatrix} u_x \\ u_y \\ u_z \end{bmatrix} = R^T \begin{bmatrix} x_0 \\ -y_i + y_0 \\ -f \end{bmatrix} \quad (8)$$
We can then rewrite Equation (7) as follows:

$$P_E = S_E(t) + \lambda \bar{u}, \quad \bar{u} = u / \|u\| \quad (9)$$

where $P_E$ is the point location in object coordinates, $S_E(t)$ is the sensor projection center position at time t, λ is the scale factor, and $\bar{u}$ is the unit vector of the LOS. Further, we can express Equation (9) in component form as follows:
$$\begin{cases} X_E = X_E(t) + \lambda u_x \\ Y_E = Y_E(t) + \lambda u_y \\ Z_E = Z_E(t) + \lambda u_z \end{cases} \quad (10)$$
Consider the following Earth ellipsoid formula:

$$\frac{X_E^2 + Y_E^2}{(a+h)^2} + \frac{Z_E^2}{(b+h)^2} = 1 \quad (11)$$

where a = 6,378,137.0 m and b = 6,356,752.3 m are the semi-major and semi-minor axes of the WGS84 reference ellipsoid, and h is the point's ellipsoidal height. We can calculate λ from Equations (10) and (11) and then obtain the ground coordinates from Equation (10). Using DEM data (such as SRTM data), we can obtain the elevation value h. The solution is an iterative process (see the "SPOT 123-4-5 Geometry Handbook" [26] for details).
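The iteration can be sketched as follows in Python. The quadratic in λ is obtained by substituting Equation (10) into Equation (11); the dem_height lookup is a hypothetical stand-in for interpolating SRTM at the current ground point.

```python
import numpy as np

A_WGS84 = 6378137.0   # semi-major axis (m)
B_WGS84 = 6356752.3   # semi-minor axis (m)

def intersect_ellipsoid(S, u, dem_height, h0=0.0, tol=0.1, max_iter=20):
    """Intersect the LOS with the height-augmented WGS84 ellipsoid,
    Equations (10) and (11). A minimal sketch of the iterative
    solution described in the SPOT geometry handbook [26].

    S           sensor position in ECEF (3-vector, m)
    u           unit LOS vector in ECEF
    dem_height  callable returning the ellipsoidal height (m) at an
                ECEF point -- an assumed stand-in for the SRTM lookup
    """
    h, P = h0, None
    for _ in range(max_iter):
        ae, be = A_WGS84 + h, B_WGS84 + h
        # quadratic in the scale factor lambda: a2*l^2 + a1*l + a0 = 0
        a2 = (u[0]**2 + u[1]**2) / ae**2 + u[2]**2 / be**2
        a1 = 2.0 * ((S[0]*u[0] + S[1]*u[1]) / ae**2 + S[2]*u[2] / be**2)
        a0 = (S[0]**2 + S[1]**2) / ae**2 + S[2]**2 / be**2 - 1.0
        lam = (-a1 - np.sqrt(a1*a1 - 4.0*a2*a0)) / (2.0 * a2)  # near root
        P = S + lam * u
        h_new = dem_height(P)
        if abs(h_new - h) < tol:   # elevation has converged
            break
        h = h_new
    return P
```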

3.3. Exterior Orientation (EO) Parameter Interpolation

For the model in Section 3.1, we must interpolate the ephemeris and quaternion data using the following methods.

3.3.1. Ephemeris Interpolation

The precision ephemeris file records the spatial position of the satellite's GPS receiver at a certain frequency (50 Hz for WV02). To obtain the satellite position at any given time, we must interpolate or fit the ephemeris data. Commonly used mathematical methods include linear interpolation, Lagrangian polynomial interpolation, cubic spline interpolation, Chebyshev fitting, and general polynomial fitting. Because remote sensing satellite platforms move smoothly in orbit, these interpolation or fitting methods can normally achieve high accuracy. In this paper, we use Chebyshev fitting [27] for the satellite position and velocity; its mathematical model is described below.
Assume the satellite ephemeris data in the time interval $[t_0, t_0 + \Delta t]$ can be fitted by an nth-order Chebyshev polynomial, where $t_0$ is the start epoch and $\Delta t$ is the fitting time interval. First, convert the variable $t \in [t_0, t_0 + \Delta t]$ to $\tau \in [-1, +1]$ using $\tau = \frac{2}{\Delta t}(t - t_0) - 1$. We can then calculate the satellite location at any time t by using Equation (12).
$$\begin{cases} X(t) = \sum_{i=0}^{n} C_{X_i} T_i(\tau) \\ Y(t) = \sum_{i=0}^{n} C_{Y_i} T_i(\tau) \\ Z(t) = \sum_{i=0}^{n} C_{Z_i} T_i(\tau) \end{cases} \quad (12)$$
where n is the order of the Chebyshev polynomial, $C_{X_i}, C_{Y_i}, C_{Z_i}$ are the coefficients for each coordinate, and the Chebyshev polynomials $T_i(\tau)$ are defined as follows:

$$T_0(\tau) = 1, \quad T_1(\tau) = \tau, \quad T_n(\tau) = 2\tau T_{n-1}(\tau) - T_{n-2}(\tau); \quad |\tau| \le 1,\ n \ge 2 \quad (13)$$
Based on the ephemeris file, we take m (m > n + 1) satellite coordinates in the range $[t_0, t_0 + \Delta t]$ and use the least squares method to find the Chebyshev polynomial coefficients $C_{X_i}, C_{Y_i}, C_{Z_i}$ $(i = 0, 1, 2, \ldots, n)$. Then, we can calculate the satellite coordinates at any time by using Equation (12). The fitting method for the satellite's velocity is similar.
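As an illustration, the sketch below fits Chebyshev coefficients to ephemeris samples with NumPy and returns an evaluator for Equation (12); the polynomial order shown is an assumed value, not one prescribed by the ISD.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_ephemeris(t_samples, xyz_samples, t0, dt, order=5):
    """Least squares Chebyshev fit of ephemeris positions,
    Equations (12) and (13). A sketch; `order` is an assumption.

    t_samples    epochs inside [t0, t0 + dt], more than order + 1 of them
    xyz_samples  (m, 3) array of ECEF positions from the *.EPH file
    Returns a function giving the position at an arbitrary time t.
    """
    tau = 2.0 * (np.asarray(t_samples) - t0) / dt - 1.0  # map to [-1, +1]
    coeffs = C.chebfit(tau, xyz_samples, order)          # one column per axis

    def position(t):
        return C.chebval(2.0 * (t - t0) / dt - 1.0, coeffs)
    return position
```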

3.3.2. Attitude Quaternion Interpolation

Currently, the majority of satellite image providers use quaternions to express satellite attitude. Compared with Euler angles, quaternions have advantages such as faster calculation, reduced storage space, and freedom from singularities. The WV02 satellite attitude file provides attitude data recorded at a 50 Hz frequency, and we can interpolate or fit these data to obtain the attitude at any time. Quaternion interpolation methods include linear and spline interpolation. In this paper, we use the spherical linear interpolation (SLERP) method [28]. The vector rotation quaternion is defined mathematically as follows:
$$q = q_0 + q_1 i + q_2 j + q_3 k \quad (14)$$
Equation (15) gives the mathematical model of the spherical interpolation, which interpolates the rotation along the great circle between the two attitude samples:
$$q = \frac{\sin((1-t)\,\omega)}{\sin\omega} q_a + \frac{\sin(t\,\omega)}{\sin\omega} q_b \quad (15)$$
where ω is the angle between qa and qb, which can be calculated as follows:
$$\omega = \arccos(q_a \cdot q_b) \quad (16)$$
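A minimal Python sketch of SLERP between two attitude samples follows; the short-path test and the near-parallel fallback are standard numerical safeguards, not part of Equations (15) and (16).

```python
import numpy as np

def slerp(qa, qb, t):
    """Spherical linear interpolation between two unit quaternions,
    Equations (15) and (16). t is the normalized time in [0, 1].
    A minimal sketch.
    """
    qa, qb = np.asarray(qa, float), np.asarray(qb, float)
    dot = np.dot(qa, qb)
    if dot < 0.0:              # take the short path on the hypersphere
        qb, dot = -qb, -dot
    if dot > 0.9995:           # nearly parallel: fall back to lerp
        q = qa + t * (qb - qa)
        return q / np.linalg.norm(q)
    omega = np.arccos(dot)     # angle between the quaternions, Eq. (16)
    return (np.sin((1.0 - t) * omega) * qa
            + np.sin(t * omega) * qb) / np.sin(omega)
```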

3.4. WV02 Satellite Physical Model Refinement

Using the general physical model established in Section 3.2, we found the WV02 location error to be large during experiments. Through experimental analysis, we determined that the physical model of WV02 requires further refinement. The purpose of this refinement is to correct the direction of the LOS based on the ISD file provided by DG. The corrections comprise velocity aberration, optical path delay, and atmospheric refraction. The specific process is shown in Figure 6.

3.4.1. Velocity Aberration Correction

During scanning, the Earth's rotation causes the moving direction of the imaged ground point (the trajectory line) to differ from the sub-satellite track direction (the heading line), producing velocity aberration. The correction is calculated as follows (a code sketch follows the list):
  • The velocity vector at a ground point is:
    $$v_g = E \times P_E \quad (17)$$
    where E is the Earth's angular velocity vector, $E = [0\ \ 0\ \ \Omega_e]^T$, $\Omega_e = 2\pi / T$, and T = 86,164.0905 s.
  • Calculate the camera velocity relative to the ground point (its component orthogonal to the LOS produces the aberration):
    $$v = v_s - v_g \quad (18)$$
    where $v_s$ is the satellite camera velocity vector.
  • The LOS vector after velocity aberration correction is:
    $$u' = u - \frac{v}{v_{light}} \quad (19)$$
    where $v_{light} = 3 \times 10^8$ m/s.
  • The corrected LOS vector is normalized to a unit vector and incorporated into the original physical model (Equation (8)), and the point coordinates are recalculated:
    $$P_E = S_E(t) + \lambda \bar{u}' \quad (20)$$
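The sketch below applies Equations (17)–(19) in Python; the inputs are assumed to be expressed in ECEF coordinates.

```python
import numpy as np

OMEGA_E = 2.0 * np.pi / 86164.0905   # Earth rotation rate (rad/s)
V_LIGHT = 3.0e8                      # speed of light (m/s)

def correct_velocity_aberration(u, v_sat, P_E):
    """Velocity aberration correction of Equations (17)-(19).

    u      unit LOS vector in ECEF
    v_sat  satellite camera velocity in ECEF (m/s)
    P_E    initial ground point in ECEF (m)
    Returns the corrected unit LOS vector. A sketch of the steps above.
    """
    E = np.array([0.0, 0.0, OMEGA_E])
    v_g = np.cross(E, P_E)            # ground point velocity, Eq. (17)
    v = v_sat - v_g                   # relative velocity, Eq. (18)
    u_corr = u - v / V_LIGHT          # aberration-corrected LOS, Eq. (19)
    return u_corr / np.linalg.norm(u_corr)
```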

3.4.2. Optical Path Delay Correction

The WV02 satellite operates at an altitude of approximately 770 km, so there is a delay before light reflected from the ground point reaches the satellite sensor. This time delay introduces some error into the calculated ground point coordinates. The optical path delay correction is calculated as follows:
$$P_E = \begin{pmatrix} \cos(\Delta a) & \sin(\Delta a) & 0 \\ -\sin(\Delta a) & \cos(\Delta a) & 0 \\ 0 & 0 & 1 \end{pmatrix} P_E \quad (21)$$
where Δ a is the angle of rotation of the Earth when the ground point reflected light reaches the satellite sensor. We can calculate Δ a as follows:
$$\Delta a = \frac{|P_S - P_E|}{v_{light}} \Omega_e \quad (22)$$
where $P_S$ is the satellite's position vector at that moment and $\Omega_e$ is the Earth's rotation angular velocity.
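A minimal sketch of Equations (21) and (22) in Python follows; the sign of the z-axis rotation is chosen so that the Earth's rotation during the light travel time is removed, and this convention is an assumption.

```python
import numpy as np

def correct_path_delay(P_E, P_S,
                       omega_e=2.0 * np.pi / 86164.0905, v_light=3.0e8):
    """Optical path delay correction, Equations (21) and (22):
    rotate the ground point by the angle the Earth turns while the
    light travels from the ground to the sensor. A minimal sketch;
    the rotation sign convention is an assumption.
    """
    da = np.linalg.norm(P_S - P_E) / v_light * omega_e   # Eq. (22)
    Rz = np.array([[ np.cos(da), np.sin(da), 0.0],
                   [-np.sin(da), np.cos(da), 0.0],
                   [ 0.0,        0.0,        1.0]])
    return Rz @ P_E                                      # Eq. (21)
```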

3.4.3. Atmospheric Refraction Correction

In Earth observation with optical remote sensing satellites, atmospheric refraction changes the direction of the LOS, resulting in a geometric deviation. If the off-nadir angle is small, we can ignore the impact of this refraction on positioning. However, modern optical remote sensing satellites are agile and capable of wide-angle side imaging; for large off-nadir angles, an accurate model of atmospheric refraction is needed. The atmospheric refraction correction can be calculated from two directions: from the satellite down along the LOS, or from the reflected light upward from the ground. Figure 7a shows the deviation caused by atmospheric refraction along the satellite's line of sight. Many researchers have studied the atmospheric refraction error of remote sensing satellites [29,30,31]. In this paper, we use the calculation method in [32] to compute the length variation d of the LOS vector projected onto the Earth's surface after atmospheric refraction. To facilitate later calculations, we fit an exponential function to the atmospheric refraction error d at different off-nadir angles, as shown in Figure 7b.
After the atmospheric refraction correction, the ground point coordinates are determined from Equation (23):

$$P_E = P_E - d \begin{bmatrix} \sin(a) \\ \cos(a) \\ 0 \end{bmatrix} \quad (23)$$

where a is the mean azimuth angle of the satellite.
Now, after the velocity aberration correction, the optical path delay correction, and the atmospheric refraction correction, we have created a refined WV02 satellite physical model.

4. RFM Modeling

The rational function model (RFM) uses ratios of polynomials to represent the mathematical correspondence between object points and image points; it is independent of the sensor type and does not require sensor parameters. Calculating the rational function coefficients requires a certain number of control points. According to how the control points are acquired, the calculation methods are divided into terrain-dependent and terrain-independent approaches. Generally, the terrain-independent method is used: a large number of virtual control points are generated to calculate the rational function coefficients. This method can achieve nearly the same precision as the rigorous sensor model [10,11,15,16].

4.1. Basic Principles

The rational function model relates the normalized object space coordinates (X, Y, Z) to the normalized image space coordinates (r, c) through ratios of polynomials. Apart from the polynomial coefficients and the point coordinates in object and image space, no other parameters are involved. The mathematical expression of the rational function model is as follows:
$$r_n = \frac{P_1(X_n, Y_n, Z_n)}{P_2(X_n, Y_n, Z_n)}, \quad c_n = \frac{P_3(X_n, Y_n, Z_n)}{P_4(X_n, Y_n, Z_n)} \quad (24)$$
In Equation (24), to overcome rounding errors caused by large differences in magnitude, it is necessary to normalize the coordinates in both the object and image spaces. By translating and scaling, we regularize the pixel coordinates (r, c) and the object coordinates (X, Y, Z) into the normalized coordinates $(r_n, c_n)$ and $(X_n, Y_n, Z_n)$, whose values lie in the range (−1.0, +1.0). The specific form of $P_i$ is as follows:
$$P_i(X, Y, Z) = \sum_{i'=0}^{3}\sum_{j=0}^{3}\sum_{k=0}^{3} a_{i'jk} X^{i'} Y^{j} Z^{k} \ (i' + j + k \le 3) \\ = a_0 + a_1 X + a_2 Y + a_3 Z + a_4 XY + a_5 XZ + a_6 YZ + a_7 X^2 + a_8 Y^2 + a_9 Z^2 + a_{10} XYZ + a_{11} X^2 Y + a_{12} X^2 Z + a_{13} XY^2 + a_{14} Y^2 Z + a_{15} XZ^2 + a_{16} YZ^2 + a_{17} X^3 + a_{18} Y^3 + a_{19} Z^3, \quad i = 1, 2, 3, 4 \quad (25)$$

where $a_{ijk}$ are the rational polynomial coefficients (RPCs).
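For illustration, the following Python sketch evaluates Equations (24) and (25) for a normalized ground point; the coefficient ordering follows the expansion in Equation (25), and the dictionary keys are illustrative names rather than a standard RPC file layout.

```python
import numpy as np

def rfm_project(rpc, Xn, Yn, Zn):
    """Evaluate the rational function model, Equations (24) and (25).

    rpc is a dict with four 20-element coefficient arrays
    ('line_num', 'line_den', 'samp_num', 'samp_den'); input and output
    coordinates are already normalized to (-1, +1). A sketch with an
    assumed monomial ordering matching Equation (25).
    """
    m = np.array([1.0, Xn, Yn, Zn, Xn*Yn, Xn*Zn, Yn*Zn,
                  Xn**2, Yn**2, Zn**2, Xn*Yn*Zn,
                  Xn**2*Yn, Xn**2*Zn, Xn*Yn**2, Yn**2*Zn,
                  Xn*Zn**2, Yn*Zn**2, Xn**3, Yn**3, Zn**3])
    rn = np.dot(rpc['line_num'], m) / np.dot(rpc['line_den'], m)
    cn = np.dot(rpc['samp_num'], m) / np.dot(rpc['samp_den'], m)
    return rn, cn
```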

4.2. Physical Reverse Model

To calculate the RPCs, we must obtain the image coordinates corresponding to the object’s virtual control points. This calculation is achieved by an inverse transform model (Figure 8).
There are two primary methods to realize the inverse transformation: the iterative inverse of the collinear formula model and the direct inverse transformation based on the rational polynomial model. The former is suitable for traditional CCD linear array push-broom images where the integration time is fixed; the high-resolution images of the SPOT5 satellite are typical. This method must calculate the corresponding line scanning time by iteration, and then calculate the coordinates of the corresponding point in the image space. To restore the original image-line relationship, one must also know the level-1B image and the raw image conversion relationship. SPOT images provide a cubic polynomial model for calculating the level-1B image from the original image [26]. This method is computationally complex and inefficient.
If we knew the RPC inverse model coefficients, we could easily transform from object space to image space. For WV02 satellite imagery, DG provides neither an RFM-based inverse model nor the raw image, so neither of the above methods can be used. In this paper, we propose a two-layer iterative method to perform the inverse transformation of the image coordinates. This method is insensitive to the initial value of the iteration, and the calculation is simple and stable. The algorithm makes full use of the information in the WV02 image support files, and the most significant difference from existing methods is the use of a double iterative structure. The calculation steps are as follows (a code sketch follows the list):
  • We input the virtual control point's object coordinates (latitude, longitude, and height) and transform them into WGS84 Cartesian coordinates.
  • Based on the coordinates of the image's four corners, we establish a rough correspondence between object space and image space by a two-dimensional affine transformation, solving the transformation coefficients by least squares. We input the virtual control point object coordinates and find the initial image coordinates using Equation (26):
    $$\begin{cases} r = a_0 + a_1 X + a_2 Y \\ c = b_0 + b_1 X + b_2 Y \end{cases} \quad (26)$$
  • Based on the current image coordinates and the elevation, we use the physical model to calculate the coordinates in object space.
  • We calculate the object-space difference between the computed and target coordinates, use the pixel resolution to transform this difference into image space, update the image coordinates, and re-enter the physical model. If the difference is less than a given tolerance, we exit the inner loop.
  • We update the initial image coordinates and return to Step 4. If the difference is less than a given tolerance, we exit the outer loop.
  • We output the point coordinates in object and image space.
The process is shown in Figure 9.
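A loose Python sketch of the double iteration is given below. All function and parameter names are illustrative; direct_model stands for the monolithic positioning of Section 3, and the pixel-axis mapping and signs of the residual update are assumptions.

```python
import numpy as np

def inverse_project(P_obj, h, direct_model, affine, gsd,
                    tol=0.01, max_iter=50):
    """Two-layer iterative inverse transform (object space -> image space).

    P_obj        target planimetric object coordinates (X, Y), in meters
    h            elevation of the point
    direct_model callable (r, c, h) -> planimetric object coordinates,
                 i.e., the physical model of Section 3
    affine       coefficients (a0, a1, a2, b0, b1, b2) of Equation (26)
    gsd          approximate pixel resolution (m)
    A sketch with illustrative names, not DigitalGlobe's algorithm.
    """
    a0, a1, a2, b0, b1, b2 = affine
    # initial image coordinates from the 2D affine transform, Eq. (26)
    r = a0 + a1 * P_obj[0] + a2 * P_obj[1]
    c = b0 + b1 * P_obj[0] + b2 * P_obj[1]
    for _ in range(max_iter):                    # outer iteration
        for _ in range(max_iter):                # inner iteration
            d = P_obj - direct_model(r, c, h)    # object-space residual
            if np.linalg.norm(d) < tol * gsd:
                break
            r -= d[1] / gsd                      # residual mapped to pixels
            c += d[0] / gsd                      # (axis mapping assumed)
        if np.linalg.norm(P_obj - direct_model(r, c, h)) < tol * gsd:
            break                                # both loops converged
    return r, c
```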

4.3. Calculation of the RPC Coefficients

Currently, the least squares (LS) iterative method is the primary method for calculating the RPC parameters. Due to over-parameterization of the model, the normal equations become ill-conditioned, the RPC parameters are unstable, and the least squares solution may not converge; the result may be biased from the true values. The ridge estimation method proposed by Hoerl and Kennard in 1970 is considered one of the effective ways to solve ill-conditioned equations [33,34]. Tao and Hu used this method to calculate the RPC coefficients; their research shows that ridge estimation can achieve well-structured RPC parameters with high-order terms close to zero. The disadvantage of ridge estimation is that it changes the equivalence relation of the normal equations, so the estimates are biased.

4.3.1. Correcting Characteristic Value Method (CCVM)

Xinzhou et al. [35] proposed the iteration by correcting characteristic value, a method that maintains numerical stability when solving the equations without changing their equivalence relation, so the result is unbiased. In this paper, we use both the least squares and correcting characteristic value methods to calculate the RPC coefficients and compare their results. The solution by the correcting characteristic value method is described below.
The normal equation is as follows:

$$(A^T P A) X = A^T P L \quad (27)$$
If we add X to both sides of Equation (27), we obtain the following:

$$(A^T P A + E) X = A^T P L + X \quad (28)$$
where E is the n-order identity matrix. Since both sides of the equation contain the unknown parameter vector X, it can only be solved iteratively. The iterative equation is as follows:

$$x^{(n)} = (A^T P A + E)^{-1} (A^T P L + x^{(n-1)}) \quad (29)$$
where $x^{(n)}$ is the nth iterate of X in the correcting characteristic value iteration.
Adding the identity matrix E to $A^T P A$ yields $(A^T P A + E)$, which reduces the condition number and alleviates the ill-conditioning of the normal equations. For any initial value $x^{(0)}$:

$$\lim_{n \to \infty} x^{(n)} = (A^T P A)^{-1} (A^T P L) \quad (30)$$

This limit shows that the solution is an unbiased estimate.
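The iteration of Equations (28)–(30) is simple to implement; the following Python sketch solves the normal equations by CCVM, assuming a given design matrix A, weight matrix P, and observation vector L.

```python
import numpy as np

def solve_ccvm(A, L, P=None, tol=1e-10, max_iter=10000):
    """Correcting characteristic value iteration, Equations (28)-(30).

    Solves (A^T P A) x = A^T P L by iterating
    x_(n) = (A^T P A + E)^(-1) (A^T P L + x_(n-1)),
    which is numerically stable and unbiased. A minimal sketch.
    """
    if P is None:
        P = np.eye(A.shape[0])          # unit weights by default
    N = A.T @ P @ A                     # normal matrix
    W = A.T @ P @ L
    M = np.linalg.inv(N + np.eye(N.shape[0]))   # (A^T P A + E)^(-1)
    x = np.zeros(N.shape[0])
    for _ in range(max_iter):
        x_new = M @ (W + x)             # Eq. (29)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```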

4.3.2. RPC Coefficient Calculation Flow

To generate the virtual control points, we select a 10 × 10 planimetric grid and stratify the elevation into 10 layers; the virtual checkpoints use double the planimetric density. The RPC coefficient calculation process is shown in Figure 10.

5. Results

5.1. Physical Model Experiment

We use the algorithm from Section 3 to test the direct positioning accuracy of the WV02 L1B panchromatic monolithic images in the study area. We use all ground control points in the experimental area as independent checkpoints (ICPs) and adopt the UTM projection as the plane coordinate system. We use the root mean square error (RMSE) to represent the positioning errors as follows:
$$\begin{cases} RMSE_X = \sqrt{\dfrac{\sum (X_{image} - X_{ICP})^2}{n}} \\ RMSE_Y = \sqrt{\dfrac{\sum (Y_{image} - Y_{ICP})^2}{n}} \\ RMSE_r = \sqrt{RMSE_X^2 + RMSE_Y^2} \end{cases} \quad (31)$$
Four images’ direct geometric positioning errors are shown in Table 2. We use the stereo triangulation method to calculate the homonymous point coordinates from the cameras to the ground. Two stereo pair’s location errors are shown in Table 3.
We calculated the horizontal accuracy with a circular error at 90% confidence (CE90) and the height accuracy with a linear error at 90% probability (LE90) (see Equation (32)) [36]:

$$\begin{cases} RMSE_c = 0.5\,(RMSE_{min} + RMSE_{max}) \\ CE90 = 2.146 \cdot RMSE_c \\ LE90 = 1.644 \cdot RMSE_H \end{cases} \quad (32)$$
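For reference, a Python sketch of the accuracy measures in Equations (31) and (32) follows; the input arrays are the coordinate differences between the image-derived positions and the ICPs.

```python
import numpy as np

def accuracy_stats(dX, dY, dH=None):
    """Horizontal RMSE, CE90, and LE90 per Equations (31) and (32).

    dX, dY  horizontal coordinate differences, image - ICP (m)
    dH      optional height differences for stereo results (m)
    A sketch of the accuracy measures used in Tables 2-4.
    """
    rmse_x = np.sqrt(np.mean(np.square(dX)))
    rmse_y = np.sqrt(np.mean(np.square(dY)))
    rmse_r = np.hypot(rmse_x, rmse_y)                # Eq. (31)
    rmse_c = 0.5 * (min(rmse_x, rmse_y) + max(rmse_x, rmse_y))
    ce90 = 2.146 * rmse_c                            # Eq. (32)
    le90 = 1.644 * np.sqrt(np.mean(np.square(dH))) if dH is not None else None
    return rmse_x, rmse_y, rmse_r, ce90, le90
```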
The horizontal location errors of the four images and the two stereo pairs are shown in Figure 11.
As seen in Figure 11, the CE90 error of monolithic direct geometric positioning is 6.7 m, and the stereo pairs' CE90 error is 5.0 m. The absolute vertical accuracy of the stereo pairs at the ICPs is 3.0 m. These results are consistent with the WV02 nominal accuracy. The physical-model positioning errors are relatively stable, reflecting a systematic deviation caused primarily by the star tracker cameras and on-board GPS receivers; this error can be almost eliminated by using a few GCPs.

5.2. Rational Function Model Experiment

We use the two methods described in Section 4.3, LS and CCVM, to determine the RPC solution. A 10 × 10 virtual control point grid is used, with the elevation divided into 10 layers. We then use the calculated RPCs to compute the image coordinate errors at the ICPs. The results for the four images are shown in Table 4 (horizontal errors of the ICPs, in meters).
As seen in Table 4, the RFM positioning accuracy of the monolithic images is basically identical to that of the physical model. The positioning accuracy of the CCVM in three images is nearly identical to that of LS, but in one image (15JUL24042129) the CCVM accuracy is noticeably better. Specifically, if the design matrix is ill-posed during the calculation (i.e., it has a large condition number), the LS solution may show a certain bias, while the CCVM solution is unbiased. From the monolithic RFM location results, our RPC solution is slightly better than the DG RPCs in this experiment: the mean monolithic horizontal positioning error of the original RPC model is 4.4 m, whereas our method achieves 3.8 m, 13.7% better than the original RPC positioning provided by the imaging company. Additionally, we use the RFM-based stereo intersection method to calculate the positioning errors of the two stereo pairs (see Figure 12).
As seen in Figure 12, the single-image RPC positioning accuracy (CE90) is 6.2 m, slightly better than the RSM positioning accuracy of 6.7 m. The stereoscopic positioning accuracies of the RFM and the RSM are consistent at 5.0 m.

6. Discussion

6.1. The Influence of Elevation on the Accuracy of the Physical Model

The physical model algorithm in this paper requires a reference DEM, from which the elevation is iteratively interpolated. The accuracy of the elevation value therefore affects the accuracy of the physical model. To describe the effect of elevation error on horizontal positioning accuracy, we calculated the change in projected coordinates caused by a 1 m elevation change while holding the image coordinates fixed, as shown in Table 5.
Table 5 shows that each 1 m elevation change causes a different horizontal positioning change in each image, ranging from 0.3 to 0.8 m. Therefore, we can first run the physical model with a rough DEM, obtain the RPC coefficients, extract a higher-resolution DEM, and recalculate the physical model. This iterative process stops when the elevation value changes by less than a specified amount.

6.2. RFM Fit Analysis

To assess the precision of the RFM fit to the RSM, the conventional approach is to generate virtual checkpoints and calculate the error between the two models. In this paper, the virtual checkpoints are generated at a density of 20 × 20 × 10. The results of the accuracy check are shown in Table 6 and Figure 13.
In Table 6, it can be seen that in all four images the CCVM accuracy is slightly better than that of LS. In the 15JUL24042129 image, this difference is especially obvious.
Although the error between the RFM and the RSM is small when checked with virtual control points, it is better to check the accuracy with actual control points. This also indicates that the RPC model may show good precision with evenly distributed checkpoints, while its accuracy may fluctuate when checked with irregularly distributed control points.
In the case of a large number of checkpoints, such as homonymous points obtained by computer stereo matching, this difference becomes insignificant, and we can consider the physical model and RFM accuracies to be almost identical.

7. Conclusions

Based on the characteristics of the WV02 satellite ISD file, this paper elaborates the construction of the WV02 satellite image geometric orientation using the RSM and RPC models. In constructing the RSM, we considered the influence of velocity aberration, optical path delay, and atmospheric refraction in the satellite imaging process, and established a new physical inverse model. The inverse model differs from existing models in performing two nested iterations, in object space and in image space. Through this inverse model, we can convert point positions from object space into image space; in this manner, a virtual control grid is established in object space to calculate the RPC coefficients. Two methods, CCVM and LS, are compared in the RPC coefficient calculation. The CCVM provides basically the same accuracy as LS, but it is more stable and, in ill-conditioned cases, more accurate. Under uncontrolled conditions, the monolithic horizontal RMSE of the RPC model is 3.8 m, and the stereo pairs' horizontal positioning accuracy of both the physical and RPC models is 5.0 m (CE90). The RPCs solved in this paper achieve better location accuracy than the original RPCs provided by the commercial vendor. Although this work focuses on WorldView-2 data, VHR stereo pairs collected by other sensors, such as IKONOS, QuickBird-2, and GeoEye-1, can also be analyzed using the methods described in this paper.

Acknowledgments

This work was financially supported by the Educational Commission of the Sichuan Province of China (grants No. 17ZA0031 and 16ZA0091) and the Sichuan Excellent Engineer Education Program (grant No. 13Z00208). The authors would like to thank the DigitalGlobe Company (Westminster, CO, USA) for providing the WV02 datasets and the Qinghai Tianfeng Mining Co., Ltd. (Golmud, the Qinghai Province of China) for assistance with the field survey work.

Author Contributions

J.Y. implemented the methodology and organized the manuscript. X.L. helped realize the initial experiments and provided valuable suggestions for the revision. T.X. assisted with the field survey work and created the illustrations. All authors have read and approved the final manuscript and agreed to be listed as authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Okamoto, A. Orientation theory of CCD line-scanner images. Int. Arch. Photogramm. Remote Sens. 1988, 27, 609–617. [Google Scholar]
  2. Light, D.L.; Brown, D.; Colvocoresses, A.; Doyle, F.; Davies, M.; Ellasal, A.; Junkins, J.; Manent, J.; McKenney, A.; Undrejka, R. Satellite photogrammetry. Man. Photogramm. 1980, 4, 883–977. [Google Scholar]
  3. Toutin, T. Review article: Geometric processing of remote sensing images: Models, algorithms and methods. Int. J. Remote Sens. 2004, 25, 1893–1924. [Google Scholar] [CrossRef]
  4. Giannone, F. A Rigorous Model for High Resolution Satellite Imagery Orientation. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.529.4356&rep=rep1&type=pdf (accessed on 23 June 2017).
  5. Crespi, M.; Giannone, F.; Poli, D. Analysis of Rigorous Orientation Models for Pushbroom Sensors. Applications with Quickbird. Available online: http://www.isprs.org/proceedings/xxxvi/part1/papers/T02-07.pdf (accessed on 23 June 2017).
  6. Crespi, M.; Fratarcangeli, F.; Giannone, F.; Pieralice, F. Sisar: A Rigorous Orientation Model for Synchronous and Asynchronous Pushbroom Sensors Imagery. Available online: http://www.isprs.org/proceedings/XXXVI/1-W51/paper/crespi_etal.pdf (accessed on 23 June 2017).
  7. Crespi, M.; Fratarcangeli, F.; Giannone, F.; Pieralice, F. A new rigorous model for high-resolution satellite imagery orientation: Application to EROS A and QuickBird. Int. J. Remote Sens. 2012, 33, 2321–2354. [Google Scholar] [CrossRef]
  8. De Venecia, K.J.; Paderes, F., Jr.; Walker, A.S. Rigorous sensor modeling and triangulation for OrbView-3. In Proceedings of the ASPRS Annual Conference, Reno, NV, USA, 1–5 May 2006. [Google Scholar]
  9. Tao, C.V.; Yong, H. 3D reconstruction methods based on the rational function model. Photogramm. Eng. Remote Sens. 2002, 68, 705–714. [Google Scholar]
  10. Tao, C.V.; Hu, Y. A comprehensive study of the rational function model for photogrammetric processing. Photogramm. Eng. Remote Sens. 2001, 67, 1347–1358. [Google Scholar]
  11. Tao, C.V.; Hu, Y. Use of the rational function model for image rectification. Can. J. Remote Sens. 2001, 27, 593–602. [Google Scholar] [CrossRef]
  12. Cheng, P.; Toutin, T. Ortho rectification and DEM generation from high resolution satellite data. In Proceedings of the 22nd Asian Conference on Remote Sensing, Singapore, 5–9 November 2001; p. 9. [Google Scholar]
  13. Yang, X. Accuracy of rational function approximation in photogrammetry. In Proceedings of the ASPRS Annual Conference, Taipei, Taiwan, 4–8 December 2000; pp. 22–26. [Google Scholar]
  14. Grodecki, J.; Dial, G. Block adjustment of high-resolution satellite images described by rational polynomials. Photogramm. Eng. Remote Sens. 2003, 69, 59–68. [Google Scholar] [CrossRef]
  15. Dial, G.; Grodecki, J. Block adjustment with rational polynomial camera models. In Proceedings of the ASPRS 2002 Conference, Washington, DC, USA, 19–26 April 2002; pp. 22–26. [Google Scholar]
  16. Dial, G.; Grodecki, J. RPC Replacement Camera Models. Available online: http://128.46.154.21/jshan/proceedings/asprs2005/Files/0031.pdf (accessed on 23 June 2017).
  17. Hu, Y.; Tao, V.; Croitoru, A. Understanding the Rational Function Model: Methods and Applications. Available online: http://www.isprs.org/proceedings/XXXV/congress/comm4/papers/423.pdf (accessed on 23 June 2017).
  18. Hu, Y.; Tao, C.V. Updating solutions of the rational function model using additional control information. Photogramm. Eng. Remote Sens. 2002, 68, 715–724. [Google Scholar]
  19. Xuwen, Q.; Shufang, T.; Youtang, H.; Guo, Z. The algorithm for parameters of RPC model without initial value. Remote Sens. Land Resour. 2005, 4, 7–11. [Google Scholar]
  20. DigitalGlobe. WorldView-2 Spacecraft Information and Specifications. Available online: https://www.digitalglobe.com/resources/satellite-information (accessed on 9 March 2017).
  21. DigitalGlobe. Accuracy of WorldView Products. Available online: https://dg-cms-uploads-production.s3.amazonaws.com (accessed on 9 March 2017).
  22. Aguilar, M.A.; del Mar Saldaña, M.; Aguilar, F.J. Generation and quality assessment of stereo-extracted DSM from GeoEye-1 and WorldView-2 imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1259–1271. [Google Scholar] [CrossRef]
  23. Aguilar, M.A.; del Mar Saldaña, M.; Aguilar, F.J. Assessing geometric accuracy of the orthorectification process from GeoEye-1 and WorldView-2 panchromatic images. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 427–435. [Google Scholar] [CrossRef]
  24. Shean, D.E.; Alexandrov, O.; Moratto, Z.M.; Smith, B.E.; Joughin, I.R.; Porter, C.; Morin, P. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery. ISPRS J. Photogramm. Remote Sens. 2016, 116, 101–117. [Google Scholar] [CrossRef]
  25. Updike, T.; Comp, C. Radiometric Use of WorldView-2 Imagery. Available online: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/104/Radiometric_Use_of_WorldView-2_Imagery.pdf (accessed on 9 March 2017).
  26. Riazanoff, S. Spot 123-4-5 Geometry Handbook; Tech Rep. GAEL-P135-DOC-001; GAEL Consultant: Champs-sur-Marne, France, 2004. [Google Scholar]
  27. Boyd, J.P. Chebyshev and Fourier Spectral Methods, 2nd ed.; Courier Corporation: North Chelmsford, MA, USA, 2001; pp. 19–57. [Google Scholar]
  28. Barrera, T.; Hast, A.; Bengtsson, E. Incremental spherical linear interpolation. In Proceedings of the Annual Special Theme-Environmental Visualization Conference (SIGRAD), Gold Coast, Australia, 17–22 May 2004; pp. 7–10. [Google Scholar]
  29. Noerdlinger, P.D. Atmospheric refraction effects in earth remote sensing. ISPRS J. Photogramm. Remote Sens. 1999, 54, 360–373. [Google Scholar] [CrossRef]
  30. Saastamoinen, J. Contributions to the theory of atmospheric refraction. Bull. Géod. (1946–1975) 1973, 107, 13–34. [Google Scholar] [CrossRef]
  31. Saastamoinen, J. Introduction to practical computation of astronomical refraction. Bull. Géod. (1946–1975) 1972, 106, 383–397. [Google Scholar] [CrossRef]
  32. Ming, Y.; Zhiyong, W.; Chengy, W.; Bingyang, Y. Atmosphere refraction effects in object locating for optical satellite remote sensing images. Acta Geod. Cartogr. Sin. 2015, 44, 995–1002. [Google Scholar]
  33. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  34. Hoerl, A.E.; Kennard, R.W. Ridge regression: Applications to nonorthogonal problems. Technometrics 1970, 12, 69–82. [Google Scholar] [CrossRef]
  35. Xinzhou, W.; Dingyou, L.; Qianyong, Z.; Hailan, H. The iteration by correcting characteristic value and its application in surveying data processing. J. Heilongjiang Inst. Technol. 2001, 15, 3–6. [Google Scholar]
  36. Greenwalt, C.R.; Shultz, M.E. Principles of Error Theory and Cartographic Applications; Aeronautical Chart And Information Center: St. Louis, MO, USA, 1962.
Figure 1. Raw and level-1B image sensor geometry: (a) raw image; and (b) using a virtual scan line to generate the level-1B image with smooth ephemeris and attitude.
Figure 2. Study area location and stereo pair composition (gray indicates the study area).
Figure 3. Ground control points (GCPs) distribution in the test area.
Figure 4. Spatial coordinate conversion process.
Figure 5. Physical sensor model geolocation: (a) the positioning principle with a single image; and (b) the physical sensor model direct location process.
Figure 6. WV02 satellite physical model refinement.
Figure 7. WV02 satellite physical model refinement: (a) the deviation of atmospheric refraction from the direction of the satellite's sight; and (b) an exponential function to fit the atmospheric refractive error d, where β is the mean off-nadir view angle of the satellite.
Figure 8. Physical direct and reverse models.
Figure 9. The physical reverse model.
Figure 10. The WV02 rational polynomial coefficient (RPC) generation workflow.
Figure 11. WV02 rigorous sensor model (RSM) location errors: (a) monolithic image; and (b) stereo pair's location errors.
Figure 12. WV02 rational function model (RFM) location errors: (a) single image; and (b) stereo pair.
Figure 13. The difference between the RSM and RFM geolocations of the virtual checkpoints: (a) correcting characteristic value method (CCVM); and (b) least squares (LS) (image is 15JUL24042129, with a height plane of 3408 m).
Table 1. Details of the WorldView-2 basic (L1B) stereo pairs in the study area.

| | 15JUL10043733- | 15JUL10043857- | 15JUL24042129- | 15JUL24042233- |
|---|---|---|---|---|
| Product Order ID | 054471964010_01 | 054471964010_01 | 054471964020_01 | 054471964020_01 |
| Generation Time (UTC) | 10 July 2015 T07:04:36 | 10 July 2015 T07:05:08 | 24 July 2015 T06:03:26 | 24 July 2015 T06:04:03 |
| Num Rows × Num Columns | 27,164 × 34,584 | 24,356 × 35,180 | 22,716 × 35,180 | 28,048 × 34,445 |
| Scan Direction | Forward | Forward | Forward | Forward |
| Mean Product GSD (m) | 0.516 | 0.553 | 0.649 | 0.518 |
| Satellite Azimuth Angle (°) | 350.000 | 210.500 | 40.300 | 97.400 |
| In Track View Angle (°) | 16.600 | −23.000 | 27.700 | 0.600 |
| Cross Track View Angle (°) | −6.500 | −8.700 | 18.800 | 17.600 |
| Off Nadir View Angle (°) | 17.800 | 24.500 | 33.100 | 17.600 |
| Cloud Cover (%) | 0.005 | 0.006 | 0.000 | 0.000 |
Table 2. Horizontal positioning error for single images.

| Scene Number | Number of ICPs | Mean Off Nadir View Angle (°) | RMSEX (m) | RMSEY (m) | RMSEr (m) |
|---|---|---|---|---|---|
| 15JUL10043733- | 49 | 17.8 | 6.3 | 1.6 | 6.5 |
| 15JUL10043857- | 46 | 24.5 | 1.6 | 1.9 | 2.5 |
| 15JUL24042129- | 48 | 33.1 | 4.1 | 3.8 | 5.6 |
| 15JUL24042233- | 38 | 17.6 | 1.7 | 1.3 | 2.2 |
Table 3. Horizontal positioning error of stereo pairs.

| Stereo Pair | Scene Number | Mean Off Nadir View Angle (°) | ICPs Num | RMSEX (m) | RMSEY (m) | RMSEr (m) |
|---|---|---|---|---|---|---|
| I | 15JUL10043733 + 15JUL10043857 | 17.8 + 24.5 | 40 | 3.4 | 1.7 | 3.9 |
| II | 15JUL24042233 + 15JUL24042129 | 17.6 + 33.1 | 31 | 1.4 | 2.2 | 2.6 |
Table 4. The rational function model (RFM) errors of the independent checkpoints (ICPs) (meters), for the correcting characteristic value method (CCVM), least squares (LS), and DigitalGlobe (DG) RPCs.

| Scene Number | CCVM RMSEX | CCVM RMSEY | CCVM RMSEr | LS RMSEX | LS RMSEY | LS RMSEr | DG RMSEX | DG RMSEY | DG RMSEr |
|---|---|---|---|---|---|---|---|---|---|
| 15JUL10043733- | 5.1 | 1.6 | 5.3 | 5.1 | 1.6 | 5.4 | 11.9 | 5.1 | 6.7 |
| 15JUL10043857- | 2.3 | 1.8 | 2.9 | 2.3 | 1.9 | 3.0 | 1.5 | 2.5 | 2.9 |
| 15JUL24042129- | 2.8 | 3.9 | 4.8 | 2.8 | 4.2 | 5.0 | 4.6 | 1.9 | 4.9 |
| 15JUL24042233- | 1.5 | 1.3 | 2.0 | 1.5 | 1.3 | 2.0 | 1.8 | 2.4 | 3.0 |
| Mean | 2.9 | 2.2 | 3.8 | 2.9 | 2.3 | 3.9 | 5.0 | 3.0 | 4.4 |
Table 5. Influence of elevation error on plane direct location.

| Scene Number | Elevation Change Value (m) | ∆x (m) | ∆y (m) | ∆d (m) |
|---|---|---|---|---|
| 15JUL10043733- | 1 | 0.35 | −0.06 | 0.36 |
| 15JUL10043857- | 1 | −0.46 | −0.27 | 0.53 |
| 15JUL24042129- | 1 | 0.58 | 0.48 | 0.75 |
| 15JUL24042233- | 1 | −0.05 | 0.34 | 0.34 |
Table 6. WV02 single image RFM fit errors (pixels).

| Scene Number | CCVM RMSErow | CCVM RMSEcol | CCVM Max Row Error | CCVM Max Col Error | LS RMSErow | LS RMSEcol | LS Max Row Error | LS Max Col Error |
|---|---|---|---|---|---|---|---|---|
| 15JUL10043733 | 0.117 | 0.104 | −0.297 | −0.257 | 0.118 | 0.104 | 0.299 | 0.257 |
| 15JUL10043857 | 0.076 | 0.142 | −0.176 | 0.336 | 0.075 | 0.145 | 0.175 | −0.346 |
| 15JUL24042129 | 0.092 | 0.069 | −0.336 | 1.276 | 0.092 | 0.147 | −0.336 | 6.444 |
| 15JUL24042233 | 0.087 | 0.168 | −0.199 | 0.446 | 0.087 | 0.181 | −0.198 | 0.473 |

Citation: Ye, J.; Lin, X.; Xu, T. Mathematical Modeling and Accuracy Testing of WorldView-2 Level-1B Stereo Pairs without Ground Control Points. Remote Sens. 2017, 9, 737. https://0-doi-org.brum.beds.ac.uk/10.3390/rs9070737