Article

On-Orbit Calibration of Installation Matrix between Remote Sensing Camera and Star Camera Based on Vector Angle Invariance

School of Instrumentation and Opto-electronic Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Submission received: 20 August 2020 / Revised: 26 September 2020 / Accepted: 1 October 2020 / Published: 4 October 2020
(This article belongs to the Section Remote Sensors)

Abstract: To achieve photogrammetry without ground control points (GCPs), precise measurement of the exterior orientation elements of the remote sensing camera is particularly important. Modern satellites carry a GPS receiver, so the line elements of the exterior orientation can be determined with centimeter-level accuracy. High-precision angle elements can be obtained through a star camera, which provides a direction reference in the inertial coordinate system from star images. Because of stress release during launch and changes in the thermal environment, the installation matrix between the two cameras varies and needs to be recalibrated. We therefore exploit the invariance of the vector angle cosines between remote sensing camera and star camera observations, which is independent of attitude, and batch-process long-term on-orbit data to calibrate the installation matrix accurately. This method not only removes the coupling between the attitude and the installation matrix, but also reduces the conversion error of multiple coordinate systems. In the simulation results, the geo-positioning accuracy in planimetry is markedly higher than that of the conventional method.

1. Introduction

High-resolution satellite images (HRSIs) are important sources of high-precision geospatial information. They are widely used in fields such as 3D shoreline extraction and coastal mapping, Digital Terrain Model (DTM) and Digital Surface Model (DSM) generation, and national topographic mapping. Many of these remote sensing applications require HRSIs with high accuracy [1]. HRSI reconstruction mainly consists of four steps: feature extraction, feature matching, exterior orientation element estimation, and final resampling [2,3,4]. High-precision estimation of the exterior orientation elements is the key to high-precision geo-positioning.
In geometric photogrammetry, once the image coordinates of the remote sensing image are determined, geo-positioning depends on the accuracy of the provided exterior orientation elements. The line elements are obtained from the GPS receiver; the angle elements can be obtained from the star camera or star sensor in the Earth-centered inertial coordinate system [5]. The angle elements of the remote sensing camera are then obtained from the installation relationship between the remote sensing camera and the star camera or star sensor (hereinafter called the installation matrix). Before launch, satellite designers measure the installation matrix on the ground. Because of stress release during launch and changes in the thermal environment, the installation matrix changes, which degrades geo-positioning, so it requires on-orbit calibration.
Early published works in this area typically used the remote sensing camera to obtain its exterior orientation elements through GCPs, with the attitude determination equipment providing the satellite's attitude; the installation matrix could then be solved [6]. After the launch of the SPOT-5 satellite in 2002, the French Space Center established an angle error model of the optical axes between the star sensor and the remote sensing camera. They used globally distributed test sites to calculate the boresight direction of the remote sensing camera and then computed the error of the installation matrix with the attitude and orbit control subsystem (AOCS). They further proposed an error model of the internal orientation elements fitted with a fifth-order polynomial. Finally, the positioning accuracy of a SPOT-5 single scene without GCPs reached 50 m [7]. This method established the basic on-orbit calibration process of the installation matrix, but it did not investigate the requirements on GCPs or the real orientation of each pixel [8]. Dial and Grodecki established a joint calibration of the IKONOS satellite based on the Field Angle Map (FAM) and the interlock angle between the optical axes of the remote sensing camera and the star sensor. The FAM solves the real orientation of each pixel in the CCDs. They then used knife-edge targets in a set of independent images to calibrate the interlock angle errors and computed the mean interlock angle correction. The accuracy reached 12 m (RMS) in planimetry and 10 m (RMS) in elevation without GCPs [9,10]. This analysis established the angle model and put forward requirements on the types of GCPs, making the identification and extraction of GCPs more accurate and visible. However, obtaining the exterior orientation elements of the remote sensing camera directly through GCPs leads to a strong correlation of the azimuth parameters, and hence to errors in the solution of the installation matrix [11].
To reduce the strong coupling of the azimuth parameters, later works reported methods based on Taylor expansion and iterative solution of the remote sensing camera's angle elements. These methods treat the angle elements as unknowns, expand them in a Taylor series at the center of the scene, and solve them iteratively in the rigorous imaging model. Yuan assumed a constant difference between the remote sensing image point direction, calculated from the image attitude, and the true direction from the satellite to the GCP; this constant difference is the deviation of the initial installation matrix. Through the line-of-sight consistency of a small number of GCPs in the geocentric rectangular and image coordinate systems, the deviation matrix can be calculated and then compensated. After compensating the installation matrix of the QuickBird satellite, the positioning accuracy without GCPs improved from 8.59 to 3.09 m [12]. Wang used a similar method that establishes several collinearity equations through the GCPs and calculates the installation matrix from the initial values of the ground calibration. This work was efficient because stable exterior orientation elements were obtained by iterating the internal and external parameter calculation, after which the installation matrix is calculated easily. Each GCP yields one installation matrix, and the optimal installation matrix is acquired by establishing an optimization objective equation. Finally, the RMS deviation reached 9.3 m [13]. Both methods solve the attitude of the remote sensing camera by Taylor expansion to reduce the dependence on the satellite attitude determination system, but they cannot avoid the coupling between the random error of the attitude and the installation matrix [14]. At the same time, because the satellite attitude determination system and the remote sensing camera operate asynchronously, the fitting and interpolation of the external parameters also introduce errors [15].
In this paper, an improved geo-positioning method for satellite imagery is proposed to meet the requirements of high-accuracy observation. The method has three advantages. First, we introduce the star camera, an optical payload on the remote sensing camera's platform. The star camera combines two functions: autonomous attitude measurement, like a star sensor, and acquisition of the exterior orientation elements of the remote sensing camera's current image. To avoid the fitting and interpolation errors caused by the asynchronous operation of the satellite attitude determination system and the remote sensing camera, the star camera and the remote sensing camera shoot simultaneously through a synchronization signal: the star camera provides the real-time attitude, from which the exterior orientation elements of the remote sensing camera are calculated through the installation matrix. In terms of camera models, we use the rigorous imaging model (for the sensor model of the multi-line array CCD, refer to Section 3). Second, unlike the many coordinate system transformations in previous algorithms, this method only involves simple conversions between the World Geodetic System 1984 (WGS84), the Earth-centered inertial reference frame (ECI, here J2000), the star camera, and the remote sensing camera coordinate systems. It does not need to consider the satellite motion in the orbital coordinate system. We only use the real-time GPS position and the synchronized image data of the remote sensing camera and star camera to complete high-precision geo-positioning. Third, unlike existing algorithms that need to calculate the attitude matrices of the star sensor and remote sensing camera themselves, the input calibration data consist only of star vectors and ground calibration target vectors (GCT vectors). Because the star camera captures a large number of star vectors, the dependence on the number of GCTs is reduced. For details, please see Section 2.
The remainder of this paper is organized as follows. A brief introduction of the method is given in Section 2. Section 3 describes the simulation process and the results of the algorithm in detail, including building the imaging model, preparing the simulation database, and verifying the simulation calculations. Section 4 presents the conclusions.

2. Algorithm

In this section, we propose an algorithm that retains the rigorous sensor model while reducing the coordinate system transformations and the number of GCTs demanded by traditional on-orbit calibration. The core of the method is this: when the remote sensing camera's exterior orientation elements are determined, the vector angles between the GCT vectors (a GCT is the projection of the corresponding image point on the ground) and the star vectors are rotation invariants. The installation matrix is then calculated by the double-vector attitude determination method and optimized in real time by on-orbit batch processing.
The steps of the method are as follows. The remote sensing image is acquired on a synchronization pulse, and the star image is acquired by the star camera at the same time. The correspondence between the characteristic points of the remote sensing image and the GCTs is obtained through feature extraction and recognition. Once the GCTs are located, the GCT vectors are obtained from the satellite position (the vectors point from the satellite body to the GCTs). Because the satellite's exterior orientation elements (position and attitude) are known, the vector angles between the GCT vectors and the star vectors can be calculated precisely. The GCT vectors in the remote sensing image coordinate system and the star vectors in the star camera coordinate system can also be calculated. So, the installation matrix can be calibrated from these invariants.
Figure 1 is a flow chart of the proposed method. Note that the method contains no estimation of the motion state of the remote sensing camera. When the position of the remote sensing camera is known, the coordinates of the ground points and of the identified stars are all accurately known, so $\hat{V}_i^{true} \cdot \hat{V}_j^{true}$ is a constant that depends only on the remote sensing camera position. If the installation matrix were known precisely, $\hat{W}_i^{o} \cdot \hat{W}_j$ would equal $\hat{V}_i^{true} \cdot \hat{V}_j^{true}$ in the ideal imaging model. However, because of the initial installation matrix error, the camera distortion residual, the model error, and other effects, the variable $\hat{W}_i^{o} \cdot \hat{W}_j$ is only approximately equal to the constant $\hat{V}_i^{true} \cdot \hat{V}_j^{true}$. The method of this paper relates the error of the installation matrix to an optimization equation composed of $\hat{W}_i^{o} \cdot \hat{W}_j$, $\hat{V}_i^{true} \cdot \hat{V}_j^{true}$, and the estimated errors of the input data. Finally, we optimize the error of the installation matrix using the coordinates of the stars and the corresponding GCTs.
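Before formalizing this, a minimal numerical sketch (illustrative vector values only, not from the paper) may help show why the angle cosine is attitude-independent: rotating both vectors by any common attitude matrix leaves their dot product unchanged.

```python
import numpy as np

def rotation_z(angle_rad):
    """Rotation matrix about the Z axis (any rotation works for this demo)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical unit vectors in a common frame: one GCT direction, one star direction.
v_gct = np.array([0.1, -0.2, -0.97]);  v_gct /= np.linalg.norm(v_gct)
v_star = np.array([0.3, 0.5, 0.81]);   v_star /= np.linalg.norm(v_star)

R = rotation_z(0.7)  # arbitrary attitude change applied to both vectors
# The angle cosine is invariant under a common rotation of both vectors.
print(np.dot(v_gct, v_star))
print(np.dot(R @ v_gct, R @ v_star))  # identical up to rounding
```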

2.1. Mathematical Model

To obtain the ground coordinates on Earth from the remote sensing image coordinates, a mathematical model is needed to describe the relationship between the two sets of coordinates. Each GCT provides a set of two collinearity condition equations derived from the image coordinates, the satellite position, and the GCT coordinates in the conventional inertial system (in the aerospace field, J2000 is usually used as the conventional inertial system, so it is hereinafter referred to as J2000) [16,17].
$$\begin{bmatrix} 0 \\ y - y_0 \\ -f \end{bmatrix} = \lambda M \begin{bmatrix} X - X_P \\ Y - Y_P \\ Z - Z_P \end{bmatrix} \tag{1}$$
The components of Equation (1) are as follows. $y$ is the image coordinate and $y_0$ is the principal point of the image (the point where the optical axis intersects the image plane). $M$ is an orthogonal rotation matrix representing the transformation from J2000 to the image sensor coordinate system. The origin of the remote sensing camera coordinate system is at the perspective center; the Y-axis is parallel to the detector array, the X-axis points in the pushbroom direction, and the Z-axis is perpendicular to the X-Y plane and points toward the observation target. $f$ is the focal length of the remote sensing camera, and $\lambda$ is a scale factor. $(X, Y, Z)$ are the ground point's coordinates in J2000. Ground points are usually given in a local geodetic datum but the calculation is in J2000, so the coordinate transformation takes three steps: first, correct the geoid height to separate the geoid and the ellipsoid; second, transform the coordinates from the geodetic to the geocentric coordinate system in the local datum; third, convert the geocentric coordinates from the local datum to WGS84. Finally, the WGS84 coordinates are transformed into the J2000 coordinate system. $(X_P, Y_P, Z_P)$ is the perspective center coordinate of the remote sensing camera in J2000.
The matrix $M$ changes with time $t$. It can be expressed as:
$$M = R_{CS} R_{SI} \tag{2}$$
where $R_{CS}$ is the installation matrix between the star camera and the remote sensing camera, and $R_{SI}$ is the rotation matrix from J2000 to the star camera coordinate system, also called the attitude matrix. $R_{SI}$ is given by the star camera, and $R_{CS}$ is obtained by ground calibration. Because of stress release during launch and changes in the working environment, $R_{CS}$ deviates from its initial value, so it needs to be calibrated on orbit.
The observed star vector of the star camera satisfies:
$$u = R_{SI} v \tag{3}$$
$u$ is the representation of the star vector in the star camera coordinate system, and $v$ is its representation in J2000, where
$$u = \frac{1}{\sqrt{\bar{x}^2 + \bar{y}^2 + f^2}} \begin{bmatrix} -\bar{x} \\ \bar{y} \\ f \end{bmatrix}, \qquad v = \begin{bmatrix} \cos\delta\cos\alpha \\ \cos\delta\sin\alpha \\ \sin\delta \end{bmatrix}$$
The right ascension and declination of the star in the star catalog are denoted by $\alpha$ and $\delta$. The above equation assumes an ideal imaging model; in practice there is lens distortion in star cameras, which gives Equation (4):
$$\begin{cases} X' = X + dx \\ Y' = Y + dy \end{cases} \tag{4}$$
$(X, Y)$ are the image coordinates of the target in the ideal imaging model, $(X', Y')$ are the actual imaging coordinates, and $(dx, dy)$ is the distortion value.
$$\begin{cases} dx = \bar{x}(q_1 r^2 + q_2 r^4) + \left[p_1(r^2 + 2\bar{x}^2) + 2p_2\bar{x}\bar{y}\right] \\ dy = \bar{y}(q_1 r^2 + q_2 r^4) + \left[p_2(r^2 + 2\bar{y}^2) + 2p_1\bar{x}\bar{y}\right] \end{cases} \tag{5}$$

$$\begin{cases} \bar{x} = X - X_0 \\ \bar{y} = Y - Y_0 \\ r^2 = \bar{x}^2 + \bar{y}^2 \end{cases} \tag{6}$$
$(X_0, Y_0)$ is the principal point of the star camera, $(q_1, q_2)$ are the radial distortion coefficients, and $(p_1, p_2)$ are the tangential distortion coefficients.
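As an illustration of Equations (3)–(6), the sketch below converts a star centroid to a unit star vector. The millimeter unit convention for the distortion coefficients, the first-order evaluation of the distortion at the measured coordinates, and the sign convention of the vector follow the reconstruction above and should all be treated as assumptions, not as the exact on-board processing.

```python
import numpy as np

# Hypothetical star camera calibration; numbers follow Table 2, but the
# millimeter unit convention for (q1, q2, p1, p2) is an assumption.
F_MM = 100.0                  # focal length (mm)
PIX_MM = 6.4e-3               # pixel size (mm)
X0, Y0 = 2560.0, 1920.0       # principal point (pixels)
Q1, Q2 = -9.30e-7, -4.25e-9   # radial distortion
P1, P2 = 2.29e-5, -2.96e-5    # tangential distortion

def star_unit_vector(x_pix, y_pix):
    """Centroid (pixels) -> unit star vector u, per Equations (3)-(6).

    To first order, the distortion (actual = ideal + distortion) is
    evaluated at the measured coordinates and subtracted.
    """
    x = (x_pix - X0) * PIX_MM          # offset from principal point (mm)
    y = (y_pix - Y0) * PIX_MM
    r2 = x * x + y * y
    dx = x * (Q1 * r2 + Q2 * r2**2) + P1 * (r2 + 2 * x * x) + 2 * P2 * x * y
    dy = y * (Q1 * r2 + Q2 * r2**2) + P2 * (r2 + 2 * y * y) + 2 * P1 * x * y
    x, y = x - dx, y - dy              # undistorted focal-plane coordinates
    u = np.array([-x, y, F_MM])        # sign convention of the equation above
    return u / np.linalg.norm(u)

print(star_unit_vector(2800.0, 2100.0))
```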

2.2. Equation Derivation of $R_{CS}$

The star vector in the coordinate system of the remote sensing camera can be expressed as $w$, with:
$$w = R_{CS} u = R_{CS} R_{SI} v \tag{7}$$
We use Equations (1)–(3) and (7) to calculate the vector angles between GCT vectors and star vectors in the same coordinate system.
To avoid the influence of the attitude, the angle cosine, which is independent of the attitude, is selected for the estimation. We derive the relationship between the measured parameters (the original vectors and the angle cosines) and explain why the attitude and the absolute installation matrix cannot be observed simultaneously. Based on the statistical characteristics of the measured parameters, the maximum likelihood estimate of the relative attitude is derived. Then the redundancy of the measured parameters is discussed, and a decomposition algorithm is used to obtain the maximum independent measurement subset.
The observed star vectors $\hat{U}_{i,k}$ in the star camera coordinate system are expressed as:
$$\hat{U}_{i,k} = \hat{U}_{i,k}^{true} + \Delta\hat{U}_{i,k} \tag{8}$$
$\hat{U}_{i,k}^{true}$ is the true value of the star vector and $\Delta\hat{U}_{i,k}$ is the measurement noise; $i = 1, 2, \ldots, n_k$ is the vector number and $k = 1, 2, \ldots, N$ is the time series index.
Similarly, the observation vectors $\hat{W}_{i,k}$ (GCT vectors, or star vectors after coordinate system conversion) in the remote sensing camera coordinate system can be expressed as:
$$\hat{W}_{i,k} = \hat{W}_{i,k}^{true} + \Delta\hat{W}_{i,k} \tag{9}$$
The same observation vectors in the remote sensing camera and star camera coordinate systems are related by:
$$\hat{U}_{i,k} = R_{CS}^T \hat{W}_{i,k} = R_{CS}^T \hat{W}_{i,k}^{true} + R_{CS}^T \Delta\hat{W}_{i,k} = \hat{U}_{i,k}^{true} + \Delta\hat{U}_{i,k} \tag{10}$$
In general, the true value of the installation matrix is not known; only the initial value $R_{CS}^{0}$ obtained by ground calibration before launch is available. Therefore, the error matrix is defined as:
$$R_{CS} = M R_{CS}^{0} \tag{11}$$
Define the error vector $\theta$, which is related to the error matrix $M$ by [18]:
$$M = e^{[\theta]} = I + \left(\frac{\sin|\theta|}{|\theta|}\right)[\theta] + \left(\frac{1 - \cos|\theta|}{|\theta|^2}\right)[\theta]^2 \tag{12}$$
Here $I$ is the $3 \times 3$ identity matrix, $e^{\{\cdot\}}$ denotes the matrix exponential series, and
$$[\theta] \equiv \begin{bmatrix} 0 & \theta_3 & -\theta_2 \\ -\theta_3 & 0 & \theta_1 \\ \theta_2 & -\theta_1 & 0 \end{bmatrix}$$
where $\theta_1$, $\theta_2$, $\theta_3$ are the structural error angles, which are usually small.
Since the angles are small, this equation simplifies to:
$$M = I + [\theta] + O(|\theta|^2) \approx I + [\theta] \tag{13}$$
So, we can rewrite Equation (10) in a first-order approximation:
$$\hat{U}_{i,k} = R_{CS}^{0\,T} M^T \hat{W}_{i,k}^{true} + \Delta\hat{U}_{i,k} \tag{14}$$
Let $\hat{V}_i$ be the reference vector, i.e., the expression of the observation vector of the remote sensing camera in J2000:
$$\hat{W}_{i,k}^{true} = R_{CI} \hat{V}_i^{true} = R_{CI} \hat{V}_i - R_{CI} \Delta\hat{V}_i \tag{15}$$
$R_{CI}$ is the rotation matrix from J2000 to the remote sensing camera coordinate system, and $\Delta\hat{V}_i$ is the reference vector error, assumed to be Gaussian white noise with zero mean and covariance $E\{\Delta\hat{V}_i \Delta\hat{V}_i^T\} = R_{\hat{V}_i}$.
Because the error of the reference vector is very small, it is generally negligible. Therefore, the relationship between the star vectors and the reference vectors is:
$$\hat{U}_{i,k} = R_{CS}^T R_{CI} \hat{V}_i + \Delta\hat{U}_{i,k} = R_{CS}^{0\,T} M^T R_{CI} \hat{V}_i + \Delta\hat{U}_{i,k} \tag{16}$$
As can be seen from the above equation, the star vectors $\hat{U}_{i,k}$ do not change under the transformations:
$$R_{CS} \rightarrow T R_{CS} \;\; \left(M R_{CS}^{0} \rightarrow T M R_{CS}^{0}\right), \qquad R_{CI} \rightarrow T R_{CI} \tag{17}$$
where $T$ is an arbitrary rotation matrix. Therefore, it is impossible to obtain the installation matrix deviation from the attitude measurement of the star camera alone: the sensor installation and the attitude cannot be estimated at the same time.
At this point, we introduce a new variable $\hat{W}_{i,k}^{o}$, the observation vector in the uncalibrated remote sensing camera coordinate system:
$$\hat{W}_{i,k}^{o} \equiv R_{CS}^{0} \hat{U}_{i,k} = M^T \hat{W}_{i,k} \approx M^T R_{CI} \hat{V}_{i,k} + \Delta\hat{W}_{i,k}^{o} \tag{18}$$
Expanding the $M$ matrix as $M \approx I + [\theta]$ gives:
$$\hat{W}_{i,k}^{o} \approx (I - [\theta]) \hat{W}_{i,k}^{true} + \Delta\hat{W}_{i,k}^{o} = \hat{W}_{i,k}^{true} + [\hat{W}_{i,k}^{true}]\theta + \Delta\hat{W}_{i,k}^{o} \tag{19}$$
This equation shows that the first-order approximation of $\hat{W}_{i,k}^{o}$ is still associated with the attitude through $\hat{W}_{i,k}^{true}$. Taking the dot product of two such vectors removes this attitude dependence:
$$\hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k} = \hat{V}_i^{true} \cdot \hat{V}_j^{true} + (\hat{W}_{i,k}^{true} \times \hat{W}_{j,k}^{true}) \cdot \theta + \hat{W}_{i,k}^{true} \cdot \Delta\hat{W}_{j,k} + \Delta\hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k}^{true} + \Delta\hat{W}_{i,k}^{o} \cdot \Delta\hat{W}_{j,k} \tag{20}$$
$\Delta\hat{W}_{i,k}^{o} \cdot \Delta\hat{W}_{j,k}$ is a second-order small term and is neglected. We therefore define $z_{ij,k}$ and $\Delta z_{ij,k}$:
$$z_{ij,k} \equiv \hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k} - \hat{V}_i^{true} \cdot \hat{V}_j^{true} = (\hat{W}_{i,k}^{true} \times \hat{W}_{j,k}^{true}) \cdot \theta + \Delta z_{ij,k} \approx (-\hat{W}_{i,k}^{o} \times \hat{W}_{j,k}) \cdot \theta + \Delta z_{ij,k} \tag{21}$$
$$\Delta z_{ij,k} = \hat{W}_{i,k}^{true} \cdot \Delta\hat{W}_{j,k} + \Delta\hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k}^{true} \approx \hat{W}_{i,k}^{o} \cdot \Delta\hat{W}_{j,k} + \Delta\hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k} \tag{22}$$
The detailed derivations of the measurement noise and covariance in this section are given in Appendix A.
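For concreteness, here is a sketch (hypothetical inputs) of how the attitude-independent measurements of Equation (21) and the corresponding rows of the sensitivity matrix could be assembled:

```python
import numpy as np

def build_zk_Hk(W_o, W_c, V_star_true, V_gct_true):
    """Form Z_k and H_k per Equation (21).

    W_o:  (ns, 3) star vectors rotated into the uncalibrated remote sensing
          camera frame (W^o = R_CS^0 U).
    W_c:  (nc, 3) GCT observation vectors in the remote sensing camera frame.
    V_star_true, V_gct_true: the same directions in J2000 (reference vectors).
    Returns Z_k of shape (ns*nc,) and H_k of shape (ns*nc, 3).
    """
    z_rows, h_rows = [], []
    for wo, vs in zip(W_o, V_star_true):
        for wc, vg in zip(W_c, V_gct_true):
            z_rows.append(np.dot(wo, wc) - np.dot(vs, vg))  # angle cosine error
            h_rows.append(-np.cross(wo, wc))                # (-W^o_i x W_j)^T
    return np.array(z_rows), np.array(h_rows)
```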

2.3. Calibration Method of θ

Assume that at time $k$ the star camera has $n_{s,k}$ observed vectors and the remote sensing camera has $n_{c,k}$ observed vectors, so the number of $z_{ij,k}$ is $n_{s,k} n_{c,k}$.
We build a measurement vector $Z_k$ with $n_{s,k} n_{c,k}$ elements:
$$Z_k \equiv [z_{11,k}, \ldots, z_{n_i n_j,k}]^T \tag{23}$$
So, we get the measurement equation:
$$Z_k = H_k \theta + \Delta Z_k \tag{24}$$
$H_k$ can be built from Equation (21), and $\Delta Z_k$ is a Gaussian white noise sequence with covariance $P_{Z_k}$.
Using the theory of maximum likelihood estimation [19,20], the negative log-likelihood function for the estimate of $\theta$ is:
$$J_\psi(\theta) = \frac{1}{2} \sum_{k=1}^{N} \left[ (Z_k - H_k\theta)^T P_{Z_k}^{-1} (Z_k - H_k\theta) + \log\det P_{Z_k} + \log (2\pi)^{n_{s,k} n_{c,k}} \right] \tag{25}$$
Minimizing the above equation gives the normal equations:
$$P_{\theta\theta}^{-1}\theta = \sum_{k=1}^{N} H_k^T P_{Z_k}^{-1} Z_k, \qquad P_{\theta\theta}^{-1} = \sum_{k=1}^{N} H_k^T P_{Z_k}^{-1} H_k \tag{26}$$
$P_{\theta\theta}$ is the covariance matrix of the estimation error. The relative conversion error can be estimated from the above equations.
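When the per-frame covariances $P_{Z_k}$ are invertible, the batch estimate of Equation (26) can be accumulated frame by frame, as in the sketch below (hypothetical inputs; the singular case is handled by the SVD truncation of Appendix B):

```python
import numpy as np

def batch_theta(H_list, Z_list, P_list):
    """Accumulate the normal equations of Equation (26) over N frames.

    H_list[k]: (m_k, 3) sensitivity matrix; Z_list[k]: (m_k,) measurements;
    P_list[k]: (m_k, m_k) measurement covariance (assumed invertible here).
    Returns the estimate theta (3,) and its covariance P_theta_theta (3, 3).
    """
    info = np.zeros((3, 3))   # running sum of H^T P^-1 H
    rhs = np.zeros(3)         # running sum of H^T P^-1 Z
    for H, Z, P in zip(H_list, Z_list, P_list):
        PiH = np.linalg.solve(P, H)   # P^-1 H without forming the inverse
        info += H.T @ PiH
        rhs += PiH.T @ Z
    P_tt = np.linalg.inv(info)
    return P_tt @ rhs, P_tt
```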
Details of the redundancy problem in the $\theta$ calculation, the decomposition algorithm, and the star vector measurement error are given in Appendix B.
To explain the method more clearly, the specific steps are:
① Obtain the measurement vectors $\hat{U}_{i,k}$ and the uncalibrated installation matrix $R_{CS}^{0}$, and compute $\hat{W}_{i,k}^{o}$ in the uncalibrated remote sensing camera coordinate system;
② Calculate the angle cosine errors $z_{ij,k}$;
③ For each time $k$, construct the $n_{s,k} n_{c,k}$-element measurement vector $Z_k$;
④ Calculate the measurement sensitivity matrix $H_k$, of dimension $n_{s,k} n_{c,k} \times 3$;
⑤ Calculate $R_{\hat{W}_{i,k}}$ and $G_{\hat{W}_{i,k}}$;
⑥ Calculate $B_k$, obtain $U_k$ and $S_k$ by singular value decomposition (SVD), and determine the number of retained singular values $l_{max}$;
⑦ Calculate $\zeta_k$ and $C_k$;
⑧ Preserve the first $l_{max}$ rows to get $\tilde{\zeta}_k$, $\tilde{C}_k$ and $\tilde{S}_k$;
⑨ Substitute the above variables into the measurement equation to get $\theta$ and $P_{\theta\theta}$;
⑩ Calculate the error matrix $M$ and the installation matrix.
Repeat steps ① to ⑩ until the error matrix approaches the identity matrix; generally, a single iteration is enough. The definitions of $B_k$, $U_k$, $S_k$, $\zeta_k$, $C_k$, etc. can be found in Appendix B.
In this paper, a unified estimation method for the installation matrix is proposed, which can estimate the error matrix $M$ of spacecraft sensors on orbit. First, a statistical model of the star camera measurement error is proposed, from which a set of attitude-independent measurements, the angle cosines of the observation vectors, is obtained. Based on these measurements, the error estimate is computed. To facilitate the numerical calculation, a decomposition algorithm is proposed.
It should be noted that the calculation of $z_{ij,k}$ involves the subtraction of two nearly equal quantities, so the result has fewer significant digits than the original vectors; when the difference is on the order of an arc-second, nearly six significant figures are lost. Therefore, double precision should be used in the calculations. In addition, only the first-order terms are retained in the linearization; the nonlinear error proportional to $|\theta|^2$ can be eliminated by iteration. Finally, because the mean of $z_{ij,k}$ is zero, gross errors can be removed by examining the variance.

3. Algorithm Simulation

3.1. Virtual CCD Sensor Model

In the actual data processing, we obtain images from four independent CCDs, each with a different rigorous sensor model. So many models are computationally expensive, so the four original CCD images are spliced into one virtual image, which avoids the effects of the internal and external errors of the original images (see Figure 2). The virtual CCD has the following advantages [21]:
There is no lens distortion: it is an ideal pinhole imaging model, which eliminates the lens distortion of the multiple CCDs;
The virtual CCD is a single line array located on the focal plane, which eliminates the nonlinear distortion, the splicing distortion, and the tilt distortion of the multiple CCDs;
In reality the pixel sizes of the real CCDs differ, while the virtual CCD pixel size is uniform and easy to calculate.
To reduce the location error of the virtual CCD, we use the internal orientation elements to calculate the direction angles $(\varphi_{x_i}, \varphi_{y_i})$ of each pixel $(x_i, y_i)$ on the ideal imaging surface. Repeating this step for all actual pixels, we then use the least squares method to find the best-fitting line as the position of the virtual CCD. Expressing the line as $y = ax + b$, where $a$ and $b$ are unknowns, the observation equation is:
$$V = Ax - L \tag{27}$$
where $A = \begin{bmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix}_{n \times 2}$, $L = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}_{n \times 1}$, $x = \begin{bmatrix} a \\ b \end{bmatrix}$, and $n$ is the number of pixels.
So, the ideal position is defined by:
$$x = (A^T A)^{-1} A^T L \tag{28}$$
$x_i \in [x_1, x_n]$, where $x_i$ is the uniformly distributed pixel position of the virtual CCD.
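For illustration, the fit of Equation (28) takes only a few lines (synthetic pixel positions; numpy's lstsq solves the same normal equations):

```python
import numpy as np

# Synthetic projected pixel positions of the real CCDs on the ideal focal plane.
x = np.linspace(0.0, 24576.0, 200)
y = 0.0003 * x + 1.5 + np.random.normal(0.0, 0.05, x.size)  # nearly a line

A = np.column_stack([x, np.ones_like(x)])       # design matrix [x 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # x = (A^T A)^-1 A^T L
print(f"virtual CCD line: y = {a:.6f} x + {b:.3f}")
```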
Because the real light direction changes and the reference elevation surface differs from the true terrain, an elevation error is introduced. In this paper, four TDI (Time Delay and Integration) CCDs are installed with a splicing reflector; the four CCDs are installed overlapping, and the splicing accuracy is better than 2 μm. The focal length of the remote sensing camera is set to 10 m and the along-orbit resolution is 0.5 m. A 1250 m elevation error causes about a 0.5-pixel offset on the image plane [22]. This shows that virtual imaging based on the average elevation reference plane can realize the seamless splicing of the four CCDs, which provides the theoretical basis for the following calculation.

3.2. Simulation Model

First, the orbit model is established to generate the six orbital elements $\omega_i$ (the exterior orientation elements) at a data frequency of 1 Hz [23].
Select N calibration points from the whole virtual image and record their image coordinates. According to the virtual CCD sensor model and Digital Elevation Model (DEM) data, we obtain the corresponding three-dimensional space coordinates of the ground points by iteration. At the same time, we record each selected calibration point's row number and calculate its exposure time $t_i$. For a linear pushbroom CCD, the exterior orientation elements and the camera attitude of each scan row change continuously. Therefore, if the satellite moves slowly and smoothly, the exterior orientation line elements can be obtained by polynomial interpolation or general polynomial fitting. Generally speaking, there is no significant difference between a second-order general polynomial and an eighth-order Lagrange polynomial interpolation, but given high-precision exterior orientation elements, the linear part and the high-frequency part of the systematic error can be separated by the second-order polynomial, which better reflects the actual physics [24].
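As a sketch of that fitting step (synthetic attitude samples, not simulation output), the second-order polynomial interpolation of one exterior orientation element at the exposure time of a scan row could look like:

```python
import numpy as np

t = np.linspace(0.0, 24.0, 25)            # 1 Hz attitude/orbit samples (s)
roll = 1e-4 * np.sin(0.5 * t) + 2e-5 * t  # synthetic, slowly varying roll (rad)

coeff = np.polyfit(t, roll, 2)            # second-order general polynomial
t_row = 7.37                              # exposure time of one scan row (s)
print(np.polyval(coeff, t_row))           # interpolated roll at that row
```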
The installation relationship of the star camera and the remote sensing camera is set as shown in Figure 3.
To facilitate the discussion, we define the satellite's Z-axis as the optical axis of the remote sensing camera. The theoretical quaternion of the star camera is calculated from the orbit data of the previous paragraph and the theoretical installation relationship in Figure 3. These quaternion data are used to compute the star image of the virtual star camera. The star image is represented with 0–255 gray levels, and Gaussian white noise with zero mean is added as image noise.

3.3. Simulation Verification

The simulations in this paper are run under different levels of noise. The calculations were implemented in MATLAB under Microsoft Windows on a quad-core i7 4.0 GHz PC. The geo-positioning error decomposition is shown in Table 1. The star camera and remote sensing camera parameters are shown in Table 2 and Table 3.
The distortion model of the star camera follows Equations (4)–(6). The radial distortion $(q_1, q_2)$ and tangential distortion $(p_1, p_2)$ are given to the first and second order. The distortion values follow the laboratory calibration results of the star camera. The remote sensing camera numbers its pixels from the starting point of the CCD. We define the principal point of the CCD as a distortion-free pixel; as the distance between a pixel and the principal point increases, the distortion also increases. The sampled pixels and corresponding distortion values are shown in Table 3. The orbit of the remote sensing satellite is sun-synchronous with an altitude of 530 km. The GPS positioning accuracy is 2 m, the vibration error of the satellite platform is 0.1″, the asynchronous error of the satellite clock is 0.1 ms, and the distortion calibration residual of the remote sensing camera is set to 0.3 pixels [25]. To simulate the on-orbit motion of the remote sensing camera more realistically, we use real on-orbit motion data of a remote sensing camera in the motion simulation.
Table 1 shows the decomposition of the simulated geo-positioning errors. Tables 2 and 3 show the parameters of the star camera and the remote sensing camera. The distortion column of the remote sensing camera gives the distortion of the sampled pixels at each distance, with the CCD starting point as the origin.
It is worth mentioning that in actual situations the pitch and roll angles of the remote sensing camera show an oscillating waveform rather than remaining static; see Figure 4. To show the attitude changes better, the zero position on the Y-axis is the mean angle. We display 24 s of observation data.
The triaxial deviation of the installation matrix between the star camera and the remote sensing camera is set to [−0.003, 0.002, 0.001] rad. By Equation (13), the deviation matrix is therefore:
$$M = \begin{bmatrix} 1 & 0.001 & -0.002 \\ -0.001 & 1 & -0.003 \\ 0.002 & 0.003 & 1 \end{bmatrix} \tag{29}$$
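A short sketch showing how this deviation matrix follows from the triaxial deviation through the first-order form of Equations (12) and (13), using the paper's skew convention (the sign pattern of $M$ above is reproduced by this construction):

```python
import numpy as np

def skew_paper(t):
    """The bracket operator of Equation (12): [theta] v = v x theta."""
    t1, t2, t3 = t
    return np.array([[0.0,  t3, -t2],
                     [-t3, 0.0,  t1],
                     [ t2, -t1, 0.0]])

theta = np.array([-0.003, 0.002, 0.001])  # triaxial deviation (rad)
M = np.eye(3) + skew_paper(theta)         # first-order form, Equation (13)
print(M)
# For small angles M is close to orthogonal: M @ M.T = I + O(|theta|^2).
print(np.linalg.norm(M @ M.T - np.eye(3)))
```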
We calculate the star information in the virtual field of view based on the attitude of the star camera obtained from the simulation, and use this star information to generate virtual star images. By adding Gaussian noise to the background of the virtual star image, the accuracy of the centroiding can be controlled.
For the star camera, a typical centroiding error is about 0.05 pixel; the centroiding accuracy of the star camera is $\sigma_s$ pixels with $\sigma_s \in [0.05, 0.10]$. The error of the selected GCTs in the remote sensing image follows a zero-mean Gaussian distribution with standard deviation $\sigma_c$ pixels, $\sigma_c \in [0.1, 1.0]$ [26]. Here we define one complete exposure period of the remote sensing camera as one frame.

3.3.1. Performance and Robustness of the Algorithm

In order to show the calculation error of the installation matrix clearly, in the following discussion we use the three-axis deviation angle, obtained from the Euler angle transformation of the deviation matrix in the Z-X-Y rotation order.
For example, when $\sigma_s = 0.06$ and $\sigma_c = 0.1$ pixel, Figure 5 shows the error between the calculated three-axis deviation angle and the true deviation angle, in arc-seconds. It can be clearly seen that after about 150 frames the X- and Y-axis errors tend to 0, and the estimation error of the rolling (Z) axis is about 0.2 arc-seconds. Even with large errors, such as $\sigma_s = 0.10$, $\sigma_c = 1.0$ pixel, the deviation angle estimate converges very quickly, and the estimation error of the rolling axis is about 0.5 arc-seconds. For the efficiency of the algorithm over 400 frames of data, convergence is declared when $S_k$ (Appendix B) falls below a set threshold; obviously, this is not the final convergence result of the algorithm. Because the Z-axis error is 5–10 times larger than the X- and Y-axis errors, the Z-axis error has not yet approached 0 when the X- and Y-axis errors satisfy the convergence condition. Therefore, Figure 5 and Figure 6 show that this method converges to the ideal result after 150 frames.
We test the robustness of the method under different centroiding errors of the star camera and the remote sensing camera, over $\sigma_s \in [0.05, 0.10]$ and $\sigma_c \in [0.1, 1.0]$ pixel. The triaxial error after convergence in each case is shown in Figure 7:
The three panels of Figure 7 show that the method calculates the installation matrix accurately and stably. Over the different centroiding errors of the star camera and the remote sensing camera, the mean triaxial estimation error is [0.0018, 0.0001, 0.2602] arc-seconds for the X, Y and Z axes, and the standard deviation of the estimation error is [0.0159, 0.0105, 0.0927] arc-seconds.

3.3.2. Comparison with Wang's Algorithm

Wang's on-orbit installation matrix calibration algorithm is introduced here for comparison [13]. The difference between the two algorithms is that Wang's algorithm needs the attitude from the star sensor and calculates the transformations among the star sensor, remote sensing camera, satellite, and satellite orbit coordinate systems. It obtains the motion matrix $R_{co}$, which represents each remote sensing image relative to the orbit coordinate system, through data iteration; the installation matrix is then solved easily. Unlike the method in this paper, Wang's method can only calculate the installation matrix independently for each image frame; it cannot directly improve its accuracy by increasing the number of frames.
Wang uses a third-order polynomial to fit the motion state of the remote sensing camera. The residuals of the three angles after fitting are shown in Figure 8. High-order polynomials cannot accurately describe the motion of the remote sensing camera; even increasing the polynomial order from 5th to 7th brings no significant improvement. This shows that in practical situations it is inaccurate to describe the time-varying state of a remote sensing camera with a polynomial. In addition, Wang's method solves the polynomials separately for each remote sensing image; as the polynomial order increases, the number of coefficients to be solved also increases, which requires more calibration points.
Figure 9 shows the deviation angle error calculated by Wang's algorithm under different error conditions.
Comparing Figure 9 with Figure 7, the deviation angle error of Wang's algorithm is larger than that of the method in this paper.
Table 4 shows that in the case of $\sigma_s = 0.07$, $\sigma_c = 0.5$ pixel, the three-axis angle error is reduced by a factor of 5–14, and in the case of $\sigma_s = 0.10$, $\sigma_c = 1.0$ pixel, by more than a factor of 10. We can also see that with the proposed algorithm the deviation angle error does not change significantly as the noise of the star camera and remote sensing camera increases. The reason is that we use a joint calculation over multi-frame data, so the random errors of the image points in the star camera and the remote sensing camera do not affect the calibration of the installation matrix. Since input error is added in the simulation, the X- and Y-axis errors fluctuate around 0. The Z-axis error is about 5–10 times that of the X- and Y-axes, which is related to the lower accuracy of the star camera about its Z (boresight) axis. The calculation error of the installation matrix directly degrades the final geo-positioning accuracy. Schematic diagrams of the geo-positioning of the two algorithms using the same verification points are shown in Figure 10.
Figure 10 shows that as the centroiding error of the star camera decreases, the geo-positioning error also decreases. When the centroiding accuracy of the star camera is 0.05 pixel, the attitude accuracy of the star camera is about 0.5″ (3σ).
Consider Table 5 together with Figure 10. With $\sigma_s = 0.05$, $\sigma_c = 0.3$ pixel, a total of seven real remote sensing images (6144 pixels × 26,000 pixels × 4 pieces) is used, and the star images are all generated by the virtual field-of-view calculation. In total, 20 ground test points are selected from one image, and the geo-positioning accuracy experiment is carried out with the algorithm. The geo-positioning RMSEs (root mean square errors) in planimetry are 4.32 m along the orbit direction and 3.54 m across it, better than the 5.96 m and 5.17 m obtained with Wang's method. Over all cases, the proposed method is about 2.5 m more accurate than Wang's method on average, which reflects the improved calculation of the installation matrix.
Furthermore, the reason for the large geo-positioning error of Wang's method is that each set, composed of one star image and one remote sensing image, is calculated independently, and one set provides only one installation matrix $R_{sc}$. This calculation leads to high coupling between $R_{co}$ and $R_{sc}$ (refer to Equation (17)). The accuracy of $R_{co}$ directly determines the final accuracy of $R_{sc}$. As Figure 8 shows, the $R_{co}$ model does not fit well, and the resulting residual error couples directly into $R_{sc}$.

4. Conclusions

In this paper, a novel on-orbit algorithm for installation matrix estimation is proposed. This method realizes accurate on-orbit calibration of the installation matrix: an ultra-high-precision star camera can be used as the measuring instrument for the exterior orientation elements, thereby realizing high-precision geo-positioning. The method has the following characteristics: (1) To realize high-precision Earth observation with a star camera, the installation matrix between the star camera and the remote sensing camera must be calculated first. When the remote sensing camera's exterior orientation elements are determined, the vector angle between the GCT vector and the star vector is shown to be a rotation invariant. (2) To handle the redundancy of the measurement vectors, we propose a batch processing algorithm based on SVD, which processes long-term data. This method is not sensitive to short-term data loss and is suitable for on-orbit use, making full use of multi-frame data to improve the accuracy of the installation matrix calculation.
This method makes full use of the information in the star vectors and GCT vectors, and it avoids the complicated compensation of the satellite's on-orbit motion, which impedes the estimation of the installation matrix. At the same time, the SVD-based batch processing algorithm enhances the on-orbit data processing ability and the robustness of the algorithm. The simulation results demonstrate that the method effectively improves the accuracy of the installation matrix estimate: over a wide range of sensor noise, the X- and Y-axis errors are on the order of 10−2 arc-seconds. We then achieve high-precision geo-positioning using this result. Compared with Wang's on-orbit algorithm, the ground geo-positioning accuracy is improved by more than 20%. The resulting algorithm needs only simple on-orbit data, has high calculation accuracy and strong robustness, and is suitable for on-orbit multi-frame joint calculation. It provides theoretical support for future photogrammetry without GCPs. Future work will gather more complete on-orbit data to verify the actual effect of the algorithm and apply it to remote sensing satellites to evaluate its long-term performance under on-orbit conditions.

Author Contributions

Y.T.: conceptualization, formal analysis, methodology, investigation, software, writing—original draft preparation, writing—review and editing. J.L.: data curation, validation, writing—conceptualization, funding acquisition, supervision, writing—review and editing. X.W.: investigation, validation, supervision, funding acquisition. Z.W.: conceptualization, writing—review and editing, supervision. G.W.: investigation, resources, data analysis and curation. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported by the National Natural Science Foundation of China under Grant (61705005), the Beijing Gold-Bridge Project under Grant (ZZ19019) and National Key Research and Development Program of China under Grant (2019YFA0706002).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Suppose $\Delta\hat{U}_{i,k}$ is Gaussian white noise with zero mean and covariance $R_{\hat{U}_{i,k}}$, i.e., $\Delta\hat{U}_{i,k} \sim N(0, R_{\hat{U}_{i,k}})$; note that different observation vectors are independent of each other:
$$E\{\Delta\hat{U}_{i,k} \Delta\hat{U}_{i',k'}^T\} = \delta_{ii'}\delta_{kk'} R_{\hat{U}_{i,k}} \tag{A1}$$
$E\{\cdot\}$ is the expectation operator. Since the observation vector is a unit vector, the first-order component of $\Delta\hat{U}_{i,k}$ is perpendicular to $\hat{U}_{i,k}$ [27]:
$$\Delta\hat{U}_{i,k} \cdot \hat{U}_{i,k} = 0 \tag{A2}$$
$R_{\hat{U}_{i,k}}$ is therefore a singular matrix, with:
$$R_{\hat{U}_{i,k}} \hat{U}_{i,k}^{true} = 0 \tag{A3}$$
The above three equations are first-order approximations. Generally, the covariance $R$ is small, and this approximation does not affect the estimation of the installation matrix.
Similarly to Equation (8), $\hat{W}_{i,k}^{true}$ is the true value of the observation vector and $\Delta\hat{W}_{i,k}$ is the measurement noise, assumed to be Gaussian white noise with zero mean and covariance $R_{\hat{W}_{i,k}}$, i.e., $\Delta\hat{W}_{i,k} \sim N(0, R_{\hat{W}_{i,k}})$:
$$E\{\Delta\hat{W}_{i,k} \Delta\hat{W}_{i',k'}^T\} = \delta_{ii'}\delta_{kk'} R_{\hat{W}_{i,k}}, \qquad \Delta\hat{W}_{i,k} \cdot \hat{W}_{i,k} = 0, \qquad R_{\hat{W}_{i,k}} \hat{W}_{i,k}^{true} = 0 \tag{A4}$$
It follows that $R_{\hat{U}_{i,k}} = R_{CS}^T R_{\hat{W}_{i,k}} R_{CS}$. For Equation (19), $\Delta\hat{W}_{i,k}^{o} \sim N(0, R_{\hat{W}_{i,k}^{o}})$ with $R_{\hat{W}_{i,k}^{o}} = R_{CS}^{0} R_{\hat{U}_{i,k}} R_{CS}^{0\,T}$.
For Equation (20), the derivation is:
$$\begin{aligned} \hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k} &= \hat{W}_{i,k}^{true} \cdot \hat{W}_{j,k}^{true} + (\hat{W}_{i,k}^{true} \times \hat{W}_{j,k}^{true}) \cdot \theta + \hat{W}_{i,k}^{true} \cdot \Delta\hat{W}_{j,k} + \Delta\hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k}^{true} + \Delta\hat{W}_{i,k}^{o} \cdot \Delta\hat{W}_{j,k} \\ &= \hat{V}_i^{true} \cdot \hat{V}_j^{true} + (\hat{W}_{i,k}^{true} \times \hat{W}_{j,k}^{true}) \cdot \theta + \hat{W}_{i,k}^{true} \cdot \Delta\hat{W}_{j,k} + \Delta\hat{W}_{i,k}^{o} \cdot \hat{W}_{j,k}^{true} + \Delta\hat{W}_{i,k}^{o} \cdot \Delta\hat{W}_{j,k} \end{aligned} \tag{A5}$$
For $z_{ij,k}$ and $\Delta z_{ij,k}$, the first-order expression of the angle cosine difference is independent of the attitude, and the variables satisfy the following constraints:
$$E\{\Delta z_{ij,k}\} = 0; \quad E\{\Delta z_{ij,k}^2\} = E_k(i|j|i) + E_k(j|i|j); \quad E\{\Delta z_{ij,k} \Delta z_{iu,k}\} = E_k(j|i|u); \quad E\{\Delta z_{ij,k} \Delta z_{uv,k}\} = 0, \;\; i, j, u, v \text{ all different} \tag{A6}$$
where $E_k(i|j|u) \equiv \hat{W}_{i,k}^{o\,T} R_{\hat{W}_{j,k}^{o}} \hat{W}_{u,k}^{o}$.

Appendix B

Appendix B.1. Redundancy Problem

The measurement vectors $z_{ij,k}$ are not all independent. Suppose the star camera has $n_{s,k}$ directly observed vectors and the remote sensing camera has $n_{c,k}$ directly observed vectors at time $k$, giving $2n_{s,k} + 2n_{c,k}$ degrees of freedom, while there are $n_{s,k} n_{c,k}$ combinations. Since the attitude has three degrees of freedom and the $z_{ij,k}$ are independent of the attitude, there are only $2n_{s,k} + 2n_{c,k} - 3$ independent $z_{ij,k}$. When $n_{s,k} n_{c,k} > 2n_{s,k} + 2n_{c,k} - 3$, the $z_{ij,k}$ are redundant and the measurement covariance $P_{Z_k}$ is singular. In this case, the normal equations (26) cannot be used directly, so singular value decomposition is needed to calculate $\theta$.

Appendix B.2. Decomposition Algorithm

The observation vector error covariance matrix in the remote sensing camera coordinate system can be expressed as:
$$R_{\hat{W}_{i,k}} = G_{\hat{W}_{i,k}} G_{\hat{W}_{i,k}}^T \tag{A7}$$
That is, the covariance matrix $R_{\hat{W}_{i,k}}$ can be factored into a matrix multiplied by its own transpose; we denote this factor $G_{\hat{W}_{i,k}}$. The observation vector error can then be expressed as:
$$\Delta\hat{W}_{i,k} = G_{\hat{W}_{i,k}} \varepsilon_{i,k} \tag{A8}$$
with $E\{\varepsilon_{i,k}\} = 0$ and $E\{\varepsilon_{i,k} \varepsilon_{i',k}^T\} = \delta_{ii'} I_{3\times3}$, so that:
$$z_{ij,k} = (-\hat{W}_{i,k}^{o} \times \hat{W}_{j,k}) \cdot \theta + B_{ij,k}^{i} \varepsilon_{i,k}^{o} + B_{ij,k}^{j} \varepsilon_{j,k} \tag{A9}$$
where $B_{ij,k}^{i} = (\hat{W}_{j,k})^T G_{\hat{W}_{i,k}^{o}}$ and $B_{ij,k}^{j} = (\hat{W}_{i,k}^{o})^T G_{\hat{W}_{j,k}}$.
In summary, we rewrite the measurement equation as:
$$Z_k = H_k \theta + B_k \varepsilon_k \tag{A10}$$
where $\varepsilon_k \equiv [\varepsilon_{1,k}^{o\,T}, \ldots, \varepsilon_{n_{s,k},k}^{o\,T}, \varepsilon_{1,k}^T, \ldots, \varepsilon_{n_{c,k},k}^T]^T \sim N(0, I_{3(n_{s,k}+n_{c,k}) \times 3(n_{s,k}+n_{c,k})})$, $B_k$ is an $n_{s,k} n_{c,k} \times 3(n_{s,k}+n_{c,k})$ matrix, $H_k$ is an $n_{s,k} n_{c,k} \times 3$ matrix, and $P_{Z_k} = B_k B_k^T$.
We compute the singular value decomposition (SVD) $B_k = U_k S_k V_k^T$. $U_k$ and $V_k$ are orthogonal matrices, and $S_k$ is an $n_{s,k} n_{c,k} \times 3(n_{s,k}+n_{c,k})$ diagonal matrix with $(S_k)_{11} \ge (S_k)_{22} \ge \ldots \ge 0$. So $P_{Z_k} = U_k S_k S_k^T U_k^T = U_k D_k U_k^T$, an $n_{s,k} n_{c,k} \times n_{s,k} n_{c,k}$ positive semidefinite matrix.
Multiplying Equation (A10) by $U_k^T$ from the left gives:
$$\zeta_k = C_k \theta + S_k \varepsilon_k' \tag{A11}$$
where $\varepsilon_k' \equiv V_k^T \varepsilon_k \sim N(0, I_{3(n_{s,k}+n_{c,k}) \times 3(n_{s,k}+n_{c,k})})$, $\zeta_k \equiv U_k^T Z_k$, and $C_k \equiv U_k^T H_k$.
Assuming $(S_k)_{11} \ge (S_k)_{22} \ge \ldots \ge (S_k)_{l_{max,k} l_{max,k}} > 0$, the first $l_{max,k}$ components of $\zeta_k$ are independent of each other. We therefore preserve only the first $l_{max,k}$ rows of the $\zeta_k$, $C_k$, $S_k$ matrices and rewrite the equation as $\tilde{\zeta}_k = \tilde{C}_k \theta + \tilde{S}_k \tilde{\varepsilon}_k$, where the tilde indicates row truncation.
The negative log-likelihood function of Equation (25) then becomes:
$$J_\psi(\theta) = \frac{1}{2} \sum_{k=1}^{N} \left[ (\tilde{\zeta}_k - \tilde{C}_k\theta)^T P_{\tilde{\zeta}_k}^{-1} (\tilde{\zeta}_k - \tilde{C}_k\theta) + \log\det P_{\tilde{\zeta}_k} + \log (2\pi)^{2n_{c,k}+2n_{s,k}-3} \right] \tag{A12}$$
Similarly to Equation (26), we get:
$$P_{\theta\theta}^{-1}\theta = \sum_{k=1}^{N} \tilde{C}_k^T \tilde{D}_k^{-1} \tilde{\zeta}_k, \qquad P_{\theta\theta}^{-1} = \sum_{k=1}^{N} \tilde{C}_k^T \tilde{D}_k^{-1} \tilde{C}_k \tag{A13}$$
The SVD thus extracts the largest subset of measurements that is sensitive to the installation matrix error.
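A sketch of the per-frame decomposition (hypothetical inputs; the relative rank threshold stands in for the $l_{max,k}$ selection described above):

```python
import numpy as np

def accumulate_frame(Z_k, H_k, B_k, info, rhs, tol=1e-8):
    """One frame of Equations (A10)-(A13): whiten via the SVD of B_k,
    truncate to the dominant singular values, and add the frame's
    contribution to the normal equations."""
    U, s, _ = np.linalg.svd(B_k)            # B_k = U S V^T
    l_max = int(np.sum(s > tol * s[0]))     # retained singular values
    zeta = (U.T @ Z_k)[:l_max]              # zeta_k = U^T Z_k, truncated
    C = (U.T @ H_k)[:l_max]                 # C_k = U^T H_k, truncated
    d_inv = 1.0 / s[:l_max] ** 2            # D^-1 with D = S S^T (diagonal)
    info += C.T @ (d_inv[:, None] * C)      # sum of C~^T D~^-1 C~
    rhs += C.T @ (d_inv * zeta)             # sum of C~^T D~^-1 zeta~
    return info, rhs

# After all frames: P_theta_theta = inv(info); theta = P_theta_theta @ rhs.
```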

Appendix B.3. Star Vector Measurement Error

The star vector measurement error can be described by the QUEST measurement model, in which the error vector is symmetric about the measured vector. The error covariance matrices are:
$$E\{\Delta\hat{U}_{i,k} \Delta\hat{U}_{i,k}^T\} = \sigma_{i,k}^2 \left(I - \hat{U}_{i,k}^{true} \hat{U}_{i,k}^{true\,T}\right), \qquad E\{\Delta\hat{W}_{i,k}^{o} \Delta\hat{W}_{i,k}^{o\,T}\} = \sigma_{i,k}^2 \left(I - \hat{W}_{i,k}^{o} \hat{W}_{i,k}^{o\,T}\right) \tag{A14}$$
Combining Equations (A4) and (A14) gives:
$$E_k(i|j|u) = \sigma_{j,k}^2 (\hat{W}_{i,k}^{o} \times \hat{W}_{j,k}) \cdot (\hat{W}_{u,k}^{o} \times \hat{W}_{j,k}) \tag{A15}$$
For the decomposition of the measurement error covariance matrix, we can take $R_{\hat{W}_{i,k}} = (\sigma_{i,k}[\hat{W}_{i,k}])(\sigma_{i,k}[\hat{W}_{i,k}])^T$, i.e., $G_{\hat{W}_{i,k}} = \sigma_{i,k}[\hat{W}_{i,k}]$; then $B_{ij,k}^{i} = (\hat{W}_{j,k})^T (\sigma_{i,k}[\hat{W}_{i,k}^{o}]) = \sigma_{i,k}(\hat{W}_{i,k}^{o} \times \hat{W}_{j,k})^T$ and $B_{ij,k}^{j} = \sigma_{j,k}(\hat{W}_{j,k} \times \hat{W}_{i,k}^{o})^T$.
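To illustrate how the per-measurement noise statistics could be evaluated under the QUEST model, a small sketch with made-up unit vectors and sigmas (the pairing of arguments follows Equations (A6) and (A15)):

```python
import numpy as np

def Ek(w_i, w_j, w_u, sigma_j):
    """E_k(i|j|u) per Equation (A15): sigma_j^2 (W_i x W_j) . (W_u x W_j)."""
    return sigma_j**2 * np.dot(np.cross(w_i, w_j), np.cross(w_u, w_j))

# Variance of one angle-cosine measurement, per Equation (A6):
# E{dz_ij^2} = E_k(i|j|i) + E_k(j|i|j).
w_i = np.array([0.0, 0.1, 0.995]);  w_i /= np.linalg.norm(w_i)
w_j = np.array([0.2, -0.1, 0.97]);  w_j /= np.linalg.norm(w_j)
var_z = Ek(w_i, w_j, w_i, 1e-5) + Ek(w_j, w_i, w_j, 2e-5)
print(var_z)
```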

References

  1. Yavari, S.; Valadan Zoej, M.J.; Sahebi, M.R.; Mokhtarzade, M. An Automatic Novel Structural Linear Feature-Based Matching Based on New Concepts of Mathematically Generated Lines and Points. Photogramm. Eng. Remote Sens. 2016, 82, 365–376.
  2. Brown, L.G. A Survey of Image Registration Techniques. ACM Comput. Surv. 1992, 24, 325–376.
  3. Zitova, B.; Flusser, J. Image Registration Methods: A Survey. Image Vis. Comput. 2003, 21, 977–1000.
  4. Lou, X.; Huang, W.; Zhou, C. A Method for Fast Resampling of Remote Sensing Imagery. J. Remote Sens. 2002, 6, 96–101.
  5. Xie, J.; Tang, X.; Mo, F.; Li, G.; Zhu, G.; Wang, Z.; Fu, X.; Gao, X.; Dou, X. ZY3-02 Laser Altimeter Footprint Geolocation Prediction. Sensors 2017, 17, 2165.
  6. Wang, M.; Tian, Y.; Cheng, Y. Development of On-orbit Geometric Calibration for High Resolution Optical Remote Sensing Satellite. Geomat. Inf. Sci. Wuhan Univ. 2017, 42, 1580–1588.
  7. Valorge, C.; Meygret, A.; Lebègue, L. Forty years of experience with SPOT in-flight calibration. In Post-Launch Calibration of Satellite Sensors; Taylor & Francis: London, UK, 2004; pp. 119–133.
  8. Li, C.R.; Tang, L.L.; Ma, L.L.; Zhou, Y.S.; Gao, C.X.; Wang, N.; Li, X.H.; Wang, X.H.; Zhu, X.H. Comprehensive calibration and validation site for information remote sensing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 1233.
  9. Dial, G.; Bowen, H.; Gerlach, F.; Grodecki, J.; Oleszczuk, R. IKONOS satellite, imagery, and products. Remote Sens. Environ. 2003, 88, 23–36.
  10. Grodecki, J.; Dial, G. IKONOS geometric accuracy validation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 50–55.
  11. Zhang, J.; Zhang, Z. Strict Geometric Model Based on Affine Transformation for Remote Sensing Image with High Resolution. Geomat. Inf. Sci. Wuhan Univ. 2002, 34, 309–312.
  12. Yuan, X.X.; Yu, J.P. Calibration of Constant Angular Error for High Resolution Remotely Sensed Imagery. Acta Geod. Cartogr. Sin. 2008, 37, 36–41.
  13. Wei, X.G.; Wang, Q.L.; Li, J.; He, H.Y. On-orbit calibration for cross-angle between optical axes of star sensor and remote sensing camera. Opt. Precis. Eng. 2013, 21, 274–280.
  14. Dial, G.; Grodecki, J. Test ranges for metric calibration and validation of high-resolution satellite imaging systems. In Proceedings of the International Workshop on Radiometric and Geometric Calibration, Gulfport, MS, USA, 2–5 December 2003.
  15. Zhang, G.; Guan, Z. High-frequency attitude jitter correction for the Gaofen-9 satellite. Photogramm. Rec. 2018, 33, 264–282.
  16. Jiang, Y.; Cui, Z.; Zhang, G.; Wang, J.; Xu, M.; Zhao, Y.; Xu, Y. CCD distortion calibration without accurate ground control data for pushbroom satellites. ISPRS J. Photogramm. Remote Sens. 2018, 142, 21–26.
  17. Pi, Y.; Li, X.; Yang, B. Global Iterative Geometric Calibration of a Linear Optical Satellite Based on Sparse GCPs. IEEE Trans. Geosci. Remote Sens. 2019, 58, 436–446.
  18. Shuster, M.D.; Oh, S.D. Three-axis attitude determination from vector observations. J. Guid. Control 1981, 4, 70–77.
  19. Sorenson, H.W. Parameter Estimation: Principles and Problems; Wiley-Interscience: New Jersey, NJ, USA, 1980; Volume 9.
  20. Nahi, N.E. Estimation Theory and Applications; Wiley: New York, NY, USA, 1969.
  21. Zhang, G.; Liu, B.; Jiang, W.S. Inner FOV stitching algorithm of spaceborne optical sensor based on the virtual CCD line. J. Image Graph. 2012, 17, 696–701.
  22. Poli, D.; Zhang, L.; Gruen, A. Orientation of satellite and airborne imagery from multi-line pushbroom sensors with a rigorous sensor model. Int. Arch. Photogramm. Remote Sens. 2004, 35, 130–135.
  23. Wang, M.; Cheng, Y.; Tian, Y.; He, L.; Wang, Y. A New On-Orbit Geometric Self-Calibration Approach for the High-Resolution Geostationary Optical Satellite GaoFen4. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1670–1683.
  24. Pan, H.; Zhang, G.; Tang, X.; Li, D.; Zhu, X.; Zhou, P.; Jiang, Y. Basic products of the ZiYuan-3 satellite and accuracy evaluation. Photogramm. Eng. Remote Sens. 2013, 79, 1131–1145.
  25. Wang, M.; Fan, C.; Yang, B.; Jin, S.; Pan, J. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration. Sensors 2016, 16, 1203.
  26. Wang, M.; Cheng, Y.; Chang, X.; Jin, S.; Zhu, Y. On-orbit geometric calibration and geometric quality assessment for the high-resolution geostationary optical satellite GaoFen4. ISPRS J. Photogramm. Remote Sens. 2017, 125, 63–77.
  27. Dial, G.; Grodecki, J. Block adjustment with rational polynomial camera models. In Proceedings of the ACSM-ASPRS Annual Conference, Washington, DC, USA, 19–26 April 2002.
Figure 1. Method flow chart.
Figure 2. The diagram of the virtual CCD and real CCD relative positions.
Figure 3. The installation relationship of the satellite camera and the satellite.
Figure 4. Imaging attitude observations of the remote sensing camera.
Figure 5. Deviation angle error (arc-second) estimation versus frame for $\sigma_s = 0.06$, $\sigma_c = 0.1$ (pixel).
Figure 6. Deviation angle error (arc-second) estimation versus frame for $\sigma_s = 0.10$, $\sigma_c = 1.0$ (pixel).
Figure 7. Triaxial deviation angle of the calculated installation matrix under different errors of the star camera and remote sensing camera, using the proposed algorithm. The unit of centroiding error is pixel for the X- and Y-axes and arc-second for the Z-axis.
Figure 8. Residual results of attitude fitting.
Figure 9. Triaxial deviation angle of the calculated installation matrix under different errors of the star camera and remote sensing camera, using Wang's algorithm. The unit of centroiding error is pixel for the X- and Y-axes and arc-second for the Z-axis.
Figure 10. The mean value of the geo-positioning error of the test ground points after on-orbit correction of the installation matrix. The unit of geo-positioning error is the meter for the Z-axis.
Table 1. Geo-Positioning Errors Decomposition.

Error Item | Value | Geo-Positioning Error (m)
GPS positioning accuracy | 2 m | 2
Vibration of the satellite platform | 0.1″ | 0.25
Asynchronous error of satellite clock | 0.1 ms | 0.75 (along the orbit direction)
Distortion calibration residual of remote sensing camera | 0.3 pixel | 0.21
Table 2. Star Camera Parameters.

Focal length (mm) | 100
Field of view (°) | 18 × 14
Data rate (Hz) | 5
Principal point (pixel) | (2560, 1920)
Pixel size (μm) | 6.4
Distortion | p1 = 2.29 × 10−5; p2 = −2.96 × 10−5; q1 = −9.30 × 10−7; q2 = −4.25 × 10−9
Table 3. Remote Sensing Camera Parameters.

Focal length (m) | 10
CCD pixels | 6144 each
CCD pixel size (m) | 1 × 10−5
Principal point (pixel) | 3067
Distortion (pixel | μm) | 56 | 1077.2
 | 2038 | 66.3
 | 3067 | 0
 | 5784 | 892.1
 | 5912 | 1082.7
Table 4. The Deviation Angle Error for Different Parameters.

Camera Error [σs, σc] (pixel) | Wang's Algorithm: X (″) | Y (″) | Z (″) | Paper Algorithm: X (″) | Y (″) | Z (″)
0.06, 0.3 | 0.3198 | −0.5958 | −7.4917 | 0.0383 | 0.0202 | −0.2788
0.07, 0.5 | 0.3644 | −0.6686 | 4.4221 | −0.0640 | −0.0464 | −0.5072
0.08, 0.5 | 0.1656 | −0.7332 | 4.7857 | −0.0809 | −0.0441 | −0.1987
0.09, 0.7 | 0.5519 | −0.6898 | −8.4521 | −0.0064 | 0.0038 | −0.6839
0.10, 1.0 | 0.1837 | −0.7338 | −9.6278 | 0.0121 | 0.0135 | 0.6246
Table 5. Geo-Positioning Results of the Different Algorithms.

Camera Error (pixel): σc | σs | Orbit Direction (m): Wang | The Paper | Vertical Orbit Direction (m): Wang | The Paper
0.30 | 0.050 | 5.96 | 4.32 | 5.17 | 3.54
0.30 | 0.055 | 6.65 | 4.41 | 5.56 | 3.64
0.30 | 0.060 | 6.31 | 4.53 | 5.82 | 3.77
0.30 | 0.065 | 6.97 | 4.71 | 6.20 | 3.87
0.30 | 0.070 | 7.39 | 4.92 | 6.24 | 4.15
0.30 | 0.075 | 7.25 | 4.98 | 6.46 | 4.18
0.30 | 0.080 | 7.73 | 5.12 | 6.84 | 4.31
0.30 | 0.085 | 8.02 | 5.18 | 7.18 | 4.34
0.30 | 0.090 | 8.13 | 5.29 | 7.16 | 4.49
0.30 | 0.095 | 8.39 | 5.38 | 8.57 | 4.53
0.30 | 0.100 | 9.23 | 5.29 | 8.11 | 4.48

Share and Cite

MDPI and ACS Style

Tang, Y.; Wei, Z.; Wei, X.; Li, J.; Wang, G. On-Orbit Calibration of Installation Matrix between Remote Sensing Camera and Star Camera Based on Vector Angle Invariance. Sensors 2020, 20, 5667. https://0-doi-org.brum.beds.ac.uk/10.3390/s20195667

AMA Style

Tang Y, Wei Z, Wei X, Li J, Wang G. On-Orbit Calibration of Installation Matrix between Remote Sensing Camera and Star Camera Based on Vector Angle Invariance. Sensors. 2020; 20(19):5667. https://0-doi-org.brum.beds.ac.uk/10.3390/s20195667

Chicago/Turabian Style

Tang, Yujie, Zhenzhong Wei, Xinguo Wei, Jian Li, and Gangyi Wang. 2020. "On-Orbit Calibration of Installation Matrix between Remote Sensing Camera and Star Camera Based on Vector Angle Invariance" Sensors 20, no. 19: 5667. https://0-doi-org.brum.beds.ac.uk/10.3390/s20195667

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop