1. Introduction
A high-resolution satellite image (HRSI) is important for high-precision geospatial information. It is widely used in many fields, such as 3D shoreline extraction and coastal mapping, Digital Terrain Model (DTM) and Digital Surface Model (DSM) generation, and national topographic mapping. Many of these remote sensing applications require an HRSI with high accuracy [1]. The HRSI reconstruction method is mainly divided into four steps: feature extraction, feature matching, exterior orientation element estimation and final resampling [2,3,4]. High-precision estimation of the exterior orientation elements is the key to realizing high-precision geo-positioning.
In geometric photogrammetry, once the image coordinates of the remote sensing image are determined, geo-positioning depends on the accuracy of the provided exterior orientation elements. The line elements are obtained by the GPS receiver, and the angle elements can be obtained by the star camera or star sensor in the Earth-centered inertial coordinate system [5]. The angle elements of the remote sensing camera are then obtained from the installation relationship between the remote sensing camera and the star camera or star sensor (hereinafter called the installation matrix). Before the satellite launches, satellite designers measure the installation matrix on the ground. Owing to stress release during launch and changes in the thermal environment, the installation matrix changes, which causes geo-positioning to fail; it therefore requires on-orbit calibration.
Early published works in this area usually used the remote sensing camera to obtain its exterior orientation elements through GCPs; the attitude determination equipment outputs the satellite's attitude, and the installation matrix can then be solved [6]. After the launch of the SPOT-5 satellite in 2002, the French Space Center established the angle error model of the optical axis between the star sensor and the remote sensing camera. They used globally distributed test sites to calculate the boresight direction of the remote sensing camera and then calculated the error of the installation matrix with the attitude and orbit control subsystem (AOCS). Furthermore, an error model of the internal orientation elements, fitted by a fifth-order polynomial, was proposed. Finally, the positioning accuracy of a SPOT-5 single scene without GCPs could reach 50 m [7]. This method establishes the basic on-orbit calibration process of the installation matrix, but it does not address the requirements on GCPs or the real orientation of each pixel [8]. Dial and Grodecki established a joint calibration of the IKONOS satellite based on the Field Angle Map (FAM) and the interlock angle between the optical axes of the remote sensing camera and the star sensor. The FAM gives the real orientation of each pixel in the CCDs. They then used knife-edge targets in a set of independent images to calibrate interlock angle errors, after which they computed the mean interlock angle correction. Finally, the accuracy reaches 12 m (RMS) in planimetry and 10 m (RMS) in elevation without GCPs [9,10]. This analysis establishes the angle model and puts forward requirements on the types of GCPs, which makes the identification and extraction of GCPs more accurate and reliable. However, obtaining the exterior orientation elements of the remote sensing camera directly through GCPs leads to a strong correlation of the azimuth parameters, which in turn causes errors in the solution of the installation matrix [11].
To reduce the strong coupling of the azimuth parameters, later published works report methods based on Taylor expansion and an iterative solution of the remote sensing camera's angle elements. They treat the angle elements as unknown quantities, which are expanded in a Taylor series at the center of the scene and solved iteratively in the rigorous imaging model. Yuan assumed a constant difference between the direction of the remote sensing image point and the true direction: the image point direction is calculated from the image attitude, and the true direction points from the satellite to the GCP. This constant difference is the deviation of the initial installation matrix. Through the line-of-sight consistency of a small number of GCPs in the geocentric rectangular coordinate system and the image coordinate system, the deviation matrix can be calculated and then compensated. After compensating the installation matrix of the QuickBird satellite, the positioning accuracy without GCPs improved from 8.59 m to 3.09 m [12]. Wang used a similar method, establishing several collinearity equations through the GCPs and calculating the installation matrix from the initial values of ground calibration. This work was efficient because stable exterior orientation elements could be obtained through iteration of the internal and external parameter calculation, after which the installation matrix could be calculated easily. Each GCP yields an installation matrix; the optimal installation matrix was acquired by establishing an optimization objective equation. Finally, the RMS deviation reached 9.3 m [13]. The two methods above solve the attitude of the remote sensing camera by Taylor expansion to reduce the dependence on the satellite attitude determination system, but they cannot avoid the coupling between the random error of the attitude and the installation matrix [14]. At the same time, because the satellite attitude determination system and the remote sensing camera work unsynchronized, the fitting and interpolation of the external parameters also introduce errors [15].
In this paper, an improved geo-positioning method for satellite imagery is proposed to meet the requirements of high-accuracy observation. This method has three advantages. First, we introduce the concept of the star camera, an optical payload on the remote sensing camera's platform. The star camera combines two functions: autonomous navigation attitude measurement, like a star sensor, and acquisition of the exterior orientation elements of the remote sensing camera's current image. This avoids the fitting and interpolation errors caused by the unsynchronized operation of the satellite attitude determination system and the remote sensing camera. The star camera and the remote sensing camera shoot simultaneously on a synchronization signal; the star camera provides the real-time attitude, from which the exterior orientation elements of the remote sensing camera are calculated through the installation matrix. In terms of camera models, we use the rigorous imaging model (for the sensor model of the multi-line array CCD, refer to Section 3). Second, unlike the many coordinate system transformations in previous algorithms, this method only involves simple conversions between the World Geodetic System 1984 (WGS84), the Earth-centered inertial reference frame (ECI, here J2000), the star camera and the remote sensing camera coordinate systems. It does not need to consider the satellite motion in the orbital coordinate system. We only use the real-time GPS position and the synchronized image data of the remote sensing camera and the star camera to complete the high-precision geo-positioning. Third, unlike existing algorithms, which need to calculate the attitude matrices of the star sensor and the remote sensing camera themselves, the input calibration data consist only of star vectors and ground calibration target vectors (GCT vectors). Because the number of star vectors captured by the star camera is large, the dependence on the number of GCTs is reduced. For details, see Section 2.
The remainder of this paper is organized as follows. A brief introduction of this method is given in Section 2. Section 3 describes the simulation process and the results of the algorithm, including building the imaging model, preparing the simulation database, and verifying the simulation calculation in detail. Section 4 presents some conclusions.
2. Algorithm
In this part, we propose an algorithm that not only uses a rigorous sensor model, but also reduces the coordinate system transformations and the number of GCTs demanded by traditional on-orbit calibration. The core of this method is: when the remote sensing camera's exterior orientation elements are determined, the vector angles between the GCT vectors (a GCT is the projection of the corresponding image point on the ground) and the star vectors are rotation invariant. The installation matrix is then calculated by the method of double-vector attitude determination and optimized in real time by batch processing on orbit.
The method's steps are as follows: the remote sensing image is acquired on a synchronous pulse, and the star point image is acquired by the star camera at the same time. The correspondence between the remote sensing image feature points and the GCTs is obtained through feature extraction and recognition. When the GCTs are located, the GCT vectors are obtained from the satellite position (the vectors point from the satellite body to the GCTs). Because the satellite's exterior orientation elements (position and attitude) are known, the vector angles between the GCT vectors and the star vectors can be calculated precisely. The GCT vectors in the remote sensing image coordinate system and the star vectors in the star camera coordinate system can also be calculated. The installation matrix can therefore be calibrated from these invariants.
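The rotation invariance at the core of the method can be checked numerically. The sketch below (Python; the vectors are illustrative values of our own choosing, not flight data) rotates a GCT vector and a star vector by the same arbitrary rotation and verifies that their angle cosine is unchanged.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula: rotation by `angle` (rad) about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def angle_cosine(u, v):
    """Cosine of the angle between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative GCT vector and star vector in a common coordinate system
gct = np.array([0.3, -0.5, 0.8])
star = np.array([-0.2, 0.7, 0.7])

# An arbitrary rotation of the common frame leaves the vector angle unchanged
R = rotation_matrix([1.0, 2.0, 3.0], 0.37)
assert abs(angle_cosine(gct, star) - angle_cosine(R @ gct, R @ star)) < 1e-12
```

Because the angle cosine survives any common rotation, it can serve as an attitude-independent observable for calibrating the installation matrix.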
Figure 1 is a flow chart of the proposed method. Note that the method involves no estimation of the motion state of the remote sensing camera. When the position of the remote sensing camera is known, the coordinates of the ground points and of the identified star points are all accurately known, so the true vector angle between a GCT vector and a star vector is a constant that depends only on the remote sensing camera position. If the installation matrix were known precisely, the measured vector angle would equal this constant in the ideal imaging model. However, because of the initial installation matrix error, camera distortion residuals, model error and other effects, the measured angle is only approximately equal to the constant. Through the method in this paper, we associate the error of the installation matrix with an optimization equation composed of the measured and true vector angles and the estimation error of the input data. Finally, we optimize the error of the installation matrix by using the coordinates of the stars and the corresponding GCTs.
2.1. Mathematical Model
To obtain the ground coordinates on Earth from the remote sensing image coordinates, a mathematical model is needed to describe the relationship between the two sets of coordinates. Each GCT provides a set of two collinearity condition equations derived from the image coordinates, the satellite position and the GCT coordinates in the conventional inertial system (in the field of aerospace, J2000 is usually used as the conventional inertial system, hereinafter referred to as J2000) [16,17]. In standard collinearity notation, Equation (1) reads:

\begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix} = \lambda M \begin{bmatrix} X - X_S \\ Y - Y_S \\ Z - Z_S \end{bmatrix}   (1)

Now we describe the components of Equation (1). (x, y) is the image coordinate and (x_0, y_0) is the principal point of the image (the principal point is the point where the optical axis intersects the image plane). M is an orthogonal rotation matrix that represents the transformation from J2000 to the image sensor coordinate system. The origin of the remote sensing camera coordinate system is located at the perspective center; the Y-axis is parallel to the detector array, the X-axis points in the pushbroom direction and the Z-axis is perpendicular to the X-Y plane, pointing to the observation target. f is the focal length of the remote sensing camera, \lambda is a scale factor, (X, Y, Z) is the ground point's coordinate in J2000 and (X_S, Y_S, Z_S) is the perspective center coordinate of the remote sensing camera in J2000. We usually obtain the ground points in a local geodetic datum but calculate in J2000, which requires a chain of coordinate transformations. First, correct the geoid height to separate the geoid and the ellipsoid. Second, transform the coordinates from the geodetic coordinate system to the geocentric coordinate system in the local datum. Third, convert the geocentric coordinates from the local datum to WGS84. Finally, transform the WGS84 coordinates into the J2000 coordinate system.
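As a concrete illustration of the collinearity condition, the sketch below (Python; the function and variable names are ours, not the paper's) projects a ground point given in J2000 into image coordinates, given the rotation matrix M, the focal length and the principal point.

```python
import numpy as np

def project_to_image(ground, cam_center, M, f, principal_point):
    """Collinearity condition: (x - x0, y - y0, -f)^T = lam * M * (X - X_S).

    ground, cam_center : 3-vectors in J2000
    M : 3x3 rotation from J2000 to the image sensor coordinate system
    f : focal length; principal_point : (x0, y0)
    """
    d = M @ (np.asarray(ground, float) - np.asarray(cam_center, float))
    lam = -f / d[2]              # fix the scale factor via the third component
    x0, y0 = principal_point
    return x0 + lam * d[0], y0 + lam * d[1]

# Sanity check: a point on the boresight must map to the principal point
x, y = project_to_image([0.0, 0.0, 500e3], [0.0, 0.0, 0.0],
                        np.eye(3), 0.5, (0.0, 0.0))
assert abs(x) < 1e-12 and abs(y) < 1e-12
```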
The M matrix changes with time t. Denoting the installation matrix between the star camera and the remote sensing camera by R_u, and the rotation matrix from J2000 to the star camera coordinate system (also named the attitude matrix) by A(t), it can be expressed as:

M(t) = R_u A(t)   (2)

A(t) is given by the star camera. R_u is obtained by ground calibration; because of stress release during launch and changes in the working environment, it deviates from its initial value, so R_u needs to be calibrated on orbit.
The observed star vector of the star camera satisfies the condition:

w = A(t) v   (3)

where w is the representation of the star vector in the star camera coordinate system and v is its representation in J2000. With the right ascension \alpha and declination \delta of the star taken from the star catalog,

v = [\cos\delta\cos\alpha, \ \cos\delta\sin\alpha, \ \sin\delta]^T

The above equations are based on the ideal imaging model. In fact, there is lens distortion in star cameras, which gives Equation (4):

x = x' + \Delta x, \quad y = y' + \Delta y   (4)

where (x, y) are the image coordinates of the target in the ideal imaging model, (x', y') are the actual imaging coordinates and (\Delta x, \Delta y) is the distortion value. With (x_0, y_0) the principal point of the star camera and r^2 = (x' - x_0)^2 + (y' - y_0)^2, the standard radial and tangential model gives:

\Delta x = (x' - x_0)(q_1 r^2 + q_2 r^4) + p_1 (r^2 + 2(x' - x_0)^2) + 2 p_2 (x' - x_0)(y' - y_0)   (5)
\Delta y = (y' - y_0)(q_1 r^2 + q_2 r^4) + p_2 (r^2 + 2(y' - y_0)^2) + 2 p_1 (x' - x_0)(y' - y_0)   (6)

where (q_1, q_2) is the radial distortion and (p_1, p_2) is the tangential distortion.
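The catalog direction and the distortion evaluation can be sketched as follows (Python). The coefficient layout follows the common radial plus tangential form; the exact sign convention of the correction is our assumption, since the paper defines Equations (4) to (6) in its own notation.

```python
import numpy as np

def star_vector_j2000(ra, dec):
    """Unit vector of a catalog star from right ascension/declination (rad)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def distortion(xp, yp, x0, y0, q1, q2, p1, p2):
    """Radial (q1, q2) and tangential (p1, p2) distortion at actual image
    coordinates (xp, yp), relative to the principal point (x0, y0)."""
    dx, dy = xp - x0, yp - y0
    r2 = dx * dx + dy * dy
    radial = q1 * r2 + q2 * r2 * r2
    ddx = dx * radial + p1 * (r2 + 2.0 * dx * dx) + 2.0 * p2 * dx * dy
    ddy = dy * radial + p2 * (r2 + 2.0 * dy * dy) + 2.0 * p1 * dx * dy
    return ddx, ddy

# Sanity checks: catalog vectors are unit length, and the distortion
# vanishes at the principal point
assert np.allclose(star_vector_j2000(0.0, 0.0), [1.0, 0.0, 0.0])
assert abs(np.linalg.norm(star_vector_j2000(1.2, -0.4)) - 1.0) < 1e-12
```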
2.2. Equation Derivation
The star vector in the remote sensing camera coordinate system, denoted s, can be expressed through the installation matrix R_u as:

s = R_u w = R_u A(t) v   (7)

We use Equations (1)-(3) and (7) to calculate the vector angles between the GCT vectors and the star vectors in the same coordinate system.
To avoid the influence of the attitude, the angle cosine, which is independent of the attitude, is selected for estimation. We deduce the relationship between the measured parameters (including the original vectors and the angle cosine), and we explain why the attitude and the absolute installation matrix cannot be observed simultaneously. Based on the statistical characteristics of the measured parameters, the maximum likelihood estimate of the relative attitude is derived. Then, the redundancy of the measured parameters is discussed, and a decomposition algorithm is used to obtain the maximum independent measurement subset.
The star vectors observed in the star camera coordinate system are expressed as:

w_i(t_k) = \hat{v}_i(t_k) + \Delta w_i(t_k)   (8)

where \hat{v}_i is the true value of the star vector, \Delta w_i is the measurement noise, i = 1, ..., N is the vector index, and t_k, k = 1, ..., K, is the time series. Similarly, the observation vectors (here meaning GCT vectors or star vectors after coordinate system conversion) in the remote sensing camera coordinate system can be expressed as:

b_j(t_k) = \hat{s}_j(t_k) + \Delta b_j(t_k)   (9)
Here, the same observation vector expressed in the remote sensing camera and star camera coordinate systems satisfies s = R_u w, as in Equation (7). In general, the true value of the installation matrix is not known; only the initial value \hat{R}_u, obtained by ground calibration before launch, is available. Therefore, the error matrix is defined as the rotation M satisfying R_u = M \hat{R}_u. Define the error vector \varphi = [\varphi_x, \varphi_y, \varphi_z]^T, which has the following relationship with the error matrix M [18]:

M = \exp(-[\varphi \times]) = I - [\varphi \times] + \frac{[\varphi \times]^2}{2!} - \cdots   (10)

where [\varphi \times] is the skew-symmetric cross-product matrix of \varphi, and \varphi_x, \varphi_y, \varphi_z are structural error angles, which are usually small. Here, we can simplify Equation (10) to its first-order approximation:

M \approx I - [\varphi \times]   (11)
Let r denote the reference vector, which is the expression of the observation vector of the remote sensing camera in J2000, so that the observation satisfies b = M(t) r + \Delta b, where M(t) is the rotation matrix from J2000 to the remote sensing camera coordinate system and \Delta b is the reference vector error, supposed to be Gaussian white noise with zero mean. Because the error of the reference vector is very small, it is generally negligible. The star vectors likewise satisfy w_i = A(t) v_i. As can be seen from this equation, the values of the star vectors w_i do not change under the transformations

A(t) \rightarrow A(t) T, \quad v_i \rightarrow T^{-1} v_i

where T is an arbitrary rotation matrix. Therefore, it is impossible to obtain the installation matrix deviation from the attitude measurement of the star camera alone, which also means that the sensor installation and the attitude cannot be estimated at the same time.
At this point, we introduce a new variable \tilde{s} = \hat{R}_u w, the observation vector expressed in the uncalibrated remote sensing camera coordinate system. We expand the M matrix as M(t) = (I - [\varphi \times]) \hat{R}_u A(t). It follows that the first-order approximation of the measurement is still associated with the attitude, but only through the product of the attitude-dependent term and [\varphi \times], which is a second-order small term. Neglecting this term and collecting the remaining quantities, we obtain an attitude-independent measurement of \varphi.
The detailed derivations of the measurement noise and covariance in this section are given explicitly in Appendix A.
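The small-angle relationship between the error vector and the error matrix used in this section can be verified numerically. The sketch below (Python, with notation of our own) compares the first-order form I - [phi x] against the exact rotation exp(-[phi x]) for arc-second-level error angles.

```python
import numpy as np

def cross_matrix(phi):
    """Skew-symmetric cross-product matrix [phi x]."""
    x, y, z = phi
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def exact_error_matrix(phi):
    """Exact rotation exp(-[phi x]) via Rodrigues' formula."""
    theta = np.linalg.norm(phi)
    K = cross_matrix(phi / theta)
    return np.eye(3) - np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# For arc-second-level structural error angles, the first-order form
# I - [phi x] differs from the exact rotation only at second order in |phi|
arcsec = np.pi / (180.0 * 3600.0)
phi = np.array([3.0, -2.0, 5.0]) * arcsec
approx = np.eye(3) - cross_matrix(phi)
assert np.max(np.abs(exact_error_matrix(phi) - approx)) < np.linalg.norm(phi) ** 2
```

This is why retaining only first-order terms in the linearization is justified, and why any residual proportional non-linearity can be removed by iterating the estimate.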
2.3. Calibration Method
Assume that at time t_k the star camera has observed N star vectors and the remote sensing camera has observed m vectors, so that the number of vector-angle cosines is N m. We build a measurement vector, denoted z, with N m elements, and obtain the measurement equation:

z = H \varphi + \varepsilon

where the measurement sensitivity matrix H can be built from Equation (21), and \varepsilon is a Gaussian white noise sequence with covariance R.
Using the theory of maximum likelihood estimation [19,20], the negative log-likelihood function of the maximum likelihood estimate \hat{\varphi} is:

J(\varphi) = \frac{1}{2} (z - H\varphi)^T R^{-1} (z - H\varphi) + \mathrm{const}

Minimizing the above equation yields the normal equation:

(H^T R^{-1} H) \hat{\varphi} = H^T R^{-1} z, \qquad P = (H^T R^{-1} H)^{-1}

where P is the covariance matrix of the estimation error. The relative conversion error can be estimated from the above equations.
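A minimal numeric sketch of this weighted least-squares solution follows (Python), with a synthetic measurement vector z, sensitivity matrix H and noise covariance R; all names, dimensions and values here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear measurement model z = H * phi + eps
phi_true = np.array([1e-5, -2e-5, 3e-5])            # error angles (rad)
H = rng.normal(size=(60, 3))                        # sensitivity matrix
R = np.diag(rng.uniform(0.5, 2.0, size=60)) * 1e-12 # noise covariance
eps = rng.multivariate_normal(np.zeros(60), R)
z = H @ phi_true + eps

# Normal equation: (H^T R^-1 H) phi_hat = H^T R^-1 z
Rinv = np.linalg.inv(R)
P = np.linalg.inv(H.T @ Rinv @ H)   # covariance of the estimation error
phi_hat = P @ (H.T @ Rinv @ z)

assert np.allclose(phi_hat, phi_true, atol=1e-6)
```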
Details of the redundancy problem in the calculation, the decomposition algorithm and the star vector measurement error are written explicitly in Appendix B.
To explain the method more clearly, the specific steps are as follows:
- ①
Obtain the measurement vectors and the uncalibrated installation matrix, then compute the observation vectors in the uncalibrated remote sensing camera coordinate system;
- ②
Calculate the angle cosine errors;
- ③
For each moment, construct the measurement vector;
- ④
Calculate the measurement sensitivity matrix and its dimension;
- ⑤
Calculate the intermediate quantities defined in Appendix B;
- ⑥
Apply the singular value decomposition (SVD) and determine the number of significant singular values;
- ⑦
Calculate the reduced measurement and sensitivity quantities;
- ⑧
Preserve the leading rows corresponding to the retained singular values;
- ⑨
Substitute the above variables into the measurement equation to obtain the estimate and its covariance;
- ⑩
Calculate the error matrix and the installation matrix.
Repeat steps ① to ⑩ until the error matrix approaches the identity matrix; generally, only one iteration is needed. The definitions of the intermediate variables used above can be found in Appendix B.
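Steps ⑥ to ⑨ depend on quantities defined in Appendix B, which are not reproduced here; the generic SVD rank-truncation idea behind them can nevertheless be sketched (Python, with illustrative data of our own):

```python
import numpy as np

def solve_by_svd(H, z, tol=1e-10):
    """Least-squares solution of z ~= H * phi via SVD, keeping only the
    significant singular values (rank truncation handles redundancy)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))               # significant singular values
    phi = Vt[:r].T @ ((U[:, :r].T @ z) / s[:r])   # minimum-norm solution
    return phi, r

# Redundant sensitivity matrix: the third column is a linear combination
rng = np.random.default_rng(0)
B = rng.normal(size=(40, 2))
H = np.hstack([B, B[:, :1] + B[:, 1:2]])
z = H @ np.array([2e-5, -1e-5, 0.0])

phi_hat, r = solve_by_svd(H, z)
assert r == 2                                # redundancy detected
assert np.allclose(H @ phi_hat, z, atol=1e-12)
```

Truncating the insignificant singular values keeps the batch solution well conditioned when many vector-angle measurements are linearly dependent, which is why the on-orbit processing tolerates redundant, long-term data.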
In this paper, a unified estimation method for the installation matrix is proposed. It can be used to estimate the error matrix M of spacecraft sensors on orbit. First, a statistical model of the measurement error of the star camera is proposed, and a set of attitude-independent measurements, namely the angle cosines of the observation vectors, is obtained. Based on these measurements, the error estimate is obtained. To facilitate numerical calculation, a decomposition algorithm is proposed.
It should be noted that the calculation of the angle cosine error contains the subtraction of two nearly equal quantities, so the result has fewer significant digits than the original vectors; when the difference is of the order of arc-seconds, nearly 6 significant figures are lost. Therefore, double precision should be used in the calculations. In addition, only the first-order terms are retained in the linearization process; the proportional non-linear error can be eliminated by iterative methods. Finally, because the mean of the measurement noise is zero, gross errors can be removed by calculating the variance.
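The significant-digit loss can be demonstrated directly (Python): two angle cosines that differ at the arc-second level agree in roughly their first six significant figures, so their difference retains about six fewer digits than the operands, which is why single precision is inadequate here.

```python
import numpy as np

arcsec = np.pi / (180.0 * 3600.0)
theta = np.deg2rad(30.0)        # nominal vector angle
delta = 1.0 * arcsec            # arc-second-level angle difference

# Double precision keeps many good digits in the cosine difference,
# which matches the first-order theory sin(theta) * delta
diff64 = np.cos(theta) - np.cos(theta + delta)
assert abs(diff64 - np.sin(theta) * delta) < 1e-10

# Single precision: the operands carry ~7 digits, so after subtracting
# two values that agree to ~6 digits only a digit or two survives
diff32 = np.float32(np.cos(theta)) - np.float32(np.cos(theta + delta))
# The float32 rounding error is comparable to the ~2.4e-6 difference itself
assert abs(float(diff32) - diff64) < 2.0 * np.finfo(np.float32).eps
```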
4. Conclusions
In this paper, a novel on-orbit algorithm for installation matrix estimation is proposed. This method can realize the on-orbit calibration of the installation matrix accurately. We can use the ultra-high-precision star camera as the measuring instrument for the exterior orientation elements and thereby realize high-precision geo-positioning. This method has the following characteristics: (1) To realize high-precision Earth observation using a star camera, we must first calculate the installation matrix between the star camera and the remote sensing camera. When the remote sensing camera's exterior orientation elements are determined, the vector angle between the GCT vector and the star vector is deduced to be rotation invariant. (2) To handle the redundancy of the measurement vectors, we propose a batch processing algorithm based on SVD, which is used to process long-term data. This method is not sensitive to short-term data loss and is suitable for on-orbit use. It can make full use of multi-frame data to improve the accuracy of the installation matrix calculation.
This method uses most of the information in the star vectors and the GCT vectors, and it avoids the complicated satellite on-orbit motion compensation that impedes the estimation of the installation matrix. At the same time, a batch processing algorithm based on SVD is added to enhance the on-orbit data processing ability and the robustness of the algorithm. The simulation results demonstrate that the method can effectively improve the accuracy of the installation matrix estimation: over a wide range of sensor noise, the error of the X and Y axes is of the order of 10^-2 arc-seconds. We then achieve high-precision geo-positioning by using the above result. Compared with Wang's on-orbit algorithm, the ground geo-positioning accuracy is improved by more than 20%. We have thus achieved an algorithm that needs only simple on-orbit data, with high calculation accuracy and strong robustness, and that is also suitable for on-orbit multi-frame joint calculation. It provides theoretical support for future photogrammetry without GCPs. Future work will obtain more complete practical results from on-orbit data to verify the actual effect of the algorithm and will apply the algorithm to remote sensing satellites to evaluate its long-term performance under on-orbit conditions.