Open Access
16 April 2020
On-orbit installation matrix calibration and its application on AGRI of FY-4A
Boyang Chen, Xiaoyan Li, Gaoxiong Zhang, Qiang Guo, Yapeng Wu, Baoyong Wang, Fansheng Chen
Abstract

Image navigation is a primary process for on-orbit optical payloads involving environmental disaster monitoring, meteorological observation, and the positioning and tracking of space-aeronautics targets. However, because they are affected by solar illumination and orbital heat flux, as well as shock and vibration during launch, the installation structures between the instruments and satellite platform, especially for geostationary satellites, will inevitably generate a displacement resulting in the reduction of positioning accuracy. During the application of FengYun-4A (FY-4A), it is found that the further away from the subsatellite point, the greater the positioning error of Advanced Geostationary Radiation Imager (AGRI) on FY-4A will be. The positioning error can reach 14 pixels at the Arabian Peninsula in operational images. In addition, compared with orbital and attitude measurement errors, long-term observations show that the installation matrix is likely to be the most significant factor determining the navigation accuracy of AGRI. Therefore, an on-orbit installation matrix calibration approach as well as a high-precision navigation algorithm is proposed to modify the positioning error of AGRI. Experimental results show that the navigation error of the processed images corrected by the proposed method can be reduced to 1.3 pixels, which greatly improves the navigation procession of AGRI. In general, this method could be a supplement to the correction of positioning error for geostationary payloads.

1.

Introduction

Image navigation (IN) is an exceedingly crucial and fundamental prerequisite for meteorological satellite applications involving the monitoring, forecasting, and warning of devastating weather, climate, and natural disasters.1,2 Usually, IN denotes the quantitative mapping relationship between the image points in the image coordinate system and the corresponding objects in the Earth-centered rotating (ECR) coordinate system.3 The characteristics of the sensors, the distortions of the optical system, the alignment matrix from the instrument to the satellite, the attitude derived from star trackers and gyros, and the satellite position and velocity vectors all play a significant role in determining the navigation accuracy of satellites.4–6 In addition, because of spatial thermal elastic deformation, the geometric positioning model of geostationary satellites changes greatly,1,2,7 which inevitably degrades the positioning accuracy. Hence, an efficient method for improving navigation accuracy is very important for geostationary satellites.

Up to now, various efforts have been devoted to on-orbit navigation accuracy analysis and improvement for geostationary satellites.2 For example, by controlling five main factors, namely orbit accuracy, integrity of IN parameters, viewing zone adjustment, beta angle computation, and the solar radiation pressure moment, the Chinese National Satellite Meteorological Center achieved a navigation accuracy of <1.0 infrared pixel at the subsatellite point for the spinning FengYun-2B (FY-2B) by deriving IN parameters from the image center time series.8–10 The landmark matching technique is not adopted in the navigation of FY-2B.8

The Advanced Himawari Imager (AHI) on Himawari-8, the next-generation geostationary meteorological satellite of the Japan Meteorological Agency (JMA), uses the satellite orbit and the imager attitude to precisely identify the longitude and latitude coordinates of the individual pixels in its images.7,11 Because of the misalignment and thermal distortion between the instrument and the satellite, the attitude of the imager is not necessarily the same as that of the satellite. The AHI attitude is therefore adjusted using precise landmarks obtained by pattern matching of coastlines, and the image navigation is accurate to within 1 km.12,13

The absolute positioning accuracy of the GaoFen-4 (GF-4) satellite, China's first civilian high-resolution geostationary optical satellite, depends strongly on the imaging time and area because the misalignments caused by the thermal environment in a high orbit are highly erratic. After calibration of the internal and external errors, comprising detector errors, lens distortion, focal length error, orbit and attitude measurement errors, and camera installation errors, positioning accuracy within 1 pixel using a few ground control points (GCPs) has been achieved for the panchromatic, near-infrared, and intermediate infrared sensors of GF-4.14,15

The Advanced Baseline Imager (ABI), one of the major payloads aboard the three-axis stabilized Geostationary Operational Environmental Satellite-R (GOES-R), observes a defined set of stars for image navigation and registration. In addition to the orbit and attitude drifts, sensor noise,16–19 and uncertainty of the star detection, the navigation errors of the ABI arise primarily from the diurnal thermal elastic deformations and from the motion and associated disturbances of the moving mechanisms.20,21 After on-orbit calibration, the navigation errors of GOES-R using star observations in conjunction with associated data are demonstrated to be around 1 pixel.22

Compared with the detection mission and the resolution of the lightning imager, the Flexible Combined Imager (FCI) aboard the three-axis stabilized Meteosat Third Generation (MTG) satellite has a more stringent requirement for an absolute geolocation accuracy of better than 250 m (1σ) at the subsatellite point.23 The contributors to navigation performance are extremely complicated, including scan pointing knowledge, instrument and platform thermoelastic distortions, attitude and orbit knowledge, line-of-sight (LOS) knowledge, and microvibrations.24,25 Routinely, the associated navigation parameters are estimated with measurements extracted from images, such as landmarks and star observations.25,26

FengYun-4A (FY-4A), launched on December 11, 2016, is the first satellite of the FY-4 series and the second-generation geostationary meteorological satellite of China. The Advanced Geostationary Radiation Imager (AGRI), a 14-channel imager that replaces the Visible and Infrared Spin Scan Radiometer (VISSR) of FengYun-2, is one of the main payloads of FY-4A. Each infrared channel of AGRI has four detectors. The specifications of VISSR, AGRI, ABI, FCI, and AHI are compared in Table 1.

Table 1

Specification comparison of Geostationary Meteorological Satellite Imagers.

| | VISSR | AGRI | ABI | FCI | AHI |
| --- | --- | --- | --- | --- | --- |
| Bands number | 5 | 14 | 16 | 16 | 16 |
| Spatial resolution (km), visible | 1.25 | 0.5 to 1 | 0.5 to 1 | 0.5 to 1 | 0.5 to 1 |
| Spatial resolution (km), near-infrared | N/A | 2 | 1 | 1 | 1 |
| Spatial resolution (km), infrared | 5 | 4 | 2 | 2 | 2 |
| Temporal resolution (min) | 30 | 15 | 5 | 10 | 10 |
| Signal-to-noise ratio (visible) | N/A | 200@ρ100% | 200@ρ100% | 300@ρ100% | 200@ρ100% |
| Noise equivalent temperature difference (infrared) | N/A | 0.2 K@300 K | 0.2 K@300 K | 0.1 K@300 K | 0.2 K@300 K |
| Modulation transfer function | >0.2 | >0.2 | >0.2 | >0.2 | >0.2 |
| Location | 99.5°E | 105°E | 75°W | 0° | 140°E |

During the image radiance comparison with FY-2, we found that the image positioning error of AGRI is too large to meet the demand of high-precision navigation. Specifically, the landmarks from the Global Self-Consistent Hierarchical High-Resolution Shoreline (GSHHS) database reveal a positioning error of 14 pixels at the Arabian Peninsula in the operational images, which considerably affects applications such as Global Space-Based Inter-Calibration System calibration. Therefore, it is imperative to propose an efficient method to improve the navigation accuracy of AGRI.

In this paper, we propose an on-orbit installation matrix calibration approach based on accurate GCPs for the navigation of AGRI and demonstrate its effectiveness with on-orbit observation images in the visible band. First, the positioning model of AGRI based on the GCPs and the associated mathematical methods are elaborated in detail, together with the definition of the positioning error and the assessment of GCP accuracy. Then, a simulation analysis shows that the installation matrix is one of the most significant factors affecting the navigation accuracy of AGRI and must be calibrated. A database including error simulations and on-orbit observation data is used to demonstrate the efficiency of the proposed method. Finally, the conclusion and prospects are given.

2.

Methodology

This section elaborately describes the rigorous geometric imaging model of AGRI, the proposed installation matrix solving algorithm, and the assessment of navigation precision.

Figure 1 shows the overall flowchart of the proposed method. First, after constructing the geometric positioning model, we perform the landmark simulation and calculate the LOS in preparation for the calibration. Specifically, the landmarks are simulated and obtained from the GSHHS database by coastline template matching. Then, the installation matrix is calibrated with the GCPs and the observation ephemeris data. Finally, renavigation and resampling are implemented with the calculated installation matrix. In the following, the proposed method is described in detail; a minimal sketch of this processing order is shown below.
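To make the processing order of Fig. 1 concrete, the following minimal Python sketch strings the steps together. The helper functions passed in (simulate_landmarks, solve_installation_matrix, renavigate, resample) are hypothetical placeholders for the flowchart boxes, not names from the operational FY-4A ground segment.

def calibrate_agri_navigation(image, ephemeris, attitude, gshhs_coastline,
                              simulate_landmarks, solve_installation_matrix,
                              renavigate, resample):
    # Hypothetical top-level pipeline mirroring Fig. 1; the injected helpers
    # stand in for the flowchart steps.
    # 1. Landmark preparation: match simulated GSHHS coastline templates
    #    against the observed image to obtain ground control points (GCPs).
    gcps = simulate_landmarks(image, gshhs_coastline)
    # 2. Installation matrix calibration from the GCPs and the observation
    #    ephemeris/attitude data (Algorithm 1 in Sec. 2.2).
    theta, phi, psi = solve_installation_matrix(gcps, ephemeris, attitude)
    # 3. Renavigation and resampling with the calibrated installation angles.
    navigated = renavigate(image, ephemeris, attitude, (theta, phi, psi))
    return resample(navigated)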

Fig. 1

Flowchart of the proposed method.

JARS_14_2_024507_f001.png

2.1.

Rigorous Imaging Model

As shown in Figs. 2 and 3, the rigorous imaging model of AGRI describes the mapping relationship between the image points in the image coordinate system and the corresponding objects in the ECR coordinate system, which is a crucial foundation for the on-orbit geometric calibration.2,14,15 In general, a remote sensing satellite observes the Earth via a complex optical system, and every pixel of the obtained image corresponds to a mapping area on the surface of the Earth. For high-accuracy applications of remote sensing data, the relationship between the pixel location coordinate (i, j) and the latitude and longitude of the corresponding area on the Earth's surface must be known. The pixel location coordinate is determined by the angles between the LOS and the primary optical axis, that is, (i, j) corresponds to (α, β), where α and β are the stepping angles of the East–West (EW) and North–South (NS) scanning mirrors, respectively.

Fig. 2

Rigorous imaging model of AGRI. O_SCS-X_SCS Y_SCS Z_SCS is the SCS, O_OCS-X_OCS Y_OCS Z_OCS is the OCS, O_ECI-X_ECI Y_ECI Z_ECI is the ECI coordinate system, and O_ECR-X_ECR Y_ECR Z_ECR is the ECR coordinate system.

JARS_14_2_024507_f002.png

Fig. 3

Optical path of AGRI. LOS_0 is the unit exit vector of the optical axis. O_ICS-X_ICS Y_ICS Z_ICS is the ICS and O_SCS-X_SCS Y_SCS Z_SCS is the SCS. θ, ϕ, and ψ are the installation angles between the ICS and the SCS.

JARS_14_2_024507_f003.png

The complete transformation from the image coordinate system of (i,j) to the geocentric geodetic coordinate system (GGCS) of (latitude, longitude) can be expressed by Eqs. (1)–(5) as follows:

Eq. (1)

$$\begin{bmatrix} v_x^{\mathrm{ECR}} \\ v_y^{\mathrm{ECR}} \\ v_z^{\mathrm{ECR}} \end{bmatrix} = A \cdot B \cdot C \cdot D \cdot R_{\mathrm{ORB2ECI}} \cdot R_{\mathrm{SAT2ORB}} \cdot R_{\mathrm{INS2SAT}} \cdot R_{\mathrm{NS}} \cdot R_{\mathrm{EW}} \cdot \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix},$$

Eq. (2)

$$R_{\mathrm{ORB2ECR}} = A \cdot B \cdot C \cdot D \cdot R_{\mathrm{ORB2ECI}},$$
where (0, 1, 0)^T is the unit vector coinciding with the optical axis and (v_x^ECR, v_y^ECR, v_z^ECR)^T is the corresponding direction vector in the ECR coordinate system. A, B, C, and D are the polar motion matrix, sidereal time matrix, nutation matrix, and precession of the equinoxes matrix, respectively. R_ORB2ECR, R_ORB2ECI, R_SAT2ORB, and R_INS2SAT are the transformation matrices from the orbital coordinate system (OCS) to the ECR coordinate system, from the OCS to the Earth-centered inertial (ECI) coordinate system, from the satellite coordinate system (SCS) to the OCS, and from the instrument coordinate system (ICS) to the SCS, respectively. R_ORB2ECI is a function of the six orbital elements (Ω, i, e, a_x, ϖ, M). R_EW and R_NS are the reflection matrices of the EW and NS scanning mirrors, respectively. In particular, the EW scanning mirror of AGRI is closer to the focal plane than the NS scanning mirror, so the order of R_EW and R_NS in Eq. (1) cannot be exchanged. We have

Eq. (3)

$$\begin{cases} (x, y, z)^T = \left(v_x^{\mathrm{ECR}},\; v_y^{\mathrm{ECR}},\; v_z^{\mathrm{ECR}}\right)^T \cdot t \\ \dfrac{x^2 + y^2}{a^2} + \dfrac{z^2}{b^2} = 1, \end{cases}$$

Eq. (4)

$$\begin{cases} L = \arctan(y/x) \\ \tan B = \dfrac{z}{\sqrt{x^2 + y^2}}\left(1 + \dfrac{N e^2 \sin B}{z}\right) \\ H = \dfrac{z}{\sin B} - N\left(1 - e^2\right), \end{cases}$$

Eq. (5)

$$\begin{cases} x = (N + H)\cos B \cos L \\ y = (N + H)\cos B \sin L \\ z = \left[N\left(1 - e^2\right) + H\right]\sin B, \end{cases}$$
where t is the scale factor and (x, y, z) is the corresponding coordinate in the ECR coordinate system. a and b are the semimajor and semiminor axes of the Earth ellipsoid model, namely the International Terrestrial Reference System ellipsoid. (B, L, H) is the coordinate in the GGCS, N is the radius of curvature of the ellipsoid in the prime vertical, and e is the ellipsoid eccentricity.27
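As a concrete illustration of Eqs. (3)–(5), the following Python sketch (numpy) intersects an ECR line of sight with the Earth ellipsoid and converts the intersection point to geodetic coordinates with the iteration of Eq. (4). The ellipsoid constants are WGS-84 values adopted here as an assumption in place of the exact ITRS parameters, and the intersection is traced from the satellite position along the LOS direction, which is the viewing geometry shown in Fig. 2.

import numpy as np

# Ellipsoid constants: WGS-84 values, assumed as a stand-in for the ITRS ellipsoid.
A_EARTH = 6378137.0                      # semimajor axis a (m)
B_EARTH = 6356752.314245                 # semiminor axis b (m)
E2 = 1.0 - (B_EARTH / A_EARTH) ** 2      # first eccentricity squared e^2

def intersect_ellipsoid(sat_pos_ecr, los_ecr):
    """Eq. (3): smallest positive t such that sat_pos + t * los lies on the
    ellipsoid (x^2 + y^2)/a^2 + z^2/b^2 = 1 (the near-side intersection)."""
    p = np.asarray(sat_pos_ecr, dtype=float)
    d = np.asarray(los_ecr, dtype=float)
    w = np.array([1.0 / A_EARTH ** 2, 1.0 / A_EARTH ** 2, 1.0 / B_EARTH ** 2])
    qa = np.sum(w * d * d)
    qb = 2.0 * np.sum(w * p * d)
    qc = np.sum(w * p * p) - 1.0
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        raise ValueError("LOS does not intersect the Earth ellipsoid")
    t = (-qb - np.sqrt(disc)) / (2.0 * qa)   # nearer of the two roots
    return p + t * d

def ecr_to_geodetic(x, y, z, iterations=10):
    """Eq. (4): iterative conversion from ECR (x, y, z) to geodetic (B, L, H)."""
    L = np.arctan2(y, x)
    r = np.hypot(x, y)
    B = np.arctan2(z, r * (1.0 - E2))                      # initial latitude guess
    for _ in range(iterations):
        N = A_EARTH / np.sqrt(1.0 - E2 * np.sin(B) ** 2)   # prime-vertical radius
        B = np.arctan2(z + E2 * N * np.sin(B), r)
    H = r / np.cos(B) - N          # equivalent to the H of Eq. (4) away from the poles
    return np.degrees(B), np.degrees(L), H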

Based on Eq. (1), the observing vector in the ICS can be transformed to the ECR coordinate system with the associated transformation matrices. The rotation matrix R_A2B in Eq. (1) represents the transformation from coordinate system A to coordinate system B and can be described as follows:

Eq. (6)

$$R_{\mathrm{A2B}} = R_z(\psi) \cdot R_x(\theta) \cdot R_y(\phi),$$
where θ, ϕ, and ψ are the three Euler angles from coordinate system A to coordinate system B. Each elementary rotation can be expressed as follows:

Eq. (7)

$$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix},\quad R_y(\phi) = \begin{bmatrix} \cos\phi & 0 & -\sin\phi \\ 0 & 1 & 0 \\ \sin\phi & 0 & \cos\phi \end{bmatrix},\quad R_z(\psi) = \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
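For reference, a minimal numpy sketch of Eqs. (6) and (7) is given below; the signs follow the passive-rotation convention used in the reconstruction above and should be checked against the convention actually adopted for AGRI.

import numpy as np

def rot_x(t):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), np.sin(t)],
                     [0.0, -np.sin(t), np.cos(t)]])

def rot_y(t):
    return np.array([[np.cos(t), 0.0, -np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [np.sin(t), 0.0, np.cos(t)]])

def rot_z(t):
    return np.array([[np.cos(t), np.sin(t), 0.0],
                     [-np.sin(t), np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

def euler_to_matrix(theta, phi, psi):
    """Eq. (6): R_A2B = Rz(psi) * Rx(theta) * Ry(phi)."""
    return rot_z(psi) @ rot_x(theta) @ rot_y(phi)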

During the observation process, the radiation from the targets is captured by the optical system. Theoretically, therefore, there is only one intersection between the LOS in the ECR coordinate system and the Earth's surface, and according to the geometric relationship this intersection point must be the solution of Eq. (3). Based on the analysis above, the instantaneous installation angles of the camera can be calculated from the locations of the observed targets. In this paper, the installation matrix is solved from the obtained GCPs to correct the navigation error of the camera.

2.2.

Installation Matrix Solving Algorithm

In this part, the installation matrix solving algorithm based on the GCPs is described in detail.

In fact, the orbit and attitude measurement errors are random errors, whereas the installation angle error is a systematic error.14,15 Although calibrated in the laboratory before launch, the installation matrix usually varies during operation in orbit. Affected by solar illumination, orbital heat flux, and the shock and vibration during launch, the installation angles are not constant but follow a complex variation tendency.2,14 In addition, because of the strong correlation between the orbit and attitude measurement errors and the installation angles,14 the random orbit and attitude measurement errors are likely to be compensated when the installation angles are calibrated from the obtained image. In this paper, we propose a correction method that calculates the installation angles θ, ϕ, and ψ to correct the final navigation error. We have

Eq. (8)

$$\mathrm{LOS} = \begin{bmatrix} v_x^{\mathrm{ECR}} \\ v_y^{\mathrm{ECR}} \\ v_z^{\mathrm{ECR}} \end{bmatrix} = \frac{P_L - P_S}{\left\lVert P_L - P_S \right\rVert},$$
where P_L = (x_L, y_L, z_L) and P_S = (x_SAT, y_SAT, z_SAT) are the coordinates of the corresponding GCP and of the satellite in the ECR coordinate system, respectively.
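As a small worked example of Eq. (8), the sketch below converts a landmark's geodetic coordinates to ECR with Eq. (5) (again using assumed WGS-84-like constants) and forms the unit reference LOS from the satellite position to that landmark.

import numpy as np

def geodetic_to_ecr(lat_deg, lon_deg, h=0.0):
    """Eq. (5): geodetic (B, L, H) -> ECR (x, y, z); WGS-84-like constants assumed."""
    a, e2 = 6378137.0, 6.69437999014e-3
    B, L = np.radians(lat_deg), np.radians(lon_deg)
    N = a / np.sqrt(1.0 - e2 * np.sin(B) ** 2)
    return np.array([(N + h) * np.cos(B) * np.cos(L),
                     (N + h) * np.cos(B) * np.sin(L),
                     (N * (1.0 - e2) + h) * np.sin(B)])

def reference_los(gcp_lat_deg, gcp_lon_deg, sat_pos_ecr):
    """Eq. (8): unit vector from the satellite to the ground control point."""
    p_l = geodetic_to_ecr(gcp_lat_deg, gcp_lon_deg)
    d = p_l - np.asarray(sat_pos_ecr, dtype=float)
    return d / np.linalg.norm(d)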

According to Eqs. (1) and (8), we have

Eq. (9)

$$W_1 \cdot R_{\mathrm{INS2SAT}} \cdot W_2 \cdot \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \mathrm{LOS},$$
where W_1 = A·B·C·D·R_ORB2ECI·R_SAT2ORB and W_2 = R_NS·R_EW.

Then, Eq. (9) can be presented as

Eq. (10)

$$R_{\mathrm{INS2SAT}} \cdot \begin{bmatrix} v_x^{\mathrm{tr}} \\ v_y^{\mathrm{tr}} \\ v_z^{\mathrm{tr}} \end{bmatrix} = W_1^{-1} \cdot \mathrm{LOS},$$
where [v_x^tr, v_y^tr, v_z^tr]^T = W_2·[0, 1, 0]^T.

As described above, θ, ϕ, and ψ are the three installation angles between AGRI and the satellite platform, and R_INS2SAT is the transformation matrix from the ICS to the SCS. Consequently, we have

Eq. (11)

$$R_{\mathrm{INS2SAT}} = R_z(\psi) R_x(\theta) R_y(\phi) = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} = \begin{bmatrix} \cos\psi\cos\phi + \sin\psi\sin\theta\sin\phi & \sin\psi\cos\theta & -\cos\psi\sin\phi + \sin\psi\sin\theta\cos\phi \\ -\sin\psi\cos\phi + \cos\psi\sin\theta\sin\phi & \cos\psi\cos\theta & \sin\psi\sin\phi + \cos\psi\sin\theta\cos\phi \\ \cos\theta\sin\phi & -\sin\theta & \cos\theta\cos\phi \end{bmatrix},$$
where sin²(γ) + cos²(γ) = 1 for γ = θ, ϕ, ψ.

Then

Eq. (12)

$$\begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} \begin{bmatrix} v_x^{\mathrm{tr}} \\ v_y^{\mathrm{tr}} \\ v_z^{\mathrm{tr}} \end{bmatrix} = \begin{bmatrix} c_{14} \\ c_{24} \\ c_{34} \end{bmatrix},$$
where W_1^{-1}·LOS = [c_14, c_24, c_34]^T.

Based on the equations above, there are only three unknowns in Eq. (12), namely the installation angles θ, ϕ, and ψ. Because Eqs. (3) and (4) are quadratic, there is more than one mathematical solution for the coordinates (x, y, z); since the LOS has only one intersection with the Earth's surface, solutions that contradict the practical viewing geometry are rejected. Theoretically, the more accurate the landmarks adopted in the solving process, the more accurate the calculated θ, ϕ, and ψ will be. Because only a small deviation exists between the real installation matrix and the theoretical one, a gradient descent method is adopted to search for the optimal solution of the three variables. The pseudocode of the solving algorithm is given in detail in Algorithm 1.

Algorithm 1

Installation matrix solving algorithm.

Input: the laboratory-calibrated installation angles θ^0, ϕ^0, ψ^0.
Output: the calculated installation angles θ̂, ϕ̂, ψ̂.
1. Initialize the installation matrix with θ^0, ϕ^0, ψ^0:
$$R_{\mathrm{INS2SAT}}^{0} = \begin{bmatrix} c_{11}^{0} & c_{12}^{0} & c_{13}^{0} \\ c_{21}^{0} & c_{22}^{0} & c_{23}^{0} \\ c_{31}^{0} & c_{32}^{0} & c_{33}^{0} \end{bmatrix}.$$
2. Calculate [v_x^tr, v_y^tr, v_z^tr]^T and [c_14, c_24, c_34]^T from the observing time, orbit, and attitude.
3. Calculate the residuals with the current estimate R_INS2SAT^τ:
$$f_1^{\tau} = v_x^{\mathrm{tr}} c_{11}^{\tau} + v_y^{\mathrm{tr}} c_{12}^{\tau} + v_z^{\mathrm{tr}} c_{13}^{\tau} - c_{14},\quad f_2^{\tau} = v_x^{\mathrm{tr}} c_{21}^{\tau} + v_y^{\mathrm{tr}} c_{22}^{\tau} + v_z^{\mathrm{tr}} c_{23}^{\tau} - c_{24},\quad f_3^{\tau} = v_x^{\mathrm{tr}} c_{31}^{\tau} + v_y^{\mathrm{tr}} c_{32}^{\tau} + v_z^{\mathrm{tr}} c_{33}^{\tau} - c_{34}.$$
4. Calculate the total error $F^{\tau} = \frac{1}{2}\sum_{i=1}^{3}\left(f_i^{\tau}\right)^2$.
5. While $F^{\tau} > \varepsilon$ and $\tau \in [0, N_{\max} - 1]$, do
$$\frac{\partial F^{\tau}}{\partial \theta} = f_1^{\tau}\frac{\partial f_1^{\tau}}{\partial \theta} + f_2^{\tau}\frac{\partial f_2^{\tau}}{\partial \theta} + f_3^{\tau}\frac{\partial f_3^{\tau}}{\partial \theta},\quad \frac{\partial F^{\tau}}{\partial \phi} = f_1^{\tau}\frac{\partial f_1^{\tau}}{\partial \phi} + f_2^{\tau}\frac{\partial f_2^{\tau}}{\partial \phi} + f_3^{\tau}\frac{\partial f_3^{\tau}}{\partial \phi},\quad \frac{\partial F^{\tau}}{\partial \psi} = f_1^{\tau}\frac{\partial f_1^{\tau}}{\partial \psi} + f_2^{\tau}\frac{\partial f_2^{\tau}}{\partial \psi} + f_3^{\tau}\frac{\partial f_3^{\tau}}{\partial \psi},$$
$$\theta^{\tau+1} = \theta^{\tau} - \lambda\frac{\partial F^{\tau}}{\partial \theta},\quad \phi^{\tau+1} = \phi^{\tau} - \lambda\frac{\partial F^{\tau}}{\partial \phi},\quad \psi^{\tau+1} = \psi^{\tau} - \lambda\frac{\partial F^{\tau}}{\partial \psi};$$
  rerun steps 2 to 4 using θ^{τ+1}, ϕ^{τ+1}, and ψ^{τ+1};
  end while.
6. The final installation angles are θ̂ = θ^{τ+1}, ϕ̂ = ϕ^{τ+1}, ψ̂ = ψ^{τ+1}.
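A minimal numpy sketch of Algorithm 1 follows. It assumes that the per-GCP quantities v_tr (= W_2·[0, 1, 0]^T) and c4 (= W_1^{-1}·LOS) have already been computed from the observing time, orbit, attitude, and mirror angles; it uses numerical central-difference gradients in place of the analytic derivatives; and the step size, threshold, and iteration limit are illustrative values rather than the operational ones.

import numpy as np

def _r_ins2sat(theta, phi, psi):
    """R = Rz(psi) @ Rx(theta) @ Ry(phi), the matrix of Eqs. (6), (7), and (11)."""
    ct, st = np.cos(theta), np.sin(theta)
    cf, sf = np.cos(phi), np.sin(phi)
    cp, sp = np.cos(psi), np.sin(psi)
    rz = np.array([[cp, sp, 0.0], [-sp, cp, 0.0], [0.0, 0.0, 1.0]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, ct, st], [0.0, -st, ct]])
    ry = np.array([[cf, 0.0, -sf], [0.0, 1.0, 0.0], [sf, 0.0, cf]])
    return rz @ rx @ ry

def solve_installation_angles(v_tr, c4, theta0, phi0, psi0,
                              lam=None, eps=1e-14, n_max=20000, delta=1e-7):
    """Gradient-descent estimate of the installation angles (Algorithm 1).
    v_tr: (n, 3) array of W2 @ [0, 1, 0]^T per GCP; c4: (n, 3) array of
    W1^{-1} @ LOS per GCP; theta0, phi0, psi0: laboratory-calibrated angles (rad)."""
    v_tr = np.atleast_2d(np.asarray(v_tr, dtype=float))
    c4 = np.atleast_2d(np.asarray(c4, dtype=float))
    if lam is None:
        lam = 0.01 / len(v_tr)              # illustrative step size, scaled by GCP count

    def total_error(angles):
        f = v_tr @ _r_ins2sat(*angles).T - c4    # residuals f_i for every GCP
        return 0.5 * np.sum(f * f)               # F = 1/2 * sum of squared residuals

    angles = np.array([theta0, phi0, psi0], dtype=float)
    for _ in range(n_max):
        if total_error(angles) <= eps:
            break
        grad = np.zeros(3)
        for i in range(3):                       # central-difference gradient
            step = np.zeros(3)
            step[i] = delta
            grad[i] = (total_error(angles + step) -
                       total_error(angles - step)) / (2.0 * delta)
        angles -= lam * grad                     # gradient-descent update
    return tuple(angles)                         # (theta_hat, phi_hat, psi_hat)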

2.3.

Assessment of Navigation Precision

In this paper, the defined angular errors of the LOS are adopted to represent the navigation error of the satellite. The angular error between the calculated observing vector and the real observing vector is calculated as

Eq. (13)

$$A_{\mathrm{err}} = \arccos\left(\mathrm{LOS}' \cdot \mathrm{LOS}\right),$$
where LOS is the real observing vector derived from the coordinates of the landmarks and LOS′ is the calculated observing vector obtained via the vector transformation equation from the scanning mirror angles. A_err is the intersection angle between LOS′ and LOS.

According to the calculated A_err, the average of the angular errors can be calculated to assess the navigation error:

Eq. (14)

$$A_{\mathrm{err\_avg}} = \frac{1}{N}\sum_{i=1}^{N} A_{\mathrm{err}}(i),$$
where A_err_avg is the calculated average of the intersection angles and N is the number of landmarks.

Based on the parameters of the instrument, the navigation error measured in pixels can be expressed as

Eq. (15)

$$PE(\mathrm{pixel}) = A_{\mathrm{err\_avg}} / \mathrm{IFOV},$$
where IFOV is the instantaneous field of view of the AGRI.
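The error metric of Eqs. (13)–(15) can be sketched as follows; the IFOV (in radians) is supplied by the caller, e.g., the visible-band IFOV of AGRI, and is deliberately not hard-coded here.

import numpy as np

def navigation_error_pixels(los_calc, los_real, ifov_rad):
    """Eqs. (13)-(15): mean angular error between the calculated and the real
    observing vectors, expressed in pixels through the instantaneous field of view."""
    los_calc = np.atleast_2d(np.asarray(los_calc, dtype=float))
    los_real = np.atleast_2d(np.asarray(los_real, dtype=float))
    los_calc = los_calc / np.linalg.norm(los_calc, axis=1, keepdims=True)
    los_real = los_real / np.linalg.norm(los_real, axis=1, keepdims=True)
    dots = np.clip(np.sum(los_calc * los_real, axis=1), -1.0, 1.0)
    a_err = np.arccos(dots)               # Eq. (13), per landmark
    a_err_avg = a_err.mean()              # Eq. (14)
    return a_err_avg / ifov_rad           # Eq. (15), navigation error in pixels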

3.

Experimental Results and Discussion

In order to demonstrate the effectiveness of the proposed method, simulated data and on-orbit observation images are adopted to calculate the installation matrix and the positioning error of AGRI. First, simulated measurement errors of orbit, attitude, and installation angles are used to demonstrate the considerable effect of the installation angles on the navigation accuracy. Then, simulated control points with random and systematic noise are employed to show the stability of the solving algorithm. Subsequently, more than one year of on-orbit observation images are used to analyze the navigation accuracy of AGRI. In this paper, the experimental results at 13:00 (satellite local time) on October 21, 2018, are selected to show the navigation accuracy of the proposed method, because sunlight at 13:00 (satellite local time) has little effect on the assessment of positioning precision.

3.1.

Simulation Experiments

Based on the whole transformation process from the image coordinate (i, j) to the geodetic coordinate (latitude, longitude), we analyzed the potential errors during navigation. As shown in Eq. (1), A, B, C, and D are the polar motion matrix, sidereal time matrix, nutation matrix, and precession of the equinoxes matrix, respectively, which are fixed coefficients during the operation of AGRI. R_ORB2ECI is a function of the orbital elements, and R_SAT2ORB is a function of the satellite attitude, which is measured by the star sensor, Earth sensor, and gyroscope mounted on the satellite. R_INS2SAT is the installation matrix between the instrument and the satellite platform. For FY-4A, the orbit precision is 100 m and the attitude precision is 5 arc seconds, and experience from the GOES-8 geostationary meteorological satellite shows that the variation of the installation matrix can reach 1000 μrad.28 In order to analyze the influences of the orbit measurement error, attitude error, and installation matrix error on navigation accuracy, all these parameters are amplified by a factor of three in the experiments. Figures 4–6 show the variation tendency of the navigation errors with the errors of orbit, attitude, and installation matrix.

Fig. 4

Navigation error caused by orbit measurement errors.

JARS_14_2_024507_f004.png

Fig. 5

Navigation error caused by attitude measurement errors.

JARS_14_2_024507_f005.png

Fig. 6

Navigation error caused by installation matrix errors.

JARS_14_2_024507_f006.png

It should be noted from Fig. 4 that even with an orbit error of 300 m, three times the actual orbit precision, the positioning error is <0.5 visible pixel. As shown in Fig. 5, only three lines are visible because the x and y directions are symmetrical; even with an attitude error of 15 arc seconds, three times the actual attitude precision, the positioning error is less than four visible pixels. Similarly, only three lines are visible in Fig. 6. However, the errors of the installation matrix can lead to a huge positioning error that can never be neglected. According to the experimental results above, it is safe to conclude that, compared with the orbit and attitude errors, the bias of the installation angles has a considerably larger effect on the navigation accuracy of AGRI.

Here, another simulation experiment is designed to analyze the precision and robustness of the proposed algorithm. The simulation is conducted with the following steps: (1) randomly generate 30, 50, 100, 300, 500, 1000, and 2000 control points from the observation images and calculate the theoretical observing vectors with the orbital measurement data, the attitude, and the initial installation matrix of AGRI; (2) add random noise (mean = 0; variance = 0, 0.2, 0.4, 0.6, and 0.8) and systematic noise (mean = 0, 0.2, 0.4, and 0.6; variance = 0) to the corresponding ground point sets; (3) calculate the actual observing vectors using the same orbital measurement data, attitude, and initial installation matrix as in step (1); and (4) calculate the navigation error with the proposed method described in Sec. 2.
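A sketch of how such an experiment can be organized is shown below. The rotation builder rot and the installation-angle solver are the sketches from Sec. 2 (or any equivalent implementations) passed in as arguments; the noise is interpreted as a pixel-equivalent perturbation scaled by the IFOV and added to the reference vectors, which is an assumption about where the perturbation enters; and the counts and noise levels mirror the values listed above.

import numpy as np

def run_noise_experiment(rot, solver, ifov_rad, true_angles, init_angles,
                         counts=(30, 50, 100, 300, 500, 1000, 2000),
                         random_sigmas=(0.0, 0.2, 0.4, 0.6, 0.8),
                         systematic_means=(0.0, 0.2, 0.4, 0.6),
                         seed=0):
    """Organize the precision/robustness simulation of Sec. 3.1.
    rot(theta, phi, psi) builds R_INS2SAT; solver(v_tr, c4, theta0, phi0, psi0)
    returns the estimated installation angles."""
    rng = np.random.default_rng(seed)
    r_true = rot(*true_angles)
    results = {}
    for n in counts:
        # Step (1): random unit vectors standing in for the per-GCP v_tr, and the
        # corresponding noise-free reference vectors c4 = R_true @ v_tr.
        v_tr = rng.normal(size=(n, 3))
        v_tr /= np.linalg.norm(v_tr, axis=1, keepdims=True)
        c4_true = v_tr @ r_true.T
        # Step (2): random noise (zero mean) and systematic noise (zero variance).
        cases = [("random", 0.0, s) for s in random_sigmas] + \
                [("systematic", m, 0.0) for m in systematic_means]
        for kind, mean, sigma in cases:
            noisy = c4_true + rng.normal(mean, sigma, size=c4_true.shape) * ifov_rad
            # Steps (3)-(4): solve the installation angles and assess the error.
            angles = solver(v_tr, noisy, *init_angles)
            los_calc = v_tr @ rot(*angles).T
            dots = np.clip(np.sum(los_calc * c4_true, axis=1), -1.0, 1.0)
            results[(kind, n, mean, sigma)] = np.arccos(dots).mean() / ifov_rad
    # results: navigation error in pixels for every (noise type, count, level).
    return results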

As shown in Fig. 7, the navigation error caused by the control points with random noise clearly decreases as the number of control points increases. More importantly, the navigation error no longer decreases once the number of control points reaches about 500 because the solving algorithm has become stable. It should also be noted that the navigation error can be <0.5 pixel with enough control points despite the large random noise. However, as shown in Fig. 8, the navigation error derived from the control points with systematic noise cannot be reduced by using more control points, although it remains stable once enough control points are used. Consequently, it can be concluded that the installation matrix solving algorithm is stable and sufficiently precise even in the presence of random and systematic noise.

Fig. 7

Algorithm precision analysis with random noise.

JARS_14_2_024507_f007.png

Fig. 8

Algorithm precision analysis with systematic noise.

JARS_14_2_024507_f008.png

3.2.

On-Orbit Experimental Results

In this part, different numbers of GCPs are adopted to demonstrate the effectiveness of the proposed installation matrix calibration method and to assess the influence of GCP errors on the navigation accuracy of AGRI. Images at 13:00 (satellite local time) on October 21, 2018, in five different regions are selected to show the navigation accuracy of AGRI.

First, all 26 control points in Table 2 are used to calculate the theoretical installation matrix. Then, the navigation results derived from 5 to 26 control points are calculated correspondingly. As shown in Fig. 9, the points on the red line represent the navigation errors assessed with all 26 GCPs: the installation angles are first calculated with the number of GCPs given on the horizontal axis, and then all 26 GCPs are put into the resulting geometric imaging model (with the calculated installation angles) to compute the navigation error. Similarly, the points on the black line represent the navigation errors assessed with only the involved GCPs (not all 26): the installation angles are calculated with those GCPs, and the same GCPs are then used to assess the navigation error. All navigation errors are calculated with Eqs. (13) and (14). From the trends of the red and black lines in Fig. 9, it is evident that the accuracy of the GCPs affects the assessment of the navigation accuracy. The points on the red line fluctuate only slightly and remain essentially flat, which indicates that there is a large error in some of the GCPs and that this error is stable under different installation angles. In contrast, the black line shows that fewer GCPs can yield better navigation results, which means the effects of GCPs with large errors are eliminated.

Table 2

Navigation precision with quality control and without quality control.

| No. | Latitude (true) | Latitude (26 GCPs) | Latitude (11 GCPs) | Longitude (true) | Longitude (26 GCPs) | Longitude (11 GCPs) | Aerr(i) (pixel, 26 GCPs) | Aerr(i) (pixel, 11 GCPs) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 17.027 | 17.022 | 17.021 | 123.581 | 123.572 | 123.562 | 0.960 | 1.095 |
| 2 | 32.627 | 32.612 | 32.609 | 137.790 | 137.778 | 137.774 | 1.218 | 1.3 |
| 3 | 22.421 | 22.434 | 22.431 | 68.978 | 68.968 | 68.973 | 1.25 | 0.838 |
| 4 | 9.297 | 9.296 | 9.296 | 119.935 | 119.947 | 119.945 | 1.259 | 1.068 |
| 5 | 32.480 | 32.466 | 32.463 | 133.873 | 133.858 | 133.854 | 1.302 | 1.398 |
| 6 | 21.795 | 21.791 | 21.797 | 114.153 | 114.168 | 114.166 | 1.553 | 1.226 |
| 7 | 14.409 | 14.406 | 14.407 | 129.357 | 129.373 | 129.37 | 1.613 | 1.342 |
| 8 | 45.275 | 45.249 | 45.245 | 132.760 | 132.745 | 132.74 | 1.642 | 1.903 |
| 9 | 47.683 | 47.653 | 47.648 | 132.454 | 132.439 | 132.434 | 1.807 | 2.088 |
| 10 | 25.803 | 25.824 | 25.803 | 57.3316 | 57.320 | 57.332 | 1.837 | 0.111 |
| 11 | 43.481 | 43.516 | 43.513 | 135.019 | 135.048 | 135.044 | 2.262 | 2.045 |
| 12 | 35.059 | 35.085 | | 136.781 | 136.771 | | 2.678 | |
| 13 | 31.463 | 31.452 | | 131.139 | 131.167 | | 2.756 | |
| 14 | 23.649 | 23.620 | | 58.504 | 58.552 | | 2.996 | |
| 15 | 35.131 | 35.110 | | 129.126 | 129.079 | | 3.359 | |
| 16 | 11.983 | 11.953 | | 50.804 | 50.781 | | 3.485 | |
| 17 | 23.825 | 23.815 | | 120.200 | 120.162 | | 3.514 | |
| 18 | 11.765 | 11.761 | | 133.912 | 133.862 | | 4.333 | |
| 19 | 8.822 | 8.858 | | 115.090 | 115.073 | | 4.468 | |
| 20 | 25.531 | 25.571 | | 113.482 | 113.512 | | 4.533 | |
| 21 | 18.513 | 18.465 | | 120.602 | 120.594 | | 4.877 | |
| 22 | 35.261 | 35.247 | | 136.827 | 136.896 | | 5.465 | |
| 23 | 22.517 | 22.579 | | 59.790 | 59.737 | | 5.679 | |
| 24 | 24.856 | 24.916 | | 66.67 | 66.678 | | 5.781 | |
| 25 | 16.354 | 16.389 | | 123.008 | 122.961 | | 6.168 | |
| 26 | 35.594 | 35.522 | | 129.456 | 129.391 | | 6.209 | |
| PE (pixel) | | | | | | | 3.19 | 1.31 |

Fig. 9

Navigation precision with quality control and without quality control.

JARS_14_2_024507_f009.png

Since the random error of sampling is 0.5 pixel, navigation errors within 0.5 pixel (calculated with the corresponding GCPs) are considered random errors, which can be suppressed by means of a mean filter. Therefore, as the accuracy of the GCPs is controlled to within 0.5 pixel, the θ, ϕ, and ψ solved by the proposed method tend to be efficient and stable. In the visible band, the navigation error reaches a minimum of 1.3 pixels. However, GCPs with a location error of more than 0.5 pixel cannot be used to solve for θ, ϕ, and ψ because their location errors have a detrimental effect on the navigation result. Obviously, the more GCPs that are used under the 0.5-pixel quality control, the more robust the solving results are.
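The 0.5-pixel quality control described above can be sketched as an iterative rejection loop. This is only an illustration reusing the rotation-builder and solver sketches from Sec. 2 and an assumed per-GCP residual test based on Eq. (13); it is not the operational FY-4A procedure.

import numpy as np

def solve_with_quality_control(rot, solver, v_tr, c4, init_angles, ifov_rad,
                               threshold_pix=0.5, max_rounds=5, min_gcps=5):
    """Iteratively reject GCPs whose residual exceeds the 0.5-pixel quality-control
    threshold and re-solve the installation angles with the remaining GCPs.
    min_gcps is an arbitrary safeguard chosen for this sketch."""
    v_tr = np.atleast_2d(np.asarray(v_tr, dtype=float))
    c4 = np.atleast_2d(np.asarray(c4, dtype=float))
    keep = np.ones(len(v_tr), dtype=bool)
    angles = tuple(init_angles)
    for _ in range(max_rounds):
        angles = solver(v_tr[keep], c4[keep], *angles)
        # Per-GCP angular residual (Eq. (13)) against the current solution, in pixels.
        los_calc = v_tr @ rot(*angles).T
        dots = np.clip(np.sum(los_calc * c4, axis=1), -1.0, 1.0)
        resid_pix = np.arccos(dots) / ifov_rad
        new_keep = resid_pix <= threshold_pix
        if new_keep.sum() < min_gcps or np.array_equal(new_keep, keep):
            break                             # too few GCPs left, or the set is stable
        keep = new_keep
    return angles, keep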

In order to show the experimental results more clearly, five typical areas, including the Arabian Peninsula, are selected in Fig. 10 to compare the navigation precision of the operational image and of the image processed by the proposed method. The five places are deliberately distributed in the North–East, North–West, South–West, South–East, and middle of the image, so they reveal the navigation precision of the whole image. As shown in Fig. 11, the GSHHS landmarks nearly coincide with the observed landmarks; specifically, the location errors between the landmarks are around 1 pixel, which is approximately consistent with the numerical measurement results.

Fig. 10

Navigation precision verification of the selected five typical areas.

JARS_14_2_024507_f010.png

Fig. 11

Comparison of navigation precision. (a) Arabian Peninsula from the operational image. (b) Arabian Peninsula from ours. (c) Suzu in Japan from the operational image. (d) Suzu in Japan from ours. (e) Caluula in Africa from the operational image. (f) Caluula in Africa from ours. (g) Kolaka Utara in Indonesia from the operational image. (h) Kolaka Utara in Indonesia from ours. (i) Mornington Peninsula in Australia from the operational image. (j) Mornington Peninsula in Australia from ours.

JARS_14_2_024507_f011.png

4.

Conclusion

In this work, in order to improve the navigation precision of AGRI, a new on-orbit installation matrix calibration approach is proposed. Based on GCPs, the rigorous imaging model and the associated installation matrix solving method are presented in detail. Simulation experiments based on the actual parameters of FY-4A validate the effectiveness and stability of the proposed method. More importantly, when applied to the on-orbit observation images of AGRI, the newly calibrated installation matrix parameters decrease the positioning error to around 1.3 pixels, compared with 14 pixels before correction. Although this installation matrix calibration approach is proposed for the navigation of AGRI, it is also applicable to high-precision positioning research for other satellites because of the imaging similarity of different cameras.

Acknowledgments

Boyang Chen and Xiaoyan Li contributed equally to this paper. This work was supported by the National Natural Science Foundation of China (Grant No. 41375023). The authors declare no conflicts of interest.

References

1. H. P. Zhang et al., "Accurate star centroid detection for the advanced geosynchronous radiation imager of Fengyun-4A," IEEE Access 6, 7987–7999 (2018). https://doi.org/10.1109/ACCESS.2018.2798625
2. X. Y. Li et al., "A correction method for thermal deformation positioning error of geostationary optical payloads," IEEE Trans. Geosci. Remote Sens. 57, 7986–7994 (2019). https://doi.org/10.1109/TGRS.2019.2917716
3. G. W. Rosborough, D. G. Baldwin, and W. J. Emery, "Precise AVHRR image navigation," IEEE Trans. Geosci. Remote Sens. 32(3), 644–657 (1994). https://doi.org/10.1109/36.297982
4. L. Yang, "Automated landmark matching of FY-2 visible imagery with its applications to the on-orbit image navigation performance analysis and improvements," Chin. J. Electron. 23(3), 649–654 (2014). https://doi.org/10.1002/cta.1881
5. Y. Liu, L. Yang, and F. S. Chen, "Multispectral registration method based on stellar trajectory fitting," Opt. Quantum Electron. 50(4), 189 (2018). https://doi.org/10.1007/s11082-018-1458-4
6. X. Y. Li et al., "Improved distortion correction method and applications for large aperture infrared tracking cameras," Infrared Phys. Technol. 98, 82–88 (2019). https://doi.org/10.1016/j.infrared.2019.02.009
7. X. J. Xiong et al., "Himawari-8/AHI latest performance of navigation and calibration," Proc. SPIE 9881, 98812J (2016). https://doi.org/10.1117/12.2240200
8. F. Lu et al., "Attempts to improve GOES image navigation," in Proc. 8th Int. Winds Workshop (2006).
9. F. Lu, X. H. Zhang, and J. M. Xu, "Image navigation for the FY2 geosynchronous meteorological satellite," J. Atmos. Oceanic Technol. 25(7), 1149–1165 (2008). https://doi.org/10.1175/2007JTECHA964.1
10. J. Xu, F. Lu, and Q. Zhang, "Automatic navigation of FY-2 geosynchronous meteorological satellite images," in Proc. 6th Int. Winds Workshop (2002).
11. C. Da, "Preliminary assessment of the Advanced Himawari Imager (AHI) measurement onboard Himawari-8 geostationary satellite," Remote Sens. Lett. 6(8), 637–646 (2015). https://doi.org/10.1080/2150704X.2015.1066522
12. A. Okuyama et al., "Preliminary validation of Himawari-8/AHI navigation and calibration," Proc. SPIE 9607, 96072E (2015). https://doi.org/10.1117/12.2188978
13. F. F. Yu et al., "Evaluation of Himawari-8 AHI geospatial calibration accuracy using SNPP VIIRS SNO data," in IEEE Int. Geosci. and Remote Sens. Symp. (2016). https://doi.org/10.1109/IGARSS.2016.7729755
14. M. Wang et al., "On-orbit geometric calibration and geometric quality assessment for the high-resolution geostationary optical satellite GaoFen4," ISPRS J. Photogramm. Remote Sens. 125, 63–77 (2017). https://doi.org/10.1016/j.isprsjprs.2017.01.004
15. M. Wang et al., "A new on-orbit geometric self-calibration approach for the high-resolution geostationary optical satellite GaoFen4," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 11(5), 1670–1683 (2018). https://doi.org/10.1109/JSTARS.2018.2814205
16. J. X. Jia et al., "High spatial resolution shortwave infrared imaging technology based on time delay and digital accumulation method," Infrared Phys. Technol. 81, 305–312 (2017). https://doi.org/10.1016/j.infrared.2017.01.017
17. J. X. Jia et al., "Destriping algorithms based on statistics and spatial filtering for visible-to-thermal infrared pushbroom hyperspectral imagery," IEEE Trans. Geosci. Remote Sens. 57(6), 4077–4091 (2019). https://doi.org/10.1109/TGRS.2018.2889731
18. Z. Y. Hu et al., "A method for the characterization of intra-pixel response of infrared sensor," Opt. Quantum Electron. 51, 74 (2019). https://doi.org/10.1007/s11082-019-1790-3
19. X. Y. Cheng et al., "A relative radiometric correction method for airborne SWIR hyperspectral image using the side-slither technique," Opt. Quantum Electron. 51, 105 (2019). https://doi.org/10.1007/s11082-019-1816-x
20. K. Ellis et al., "GOES-R advanced baseline imager image navigation and registration," in 5th GOES Users' Conf. (2008).
21. S. Houchin et al., "Image navigation and registration performance assessment evaluation tools for GOES-R ABI and GLM," in IEEE Int. Geosci. and Remote Sens. Symp., 2074–2077 (2017). https://doi.org/10.1109/IGARSS.2017.8127390
22. J. L. Carr, D. Herndon, and S. Reehl, "Verifying the accuracy of geostationary weather satellite image navigation and registration," in AGU Fall Meeting Abstracts (2012).
23. H. Kinter, D. Just, and B. Mullet, "Meteosat Third Generation navigation approach," in Proc. 22nd Int. Symp. Space Flight Dyn. (2011).
24. P. Righetti and H. Kinter, "Feasibility of Meteosat Third Generation collocation using a single S-band tracking station and large inclination maneuvers," J. Aerosp. Eng. 3(2), 54 (2011). https://doi.org/10.7446/jaesa.0302.05
25. T. Chambon et al., "On-ground evaluation of MTG image navigation and registration (INR) performances," Proc. SPIE 8889, 88891J (2013). https://doi.org/10.1117/12.2028721
26. H. Madani, J. L. Carr, and C. Schoeser, "Image registration using AutoLandmark," in IEEE Int. Geosci. and Remote Sens. Symp., 3778–3781 (2004). https://doi.org/10.1109/IGARSS.2004.1369945
27. A. A. Kamel and K. M. Ong, "Satellite camera attitude determination and image navigation by means of earth edge and landmark measurement," U.S. Patent 6,023,291 A (2000).
28. K. A. Kelly, J. F. Hudson, and N. Pinkine, "GOES-8 and -9 image navigation and registration operations," Proc. SPIE 2812 (1996). https://doi.org/10.1117/12.254123

Biography

Boyang Chen received his BS degree from the University of Science and Technology of China in 2003 and his PhD from the Chinese Academy of Sciences in 2008. He is currently a member of the faculty of the National Satellite Meteorological Center, Beijing, China. His research interests include calibration and validation, image processing, and image assessment. He is the leader of the Calibration and Validation System of the FY-4A ground segment.

Xiaoyan Li received his BS degree in mechanical design, manufacturing, and automation from Northwest A&F University, Xi'an, China, in 2016. He is currently pursuing his PhD in electronic circuits and systems at the Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China. His current research interests include accurate navigation and geometric calibration of remote sensing satellites.

Gaoxiong Zhang received his BS degree from Huazhong University of Science and Technology in 2010 and his master's degree from the Shanghai Academy of Spaceflight Technology in 2013. He is currently a faculty member of the Shanghai Institute of Satellite Engineering, Shanghai, China. At present, he is mainly engaged in research on large aircraft system design technology.

Qiang Guo received his MS degree in signal and information processing and his PhD in electronic science and technology from Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China, in 2001 and 2003, respectively. He is currently the second chief designer of the ground-segment systems for both Fengyun-2 and Fengyun-4 serial satellites. His research interests include on-orbit radiometric calibration, geolocation, and specification evaluation for spaceborne instruments.

Yapeng Wu received his bachelor's degree from North University of China in 2004 and his master's degree from Northwest University in 2008. From October 2008 to March 2012, he was with the Space Laser Information Technology Research Center of the Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences. He is mainly engaged in satellite remote sensing data processing, robot vision navigation, and other related research.

Baoyong Wang received his BS degree in measurement and control technology and instrumentation from Changchun University of Science and Technology in 2008 and his MS degree in measuring and testing technology from Zhejiang University in 2011. Since 2011, he has worked on AGRI as an optical engineer at the Shanghai Institute of Technical Physics of the Chinese Academy of Sciences. His research interests include aerospace camera optical design, stray light suppression, and optical inspection.

Fansheng Chen received his BS degree in optoelectronic information engineering and his PhD in physical electronics from Shandong University, Jinan, China, in 2002, and Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai, China, in 2007, respectively. His research interests include the design of spatial high-resolution remote sensing and detection payloads, high-speed and low noise information acquisition technology, and infrared dim small target detection technology.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Boyang Chen, Xiaoyan Li, Gaoxiong Zhang, Qiang Guo, Yapeng Wu, Baoyong Wang, and Fansheng Chen "On-orbit installation matrix calibration and its application on AGRI of FY-4A," Journal of Applied Remote Sensing 14(2), 024507 (16 April 2020). https://doi.org/10.1117/1.JRS.14.024507
Received: 26 December 2019; Accepted: 3 April 2020; Published: 16 April 2020
Keywords: Calibration; Satellites; Meteorological satellites; Satellite imaging; Satellite navigation systems; Image processing; Imaging systems