Article

Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode

1 School of Electronic Science and Engineering, Nanjing University, Nanjing 210023, China
2 Nanjing Research Institute of Electronics Technology, Nanjing 210039, China
* Author to whom correspondence should be addressed.
Submission received: 18 December 2017 / Revised: 18 January 2018 / Accepted: 30 January 2018 / Published: 3 February 2018
(This article belongs to the Special Issue First Experiences with Chinese Gaofen-3 SAR Sensor)

Abstract

Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides the sliding spotlight mode for the first time. Sliding spotlight is a novel mode that achieves imaging with both high resolution and a wide swath. Several key technologies for high-resolution spaceborne sliding spotlight SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for the sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated by simulations and measured data.

1. Introduction

Gaofen-3 (GF-3), launched on 10 August 2016, is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, mainly used in the fields of ocean surveillance, land observation, disaster reduction, and water conservation [1]. With 12 imaging modes, GF-3 not only covers the traditional stripmap and scan modes, but also provides the sliding spotlight mode for the first time [2]. The resolution of the sliding spotlight mode in GF-3 is 1 m, the highest of any C-band multi-polarization SAR satellite in the world.
Stripmap mode and spotlight mode are two of the most common SAR operating modes [3]. Stripmap mode can image large areas continuously, but its azimuth resolution cannot be increased arbitrarily. Spotlight mode steers the antenna so that the beam stays fixed on a certain ground area, lengthening the coherent processing interval (CPI) and thus achieving higher azimuth resolution, but the imaging extent is limited by the antenna beam width. In GF-3, sliding spotlight mode is a novel SAR mode that increases the azimuth resolution by controlling the speed at which the antenna footprint moves over the ground; its imaging area is larger than that of spotlight mode, and its resolution is higher than that of stripmap mode with the same antenna size [4]. Sliding spotlight mode is thus a good compromise between high-resolution and large-area imaging. Advanced SAR systems such as PAMIR and TerraSAR-X have adopted this imaging mode [5].
In this paper, the key techniques of spaceborne sliding spotlight SAR such as satellite imaging geometrical parameters and ambiguity elimination are analyzed, and two SAR algorithms—Convolution BackProjection (CBP) and Polar Format Algorithm (PFA)—are studied.
The backprojection algorithm comes from the field of computer-aided tomography (CAT) [6] and is a point-by-point image reconstruction algorithm with unique advantages [7,8]: there are no restrictions on the size of the imaging scene, any part of the scene of interest can be partially magnified, and no frequency-domain interpolation is needed to ensure image quality. The biggest drawback of the CBP algorithm, however, is its large computational load. Fast CBP algorithms follow two directions: dividing the aperture into sub-apertures, or dividing the image into sub-images. Both improve efficiency by exploiting the spatial or temporal sampling redundancy in the CBP algorithm. The sub-aperture method [9,10] divides the whole aperture into several smaller sub-apertures, each corresponding to a narrow azimuth bandwidth, so the azimuth sampling frequency can be lowered to reduce the required interpolation and hence the computation. The image of each sub-aperture has low resolution, and the high-resolution image is finally obtained by their coherent superposition. In sub-image methods [11,12], the entire image is divided into many small sub-images to reduce the spatial bandwidth in both the range and azimuth domains; correspondingly, the signal sampling rate can be reduced in two dimensions to improve computational efficiency. In this paper, a fast realization of CBP based on quadtree recursive sub-image segmentation is studied, which reduces the computation through sub-image interception, fast filtering, and down-sampling.
PFA is a classical spotlight SAR imaging algorithm [13,14]; it is simple and efficient and also well suited to the sliding spotlight mode. Because of its plane-wavefront assumption, the residual phase error causes defocus in PFA and limits the size of the effective imaging scene. Since the imaging scene in sliding spotlight mode is large, the small effective scene is not sufficient to meet practical requirements. A modified PFA algorithm based on sub-aperture processing for a wide scene with high resolution was proposed by Doerry [15] and improved in [16], but the overlap between adjacent sub-apertures is very high and the process is complex. In this paper, the quadtree recursive sub-image segmentation used for the CBP algorithm is extended to the PFA algorithm, which greatly enlarges the effective imaging scene of PFA. Finally, the proposed algorithms are verified by simulation and measured data.

2. Model of Spaceborne Sliding Spotlight SAR

2.1. Geometric Model

The sliding spotlight mode is a compromise between the stripmap mode and the spotlight mode [3]. In this mode, the radar slows down the moving speed of the antenna by controlling the beam direction, and increases the coherent integration time to obtain a higher azimuth resolution than possible with the traditional stripmap mode. At the same time, because the antenna beam still has a certain moving speed on the ground, a larger imaging area can be obtained than with the spotlight mode.
The basic model of sliding spotlight SAR is shown in Figure 1. The horizontal axis is the slow time tm, and the satellite moves at a constant speed v along this axis. RB is the vertical distance between the satellite and the point P. During the flight, the antenna beam center always points to a fixed point (shown as the black dot in Figure 1; in spotlight mode the antenna always points to a certain point in the scene, while in sliding spotlight mode it points to a hypothetical point below the ground). The origin O of the axis is the time at which the satellite is located directly abeam of this imaginary point, i.e., the moment at which the distance from the imaginary point to the satellite is shortest during the flight. According to the squint angle, the azimuth coordinate of the satellite can be obtained for each pulse.
Typical orbit parameters for the spaceborne SAR simulation are shown in Table 1.

2.2. The Acquisition of Parameters in Spaceborne SAR

The geometric configuration of spaceborne SAR is shown in Figure 2. Suppose the radius of the Earth is Re and the height of the satellite orbit above the ground is h. If the center look angle φ0 is known, the other main parameters can be calculated from the following formulas [3]:
Corresponding Earth-center angle:

$$\phi_0 = \arcsin\left[\frac{R_e + h}{R_e}\sin\varphi_0\right] - \varphi_0$$

Center slant range:

$$R_{s0} = (R_e + h)\cos\varphi_0 - \sqrt{R_e^2\cos^2\varphi_0 - (h^2 + 2hR_e)\sin^2\varphi_0}$$
The orbit of the satellite is approximately elliptical with a small eccentricity, so in general it can be treated as a circle. For a circular orbit, the relationship between the orbital period P and the orbit radius Rs is:

$$P^2 = \frac{4\pi^2 R_s^3}{\mu_e}$$

where μe = 3.986 × 10^14 m^3/s^2 is the Earth's gravitational constant. Correspondingly, the angular velocity of the satellite is:

$$\omega_s = \frac{2\pi}{P} = \sqrt{\frac{\mu_e}{R_s^3}}$$

The satellite inertial speed is:

$$V_s = R_s\,\omega_s = \sqrt{\frac{\mu_e}{R_s}}$$

where Rs = Re + h, Re is the radius of the Earth, and h is the height of the satellite orbit. The velocity of the satellite can then be obtained.
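The circular-orbit relations above can be checked numerically. A minimal sketch, assuming a mean Earth radius of 6371 km and an orbit height of roughly 755 km (both assumed illustrative values, not taken from Table 1):

```python
import math

MU_E = 3.986e14        # Earth gravitational constant, m^3/s^2
R_E = 6_371_000.0      # mean Earth radius, m (assumed value)

def orbital_velocity(h):
    """Inertial speed of a satellite on a circular orbit of height h (m)."""
    r_s = R_E + h                                    # orbit radius Rs = Re + h
    period = 2 * math.pi * math.sqrt(r_s**3 / MU_E)  # P^2 = 4*pi^2*Rs^3/mu_e
    omega_s = 2 * math.pi / period                   # angular velocity
    return r_s * omega_s                             # Vs = Rs*omega_s = sqrt(mu_e/Rs)

# e.g. an assumed 755 km orbit height
v = orbital_velocity(755_000.0)
```

For a low Earth orbit this yields an inertial speed near 7.5 km/s, consistent with typical spaceborne SAR platforms.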
The slant range between the satellite and a target is the most important parameter in SAR processing; it varies with azimuth time. Assume that the flight path is locally a straight line and that the Earth is locally flat and not rotating. The distance between the satellite and the target point, R(η), is then given by the hyperbolic equation:

$$R(\eta) = \sqrt{R_0^2 + (V_r\eta)^2}$$

This hyperbolic equation is also suitable for the spaceborne case, but Vr is then not the physical speed; it is a virtual speed chosen so that the real range equation conforms to the hyperbolic model.
As shown in Figure 3, the distance between the target and the satellite at time η is:
$$R(\eta) = \sqrt{(R_e\sin\beta_e)^2 + (H\sin\omega_s\eta)^2 + (H\cos\omega_s\eta - R_e\cos\beta_e)^2}$$

For small angles, this distance can be approximated as:

$$R(\eta) \approx \sqrt{H^2 + R_e^2 - 2HR_e\cos\beta_e + (H\omega_s)(R_e\omega_s\cos\beta_e)\eta^2} = \sqrt{R_0^2 + (H\omega_s)(R_e\omega_s\cos\beta_e)\eta^2}$$

$$R_0 = \sqrt{H^2 + R_e^2 - 2HR_e\cos\beta_e}$$

According to the local circular orbit hypothesis, Hωs is the orbital velocity of the satellite Vs, while Reωscosβe is the velocity of the beam footprint on the ground, i.e., the ground speed Vg, under the assumption that the Earth is locally spherical near the point C; Vg is thus parallel to Vs.
The final result is:

$$R(\eta) = \sqrt{R_0^2 + V_sV_g\eta^2}$$

The equivalent velocity can therefore be obtained as:

$$V_r \approx \sqrt{V_gV_s}$$
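A quick numeric check of the equivalent-velocity relation and the hyperbolic range model above; the satellite and ground speeds (7.5 km/s and 6.9 km/s) and the closest range R0 = 900 km are assumed illustrative values:

```python
import math

def equivalent_velocity(v_s, v_g):
    """Equivalent (hyperbolic-model) velocity Vr = sqrt(Vs * Vg)."""
    return math.sqrt(v_s * v_g)

def slant_range(eta, r0, v_r):
    """Hyperbolic range model R(eta) = sqrt(R0^2 + (Vr*eta)^2)."""
    return math.hypot(r0, v_r * eta)

# assumed values: Vs ~ 7.5 km/s, Vg ~ 6.9 km/s, R0 = 900 km
v_r = equivalent_velocity(7500.0, 6900.0)
r = slant_range(1.0, 900_000.0, v_r)   # range 1 s away from closest approach
```

Note that Vr lies between Vg and Vs, as the geometric mean always does.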

3. Ambiguity Elimination in Sliding Spotlight SAR

In spaceborne SAR, the PRF is usually chosen to be only about 1.2 times the Doppler bandwidth produced by the antenna illumination [17]. The relationship between the Doppler bandwidth required in SAR and the azimuth resolution is:

$$B_{\min} = \frac{V_r}{\rho_a}$$
In the spaceborne sliding spotlight SAR, since the PRF selection is limited by many conditions, the PRF is usually only slightly larger than the instantaneous Doppler bandwidth, and is much smaller than the entire signal Doppler bandwidth. The Doppler ambiguity phenomenon exists in the echos, so it is necessary to study the solution of eliminating ambiguity in sliding spotlight SAR.
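As a rough numeric illustration of the formula above (the 7200 m/s equivalent velocity is an assumed value; the 1 m resolution is the GF-3 sliding spotlight specification):

```python
def min_doppler_bandwidth(v_r, rho_a):
    """Minimum Doppler bandwidth B_min = Vr / rho_a for azimuth resolution rho_a."""
    return v_r / rho_a

# assumed equivalent velocity ~7200 m/s, 1 m azimuth resolution
b_min = min_doppler_bandwidth(7200.0, 1.0)  # 7200 Hz
```

A PRF of a few kHz cannot cover a full-aperture Doppler bandwidth many times this size, which motivates the sub-aperture dechirp processing described next.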
This section uses the dechirp operation to resolve the Doppler ambiguity [18]. The reference function of the SPECAN operation is chosen as a function of range frequency; different choices of the SPECAN reference result in different phase-compensation and interpolation operations. The specific algorithm flow is shown in Figure 4.
The biggest difference between sliding spotlight SAR and spotlight SAR is that the imaging area of sliding spotlight SAR is larger than the radiation area of the antenna. Therefore, if we do the dechirp operation at the center of scene, the Doppler bandwidth is still greater than the pulse repetition frequency, so the azimuth data need to be divided into several sub-apertures, each sub-aperture can have a certain overlap [19].
In the analysis of algorithm, we only consider the oblique plane imaging. In sliding spotlight mode, the echo signal can be written as follows:
$$S(\omega, u_m) = \sum_{n=1}^{N}\sigma(x_n, r_n)\exp\left(-j\frac{2(\omega+\omega_c)}{c}\sqrt{(x_n - u_m)^2 + (R_c + r_n)^2}\right)$$

In the above formula, Rc is the satellite's distance to the center of the scene, and um is the position of the satellite at each slow-time moment.
The data are dechirped with the following reference signal:

$$S_{de}(\omega, u_a) = \exp\left(j\frac{2(\omega+\omega_c)}{c}\sqrt{u_a^2 + R_c^2}\right)$$
The echo after dechirp can be expressed as:
$$S_1(\omega, m) = \sum_{n=1}^{N}\sigma(x_n, r_n)\exp\left(-j\frac{2(\omega+\omega_c)}{c}\left(\frac{R_cr_n - m\Delta u_a x_n}{\sqrt{m^2(\Delta u_a)^2 + R_c^2}} + \xi(m\Delta u_a, x_n, r_n)\right)\right)$$

Here Δua = Vr/PRF, and ξ(mΔua, xn, rn) is the residual error term arising from the plane-wave assumption. The Doppler bandwidth of the dechirped signal is determined only by the scene extent. That is, if the scene-induced Doppler bandwidth within a sub-aperture exceeds the pulse repetition frequency, interpolation can be used to increase the effective PRF. In the interpolation process, a Fourier transform is applied to the above equation, zeros are padded, and an inverse Fourier transform is performed, which is equivalent to interpolating the signal; the number of zeros is determined by the ambiguity number. After interpolation, the signal can be expressed as:

$$S_1(\omega, i) = \sum_{n=1}^{N}\sigma(x_n, r_n)\exp\left(-j\frac{2(\omega+\omega_c)}{c}\left(\frac{R_cr_n - i\Delta u\, x_n}{\sqrt{i^2(\Delta u)^2 + R_c^2}} + \xi(i\Delta u, x_n, r_n)\right)\right)$$
In order to recover the echo signal without ambiguity, inverse dechirp processing also needs to be performed. In this case, the compensation vector for inverse dechirp processing is:
$$S_{Ide}(\omega, i) = \exp\left(-j\frac{2(\omega+\omega_c)}{c}\sqrt{(i\Delta u)^2 + R_c^2}\right)$$
After inverse dechirp processing, the echo can be expressed as follows:
$$S(\omega, i) = \sum_{n=1}^{N}\sigma(x_n, r_n)\exp\left(-j\frac{2(\omega+\omega_c)}{c}\sqrt{(x_n - i\Delta u)^2 + (R_c + r_n)^2}\right)$$
The above is the range-frequency-domain expression of the signal without Doppler ambiguity, so the Doppler ambiguity problem is completely solved by the above operations. After this, there is no Doppler ambiguity in the echo data, and the data can be imaged by the chirp scaling (CS) algorithm, the wavenumber-domain algorithm, the polar format algorithm, and so on. In the following sections, we discuss two SAR imaging algorithms suitable for the sliding spotlight mode.
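The dechirp, zero-padded-FFT interpolation, and inverse-dechirp chain above can be sketched for a single azimuth line. This is a minimal numpy sketch under a simplified quadratic-phase model; the reference-phase function, the chirp-rate constant `k`, and all names are illustrative assumptions, not the GF-3 processor interface:

```python
import numpy as np

def quad_phase(idx, n, k=1e-4):
    """Illustrative quadratic (chirp) reference phase centred on the aperture."""
    u = idx - n / 2
    return np.exp(1j * np.pi * k * u**2)

def deramp_interpolate(sig_az, phase_ref, amb_factor):
    """Ambiguity-elimination chain for one azimuth line:
    (1) dechirp with the reference phase, (2) FFT, (3) zero-pad by the
    ambiguity number to raise the effective PRF, (4) IFFT,
    (5) inverse dechirp on the finer azimuth grid."""
    n = sig_az.size
    n_up = n * amb_factor
    # (1) dechirp: multiply by the conjugate reference phase
    dechirped = sig_az * np.conj(phase_ref(np.arange(n), n))
    # (2)-(4) band-limited interpolation via zero-padded FFT
    spec = np.fft.fftshift(np.fft.fft(dechirped))
    spec_up = np.zeros(n_up, dtype=complex)
    spec_up[(n_up - n) // 2:(n_up + n) // 2] = spec
    upsampled = np.fft.ifft(np.fft.ifftshift(spec_up)) * amb_factor
    # (5) inverse dechirp: restore the reference phase on the new grid
    return upsampled * phase_ref(np.arange(n_up) / amb_factor, n)
```

Feeding the pure reference chirp through this chain returns the same chirp sampled on the twice-finer grid, which is the expected behaviour of a band-limited interpolator.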

4. Convolution Backprojection Algorithm for Sliding Spotlight SAR

4.1. CBP Algorithm

The Convolution BackProjection (CBP) algorithm is a point-by-point image reconstruction algorithm. Its imaging grid can be set freely according to the resolution requirements and the actual situation, for different modes and different frequency bands. No matter how large the range migration is, the CBP algorithm accumulates the energy of each point along its own migration curve [20]. The process is shown in Figure 5:
Step 1: Construct a ground pixel grid point
According to the resolution requirement, pixel grid points with appropriate pixel intervals are constructed in the ground imaging area, and the azimuth and range coordinates of each pixel are recorded, so that the azimuth and range resolutions of the two-dimensional pixel cell are basically matched.
Step 2: Reverse-projection
(a) Range pulse compression
No multi-point accumulation is needed after pulse compression; all oversampled points are retained.
(b) Determination of the beam coverage of ground pixels
According to the azimuth and range coordinates of each pixel and the antenna position and azimuth beam width of each pulse, the pixels lying within the coverage of the pulse beam can be determined and recorded.
The judgement is based on the two window functions in the echo expression of the ground target. A point target located within both window functions is covered by the pulse beam:

$$w_1 = \mathrm{rect}\left(\frac{\hat{t} - 2R(t_m; R_B)/c}{T_r}\right)$$

$$w_2 = \mathrm{rect}\left(\frac{Avt_m - x}{X}\right)$$
(c) Pixel-by-pixel reverse-projection
The distance between each pixel in the imaging area and the antenna position of the pulse is calculated, and the range data are then interpolated at these distances to obtain the energy contribution of the pulse to each pixel it covers. For the same pixel, the energy contributions from different pulses are accumulated coherently.
In time-domain imaging algorithms such as CBP, the ground pixels are set artificially based on the resolution requirements and the actual situation; the pixel interval is generally chosen slightly smaller than the resolution, according to the desired image geometry.
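The per-pulse loop of Step 2(c) can be sketched as follows. This is a minimal sketch of pixel-by-pixel backprojection; the beam-coverage masking of Step 2(b) is omitted for brevity, and the names and the linear-interpolation choice are illustrative, not the paper's implementation:

```python
import numpy as np

def backproject(rc_data, ant_pos, range_axis, grid_x, grid_y, wavelength):
    """Backproject range-compressed echoes onto a ground pixel grid.
    rc_data: (n_pulses, n_range_bins) complex array; ant_pos: (x, y, z) per
    pulse; range_axis: slant range of each range bin."""
    image = np.zeros((grid_y.size, grid_x.size), dtype=complex)
    for p, pos in enumerate(ant_pos):
        # distance from this pulse's antenna position to every pixel
        dx = grid_x[None, :] - pos[0]
        dy = grid_y[:, None] - pos[1]
        r = np.sqrt(dx**2 + dy**2 + pos[2]**2)
        # interpolate the range profile at each pixel's distance
        profile = (np.interp(r, range_axis, rc_data[p].real)
                   + 1j * np.interp(r, range_axis, rc_data[p].imag))
        # remove the two-way carrier phase, then accumulate coherently
        image += profile * np.exp(1j * 4 * np.pi * r / wavelength)
    return image
```

For a simulated point target, the coherent sum peaks at the pixel containing the target, since the compensated phases align there across all pulses.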

4.2. Fast CBP Algorithm Based on Image Segmentation

Backprojection is a point-by-point image reconstruction process that requires a large number of interpolation operations, resulting in a heavy computational load [21]. Therefore, a fast implementation method based on quadtree sub-image segmentation is discussed below.
In the pre-processing phase of the CBP algorithm, matched filtering and motion compensation are performed with respect to the center of the scene; these are equivalent to a two-dimensional dechirp of the original echo signal, eliminating its second-order phase. The echo signal of a single target after the two-dimensional dechirp is [21]:
$$S_{PB}(t, f_\tau) = \mathrm{rect}\left(\frac{t}{T_a}\right)\mathrm{rect}\left(\frac{f_\tau}{\gamma T_r}\right)\exp\left[j\frac{4\pi}{c}f_\tau(R_a - R_p)\right]\exp\left\{j\frac{4\pi}{c}f_c[R_a(t) - R_p(t)]\right\}$$
The first exponential term is a nearly single-frequency signal related to the distance from the radar to the target, and the range profile can be obtained through a Fourier transform with respect to fτ. Suppose wr is the range extent of the scene; then the range bandwidth after dechirp processing is:

$$B_r = \frac{2\gamma w_r}{c}$$

The second exponential term is also a nearly single-frequency signal related to the azimuth position of the target, and the azimuth profile can be obtained through a Fourier transform with respect to t. Suppose wa is the azimuth extent of the scene; then the azimuth bandwidth after dechirp processing is [22]:

$$B_a = \frac{2w_av}{\lambda_cR_{ac}}$$
where λc is the wavelength and Rac is the distance between the aperture center and the scene center. In the CBP algorithm, if the scene size decreases, the azimuth and range bandwidths of the phase history decrease accordingly, so the sampling rate can be reduced (the sampling interval increased) in both dimensions to reduce the computation. Therefore, a CBP algorithm based on sub-image processing is used below. The schematic diagram is shown in Figure 6, and the specific process includes the following steps [23]:
Step 1: Sub-image segmentation
The segmentation is performed by quadrant: the whole image is divided into four sub-images, with the pixels evenly distributed in the range and azimuth domains. For an image of N × N pixels, corresponding to Figure 6, each sub-image contains N/2 × N/2 pixels. The scene is halved in both range and azimuth, so the data sampling rate in range and azimuth can also be halved in processing.
Step 2: Filtering the original phase history, then down-sample in the spatial frequency domain.
The filtering should be based on the range and the center point of each sub-image. The original phase history in the full scene is considered:
$$S_W(t, f_\tau) = \mathrm{rect}\left(\frac{t}{T_a}\right)\iint_{(x,y)\in W} g(x, y)\,\mathrm{rect}\left(\frac{f_\tau}{B_r}\right)\exp\left[j\frac{4\pi}{c}(f_c + f_\tau)(R_a - R_t)\right]dxdy$$
where W represents the ground illuminated area, and g(x, y) represents the reflectivity of point target with the coordinate of (x, y). The original phase history array size is N × N, and the sampling intervals in (t, fτ) domain are T0 and Fs/N. So the discrete values of t and fτ are:
$$t = -\frac{N}{2}T_0 + mT_0,\quad m = 0, 1, 2, \ldots, N-1$$

$$f_\tau = -\frac{F_s}{2} + n\frac{F_s}{N},\quad n = 0, 1, 2, \ldots, N-1$$
A basic image can be obtained from this data array through the basic linear RD algorithm [23]. Except at the scene center, the other points may show some defocus. In order to avoid the energy leakage caused by defocusing, motion re-compensation is needed, after which the linear RD algorithm and low-pass filtering can be applied. In summary, the fast filtering process includes:
(a) Motion re-compensation to the center of each sub-image.
For each sub-image, the motion compensation function is constructed based on the central point of the sub-image, and the echo data are compensated pulse by pulse. Taking sub-image A as an example, the phase compensation factor is:
$$\phi_{sA}(t, f_\tau) = \exp\left[j\frac{4\pi}{c}f_\tau(R_{sA} - R_a)\right]\exp\left[j\frac{4\pi}{c}f_c(R_{sA} - R_a)\right]$$
where RsA = RsA(t) is the instantaneous distance between the phase center of the antenna and the center of sub-image A, and it can be described in coordinates:
$$R_{sA} = \sqrt{(x_a - x_{sA})^2 + (y_a - y_{sA})^2 + (z_a - z_{sA})^2}$$
The new phase history is obtained after phase compensation:
$$S_{WcA}(t, f_\tau) = \mathrm{rect}\left(\frac{t}{T_a}\right)\iint_{(x,y)\in W} g(x, y)\,\mathrm{rect}\left(\frac{f_\tau}{B_r}\right)\exp\left[j\frac{4\pi}{c}(f_c + f_\tau)(R_{sA} - R_t)\right]dxdy$$
(b) Two dimensional imaging and window interception.
At this point an image of the full scene can still be obtained by a two-dimensional FFT, but the center of the scene has now been transferred to the center of each sub-image. It is then convenient to extract the sub-image data from the central part of the large two-dimensional data array according to the subscripts. The array size after interception is N/2 × N/2.
(c) Returning to the phase history domain.
The scene of this sub-image is halved in both range and azimuth, so the sampling rate can also be halved in both domains. After returning the image to the space-frequency domain through an IFFT, the amount of data is reduced to 1/4 of the original and contains only the information of sub-image A. The signal returned to the data domain is:

$$S_{WcA}(t, f_\tau) = \mathrm{rect}\left(\frac{t}{T_a}\right)\iint_{(x,y)\in A} g(x, y)\,\mathrm{rect}\left(\frac{f_\tau}{B_r}\right)\exp\left[j\frac{4\pi}{c}(f_c + f_\tau)(R_{sA} - R_t)\right]dxdy$$

Here the number of sampling points is half of the original and the sampling interval is doubled, so the sampling intervals in the (t, fτ) domain are 2T0 and 2Fs/N. The discrete values of t and fτ are:

$$t = -\frac{N}{2}T_0 + 2mT_0,\quad m = 0, 1, 2, \ldots, \frac{N}{2}-1$$

$$f_\tau = -\frac{F_s}{2} + n\frac{2F_s}{N},\quad n = 0, 1, 2, \ldots, \frac{N}{2}-1$$
The flow diagram of fast filtering is shown in Figure 7.
Step 3: Backprojection.
Still taking sub-image A as an example, SsA(t, fτ) is the down-sampled signal, whose frequency-domain form Pθ_A(U) contains only the information of sub-image A, so the reconstruction formula becomes:

$$\bar{g}_A(x, y) = \int_{-\theta_m/2}^{\theta_m/2}\int_{U_1}^{U_2} P_{\theta\_A}(U)\exp(jUR_{\Delta A})\,|U|\,dU\,d\theta$$

The backprojection is still realized by interpolation and summation. Each sub-image has N/2 × N/2 pixels and the number of pulses for backprojection is also reduced to N/2, so the number of interpolations is N³/8 per sub-image and N³/2 in total for the four sub-images, i.e., only half of the N³ required by direct processing.
Step 4: Sub-images mosaic.
The full image is obtained by mosaicking the sub-images according to the original segmentation rules.

4.3. Further Improvement with Recursive Segmentation

With only one level of image segmentation, the reduction in computation is limited, so it is necessary to segment the image recursively to further reduce the sampling rate and computation. As discussed below, this mainly includes two parts of recursive decomposition. Part one is to segment the image recursively: first the full image is decomposed into four sub-images by level-1 segmentation, and then each sub-image is decomposed into four smaller sub-images by level-2 segmentation, and so on.
Part two is to segment the echo data recursively, including filtering and down-sampling. The original data is filtered according to the input sub-image parameters to ensure that the filtered data only contains the information of each sub-image. The flow diagram of fast CBP implementation is shown in Figure 8.
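The recursion described above can be expressed as a short skeleton. The three callables (`backproject`, `refocus_filter`, `mosaic`) stand for plain backprojection, the re-compensate/filter/down-sample chain of steps (a)-(c), and the sub-image mosaic; this interface is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def split4(scene):
    """Split a square scene (x0, y0, size) into its four quadrants."""
    x0, y0, s = scene
    h = s / 2
    return [(x0, y0, h), (x0 + h, y0, h), (x0, y0 + h, h), (x0 + h, y0 + h, h)]

def quadtree_cbp(data, scene, level, max_level, backproject, refocus_filter, mosaic):
    """Recursive fast CBP: at each level the scene is split into quadrants and
    the echo is filtered/down-sampled for each quadrant; only at the last level
    does plain backprojection run, on small data arrays and small pixel grids."""
    if level == max_level:
        return backproject(data, scene)
    tiles = [quadtree_cbp(refocus_filter(data, quad), quad, level + 1,
                          max_level, backproject, refocus_filter, mosaic)
             for quad in split4(scene)]
    return mosaic(tiles)
```

With dummy callables (halving the data per level and returning constant tiles) the recursion can be checked to visit every quadrant and to assemble a full image of the expected size, mirroring the flow of Figure 8.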

4.4. Simulation Results

The simulation parameters are shown in Table 2, and the distribution map of the 121 simulated point targets is given in Figure 9. The spacing between adjacent points is 400 m, and the imaging area is 4000 m × 4000 m. The radar echo signal after matched filtering and motion compensation is a two-dimensional matrix of 4096 × 4096 points. The fast imaging simulation results with different segmentation levels are shown in Figure 10, and the position errors of the point targets in each image are given in Table 3.
The target response profiles are shown in Figure 11 and Figure 12.
Evaluation results of point targets simulation are shown in Table 4.
From the above, it can be seen that the images are well focused at every level. However, each segmentation level is equivalent to a re-sampling, so cumulative error is inevitable; the number of segmentation levels is therefore a compromise between image quality and computation.

5. PFA Algorithm for Sliding Spotlight SAR

5.1. PFA Algorithm Based on Image Segmentation

The Polar Format Algorithm is a classical spotlight SAR imaging algorithm [24]. It stores the data in a polar coordinate format and effectively solves the problem of scatterers moving through resolution cells away from the central scattering point of the imaging area. Since PFA assumes a planar wavefront while the actual wavefront is curved, the introduced error mainly appears as first- and second-order spatial phase errors in spatial frequency, which correspond to geometric distortion and defocusing of the image and limit the effective imaging scene size of the PFA algorithm [25].
A novel PFA algorithm based on quadtree partition of the image is studied below. It shares the key idea of the quadtree CBP algorithm and mainly includes three steps:
Step 1: Sub-image segmentation.
The basic LRD algorithm without precise focusing is used to obtain a coarse SAR image of the whole scene; then, based on the idea of digital spotlighting, the echo data are filtered and segmented into several sub-images through the quadtree method, as shown in Figure 13.
Step 2: Motion compensation and PFA sub-imaging.
The echo data of each sub-image are re-referenced to the center of the corresponding sub-scene, and the PFA algorithm is used for precise focusing. Taking the sub-scene center as the reference, the second-order motion compensation for the data of each sub-image uses the reference function:

$$\bar{S}_n(t, f_\tau) = \exp\left\{j\frac{4\pi(f_c + f_\tau)}{c}[r_{n,o}(t) - r_0(t)]\right\},\quad n = 1, 2, \ldots, N$$
where n represents the nth sub-beam, N represents the number of sub-images, rn,o(t) represents the instantaneous distance from the satellite to the center of the nth sub-scene. The data of each sub-image after the motion compensation can be written as:
$$S_n(t, f_\tau) = \sum_{p\in I_n}\exp\left\{j\frac{4\pi(f_c + f_\tau)}{c}[r_{n,o}(t) - r_p(t)]\right\}$$

Through the second-order motion compensation above, the motion error at the center of each sub-beam scene is compensated accurately. Although the compensation is invariant within each sub-beam, as long as the sub-beam is designed narrow enough that the sub-scene lies within the effective imaging scene radius of PFA, the residual error of the non-center points is completely negligible.
Step 3: Mosaic of sub-images.
The full-scene, defocus-free image can be obtained by mosaicking the sub-images. In the PFA algorithm based on sub-images, it is not necessary to divide the full image into very small sub-images as in the CBP algorithm. The main purpose of fast CBP is to reduce the amount of interpolation in the backprojection process, so its sub-images must be small enough to achieve this effectively; the PFA imaging algorithm itself, however, is simple and efficient. Therefore, the level of sub-image segmentation can be reduced, as long as each sub-image lies within the effective focusing scene.
Under certain parameter conditions, the effective imaging scene radius of PFA is determined by the following formula [23]:
$$r_0 \le 2\rho_a\sqrt{\frac{K_aR_{ac}}{\lambda_c}}$$
According to the parameters of the GF-3 SAR, only one level of quadtree segmentation is needed to keep each sub-image free of defocus. Similar to the CBP algorithm, the flowchart of the PFA algorithm based on quadtree sub-image segmentation is shown in Figure 14.
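The scene-radius formula above can be evaluated numerically. In this sketch the dimensionless factor Ka is taken as 1 and the wavelength of 0.03 m for the airborne case of Section 6 is an assumption (X-band); with those values the result is close to the "about 300 m" radius quoted there:

```python
import math

def pfa_scene_radius(rho_a, r_ac, wavelength, k_a=1.0):
    """Effective PFA scene radius r0 <= 2 * rho_a * sqrt(Ka * Rac / lambda_c).
    k_a is assumed to be 1 here; see [23] for its exact definition."""
    return 2.0 * rho_a * math.sqrt(k_a * r_ac / wavelength)

# airborne case: 0.15 m resolution, 25 km slant range, assumed 0.03 m wavelength
r0 = pfa_scene_radius(0.15, 25_000.0, 0.03)
```

Since the radius scales with the azimuth resolution ρa, finer resolutions shrink the effective scene, which is why sub-image segmentation becomes indispensable for high-resolution wide scenes.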

5.2. Simulation Results

The simulation parameters are the same as in Table 2, and the distribution of the 225 simulated point targets is given in Figure 15. The spacing between adjacent points is 800 m, and the imaging area is 11,200 m × 11,200 m, far beyond the effective imaging scene size. From the relationship between the actual scene extent and the effective scene extent, one level of segmentation into 4 sub-images is enough to keep each sub-image within the effective imaging scene radius.
The echo signal after matched filtering and motion compensation is a two-dimensional matrix of 16,384 × 16,384 points. The image obtained by direct PFA is shown in Figure 16a. Since the imaging extent exceeds the effective scene, the image shows obvious geometric distortion, and the position error of the point targets reaches up to 15 m in range and 16 m in azimuth compared with the original point-target distribution in region A.
The imaging result of one sub-image, corresponding to region A, is shown in Figure 16b; the distortion has been preliminarily corrected. The full image after mosaicking is shown in Figure 16c: the geometric distortion has been fully corrected, and the image is well focused with no position error. The target response characteristics are shown in Figure 17 and Figure 18.
Evaluation results of point targets simulation are shown in Table 5.

6. Measured Data Results

Furthermore, we validate the proposed PFA algorithm with measured data from an airborne SAR. The slant range of the data is 25 km, and the theoretical resolution is 0.15 m × 0.15 m. Under these parameters, the effective imaging scene radius of PFA is about 300 m, while the imaged area is about 3600 m in azimuth and 1100 m in range, far beyond the effective imaging scene of PFA. Therefore, it is necessary to divide the whole image into small sub-images and re-focus them in order to improve the image quality.
After LRD processing without precise focusing, a basic image with a two dimensional matrix of 8192 × 24,000 points is obtained. According to the relationship between the effective radius of PFA imaging and the whole scene size, the whole image is divided into 11 × 16 = 176 sub-images. Each sub-image contains 1024 × 2048 pixels with overlapping 256 × 512 pixels.
The measured data results of the modified PFA based on sub-images are shown in Figure 19, and a local area of the image is shown in Figure 20. It can be seen that the modified PFA algorithm significantly improves the focusing of the image. Figure 21 shows the profile of a point target in the image; a Hamming window was used in the raw data processing in order to reduce the sidelobe levels.
Evaluation results of point targets in the SAR image are shown in Table 6.

7. Conclusions

In this paper, the key techniques of spaceborne sliding spotlight SAR are analyzed, including the imaging geometric parameters and the methods of equivalent velocity acquisition and ambiguity elimination. The mechanisms of the CBP and PFA algorithms for SAR imaging are studied, and their fast application to spaceborne sliding spotlight SAR is analyzed in detail.
The large computational load is the main problem restricting the application of CBP. In order to reduce the computation, a fast implementation method based on quadtree sub-images is studied in this paper. Azimuth filtering and down-sampling based on sub-image interception, together with an FFT-based fast filtering method, are used to avoid the interpolation process; through quadtree recursive decomposition, the processing efficiency of CBP is greatly improved while image quality is maintained.
PFA is also a classical algorithm for spotlight SAR. To extend the effective scene of PFA to the sliding spotlight mode, a modified PFA algorithm suitable for large scenes is proposed based on the idea of sub-images. As long as each sub-image is small enough, it falls within the scene-size restriction of traditional PFA. Finally, after geometric distortion correction, all sub-images are stitched together to obtain a full image free of defocus.
Based on the proposed algorithms, sliding spotlight SAR imaging can be realized efficiently according to the described processing flows. The algorithms are validated by simulations and measured data, laying the foundation for future spaceborne applications.

Author Contributions

S.S. wrote the manuscript; X.N. and S.S. implemented the method and analyzed the experimental data; X.Z. revised the manuscript. All authors read and approved the final version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Basic model of sliding spotlight SAR.
Figure 2. Geometric model of spaceborne sliding spotlight SAR.
Figure 3. The geometric relationship, indicating the speed of satellite and beam coverage area.
Figure 4. Flow chart of ambiguity elimination.
Figure 5. Flow chart of CBP algorithm.
Figure 6. Schematic diagram of fast CBP algorithm based on image segmentation.
Figure 7. Flow diagram of fast filtering.
Figure 8. Flow chart of fast realization based on Quadtree sub-image recursive segmentation.
Figure 9. Simulation point targets distribution.
Figure 10. Imaging simulation result. (a) Divided to 128 × 128 points; (b) Divided to 64 × 64 points; (c) Divided to 32 × 32 points; (d) Divided to 16 × 16 points; (e) Divided to 8 × 8 points; (f) Divided to 4 × 4 points.
Figure 11. (a) Conventional CBP; (b) Divided to 32 × 32 points; (c) Divided to 16 × 16 points.
Figure 12. Imaging simulation result. Azimuth profile (a) Conventional CBP; (b) Divided to 32 × 32 points; (c) Divided to 16 × 16 points; Range profile (d) Conventional CBP; (e) Divided to 32 × 32 points; (f) Divided to 16 × 16 points.
Figure 13. Diagram of pre-filtering process.
Figure 14. Flowchart of PFA algorithm based on sub-image segmentation.
Figure 15. Simulation point targets distribution.
Figure 16. (a) Result of PFA; (b) Local sub-image; (c) Full image after mosaic.
Figure 17. Two dimensional characteristic of point target. (a) PFA; (b) PFA based on sub-images.
Figure 18. Comparison of point target profile. (a) Azimuth profile of PFA; (b) Azimuth profile of PFA based on sub-images; (c) Range profile of PFA; (d) Range profile of PFA based on sub-images.
Figure 19. Measured data results of PFA based on sub-images.
Figure 20. Detail of the local area.
Figure 21. Point target profile. (a) Range profile; (b) Azimuth profile.
Table 1. Orbit simulation parameters.

| Parameter | Description | Value |
|---|---|---|
| A/km | Semimajor axis | 7000 |
| e | Eccentricity | 0.001 |
| i/° | Inclination | 97.0 |
| Ω/° | Ascending node | −0.0999845 |
| ω/° | Argument of perigee | −0.0121851 |
Table 2. Simulation parameters.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Carrier frequency | 5.4 GHz | Wavelength | 0.056 m |
| Signal bandwidth | 240 MHz | Sampling frequency | 300 MHz |
| Height of platform | 750 km | Pulse width | 20 μs |
| Incidence angle | 30° | Size of sub-images | 4 × 4~128 × 128 |
| Azimuth size of antenna | 7.5 m | Type of window | None |
| Range resolution | 1.0 m | Azimuth resolution | 1.2 m |
Table 3. Error positions of the point targets.

| Parameters | Mean Error (pixel) | Standard Deviation (pixel) |
|---|---|---|
| Divided to 128 × 128 points | 0.22 | 0.06 |
| Divided to 64 × 64 points | 0.31 | 0.10 |
| Divided to 32 × 32 points | 0.37 | 0.12 |
| Divided to 16 × 16 points | 0.44 | 0.17 |
| Divided to 8 × 8 points | 0.49 | 0.21 |
| Divided to 4 × 4 points | 0.52 | 0.24 |
Table 4. Evaluation results of point targets simulation.

| | Conventional CBP | Divided to 32 × 32 Points | Divided to 16 × 16 Points |
|---|---|---|---|
| Theoretical range resolution | 1.00 m | 1.00 m | 1.00 m |
| Simulation range resolution | 1.05 m | 1.06 m | 1.09 m |
| Range PSLR | −13.71 dB | −13.55 dB | −13.23 dB |
| Theoretical azimuth resolution | 1.19 m | 1.19 m | 1.19 m |
| Simulation azimuth resolution | 1.23 m | 1.26 m | 1.31 m |
| Azimuth PSLR | −13.51 dB | −13.15 dB | −13.03 dB |
Table 5. Evaluation results of point targets simulation.

| | Conventional PFA | PFA with Sub-Images |
|---|---|---|
| Theoretical range resolution | 1.00 m | 1.00 m |
| Simulation range resolution | 1.18 m | 1.05 m |
| Range PSLR | −13.23 dB | −13.26 dB |
| Theoretical azimuth resolution | 1.19 m | 1.19 m |
| Simulation azimuth resolution | | 1.22 m |
| Azimuth PSLR | −7.15 dB | −12.86 dB |
Table 6. Evaluation results of point targets.

| | Range Resolution | Range PSLR | Azimuth Resolution | Azimuth PSLR |
|---|---|---|---|---|
| Modified PFA | 0.16 m | 21.5 dB | 0.18 m | 20.9 dB |

Share and Cite

Shen, S.; Nie, X.; Zhang, X. Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode. Sensors 2018, 18, 455. https://0-doi-org.brum.beds.ac.uk/10.3390/s18020455