Article

Development of an Online Adaptive Parameter Tuning vSLAM Algorithm for UAVs in GPS-Denied Environments

Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan 701, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 16 September 2022 / Revised: 11 October 2022 / Accepted: 11 October 2022 / Published: 21 October 2022

Abstract

In recent years, unmanned aerial vehicles (UAVs) have been applied in many fields owing to their mature flight control technology and easy-to-operate characteristics. These UAV applications rely heavily on the location information provided by the positioning system. Most UAVs nowadays use a global navigation satellite system (GNSS) to obtain location information. However, this outside-in, third-party positioning system is particularly susceptible to environmental interference and cannot be used indoors, which limits the application diversity of UAVs. To deal with this problem, a stereo-based visual simultaneous localization and mapping (vSLAM) technique is applied in this paper. The presented vSLAM algorithm fuses onboard inertial measurement unit (IMU) information to solve the navigation problem in an unknown environment without a GNSS signal and provides reliable localization information. The overall visual positioning system is based on the stereo parallel tracking and mapping (S-PTAM) architecture. However, experiments show that the feature-matching threshold has a significant impact on positioning accuracy. The threshold is a Hamming distance without any physical meaning, which makes it quite difficult to set manually. Therefore, this work develops an online adaptive matching threshold that is adjusted according to the keyframe poses. Experiments show that the developed adaptive matching threshold improves positioning accuracy. Since the attitude calculation of the IMU is carried out with a Mahony complementary filter, the difference between the measured acceleration and gravity is used as the metric to tune the gain value online, which improves the accuracy of attitude estimation under aggressive motions. Moreover, a static state detection algorithm based on the moving-window method and the measured acceleration is proposed to accurately calculate the conversion mechanism between the vSLAM system and the IMU information; this initialization mechanism helps the IMU provide a better initial guess for the bundle adjustment (BA) algorithm in the tracking thread. Finally, the performance of the proposed algorithm is evaluated on the popular EuRoC dataset. All the experimental results show that the developed online adaptive parameter tuning algorithm can effectively improve the vSLAM accuracy and robustness.


1. Introduction

Due to their high flexibility and excellent maneuverability, UAVs have become one of the most popular aerial vehicle platforms over the past few years. The outdoor applications of UAVs are well known to the public, while this paper focuses on visual positioning in GPS-denied environments; therefore, several indoor applications of UAVs are particularly illustrated. For example, UAVs can provide appropriate assistance to humans in high-risk missions such as rescue operations and indoor equipment maintenance in nuclear power plants [1]. Besides, UAVs can also effectively enhance human productivity, for example, in intelligent warehousing management in large factories or plant care and monitoring in greenhouses [2]. Military indoor inspection, indoor photography, and scene scanning for construction purposes are also common indoor applications of UAVs. In addition, with the booming development of autonomous UAV applications, the underlying technologies that UAVs heavily rely on, such as localization and attitude estimation, have gradually attracted the attention of scholars and become a research hotspot.
UAVs can obtain location and attitude information through many different types of sensors. Depending on the signal source, sensors can be classified as outside-in types, such as GNSS, VICON (a motion capture system), laser scanners, ultra-wideband (UWB), Wi-Fi, and so on. However, the information from this type of sensor cannot be acquired without pre-installed external facilities. As a consequence, these outside-in localization methods cannot work properly in unexplored environments. On the contrary, inside-out types, such as onboard cameras, radars, light detection and ranging (LiDAR), etc., can receive information independently without any third-party equipment. Even though inside-out sensors require more computational power to sense the ego-motion of UAVs, they can deal with the autonomous navigation problem in unknown or even dynamic environments, unlike outside-in sensors, and have significant advantages in terms of hardware cost as well. Moreover, the inside-out perception strategy offers UAVs the ability to sense the environment, also known as mapping. The mapping capability is essential for real-time obstacle avoidance, trajectory planning, and other related autonomous functions. All in all, it can be inferred that SLAM technology based on inside-out sensors will play a pivotal role in the development of autonomous UAVs in GPS-denied environments.
vSLAM is favored by UAV developers for its lightweight and low-power-consumption camera sensors. Moreover, rich color information not only makes the map more valuable but also has excellent potential for removing dynamic obstacles [3,4,5,6]. It is also worth mentioning that the well-known open-source monocular SLAM solutions based on static visual features (indirect methods) are parallel tracking and mapping (PTAM) [7] and ORB-SLAM (1–2) [8,9], while large-scale direct monocular SLAM (LSD-SLAM) [10], semi-direct visual odometry (SVO) [11] and direct sparse odometry (DSO) [12] are based on direct methods. However, using only a monocular camera for positioning inevitably causes scale ambiguity, which makes it difficult to directly provide valid information. Therefore, in practice, additional sensors are needed to recover the scale, as in [13,14], where this problem is addressed by using radar and a barometer, respectively. Besides, with the popularity of micro-electro-mechanical systems (MEMS), the integration of vision and IMU, also known as visual-inertial odometry (VIO), such as the monocular visual-inertial system (VINS-Mono) [15] and robust visual inertial odometry (ROVIO) [16], has received high attention in recent years and has been applied in many novel fields like augmented reality (AR) and virtual reality (VR). Unlike monocular cameras, RGB-D cameras can directly access scale and depth information through structured light or Time-of-Flight (ToF). The associated famous vSLAM open-source solutions include ElasticFusion [17], dense visual odometry (DVO) [18], RGB-D SLAM v2 [19], and KinectFusion [20]. Although RGB-D cameras have a strong ability to build high-density maps, their weight, size, and price place a considerable burden on UAVs compared to general RGB cameras. The short measurement distance of an RGB-D sensor also restricts the size of the operating environment. Thus, stereo cameras, which are lightweight, low-cost, low-power, and offer a long measurement distance, are obviously more suitable for UAV perception. The authors of [21] propose the stereo multi-state constraint Kalman filter (S-MSCKF), which fuses stereo vision and IMU in a tightly coupled manner to construct a highly robust and real-time localization system, and also solves the problem of large state dimension in the traditional filter-based vSLAM architecture. Based on the parallel multi-threading architecture proposed in [7], Pire et al. divide the overall system into front-end tracking and back-end optimization threads performed in parallel, thereby improving time cost and positioning accuracy [22].
Although vSLAM is capable of achieving effective exploration in a GPS-denied environment, the associated algorithms involve a considerable number of parameters or thresholds. In general, these parameters are not only fixed after system startup but also mostly lack intuitive physical meaning, making well-founded manual adjustment difficult. For UAVs with aggressive motions or in complex operating environments, these parameters require instant online updates to keep the system in optimal condition. For the above reasons, the concept of a dynamic threshold has been introduced into the field of vSLAM, such as dynamically adjusting the threshold that controls keyframe selection based on the field-of-view (FoV) repetition rate [23]. Meanwhile, according to the results of several experiments, it is found that the selection of the feature-matching threshold has a dominant effect on positioning accuracy. Therefore, how to effectively select the matching threshold online according to different situations is one of the key issues discussed in this paper.
In fact, not only the vSLAM system but also the Mahony complementary filter [24] used for UAV attitude estimation requires online parameter adjustment. When UAVs undergo aggressive motions, the measured acceleration varies drastically, which causes biased attitude estimation. The biased estimates not only deteriorate the basic flight stability of UAVs but also lower the vSLAM accuracy. Therefore, this paper proposes to adjust the $K_p$ gain of the Mahony complementary filter online to further improve the attitude estimation accuracy, not only to achieve better flight performance of UAVs but also to prepare for the information fusion with the vSLAM system. The following summarizes the three major problems that are addressed in this paper:
  • Online adaptive parameter tuning for feature matching threshold, which lacks physical meaning.
  • Online adaptive $K_p$ gain adjustment for the Mahony complementary filter to resist aggressive motions.
  • Fusion and motion compensation loop design between vSLAM and IMU.
Following the above-mentioned issues, the online adaptive parameter tuning algorithm and motion compensation loop proposed in this paper have the following three primary contributions.
  • The matching threshold and the $K_p$ gain, which are not easy to determine via manual tuning, are adjusted adaptively according to the UAV's flight status.
  • The proposed online adaptive parameter tuning algorithm can effectively improve the pose estimation accuracy and enhance the frames per second (FPS) by up to 70% and 29%, respectively, on the EuRoC dataset.
  • The developed motion compensation loop subroutine can effectively utilize IMU information to improve the anti-shading robustness of the original vSLAM performance. Moreover, incorporating the presented online adaptive parameter tuning algorithm can further improve the robustness to a higher level.

2. The Framework of the vSLAM System

This paper develops an online adaptive parameter tuning algorithm based on S-PTAM [25], whose performance is comparable to the state-of-the-art ORB-SLAM2 [9] and whose accuracy is better than stereo large-scale direct SLAM (S-LSD-SLAM) [26]. The most significant feature of S-PTAM is that the overall system involves two independent threads, i.e., the tracking thread and the mapping thread. The former mainly performs real-time online localization, and the latter is responsible for local map optimization. In order to introduce the framework of the proposed algorithm, it is essential to briefly explain several important concepts in these threads and the settings adopted in this paper.

2.1. Coordinate Setup

The EuRoC dataset [27] is used for the final algorithm validation. The EuRoC dataset not only provides stereo image sequences and IMU measurements but also the relative pose relationships between the sensors and the ground truth 6-degrees-of-freedom (DoF) pose, which is used as a reference in this paper. More details about this dataset can be accessed in the "Data Availability Statement" section at the end of this paper. In order to facilitate the comparison of results, this paper follows its coordinate settings. The whole coordinate system contains the left camera frame (C), right camera frame (C_right), body frame (B), world frame (W), and navigation frame (S). Both camera frames take the front shooting direction as the z-axis, while the right and bottom of the shooting direction are the x-axis and y-axis, respectively. The optical center of the camera is taken as the origin. The body frame likewise has its z-axis parallel to the front shooting direction, but unlike the camera frames, the upper and right sides of the shooting direction are the x-axis and the y-axis. The world frame, whose z-axis is parallel to gravity, is defined by the instruments that provide the ground truth poses, such as the Vicon tracking system or Leica MS50. In order to show the definition of all frames more clearly, Figure 1 is provided to aid understanding.
It must be noted that the vSLAM system does not know any information about the world frame during the positioning process, which means that the world frame cannot be used as the reference frame to describe the pose, so it is necessary to introduce the navigation frame. In this paper, the first left camera frame, after activating the vSLAM system, is defined as the navigation frame, and all subsequent positioning information is described as the left camera frame with respect to the navigation frame, as shown in Figure 2.

2.2. Keyframe Selection

The current tracking frame is treated as a keyframe as long as it meets the keyframe selection criteria. It is worth mentioning that the core strategy of keyframe selection is to check whether the whole map can provide enough feature points for the current tracking frame to match. S-PTAM in [25] selects the keyframe that has the most covisible points with the current tracking frame and then takes the number of map points observed by this keyframe as the reference. When the number of map points matched with the current tracking frame is less than half of the reference or less than twenty, the tracking frame is considered a keyframe. In short, the keyframe observes more places that have not been explored before, as shown in Figure 3.
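To make this criterion concrete, the following minimal Python sketch expresses the keyframe test described above (function and variable names are hypothetical and not taken from the S-PTAM code base):

```python
def is_keyframe(num_matched_points, reference_keyframe_point_count):
    """Keyframe test following the rule described above.

    num_matched_points: map points matched by the current tracking frame.
    reference_keyframe_point_count: map points observed by the covisible
        keyframe that shares the most points with the current frame.
    """
    # Promote the frame when the map can no longer explain enough of its view.
    return (num_matched_points < 0.5 * reference_keyframe_point_count
            or num_matched_points < 20)
```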

2.3. Tracking Thread

S-PTAM [25] uses the feature-matching information between keyframes and map points to maintain a covisibility relationship, which manages the situation in which each map point is observed by multiple keyframes, as shown in Figure 4.
By searching for keyframes that have a covisibility relationship with the previous tracking frame, the map points that may be observed at that moment can be quickly predicted from the global map, namely the submap. Then, the initial guess of the left camera pose can be predicted by the constant velocity motion model, which can be combined with frustum culling to remove the map points that cannot be seen in the submap and thus obtain the local map. Thereafter, features are matched between the stereo image at that moment and the local map. Finally, according to the matching result, BA is applied to obtain the optimal pose, taking the output of the constant velocity motion model as the initial guess. The overall tracking thread repeats the above steps continuously. The associated detailed flowchart is summarized in Figure 5.
There are two noteworthy points for the aforementioned algorithm flow chart:
  • The output of the constant velocity motion model may be a weak initial guess, especially when UAVs are in aggressive motions such as sharp turnings or lost image information.
  • The optimization of the BA is highly dependent on the accuracy of feature matching.
In other words, the initial guess and the outliers of the matching pair must be handled carefully during the development process. For the former, this paper will obtain a more reliable value by using IMU information, which will be explained clearly in Section 4 and Section 5. First of all, let’s introduce feature matching and BA in the following section.

2.4. Feature Extraction and Matching

Before feature matching, feature extraction is performed on the stereo images to find 2D static feature points, which consist of two parts: key points and descriptors. It is worth mentioning that the key points cannot be matched directly; the further extracted descriptors give the key points the ability to describe their surrounding areas, which makes feature matching possible. In this paper, the Shi–Tomasi corner detection algorithm [28] proposed by Shi and Tomasi is used for key point detection, while binary robust independent elementary features (BRIEF) [29] are used for descriptor extraction. Figure 6 shows the key point detection results for the EuRoC dataset. Examining Figure 6b, it is obvious that fewer key points are detected in the low-illumination area, which may further lead to positioning divergence or drift.
The BRIEF descriptor is a 256-bit binary vector; therefore, the Hamming distance is adopted to evaluate feature similarity. For example, for two binary vectors such as 10001 and 01011, which differ in three bit positions, the Hamming distance is defined as three. As a result, the larger the Hamming distance, the lower the similarity of the two vectors. Following the above description, in this paper, each 3D map point in the local map is checked against all the feature points in the current stereo images by the brute-force feature matching algorithm, and the candidate with the minimum Hamming distance is matched if that distance is below the matching threshold. The illustration of the overall matching algorithm is shown in Figure 7, while Figure 8 shows the result of feature matching on the EuRoC dataset. At the same time, since stereo rectification is performed in advance, ideal matching pairs are guaranteed to have close v (vertical pixel) values, which can be used as a physical constraint to preliminarily filter out extreme outliers. However, as illustrated in Figure 8, although this strong matching constraint can remove most of the outliers, certain mismatched pairs remain inevitable, as shown by the pink lines in Figure 8. Therefore, the BA algorithm, which is susceptible to mismatching, must introduce an additional mechanism to maintain the pose estimation accuracy. The related details are presented in the next section.
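As an illustration of this matching rule, the sketch below (pure NumPy, with hypothetical names, assuming each 256-bit BRIEF descriptor is stored as a 32-byte uint8 array) computes the Hamming distance between a map-point descriptor and all frame descriptors and accepts the best candidate only if it lies below the matching threshold:

```python
import numpy as np

def hamming_distance(d1, d2):
    # Descriptors are 256-bit BRIEF vectors stored as 32 uint8 bytes.
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_map_point(map_descriptor, frame_descriptors, matching_threshold):
    """Brute-force search: return the index of the best match, or None if rejected."""
    distances = [hamming_distance(map_descriptor, d) for d in frame_descriptors]
    best = int(np.argmin(distances))
    # Accept only matches whose Hamming distance is below the threshold.
    return best if distances[best] <= matching_threshold else None
```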

2.5. Bundle Adjustment

Before constructing the BA cost function, the camera model must be established first. The pinhole model is often used to describe the conversion mechanism between the camera frame and the pixel frame. The projection can be expressed as:
$$\begin{bmatrix} u \\ v \end{bmatrix} = \frac{1}{s}\begin{bmatrix} f_x & 0 & c_u \\ 0 & f_y & c_v \end{bmatrix}\left(\begin{bmatrix} R_{RC} & t \\ \mathbf{0} & 1 \end{bmatrix}\begin{bmatrix} p^{R} \\ 1 \end{bmatrix}\right)_{1:3} \triangleq h\left(T_{RC},\, p^{R}\right), \tag{1}$$
where $u, v$ are the pixel coordinate values, while $f_x, f_y$ are the focal lengths (in pixels) in the $u$ and $v$ directions, respectively. $c_u$ and $c_v$ are the corresponding principal-point coordinates. $R_{RC}$ and $t$ are the rotation matrix and translation vector, respectively, of the reference frame ($R$) with respect to the camera frame ($C$), $p^{R}$ is the map point position represented in the reference frame, $s$ is the scale factor, the subscript $1{:}3$ means taking the first three values of the vector, and $T_{RC}$ is the transform matrix, which can be written as:
$$T_{RC} = \begin{bmatrix} R_{RC} & t \\ \mathbf{0} & 1 \end{bmatrix} \in SE(3). \tag{2}$$
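For reference, a minimal NumPy sketch of the projection function $h(\cdot)$ in Equations (1) and (2) is given below (variable names are illustrative only):

```python
import numpy as np

def project(T_RC, p_R, fx, fy, cu, cv):
    """Pinhole projection h(T_RC, p_R): reference-frame point -> pixel (u, v)."""
    p_h = np.append(p_R, 1.0)        # homogeneous point [x, y, z, 1]
    p_C = (T_RC @ p_h)[:3]           # point expressed in the camera frame
    s = p_C[2]                       # scale factor = depth along the optical axis
    u = fx * p_C[0] / s + cu
    v = fy * p_C[1] / s + cv
    return np.array([u, v])
```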
In essence, the BA algorithm is to solve for the optimal camera poses and map point positions by minimizing the re-projection error and its cost function can be expressed as Equation (3).
$$\underset{T_{SC_j,est},\; p^{S}_{i,est}}{\arg\min}\; J = \sum_{k=1}^{m}\left\|\begin{bmatrix} u_k \\ v_k \end{bmatrix} - h\left(T_{SC_j,est},\, p^{S}_{i,est}\right)\right\|^{2}, \tag{3}$$
where $T_{SC_j,est}$ is the transform matrix of the navigation frame with respect to the $j$th camera frame, $p^{S}_{i,est}$ is the map point position represented in the navigation frame, and $u_k, v_k$ are the pixel values on the $j$th camera which match the map point $p^{S}_{i,est}$. The re-projection error can be defined as:
$$e_k = \begin{bmatrix} u_k \\ v_k \end{bmatrix} - h\left(T_{SC_j,est},\, p^{S}_{i,est}\right), \tag{4}$$
where $e_k \in \mathbb{R}^{2\times 1}$. Substituting Equation (4) into Equation (3) and converting it into vector form gives
$$J = e_{re}^{T}\, e_{re}, \tag{5}$$
where $e_{re}$ can be written as:
$$e_{re} = \begin{bmatrix} e_1 \\ \vdots \\ e_m \end{bmatrix} \in \mathbb{R}^{2m\times 1}. \tag{6}$$
After defining the cost function, the derivatives with respect to the states, i.e., the camera poses $T_{SC_j,est}$ and the map points $p^{S}_{i,est}$, must be determined in order to construct the Jacobian matrix of $e_{re}$ that will be used in the optimization procedure. However, it is worth noting that the rotation matrix itself has six constraints, that is, the L2-norm of each column and row must equal 1. In other words, these constraints must hold during each iteration of the optimization, which not only makes the process difficult to perform but also increases the time spent. Fortunately, by introducing Lie algebra [30], this problem can be solved perfectly.
Firstly, define a small pose perturbation as shown in Equation (7).
$$\Delta T_{j,est} = \begin{bmatrix} \Delta R & \Delta t \\ \mathbf{0} & 1 \end{bmatrix} \in SE(3). \tag{7}$$
Equation (7) can be written in the form of a Lie algebra by the logarithmic mapping, shown as follows:
$$\xi_{j,est} = \begin{bmatrix} \rho \\ \varphi \end{bmatrix} = \ln\left(\Delta T_{j,est}\right)^{\vee_{SE(3)}} \in \mathfrak{se}(3), \tag{8}$$
where $\rho \in \mathbb{R}^{3\times 1}$, $\varphi \in \mathbb{R}^{3\times 1}$, and the operator $\vee_{SE(3)}$ is defined as:
$$\begin{bmatrix} 0 & -a_6 & a_5 & a_1 \\ a_6 & 0 & -a_4 & a_2 \\ -a_5 & a_4 & 0 & a_3 \\ 0 & 0 & 0 & 0 \end{bmatrix}^{\vee_{SE(3)}} = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 & a_5 & a_6 \end{bmatrix}^{T}. \tag{9}$$
Alternatively, Equation (8) can be written as Equation (10) by the exponential mapping:
$$\Delta T_{j,est} = \exp\left(\xi_{j,est}^{\wedge_{SE(3)}}\right), \tag{10}$$
where $\wedge_{SE(3)}$ is the inverse operation of Equation (9) and can be defined as:
$$\left(\begin{bmatrix} a_1 & a_2 & a_3 & a_4 & a_5 & a_6 \end{bmatrix}^{T}\right)^{\wedge_{SE(3)}} = \begin{bmatrix} 0 & -a_6 & a_5 & a_1 \\ a_6 & 0 & -a_4 & a_2 \\ -a_5 & a_4 & 0 & a_3 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \tag{11}$$
An additional similar operator, $\wedge_{SO(3)}$, must be introduced, whose definition is shown in Equation (12); both $\wedge_{SE(3)}$ and $\wedge_{SO(3)}$ will be used in the subsequent derivation.
$$\left(\begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}^{T}\right)^{\wedge_{SO(3)}} = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}. \tag{12}$$
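The hat and vee maps of Equations (9), (11) and (12) can be written compactly as follows (a small NumPy sketch used only for illustration):

```python
import numpy as np

def hat_so3(a):
    """so(3) hat map: 3-vector -> 3x3 skew-symmetric matrix (Equation (12))."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def hat_se3(xi):
    """se(3) hat map: 6-vector [rho, phi] -> 4x4 matrix (Equation (11))."""
    M = np.zeros((4, 4))
    M[:3, :3] = hat_so3(xi[3:])   # rotational part phi = (a4, a5, a6)
    M[:3, 3] = xi[:3]             # translational part rho = (a1, a2, a3)
    return M

def vee_se3(M):
    """Inverse of hat_se3 (Equation (9))."""
    return np.array([M[0, 3], M[1, 3], M[2, 3], M[2, 1], M[0, 2], M[1, 0]])
```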
Then, the derivative of the re-projection error for small pose perturbation (represented as the Lie algebra) can be obtained by the chain rule.
$$\frac{\partial e_k}{\partial \xi_{j,est}} = \frac{\partial e_k}{\partial p^{C}_{i,est}}\,\frac{\partial p^{C}_{i,est}}{\partial \xi_{j,est}}, \tag{13}$$
where $p^{C}_{i,est}$ is defined as:
$$p^{C}_{i,est} = \left(T_{SC_j,est}\begin{bmatrix} p^{S}_{i,est} \\ 1 \end{bmatrix}\right)_{1:3} = R_{SC_j,est}\, p^{S}_{i,est} + t_{j,est}. \tag{14}$$
Therefore, according to Equations (1) and (14), the first term $\partial e_k/\partial p^{C}_{i,est}$ in Equation (13) can be derived as follows:
$$\frac{\partial e_k}{\partial p^{C}_{i,est}} = \frac{\partial\left(\begin{bmatrix} u_k \\ v_k \end{bmatrix} - h\left(T_{SC_j,est},\, p^{S}_{i,est}\right)\right)}{\partial p^{C}_{i,est}} = -\begin{bmatrix} \dfrac{f_x}{p^{C}_{i,est,z}} & 0 & -\dfrac{f_x\, p^{C}_{i,est,x}}{\left(p^{C}_{i,est,z}\right)^{2}} \\ 0 & \dfrac{f_y}{p^{C}_{i,est,z}} & -\dfrac{f_y\, p^{C}_{i,est,y}}{\left(p^{C}_{i,est,z}\right)^{2}} \end{bmatrix}, \tag{15}$$
where $p^{C}_{i,est} = \begin{bmatrix} p^{C}_{i,est,x} & p^{C}_{i,est,y} & p^{C}_{i,est,z} \end{bmatrix}^{T}$, and the scale factor $s$ is equal to $p^{C}_{i,est,z}$. Next, the left perturbation model is introduced in order to obtain the second term $\partial p^{C}_{i,est}/\partial \xi_{j,est}$ in Equation (13), which is the derivative with respect to the Lie algebra.
$$\begin{aligned}
\frac{\partial p^{C}_{i,est}}{\partial \xi_{j,est}} &= \frac{\partial\left(\Delta T_{j,est}\, T_{SC_j,est}\begin{bmatrix} p^{S}_{i,est} \\ 1 \end{bmatrix}\right)_{1:3}}{\partial\begin{bmatrix} \rho \\ \varphi \end{bmatrix}} = \frac{\partial\left(\exp\left(\xi^{\wedge_{SE(3)}}\right) T_{SC_j,est}\begin{bmatrix} p^{S}_{i,est} \\ 1 \end{bmatrix}\right)_{1:3}}{\partial\begin{bmatrix} \rho \\ \varphi \end{bmatrix}} \\
&\approx \frac{\partial\left(\left(I + \xi^{\wedge_{SE(3)}}\right)\begin{bmatrix} R_{SC_j,est} & t_{j,est} \\ \mathbf{0} & 1 \end{bmatrix}\begin{bmatrix} p^{S}_{i,est} \\ 1 \end{bmatrix}\right)_{1:3}}{\partial\begin{bmatrix} \rho \\ \varphi \end{bmatrix}} = \frac{\partial\left(\begin{bmatrix} \varphi^{\wedge_{SO(3)}} & \rho \\ \mathbf{0} & 0 \end{bmatrix}\begin{bmatrix} R_{SC_j,est}\, p^{S}_{i,est} + t_{j,est} \\ 1 \end{bmatrix}\right)_{1:3}}{\partial\begin{bmatrix} \rho \\ \varphi \end{bmatrix}} \\
&= \frac{\partial\left(\varphi^{\wedge_{SO(3)}}\left(R_{SC_j,est}\, p^{S}_{i,est} + t_{j,est}\right) + \rho\right)}{\partial\begin{bmatrix} \rho \\ \varphi \end{bmatrix}} = \frac{\partial\left(-\left(R_{SC_j,est}\, p^{S}_{i,est} + t_{j,est}\right)^{\wedge_{SO(3)}}\varphi + \rho\right)}{\partial\begin{bmatrix} \rho \\ \varphi \end{bmatrix}} \\
&= \begin{bmatrix} I & -\left(p^{C}_{i,est}\right)^{\wedge_{SO(3)}} \end{bmatrix}. \tag{16}
\end{aligned}$$
The derivative of the re-projection error with respect to the pose is obtained by substituting Equations (15) and (16) into Equation (13), which yields
$$\frac{\partial e_k}{\partial \xi_{j,est}} = -\begin{bmatrix} \dfrac{f_x}{p^{C}_{i,est,z}} & 0 & -\dfrac{f_x\, p^{C}_{i,est,x}}{\left(p^{C}_{i,est,z}\right)^{2}} \\ 0 & \dfrac{f_y}{p^{C}_{i,est,z}} & -\dfrac{f_y\, p^{C}_{i,est,y}}{\left(p^{C}_{i,est,z}\right)^{2}} \end{bmatrix}\begin{bmatrix} I & -\left(p^{C}_{i,est}\right)^{\wedge_{SO(3)}} \end{bmatrix} \in \mathbb{R}^{2\times 6}. \tag{17}$$
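A direct NumPy transcription of Equation (17) is sketched below (a hypothetical helper, assuming the conventions defined above, including the leading minus sign reconstructed here):

```python
import numpy as np

def skew(v):
    """so(3) hat map (Equation (12))."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def reprojection_pose_jacobian(p_C, fx, fy):
    """2x6 Jacobian of the re-projection error w.r.t. the pose perturbation (Eq. (17))."""
    x, y, z = p_C
    d_proj = np.array([[fx / z, 0.0, -fx * x / z ** 2],
                       [0.0, fy / z, -fy * y / z ** 2]])   # dh/dp_C, Equation (15)
    d_pose = np.hstack((np.eye(3), -skew(p_C)))            # dp_C/dxi, Equation (16)
    return -d_proj @ d_pose
```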
The derivative of the re-projection error with respect to the map point can be obtained by using the chain rule again, which gives
$$\frac{\partial e_k}{\partial p^{S}_{i,est}} = \frac{\partial e_k}{\partial p^{C}_{i,est}}\,\frac{\partial p^{C}_{i,est}}{\partial p^{S}_{i,est}}. \tag{18}$$
It is worth noting that the first term $\partial e_k/\partial p^{C}_{i,est}$ of Equation (18) has already been derived in Equation (15). Therefore, only the second term $\partial p^{C}_{i,est}/\partial p^{S}_{i,est}$ needs to be derived. This result can be obtained directly from Equation (14), as shown in the following:
$$\frac{\partial p^{C}_{i,est}}{\partial p^{S}_{i,est}} = \frac{\partial\left(R_{SC_j,est}\, p^{S}_{i,est} + t_{j,est}\right)}{\partial p^{S}_{i,est}} = R_{SC_j,est}. \tag{19}$$
Substituting Equations (15) and (19) into Equation (18) yields
$$\frac{\partial e_k}{\partial p^{S}_{i,est}} = -\begin{bmatrix} \dfrac{f_x}{p^{C}_{i,est,z}} & 0 & -\dfrac{f_x\, p^{C}_{i,est,x}}{\left(p^{C}_{i,est,z}\right)^{2}} \\ 0 & \dfrac{f_y}{p^{C}_{i,est,z}} & -\dfrac{f_y\, p^{C}_{i,est,y}}{\left(p^{C}_{i,est,z}\right)^{2}} \end{bmatrix} R_{SC_j,est} \in \mathbb{R}^{2\times 3}. \tag{20}$$
According to Equations (17) and (20), the Jacobian matrix of e r e can be constructed by
$$J_{e_{re}} = \begin{bmatrix} \dfrac{\partial e_1}{\partial p^{S}_{1,est}} & \cdots & \dfrac{\partial e_1}{\partial p^{S}_{i,est}} & \dfrac{\partial e_1}{\partial \xi_{1,est}} & \cdots & \dfrac{\partial e_1}{\partial \xi_{j,est}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial e_m}{\partial p^{S}_{1,est}} & \cdots & \dfrac{\partial e_m}{\partial p^{S}_{i,est}} & \dfrac{\partial e_m}{\partial \xi_{1,est}} & \cdots & \dfrac{\partial e_m}{\partial \xi_{j,est}} \end{bmatrix} \in \mathbb{R}^{2m\times(3i+6j)}. \tag{21}$$
Based on the Jacobian matrix Equation (21), the increment of state can be solved by applying Levenberg–Marquardt algorithm (LM) to further obtain the optimal camera poses and map point positions. That is
$$\Delta x = -\left(J_{e_{re}}^{T} J_{e_{re}} + \lambda I\right)^{-1} J_{e_{re}}^{T}\, e_{re}, \tag{22}$$
where $\Delta x$ is defined as:
$$\Delta x = \begin{bmatrix} \Delta p^{S\,T}_{1} & \cdots & \Delta p^{S\,T}_{i} & \xi_{1}^{T} & \cdots & \xi_{j}^{T} \end{bmatrix}^{T}. \tag{23}$$
In each optimization iteration, the state is continuously updated according to Equation (24) until the increment $\Delta x$ is small enough to stop. The termination condition applied in this paper is that the root mean square of $\Delta x$ is less than $10^{-9}$.
$$\begin{aligned} p^{S}_{i,est} &\leftarrow p^{S}_{i,est} + \Delta p^{S}_{i}, \quad i \in [1, i], \\ T_{SC_j,est} &\leftarrow \exp\left(\xi_{j}^{\wedge_{SE(3)}}\right) T_{SC_j,est}, \quad j \in [1, j]. \end{aligned} \tag{24}$$
However, it is worth noting that BA is sensitive to outliers, namely feature mismatching. To solve this issue, some strategies for filtering outliers must be designed. In this paper, the Huber loss will be used to deal with this problem.
Different from the square error expressed in Equation (3), the cost function is now modified as follows by considering the Huber loss.
$$J = \sum_{k=1}^{m} L\left(\begin{bmatrix} u_k \\ v_k \end{bmatrix} - h\left(T_{SC_j,est},\, p^{S}_{i,est}\right)\right), \tag{25}$$
where $L$ is the Huber function and is defined as:
$$L(e) = \sum_{i=1}^{n} \bar{e}_i\;; \qquad \bar{e}_i = \begin{cases} \dfrac{1}{2}e_i^{2}, & |e_i| \le \delta \\ \delta\left(|e_i| - \dfrac{1}{2}\delta\right), & |e_i| > \delta \end{cases} \tag{26}$$
where $e \in \mathbb{R}^{n\times 1}$, $e_i$ represents the $i$th element of the vector $e$, and $\delta$ is the Huber threshold, which is set to 5.991 in this paper and must be set manually in advance. It is evident that when the error is greater than the threshold $\delta$, the error grows linearly instead of quadratically, which can effectively suppress the large errors caused by outliers.
A detailed derivation and explanation of how to incorporate the outlier suppression into the state increment calculation shown in Equation (22) is given in the following. With the use of the Huber function (26), let us redefine the problem as searching for a state increment $\Delta x$, based on a fixed state $x_{fix}$, that makes the modified cost function as small as possible. That is,
$$J = L\left(e_{re}\left(x_{fix}, \Delta x\right)\right). \tag{27}$$
By using the Taylor expansion, the first-order approximation can be obtained:
$$J \approx L\left(\bar{e}_{re}\right), \tag{28}$$
where $\bar{e}_{re}$ is defined as:
$$\bar{e}_{re} = e_{re}\left(x_{fix}\right) + J_{e_{re}}\,\Delta x. \tag{29}$$
Using the chain rule, the derivative of the cost function with respect to the state increment can be expressed by
$$\frac{\partial L\left(\bar{e}_{re}\right)}{\partial \Delta x} = \left(\frac{\partial \bar{e}_{re}}{\partial \Delta x}\right)^{T}\operatorname{diag}\left(\bar{e}_{re}\right)\operatorname{diag}\left(\bar{e}_{re}\right)^{-1}\frac{\partial L\left(\bar{e}_{re}\right)}{\partial \bar{e}_{re}} = J_{e_{re}}^{T}\operatorname{diag}\left(\bar{e}_{re}\right) b = J_{e_{re}}^{T} B\,\bar{e}_{re}, \tag{30}$$
where $b$ and $B$ are defined by
$$b = \operatorname{diag}\left(\bar{e}_{re}\right)^{-1}\frac{\partial L\left(\bar{e}_{re}\right)}{\partial \bar{e}_{re}} \tag{31}$$
and
$$B = \operatorname{diag}(b), \tag{32}$$
respectively.
According to Equations (26) and (31), the detailed configuration of the diagonal matrix B can be written as:
$$B = \begin{bmatrix} \psi\left(\bar{e}_{re,1}\right) & 0 & \cdots & 0 \\ 0 & \psi\left(\bar{e}_{re,2}\right) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \psi\left(\bar{e}_{re,2m}\right) \end{bmatrix}, \tag{33}$$
where $\bar{e}_{re,1}$, $\bar{e}_{re,2}$ and $\bar{e}_{re,2m}$ are the 1st, 2nd and $2m$th elements of the vector $\bar{e}_{re}$, respectively. The function $\psi$ is defined as:
$$\psi(e) = \begin{cases} 1, & |e| \le \delta \\ \dfrac{\delta}{|e|}, & |e| > \delta \end{cases} \tag{34}$$
Substituting Equation (29) into Equation (30) yields
$$\frac{\partial L\left(\bar{e}_{re}\right)}{\partial \Delta x} = J_{e_{re}}^{T} B\,\bar{e}_{re} = J_{e_{re}}^{T} B\left(e_{re}\left(x_{fix}\right) + J_{e_{re}}\,\Delta x\right). \tag{35}$$
If Equation (35) is zero, one has
$$J_{e_{re}}^{T} B\left(e_{re}\left(x_{fix}\right) + J_{e_{re}}\,\Delta x\right) = 0 \;\;\Rightarrow\;\; \Delta x = -\left(J_{e_{re}}^{T} B\, J_{e_{re}}\right)^{-1} J_{e_{re}}^{T} B\, e_{re}\left(x_{fix}\right). \tag{36}$$
Compared with Equation (22), obviously, the cost function with Huber loss is almost equivalent to iteratively reweighted least squares (IRLS). Therefore, based on Equations (36) and (24), the BA algorithm with outlier rejection can be carried out to obtain the optimal camera poses and map point positions.
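Putting Equations (33)–(36) together, one Huber-reweighted, damped increment can be sketched as follows (a simplified illustration with hypothetical names that combines the LM damping of Equation (22) with the IRLS weighting of Equation (36); the actual S-PTAM implementation differs in detail):

```python
import numpy as np

def huber_weights(residuals, delta=5.991):
    """psi(e) from Equation (34), applied element-wise to the residual vector."""
    abs_e = np.abs(residuals)
    return np.where(abs_e <= delta, 1.0, delta / abs_e)

def irls_step(J, residuals, lam=1e-3):
    """One damped, Huber-reweighted increment Delta x (Equations (22) and (36))."""
    B = np.diag(huber_weights(residuals))
    H = J.T @ B @ J + lam * np.eye(J.shape[1])
    return -np.linalg.solve(H, J.T @ B @ residuals)
```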
Following the above description, not only the sensitivity of the BA with respect to the initial value but also its resistance against outliers will be tested. More specifically, the ground truth pose of the camera ($T_{SC,gt}$) is multiplied by a perturbation to serve as the initial value, as shown in Equation (37):
$$T_{SC,initial\ guess} = T_{SC,gt}\exp\left(\zeta^{\wedge_{SE(3)}}\right), \tag{37}$$
where $\zeta \in \mathfrak{se}(3)$. Finally, we plot the root mean square error (RMSE) of the estimated camera pose according to different initial guesses $T_{SC,initial\ guess}$, which are represented by different L2-norm values $\|\zeta\|$. At the same time, it must be emphasized that the BA used in the tracking thread does not optimize the numerous map points, but only the current camera pose, in order to improve real-time performance; this is also known as motion-only BA [8]. Therefore, the following tests (with outliers) are all based on this type of BA. The first row of Figure 9 shows the results without the Huber loss, while the second row presents those with the Huber loss.
Apparently, with the aid of the Huber loss, the performance of BA is much more accurate, which means that the Huber loss can effectively resist the impact of outliers. Meanwhile, the sensitivity to the initial value perturbation is also much lower. The right side of the horizontal axis of each graph in Figure 9 also illustrates that if the initial guess is quite poor, the estimated state will still diverge even when using the Huber loss. Fortunately, the image frame rate is mostly between 20 and 30 FPS, and the UAV's pose usually does not change much in such a short period of time (about 0.05 s). Therefore, a terrible initial guess is unlikely to be produced under normal circumstances.

3. Online Adaptive Matching Threshold Tuning for vSLAM System

3.1. Accuracy Analysis under Different Matching Thresholds

After introducing the problems faced by BA, let’s move on to another issue. Feature matching plays a quite important role in the feature-based vSLAM system, and the matching result will directly affect many critical procedures, such as keyframe selection, BA, triangulation used in mapping, and so on. Undoubtedly, all of the above will directly or indirectly impact the final vSLAM accuracy. Meanwhile, as mentioned in Section 2.4, feature matching relies heavily on the matching threshold, which is a Hamming distance with no physical meaning and must be set before starting the vSLAM system. Based on the above description, a correlation between the matching threshold and the positioning accuracy definitely exists. As a result, a guide regarding the selection of the threshold should be provided.
Take the MH_01_easy series as an example, and observe the positioning accuracy with matching thresholds 10, 15 and 20, respectively. As listed in Table 1, the absolute trajectory error (ATE) and relative pose error (RPE) [31] are adopted as the benchmark to evaluate the accuracy.
According to Table 1, it can be seen that the RPE and ATE are not the smallest for the most severe matching threshold, 10. The reason is that when the matching threshold is very small, it drastically reduces the number of correct match pairs and thus loses the constraint for state optimization. However, the increase in the matching threshold does not necessarily guarantee an increase in localization accuracy. Table 1 clearly reveals that a robust vSLAM should be able to adjust the associated matching threshold automatically. The online automatic threshold scheduling not only improves the overall localization accuracy but prevents tedious manual adjustment as well. In the following section, an online adaptive matching threshold tuning algorithm is proposed.

3.2. Online Adaptive Matching Threshold Tuning Algorithm

This research uses the more physically meaningful displacement and yaw-angle differences between keyframes as indicators to adjust the matching threshold. As mentioned in Section 2.2, the overlap rate between a keyframe's FoV and the current global map is relatively low. Therefore, when the displacement and yaw-angle difference between two adjacent keyframes are less than certain thresholds, defined as $thres_{m,1}$ and $thres_{yaw,1}$, this contradicts the definition of a keyframe. It can then be inferred that the Hamming-distance threshold is too strict, and its value is increased immediately. In the opposite case, the matching threshold is reduced. Because the initial stereo image is the most stable, the initial matching threshold is set to 10, and an upper bound $thres_{max}$ and lower bound $thres_{min}$ are set to limit the matching threshold range. The overall process is summarized in Figure 10 and illustrated by the sketch below, and the parameter settings used in this paper are listed in Table 2.
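A minimal sketch of this update rule is given below; the limit values correspond to the parameters of Table 2, the unit step size is an assumption made here for illustration, and all names are hypothetical:

```python
def update_matching_threshold(threshold, d_pos, d_yaw,
                              thres_m_1, thres_yaw_1,
                              thres_min, thres_max, step=1):
    """Adjust the Hamming-distance matching threshold whenever a keyframe is created.

    d_pos, d_yaw: displacement and yaw-angle difference between the two
    latest keyframes.
    """
    if d_pos < thres_m_1 and d_yaw < thres_yaw_1:
        # Keyframes too close together: matching is too strict, so relax it.
        threshold += step
    else:
        # Enough new view content: matching can be made stricter.
        threshold -= step
    return max(thres_min, min(thres_max, threshold))
```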

4. Online Adaptive Parameter Tuning for Mahony Complementary Filter

4.1. Mahony Complementary Filter

An IMU is an indispensable component of UAVs, and most flight control algorithms rely heavily on it for attitude estimation. In other words, IMU information is basically always available on UAVs. Besides, it is expected that the positioning accuracy and robustness can be improved by fusing IMU and vSLAM information. Therefore, this paper not only uses vSLAM to obtain positioning information but also uses the Mahony complementary filter to calculate the UAV's attitude from IMU measurements. First, a brief introduction to the Mahony complementary filter is given below.
The first step to carrying out the Mahony complementary filter algorithm is to integrate the gyroscope measurements or angular velocity, which can be expressed as
$$q_{\bar{G}B,k} = \operatorname{normalize}\left(q_{\bar{G}B,k-1} + 0.5\,\Delta k\,\Omega\left(\omega_{k-1,mes}\right) q_{\bar{G}B,k-1}\right), \tag{38}$$
where $\bar{G}$ represents the gravity frame, whose z-axis is always parallel to gravity. $q_{\bar{G}B,k}$ is the normalized quaternion that represents the attitude of the gravity frame with respect to the body frame at moment $k$. $\omega_{k-1,mes}$ is the angular velocity measurement at moment $k-1$, and $\Delta k$ is the sampling period of the IMU, which is 0.005 s in the EuRoC dataset. $\Omega(\cdot)$ is defined as:
$$\Omega(\omega) = \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}\;; \qquad \omega = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}. \tag{39}$$
The unit gravity vector $\bar{a}_{k,est}$ is further estimated from the attitude $q_{\bar{G}B,k}$ derived in Equation (38), as shown in Equation (40):
$$\tilde{a}_{k,est} = q_{\bar{G}B,k}^{*} \otimes \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \otimes q_{\bar{G}B,k}\;; \qquad \tilde{a}_{k,est} = \begin{bmatrix} 0 \\ \bar{a}_{k,est} \end{bmatrix}, \tag{40}$$
where $\bar{a}_{k,est}$ is the estimated unit gravity vector, the star in $q_{\bar{G}B,k}^{*}$ represents the quaternion conjugate, and the quaternion product $\otimes$ is defined as:
$$\begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix} \otimes \begin{bmatrix} q_1' \\ q_2' \\ q_3' \\ q_4' \end{bmatrix} = \begin{bmatrix} q_1 q_1' - q_2 q_2' - q_3 q_3' - q_4 q_4' \\ q_1 q_2' + q_2 q_1' + q_3 q_4' - q_4 q_3' \\ q_1 q_3' - q_2 q_4' + q_3 q_1' + q_4 q_2' \\ q_1 q_4' + q_2 q_3' - q_3 q_2' + q_4 q_1' \end{bmatrix}. \tag{41}$$
After obtaining the estimated unit gravity vector, the error $e_k$ can be calculated by comparing $\bar{a}_{k,est}$ with the acceleration measurement:
$$e_k = \bar{a}_{k,est} \times \frac{a_{k,filter}}{\left\|a_{k,filter}\right\|}, \tag{42}$$
where $a_{k,filter}$ represents the acceleration measurement after passing through a low-pass filter (LPF) at moment $k$. The setting of the LPF in this paper is described in Section 5.1. Then, a proportional gain $K_p$ is applied for attitude estimation correction purposes:
$$u_k = K_p\, e_k. \tag{43}$$
The correction effort $u_k$ is used to compensate the angular velocity measurement. As a result, the new attitude estimate at the next moment can be derived by integrating the compensated angular velocity, shown as follows:
$$q_{\bar{G}B,k+1} = \operatorname{normalize}\left(q_{\bar{G}B,k} + 0.5\,\Delta k\,\Omega\left(\omega_{k,mes} + u_k\right) q_{\bar{G}B,k}\right). \tag{44}$$
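For clarity, one complete update following Equations (40)–(44) can be sketched in NumPy as follows. This is a minimal sketch with hypothetical helper names, assuming scalar-first quaternions and the sign/frame conventions reconstructed above; the rotation direction in the gravity estimate follows the frame convention assumed in this sketch:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion q = [w, x, y, z] (scalar first)."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)]])

def omega(w):
    """Omega(w) from Equation (39)."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def mahony_step(q, w_mes, a_filter, Kp, dt=0.005):
    """One complementary-filter update (Equations (40)-(44))."""
    # Estimated unit gravity direction in the body frame (vector part of Eq. (40)).
    a_est = quat_to_rot(q) @ np.array([0.0, 0.0, 1.0])
    # Error between estimated and measured gravity directions (Equation (42)).
    e = np.cross(a_est, a_filter / np.linalg.norm(a_filter))
    u = Kp * e                                   # Equation (43)
    # Integrate the corrected angular rate and re-normalize (Equation (44)).
    q_next = q + 0.5 * dt * omega(w_mes + u) @ q
    return q_next / np.linalg.norm(q_next)
```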
However, there are some problems in this process, which will be analyzed and improved in the following section.

4.2. Online Adaptive $K_p$ Tuning

According to Equations (42) and (43), the attitude error correction in Equation (44) relies on a premise: the acceleration measurement should contain gravity information only, so that it truly reflects the accumulated attitude error. In other words, the compensation timing of Equation (44) should be restricted. Put simply, when the difference between the acceleration measurement and gravity is greater than a threshold $thres_{norm}$, the compensation effort $u_k$ should be disabled.
Based on the above description, a confidence level is introduced to represent the difference between the acceleration measurement and gravity. The smaller the difference between the two, the less UAV acceleration there is in the acceleration measurement, and the more truly the measurement reflects the accumulated error. Based on this motivation, this paper further proposes an adaptive $K_p$, applied in Equation (43), that complies with different motions in order to improve the estimation accuracy.
The following presents several adjustment strategies (including control and experimental group) to analyze the accuracy improvement.
  • Pure Integration (control group):
The attitude is directly obtained by integrating the angular velocity according to Equation (38).
  • Arctan Method (control group):
Obtain the Euler angles directly through Equation (45).
$$\phi = \operatorname{atan2}\left(a_{k,y,filter},\, a_{k,z,filter}\right), \qquad \theta = \operatorname{atan2}\left(-a_{k,x,filter},\, \sqrt{a_{k,y,filter}^{2} + a_{k,z,filter}^{2}}\right). \tag{45}$$
  • Pure Mahony (control group):
Regardless of whether the difference between the acceleration measurement and gravity is smaller than the threshold $thres_{norm}$, the angular velocity is always compensated according to Equation (44).
  • Conditional Method (experimental group):
If the difference between the acceleration measurement and gravity is greater than the threshold $thres_{norm}$, $u_k$ is set to zero.
  • Adaptive Method Version. 1 (experimental group):
Based on the Conditional Method, $K_p$ in Equation (43) is further adjusted according to Equation (46):
$$u_k = K_p \exp\left(-\frac{\left|\left\|a_{k,filter}\right\| - g\right|}{thres_{norm}}\right) e_k, \tag{46}$$
where $g$ is the gravitational acceleration and is set to 9.82121 m/s² in this paper.
  • Adaptive Method Version. 2 (experimental group):
Based on the Adaptive Method Version.1, a minor change is made
$$u_k = K_p \exp\left(-\frac{\left|\left\|a_{k,filter}\right\| - g\right|}{\kappa\, thres_{norm}}\right) e_k, \tag{47}$$
where $\kappa$ is an additional parameter that adjusts the sensitivity of $K_p$ to acceleration changes.
  • Adaptive Method Version. 3 (experimental group):
Based on the Adaptive Method Version 2, Equation (47) is modified into the following adaptation law:
$$u_k = \left(K_p + \Delta K_p \exp\left(-\frac{\left|\left\|a_{k,filter}\right\| - g\right|}{\kappa\, thres_{norm}}\right)\right) e_k. \tag{48}$$
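The three adaptive variants above differ only in how the effective gain is computed. A small sketch of the Version 3 law (Equation (48)) is given below; the negative exponent is reconstructed from the stated motivation that the correction should shrink as the acceleration departs from gravity, and all names are illustrative:

```python
import numpy as np

def adaptive_kp_v3(a_filter, Kp, dKp, kappa, thres_norm, g=9.82121):
    """Effective gain of Adaptive Method Version 3 (Equation (48)).

    The exponential confidence term decays as the measured acceleration norm
    departs from gravity, so aggressive motions receive a smaller correction.
    """
    deviation = abs(np.linalg.norm(a_filter) - g)
    return Kp + dKp * np.exp(-deviation / (kappa * thres_norm))
```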
Table 3 shows the parameter settings used in this paper. Then, the EuRoC dataset is used to evaluate the above seven adjustment strategies. Besides, the estimated results are shown in Figure 11, Figure 12 and Figure 13. These figures show the roll and pitch angle estimation results for the different adjustment strategies in the first and second rows of each figure, respectively, while the third row represents a flag that indicates whether the difference between the measured acceleration and gravity is smaller than the threshold $thres_{norm}$. Finally, the RMSE of the Euler angles is listed in Table 4.
Examining Table 4, it can be observed that Pure Mahony achieves good accuracy in the MH_01_easy and MH_02_easy series. The main reasons are analyzed further. According to [27], the UAV's motions in both series are relatively slow compared with the rest of the dataset, which means that the error $e_k$ is reliable most of the time, so Pure Mahony, which always compensates the angular velocity, achieves high attitude estimation accuracy. At the same time, we observe that the Conditional Method has considerably higher accuracy than Pure Mahony for the aggressive-motion series, such as MH_03_medium, MH_04_difficult and MH_05_difficult. These experimental results strongly show that conditional compensation can definitely improve accuracy under aggressive motions. In regard to the Adaptive Method Versions 1–3, we find that the Adaptive Method Version 3 produces more stable and accurate results, which sufficiently shows that the online $K_p$ adjustment is helpful for attitude estimation under aggressive motions. Finally, in the MH_04_difficult series, the results show that Pure Integration achieves good accuracy in the roll angle; this is because the EuRoC dataset uses a higher-grade IMU and, according to [27], MH_04_difficult has the shortest duration among the five series, so the accumulated error induced by pure integration is not significant. There is no doubt that using pure integration to obtain the attitude over longer periods will definitely cause estimation drift. Briefly, this paper adopts the Adaptive Method Version 3 for the attitude calculation.

5. Motion Compensation Loop Design

5.1. Static State Detection Algorithm

With the above introduction and analysis of the Mahony complementary filter, it is obvious that the attitude calculated through the IMU is relative to the gravity frame, while the information obtained by vSLAM is relative to the navigation frame. Therefore, information alignment from the IMU gravity frame to the vSLAM navigation frame is essential for pose compensation.
Equation (49) shows the process of converting the IMU information into the same reference frame as vSLAM.
$$T_{CS,k,imu} = T_{B_0 S}\, T_{\bar{G}B_0}\, T_{B\bar{G},k,imu}\, T_{CB}, \tag{49}$$
where $T_{B\bar{G},k,imu}$ is the $k$th pose of the body frame relative to the gravity frame calculated from the IMU, $T_{CS,k,imu}$ is the converted IMU information, and $B_0$ is the first body frame. Besides, according to the definition of the navigation frame, $T_{B_0 S}$ is the inverse matrix of $T_{CB}$, which represents the relative pose between the body frame and the left camera frame. It is worth mentioning that $T_{B_0 S}$ and $T_{CB}$ are fixed values related only to the hardware setup and must be obtained from a calibration procedure in advance; fortunately, in the EuRoC dataset, these values are precisely provided. It is obvious that each conversion involves $T_{\bar{G}B_0}$; in other words, it must be as accurate as possible in order to avoid negative compensation effects. Meanwhile, when a UAV is stationary, the acceleration measurements, which do not involve linear acceleration, are much cleaner. Based on this stationary property, accelerometer measurements in the static state are used to obtain an accurate $T_{\bar{G}B_0}$ before activating the vSLAM system. First, in order to know exactly when the UAV is stationary, this paper further proposes a static state detection algorithm, which is illustrated in detail below.
However, even in the stationary status, the accelerometer inevitably contains high-frequency noise and thereby causes inaccurate pose estimation. An LPF, as shown in Equation (50), is applied to suppress the high-frequency noise appearing in the acceleration measurements
$$a_{k,filter} = \beta\, a_{k-1,filter} + (1-\beta)\, a_{k,mes}, \tag{50}$$
where $a_{k,filter}$ is the $k$th filtered acceleration measurement and $a_{k,mes}$ is the $k$th unfiltered acceleration measurement, while $\beta$ is defined as:
$$\beta = \frac{1}{1 + 2\pi f_c\,\Delta k}, \tag{51}$$
where $f_c$ is the cutoff frequency and is set to 0.4775 Hz in this paper.
After filtering, the acceleration measurements are placed into a moving window (500 samples in size). When the moving window is full, the judgment procedure is triggered. If the standard deviation within the moving window is less than $thres_{std}$, which is set to 0.02, and the difference between the latest measurement and gravity is less than $thres_{norm}$ as well, the moment is considered to be stationary. If the condition is not satisfied, 70% of the historical data in the moving window is cleared, and the above procedure is repeated until the static state is detected. The flowchart of the overall detection algorithm is illustrated in Figure 14, while Figure 15 shows the results of the static state detection for the EuRoC dataset. In Figure 15, the filtered 3-axis acceleration and the L2-norm of the acceleration are illustrated, respectively, and the timing of the static state determined by the proposed algorithm is presented as well. According to the detection results, the MH_01_easy series is considered to be stationary at 23.5 s, the MH_03_medium series at 11.25 s, and the MH_04_difficult series at 13 s. The results also show that the UAVs are not immediately detected as static when they become stationary. The reason is that the detection accuracy significantly affects the subsequent compensation procedure, so the relevant thresholds in the static state detection algorithm are set more strictly to guarantee the detection quality.
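The detection logic of Figure 14 can be summarized by the following sketch. Window size, $thres_{std}$, cutoff frequency, sampling period and the 70% discard rule follow the values quoted above; storing the filtered acceleration norm in the window and discarding the oldest samples are assumptions made here for illustration, and all names are hypothetical:

```python
import numpy as np
from collections import deque

class StaticStateDetector:
    """Moving-window static-state test on low-pass-filtered accelerometer norms."""

    def __init__(self, thres_norm, window=500, thres_std=0.02, g=9.82121,
                 fc=0.4775, dt=0.005):
        self.window = deque(maxlen=window)
        self.thres_std, self.thres_norm, self.g = thres_std, thres_norm, g
        self.beta = 1.0 / (1.0 + 2.0 * np.pi * fc * dt)   # Equation (51)
        self.a_filter = None

    def update(self, a_mes):
        # First-order low-pass filter, Equation (50).
        self.a_filter = (a_mes if self.a_filter is None
                         else self.beta * self.a_filter + (1 - self.beta) * a_mes)
        self.window.append(np.linalg.norm(self.a_filter))
        if len(self.window) < self.window.maxlen:
            return False          # wait until the moving window is full
        samples = np.array(self.window)
        if (samples.std() < self.thres_std
                and abs(samples[-1] - self.g) < self.thres_norm):
            return True           # static state detected
        # Otherwise discard the oldest 70% of the history and keep collecting.
        keep = list(samples[int(0.7 * len(samples)):])
        self.window.clear()
        self.window.extend(keep)
        return False
```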

5.2. Motion Compensation Process

In order to fit the body frame setup and facilitate the calculation of $T_{\bar{G}B_0}$, the gravity frame is first rotated by −90 degrees about its y-axis, so that the x-axis of the rotated gravity frame is parallel to the opposite direction of gravity. Based on this coordinate configuration, when the UAV is stationary, the relationship between the acceleration measurements and gravity can be expressed by
$$\begin{bmatrix} a_{aver,x} \\ a_{aver,y} \\ a_{aver,z} \end{bmatrix} = R \begin{bmatrix} g \\ 0 \\ 0 \end{bmatrix}, \tag{52}$$
where $a_{aver,x}$, $a_{aver,y}$, $a_{aver,z}$ are the averages of the latest measurements within 0.5 s in the moving window, $\phi_0$, $\theta_0$ and $\psi_0$ are the Euler angles of the body frame with respect to the rotated gravity frame in the static state, and $R$ is defined as:
$$R = \begin{bmatrix} \cos\phi_0 & -\sin\phi_0 & 0 \\ \sin\phi_0 & \cos\phi_0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos\theta_0 & 0 & \sin\theta_0 \\ 0 & 1 & 0 \\ -\sin\theta_0 & 0 & \cos\theta_0 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi_0 & -\sin\psi_0 \\ 0 & \sin\psi_0 & \cos\psi_0 \end{bmatrix}. \tag{53}$$
Substituting Equation (53) into Equation (52) gives
$$\begin{bmatrix} a_{aver,x} \\ a_{aver,y} \\ a_{aver,z} \end{bmatrix} = g\begin{bmatrix} \cos\phi_0\cos\theta_0 \\ \sin\phi_0\cos\theta_0 \\ -\sin\theta_0 \end{bmatrix}. \tag{54}$$
According to Equation (54), $\phi_0$ and $\theta_0$ can be calculated by
$$\phi_0 = \operatorname{atan2}\left(a_{aver,y},\, a_{aver,x}\right), \qquad \theta_0 = \operatorname{atan2}\left(-a_{aver,z},\, \sqrt{a_{aver,x}^{2} + a_{aver,y}^{2}}\right). \tag{55}$$
Letting $\psi_0$ equal 0 degrees, the rotation matrix of the first body frame with respect to the gravity frame can then be derived as follows:
$$R_{B_0\bar{G}} = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} H, \tag{56}$$
where $H$ is defined as
$$H = \begin{bmatrix} \cos\theta_0 & 0 & -\sin\theta_0 \\ 0 & 1 & 0 \\ \sin\theta_0 & 0 & \cos\theta_0 \end{bmatrix}\begin{bmatrix} \cos\phi_0 & \sin\phi_0 & 0 \\ -\sin\phi_0 & \cos\phi_0 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{57}$$
Note that the setting of $\psi_0$ does not affect the attitude estimation (in the Mahony complementary filter). In this paper, the origins of the gravity frame and the first body frame are further defined to be coincident, as shown in Equation (58):
$$t_{origin,\bar{G}} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^{T}. \tag{58}$$
According to Equations (56) and (58), $T_{\bar{G}B_0}$ can be obtained as follows:
$$T_{\bar{G}B_0} = T_{B_0\bar{G}}^{-1} = \begin{bmatrix} R_{B_0\bar{G}} & t_{origin,\bar{G}} \\ \mathbf{0} & 1 \end{bmatrix}^{-1}. \tag{59}$$
The flowchart of the $T_{\bar{G}B_0}$ calculation is illustrated in Figure 16.
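Equations (52)–(59) amount to only a few lines of code. The sketch below uses the sign conventions reconstructed above and hypothetical names; it is an illustration, not the authors' implementation:

```python
import numpy as np

def initial_gravity_to_body_transform(a_aver):
    """Compute T_{Gbar B0} from the averaged static acceleration (Eqs. (55)-(59))."""
    ax, ay, az = a_aver
    phi0 = np.arctan2(ay, ax)                               # Equation (55)
    theta0 = np.arctan2(-az, np.sqrt(ax ** 2 + ay ** 2))
    Ry_m90 = np.array([[0.0, 0.0, -1.0],                    # -90 deg rotation about y
                       [0.0, 1.0, 0.0],
                       [1.0, 0.0, 0.0]])
    Ry_theta = np.array([[np.cos(theta0), 0.0, -np.sin(theta0)],
                         [0.0, 1.0, 0.0],
                         [np.sin(theta0), 0.0, np.cos(theta0)]])
    Rz_phi = np.array([[np.cos(phi0), np.sin(phi0), 0.0],
                       [-np.sin(phi0), np.cos(phi0), 0.0],
                       [0.0, 0.0, 1.0]])
    T_B0_Gbar = np.eye(4)                                    # zero translation, Eq. (58)
    T_B0_Gbar[:3, :3] = Ry_m90 @ (Ry_theta @ Rz_phi)         # Equations (56) and (57)
    return np.linalg.inv(T_B0_Gbar)                          # Equation (59)
```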
The estimated $\phi_0$ and $\theta_0$ for the EuRoC dataset are listed in Table 5. The errors are calculated by comparison with the ground truth. According to Table 5, it can be found that the proposed algorithm can accurately detect the stationary state, and the associated estimation errors for $\phi_0$ and $\theta_0$ are almost all less than 0.1 degree, which meets the accuracy demand of $T_{\bar{G}B_0}$.
After obtaining the accurate $T_{\bar{G}B_0}$, and based on the attitude estimated by the Mahony complementary filter, the position information can be derived by integrating the free acceleration twice:
$$\begin{aligned} t^{\bar{G}}_{k,origin,B} &= t^{\bar{G}}_{k-1,origin,B} + v^{\bar{G}}_{k-1,origin,B}\,\Delta k + 0.5\, a^{\bar{G}}_{k-1,free}\,\Delta k^{2}, \\ v^{\bar{G}}_{k,origin,B} &= v^{\bar{G}}_{k-1,origin,B} + a^{\bar{G}}_{k-1,free}\,\Delta k, \end{aligned} \tag{60}$$
where $t^{\bar{G}}_{k,origin,B}$ and $v^{\bar{G}}_{k,origin,B}$ represent the $k$th position and velocity of the UAV in the gravity frame, respectively, and $a^{\bar{G}}_{k,free}$ is the $k$th acceleration measurement with the gravitational component removed, also known as the free acceleration, which can be defined as:
$$a^{\bar{G}}_{k,free} = R_{B\bar{G},k}\, a_{k,filter} - \begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix}, \tag{61}$$
where $R_{B\bar{G},k}$ is obtained by converting $q_{\bar{G}B,k}$, which is derived from the Mahony complementary filter. Besides, it is worth mentioning that in the EuRoC dataset, the IMU and image information are aligned every ten sampling periods ($10\,\Delta k$), which is a good time to fuse them. Therefore, first define $T_{B\bar{G},k+10}$ as:
$$T_{B\bar{G},k+10} = \begin{bmatrix} R_{B\bar{G},k+10} & t^{\bar{G}}_{k+10,origin,B} \\ \mathbf{0} & 1 \end{bmatrix}, \tag{62}$$
and convert the reference coordinate system of $T_{B\bar{G},k+10}$ by Equation (49) to further obtain $T_{CS,i+1,com}$, which serves as the initial guess of the BA in the tracking thread at moment $i+1$, replacing the constant velocity motion model. These processes are illustrated in Figure 17.
However, due to the double integration used in Equation (60), the position $t^{\bar{G}}_{k,origin,B}$ and velocity $v^{\bar{G}}_{k,origin,B}$ errors accumulate rapidly. Therefore, the position and velocity are reset by the compensated vSLAM output $T_{CS,i+1}$ to reduce the accumulated error. Firstly, the reference frame of $T_{CS,i+1}$ is converted by Equation (63):
$$T_{B\bar{G},k+10,res} = T_{B_0\bar{G}}\, T_{SB_0}\, T_{CS,i+1}\, T_{BC}, \tag{63}$$
where $T_{B\bar{G},k+10,res}$ is defined as:
$$T_{B\bar{G},k+10,res} = \begin{bmatrix} R_{B\bar{G},k+10,res} & t^{\bar{G}}_{k+10,res} \\ \mathbf{0} & 1 \end{bmatrix}. \tag{64}$$
Thus, the reset position $t^{\bar{G}}_{k+10,res}$ can be obtained, while the reset velocity $v^{\bar{G}}_{k+10,res}$ can be derived from the previous reset position, as shown in Equation (65):
$$v^{\bar{G}}_{k+10,res} = \frac{t^{\bar{G}}_{k+10,res} - t^{\bar{G}}_{k,res}}{10\,\Delta k}. \tag{65}$$
After obtaining the reset position and velocity, Equation (60) can be performed again based on them. The overall compensation subroutine, which runs in real time, is illustrated in Figure 18.
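The propagation and reset steps of Equations (60)–(65) reduce to simple updates; a sketch with hypothetical names is shown below:

```python
import numpy as np

def propagate(t, v, a_free, dt=0.005):
    """Dead-reckon position and velocity in the gravity frame (Equation (60))."""
    t_next = t + v * dt + 0.5 * a_free * dt ** 2
    v_next = v + a_free * dt
    return t_next, v_next

def free_acceleration(R_B_Gbar, a_filter, g=9.82121):
    """Remove gravity from the body-frame measurement (Equation (61))."""
    return R_B_Gbar @ a_filter - np.array([0.0, 0.0, g])

def reset_velocity(t_res_new, t_res_prev, n=10, dt=0.005):
    """Reset velocity from two consecutive vSLAM-corrected positions (Equation (65))."""
    return (t_res_new - t_res_prev) / (n * dt)
```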

6. Experiment Verification

In this section, the advantages of the proposed algorithm are verified using the EuRoC dataset. Not only are the pose estimation accuracy, FPS, and anti-shading robustness examined, but the online adaptive tuning processes of both the matching threshold and $K_p$ are also shown to give the reader a more comprehensive understanding of the proposed algorithm.

6.1. Ablation Study for Accuracy Comparison

This section mainly analyzes the estimation accuracy of the following four strategies so as to independently evaluate the effect of each online adaptive tuning algorithm:
  • Case 1: neither of the proposed online adaptive tuning algorithms is used.
  • Case 2: only the online adaptive matching threshold tuning algorithm is used.
  • Case 3: only the online adaptive $K_p$ tuning algorithm is used.
  • Case 4: both proposed online adaptive tuning algorithms are used.
Moreover, to validate the contribution of the proposed algorithm for localization accuracy objectively, the loop closure for drift compensation in the following experiments is not considered. Based on the above four conditions, the corresponding accuracy analysis results are shown in Figure 19, Figure 20, Figure 21 and Figure 22. In Figure 19 and Figure 20, the error bars in each dataset series with respect to the above four different strategies are displayed from left to right.
Experimental examination clearly shows that the proposed online adaptive matching threshold tuning algorithm can effectively improve the accuracy. In particular, in the MH_03_medium series, 63% and 70% accuracy improvements are achieved in RPE and ATE, respectively. Even though the accuracy does not improve in the MH_04_difficult series, the overall accuracy does not change drastically, and meanwhile, according to Figure 20, the attitude estimation accuracy is still improved. On the other hand, the online adaptive $K_p$ tuning algorithm is not significantly helpful for accuracy improvement, even though in Table 4 the IMU data of each dataset series were tested independently and the results show that it can improve the attitude estimation accuracy effectively. Based on the conclusion drawn from Figure 9, it can be further inferred that since the motion-only BA with the Huber robust kernel significantly reduces the sensitivity to the initial value, the pose compensation with the IMU does not have a noticeable effect. However, for severe situations, such as when the stereo cameras are temporarily occluded, the IMU's pose compensation plays an important role in maintaining positioning stability, which is tested and analyzed in the next section. Figure 23 shows the time cost for the four different strategies, while Table 6 lists the computer specifications used in this paper. From the onboard implementation point of view, an Intel NUC, a lightweight and small computer with comparable computing power, is sufficient to carry out the proposed algorithms. Figure 23 clearly shows that the online adaptive matching threshold tuning algorithm can improve the FPS effectively. The reason is that the proposed adaptive matching threshold tuning algorithm reasonably adjusts the matching threshold to avoid over-triggering the keyframe selection, namely excessive mapping and the involvement of too many map points that need optimization.
Figure 24 shows the automatic tuning processes of the matching threshold and the $K_p$ parameter (taking the MH_01_easy and MH_03_medium series as examples). The matching threshold is not adjusted at every frame, but only when a keyframe is established, which corresponds to the content described in Section 3.2. In contrast, the online adaptive $K_p$ tuning is performed at every moment, as long as the acceleration information is available.

6.2. Anti-Shading Robustness Test

Since the IMU information is integrated into the vSLAM algorithm, the UAVs’ flight trajectories can temporarily rely on the motion compensation loop, which further enhances the overall anti-shading robustness of the original vSLAM. Following the above statement, the robustness of the overall system against shading will be examined. The examination method is to turn off the left and right camera images simultaneously to simulate the shading situation. In order to analyze the anti-shading performance more comprehensively, the following three vSLAM scenarios will be examined and analyzed.
  • Case A: neither the online adaptive parameter tuning algorithm nor the motion compensation loop subroutine is used.
  • Case B: only the motion compensation loop subroutine is used.
  • Case C: both the proposed online adaptive parameter tuning algorithm and the motion compensation loop subroutine are used.
Table 7 shows the corresponding results, including the time stamp for image loss, and the pose accuracy represented in RPE and ATE.
According to Table 7, Case A, the original vSLAM system, is unable to perform localization successfully without the proposed online adaptive algorithm and the motion compensation loop subroutine. In addition, in Case B, even though the system can work properly in the MH_01_easy, MH_02_easy and MH_05_difficult series, the UAV's localization still fails for the MH_03_medium and MH_04_difficult series. On the contrary, by using the proposed online adaptive algorithm and the motion compensation loop subroutine, namely Case C, not only can the pose estimation be successfully conducted in all data series, but the localization accuracy can also be maintained. The above experimental results firmly reveal that the developed motion compensation loop subroutine can indeed increase the robustness against shading through the addition of IMU information, and more importantly, a better level of robustness is achieved by further combining the online adaptive parameter tuning algorithms. The key factor is that the online adaptive parameter tuning algorithm can effectively increase the attitude estimation accuracy, thereby improving the motion compensation quality, which relies heavily on the estimated free acceleration and attitude shown in Equation (61).
The trajectories of the anti-shading experiment are shown in Figure 25. The black crosses in these figures represent the localization results (Case. C) via using the proposed algorithm during the critical image loss period. These figures also show that the positioning system without the proposed algorithm not only fails to track after image loss, but also makes a straight-line estimation based on the constant velocity motion model. This experiment implies that there will be less chance to pull back the trajectory under aggressive motions even when the image information recovers. On the contrary, these failures can be solved successfully by applying the proposed method.
In the following, the results of the anti-shading robustness tests are presented side by side in Figure 26 to make the robustness analysis of the proposed vSLAM more intuitive. Figure 26a shows the localization results of the three scenarios after masking the stereo camera at the 849th frame of the MH_03_medium series. The proposed algorithm is the only one that completes the indoor flight localization and mapping successfully, while the others fail to track the flight trajectory; this result again highlights that the proposed algorithm effectively improves the robustness of the vSLAM system against shading. In addition, Figure 26b shows the localization results of the three scenarios before the stereo camera is masked. Although all three scenarios localize successfully, the proposed algorithm uses the lowest number of keyframes, which implies that the developed vSLAM algorithm preserves precise localization with fewer map feature points. It is worth mentioning that requiring fewer map feature points reduces the computational load and saves considerable memory, which makes the proposed algorithm more suitable for practical applications on lightweight UAVs.

7. Conclusions

UAVs are among the most promising vehicles of recent years, and the underlying positioning technology is receiving more and more attention. However, the over-reliance on GPS has limited the application diversity of UAVs. In order to solve the navigation problem in GPS-denied or GPS-interfered environments, stereo vSLAM solutions are gradually gaining attention. Therefore, this paper presents an improved UAV vSLAM system based on the S-PTAM architecture and the Mahony complementary filter. The proposed online adaptive matching threshold tuning algorithm and online adaptive K_p tuning algorithm improve the overall vSLAM accuracy and robustness. These adaptive mechanisms automatically adjust the influential matching threshold and the K_p gain used in the Mahony complementary filter online, eliminating the tedious manual adjustment of these non-physically meaningful parameters. In order to fuse the IMU and vSLAM information, an additional static state detection algorithm is proposed, which has been tested and shown to detect the static state accurately. Once a static state has been detected, an accurate initial relationship can be computed, which greatly helps the subsequent information conversion; the error of this initial relationship is less than about 0.1 degree. Based on this accurate initial relationship, a reliable conversion mechanism between the IMU and vSLAM is established. Finally, a series of experimental studies is used to validate the proposed algorithms and the vSLAM architecture. The results show that the online adaptive matching threshold tuning algorithm improves localization accuracy and FPS effectively. Moreover, an anti-shading robustness test was conducted; the experiments firmly show that, by incorporating the proposed algorithms, the vSLAM system remains robust against temporary image loss.
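As a complement to the description above, the following minimal sketch shows one way a moving-window test on the measured acceleration can flag a static interval; the window length and tolerances are illustrative assumptions and do not correspond to the thresholds used to produce Table 5.

```python
import numpy as np

def detect_static(accel_norms, window=200, gravity=9.81,
                  mean_tol=0.05, std_tol=0.02):
    """Return the first sample index at which a sliding window of accelerometer
    norms stays close to gravity with low spread, i.e. the vehicle is judged
    static.  Window size and tolerances are illustrative values only."""
    a = np.asarray(accel_norms, dtype=float)
    for i in range(len(a) - window):
        w = a[i:i + window]
        if abs(np.mean(w) - gravity) < mean_tol and np.std(w) < std_tol:
            return i
    return None  # no static interval found
```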

Supplementary Materials

The following supporting information can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/s22208067/s1, Video S1: An Online Adaptive Parameter Tuning vSLAM Algorithm for UAVs in GPS-denied Environments-Series (3).

Author Contributions

Conceptualization, C.-L.C. and C.-C.P.; methodology, R.H. and C.-C.P.; validation, R.H. and C.-C.P.; formal analysis, R.H.; investigation, C.-L.C., R.H. and C.-C.P.; data curation, R.H.; writing—original draft preparation, C.-L.C. and R.H.; writing—review and editing, C.-C.P.; visualization, R.H.; supervision, C.-C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Technology under Grant No. MOST 110-2221-E-006-095 and MOST 111-2923-E-006-004-MY3.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The official website of the EuRoC dataset, including download links, is https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets (accessed on 10 October 2022). The EuRoC dataset provides stereo image sequences, IMU measurements, ground-truth 6-DoF poses, and the relative pose relationships between sensors, which are included in the sensor.yaml file of each dataset series folder. Information about the sensor types, the UAV used, the characteristics of each dataset series, and other details can be found in [27] or on the official website given above.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khosiawan, Y.; Park, Y.; Moon, I.; Nilakantan, J.M.; Nielsen, I. Task scheduling system for UAV operations in indoor environment. Neural Comput. Appl. 2019, 31, 5431–5459.
  2. Khosiawan, Y.; Nielsen, I. A system of UAV application in indoor environment. Prod. Manuf. Res. 2016, 4, 2–22.
  3. Kaneko, M.; Iwami, K.; Ogawa, T.; Yamasaki, T.; Aizawa, K. Mask-SLAM: Robust feature-based monocular SLAM by masking using semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 258–266.
  4. Wang, Y.; Zell, A. Improving Feature-based Visual SLAM by Semantics. In Proceedings of the 2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS), Genova, Italy, 12–14 December 2018; pp. 7–12.
  5. Yu, C.; Liu, Z.; Liu, X.J.; Xie, F.; Yang, Y.; Wei, Q.; Fei, Q. DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1168–1174.
  6. Truong, P.H.; You, S.; Ji, S. Object Detection-based Semantic Map Building for a Semantic Visual SLAM System. In Proceedings of the 2020 20th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea, 13–16 October 2020; pp. 1198–1201.
  7. Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Washington, DC, USA, 13–16 November 2007; pp. 225–234.
  8. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163.
  9. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
  10. Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-Scale Direct Monocular SLAM. In Proceedings of the Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; pp. 834–849.
  11. Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 15–22.
  12. Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625.
  13. Andert, F.; Mejias, L. Improving monocular SLAM with altimeter hints for fixed-wing aircraft navigation and emergency landing. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 1008–1016.
  14. Yang, T.; Li, P.; Zhang, H.; Li, J.; Li, Z. Monocular Vision SLAM-Based UAV Autonomous Landing in Emergencies and Unknown Environments. Electronics 2018, 7, 73.
  15. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
  16. Bloesch, M.; Omari, S.; Hutter, M.; Siegwart, R. Robust visual inertial odometry using a direct EKF-based approach. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 298–304.
  17. Whelan, T.; Salas-Moreno, R.F.; Glocker, B.; Davison, A.J.; Leutenegger, S. ElasticFusion: Real-time dense SLAM and light source estimation. Int. J. Robot. Res. 2016, 35, 1697–1716.
  18. Kerl, C.; Sturm, J.; Cremers, D. Dense visual SLAM for RGB-D cameras. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2100–2106.
  19. Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. 3-D Mapping With an RGB-D Camera. IEEE Trans. Robot. 2014, 30, 177–187.
  20. Newcombe, R.A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A.J.; Kohli, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality, Singapore, 26–29 October 2011; pp. 127–136.
  21. Sun, K.; Mohta, K.; Pfrommer, B.; Watterson, M.; Liu, S.; Mulgaonkar, Y.; Taylor, C.J.; Kumar, V. Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight. IEEE Robot. Autom. Lett. 2018, 3, 965–972.
  22. Pire, T.; Fischer, T.; Castro, G.; De Cristóforis, P.; Civera, J.; Jacobo Berlles, J. S-PTAM: Stereo Parallel Tracking and Mapping. Robot. Auton. Syst. 2017, 93, 27–42.
  23. Chen, W.; Zhu, L.; Lin, X.; Guan, Y.; He, L.; Zhang, H. Dynamic Strategy of Keyframe Selection with PD Controller for VSLAM Systems. IEEE/ASME Trans. Mechatron. 2021, 27, 115–125.
  24. Euston, M.; Coote, P.; Mahony, R.; Kim, J.; Hamel, T. A complementary filter for attitude estimation of a fixed-wing UAV. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 340–345.
  25. Uoip. Stereo_Ptam. Available online: https://github.com/uoip/stereo_ptam (accessed on 20 August 2022).
  26. Engel, J.; Stückler, J.; Cremers, D. Large-scale direct SLAM with stereo cameras. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1935–1942.
  27. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163.
  28. Shi, J.; Tomasi, C. Good features to track. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
  29. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary Robust Independent Elementary Features. In Proceedings of the Computer Vision—ECCV 2010, Berlin, Heidelberg, 5–11 September 2010; pp. 778–792.
  30. Gao, X.; Zhang, T.; Liu, Y.; Yan, Q. 14 Lectures on Visual SLAM: From Theory to Practice; Publishing House of Electronics Industry: Beijing, China, 2017.
  31. Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580.
Figure 1. Coordinate setup: (a) Illustration of coordinate setup on the UAV (modified from [27]); (b) Illustration of world frame and body frame; (c) Illustration of world frame, right camera frame, and left camera frame.
Figure 2. Illustration of navigation frame.
Figure 3. Keyframe selection criteria.
Figure 4. Illustration of covisibility relationship.
Figure 5. Flowchart of tracking thread.
Figure 6. Implementation of the Shi–Tomasi corner detector: (a) Detection results in a well-illuminated environment; (b) Detection results in a low-illuminated environment.
Figure 7. Brute-force feature matching.
Figure 8. Result of feature matching pairs in the EuRoC dataset. (The left and right images are taken from the left and right cameras of the stereo camera, respectively.)
Figure 9. Optimization results of motion-only BA for different initial guesses: (a) RMSE of Euler angles (without Huber); (b) RMSE of position (without Huber); (c) RMSE of Euler angles (with Huber); (d) RMSE of position (with Huber).
Figure 10. Flowchart of the online adaptive matching threshold tuning algorithm.
Figure 11. Roll and pitch angles obtained from different adjustment strategies for the MH_01_easy series.
Figure 12. Roll and pitch angles obtained from different adjustment strategies for the MH_03_medium series.
Figure 13. Roll and pitch angles obtained from different adjustment strategies for the MH_04_difficult series.
Figure 14. Flowchart of the proposed static state detection algorithm.
Figure 15. Filtered acceleration measurements and the results of static state detection: (a) MH_01_easy series; (b) MH_03_medium series; (c) MH_04_difficult series.
Figure 16. Flowchart of the T_{ḠB₀} calculation.
Figure 17. Alignment between vSLAM and IMU data.
Figure 18. Real-time motion compensation loop subroutine.
Figure 19. Position accuracy comparison.
Figure 20. Euler angle accuracy comparison.
Figure 21. RPE comparison.
Figure 22. ATE comparison.
Figure 23. FPS comparison.
Figure 24. Adaptive tuning process of matching threshold and K_p: (a) MH_01_easy series; (b) MH_03_medium series.
Figure 25. Estimated trajectories in the anti-shading robustness test: (a) MH_01_easy series; (b) MH_03_medium series; (c) MH_04_difficult series.
Figure 26. Results of the anti-shading robustness test with the MH_03_medium series: (a) Anti-shading robustness examination (Supplementary Video S1); (b) Keyframe examination (X, Y, Z axes in meters).
Table 1. Accuracy comparison according to different matching thresholds for the MH_01_easy series.
Series | Error Type | Matching Threshold = 10 | Matching Threshold = 15 | Matching Threshold = 20
MH_01_easy | RPE | 0.483251 | 0.503435 | 0.404268
MH_01_easy | ATE | 0.578836 | 0.615321 | 0.503354
Table 2. Parameter settings for the proposed online adaptive matching threshold tuning algorithm.
Parameter | Setting Value
thres_{m,1} | 0.65
thres_{yaw,1} | 5
thres_{m,2} | 1
thres_{yaw,2} | 6
thres_max | 45
thres_min | 5
Table 3. Parameter settings for the proposed online adaptive K_p tuning algorithm.
Parameter | Setting Value
thres_norm | 0.01
K_p | 0.15
κ | 12
ΔK_p * | 0.4
* In the MH_02_easy and MH_03_medium series, this value is set to 0.01.
Table 4. Results of the online adaptive K_p tuning algorithm for the EuRoC dataset (roll/pitch RMSE, unit: deg).
Method \ Series | MH_01_easy (Roll / Pitch) | MH_02_easy (Roll / Pitch) | MH_03_medium (Roll / Pitch) | MH_04_difficult (Roll / Pitch) | MH_05_difficult (Roll / Pitch)
Pure Integration | 1.2140 / 0.8997 | 0.3969 / 0.3090 | 0.1739 / 0.2019 | 0.28612 / 0.2080 | 0.32292 / 0.2605
Arctan Method | 2.0094 / 1.7620 | 1.5191 / 1.6333 | 4.3881 / 3.9834 | 2.4454 / 2.6123 | 2.4323 / 2.4165
Pure Mahony | 0.35373 / 0.2963 | 0.2513 / 0.3500 | 0.88704 / 0.6433 | 0.72719 / 0.5796 | 0.6052 / 0.5560
Conditional Method | 1.1734 / 0.5215 | 0.3013 / 0.2087 | 0.1246 / 0.1592 | 0.2944 / 0.1838 | 0.2704 / 0.2456
Adaptive Method Version 1 | 1.1634 / 0.5930 | 0.3226 / 0.2182 | 0.1272 / 0.1651 | 0.2949 / 0.1899 | 0.2847 / 0.2471
Adaptive Method Version 2 | 1.1724 / 0.5270 | 0.30324 / 0.2091 | 0.12412 / 0.1595 | 0.2946 / 0.1845 | 0.2718 / 0.2457
Adaptive Method Version 3 | 1.0487 / 0.4166 | 0.29874 / 0.2084 | 0.12569 / 0.1590 | 0.2928 / 0.1802 | 0.2216 / 0.2448
Table 5. Results of the static state detection algorithm for the EuRoC dataset.
Series | Trigger Time Stamp (s) (Checked from Figure 15) | UAV Truly Stationary (Checked from Images) | Estimated φ0 / θ0 (deg) | Angle Error φ0 / θ0 (deg) *
MH_01_easy | 23.5 | yes | 4.0044 / 15.6614 | −0.008034 / −0.034309
MH_02_easy | 28.75 | yes | 3.3513 / 15.5490 | 0.001947 / −0.047756
MH_03_medium | 11.25 | yes | 4.8658 / 15.6173 | 0.050396 / −0.045054
MH_04_difficult | 13 | yes | 0.4097 / 24.1770 | −0.073378 / 0.113550
MH_05_difficult | 14.75 | yes | 0.1934 / 23.9930 | −0.006640 / 0.095132
* The error is defined as the ground truth minus the estimated value.
Table 6. Specifications of the computer used for the proposed algorithm.
CPU | RAM
Intel Core i7-11800H @ 2.30 GHz | 16 GB
Table 7. Results of the anti-shading robustness test.
Scenario | Series | Time Stamp for Image Loss (s) | RPE (m) | ATE (m)
Case A (pure vSLAM) | MH_01_easy | 55.45~56.95 | Tracking fail | Tracking fail
 | MH_02_easy | 55.45~57.95 | Tracking fail | Tracking fail
 | MH_03_medium | 42.35~44.35 | Tracking fail | Tracking fail
 | MH_04_difficult | 34.95~36.45 | Tracking fail | Tracking fail
 | MH_05_difficult | 74.95~77.45 | Tracking fail | Tracking fail
Case B (vSLAM with the proposed motion compensation loop) | MH_01_easy | 55.45~56.95 | 0.5200 | 0.6119
 | MH_02_easy | 55.45~57.95 | 0.4210 | 0.5292
 | MH_03_medium | 42.35~44.35 | Tracking fail | Tracking fail
 | MH_04_difficult | 34.95~36.45 | Tracking fail | Tracking fail
 | MH_05_difficult | 74.95~77.45 | 1.1378 | 0.8813
Case C (vSLAM with the proposed online adaptive parameter tuning and motion compensation loop) | MH_01_easy | 55.45~56.95 | 0.2578 | 0.3290
 | MH_02_easy | 55.45~57.95 | 0.3686 | 0.4471
 | MH_03_medium | 42.35~44.35 | 0.4248 | 0.4615
 | MH_04_difficult | 34.95~36.45 | 0.7099 | 0.7794
 | MH_05_difficult | 74.95~77.45 | 1.0806 | 0.8940