Article

Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR

Department of Electrical Engineering, University of Ulsan, Namgu, Ulsan 680-749, Korea
* Author to whom correspondence should be addressed.
Submission received: 21 November 2015 / Revised: 14 January 2016 / Accepted: 15 January 2016 / Published: 19 January 2016
(This article belongs to the Section Physical Sensors)

Abstract

Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of pedestrian localization, we propose a method using a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time of flight range finder with a 30 m measurement range (at 33.33 Hz). Using the distance sensor, walls along corridors are automatically detected. The detected walls are used to correct the heading of the pedestrian path. Through experiments, it is shown that the accuracy of the heading is significantly improved using the proposed algorithm. Furthermore, the system is shown to work robustly in indoor environments with many doors and passing people.

1. Introduction

Although the global positioning system (GPS) is normally used in pedestrian navigation systems [1,2], alternative techniques are needed for GPS-denied environments such as indoors [3,4,5,6,7,8,9], urban canyons [10,11], underground [12,13,14], and mountainous regions where GPS signals are weak or unavailable. Two approaches are mainly used for indoor position estimation [15,16,17,18,19,20]. The first is based on an existing network of receivers or transmitters placed at known locations. It is known as beacon-based navigation, in which position is estimated by triangulation (or trilateration) from measured angles (or ranges). This approach typically uses technologies such as vision, ultrasound, or short-range radio, generally named local positioning systems (LPSs); a survey of LPSs can be found in [15]. The second approach is based on dead reckoning algorithms using sensors installed on a person or an object, and is known as beacon-free navigation. Since it requires no installation in the environment, it is preferred in some applications. Several dead reckoning approaches using inertial measurement units (IMUs) have been proposed. In [16,21], the position of a person is estimated using an inertial navigation algorithm (INA). Since its accuracy degrades over time, additional sensors are often used with IMUs, such as time-of-arrival based LPSs [17], received signal strength (RSS) based LPSs [16], or distance sensors [18,19,20].
In [16], persons are accurately located by combining active RFID technology with an inertial navigation system, in which the received signal strengths obtained from several active RFID tags are used to aid foot-mounted IMU based position estimation. This method requires that RFID tags be installed at known locations. In [17], an RF 3D location system is applied to improve the accuracy. RF receivers are preinstalled around the outside of a building and an RF transmitter is attached to a foot. The position of the foot is computed using the time of arrival from the transmitter to each receiver. In [18], the position error from dead reckoning is corrected by deploying ultrasound beacons as landmarks. Other methods use radar [19] and distance sensors [20] to improve the accuracy. Since these papers use the floor as the reference plane, the distance sensor gives information on the foot height. Although the methods in [19,20] have no installation requirement, unlike those in [16,17,18], they improve the accuracy only of the foot height estimation, not of the horizontal position estimation. Since the foot height is important information in gait analysis, the methods in [19,20] can be used effectively for that purpose. However, the horizontal position (that is, a person's position) is the key information in pedestrian navigation. Thus, the methods in [19,20] are not suitable for pedestrian navigation.
Our method improves the estimation accuracy using a distance sensor in addition to a foot-mounted IMU. We recognize that an important problem in pedestrian navigation is heading correction, and we use a vertical plane (such as a wall) as a reference plane to update heading and position. The proposed system requires neither installation in the environment nor prior knowledge of the environment (such as a map).

2. System Overview

An IMU (Xsens MTi) is attached to the shoe as in Figure 1. The IMU contains three-axis accelerometers and gyroscopes with a 100 Hz sampling frequency. A LIDAR (light detection and ranging, model LL-905-PIN-01 [22]) is also attached to the shoe and used as a distance sensor. The distance is obtained by measuring the flight time of infrared light, with a 30 m measurement range and a 33.33 Hz sampling frequency. A body coordinate system (BCS) and a world coordinate system (WCS) are used in this paper. The BCS coincides with the IMU coordinate system. The z axis of the WCS points upward, while the x and y axes are chosen arbitrarily. The origin of the WCS is assumed to be on the floor. The notation [a]_w ([a]_b) denotes that a vector a is represented in the world (body) coordinate system.
Figure 1. IMU and distance sensor on a shoe.
To compute a LIDAR point's position, the parameters of the distance sensor, namely its position and pointing direction with respect to the BCS, are required. In Figure 1, the vector [r_l]_b ∈ R^3 denotes the position of the distance sensor, while the unit vector [n_l]_b ∈ R^3 represents its pointing direction. These parameters ([r_l]_b, [n_l]_b in Figure 1) can be determined with a ruler and a protractor; however, it is difficult to obtain highly accurate values this way. Thus, the parameters are calibrated in the next section.
The main idea of this paper is to use vertical planes (such as walls) along a walking path to update position and heading. For example, if a person is walking along a long straight corridor, we can use the wall as a reference to update position and heading. The proposed algorithm automatically detects the existence of vertical planes (such as walls) and uses them as references. Thus, no prior map information is required.
The detection of vertical planes during walking is explained using the example indoor environment in Figure 2. Suppose a person is walking along the black pedestrian path. While walking, the foot touches the floor almost periodically. When the foot is on the floor, its velocity is zero; this time interval is called a zero velocity interval (ZVI). At each ZVI i, we compute the LIDAR point P_i based on the position and heading estimated by the INA. At ZVI 2, vertical plane 1 is defined using all LIDAR points between P_1 and P_2 (including P_1 and P_2). Now suppose the person is at ZVI 3 and the LIDAR point P_3 is computed. If P_3 is close to vertical plane 1, it is assumed that P_3 lies on vertical plane 1. Since we know the equation of vertical plane 1 and the distance from it, we can update position and heading. At ZVI 4, the LIDAR point P_4 is not near vertical plane 1, so vertical plane 1 is not used to update position and heading. At ZVI 5, a new vertical plane is formed. This process is repeated during walking.
Figure 2. Vertical plane detection using LIDAR.

3. Distance Sensor Calibration

In this section, the sensor unit is handheld with the LIDAR pointing at the floor for calibration. The coordinates of a LIDAR point with respect to the WCS are computed as follows:
$$[b]_w = C_b^w [b]_b + r = C_b^w \left( [r_l]_b + d\,[n_l]_b \right) + r \qquad (1)$$
where r is the position of the IMU with respect to the WCS, C_b^w is the rotation matrix from the BCS to the WCS, [b]_b = [r_l]_b + d [n_l]_b is the position of the LIDAR point with respect to the BCS, and d is the distance from the distance sensor to the LIDAR point. Both r and C_b^w can be computed from the INA.
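For concreteness, Equation (1) is straightforward to implement. The following NumPy sketch is our illustration (not the authors' code; all names are ours):

```python
import numpy as np

def lidar_point_world(C_bw, r, r_l_b, n_l_b, d):
    """Equation (1): world-frame position of the LIDAR point.

    C_bw  : 3x3 rotation matrix from the BCS to the WCS (from the INA)
    r     : IMU position in the WCS (from the INA)
    r_l_b : position of the distance sensor in the BCS
    n_l_b : unit pointing direction of the LIDAR in the BCS
    d     : measured range
    """
    b_b = r_l_b + d * n_l_b   # LIDAR point in the body frame
    return C_bw @ b_b + r     # rotate into the world frame and translate
```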
We set up the distance sensor and IMU so that [r_l]_b = k [n_l]_b (see Figure 1). Since [n_l]_b is a unit vector, k is the distance between the IMU and the distance sensor. Since k (about 3 cm) is very small compared with d (1∼5 m), k is measured with a ruler. Figure 3 describes our method to estimate [n_l]_b. In the calibration process, the LIDAR is pointed at the floor with different poses, and the height is measured with a ruler.
Figure 3. LIDAR calibration (the sensor unit is handheld while the LIDAR is pointing at the floor for the calibration).
Since [r_l]_b = k [n_l]_b, Equation (1) becomes
$$[b]_w = C_b^w (k + d)\,[n_l]_b + r \qquad (2)$$
Since the floor is flat and horizontal, and the LIDAR point lies on the floor (whose height is zero), the height of the IMU is identified by
$$\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} (k + d)\, C_b^w [n_l]_b = -h \qquad (3)$$
where h is the height of the IMU, which is measured with a ruler.
If we repeat the measurement n times with different poses, we have
$$\begin{bmatrix} \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} (k + d_1)\, C_{1,b}^w \\ \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} (k + d_2)\, C_{2,b}^w \\ \vdots \\ \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} (k + d_n)\, C_{n,b}^w \end{bmatrix} [n_l]_b = \begin{bmatrix} -h_1 \\ -h_2 \\ \vdots \\ -h_n \end{bmatrix} \qquad (4)$$
[n_l]_b is estimated by minimizing the following:
$$\left\| S_1 [n_l]_b - S_2 \right\|_2^2 \qquad (5)$$
where
$$S_1 = \begin{bmatrix} \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} (k + d_1)\, C_{1,b}^w \\ \vdots \\ \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} (k + d_n)\, C_{n,b}^w \end{bmatrix} \quad \text{and} \quad S_2 = \begin{bmatrix} -h_1 \\ \vdots \\ -h_n \end{bmatrix}$$
The analytic solution to this least squares problem is given by
$$[n_l]_b = \left( S_1^T S_1 \right)^{-1} S_1^T S_2$$
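A minimal sketch of this calibration, assuming the poses C_{i,b}^w and the ranges d_i have already been logged (the paper's processing was done in Matlab; this Python version and its names are ours):

```python
import numpy as np

def calibrate_pointing_direction(C_list, d_list, h_list, k):
    """Estimate [n_l]_b from n handheld poses via Equations (4) and (5).

    C_list : 3x3 rotation matrices C_{i,b}^w from the INA, one per pose
    d_list : measured ranges d_i to the floor
    h_list : IMU heights h_i measured with a ruler
    k      : IMU-to-LIDAR offset along the pointing direction (about 3 cm)
    """
    e3 = np.array([0.0, 0.0, 1.0])
    # Each row of S1 is [0 0 1](k + d_i) C_{i,b}^w, as in Equation (4).
    S1 = np.vstack([(k + d) * (e3 @ C) for C, d in zip(C_list, d_list)])
    S2 = -np.asarray(h_list, dtype=float)    # right-hand side of Equation (4)
    n_l_b, *_ = np.linalg.lstsq(S1, S2, rcond=None)
    return n_l_b / np.linalg.norm(n_l_b)     # enforce the unit-norm constraint
```

Here np.linalg.lstsq solves the same normal equations as the analytic solution above; the final normalization enforces the unit-norm constraint on [n_l]_b.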

4. Kalman Filter Combining an INA and Compensation Using LIDAR

4.1. Basic INA

In this subsection, the basic INA is given. This algorithm is not a new result; it is taken from [23,24].
Let v ∈ R^3 and r ∈ R^3 denote the velocity and position of the IMU with respect to the WCS. Let C(q) ∈ R^{3×3} be the direction cosine matrix corresponding to the quaternion [25] q ∈ R^4, which represents the rotation relationship between the BCS and the WCS.
The quaternion, velocity, and position are related as follows [23]:
$$\dot{q} = \frac{1}{2} \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix} q, \qquad \dot{v} = C(q)^T [a]_b, \qquad \dot{r} = v \qquad (6)$$
where ω = (ω_x, ω_y, ω_z) is the angular velocity of the BCS with respect to the WCS and [a]_b ∈ R^3 is the acceleration in the BCS.
The gyroscope output y_g ∈ R^3 and the accelerometer output y_a ∈ R^3 are given by
$$y_g = \omega + v_g + b_g, \qquad y_a = [a]_b + C(q)\,[\tilde{g}]_w + v_a + b_a \qquad (7)$$
where [g̃]_w ∈ R^3 is the local gravitational vector in the WCS, v_g and v_a are sensor noises, and b_g ∈ R^3 and b_a ∈ R^3 are the biases of the gyroscope and the accelerometer, respectively.
The numerical integration algorithm for Equation (6) (replacing [a]_b by y_a − C(q)[g̃]_w and ω by y_g) is given in [26]. Let q̂, r̂, and v̂ be the integrated values.
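To make the integration concrete, here is a first-order (Euler) sketch of Equation (6) with the sensor outputs substituted as described above. The paper uses the higher-order algorithm of [26], so this is an illustration only, with our own naming:

```python
import numpy as np

def quat_to_dcm(q):
    """C(q) for a scalar-first unit quaternion; maps WCS vectors into the BCS."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3]])

def omega_matrix(w):
    """4x4 quaternion-rate matrix of Equation (6)."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def ina_step(q, v, r, y_g, y_a, g_w, dt):
    """One Euler step of Equation (6); g_w is the local gravity vector [g~]_w."""
    C = quat_to_dcm(q)
    a_b = y_a - C @ g_w                        # gravity-compensated acceleration
    q = q + 0.5 * dt * (omega_matrix(y_g) @ q)
    q = q / np.linalg.norm(q)                  # keep the quaternion unit-norm
    v = v + dt * (C.T @ a_b)                   # v_dot = C(q)^T [a]_b
    r = r + dt * v
    return q, v, r
```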
There are errors in q̂, r̂, and v̂ due to sensor noises; they are represented by q̄ ∈ R^3, r̄ ∈ R^3, and v̄ ∈ R^3:
$$\bar{q} = \begin{bmatrix} 0_{3\times1} & I_3 \end{bmatrix} \left( \hat{q}^* \otimes q \right), \qquad \bar{r} = r - \hat{r}, \qquad \bar{v} = v - \hat{v} \qquad (8)$$
where ⊗ denotes quaternion multiplication and q̂^* is the conjugate quaternion of q̂. The three-dimensional (instead of four-dimensional) error description of the quaternion in Equation (8) is from [27].
The state of the Kalman filter is defined by
$$x = \begin{bmatrix} \bar{q} \\ b_g \\ \bar{r} \\ \bar{v} \\ b_a \end{bmatrix} \in R^{15\times1} \qquad (9)$$
The system equation for the Kalman filter is given by [28]:
$$\dot{x}(t) = A(t)\,x(t) + w(t) \qquad (10)$$
where
$$A(t) = \begin{bmatrix} -[y_g \times] & -\frac{1}{2} I & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I & 0 \\ -2\,C(\hat{q})^T [y_a \times] & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad w(t) = \begin{bmatrix} -\frac{1}{2} v_g \\ w_{b_g} \\ 0 \\ -C(\hat{q})^T v_a \\ w_{b_a} \end{bmatrix}$$
Here [a×] ∈ R^{3×3} is the skew symmetric matrix corresponding to a vector a ∈ R^{3×1}. The noises w_{b_g} and w_{b_a} represent the small variation of the biases.
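The following sketch assembles A(t) for the 15-dimensional error state; the block layout follows the reconstruction of Equation (10) above, and the names are ours:

```python
import numpy as np

def skew(a):
    """[a x]: matrix such that skew(a) @ b equals np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def system_matrix(y_g, y_a, C_hat_T):
    """Continuous-time A(t); state order [q_bar, b_g, r_bar, v_bar, b_a].

    C_hat_T is C(q_hat)^T computed from the current INA attitude estimate.
    """
    A = np.zeros((15, 15))
    A[0:3, 0:3] = -skew(y_g)                   # attitude-error dynamics
    A[0:3, 3:6] = -0.5 * np.eye(3)             # coupling with the gyro bias
    A[6:9, 9:12] = np.eye(3)                   # r_bar_dot = v_bar
    A[9:12, 0:3] = -2.0 * C_hat_T @ skew(y_a)  # velocity error from attitude error
    return A
```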
Two measurement equations are used in this paper. The first one is the measurement equation based on the ZVIs, and the other is the measurement equation using the distance sensor (which is given in Section 4.2).
When a foot touches the ground during walking, its velocity must be zero; this allows the velocity error in the INA to be reset. In [29], ZVIs are detected directly using a Doppler velocity sensor. However, ZVIs can also be detected indirectly using zero velocity detection algorithms [30,31].
A simple zero velocity detection algorithm is used in this paper. If the following conditions are satisfied, the discrete time index k is assumed to belong to a ZVI:
$$\left\| y_{g,i} \right\| \le B_g, \quad k - \frac{N_g}{2} \le i \le k + \frac{N_g}{2}$$
$$\left\| y_{a,i} - y_{a,i-1} \right\| \le B_a, \quad k - \frac{N_a}{2} \le i \le k + \frac{N_a}{2} \qquad (11)$$
where N_g and N_a are integers and B_g and B_a are thresholds.
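A direct sketch of this detector (our code, not the authors'; B_g and B_a are the thresholds and N_g and N_a the window lengths above):

```python
import numpy as np

def detect_zvi(y_g, y_a, B_g, B_a, N_g, N_a):
    """Equation (11): mark samples that belong to zero velocity intervals.

    y_g, y_a : arrays of shape (T, 3) with gyro and accelerometer samples.
    Returns a boolean array of length T (True where sample k is in a ZVI).
    """
    T = len(y_g)
    zvi = np.zeros(T, dtype=bool)
    for k in range(T):
        i_g = range(max(0, k - N_g // 2), min(T, k + N_g // 2 + 1))
        i_a = range(max(1, k - N_a // 2), min(T, k + N_a // 2 + 1))
        gyro_ok = all(np.linalg.norm(y_g[i]) <= B_g for i in i_g)
        accel_ok = all(np.linalg.norm(y_a[i] - y_a[i - 1]) <= B_a for i in i_a)
        zvi[k] = gyro_ok and accel_ok
    return zvi
```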
During ZVIs, we have the following zero velocity updating equation:
$$z_v = H_v x + v_{zero} \qquad (12)$$
where
$$z_v = 0_{3\times1} - \hat{v} \in R^{3\times1}, \qquad H_v = \begin{bmatrix} 0_{3\times9} & I_3 & 0_{3\times3} \end{bmatrix}$$
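For illustration, the zero velocity update is a standard Kalman measurement update applied with (z_v, H_v); a minimal sketch, assuming an error state x and covariance P (names are ours):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update, shared by Equations (12) and (24)."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)               # corrected error state
    P = (np.eye(len(x)) - K @ H) @ P      # corrected covariance
    return x, P

def zero_velocity_update(x, P, v_hat, R_v):
    """Equation (12): z_v = 0 - v_hat, and H_v selects v_bar in the state."""
    H_v = np.hstack([np.zeros((3, 9)), np.eye(3), np.zeros((3, 3))])
    return kalman_update(x, P, -v_hat, H_v, R_v)
```

In an error-state filter, the corrected x is then folded back into q̂, r̂, and v̂ and reset to zero before the next propagation step.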

4.2. Proposed Position and Heading Updating Algorithm Using the Distance Sensor

The proposed position and heading updating algorithm is shown in Figure 4. As illustrated in Figure 2, the basic idea is to use vertical planes as references to update position and heading. The key issue is when to form a vertical plane and when to abandon an existing one; this issue is discussed first. At ZVI i, the LIDAR point P_i is given by
$$[P_i]_w = C_{i,b}^w [b_i]_b + r_i \qquad (13)$$
where C_{i,b}^w ∈ R^{3×3} is the rotation matrix from the BCS to the WCS and [b_i]_b ∈ R^3 is the LIDAR point with respect to the BCS at ZVI i.
If there is no defined vertical plane, all LIDAR points in one walking step, which is delimited by ZVIs i − 1 and i (i > 1), are used to define the first vertical plane. All of these points could be on a vertical plane (such as a wall). On the other hand, it is also possible that these points are not on a plane: if there is a passing person, for example, some LIDAR points could come from that person, and some points could come from a door, which is not on the wall. We first assume that all LIDAR points are on the same plane and derive the plane equation; we then verify this same-plane assumption using Equation (18). Assume that P_{i,j} ∈ R^{3×1} (1 ≤ j ≤ N) are all the LIDAR points during the i-th step (N is the number of LIDAR points in the step), and that (n_i, d_i) are the vertical plane equation parameters satisfying
$$n_i^T P_{i,j} = d_i \qquad (14)$$
The plane equation parameters (n_i, d_i) are computed by minimizing the following:
$$\left\| S_3\, \frac{n_i}{d_i} - S_4 \right\|_2^2 \qquad (15)$$
where
$$S_3 = \begin{bmatrix} P_{i,1}^T \\ P_{i,2}^T \\ \vdots \\ P_{i,N}^T \end{bmatrix} \quad \text{and} \quad S_4 = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$
Figure 4. Proposed position and heading updating algorithm using the distance sensor.
The minimizing solution is given by
$$\frac{n_i}{d_i} = \left( S_3^T S_3 \right)^{-1} S_3^T S_4 \qquad (16)$$
Since n_i is a unit normal vector, we have
$$d_i = \frac{1}{\left\| \left( S_3^T S_3 \right)^{-1} S_3^T S_4 \right\|}, \qquad n_i = d_i \left( S_3^T S_3 \right)^{-1} S_3^T S_4 \qquad (17)$$
Now we check whether the LIDAR points come from the same vertical plane. If the LIDAR points are near the plane (n_i, d_i) (first and second conditions in Equation (18)) and the plane is vertical (third condition in Equation (18)), we assume that the LIDAR points are from the same vertical plane:
$$\frac{1}{N} \sum_{j=1}^{N} \left| n_i^T P_{i,j} - d_i \right| \le \alpha_{mean}, \qquad \max_j \left| n_i^T P_{i,j} - d_i \right| \le \alpha_{max}, \qquad \left| \cos^{-1} n_i(3) - 90^\circ \right| \le \alpha_{angle} \qquad (18)$$
where α_mean, α_max, and α_angle are threshold parameters. Note that the inclination angle of the plane (n_i, d_i) is given by cos^{-1} n_i(3). If Equation (18) is satisfied, the estimated plane (n_i, d_i) becomes the first defined vertical plane. If Equation (18) is not satisfied, the first vertical plane is searched for in the next step.
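A sketch of the plane fitting and verification of Equations (15)-(18), assuming the step's LIDAR points are already expressed in the WCS (function name and interface are ours):

```python
import numpy as np

def fit_vertical_plane(P, alpha_mean, alpha_max, alpha_angle_deg):
    """Fit n^T p = d to one step's LIDAR points (Equations (15)-(17)) and
    verify the same-vertical-plane assumption (Equation (18)).

    P : array of shape (N, 3) with world-frame LIDAR points of one step.
    Returns (n, d) on success, or None if Equation (18) is not satisfied.
    """
    S3, S4 = P, np.ones(len(P))
    nd, *_ = np.linalg.lstsq(S3, S4, rcond=None)   # minimizes ||S3 x - S4||^2
    d = 1.0 / np.linalg.norm(nd)                   # Equation (17): n has unit norm
    n = d * nd
    residuals = np.abs(P @ n - d)                  # point-to-plane distances
    inclination = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    if (residuals.mean() <= alpha_mean and residuals.max() <= alpha_max
            and abs(inclination - 90.0) <= alpha_angle_deg):
        return n, d
    return None
```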
Now suppose the vertical plane equation (n_k, d_k) is available and the next LIDAR point P_i is obtained. If this point belongs to the vertical plane, we use the plane equation to update the position and heading of the pedestrian. The LIDAR point P_i is determined to belong to the vertical plane (n_k, d_k) if the following is satisfied:
$$\left| n_k^T P_i - d_k \right| \le \alpha_{plane} \qquad (19)$$
where α_plane is a threshold.
If the LIDAR point P_i does not satisfy Equation (19), it could come from an obstacle (such as a passing person or a door) or from a new wall. For example, the LIDAR point P_i in Figure 5a is from an obstacle (the next LIDAR point P_{i+1} satisfies Equation (19)), while the point P_i in Figure 5b,c is from a new wall (P_{i+1} does not satisfy Equation (19)). In the proposed algorithm, a new vertical plane is formed using Equation (17) only when Equation (18) is satisfied (Figure 5c).
We now derive the position and heading updating algorithm using the LIDAR point P_i when the point belongs to the vertical plane (n_k, d_k). Since P_i is on the plane (n_k, d_k), we have (see Equation (13))
$$n_k^T \left( C^T [b_i]_b + r_i \right) = d_k \qquad (20)$$
where C denotes C_w^b(q_i). From [32], we have
$$C(q)^T = C(\hat{q})^T + 2\,C(\hat{q})^T [\bar{q} \times] \qquad (21)$$
From Equation (20), we have
$$n_k^T \left( \left( C(\hat{q}_i)^T + 2\,C(\hat{q}_i)^T [\bar{q}_i \times] \right) [b_i]_b + \hat{r}_i + \bar{r}_i \right) - d_k = 0 \qquad (22)$$
Equation (22) can be rewritten as follows:
$$n_k^T \left( C(\hat{q}_i)^T [b_i]_b + \hat{r}_i \right) - d_k = 2\, n_k^T C(\hat{q}_i)^T \left[ [b_i]_b \times \right] \bar{q}_i - n_k^T \bar{r}_i \qquad (23)$$
This equation is used as a measurement equation in the Kalman filter when P_i belongs to the vertical plane:
$$z_p = H_p x + v_p \qquad (24)$$
where v_p is a measurement noise and
$$z_p = n_k^T \left( C(\hat{q}_i)^T [b_i]_b + \hat{r}_i \right) - d_k, \qquad H_p = \begin{bmatrix} 2\, n_k^T C(\hat{q}_i)^T \left[ [b_i]_b \times \right] & 0_{1\times3} & -n_k^T & 0_{1\times3} & 0_{1\times3} \end{bmatrix}$$
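A sketch of how (z_p, H_p) can be assembled, reusing the skew() helper from the A(t) sketch in Section 4.1 (names are ours):

```python
import numpy as np

def plane_measurement(n_k, d_k, C_hat_T, b_i_b, r_hat):
    """Build z_p and H_p of Equation (24) for a LIDAR point on plane (n_k, d_k).

    C_hat_T : C(q_hat_i)^T from the INA;  b_i_b : LIDAR point in the BCS;
    r_hat   : estimated IMU position in the WCS.
    """
    z_p = n_k @ (C_hat_T @ b_i_b + r_hat) - d_k      # scalar plane residual
    H_p = np.zeros((1, 15))
    H_p[0, 0:3] = 2.0 * n_k @ C_hat_T @ skew(b_i_b)  # attitude-error block
    H_p[0, 6:9] = -n_k                               # position-error block
    return np.array([z_p]), H_p
```

The resulting pair can be passed to the same kalman_update() routine shown for the zero velocity update.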
Figure 5. New vertical plane detection.

5. Experiments and Results

Three experiments are performed to verify the proposed algorithm. In the experiments, an IMU (Xsens MTi sensor unit) and a distance sensor (LIDAR) are attached to the shoe as shown in Figure 6. Since algorithm verification is the main purpose of the experiments, the measurement data are collected and then processed offline in Matlab.
The first experiment is walking along an 84 m corridor with many doors and passing persons, as in Figure 7. The results of the first experiment are shown in Figure 8 (pure INA) and Figure 9 (the proposed algorithm). In the figures, the pedestrian path is drawn in green and the vertical wall plane is drawn as a blue line. If LIDAR points satisfy Equation (19), they are used for the measurement update in Equation (24); these "updated LIDAR points" are represented by red "*" symbols. Points not used for the measurement update are represented by blue "o" symbols. In Figure 8, the estimated pedestrian path tends to drift over time since there is no heading update. In Figure 9, the estimated walking path is more accurate since the heading is corrected using the wall information. As can be seen in the zoomed area in Figure 9, the proposed algorithm works robustly even with many doors and passing persons.
Figure 6. IMU and LIDAR system for the experiment.
Figure 7. Corridor with doors and passing persons.
The final position errors after the 84 m walk (Figure 8 and Figure 9) are given in Table 1. The root mean square (RMS) position error of the pure INA is 9.23 m, whereas the proposed method gives a better RMS error of 1.54 m.
Figure 8. Pedestrian path using a pure INA.
Figure 9. Pedestrian path using the proposed method.
Table 1. Final position error comparison in the straight line corridor walking (unit: m).

Experiment Number | Position Error (Pure INA) | Position Error (Proposed Method)
1                 | 4.38                      | 1.74
2                 | 9.48                      | 1.55
3                 | 10.48                     | 1.54
4                 | 11.61                     | 1.40
5                 | 8.50                      | 1.43
RMS               | 9.23                      | 1.54
Walking is a complex process that depends greatly on each person's walking style. To see whether the proposed method is affected by different walking styles, it is tested with five subjects. Each subject is asked to walk 30 m along a corridor three times (thus there are 15 walks in total), and the results are shown in Table 2. As can be seen from Table 2, the proposed algorithm works well independently of the subject, and the mean position error is significantly reduced from 2.03 m (pure INA) to 0.42 m (proposed method).
The second experiment is walking around a U shaped (39 × 26.5 m) corridor with starting position (0, 0) and final position (0, 22.26 m). The final position is marked by the red star in Figure 10 and Figure 11, and there are four doors, which are colored black. The estimated pedestrian paths are shown in Figure 10 (pure INA) and Figure 11 (the proposed method). In Figure 10, the heading tends to drift over time since it is not corrected, as in the Figure 8 case. In Figure 11, we can see that the heading is effectively corrected using the wall information.
The RMS position error of the pure INA is 3.88 m, whereas the proposed method gives a better RMS error of 0.58 m using the wall information (see Table 3).
The third experiment shows that our method can work well with a complex path. In Figure 12, a person starts from room 1 and arrives at room 2 after walking along a 60 m corridor; finally, the person goes around room 2 and comes back to the corridor. The red square in Figure 12 is the starting position, the estimated pedestrian path is shown in green, and the red star is the final position. We can see that the vertical plane updating is not used inside rooms 1 and 2 (there are no updated LIDAR points inside the rooms) due to obstacles.
Table 2. Final position error comparison in the corridor walking with different walking styles.

Experiment Number | Walking Speed (km/h) | Stride Length (m) | Stride Speed (stride/s) | Position Error (Pure INA) (m) | Position Error (Proposed Method) (m)
1    | 4.14 | 1.25 | 0.92 | 2.04 | 0.45
2    | 4.15 | 1.25 | 0.92 | 1.32 | 0.34
3    | 4.57 | 1.25 | 1.02 | 2.57 | 0.52
4    | 4.59 | 1.30 | 0.98 | 3.21 | 0.54
5    | 4.72 | 1.30 | 1.01 | 3.11 | 0.37
6    | 5.19 | 1.50 | 0.96 | 2.25 | 0.43
7    | 5.43 | 1.50 | 1.01 | 1.67 | 0.35
8    | 5.44 | 1.50 | 1.01 | 1.70 | 0.44
9    | 5.46 | 1.50 | 1.01 | 2.53 | 0.52
10   | 5.53 | 1.50 | 1.02 | 1.22 | 0.36
11   | 5.54 | 1.50 | 1.03 | 1.26 | 0.42
12   | 5.57 | 1.50 | 1.03 | 2.33 | 0.37
13   | 6.06 | 1.50 | 1.12 | 2.58 | 0.39
14   | 6.07 | 1.50 | 1.12 | 0.84 | 0.31
15   | 6.10 | 1.50 | 1.13 | 1.80 | 0.51
Mean |      |      |      | 2.03 | 0.42
Figure 10. Pedestrian path in the U shaped corridor using a pure INA.
Figure 11. Pedestrian path in the U shaped corridor using the proposed method.
Figure 12. Pedestrian with complex path.
Table 3. Final position error comparison in the U shaped corridor walking (unit: m).

Experiment Number | Position Error (Pure INA) | Position Error (Proposed Method)
1                 | 3.01                      | 0.20
2                 | 5.20                      | 0.64
3                 | 1.94                      | 0.64
4                 | 2.67                      | 0.91
5                 | 5.32                      | 0.13
RMS               | 3.88                      | 0.58

6. Discussion

The third experiment (Figure 12) demonstrates that the proposed algorithm can work well with a complex walking path. In Figure 12, there are some updated LIDAR points (red "*" symbols) along the corridor, but there are no updated LIDAR points in room 1 and room 2. This means a pure INA is applied in room 1 and room 2, where there are many obstacles and short walls. A vertical wall plane is automatically detected when the pedestrian enters the corridor; thus, the estimated walking path is more accurate while walking along corridors.
Although Table 2 shows that the proposed method can work well with different walking styles, the proposed algorithm could be affected by walking styles since the sensor is mounted on a shoe. The walking style mainly affects the zero velocity detection used in the INA. This paper uses a simple zero velocity algorithm (see Equation (11)) whose parameters (B_g, B_a, N_g and N_a) are chosen so that the algorithm can detect ZVIs for most normal walking styles; these fixed parameters are used in all experiments. To show the effect of this algorithm, another experiment was done in which a subject was asked to walk 3.6 m in a straight line at different speeds. An optical marker and an IMU were attached to his shoe. The IMU was used to detect zero velocity points using the zero velocity algorithm, and the marker was used to track the trajectory of the foot with a ground truth system (Flex-13 OptiTrack).
In Figure 13 and Figure 14, the detected zero velocity points are represented by red "*" symbols and the trajectory of the foot (from the optical tracker) is shown as a blue line. The algorithm can detect ZVIs for three different walking speeds.
Figure 13. Trajectory of foot and detected zero velocity points.
Figure 14. z-position of foot and detected zero velocity points.
Although the algorithm can detect all occurrences of ZVIs, the detected length of a ZVI is shorter than its true length because fixed parameters are used for all walking speeds. This is the drawback of the simple zero velocity algorithm.
The experiment data in Table 2 are used to verify the robustness of the zero velocity detection, and the result is shown in Table 4. As can be seen in Table 4, there are no missed ZVI detections at walking speeds up to 6 km/h; thus, our algorithm can work well with different walking styles using the fixed parameters. However, the last seven rows, which were obtained by asking subjects to walk as fast as possible, show the drawback of this algorithm: with fixed parameters, it cannot reliably detect ZVIs in very high speed walking. For example, in the last row there are 22 ZVIs, but the zero velocity algorithm successfully detects only 4 of them.
This drawback can be addressed by using adjustable parameters for zero velocity detection; for example, the parameters can be adjusted based on the current velocity of the foot. It can also be solved by directly measuring ground touching intervals (for example, using force sensors).
Table 4. Zero velocity detection with different walking styles.

Stride Speed (stride/s) | Walking Speed (km/h) | Stride Length (m) | Number of Zero Velocity Intervals (True) | Missed Zero Velocity Detections
0.92 | 4.14 | 1.25 | 25 | 0
0.96 | 5.19 | 1.50 | 21 | 0
0.98 | 4.59 | 1.30 | 24 | 0
1.01 | 4.72 | 1.30 | 24 | 0
1.01 | 5.46 | 1.50 | 21 | 0
1.01 | 5.44 | 1.50 | 21 | 0
1.01 | 5.43 | 1.50 | 21 | 0
1.02 | 4.57 | 1.25 | 25 | 0
1.02 | 5.53 | 1.50 | 21 | 0
1.03 | 5.57 | 1.50 | 21 | 0
1.03 | 5.54 | 1.50 | 21 | 0
1.12 | 6.06 | 1.50 | 21 | 0
1.12 | 6.07 | 1.50 | 21 | 0
1.13 | 6.10 | 1.50 | 21 | 1
1.16 | 5.96 | 1.43 | 22 | 4
1.17 | 6.04 | 1.43 | 22 | 8
1.26 | 6.78 | 1.50 | 21 | 17
1.28 | 6.94 | 1.50 | 21 | 18
1.32 | 6.81 | 1.43 | 22 | 19
1.35 | 6.95 | 1.43 | 22 | 18

7. Conclusions

The proposed method uses vertical planes (such as walls) to improve heading and position estimation accuracy. The vertical planes are constructed using a distance sensor.
The proposed method is verified through three experiments: a straight corridor, a U shaped corridor, and a complex walking path. The experiment results in Table 1, Table 2 and Table 3 show that the proposed method gives a better RMS error than a pure INA. Furthermore, the zoomed area in Figure 9 indicates that the proposed algorithm works robustly even with many doors and passing persons.
The paper also shows the drawbacks of using a simple zero velocity algorithm, and possible remedies, through the additional experiment in the discussion section.
Since there are many vertical walls indoors, the proposed algorithm can be used in almost any indoor environment. Furthermore, the proposed system requires neither installation in the environment nor prior knowledge of it.
As a future research topic, this work can be improved by using two distance sensors pointing to different sides (left and right). With two distance sensors, vertical planes (such as walls) on both sides of a corridor can be recognized. This could improve the algorithm, especially when one side loses track of a vertical plane due to obstacles or the absence of a wall.

Acknowledgments

This work was supported by National Research Foundation of Korea Grant funded by the Korean Government (No. 2011-0021978).

Author Contributions

Young Soo Suh contributed to main algorithm design. Duy Duong Pham implemented the algorithm in Matlab and did all the experiments. Both authors wrote the paper together.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, L.; Hu, H. IMU/GPS based pedestrian localization. In Proceedings of the 2012 4th Computer Science and Electronic Engineering Conference, Colchester, UK, 12–13 September 2012; pp. 23–28.
  2. Langer, M.; Kiesel, S.; Ascher, C.; Trommer, G. Deeply Coupled GPS/INS integration in pedestrian navigation systems in weak signal conditions. In Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation, Sydney, Australia, 13–15 November 2012; pp. 1–7.
  3. Chen, G.; Meng, X.; Wang, Y.; Zhang, Y.; Tian, P.; Yang, H. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization. Sensors 2015, 15, 24595–24614. [Google Scholar] [CrossRef] [PubMed]
  4. Deng, Z.A.; Wang, G.; Hu, Y.; Wu, D. Heading Estimation for Indoor Pedestrian Navigation Using a Smartphone in the Pocket. Sensors 2015, 15, 21518–21536. [Google Scholar] [CrossRef] [PubMed]
  5. Jimenez, A.; Seco, F.; Prieto, J.; Guevara, J. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU. In Proceedings of the 2010 7th Workshop on Positioning Navigation and Communication, Dresden, Germany, 11–12 March 2010; pp. 135–143.
  6. Renaudin, V.; Combettes, C. Magnetic, Acceleration Fields and Gyroscope Quaternion (MAGYQ)-Based Attitude Estimation with Smartphone Sensors for Indoor Pedestrian Navigation. Sensors 2014, 14, 22864–22890. [Google Scholar] [CrossRef] [PubMed]
  7. Placer, M.; Kovacic, S. Enhancing Indoor Inertial Pedestrian Navigation Using a Shoe-Worn Marker. Sensors 2013, 13, 9836–9859. [Google Scholar] [CrossRef] [PubMed]
  8. Jin, Y.; Motani, M.; Soh, W.S.; Zhang, J. SparseTrack: Enhancing Indoor Pedestrian Tracking with Sparse Infrastructure Support. In Proceedings of the IEEE INFOCOM, San Diego, CA, USA, 14–19 March 2010; pp. 1–9.
  9. He, Z.; Renaudin, V.; Petovello, M.G.; Lachapelle, G. Use of High Sensitivity GNSS Receiver Doppler Measurements for Indoor Pedestrian Dead Reckoning. Sensors 2013, 13, 4303–4326. [Google Scholar] [CrossRef] [PubMed]
  10. Broggi, A.; Cerri, P.; Ghidoni, S.; Grisleri, P.; Jung, H.G. A New Approach to Urban Pedestrian Detection for Automatic Braking. IEEE Trans. Intell. Transp. Syst. 2009, 10, 594–605. [Google Scholar] [CrossRef]
  11. Baranski, P.; Strumillo, P. Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation. Sensors 2012, 12, 6764–6801. [Google Scholar] [CrossRef] [PubMed]
  12. Dickens, J.; van Wyk, M.; Green, J. Pedestrian detection for underground mine vehicles using thermal images. In Proceedings of the AFRICON, Livingstone, South Africa, 13–15 September 2011; pp. 1–6.
  13. Yuan, Y.; Chen, C.; Guan, X.; Yang, Q. An Energy-Efficient Underground Localization System Based on Heterogeneous Wireless Networks. Sensors 2015, 15, 12358–12376. [Google Scholar] [CrossRef] [PubMed]
  14. Liu, Z.; Li, C.; Wu, D.; Dai, W.; Geng, S.; Ding, Q. A Wireless Sensor Network Based Personnel Positioning Scheme in Coal Mines with Blind Areas. Sensors 2010, 10, 9891–9918. [Google Scholar] [CrossRef] [PubMed]
  15. Hightower, J.; Borriello, G. Location Systems for Ubiquitous Computing. Computer 2001, 34, 57–66. [Google Scholar] [CrossRef]
  16. Ruiz, A.; Granja, F.; Prieto Honorato, J.; Rosas, J. Accurate Pedestrian Indoor Navigation by Tightly Coupling Foot-Mounted IMU and RFID Measurements. IEEE Trans. Instrum. Meas. 2012, 61, 178–189. [Google Scholar] [CrossRef]
  17. Amendolare, V.; Cyganski, D.; Duckworth, R.; Makarov, S.; Coyne, J.; Daempfling, H.; Woodacre, B. WPI precision personnel locator system: Inertial navigation supplementation. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; pp. 350–357.
  18. Fischer, C.; Muthukrishnan, K.; Hazas, M.; Gellersen, H. Ultrasound-aided Pedestrian Dead Reckoning for Indoor Navigation. In Proceedings of the 1st ACM International Workshop on Mobile Entity Localization and Tracking in GPS-less Environments, San Francisco, CA, USA, 19 September 2008; pp. 31–36.
  19. Zhou, C.; Downey, J.; Stancil, D.; Mukherjee, T. A Low-Power Shoe-Embedded Radar for Aiding Pedestrian Inertial Navigation. IEEE Trans. Microw. Theory Tech. 2010, 58, 2521–2528. [Google Scholar] [CrossRef]
  20. Duong, P.D.; Suh, Y.S. Foot Pose Estimation Using an Inertial Sensor Unit and Two Distance Sensors. Sensors 2015, 15, 15888–15902. [Google Scholar] [CrossRef] [PubMed]
  21. Xu, Z.; Wei, J.; Zhang, B.; Yang, W. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors. Sensors 2015, 15, 7708–7727. [Google Scholar] [CrossRef] [PubMed]
  22. STMicroelectronics. LIDAR (Light Detection and Ranging Module) Datasheet. Available online: https://cdn.sparkfun.com/datasheets/Sensors/Proximity/lidarlite2DS.pdf (accessed on 19 January 2016).
  23. Nam, C.N.K.; Kang, H.J.; Suh, Y.S. Golf Swing Motion Tracking Using Inertial Sensors and a Stereo Camera. IEEE Trans. Instrum. Meas. 2014, 63, 943–952. [Google Scholar] [CrossRef]
  24. Titterton, D.H.; Weston, J.L. Strapdown Inertial Navigation Technology; Peter Peregrinus Ltd.: London, UK, 1997. [Google Scholar]
  25. Kuipers, J.B. Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace, and Virtual Reality; Princeton University Press: Princeton, NJ, USA, 1999. [Google Scholar]
  26. Savage, P.G. Strapdown Inertial Navigation Integration Algorithm Design Part 1: Attitude Algorithms. J. Guid. Control Dyn. 1998, 21, 19–28. [Google Scholar] [CrossRef]
  27. Markley, F.L. Multiplicative vs. Additive Filtering for Spacecraft Attitude Determination. In Proceedings of the 6th Cranfield Conference on Dynamics and Control of Systems and Structures in Space, Riomaggiore, Italy, 18–22 July 2004; pp. 467–474.
  28. Suh, Y.S.; Park, S. Pedestrian Inertial Navigation with Gait Phase Detection Assisted Zero Velocity Updating. In Proceedings of the 4th International Conference on Autonomous Robots and Agents, Wellington, New Zealand, 10–12 February 2009; pp. 336–341.
  29. Hawkinson, W.; Samanant, P.; McCroskey, R.; Ingvalson, R.; Kulkarni, A.; Haas, L.; English, B. GLANSER: Geospatial Location, Accountability, and Navigation System For Emergency Responders. In Proceedings of the Position Location and Navigation Symposium, Myrtle Beach, SC, USA, 24–26 April 2012; pp. 98–105.
  30. Park, S.K.; Suh, Y.S. A Zero Velocity Detection Algorithm Using Inertial Sensors for Pedestrian Navigation Systems. Sensors 2010, 10, 9163–9178. [Google Scholar] [CrossRef] [PubMed]
  31. Skog, I.; Nilsson, J.O.; Handel, P. Evaluation of zero-velocity detectors for foot-mounted inertial navigation systems. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Zurich, Switzerland, 15–17 September 2010; pp. 1–6.
  32. Markley, F.L.; Crassidis, J.L. Fundamentals of Spacecraft Attitude Determination and Control; Springer: New York, NY, USA, 2014. [Google Scholar]
