Article

A Novel Gait Phase Recognition Method Based on DPF-LSTM-CNN Using Wearable Inertial Sensors

School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130025, China
*
Author to whom correspondence should be addressed.
Submission received: 27 May 2023 / Revised: 14 June 2023 / Accepted: 23 June 2023 / Published: 26 June 2023
(This article belongs to the Section Wearables)

Abstract
Gait phase recognition is of great importance in the development of rehabilitation devices. In this paper, the advantages of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) are combined (LSTM-CNN), and a gait phase recognition method based on an LSTM-CNN neural network model is proposed. In the LSTM-CNN model, the LSTM layer processes temporal sequences and the CNN layer extracts features. A wireless sensor system including six inertial measurement units (IMUs) fixed at six positions on the lower limbs was developed. The difference in the gait recognition performance of the LSTM-CNN model was estimated using groups of input data collected by seven different IMU grouping methods. Four phases in a complete gait cycle were considered in this paper: the supporting phase with right heel strike (SU-RHS), the left leg swing phase (SW-L), the supporting phase with left heel strike (SU-LHS), and the right leg swing phase (SW-R). The results show that the model performed best in gait recognition when the data from all six IMUs were used as input, with recognition precision and macro-F1 up to 95.03% and 95.29%, respectively. At the same time, the best phase recognition accuracies for SU-RHS and SW-R were achieved, up to 96.49% and 95.64%, respectively. The results also showed that the best phase recognition accuracy (97.22%) for SW-L was acquired with the data from the four IMUs located at the left and right thighs and shanks. Comparably, the best phase recognition accuracy (97.86%) for SU-LHS was acquired with the data from the four IMUs located at the left and right shanks and feet. Furthermore, a novel gait recognition method based on a Data Pre-Filtering Long Short-Term Memory and Convolutional Neural Network (DPF-LSTM-CNN) model was proposed and its performance for gait phase recognition was evaluated.
The experimental results showed that the recognition accuracy reached 97.21%, the highest compared to deep convolutional neural networks (DCNN) and CNN-LSTM.

1. Introduction

In recent years, the number of stroke patients with hemiplegia has increased rapidly due to the aging of the population. Such patients are unable to live independently due to the impairment of lower limb functions, which has brought a heavy burden to families and society [1]. Intelligent lower limb rehabilitation equipment can provide effective rehabilitation training and motion assistance for patients [2,3]. Gait recognition, as one of the key technologies in the control part of rehabilitation equipment, plays an important role in the motion assistance process [4,5,6]. Gait, or walking, is a cyclic movement exhibiting recurring patterns while maintaining static and dynamic balance. A complete gait cycle can be divided into the supporting phase and the swinging phase. Both feet contact the ground in the supporting phase, which consumes about 60% of the whole gait cycle, and only one foot contacts the ground in the swinging phase, which consumes about 40% of the whole gait cycle [7]. Highly accurate gait recognition results can not only improve the control effect of rehabilitation equipment developed on the basis of human–computer cooperative control strategies but also be applied to the formulation of clinical treatment plans for stroke or other lower limb dysfunction diseases [8].
The threshold method was generally used to complete gait recognition in most previous studies [9], but the threshold is an empirical value that can be affected by the subjective factors of the designer. With the rapid development of artificial intelligence technology, various types of neural network models have been applied to gait recognition, such as image-based methods and wearable inertial sensor-based methods. The image-based gait recognition method generally obtains a series of human motion images from a high-speed camera system and then uses a neural network model to extract the feature information for gait recognition [10,11,12]. Although the accuracy of the image-based gait recognition method is high, it is difficult to apply this method to the real-time control of rehabilitation equipment due to its high cost and inconvenient installation. The application of inertial sensors in gait recognition has attracted more and more researchers' attention due to their small size, low cost, and high accuracy [13,14]. Based on the kinematic parameters of lower limb joints collected by inertial sensors, neural network models for gait recognition with high accuracy can be realized. An integrated network model, SBLSTM, was proposed and recognized the gait phases effectively [15]. A deep convolutional neural network (DCNN) recognition method based on IMUs was published; the method performed best in the recognition of the swing phase and worst in the recognition of the terminal stance [16]. A new multi-model LSTM network for gait recognition was proposed in [17]; the model performed better than other LSTM models. A graph convolutional network model (GCNM) for gait phase classification to control a lower limb exoskeleton system was presented [18]; the model could recognize four gait phases of one leg with high accuracy. A novel gait pattern recognition method combining LSTM and CNN was proposed, with recognition accuracy up to 97.78% [19].
However, all the aforementioned methods mainly focused on improving the accuracy of gait recognition without attention to the number and locations of the sensors used. The accuracy of gait recognition using neural network models with two IMUs attached to the thighs, shanks, and feet, respectively, was compared in [20]; however, only two IMUs were used.
The LSTM model is good at processing time-series data, and the CNN is an expert at processing data with spatial structure characteristics. Therefore, this paper combines the advantages of the two models, and a gait recognition method based on an LSTM-CNN neural network model is proposed. The self-developed wireless sensor system, including a wearable inertial sensor sub-system and a force platform sub-system, could simultaneously capture six joint angles and ground reaction forces; the collected data could then be used to construct training and verification data sets. The difference in the gait recognition performance of the LSTM-CNN model was estimated using groups of input data collected by seven different IMU grouping methods. Based on the results of the first part of the experiment, this paper proposes a novel gait recognition method based on a Data Pre-Filtering Long Short-Term Memory and Convolutional Neural Network (DPF-LSTM-CNN) model and verifies its performance for gait phase recognition in the second part of the experiment.

2. Methods

2.1. Data Collection

In order to meet the data set requirements for neural network model training in this paper, our team developed a wireless sensor system including a wearable inertial sensor (IMU) sub-system and a force platform sub-system. The system could simultaneously collect the angle signals of six joints of the lower limbs and real-time plantar force signals during human gait. The JY901 was selected as the inertial sensor to collect joint angles. The force platform sub-system consisted of six independent force measuring plates to measure the foot force during walking. The Arduino Nano (Shenzhen Mingjiatai Electronics Co., Ltd., Shenzhen, China) was selected as the master control unit; it collected joint angle and plantar force signals at a frequency of 100 Hz during human walking and simultaneously transmitted real-time data to the host computer. The nRF24L01 (Shenzhen Mingjiatai Electronics Co., Ltd.) wireless module was selected to realize wireless data transmission of the sensor system, as shown in Figure 1.

2.2. Data Preprocessing

In order to give the collected data the same dimensions and improve the recognition accuracy of the neural network model, it is necessary to preprocess the original data collected by the IMUs. Linear interpolation and data normalization are common data preprocessing methods. Linear interpolation can solve the packet loss problem of sensor data during transmission, while data normalization can limit the collected data with different amplitudes and dimensions to a specified range through certain mathematical operations to obtain standardized dimensionless data. After the above processing, the training complexity of the neural network can be reduced and the accuracy of gait recognition can be improved. In this paper, the raw data was segmented with different gait phase information using the sliding window segmentation method, and each sliding window contains 20 samples.
The mathematical expression of linear interpolation is as follows:
$y = \dfrac{x - x_0}{x_1 - x_0}(y_1 - y_0) + y_0,$
where (x0, y0) and (x1, y1) represent the known sampling point information, and (x, y) is the unknown sampling point information at the x sampling point.
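For illustration, the interpolation step can be sketched in a few lines of Python; the timestamps and angle values below are hypothetical.

```python
def lerp(x, p0, p1):
    """Linearly interpolate at x between known samples p0 = (x0, y0) and p1 = (x1, y1)."""
    x0, y0 = p0
    x1, y1 = p1
    return (x - x0) / (x1 - x0) * (y1 - y0) + y0

# Recover a joint angle lost in transmission: packets at t = 10 ms and
# t = 30 ms arrived, but the packet at t = 20 ms was dropped.
recovered = lerp(20, (10, 12.0), (30, 16.0))  # -> 14.0
```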
The mathematical expression of data normalization is as follows:
$\tilde{x}_i = \dfrac{x_i - x_{\min}}{x_{\max} - x_{\min}}(y_{\max} - y_{\min}) + y_{\min},$
where xi is the element i in the input vector before being normalized, and xmax and xmin are the maximum and minimum values of xi, respectively. ymax and ymin are the upper and lower limits of the normalized data, the values are set to 1 and −1, respectively.
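A minimal Python sketch of the normalization, together with the 20-sample sliding window mentioned above, might look as follows; the non-overlapping window step is an assumption, since the paper does not state the overlap between windows.

```python
def normalize(seq, y_min=-1.0, y_max=1.0):
    """Min-max normalize a sequence into [y_min, y_max] per the formula above."""
    x_min, x_max = min(seq), max(seq)
    return [(v - x_min) / (x_max - x_min) * (y_max - y_min) + y_min for v in seq]

def sliding_windows(seq, size=20, step=20):
    """Cut a signal into fixed-size windows (non-overlapping step assumed)."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]

scaled = normalize([0.0, 5.0, 10.0, 20.0])   # -> [-1.0, -0.5, 0.0, 1.0]
windows = sliding_windows(list(range(60)))   # -> 3 windows of 20 samples each
```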
The whole gait process was divided into the following four gait phases in this paper: the supporting phase with right heel strike (SU-RHS), the left leg swing phase (SW-L), the supporting phase with left heel strike (SU-LHS), and the right leg swing phase (SW-R). Since the gait phase category is a discrete feature, this paper used one-hot vectors to represent the true distribution for the output of the gait phase recognition model; the gait sub-phases are denoted as follows:
$\begin{cases} \text{SU-RHS} \rightarrow [1\ 0\ 0\ 0] \\ \text{SW-L} \rightarrow [0\ 1\ 0\ 0] \\ \text{SU-LHS} \rightarrow [0\ 0\ 1\ 0] \\ \text{SW-R} \rightarrow [0\ 0\ 0\ 1] \end{cases}$
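The one-hot encoding above is straightforward to express in code, e.g.:

```python
PHASES = ["SU-RHS", "SW-L", "SU-LHS", "SW-R"]

def one_hot(phase):
    """Encode a gait phase as the one-hot vector defined above."""
    return [1 if p == phase else 0 for p in PHASES]
```

For example, `one_hot("SU-LHS")` returns `[0, 0, 1, 0]`.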
In order to record and calculate the labels of the IMU training data, the plantar force data measured by the self-made force platform system was used as the reference standard [21,22]. The research in [23] showed that when the threshold value of foot force was set to 20 N, the contact between the foot and the force platform could be accurately judged. When the measured foot force was less than 20 N, the foot was considered to have completely disengaged from the force platform; when the measured foot force was greater than 20 N, the foot was considered to be in full contact with the force platform.
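A labelling sketch based on the 20 N rule could look like the following. The `prev_swing` bookkeeping (using the last swing leg to decide whether a double-support sample is SU-RHS or SU-LHS) is our assumption about the labelling convention, not a detail stated in the paper.

```python
FORCE_THRESHOLD_N = 20.0  # foot-contact threshold from [23]

def label_phase(left_force_n, right_force_n, prev_swing):
    """Label one sample from plantar forces.

    prev_swing: which leg swung last ('L' or 'R'). A double-support phase that
    follows a right swing begins with the right heel strike (SU-RHS), and the
    one following a left swing begins with the left heel strike (SU-LHS).
    """
    left_on = left_force_n > FORCE_THRESHOLD_N
    right_on = right_force_n > FORCE_THRESHOLD_N
    if left_on and right_on:                      # double support
        return "SU-RHS" if prev_swing == "R" else "SU-LHS"
    return "SW-L" if not left_on else "SW-R"      # single support = swing phase
```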

2.3. Structure of LSTM-CNN Neural Network

2.3.1. Structure of LSTM

As an improved neural network model based on the recurrent neural network (RNN), the LSTM neural network is mainly used to solve the problems of gradient vanishing and gradient explosion during long-sequence training. In short, LSTM performs better on longer time series than an ordinary RNN. The reason LSTM can solve the long-term dependency problem of RNN is that it introduces a gate mechanism to control the flow and loss of features. The storage unit of LSTM contains three gate structures: the forget gate, the input gate, and the output gate. The forget gate determines the information to be retained and discarded from the previous storage unit, the input gate parses the information that needs to be updated into the storage unit, and the output gate determines the output information according to the input and the storage unit [24,25]. It is precisely because LSTM has this unique internal structure that it can process time-series signals well. The internal structure of the LSTM unit is shown in Figure 2, and the three gates are calculated as follows:
$\begin{cases} i_t = \sigma(W_i h_{t-1} + W_i X_t + b_i) \\ f_t = \sigma(W_f h_{t-1} + W_f X_t + b_f) \\ O_t = \sigma(W_O h_{t-1} + W_O X_t + b_O) \end{cases}$
The update formula of the cell state $C_t$ and the output $h_t$ of the whole unit are as follows:
$\begin{cases} h_t = O_t \times \tanh(C_t) \\ C_t = f_t \times C_{t-1} + i_t \times \tanh(W_c h_{t-1} + W_c X_t + b_c) \end{cases}$
where $X_t$ represents the input, $h_t$ represents the hidden layer state at time $t$, and $W$ and $b$ represent the weight matrices and bias terms of the LSTM unit, respectively.
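To make the data flow concrete, a scalar toy version of one LSTM step can be written directly from the equations above. The weights below are arbitrary illustrative values, and a real layer uses weight matrices rather than scalars.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step (scalar toy version of the equations above).

    Following the paper's formulation, the same weight W_g is applied to both
    h_{t-1} and x_t for each gate g.
    """
    i_t = sigmoid(W["i"] * h_prev + W["i"] * x_t + b["i"])  # input gate
    f_t = sigmoid(W["f"] * h_prev + W["f"] * x_t + b["f"])  # forget gate
    o_t = sigmoid(W["o"] * h_prev + W["o"] * x_t + b["o"])  # output gate
    c_t = f_t * c_prev + i_t * math.tanh(W["c"] * h_prev + W["c"] * x_t + b["c"])
    h_t = o_t * math.tanh(c_t)
    return h_t, c_t

W = {g: 0.5 for g in "ifoc"}  # illustrative scalar weights
b = {g: 0.0 for g in "ifoc"}
h, c = lstm_step(1.0, 0.0, 0.0, W, b)
```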

2.3.2. Structure of CNN

The structure of the CNN model is shown in Figure 3. A CNN mainly includes three parts: the convolution layer, the pooling layer, and the fully connected layer. The convolution layer is mainly used to extract the features of the input data; the pooling layer is mainly used to compress the amount of data and parameters and reduce overfitting; and the fully connected layer is used for classification. As the most important part of the CNN, the convolution layer mainly uses convolution kernels to extract features. The output of the convolution layer can be expressed by the following formula:
$x_j^l = f\left(\displaystyle\sum_{i=1}^{M} \left(x_i^{l-1} \times W_j^l\right) + b_j^l\right),$
where $x_j^l$ is the output feature of layer $l$, $W_j^l$ is the convolution kernel weight of layer $l$, $x_i^{l-1}$ is the input feature from layer $l-1$, $b_j^l$ is the corresponding bias, and $f$ is the activation function; ReLU is used as the activation function in this paper.
The pooling layer is connected behind the convolution layer to reduce overfitting during network training. There are two main pooling methods: maximum pooling and average pooling. When maximum pooling is adopted, the largest feature value in the pooled area is taken as the new feature point; when average pooling is adopted, the average feature value in the pooled area is taken as the new feature point. In this paper, maximum pooling is adopted. The input data is transformed into final feature samples after being processed by the multi-layer convolution and pooling layers, and the fully connected layer maps these features to the sample label space to enable the CNN to realize gait recognition.
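A minimal 1-D sketch of the convolution formula above and maximum pooling, with an arbitrary kernel for illustration:

```python
def conv1d(x, kernel, bias=0.0, relu=True):
    """Valid 1-D convolution with stride 1, followed by ReLU (per the formula above)."""
    k = len(kernel)
    out = [sum(x[i + j] * kernel[j] for j in range(k)) + bias
           for i in range(len(x) - k + 1)]
    return [max(0.0, v) for v in out] if relu else out

def max_pool1d(x, size=2, stride=2):
    """Max pooling: keep the largest value in each pooled window."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, stride)]

feat = conv1d([0, 1, 2, 3, 2, 1], [1, 0, -1])  # 1x3 kernel, stride 1
pooled = max_pool1d(feat)
```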

2.3.3. Structure of LSTM-CNN

Figure 4 shows the LSTM-CNN neural network model structure used for gait recognition in this paper, which includes a double-layer LSTM network and a double-layer CNN network. In the LSTM, the two hidden layers contain 128 units each, and the activation function used by the hidden layers is the tanh function. In the CNN, the two convolution layers have 64 and 128 convolution kernels, respectively; the size of the convolution kernels is 1 × 3, and the convolution stride is 1. The pooling size in both pooling layers is 2 × 2; the stride of the first pooling layer is 2, and the stride of the second pooling layer is 1. In this paper, cross entropy is used as the loss function of the neural network model, and the Adam optimizer is used to optimize the parameters of the LSTM-CNN network.
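As a sanity check on the stated hyperparameters, the temporal length of a 20-sample window can be traced through the convolution and pooling stages. The conv→pool→conv→pool ordering and the absence of padding are our assumptions, since the paper does not state them.

```python
def conv_out(n, kernel=3, stride=1):
    """Output length of a valid (unpadded) 1-D convolution."""
    return (n - kernel) // stride + 1

def pool_out(n, size=2, stride=2):
    """Output length of a 1-D pooling stage."""
    return (n - size) // stride + 1

n = 20                 # samples per sliding window
n = conv_out(n)        # conv1, 1x3 kernel, stride 1: 20 -> 18
n = pool_out(n, 2, 2)  # pool1, stride 2:             18 -> 9
n = conv_out(n)        # conv2, 1x3 kernel, stride 1:  9 -> 7
n = pool_out(n, 2, 1)  # pool2, stride 1:              7 -> 6
```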

2.4. Evaluation Method

In order to evaluate the gait recognition performance of the neural network model used in this paper, accuracy (Acc) and F1-score are used as performance evaluation indicators.
The formula of Acc is as follows:
$\mathrm{Acc} = \dfrac{TP + TN}{TP + TN + FP + FN},$
where TP, TN, FP, and FN represent true positive, true negative, false positive, and false negative, respectively.
The F1-score is calculated based on two evaluation indicators: precision (Pre) and recall (Rec). Due to the interaction between precision and recall, it is sometimes difficult to compare the performance differences between methods through these two indicators alone. The F1-score combines the results of precision and recall and can represent the performance of the model in terms of both. When the F1-score is high, both precision and recall are high. However, F1-score, precision, and recall are generally used for the evaluation of binary classification tasks, which is not applicable to gait recognition, the typical multi-class classification task studied in this paper. Macro-F1, macro-precision, and macro-recall, proposed in [26], are usually used to evaluate model performance on multi-class tasks. The macro-precision, macro-recall, and macro-F1 score are calculated as follows:
$\begin{cases} \mathrm{Pre}_{macro} = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n} \dfrac{TP_i}{TP_i + FP_i} \times 100\% \\ \mathrm{Rec}_{macro} = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n} \dfrac{TP_i}{TP_i + FN_i} \times 100\% \\ F1_{macro} = \dfrac{2 \times \mathrm{Pre}_{macro} \times \mathrm{Rec}_{macro}}{\mathrm{Pre}_{macro} + \mathrm{Rec}_{macro}} \end{cases}$
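These formulas can be checked with a few lines of Python; the per-phase counts below are hypothetical.

```python
def macro_scores(per_class):
    """Macro precision, recall, and F1 from per-class (TP, FP, FN) counts."""
    n = len(per_class)
    prec = sum(tp / (tp + fp) for tp, fp, _ in per_class) / n
    rec = sum(tp / (tp + fn) for tp, _, fn in per_class) / n
    f1 = 2 * prec * rec / (prec + rec)
    return prec, rec, f1

# hypothetical counts for the four gait phases
counts = [(90, 10, 10), (80, 20, 20), (90, 10, 10), (80, 20, 20)]
p, r, f = macro_scores(counts)
```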
In addition to accuracy and macro-F1 score, confusion matrix is often used to evaluate the performance of neural network models [27]. The mathematical expression of the confusion matrix is as follows:
$C = \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \\ c_{41} & c_{42} & c_{43} & c_{44} \end{pmatrix}$
The formula of each element in the matrix is as follows:
$c_{ij} = \dfrac{n_{ij}}{n_i},$
where $n_{ij}$ represents the number of samples of gait phase $i$ recognized as phase $j$, and $n_i$ is the total number of testing samples in gait phase $i$.
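A row-normalized confusion matrix per the definition above can be computed as follows, with phases indexed 0–3; the example label streams are hypothetical.

```python
def confusion_matrix(true_phases, pred_phases, n_classes=4):
    """Row-normalized confusion matrix: c[i][j] = n_ij / n_i (per the formula above)."""
    counts = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(true_phases, pred_phases):
        counts[t][p] += 1
    norm = []
    for row in counts:
        total = sum(row)
        norm.append([v / total if total else 0.0 for v in row])
    return norm

# phase 0 recognized correctly half the time, phase 1 always correct
C = confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1])
```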

3. Experiment and Results of Gait Recognition

Ten healthy young adults (age = 26 ± 3 years, mass = 70 ± 7.3 kg, height = 175 ± 5.5 cm) and ten healthy elderly adults (age = 64.5 ± 7.6 years, mass = 61.6 ± 10.8 kg, height = 167.3 ± 8.4 cm) without known lower limb musculoskeletal or neurological dysfunction participated in this study. Written and verbal instructions on the testing procedures were provided, and written consent was obtained from each subject prior to the experiment. The experimental protocol was approved by the Human Ethical Review Committee of Jilin University (No. 2023-233). After becoming familiar with the experimental process, the subjects wore the adjusted sensors and completed the walking trials. In the experiment, walking data were collected at slow (about 1.2 m/s), medium (about 1.5 m/s), and fast (about 1.8 m/s) speeds. Due to the limited length of the self-developed force platform, each subject needed to complete fifteen trials. Thus, for each subject, the experiment collected walking data of 30 gait cycles (10 gait cycles for each of the three walking speeds), and 600 gait cycles (30 gait cycles per subject × 20 subjects) were collected in total. Walking data of 500 gait cycles (25 gait cycles per subject) were taken for LSTM-CNN model training, and the remaining 100 gait cycles were reserved for subsequent validation experiments. The walking data of subject 1 in one gait cycle are shown in Figure 5. In this paper, MATLAB 2021a was used as the training software for the neural network model. The hardware configuration included an i9-11900k CPU, a GTX2070ti graphics card, and 32 GB of memory.

3.1. The Training of LSTM-CNN

The joint angles collected by the IMUs were processed according to the above data preprocessing method. The sensor system used in this paper can collect six joint angles of the human lower limbs. When the data collected by sensors in different position combinations are used as the input of the neural network model, the model may perform differently in gait recognition. In this paper, the experiments were divided into seven groups according to different sensor position combinations, as shown in Table 1. The gait recognition flow chart based on LSTM-CNN is shown in Figure 6, in which the training dataset accounts for 70% of the total dataset and the test dataset accounts for 30%. Through iterative training, when the loss value falls below the threshold value, the model training is considered complete.
Table 2 shows the statistics of the seven groups of experiments on the gait phases of SU-RHS, SW-L, SU-LHS, and SW-R, including precision, recall, and F1-score. LSTM-CNN performed best in the seventh group of experiments: except for the precision of SU-LHS (89.08%), all indexes are greater than 90%. The performance of the model was poor in the third group of experiments, in which the F1-scores and recalls are lower than 90%; the F1-score of SU-RHS, 78.64%, is the only index lower than 80%. When the data collected by two IMUs were used as the model input, LSTM-CNN performed poorly, with an average F1-score of 85.25% across groups 1 to 3. When the data collected by four IMUs were used, the performance of LSTM-CNN improved significantly, with an average F1-score of 91.80% across groups 4 to 6. The model performed best in the seventh group of experiments, with an average F1-score of 95.61%.
Figure 7 shows the statistical results of accuracy, macro-precision, macro-recall, and macro-F1 using the LSTM-CNN neural network model in the seven groups of experiments. It can be seen from Figure 7 that the accuracy, macro-precision, macro-recall, and macro-F1 in the third group of experiments are 83.56%, 81.81%, 82.61%, and 82.21%, respectively, and 96.49%, 95.84%, 95.45%, and 95.64%, respectively, in the seventh group of experiments; the indexes are the smallest in the third group and the largest in the seventh group. In addition, each index in groups 1 to 3 is less than 90%. Figure 8 shows the F1-scores of LSTM-CNN for the SU-RHS, SW-L, SU-LHS, and SW-R sub-phases. It can be seen from Figure 8 that the F1-score for SU-LHS is highest in the fourth group of experiments at 95.11%, while the maximum F1-scores for the other three phases are all in the seventh group of experiments, at 97.26%, 95.84%, and 97.77%, respectively. Figure 9 presents the confusion matrices of the seven groups of experiments. It can be seen from Figure 9 that the recognition accuracies of SU-RHS and SW-R are highest in the seventh group of experiments, at 95.26% and 96.66%, respectively; the recognition accuracy of SW-L is highest in the sixth group of experiments, at 97.22%; and the recognition accuracy of SU-LHS is highest in the fourth group of experiments, at 97.86%.

3.2. Experiment and Results Based on DPF-LSTM-CNN

Based on the above experimental results, the best overall gait phase recognition performance of LSTM-CNN appeared with the data from all six IMUs; meanwhile, the recognition accuracies for SU-RHS and SW-R were also the highest among the four phases in this case. Therefore, the trained model derived from the data of all six IMUs is defined as LSTM-CNN-1. However, the highest recognition accuracy for the SW-L sub-phase appeared with the data from the four IMUs located on the thighs and shanks; the corresponding trained model is defined as LSTM-CNN-2. Comparatively, the highest recognition accuracy for the SU-LHS sub-phase appeared with the data from the four IMUs located on the shanks and feet; the corresponding trained model is defined as LSTM-CNN-3. The four phases (SU-RHS, SW-L, SU-LHS, SW-R) appear in a fixed sequence throughout a complete gait cycle and continue to circulate during a continuous gait task. Accordingly, a gait recognition method defined as DPF-LSTM-CNN is proposed, and its execution procedure is shown in Figure 10. At the beginning of the recognition procedure, the data from all six IMUs are input to the trained model LSTM-CNN-1 to recognize the current gait phase; since the four phases follow a fixed sequence, the next phase is thereby also predicted. When that next phase becomes the current phase, the trained model (LSTM-CNN-1, 2, or 3) with the highest recognition accuracy for it can be selected as elaborated above, so that the current gait phase is always recognized by the model with the highest accuracy for it. This procedure is repeated in a loop to achieve real-time recognition of gait phases until walking is over.
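The phase-sequencing core of the DPF step, using the fixed phase order to pick the specialist model for the upcoming phase, can be sketched as follows. The model objects are stand-ins (strings), and the phase→model mapping follows the text above.

```python
CYCLE = ["SU-RHS", "SW-L", "SU-LHS", "SW-R"]  # fixed phase order in a gait cycle

# Which trained model recognizes each phase with the highest accuracy.
BEST_MODEL = {"SU-RHS": "LSTM-CNN-1", "SW-R": "LSTM-CNN-1",
              "SW-L": "LSTM-CNN-2", "SU-LHS": "LSTM-CNN-3"}

def next_phase(phase):
    """The phase that must follow `phase` in the fixed gait sequence."""
    return CYCLE[(CYCLE.index(phase) + 1) % len(CYCLE)]

def model_for_next(current_phase):
    """Select the specialist model for the upcoming phase (the DPF step)."""
    return BEST_MODEL[next_phase(current_phase)]
```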
In order to improve the accuracy of the neural network model in gait recognition and reduce error accumulation, the recognition result of the model is considered effective only when the gait at five consecutive time steps is recognized as the same gait phase.
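The five-consecutive-samples rule can be sketched as a simple debounce filter; the prediction stream below is hypothetical.

```python
def debounce(predictions, hold=5):
    """Accept a phase prediction only after `hold` consecutive identical results."""
    accepted, run, last = [], 0, None
    for p in predictions:
        run = run + 1 if p == last else 1
        last = p
        if run >= hold:        # the streak is long enough to trust
            accepted.append(p)
    return accepted

# A two-sample glitch (SU-LHS) never reaches the hold count and is rejected.
stream = ["SW-L"] * 6 + ["SU-LHS"] * 2
effective = debounce(stream)  # -> ["SW-L", "SW-L"]
```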
In order to verify the performance of this method in gait recognition, the 20% of each subject's gait data that did not participate in neural network model training was taken as the validation data set; LSTM-CNN-1 and DPF-LSTM-CNN were then used for gait recognition to compare the differences between the two methods.
Table 3 shows the accuracy and macro-F1 of gait recognition using the two methods with the 20 subjects as verification objects. It can be seen from Table 3 that the average accuracy and average macro-F1 of gait recognition using LSTM-CNN-1 are 94.17% and 94.38%, respectively, while those using DPF-LSTM-CNN are 96.73% and 96.48%, respectively. Figure 11 shows the recognition accuracy of the two methods for the four gait phases when subject 1 was the verification object. The accuracy of LSTM-CNN-1 for the SU-RHS, SW-L, SU-LHS, and SW-R sub-phases is 94.85%, 94.13%, 95.25%, and 96.84%, respectively; the accuracy of DPF-LSTM-CNN is 96.83%, 97.73%, 98.47%, and 96.88%, respectively.
To further verify the performance of DPF-LSTM-CNN, we compared this method with several algorithms, including deep convolutional neural networks (DCNN) [16] and convolutional and long short-term memory neural networks (CNN-LSTM) [13]. Table 4 shows the recognition accuracy and macro-F1 of DPF-LSTM-CNN, DCNN, and CNN-LSTM. It can be seen from Table 4 that the accuracy and macro-F1 of gait recognition based on DPF-LSTM-CNN are the highest, at 97.21% and 96.48%, respectively.

4. Discussion

In this paper, the LSTM-CNN neural network model was used to recognize four gait phases based on the self-developed wearable wireless sensor system. The paper analyzed the performance of the model in gait recognition using groups of input data collected by seven different IMU grouping methods. It can be seen from Figure 7 that in the first to third groups of experiments, when only the data collected by two IMUs were used as the model input, the accuracy, macro-recall, macro-precision, and macro-F1 of LSTM-CNN in recognizing the four gait phases are less than 90%. In the fourth to seventh groups, when four or more IMUs were used, each index exceeds 90%, and in the seventh group each index exceeds 95%. These results show that as the feature dimension input into LSTM-CNN becomes higher, the comprehensive performance of the model for gait recognition improves accordingly. When the data of all six IMUs were used as the model input, LSTM-CNN performed best in gait recognition, with accuracy, macro-recall, macro-precision, and macro-F1 of 96.49%, 95.45%, 95.84%, and 95.64%, respectively.
Although the comprehensive performance of LSTM-CNN improves when the input feature dimension is higher, LSTM-CNN shows different characteristics when recognizing the four gait phases. It can be seen from Figure 8 that the maximum F1-scores of SU-RHS, SW-L, SU-LHS, and SW-R are not all in the seventh group of experiments. When the model was used to recognize the SU-LHS sub-phase, the F1-score in group 4 is 97.86%, which is higher than the 94.23% in group 7. It can be seen from Figure 9 that LSTM-CNN still performs best overall in the seventh group of experiments, with an average recognition accuracy of 95.60% over the four gait phases. However, there are differences in the recognition of each gait phase. LSTM-CNN has the highest recognition accuracy in the seventh group only when recognizing the SU-RHS and SW-R sub-phases, with accuracies of 95.26% and 96.66%, respectively. The highest accuracy for SW-L, 97.22%, is in the sixth group, and the highest accuracy for SU-LHS, 97.86%, is in the fourth group. These results show that the performance of the neural network model may be better with fewer sensors when recognizing a specific gait phase. Applying this conclusion to the research and development of lower limb rehabilitation equipment can, while improving the accuracy of gait recognition, reduce both the number of sensors required and the computing load of the control system, so that the equipment does not rely on high-performance microprocessors, effectively reducing its cost.
It can be seen from Table 3 that when the DPF-LSTM-CNN model was used for gait recognition, the accuracy and macro-F1 were significantly improved: with the 20 subjects as validation objects, the average accuracy improved from 94.17% to 96.73%, and the average macro-F1 from 94.38% to 96.48%. Taking subject 1 as an example, it can be seen from Figure 11 that the recognition accuracies of LSTM-CNN-1 and DPF-LSTM-CNN for SW-R are similar, at 96.84% and 96.88%, respectively. Except for the SW-R sub-phase, the accuracy of DPF-LSTM-CNN in recognizing the other three gait phases is significantly higher than that of LSTM-CNN-1. Furthermore, compared to DCNN and CNN-LSTM, DPF-LSTM-CNN performed better in gait recognition.

5. Conclusions

Based on the self-developed wearable wireless sensor system, the performance of the LSTM-CNN neural network model in gait recognition was analyzed using groups of input data collected by seven different IMU grouping methods. The experimental results show that when all six IMUs were used, the overall performance of the model in gait recognition is the best, with accuracy and macro-F1 of 96.49% and 95.64%, respectively. The recognition accuracies of the model for SU-RHS and SW-R are the highest under this condition, at 95.26% and 96.66%, respectively. When using the data of the IMUs located at the left and right thighs and shanks, the model has the highest accuracy for SW-L, at 97.22%. When using the data of the IMUs located at the left and right shanks and feet, the model has the highest accuracy for SU-LHS, at 97.86%. Based on the above conclusions, this paper proposed a novel gait recognition method based on DPF-LSTM-CNN and experimentally verified its performance in gait recognition. The experimental results show that the method performed better in gait recognition than DCNN and CNN-LSTM. The gait recognition method based on DPF-LSTM-CNN proposed in this paper can provide a useful reference for the accurate control of lower limb motion rehabilitation assistive devices.

Author Contributions

K.L.: Writing—review and editing, methodology, and conceptualization. Y.L.: Writing—original draft, software, methodology, formal analysis, and data curation. S.J.: Visualization, validation, methodology, and investigation. C.G.: Investigation and supervision. S.Z.: Project administration and formal analysis. J.F.: Software and methodology. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the experimental protocol was approved by the Human Ethical Review Committee of Jilin University (Approval No. 2023-233).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are unavailable due to privacy or ethical restrictions.

Acknowledgments

The authors wish to acknowledge the support of the volunteer subjects of the Robotics and Dynamics Research Lab at Jilin University.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The actual application scenario of the sensor system.
Figure 2. Internal structure of LSTM unit.
Figure 3. Structure of CNN.
Figure 4. Structure of LSTM-CNN.
Figure 5. Gait phase division diagram. The Lgrf represents the left ground reaction force and Rgrf represents the right ground reaction force.
Figure 6. Gait pattern recognition framework based on LSTM-CNN.
Figure 7. Accuracy, macro-recall, macro-precision, and macro-F1 of LSTM-CNN in 7 experiments.
Figure 8. F1-scores of LSTM-CNN for four gait phases recognition in 7 experiments.
Figure 9. Confusion matrix of LSTM-CNN in 7 experiments. (a) Group 1; (b) Group 2; (c) Group 3; (d) Group 4; (e) Group 5; (f) Group 6; (g) Group 7.
Figure 10. Gait recognition flow chart based on DPF-LSTM-CNN.
Figure 11. Gait phase recognition accuracy for each phase and overall of subject 1.
Table 1. The groups of experiments depend on the IMU position combination.

Group | Position Combination of IMU
1 | Left and right thighs
2 | Left and right shanks
3 | Left and right feet
4 | Left and right thighs and shanks
5 | Left and right thighs and feet
6 | Left and right shanks and feet
7 | Left and right thighs, shanks and feet
Table 2. Comparison of precision, recall, and F1-score of different groups.

Group | Gait Phase | Pre (%) | Rec (%) | F1-Score (%)
1 | SU-RHS | 76.43 | 86.67 | 81.23
1 | SW-L | 81.89 | 85.84 | 83.82
1 | SU-LHS | 78.75 | 91.11 | 84.48
1 | SW-R | 95.01 | 80.27 | 87.02
2 | SU-RHS | 91.89 | 84.17 | 87.86
2 | SW-L | 86.79 | 86.79 | 86.79
2 | SU-LHS | 85.86 | 87.11 | 86.48
2 | SW-R | 91.78 | 92.91 | 92.34
3 | SU-RHS | 74.43 | 83.35 | 78.64
3 | SW-L | 83.06 | 81.22 | 82.13
3 | SU-LHS | 86.50 | 81.95 | 88.63
3 | SW-R | 83.26 | 83.92 | 83.59
4 | SU-RHS | 93.60 | 92.46 | 93.03
4 | SW-L | 88.39 | 94.15 | 91.18
4 | SU-LHS | 92.51 | 97.86 | 95.11
4 | SW-R | 90.65 | 90.03 | 90.34
5 | SU-RHS | 96.17 | 92.77 | 94.44
5 | SW-L | 94.08 | 88.79 | 91.36
5 | SU-LHS | 84.00 | 94.24 | 88.83
5 | SW-R | 91.63 | 94.72 | 93.15
6 | SU-RHS | 97.28 | 84.13 | 90.23
6 | SW-L | 86.38 | 97.22 | 91.48
6 | SU-LHS | 88.33 | 91.22 | 89.75
6 | SW-R | 95.88 | 89.78 | 92.73
7 | SU-RHS | 98.96 | 95.62 | 97.26
7 | SW-L | 96.42 | 95.27 | 95.84
7 | SU-LHS | 89.08 | 94.23 | 91.58
7 | SW-R | 98.91 | 96.66 | 97.77
Table 3. Accuracy and macro-F1 of gait recognition using LSTM-CNN-1 and DPF-LSTM-CNN for 20 subjects as verification objects, respectively.

Subject | LSTM-CNN-1 Acc (%) | LSTM-CNN-1 Macro-F1 (%) | DPF-LSTM-CNN Acc (%) | DPF-LSTM-CNN Macro-F1 (%)
1 | 94.84 | 93.67 | 97.63 | 97.04
2 | 96.37 | 96.11 | 97.84 | 97.29
3 | 94.12 | 95.23 | 95.29 | 96.24
4 | 92.66 | 94.74 | 98.63 | 95.86
5 | 94.58 | 93.29 | 97.33 | 96.19
6 | 93.38 | 93.67 | 95.43 | 96.35
7 | 91.88 | 92.79 | 98.11 | 94.83
8 | 94.76 | 94.28 | 97.21 | 96.92
9 | 93.71 | 95.77 | 96.78 | 97.03
10 | 94.43 | 94.36 | 97.02 | 96.74
11 | 95.01 | 94.96 | 96.75 | 98.05
12 | 91.33 | 93.47 | 95.87 | 96.74
13 | 93.71 | 95.61 | 97.23 | 95.39
14 | 95.44 | 94.35 | 98.39 | 96.54
15 | 96.36 | 95.81 | 96.56 | 97.06
16 | 95.27 | 92.57 | 97.76 | 97.88
17 | 93.19 | 93.55 | 96.83 | 96.43
18 | 92.32 | 95.18 | 97.73 | 95.78
19 | 94.79 | 94.83 | 97.39 | 94.35
20 | 95.32 | 93.44 | 98.46 | 96.79
Average | 94.17 | 94.38 | 97.21 | 96.48
Table 4. Accuracy and macro-F1 of gait recognition using DPF-LSTM-CNN, DCNN, and CNN-LSTM.

Model | DPF-LSTM-CNN | DCNN | CNN-LSTM
Accuracy | 97.21% | 95.37% | 94.57%
Macro-F1 | 96.48% | 94.86% | 95.38%

Share and Cite

MDPI and ACS Style

Liu, K.; Liu, Y.; Ji, S.; Gao, C.; Zhang, S.; Fu, J. A Novel Gait Phase Recognition Method Based on DPF-LSTM-CNN Using Wearable Inertial Sensors. Sensors 2023, 23, 5905. https://0-doi-org.brum.beds.ac.uk/10.3390/s23135905