Review

Robustly Effective Approaches on Motor Imagery-Based Brain Computer Interfaces

by
Seraphim S. Moumgiakmas
and
George A. Papakostas
*
MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
*
Author to whom correspondence should be addressed.
Submission received: 23 March 2022 / Revised: 13 April 2022 / Accepted: 19 April 2022 / Published: 24 April 2022
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)

Abstract

Motor Imagery Brain Computer Interfaces (MI-BCIs) are systems that receive the user's brain activity as an input signal so that, by detecting the imagination of a movement, communication between the brain and the interface is established or an action is performed. The features of the brainwaves are crucial to increasing the performance of the interface, and their robustness must be ensured so that effectiveness remains high across different subjects. The present work is a review of scientific publications, from 2017 until today, on the use of robust feature extraction methods in Motor Imagery. The research showed that the majority of the works focus on spatial features through Common Spatial Patterns (CSP) methods (44.26%). Based on the combination of accuracy percentages and K-values, which show the effectiveness of each approach, Wavelet Transform (WT) methods showed higher robustness than CSP and PSD methods in the majority of the datasets used for comparison, and in the majority of the works included in the present review, although their usage percentage in the literature is lower (16.65%). The research also showed an increase, in 2019, in the detection of spatial features to increase the robustness of an approach, while time-frequency features, or combinations of feature domains, achieve better results, with their use increasing from 2019 onwards. Additionally, Wavelet Transforms and their variants, in combination with deep learning, manage to achieve high percentages, making such methods robustly accurate.

Graphical Abstract

1. Introduction

The brain is the most complex human organ and, even today, it has not been fully mapped. Everything a person does, thinks, or imagines is a result of the communication between different parts of the brain. This communication is conducted through electrical pulses, whose frequency depends on the state the individual is in at the given time. The frequencies of the brain signals have been categorized as shown in Table 1 [1,2,3]:
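The categorization of brain signals by frequency can be illustrated with a short sketch. The band edges below are typical textbook values standing in for the ranges of Table 1 (which is not reproduced here); the code simply reports which band contains the strongest spectral peak of a signal.

```python
import numpy as np

# Typical EEG band edges in Hz (assumed values; the exact
# ranges used by the review are those listed in its Table 1).
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 100.0),
}

def dominant_band(signal: np.ndarray, fs: float) -> str:
    """Return the band containing the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    for name, (lo, hi) in BANDS.items():
        if lo <= peak < hi:
            return name
    return "out of range"

# A 10 Hz sine (an alpha-range rhythm) sampled at 250 Hz for 2 s.
t = np.arange(0, 2.0, 1.0 / 250.0)
print(dominant_band(np.sin(2 * np.pi * 10.0 * t), fs=250.0))  # alpha
```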
The brain signals can be acquired, filtered, and processed in order for an action to be executed, such as the movement of a robotic part. Brain Computer Interfaces (BCIs) are systems through which communication is achieved between the human brain and applications, which in some cases include the control of hardware. The goal of these systems is to transmit the command from the brain to the final implementation through the input signals that the brain signals constitute. The basic idea on which BCIs are based is to allow the individual to communicate and manage operations without using muscles, only through brain activity [4]. These systems have been used in various fields such as communication [5], prosthetics [6], accident prevention [7], health monitoring [8], timely prediction of brain disorders [9], and even entertainment applications [10]. In addition, the continuous evolution of BCI systems has led to the implementation of this technology in training strategies, Internet of Things applications, cognitive skills analysis, and the identification of problems in tasks such as writing or drawing. More specifically, in the works [11,12,13] the authors present analyses based on eye motion experiments, which lead to approaches that can identify cognitive abilities, fixation parameters, and factors affecting eye–hand coordination. The results of the above papers show how, through BCI systems, subjects can measure their cognitive skills, complex task execution, or learning abilities, receive early detection of a problem, or improve these skills. Moreover, ref. [14] presents the correlation between executive function and algorithmic thinking, and [15] shows how various media contents affect brain waves in terms of the individual's attention ability. As already mentioned above, BCI systems can interact with the subject's environment; for example, the authors of [16] designed a BCI system controlling an Internet of Things-based robot.
As can be seen from all the above, BCI systems are constantly evolving and finding applications in a variety of different areas, from the communication between the individual and his/her environment to the physical activities of the individual himself/herself.

1.1. Acquisition Methods

The process by which brain signals are loaded as input signals can be divided into three categories [4,17,18]:
Non-invasive: This technique is based on electrodes that measure the potential generated by the human brain through electroencephalography (EEG) or take measurements of the magnetic field through magnetoencephalography (MEG). The sensors are located at specific points on the scalp’s surface, where it has been observed that the function of neurons is more intense than in most human functions, resulting in the sensors receiving a stronger signal.
Semi-invasive: With this technique, electrodes are located on the exposed surface of the brain, measuring the electrical activity coming from the cerebral cortex (electrocorticography (ECoG)). The electrodes can be placed on the outside of or under the dura mater.
Invasive: In this technique, sensors are placed into the brain. More specifically, micro-electrodes are located in the cortex, measuring the electrical activity of every single neuron.
Non-invasive techniques constitute the safest method for measuring the brain's electrical activity or magnetic field, because the electrodes are placed on the surface of the scalp and do not penetrate it. For the same reason, semi-invasive and invasive techniques are considered risky: the placement of the electrodes requires surgery, and in the case of invasive techniques, where micro-electrodes are in contact with the cortex, the procedure is even riskier. Additionally, the cost of non-invasive equipment is low, as it can easily be found on the market. Of course, the price varies depending on the type of equipment and on the implementation; nevertheless, the cost is still very low compared to the other two techniques. On the other hand, the non-invasive technique is not as effective as the other techniques [19] and needs more preprocessing stages, because measuring the neurons' activity from the surface of the scalp results in a significant increase in noise and a decrease in spatial resolution [17]. However, in combination with the development of technology in the field of neurotechnology, this technique has managed to achieve high classification performances in applications.

1.2. Communication Approaches

Brain Computer Interfaces are also divided into categories based on how the brain sends commands and communicates with the application. Some of these approaches are based on the human senses. The event-related potential (ERP) techniques are based on the responses of the individual when observing, separately or in groups, repetitive visual (SSVEP, P300), auditory (SSAEP), or somatosensory (SSSEP) stimuli, whose oscillation is periodic. As the individual focuses on the stimulus, the frequencies of the brain's electrical waves tend to match the frequencies of the oscillating stimuli. In this way, the command is given through the brain's signals to the implementation in order for a function to be performed [20,21,22,23,24,25].
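The frequency-matching principle behind SSVEP can be sketched as follows. This is a deliberately simplified, hypothetical detector: it compares the spectral power at a few candidate stimulus frequencies on a single simulated channel, whereas practical SSVEP systems use multi-channel methods such as canonical correlation analysis and also consider harmonics.

```python
import numpy as np

def ssvep_target(signal, fs, candidate_freqs):
    """Pick the candidate stimulus frequency with the most spectral power.

    A toy single-channel sketch of SSVEP detection, not a production method.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # Power at the FFT bin nearest each candidate frequency.
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Simulated EEG: the brain rhythm "locks" to a 12 Hz flickering stimulus.
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)
print(ssvep_target(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # 12.0
```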
Additionally, there are approaches based on the individual's imagination. Motor imagery (MI) is based on the idea that the individual consciously imagines that he/she is performing a movement [26]. Of course, this image must represent a movement that a human can actually perform, such as the movement of the arms or legs. Research has shown that the areas of the brain that are responsible for creating movements are associated with the areas where images of movement are created [27,28]. Motor imagery rests on two physiological phenomena. Event-related de-synchronization (ERD) is the decrease in the amplitude of a signal rhythm when the individual imagines or performs a movement. This decrease is observed in specific cortex regions related to movement or its imagination, while in other areas of the brain an increase in amplitude is observed. This amplitude increase in the signal's rhythm is the second phenomenon, called event-related synchronization (ERS) [29].

1.3. Process

Figure 1 shows the main process of a BCI system, which consists of six basic stages [30]. The first stage concerns the acquisition of the brain signals with one of the techniques analyzed in the previous section. As mentioned, the way in which these data are acquired plays a very important role; the majority of studies focus on non-invasive techniques and datasets, due to the easy installation of the equipment and the low cost. The preprocessing stage consists of methods aimed at improving signal quality. Common steps at the preprocessing level are subsampling, signal frequency filtering, channel scaling and selection, spatial filtering, and frequency decomposition [31]. At the feature extraction level, the fully preprocessed signals are fed into an extraction algorithm in order for critical information about the brainwaves to be extracted and analyzed in the time and/or frequency domains or with spatial filtering. This information is then grouped into feature vectors, which are categorized into classes by the classifiers [32]. The next stage is the implementation in the application, which acts on the classes of features obtained from the preprocessed brain signals. The final stage consists of the feedback that BCI systems receive [33,34]. Feedback has a great impact on increasing performance: through it, the mental strategy of the individuals improves while their EEG signals enter the BCI system [35].
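The six stages above can be sketched as a chain of functions. Every stage here is a hypothetical stub (simulated acquisition, trivial preprocessing, time-domain statistics, a threshold "classifier"), meant only to show how data flows through the pipeline, not to reproduce any method from the reviewed works.

```python
import numpy as np

# A minimal, hypothetical skeleton of the six-stage BCI loop described
# above; each stage is a stub standing in for the real methods.

def acquire(fs=250.0, seconds=2.0):
    """Stage 1: signal acquisition (here: simulated single-channel EEG)."""
    rng = np.random.default_rng(0)
    t = np.arange(0, seconds, 1.0 / fs)
    return np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.standard_normal(t.size)

def preprocess(x):
    """Stage 2: e.g. filtering/re-referencing (here: just remove the mean)."""
    return x - x.mean()

def extract_features(x):
    """Stage 3: a trivial time-domain feature vector."""
    return np.array([x.std(), np.sqrt(np.mean(x ** 2)), np.abs(x).max()])

def classify(features):
    """Stage 4: a stand-in threshold rule instead of a trained classifier."""
    return "imagined movement" if features[0] > 0.5 else "rest"

def implement(label):
    """Stage 5: translate the class into an application command."""
    return {"imagined movement": "MOVE", "rest": "IDLE"}[label]

def feedback(command):
    """Stage 6: feedback returned to the user to refine their strategy."""
    return f"executed: {command}"

print(feedback(implement(classify(extract_features(preprocess(acquire()))))))
```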

1.4. Feature Domains

The feature extraction stage has a crucial role in the BCIs in order for the system to be as efficient and as robust as possible. Robustness shows if and how much the accuracy changes based on the different brain characteristics of each participant. A robust method will extract only the features which will keep the performance of the model stable. This means that the performance will not be affected across the different sessions and participants. Based on the analysis domain, features can be divided into four categories [36,37,38]:
Time-domain: The time-domain features (TDF) are calculated based on the raw brain signals in relation to the change of time [39]. These temporal features constitute the simplest form of information on EEG signals. Additionally, based on the time domain, fractal analysis for nonlinear features can be extracted.
Frequency-domain: Frequency-domain features (FDF) describe the changes of the signal in relation to the frequency. The analysis of the spectral features is based on the Power Spectral Density (PSD) with which useful correlations can be drawn.
Statistical features of high importance, such as the standard deviation, correlation, contrast, minimum, maximum, root mean square, and average of a specific part of the EEG signal, are extracted through the above domains.
Time-frequency domain: The features extracted through time-frequency analysis (TFDF) constitute the basis for extracting useful information and patterns. Through time-frequency analysis, changes in the frequency of EEG signals can be acquired over time.
Spatial domain: Through this domain the representation of the EEG in a spatial way is achieved. The features of this domain are extracted by the interaction of multiple channels and are not limited to the study of a single one. Additionally, spatiotemporal features describe activities of various parts of the brain that are related to an action [38].
Furthermore, Wang, P. and Jiang, A., in [40], have roughly separated the features into three large categories based on their type:
  • Discriminative features;
  • Statistical features;
  • Data-driven adaptive features.
Discriminative features include all those features extracted in the time, spectral, and spatial domains, including all the variants produced by the corresponding domain methods. The statistical category includes, besides the statistical features themselves, entropy. Finally, the data-driven adaptive category includes the features extracted by Boltzmann machines, neural networks, or deep learning approaches.
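As an illustration of the first three domains above, the sketch below computes a few statistical time-domain features, a frequency-domain band power from a simple periodogram, and a crude time-frequency feature (band power over consecutive windows). The spatial domain needs multiple channels and is illustrated later with CSP; all numeric choices here are arbitrary examples, not values from any cited work.

```python
import numpy as np

def time_domain_features(x):
    """Simple temporal/statistical features from the raw signal."""
    return {"mean": x.mean(), "std": x.std(),
            "rms": np.sqrt(np.mean(x ** 2)),
            "min": x.min(), "max": x.max()}

def band_power(x, fs, lo, hi):
    """Frequency-domain feature: power in [lo, hi) Hz via a periodogram."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def band_power_over_time(x, fs, lo, hi, win=250):
    """Time-frequency feature: band power in consecutive windows."""
    n = x.size // win
    return [band_power(x[i * win:(i + 1) * win], fs, lo, hi) for i in range(n)]

fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t)          # a pure 10 Hz "mu-like" rhythm
print(time_domain_features(x)["rms"])      # ~0.707 for a unit sine
print(band_power(x, fs, 8, 13) > band_power(x, fs, 13, 30))  # True
print(len(band_power_over_time(x, fs, 8, 13)))               # 4
```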

2. Review’s Approach

2.1. Structure of the Work

The present review is organized as follows: The first section consists of the introduction, in which acquisition and communication methods are described, along with the basic structure of MI-BCIs and the feature domains. Section 2 presents the research methods used for the present review, as well as the early statistics of the research. Section 3 presents the related works that have been carried out. Section 4 presents the results of the present review; in more detail, it shows the most used methods, the feature domains the authors have focused on in order to achieve robustness, and the effectiveness of each method based on accuracy and robustness. It also compares the most effective methods based on the datasets on which they were used. Section 5 contains a discussion of the results, and Section 6 summarizes the final conclusions of this work. At this point, it is worth noting that the research and datasets were based solely on electrical signals acquired through electroencephalography. The plan that was followed was, after the selection and study of the works, to group them in terms of feature extraction methods and the feature domain the authors relied on, so that a trend would emerge about which method and which domain are the most used for achieving robustness. Then, the approaches that had numerical results in terms of both accuracy and robustness were presented; Cohen's Kappa value was selected to measure robustness. Next, the most efficient approaches were selected, in terms of feature extraction methods and classifiers, based on the datasets to which they were applied. In this way, for each dataset, one approach emerged as the most effective among the others applied to the same dataset.
Finally, conclusions emerged as to whether robustness matches the trend of the most used feature extraction methods and feature domains, what the most efficient robust approach is, and whether such an approach exists for all cases.

2.2. Research of Scientific Works

Establishing criteria for the selection of scientific papers was very important in order to analyze the information and draw conclusions in a documented way. The literature search was based on some scientific questions aimed at accurately extracting results in order for the final review to contain all the fundamental information:
  • Which are the most used feature extraction methods in Motor Imagery BCIs?
  • Which methods are robust?
  • Which approaches have high robustness and high accuracy as well?
Scopus (www.scopus.com, accessed on 15 November 2021) was used for searching for scientific works; it is a valid and reliable database for searching scientific papers. The search terms were matched against the title, abstract, and keywords of each work. A chronological filter was also used to display the papers of the last five years, in order for the review to relate to the latest developments on the subject. More specifically, the search in Scopus was as follows:
TITLE-ABS-KEY ((feature AND extraction) AND (robust) AND (motor AND imagery)) AND PUBYEAR > 2016
The review process of the papers was based on the following acceptance criteria:
  • The paper describes a feature extraction method of the MI-BCI field;
  • The method described in the paper aims to extract robust brain activity features;
  • The presentation of the numerical results of the metrics is necessary.
After the initial review of the results, the need emerged to establish rejection criteria in order to remove works whose content deviated from the main objectives of the review:
  • Toolboxes are described;
  • The paper is not related to the BCI field;
  • The paper does not describe a feature extraction method;
  • The proceedings were not accessible.
The research was conducted on 15 November 2021 and, before the application of the rejection criteria, the results comprised 52 papers. Figure 2 shows the various stages of the methodology followed, based on the PRISMA (www.prisma-statement.org, accessed on 12 April 2022) flow diagram. At this point, it is worth noting that the number of 52 papers concerned the primary search based on Scopus, to which the criteria analyzed above were applied. During the study of the results, the need for secondary research arose, through citation searching in Google Scholar (www.scholar.google.com, accessed on 11 April 2022). The reason for this secondary search was to expand the knowledge on specific subjects and terms in the MI-BCI and EEG fields. As a result, 45 additional works were retrieved and referenced, but they are not included in the following numeric or comparative tables or graphs.

2.3. Research Early Statistics

Based on the results, a connection emerged between the countries of origin of the scientific papers. With the usage of VosViewer (www.vosviewer.com, accessed on 3 December 2021) a bibliographic coupling network is constructed, between the countries with the largest number of scientific papers. As Figure 3 shows, China, Japan, and the United States of America have more intense bibliographic coupling based on the number of papers and the total number of citations. Apart from that, China is the country with the highest number of papers and citations (Link Strength). Japan, the United States of America, Australia, the United Kingdom and India follow.
Based on Figure 4, there is a visible increase in publications in 2019. Almost 69% of the papers published in 2019 describe a modified version of CSP (Common Spatial Patterns) methods or their variants. CSP is one of the most common and popular feature extraction methods in Motor Imagery, and it saw an intense emergence in 2019.

3. Related Work

In recent years, reviews and surveys have been published comparing the performance of feature extraction methods in BCI systems, since the feature extraction method plays a crucial role in the performance of each approach. In [41] the authors give a detailed presentation of the performance of deep learning approaches and their feature extraction methods for MI-BCI applications, including information about the datasets and the feature domains. In the survey [42], the advantages and disadvantages of feature extraction methods for BCI systems are analyzed. Furthermore, the review [43] presents a very detailed and analytical study of various feature extraction approaches for different fields such as Motor Imagery, epilepsy, attention, P300, etc. The present review aims to present the MI-BCI feature extraction methods, along with the classifiers used in each work, to achieve a robust performance. To achieve this goal, the Kappa metric was set to measure robustness, something that was not presented in the above works. More specifically, the above reviews show the accuracy of each approach and, in some cases, the Kappa value instead of the accuracy percentages; this shows that robustness plays a secondary role there, with the primary one belonging to the accuracy of each approach. The present work is based on the robustness of each approach and sets, as a secondary comparison criterion, the combined effectiveness of accuracy and robustness.

4. Efficient and Robust Approaches

4.1. Feature Extraction Methods

Table 2 shows the feature extraction methods used on MI-BCI applications identified in this review. As already mentioned above, almost half of the papers used more than one method. The combination of methods is intended to extract more robust features and increase the accuracy. The research showed that the most used feature extraction methods are the Common Spatial Patterns-based approaches (CSP), Power Spectral Density (PSD) methods and Wavelet Transform-based approaches.
The CSP category includes novel or modified approaches such as 8th-order Butterworth bandpass filters in CSP with Tikhonov regularization (FB-TRCSP), Filter Bank CSP (FBCSP), Frequency Domain CSP (FDCSP), Regularized CSPs (RCSP), and Dynamic Time Warping-based CSP. CSP, as a feature extraction method, is expressed by a filtering algorithm that extracts the eigenvalues of two modes in the spatial domain. Through several EEG channels, this algorithm extracts the spatial components while increasing the difference in variance between the two modes [44].
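The variance-maximizing idea behind CSP can be sketched in a few lines: estimate the average covariance matrix of each class, whiten the composite covariance, and diagonalize the whitened class covariance, keeping the filters at the two ends of the eigenvalue spectrum. This is a generic textbook formulation with toy data, not any of the specific variants listed above.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """CSP sketch: spatial filters maximizing the variance ratio between
    two classes.  trials_*: arrays of shape (n_trials, n_channels, n_samples).
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    c_a, c_b = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance ...
    evals, evecs = np.linalg.eigh(c_a + c_b)
    whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # ... then diagonalize the whitened class-A covariance.
    _, rot = np.linalg.eigh(whitener @ c_a @ whitener.T)
    w = rot.T @ whitener                     # rows are spatial filters
    # Keep the filters at both ends of the eigenvalue spectrum.
    picks = list(range(n_filters // 2)) + list(range(-(n_filters // 2), 0))
    return w[picks]

rng = np.random.default_rng(1)
# Toy data: class A has high variance on channel 0, class B on channel 2.
a = rng.standard_normal((20, 3, 200)) * np.array([3.0, 1.0, 1.0])[None, :, None]
b = rng.standard_normal((20, 3, 200)) * np.array([1.0, 1.0, 3.0])[None, :, None]
w = csp_filters(a, b, n_filters=2)
print(w.shape)  # (2, 3): two spatial filters over three channels
```

The log-variance of each trial projected through these filters is then the usual CSP feature vector fed to a classifier.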
The Power Spectral Density category includes novel approaches such as the Weighted Difference of Power Spectral Density (WDPSD), as well as the Fast Fourier Transform (FFT), Welch's method, Welch periodograms, and the adaptive autoregressive (AAR) method. PSD methods analyze the characteristics of the EEG signal: converting the signal from the time domain to the frequency domain, in order to obtain a description of the signal's power distribution [83], results in the extraction of event-related de-synchronization (ERD) features [42,84].
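A minimal Welch-style PSD estimate, of the kind these methods build on, can be written directly with NumPy: average Hann-windowed periodograms over 50%-overlapping segments. This is a sketch of the general technique, not the exact estimator of any cited work.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """A minimal Welch estimate: average Hann-windowed periodograms
    over 50%-overlapping segments of length `nperseg`.
    """
    step = nperseg // 2
    window = np.hanning(nperseg)
    scale = fs * (window ** 2).sum()        # density normalization
    segs = [x[i:i + nperseg] for i in range(0, x.size - nperseg + 1, step)]
    psd = np.mean(
        [np.abs(np.fft.rfft(window * s)) ** 2 / scale for s in segs], axis=0)
    return np.fft.rfftfreq(nperseg, 1.0 / fs), psd

fs = 250.0
t = np.arange(0, 8.0, 1.0 / fs)
x = np.sin(2 * np.pi * 12.0 * t)            # a 12 Hz rhythm
freqs, psd = welch_psd(x, fs)
print(freqs[np.argmax(psd)])                # peak near 12 Hz
```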
Finally, the Wavelet Transform (WT) methods also include novel approaches such as the Flexible Analytic Wavelet Transform (FAWT), the Multivariate Empirical Wavelet Transform (MEWT), Automatic Independent Component Analysis with Wavelet Transform (AICA-WT), and the Successive Decomposition Index (SDI). Methods such as the Discrete Wavelet Transform (DWT) and the Continuous Wavelet Transform (CWT) are also included. The Wavelet Transform constitutes a mathematical method for extracting information from continuous data; it has a multi-resolution nature, and it enables the processing of non-stationary signals in the time-frequency domain [42].
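The multi-resolution nature of the DWT can be shown with the simplest wavelet, the Haar wavelet: each level splits the signal into low-pass approximation and high-pass detail coefficients, and cascading on the approximation gives the multi-level decomposition. This is an illustrative minimal implementation, not FAWT, MEWT, or SDI themselves.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation (low-pass) and
    detail (high-pass) coefficients, each half the input length."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_multilevel(x, levels):
    """Cascade the transform on the approximation, as in multi-resolution
    analysis: the details of each level act as time-frequency features."""
    details = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        details.append(d)
    return x, details

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, details = haar_multilevel(x, levels=2)
print(approx)  # [16. 12.]
```

Because the Haar transform is orthonormal, the signal energy is preserved across the approximation and detail coefficients, which is what makes such coefficients usable as features.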
As shown in Table 2, the majority of the works focus on CSP methods and their variants. In the work [85], the authors show that Common Spatial Patterns is one of the most popular approaches in terms of effectiveness. The CSP algorithm was first introduced in 1991 [86] as a method to extract the abnormal components of EEG signals [87]. In 2000 it was presented as a feature extraction method for classification [88]. This means that CSP is a method that has been evolving for 30 years. It constitutes a method by which handcrafted features are extracted effectively with high accuracy [40,45]. In addition, its variants aim to solve common CSP problems such as noise sensitivity or overfitting when the sample size is small [44,62].
Fifty-eight percent of the works implemented with CSP methods or their variants were published in 2019. Furthermore, Table 2 and Figure 5 show that the researchers aim to extract discriminative features from the spatial domain, indicating a trend of studies focusing on spatial features to increase performance and robustness.

4.2. Feature Domains Distribution

Figure 6 shows the distribution of feature domains in the present review based on the research conducted. The majority of the works focus on spatial features. This is because most of the works focus on CSP or on a combination of CSP and other methods.
The distribution of the features based on the adopted categorization of Wang, P. and Jiang, A. is shown in Figure 5. It is obvious that the majority of the features were extracted manually based on the time, frequency, time-frequency and spatial domain with the appropriate feature extraction methods.
Many papers examined in the present research combine several methods and approaches, aiming to extract features from different domains. The following percentages include all the different feature extraction methods, domains, and feature types identified in the research, even when several of them appeared in the same work.

4.3. Efficient Approaches

For the present research, the effectiveness of each approach is assessed with two basic metrics: classification accuracy and Cohen's Kappa value. If the Kappa value is high, the accuracy of the approach does not appear to be a product of chance; rather, there is agreement behind the accuracy score [64].

4.3.1. Accuracy

The feature extraction process has a crucial role in the accuracy of the system [89]. Table 3 shows the feature extraction methods, along with the classifiers they were combined with, and the accuracy percentages and Kappa values each approach achieved on specific datasets. The average classification accuracy is the mean accuracy over the total number of subjects. In the following works, the robustness metric (Kappa) was calculated by the authors of each paper. During the review, works with high accuracy percentages but without a calculated Kappa value were identified; since the robustness of those approaches could not be measured despite their high accuracy, they were not included in the following tables.
The majority of the works that had an experimental part, and not only a theoretical one, used a dataset from BCI Competition III or IV (www.bbci.de/competition/, accessed on 20 December 2021).
Most of the approaches were used on the BCI Competition III IVa dataset. In this motor imagery dataset, EEG actions corresponding to the movement of the right hand and foot were recorded. The records were extracted from five subjects, with 118 EEG channels in a 10/20 electrode configuration, at a 1000 Hz sampling frequency. At this point, it is worth noting that the researchers in [51] used the BCI Competition IV 2b dataset and achieved accuracy rates of 96.4% and 96.5% with SVM and bootstrap, respectively, with a Kappa value equal to 0.92. This dataset includes two motor imagery classes, right/left hand, exported from nine subjects; the sampling frequency was 250 Hz, and the EEG recording used three bipolar channels (C3, Cz, C4) and three EOG (electrooculogram) signals. EOGs are responsible for representing the noise of eye blinking. On the BCI IV 2a dataset, where the information was obtained from twenty-two EEG channels and three EOG channels from nine subjects, the work [45] achieved 79.7% accuracy and a 0.73 Kappa value for a 4-class problem. Additionally, in [64], an accuracy of 99.52% with a Kappa value of more than 0.9 was achieved on the BCI III IVb dataset, which contains two classes (left hand, foot) of one single subject and continuous data, and 97.5% on the BCI III V dataset, which contains three classes (left hand, right hand, word association) from three subjects, with a Kappa value equal to 0.978. Finally, dataset 1 from the BCI III competition was used, which contains two classes (tongue, left pinky) from one subject, with 64 ECoG channels. On this dataset, ref. [63] achieved 95.89% accuracy with a 0.9244 Kappa value. Additionally, the paper [90] achieved, on BCI III IVa, 99.35% accuracy and a 0.9869 Kappa value by combining the Continuous Wavelet Transform (CWT) with a Deep Convolutional Neural Network (DCNN)-based AlexNet model; this dataset includes 118 EEG channels and information from five subjects.
Table 4 shows the basic characteristics of each dataset.

4.3.2. High Robustness

In order to extract conclusions about the robustness of each method, the accuracy metric was not enough, as its value can vary across different sessions or participants, and a high value may have arisen by chance or solely for a specific subject. For this reason, the value of the Cohen Kappa coefficient (K) was used as the factor by which the degree of robustness of each method would be determined. The Kappa value is a metric that measures the inter-rater agreement for the qualitative items of a model. It is obtained by comparing the performance the model has, through its accuracy, with the random accuracy that the model could have. This statistical measure is considered more robust than other metrics because it accounts for the agreement that occurs by chance [51,91]. Based on a confusion matrix, the Kappa value is calculated by the following fraction:
Kappa = 2(TN · TP - FN · FP) / [(TN + FN)(FN + TP) + (FP + TP)(TN + FP)],

where TP, TN, FP, and FN are the true positive, true negative, false positive, and false negative counts, respectively.
In general, the Kappa value constitutes a metric that shows the robustness of an approach. A Kappa score above 0.8 is considered robust [51]; the higher the Kappa value above 0.8, the more agreement there is, and hence the more robustness. In the case where Kappa = 1, there is perfect agreement and a fully robust model. Accuracy by itself cannot prove that an approach is robust, because there is always a risk that high accuracy arose by chance.
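The fraction above can be checked with a few lines of code; the confusion-matrix counts below are an invented example.

```python
def cohen_kappa(tp, tn, fp, fn):
    """Cohen's Kappa for a binary confusion matrix, matching the
    fraction given above (observed vs. chance agreement)."""
    numerator = 2.0 * (tn * tp - fn * fp)
    denominator = (tn + fn) * (fn + tp) + (fp + tp) * (tn + fp)
    return numerator / denominator

# Hypothetical example over 100 trials: 40 true positives, 45 true
# negatives, 5 false positives, 10 false negatives -> accuracy 0.85.
print(cohen_kappa(tp=40, tn=45, fp=5, fn=10))  # 0.7
```

Note that despite the 85% accuracy, the resulting Kappa of 0.7 falls below the 0.8 threshold cited above, illustrating why accuracy alone does not establish robustness.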
For this reason, Table 5 shows the studies which achieved the highest accuracy and Kappa values during the research, using the BCI III 1/IVa/IVb/V and BCI IV 2a/2b datasets. As already mentioned above, the accuracy percentages and the Kappa-values constitute the mean values of the subjects or the time/frequency intervals that were separated in each work. Through the averages, the comparison of the methods in terms of their performance and robustness becomes more objective than presenting only the maximum values of a single subject or a single interval.
Figure 7 shows the feature extraction methods which achieved the most robust performance on each of the different datasets used in the research. The blue bars show the accuracy percentages (left axis), while the grey line shows the Kappa values (right axis). The dots on the grey line are placed over the accuracy bars of the corresponding approaches.
The Kappa values in [64] are represented by a graph for the BCI III IVb dataset and the values cannot be extracted with precision. For this reason, the above study is missing from Figure 7.

5. Discussion

As a first conclusion, it is noticed that the datasets used were based on non-invasive methods, except for the dataset with the ECoG channels. The researchers working in the field of MI focus on increasing the performance and robustness of non-invasive techniques, due to the ease of electrode placement, the low cost, the safety they offer the individual, and the more general convenience they provide in extracting information from the brain.
Table 2 shows that the majority of feature extraction methods are based on Common Spatial Patterns, including their variants. Without a doubt, CSP is the most common feature extraction technique in MI-BCIs, and many variants have emerged in order to improve its performance. The spatial filtering of CSP was designed to address two-class problems, but variants for multi-class problems have also been designed. In terms of robustness, Table 5 shows that one of the works which achieved a high classification accuracy (96.4% and 96.5%) is based on CSP, with a Kappa value equal to 0.92; this value shows the increased robustness of that approach. On the other hand, the literature shows that CSP methods are not always robust, mainly due to the presence of noise [62,92]. Additionally, CSP methods do not consider the location of the electrodes in spatial terms [93], and the increase of this type of method during 2019 led to the increase of spatial domain features. PSD methods, which cover the second largest percentage of the works studied (22.5%), also show high classification accuracy percentages and high Kappa values on ECoG-based datasets or when combined with deep learning methods. In general, CSP and PSD methods cover 70% of the total works, but the accuracy percentages and K-values they achieved are not as high as those of the wavelet methods, which prevailed in the majority of the different datasets. Furthermore, WT also achieved the most robust performance among all the works included in the present review, based on Table 3. Based on Table 5, which includes the most effective approaches on each dataset, three out of the six most effective approaches, according to Figure 7, belong to the Wavelet Transform family (MEWT, CWT, SDI). At this point, it should be noted that CWT is used for the conversion of the one-dimensional (1-D) EEG signal into two-dimensional (2-D) images, and then the feature extraction is performed in the DCNN-based AlexNet model.
The SDI feature extraction approach [94], which is based on multi-domain EEG features (time, frequency, time-frequency, and spatial) and was motivated by DWT [64], achieved high accuracy and Kappa values for 2-class and 3-class problems.
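To make the wavelet-style multi-band idea concrete, the sketch below computes log-energy features from successive Haar DWT sub-bands using only NumPy. This is a generic illustration of DWT-derived time-frequency features, not the authors' exact SDI formulation, and the function name is hypothetical.

```python
import numpy as np

def haar_dwt_features(signal, levels=4):
    """Log-energy of successive Haar DWT sub-bands.

    A generic stand-in for DWT-style multi-band features (NOT the
    exact SDI computation of the reviewed works).
    """
    x = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        if len(x) % 2:          # pad to even length before pairing samples
            x = np.append(x, 0.0)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass half-band
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass half-band
        feats.append(np.log(np.sum(detail ** 2) + 1e-12))
        x = approx
    feats.append(np.log(np.sum(x ** 2) + 1e-12))    # final approximation band
    return np.array(feats)
```

Each decomposition level halves the frequency band, so the resulting feature vector summarises the signal energy across progressively lower-frequency sub-bands.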
As Figure 7 shows, the majority of the most effective approaches, those achieving both high accuracy and high Kappa values, combine wavelet methods with deep learning. Comparing the approaches that achieved the highest scores on each dataset, WT methods were more effective on datasets based on EEG channels, while CSP methods were more effective on datasets combining bipolar EEG and EOG channels. PSD approaches achieved high accuracy and Kappa values on the ECoG channels-based dataset. With respect to the number of classes of each dataset, WT and CSP achieve robustly high accuracy rates in 3-class and 4-class problems, respectively.
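Since robustness in this review is judged jointly by accuracy and Cohen's Kappa, a direct way to compute Kappa from predicted and true labels is sketched below (plain NumPy; the function name is illustrative):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's Kappa: agreement between predicted and true labels,
    corrected for the agreement expected by chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    p_obs = np.mean(y_true == y_pred)  # observed agreement (accuracy)
    # Chance agreement from the marginal label frequencies.
    p_chance = sum(np.mean(y_true == c) * np.mean(y_pred == c)
                   for c in labels)
    return (p_obs - p_chance) / (1.0 - p_chance)
```

A Kappa of 1 indicates perfect agreement and 0 indicates chance-level agreement, which is why it complements raw accuracy on imbalanced MI datasets.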
From a complexity perspective, among the most effective approaches, ref. [64] used the SDI feature approach, which is based on DWT, combined with an FFNN. In [71], an FFNN was also used, but with MEWT as the feature extraction method. Both approaches combine variants of the Wavelet Transform with an FFNN. The authors of [64] mention that the SDI approach is less complex than the MEWT approach and also has a lower computational cost. The average training time of a single trial was 1.27 ms for the SDI approach, versus 22.2 ms for MEWT. The highest feature extraction time per subject for the SDI and MEWT approaches was 1.36 and 1.68 s, respectively. Since both approaches have the same authors, the hardware specifications were the same; more specifically, an M-5Y10c 0.8 GHz CPU with 8 GB of RAM was used. In [90], the method followed was the representation of the signals as images through STFT and CWT. Again, this method uses a variant of the Wavelet Transform to achieve high accuracy and robustness. The images are then fed into the AlexNet-based DCNN model, which performs both feature extraction and classification. The total training time was almost 41 min. In terms of real-time implementation, the hardware requirements will be higher, since deep learning methods are computationally intensive and time-consuming. On the other hand, CSP approaches, such as [45,51], are not very effective in real-time applications [71]. Additionally, ref. [63] used the ECoG dataset, which means that the complexity is lower, since the noise is reduced in these semi-invasive techniques.
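The signal-to-image step behind the STFT/CWT pipeline of [90] can be approximated with a plain-NumPy STFT. The window length and hop size below are illustrative assumptions, and the AlexNet classification stage is omitted.

```python
import numpy as np

def stft_image(signal, win=64, hop=16):
    """Convert a 1-D EEG trial into a 2-D log-magnitude time-frequency
    image, in the spirit of the STFT-to-image pipeline (window and hop
    are illustrative, not the reviewed works' exact settings)."""
    x = np.asarray(signal, dtype=float)
    window = np.hanning(win)
    frames = [x[i:i + win] * window
              for i in range(0, len(x) - win + 1, hop)]
    # rfft per frame, then transpose to (frequency bins, time frames).
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T
    img = np.log(spec + 1e-9)
    # Min-max normalise to [0, 1] so the map can be saved as an image
    # or fed to a CNN.
    return (img - img.min()) / (img.max() - img.min())
```

The resulting 2-D array plays the role of the input image; in [90] such maps are then passed to an AlexNet-based DCNN for feature extraction and classification.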

6. Conclusions

This review explored robust feature extraction methods in the field of Motor Imagery-based Brain Computer Interfaces, based on the works published over the last five years. The research showed an increase in CSP-based methods in 2019, but 87.5% of the WT-based approaches were published from 2019 onward, which indicates that the trend toward CSP methods is gradually being replaced by the Wavelet Transform. To develop a robust feature extraction method, researchers focused on discriminative features; more specifically, the spatial and time-frequency domains were the reference point for the majority of works. The most used datasets were variants from BCI Competitions III and IV for non-invasive channels. Whether an approach is effectively robust was determined based on its accuracy rate and Kappa value. On the majority of datasets, the approaches with the highest accuracy percentages and the highest Kappa values were those combining the Wavelet Transform, or its variants, with deep learning methods. Wavelet Transform methods constitute the majority of the approaches that achieved the highest robust accuracy among the works included in the present review.
The research led to the conclusion that WT achieved higher robust accuracy than the most common feature extraction methods, CSP and PSD, on three out of the six datasets used. PSD methods achieved high performance on a 2-class problem in which the signals came from a semi-invasive technique (ECoG). Additionally, WT methods are highly effective on 2-class and 3-class problems on a dataset whose signals were acquired from 118 EEG channels. Furthermore, CSP methods are robustly effective on the 4-class and 2-class problems, as well as on bipolar EEG and EOG channels.
Based on the results, the conclusion that emerges is that the most robust technique differs depending on the number of classes of each problem and the way the signals were acquired. CSP methods achieved high robust accuracy on datasets in which bipolar EEG and EOG channels were used and are very effective on 4-class problems, while WT methods are very effective on 3-class problems. WT methods are also robust and effective on 2-class problems in datasets in which the information is acquired exclusively from EEG channels. In addition, PSD has high robust accuracy on semi-invasive-based datasets.
With the rise of deep learning in artificial intelligence, and in Motor Imagery in particular, the number of works on BCIs will increase in the near future. As mentioned above, high accuracy percentages and robustness are associated with approaches based on deep learning and wavelets. Interesting future research would be the implementation of a robust deep learning-based feature extraction method in online applications, in order to study the deviations in terms of accuracy and robustness. Besides the robust feature extraction approach, such research should also consider the hardware requirements, since deep learning methods face hardware-related challenges in online applications. It is also very important for both the hardware requirements and the deep learning approach to take into account the response time of the Motor Imagery system, so that it works properly in real-time implementations.

Author Contributions

Conceptualization, S.S.M. and G.A.P.; methodology, S.S.M.; investigation, S.S.M.; resources, S.S.M.; data curation, S.S.M.; writing—original draft preparation, S.S.M.; writing—review and editing, S.S.M. and G.A.P.; visualization, S.S.M.; supervision, G.A.P.; project administration, G.A.P. All authors have read and agreed to the published version of the manuscript.

Funding

Not applicable.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is presented in the main text.

Acknowledgments

This work was supported by the MPhil program “Advanced Technologies in Informatics and Computers”, hosted by the Department of Computer Science, International Hellenic University, Greece.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAR: Adaptive Autoregressive
AICA-WT: Automatic Independent Component Analysis with Wavelet Transform
AR: Autoregressive
BCI: Brain Computer Interface
CNN: Convolution Neural Network
CSP: Common Spatial Patterns
CWT: Continuous Wavelet Transform
DCNN: Deep Convolution Neural Network
dFC: dynamic Functional Connectivity
DWT: Discrete Wavelet Transform
ECoG: Electrocorticography
EEG: Electroencephalography
EMD: Empirical Mode Decomposition
EOG: Electrooculogram
ERD: Event-Related Desynchronization
ERP: Event-Related Potentials
ERS: Event-Related Synchronization
EWT: Empirical Wavelet Transform
FAWT: Flexible Analytic Wavelet Transform
FB-TRCSP: Frequency Bank Tikhonov Regularization Common Spatial Patterns
FBCSP: Frequency Bank Common Spatial Patterns
FDCSP: Frequency Domain Common Spatial Patterns
FDF: Frequency Domain Features
FFNN: Feedforward Neural Network
FFT: Fast Fourier Transform
HHT: Hilbert-Huang Transform
kNN: k-Nearest Neighbors
LCD: Local Characteristic-scale Decomposition
Log-BP: Logarithmic Band Power
MEG: Magnetoencephalography
MEWT: Multivariate Empirical Wavelet Transform
MFCC: Mel-Frequency Cepstral Coefficients
MI: Motor Imagery
PSD: Power Spectral Density
RF: Random Forest
RFB: Reactive Frequency Band
SDI: Successive Decomposition Index
SGRM: Sparse Group Representation Model
SRDA: Spectral Regression Discriminant Analysis
SSAEP: Steady State Auditory Evoked Potentials
SSSEP: Steady State Somatosensory Evoked Potentials
SSVEP: Steady State Visually Evoked Potentials
STFT: Short-Time Fourier Transform
SVM: Support Vector Machine
TDF: Time Domain Features
TFDF: Time-Frequency Domain Features
TFP: Transfer Function Perturbation
WDPSD: Weighted Difference of Power Spectral Density
WT: Wavelet Transform

References

1. Kaushal, G.; Singh, A.; Jain, V. Better approach for denoising EEG signals. In Proceedings of the 2016 5th International Conference on Wireless Networks and Embedded Systems (WECON), Rajpura, India, 14–16 October 2016; pp. 1–3.
2. Freitas, D.R.; Inocêncio, A.V.; Lins, L.T.; Santos, E.A.; Benedetti, M.A. A real-time embedded system design for ERD/ERS measurement on EEG-based brain-computer interfaces. In Proceedings of the XXVI Brazilian Congress on Biomedical Engineering, Armação de Buzios, Brazil, 21–25 October 2018; pp. 25–33.
3. Abhang, P.A.; Gawali, B.W.; Mehrotra, S.C. Technological basics of EEG recording and operation of apparatus. In Introduction to EEG-and Speech-Based Emotion Recognition; Academic Press: Cambridge, MA, USA, 2016; pp. 19–50.
4. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279.
5. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain–computer interfaces for communication and rehabilitation. Nat. Rev. Neurol. 2016, 12, 513–525.
6. Rashid, N.; Iqbal, J.; Javed, A.; Tiwana, M.I.; Khan, U.S. Design of embedded system for multivariate classification of finger and thumb movements using EEG signals for control of upper limb prosthesis. BioMed Res. Int. 2018, 2018, 2695106.
7. Ding, S.; Yuan, Z.; An, P.; Xue, G.; Sun, W.; Zhao, J. Cascaded convolutional neural network with attention mechanism for mobile eeg-based driver drowsiness detection system. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; pp. 1457–1464.
8. Sawan, M.; Salam, M.T.; Le Lan, J.; Kassab, A.; Gélinas, S.; Vannasing, P.; Lesage, F.; Lassonde, M.; Nguyen, D.K. Wireless recording systems: From noninvasive EEG-NIRS to invasive EEG devices. IEEE Trans. Biomed. Circuits Syst. 2013, 7, 186–195.
9. Dan, J.; Vandendriessche, B.; Paesschen, W.V.; Weckhuysen, D.; Bertrand, A. Computationally-efficient algorithm for real-time absence seizure detection in wearable electroencephalography. Int. J. Neural Syst. 2020, 30, 2050035.
10. Nijholt, A.; Tan, D.; Allison, B.; del R. Milan, J.; Graimann, B. Brain-computer interfaces for HCI and games. In Proceedings of the CHI ’08: CHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; pp. 3925–3928.
11. Kovari, A.; Katona, J.; Heldal, I.; Helgesen, C.; Costescu, C.; Rosan, A.; Hathazi, A.; Thill, S.; Demeter, R. Examination of gaze fixations recorded during the trail making test. In Proceedings of the 2019 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Naples, Italy, 23–25 October 2019; pp. 319–324.
12. Kovari, A.; Katona, J.; Costescu, C. Quantitative analysis of relationship between visual attention and eye-hand coordination. Acta Polytech. Hung. 2020, 17, 77–95.
13. Kovari, A.; Katona, J.; Costescu, C. Evaluation of eye-movement metrics in a software debbuging task using gp3 eye tracker. Acta Polytech. Hung. 2020, 17, 57–76.
14. Kovari, A. Study of Algorithmic Problem-Solving and Executive Function. Acta Polytech. Hung. 2020, 17, 241–256.
15. Katona, J.; Ujbanyi, T.; Sziladi, G.; Kovari, A. Examine the effect of different web-based media on human brain waves. In Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Debrecen, Hungary, 11–14 September 2017; pp. 000407–000412.
16. Katona, J.; Ujbanyi, T.; Sziladi, G.; Kovari, A. Electroencephalogram-based brain-computer interface for internet of robotic things. In Cognitive Infocommunications, Theory and Applications; Springer: Cham, Switzerland, 2019; pp. 253–275.
17. Panoulas, K.J.; Hadjileontiadis, L.J.; Panas, S.M. Brain-computer interface (BCI): Types, processing perspectives and applications. In Multimedia Services in Intelligent Environments; Springer: Cham, Switzerland, 2010; pp. 299–321.
18. Wolpaw, J.R.; Birbaumer, N.; Heetderks, W.J.; McFarland, D.J.; Peckham, P.H.; Schalk, G.; Donchin, E.; Quatrano, L.A.; Robinson, C.J.; Vaughan, T.M.; et al. Brain-computer interface technology: A review of the first international meeting. IEEE Trans. Rehabil. Eng. 2000, 8, 164–173.
19. Tangermann, M.; Krauledat, M.; Grzeska, K.; Sagebaum, M.; Blankertz, B.; Vidaurre, C.; Müller, K.R. Playing pinball with non-invasive BCI. In Proceedings of the Advances in Neural Information Processing Systems 21 (NIPS 2008), Vancouver, BC, Canada, 8–11 December 2008; pp. 1641–1648.
20. Allison, B.Z.; McFarland, D.J.; Schalk, G.; Zheng, S.D.; Jackson, M.M.; Wolpaw, J.R. Towards an independent brain–computer interface using steady state visual evoked potentials. Clin. Neurophysiol. 2008, 119, 399–408.
21. Friman, O.; Luth, T.; Volosyak, I.; Graser, A. Spelling with steady-state visual evoked potentials. In Proceedings of the 2007 3rd International IEEE/EMBS Conference on Neural Engineering, Kohala Coast, HI, USA, 2–5 May 2007; pp. 354–357.
22. Kotlewska, I.; Wójcik, M.; Nowicka, M.; Marczak, K.; Nowicka, A. Present and past selves: A steady-state visual evoked potentials approach to self-face processing. Sci. Rep. 2017, 7, 16438.
23. George, O.; Smith, R.; Madiraju, P.; Yahyasoltani, N.; Ahamed, S.I. Motor Imagery: A review of existing techniques, challenges and potentials. In Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 12–16 July 2021; pp. 1893–1899.
24. Bansal, D.; Mahajan, R. Chapter 2: EEG-based Brain-Computer Interfacing (BCI). In EEG-Based Brain-Computer Interfaces: Cognitive Analysis and Control; Academic Press: Cambridge, MA, USA, 2019; pp. 21–71.
25. Fernandez-Fraga, S.; Aceves-Fernandez, M.; Pedraza-Ortega, J. EEG data collection using visual evoked, steady state visual evoked and motor image task, designed to Brain Computer Interfaces (BCI) development. Data Brief 2019, 25, 103871.
26. Lotze, M.; Halsband, U. Motor imagery. J. Physiol. 2006, 99, 386–395.
27. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001.
28. Pfurtscheller, G.; Neuper, C. Motor imagery activates primary sensorimotor area in humans. Neurosci. Lett. 1997, 239, 65–68.
29. Shi, T.; Ren, L.; Cui, W. Feature extraction of brain–computer interface electroencephalogram based on motor imagery. IEEE Sensors J. 2019, 20, 11787–11794.
30. Aggarwal, S.; Chugh, N. Signal processing techniques for motor imagery brain computer interface: A review. Array 2019, 1, 100003.
31. Hammon, P.S.; de Sa, V.R. Preprocessing and meta-classification for brain-computer interfaces. IEEE Trans. Biomed. Eng. 2007, 54, 518–525.
32. Lotte, F. A tutorial on EEG signal-processing techniques for mental-state recognition in brain–computer interfaces. In Guide to Brain-Computer Music Interfacing; Springer: London, UK, 2014; pp. 133–161.
33. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain-computer communication. Proc. IEEE 2001, 89, 1123–1134.
34. Alimardani, M.; Nishio, S.; Ishiguro, H. Chapter 5: Brain-computer interface and motor imagery training: The role of visual feedback and embodiment. In Evolving BCI Therapy-Engaging Brain State Dynamics; IntechOpen: London, UK, 2018; Volume 2, p. 64.
35. Fleury, M.; Lioi, G.; Barillot, C.; Lécuyer, A. A survey on the use of haptic feedback for brain-computer interfaces and neurofeedback. Front. Neurosci. 2020, 14, 528.
36. Ren, W.; Han, M.; Wang, J.; Wang, D.; Li, T. Efficient feature extraction framework for EEG signals classification. In Proceedings of the 2016 Seventh International Conference on Intelligent Control and Information Processing (ICICIP), Siem Reap, Cambodia, 1–4 December 2016; pp. 167–172.
37. Pahuja, S.; Veer, K. Recent Approaches on Classification and Feature Extraction of EEG Signal: A Review. Robotica 2021, 77–101.
38. Boonyakitanont, P.; Lek-Uthai, A.; Chomtho, K.; Songsiri, J. A review of feature extraction and performance evaluation in epileptic seizure detection using EEG. Biomed. Signal Process. Control. 2020, 57, 101702.
39. Kanimozhi, M.; Roselin, R. Statistical Feature Extraction and Classification using Machine Learning Techniques in Brain-Computer Interface. Int. J. Innov. Technol. Explor. Eng. 2020, 9, 1754–1758.
40. Wang, P.; Jiang, A.; Liu, X.; Shang, J.; Zhang, L. LSTM-based EEG classification in motor imagery tasks. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 2086–2095.
41. Al-Saegh, A.; Dawwd, S.A.; Abdul-Jabbar, J.M. Deep learning for motor imagery EEG-based classification: A review. Biomed. Signal Process. Control. 2021, 63, 102172.
42. Iftikhar, M.; Khan, S.A.; Hassan, A. A survey of deep learning and traditional approaches for EEG signal processing and classification. In Proceedings of the 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 1–3 November 2018; pp. 395–400.
43. Pawar, D.; Dhage, S. Feature Extraction Methods for Electroencephalography based Brain-Computer Interface: A Review. IAENG Int. J. Comput. Sci. 2020, 47, 501–515.
44. Zhang, R.; Xiao, X.; Liu, Z.; Jiang, W.; Li, J.; Cao, Y.; Ren, J.; Jiang, D.; Cui, L. A new motor imagery EEG classification method FB-TRCSP+ RF based on CSP and random forest. IEEE Access 2018, 6, 44944–44950.
45. Ai, Q.; Chen, A.; Chen, K.; Liu, Q.; Zhou, T.; Xin, S.; Ji, Z. Feature extraction of four-class motor imagery EEG signals based on functional brain network. J. Neural Eng. 2019, 16, 026032.
46. Razzak, I.; Hameed, I.A.; Xu, G. Robust sparse representation and multiclass support matrix machines for the classification of motor imagery EEG signals. IEEE J. Transl. Eng. Health Med. 2019, 7, 2168–2372.
47. Jiao, Y.; Zhang, Y.; Chen, X.; Yin, E.; Jin, J.; Wang, X.; Cichocki, A. Sparse group representation model for motor imagery EEG classification. IEEE J. Biomed. Health Inform. 2018, 23, 631–641.
48. Tayeb, Z.; Fedjaev, J.; Ghaboosi, N.; Richter, C.; Everding, L.; Qu, X.; Wu, Y.; Cheng, G.; Conradt, J. Validating deep neural networks for online decoding of motor imagery movements from EEG signals. Sensors 2019, 19, 210.
49. Thomas, K.P.; Robinson, N.; Vinod, A.P. Utilizing subject-specific discriminative EEG features for classification of motor imagery directions. In Proceedings of the 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST), Morioka, Japan, 23–25 October 2019; pp. 1–5.
50. Zhang, Y.; Nam, C.S.; Zhou, G.; Jin, J.; Wang, X.; Cichocki, A. Temporally constrained sparse group spatial patterns for motor imagery BCI. IEEE Trans. Cybern. 2018, 49, 3322–3332.
51. Khakpour, M. The Improvement of a Brain Computer Interface Based on EEG Signals. Front. Biomed. Technol. 2020, 7, 259–265.
52. Liu, Q.; Zheng, W.; Chen, K.; Ma, L.; Ai, Q. Online detection of class-imbalanced error-related potentials evoked by motor imagery. J. Neural Eng. 2021, 18, 046032.
53. Azab, A.M.; Mihaylova, L.; Ahmadi, H.; Arvaneh, M. Robust common spatial patterns estimation using dynamic time warping to improve bci systems. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 3897–3901.
54. Tang, X.; Wang, T.; Du, Y.; Dai, Y. Motor imagery EEG recognition with KNN-based smooth auto-encoder. Artif. Intell. Med. 2019, 101, 101747.
55. Miao, Y.; Yin, F.; Zuo, C.; Wang, X.; Jin, J. Improved RCSP and AdaBoost-based classification for motor-imagery BCI. In Proceedings of the 2019 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Tianjin, China, 14–16 June 2019; pp. 1–5.
56. Wang, J.; Feng, Z.; Lu, N. Feature extraction by common spatial pattern in frequency domain for motor imagery tasks classification. In Proceedings of the 2017 29th Chinese Control and Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 5883–5888.
57. Gu, J.; Wei, M.; Guo, Y.; Wang, H. Common Spatial Pattern with L21-Norm. Neural Process. Lett. 2021, 53, 3619–3638.
58. Hossain, I.; Hettiarachchi, I. Calibration time reduction for motor imagery-based BCI using batch mode active learning. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
59. Hossain, I.; Khosravi, A.; Hettiarachchi, I.; Nahavandi, S. Batch mode query by committee for motor imagery-based BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 27, 13–21.
60. Peterson, V.; Wyser, D.; Lambercy, O.; Spies, R.; Gassert, R. A penalized time-frequency band feature selection and classification procedure for improved motor intention decoding in multichannel EEG. J. Neural Eng. 2019, 16, 016019.
61. Jiang, Q.; Zhang, Y.; Ge, G.; Xie, Z. An adaptive csp and clustering classification for online motor imagery EEG. IEEE Access 2020, 8, 156117–156128.
62. Samuel, O.W.; Asogbon, M.G.; Geng, Y.; Pirbhulal, S.; Mzurikwao, D.; Chen, S.; Fang, P.; Li, G. Determining the optimal window parameters for accurate and reliable decoding of multiple classes of upper limb motor imagery tasks. In Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, China, 25–27 October 2018; pp. 422–425.
63. Islam, M.N.; Sulaiman, N.; Rashid, M.; Bari, B.S.; Hasan, M.J.; Mustafa, M.; Jadin, M.S. Empirical mode decomposition coupled with fast fourier transform based feature extraction method for motor imagery tasks classification. In Proceedings of the 2020 IEEE 10th International Conference on System Engineering and Technology (ICSET), Shah Alam, Malaysia, 9 November 2020; pp. 256–261.
64. Sadiq, M.T.; Yu, X.; Yuan, Z.; Aziz, M.Z. Identification of motor and mental imagery EEG in two and multiclass subject-dependent tasks using successive decomposition index. Sensors 2020, 20, 5283.
65. Jana, G.C.; Shukla, S.; Srivastava, D.; Agrawal, A. Performance estimation and analysis over the supervised learning approaches for motor imagery EEG signals classification. In Intelligent Computing and Applications; Springer: Cham, Switzerland, 2021; pp. 125–141.
66. Kim, C.; Sun, J.; Liu, D.; Wang, Q.; Paek, S. An effective feature extraction method by power spectral density of EEG signal for 2-class motor imagery-based BCI. Med. Biol. Eng. Comput. 2018, 56, 1645–1658.
67. Ortiz-Echeverri, C.; Paredes, O.; Salazar-Colores, J.S.; Rodríguez-Reséndiz, J.; Romo-Vázquez, R. A Comparative Study of Time and Frequency Features for EEG Classification. In Proceedings of the VIII Latin American Conference on Biomedical Engineering and XLII National Conference on Biomedical Engineering, Cancún, México, 2–5 October 2019; pp. 91–97.
68. Chu, Y.; Zhao, X.; Zou, Y.; Xu, W.; Han, J.; Zhao, Y. A decoding scheme for incomplete motor imagery EEG with deep belief network. Front. Neurosci. 2018, 12, 680.
69. Meziani, A.; Djouani, K.; Medkour, T.; Chibani, A. A Lasso quantile periodogram based feature extraction for EEG-based motor imagery. J. Neurosci. Methods 2019, 328, 108434.
70. Chaudhary, S.; Taran, S.; Bajaj, V.; Siuly, S. A flexible analytic wavelet transform based approach for motor-imagery tasks classification in BCI applications. Comput. Methods Programs Biomed. 2020, 187, 105325.
71. Sadiq, M.T.; Yu, X.; Yuan, Z.; Aziz, M.Z.; Siuly, S.; Ding, W. A matrix determinant feature extraction approach for decoding motor and mental imagery EEG in subject specific tasks. IEEE Trans. Cogn. Dev. Syst. 2020, 1.
72. Al-Qazzaz, N.K.; Alyasseri, Z.A.A.; Abdulkareem, K.H.; Ali, N.S.; Al-Mhiqani, M.N.; Guger, C. EEG feature fusion for motor imagery: A new robust framework towards stroke patients rehabilitation. Comput. Biol. Med. 2021, 137, 104799.
73. Zhang, C.; Kim, Y.K.; Eskandarian, A. EEG-inception: An accurate and robust end-to-end neural network for EEG-based motor imagery classification. J. Neural Eng. 2021, 18, 046014.
74. Amin, S.U.; Alsulaiman, M.; Muhammad, G.; Bencherif, M.A.; Hossain, M.S. Multilevel weighted feature fusion using convolutional neural networks for EEG motor imagery classification. IEEE Access 2019, 7, 18940–18950.
75. Tang, X.; Zhang, N.; Zhou, J.; Liu, Q. Hidden-layer visible deep stacking network optimized by PSO for motor imagery EEG recognition. Neurocomputing 2017, 234, 1–10.
76. Samanta, K.; Chatterjee, S.; Bose, R. Cross-subject motor imagery tasks EEG signal classification employing multiplex weighted visibility graph and deep feature extraction. IEEE Sensors Lett. 2019, 4, 7000104.
77. Jagadish, B.; Rajalakshmi, P. A novel feature extraction framework for four class motor imagery classification using log determinant regularized riemannian manifold. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 6754–6757.
78. Miao, M.; Wang, A.; Zeng, H. Designing robust spatial filter for motor imagery electroencephalography signal classification in brain-computer interface systems. In Fuzzy Systems and Data Mining III: Proceedings of FSDM 2017 (Frontiers in Artificial Intelligence and Applications); IOS Press: Amsterdam, The Netherlands, 2017; pp. 189–196.
79. Wang, P.; He, J.; Lan, W.; Yang, H.; Leng, Y.; Wang, R.; Iramina, K.; Ge, S. A hybrid EEG-fNIRS brain-computer interface based on dynamic functional connectivity and long short-term memory. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021; Volume 5, pp. 2214–2219.
80. Seraj, E.; Karimzadeh, F. Improved detection rate in motor imagery based bci systems using combination of robust analytic phase and envelope features. In Proceedings of the 2017 Iranian Conference on Electrical Engineering (ICEE), Tehran, Iran, 2–4 May 2017; pp. 24–28.
81. Bhattacharyya, S.; Mukul, M.K. Reactive frequency band-based real-time motor imagery classification. Int. J. Intell. Syst. Technol. Appl. 2018, 17, 136–152.
82. Trigui, O.; Zouch, W.; Messaoud, M.B. Hilbert-Huang transform and Welch’s method for motor imagery based brain computer interface. Int. J. Cogn. Inform. Nat. Intell. 2017, 11, 47–68.
83. Vega-Escobar, L.; Castro-Ospina, A.; Duque-Muñoz, L. Feature extraction schemes for BCI systems. In Proceedings of the 2015 20th Symposium on Signal Processing, Images and Computer Vision (STSIVA), Bogota, Colombia, 2–4 September 2015; pp. 1–6.
84. Hong, J.; Qin, X.; Li, J.; Niu, J.; Wang, W. Signal processing algorithms for motor imagery brain-computer interface: State of the art. J. Intell. Fuzzy Syst. 2018, 35, 6405–6419.
85. Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain–computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005.
86. Koles, Z.J. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalogr. Clin. Neurophysiol. 1991, 79, 440–447.
87. Reuderink, B.; Poel, M. Robustness of the Common Spatial Patterns Algorithm in the BCI-Pipeline. Technical Report, 2008. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.216.1456&rep=rep1&type=pdf (accessed on 23 March 2022).
88. Ramoser, H.; Muller-Gerking, J.; Pfurtscheller, G. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans. Rehabil. Eng. 2000, 8, 441–446.
89. Mitsuhashi, T. Impact of feature extraction to accuracy of machine learning based hotspot detection. In Proceedings of the International Society for Optics and Photonics, Monterey, CA, USA, 11–14 September 2017; Volume 10451, p. 104510C.
90. Chaudhary, S.; Taran, S.; Bajaj, V.; Sengur, A. Convolutional neural network based approach towards motor imagery tasks EEG signals classification. IEEE Sens. J. 2019, 19, 4494–4500.
91. Vieira, S.M.; Kaymak, U.; Sousa, J.M. Cohen’s kappa coefficient as a performance measure for feature selection. In Proceedings of the International Conference on Fuzzy Systems, Barcelona, Spain, 18–23 July 2010; pp. 1–8.
92. Grosse-Wentrup, M.; Liefhold, C.; Gramann, K.; Buss, M. Beamforming in noninvasive brain–computer interfaces. IEEE Trans. Biomed. Eng. 2009, 56, 1209–1219.
93. Huang, G.; Liu, G.; Meng, J.; Zhang, D.; Zhu, X. Model based generalization analysis of common spatial pattern in brain computer interfaces. Cogn. Neurodynamics 2010, 4, 217–223.
94. Raghu, S.; Sriraam, N.; Rao, S.V.; Hegde, A.S.; Kubben, P.L. Automated detection of epileptic seizures using successive decomposition index and support vector machine classifier in long-term EEG. Neural Comput. Appl. 2020, 32, 8965–8984.
Figure 1. BCI basic architecture.
Figure 2. Adopted PRISMA flow diagram 2020.
Figure 3. Bibliographic coupling analysis based on the countries.
Figure 4. Papers per year.
Figure 5. Feature types.
Figure 6. Feature domains.
Figure 7. Accuracy—Kappa values.
Table 1. Brainwaves.

| Brainwave | Frequency | Amplitude |
|---|---|---|
| Delta (δ) | ≤4 Hz | 100 μV |
| Theta (θ) | 4–8 Hz | <100 μV |
| Alpha (α) | 8–13 Hz | <50 μV |
| Beta (β) | 13–30 Hz | <30 μV |
| Gamma (γ) | ≥30 Hz | ≤10 μV |
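The frequency bands in Table 1 are what band-power feature extractors operate on. A minimal sketch of locating the dominant band of a signal via the FFT power spectrum follows; the lower delta edge (0.5 Hz, to exclude the DC component) and the gamma upper cap (45 Hz, kept below the Nyquist frequency) are assumptions of this sketch, not values from the table.

```python
import numpy as np

# Brainwave bands from Table 1 (Hz). The 0.5 Hz delta floor and the
# 45 Hz gamma cap are simplifications assumed for this sketch.
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 45.0),
}

def dominant_band(signal, fs):
    """Return the name of the band holding the most spectral power."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band_power = {
        name: power[(freqs >= lo) & (freqs < hi)].sum()
        for name, (lo, hi) in BANDS.items()
    }
    return max(band_power, key=band_power.get)

# Example: a 10 Hz sinusoid sampled at 128 Hz falls in the alpha band.
fs = 128
t = np.arange(0, 2, 1.0 / fs)
print(dominant_band(np.sin(2 * np.pi * 10 * t), fs))  # alpha
```

In practice, MI-BCI pipelines compute such band powers per channel and per trial (often with Welch's method rather than a raw FFT) before feeding them to a classifier.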
Table 2. Results.

| Method | Citations | Percentage |
|---|---|---|
| CSP-based | [40,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61] | 44.26% |
| PSD methods | [52,62,63,64,65,66,67,68,69] | 21.42% |
| Wavelet Transform-based | [48,53,54,64,70,71,72] | 16.65% |
| Deep learning | [73,74,75,76] | 9.52% |
| Riemannian | [58,59,77] | 7.14% |
| Boltzmann | [68,75] | 4.76% |
| LCD | [45,78] | 4.76% |
| dFC | [79] | 2.38% |
| MFCC | [67] | 2.38% |
| log-BP | [48] | 2.38% |
| TPF | [80] | 2.38% |
| RFB | [81] | 2.38% |
| HHT | [82] | 2.38% |
| EMD | [63] | 2.38% |
Table 3. Approaches with the highest accuracy.

| Feature Extraction | Classifier | Kappa | Avg. Accuracy | Dataset | Work |
|---|---|---|---|---|---|
| EMD-FFT | SVM | 0.9244 | 95.89% | BCI III Dataset 1 | [63] |
| RJSPCA | SVM-based | 0.916 | 78% | BCI III Dataset 1 | [46] |
| MEWT | FFNN | 0.95 | 99.55% | BCI III IVa | [71] |
| CWT | DCNN | 0.9869 | 99.35% | BCI III IVa | [90] |
| STFT | DCNN | 0.9798 | 98.7% | BCI III IVa | [90] |
| SDI | FFNN | 0.9693 | 97.46% | BCI III IVa | [64] |
| SDI | SVM | 0.915 | 93.05% | BCI III IVa | [64] |
| FAWT | Subspace kNN | 0.917 | 95.97% | BCI III IVa | [70] |
| CSP | SGRM | 0.54 | 77.7% | BCI III IVa | [47] |
| MEWT | FFNN | 0.95 | 99.52% | BCI III IVb | [71] |
| SDI | FFNN | 0.978 | 97.5% | BCI III V | [64] |
| MEWT | FFNN | 0.894 | 91.8% | BCI III V | [71] |
| CNN | LSTM | 0.55 | 77.44% | BCI IV 2a | [73] |
| CSP/LCD | SRDA | 0.73 | 79.7% | BCI IV 2a | [45] |
| CSP | SVM | 0.92 | 96.4% | BCI IV 2b | [51] |
| CSP | Bootstrap | 0.92 | 96.5% | BCI IV 2b | [51] |
| CNN | LSTM | 0.544 | 65.88% | BCI IV 2b | [73] |
| CSP | SGRM | 0.57 | 78.2% | BCI IV 2b | [47] |
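The Kappa column in Table 3 reports Cohen's kappa coefficient [91], which measures classifier agreement beyond what chance alone would produce. A minimal sketch of computing it from integer-encoded labels (the function name and interface are illustrative, not from any cited work):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa: agreement between labels and predictions beyond chance."""
    # Build the confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement (accuracy)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)
```

A kappa of 1 indicates perfect agreement and 0 indicates chance-level performance, which is why kappa is a fairer cross-dataset measure than raw accuracy when class counts differ.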
Table 4. Datasets.

| Dataset | Channels | Classes | Subjects |
|---|---|---|---|
| BCI III Dataset 1 | 64 ECoG | 2 | Continuous EEG |
| BCI III IVa | 118 EEG | 2 | 5 |
| BCI III IVb | 118 EEG | 2 | Continuous EEG |
| BCI III V | 32 EEG | 3 | Continuous EEG |
| BCI IV 2a | 22 EEG/3 EOG | 4 | 9 |
| BCI IV 2b | 3 bipolar EEG/3 EOG | 2 | 9 |
Table 5. Highest effectiveness per dataset.

| Domain | Classes | Avg. Accuracy | K-Value | Dataset | Work |
|---|---|---|---|---|---|
| Frequency domain | 2 | 95.89% | 0.9244 | BCI III Dataset 1 | [63] |
| Time-Frequency domain | 2 | 99.35% | 0.987 | BCI III IVa | [90] |
| Time-Frequency domain | 2 | 99.52% | 0.95 | BCI III IVb | [71] |
| Multi-domain | 3 | 97.5% | 0.978 | BCI III V | [64] |
| Spatial domain | 4 | 79.7% | 0.73 | BCI IV 2a | [45] |
| Spatial domain | 2 | 96.5% | 0.92 | BCI IV 2b | [51] |
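The accuracy and K-value columns of Table 5 are consistent with each other: under the simplifying assumption of balanced classes, the chance-agreement term of Cohen's kappa reduces to 1/k for a k-class task, giving a direct conversion from accuracy to an approximate kappa. The following sketch (the function name is illustrative) reproduces, for example, the BCI IV 2a row, where 79.7% accuracy on 4 classes corresponds to the reported kappa of 0.73:

```python
def kappa_from_accuracy(acc, n_classes):
    """Approximate Cohen's kappa assuming balanced classes (chance = 1/k)."""
    chance = 1.0 / n_classes
    return (acc - chance) / (1.0 - chance)

# BCI IV 2a row of Table 5: 4 classes, 79.7% average accuracy.
print(round(kappa_from_accuracy(0.797, 4), 2))  # 0.73
```

This also explains why the 4-class spatial-domain entry can have a lower accuracy yet a kappa comparable in meaning to the 2-class entries: chance level is 25% rather than 50%.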
Share and Cite

Moumgiakmas, S.S.; Papakostas, G.A. Robustly Effective Approaches on Motor Imagery-Based Brain Computer Interfaces. Computers 2022, 11, 61. https://0-doi-org.brum.beds.ac.uk/10.3390/computers11050061