Article

Quaternion-Based Signal Analysis for Motor Imagery Classification from Electroencephalographic Signals

by Patricia Batres-Mendoza 1, Carlos R. Montoro-Sanjose 2,3, Erick I. Guerra-Hernandez 1, Dora L. Almanza-Ojeda 4,*, Horacio Rostro-Gonzalez 1,3, Rene J. Romero-Troncoso 3,5 and Mario A. Ibarra-Manzano 3,4
1 Laboratorio de Sistemas Bioinspirados, Departamento de Ingeniería Electrónica, DICIS, Universidad de Guanajuato, Carr. Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
2 Departamento de Arte y Empresa, DICIS, Universidad de Guanajuato, Carr. Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
3 Cuerpo Académico de Telemática, DICIS, Universidad de Guanajuato, Carr. Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
4 Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, DICIS, Universidad de Guanajuato, Carr. Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
5 Departamento de Ingeniería Electrónica, DICIS, Universidad de Guanajuato, Carr. Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
* Author to whom correspondence should be addressed.
Submission received: 27 January 2016 / Revised: 26 February 2016 / Accepted: 29 February 2016 / Published: 5 March 2016
(This article belongs to the Section Physical Sensors)

Abstract

Quaternions can be used as an alternative to model the fundamental patterns of electroencephalographic (EEG) signals in the time domain. Thus, this article presents a new quaternion-based technique known as quaternion-based signal analysis (QSA) to represent EEG signals obtained using a brain-computer interface (BCI) device to detect and interpret cognitive activity. This quaternion-based signal analysis technique can extract features to represent brain activity related to motor imagery accurately in various mental states. Experimental tests in which users were shown visual graphical cues related to left and right movements were used to collect BCI-recorded signals. These signals were then classified using decision tree (DT), support vector machine (SVM) and k-nearest neighbor (KNN) techniques. The quantitative analysis of the classifiers demonstrates that this technique can be used as an alternative in the EEG-signal modeling phase to identify mental states.

1. Introduction

The interest in establishing direct communication between the brain and other external devices using electroencephalographic (EEG) signals has increased with the use of brain-computer interface (BCI) systems. According to Wolpaw [1], “BCI interfaces allow to record the brain signals of an individual, extract their characteristics and turn them into artificial outputs that operate outside or in your own body”. In other words, the interface establishes a channel of communication and control between an individual and an external device. BCI systems use EEG activity to perform various tasks, such as controlling cursor movements [2], browsing the web [3,4], selecting letters or icons [5,6,7,8], communicating with virtual environments (e.g., in games) [9,10], robot navigation [11,12,13], controlling a wheelchair [14,15,16,17] or operating prosthetics [18,19,20], among others.
By nature, EEG signals are non-linear and non-stationary. Usually, these signals are processed and analyzed using various mathematical methods to gather information regarding the frequency components and, in turn, the functional relationships between brain areas. The most commonly used techniques in the analysis are the fast Fourier transform (FFT), power spectral density (PSD), Hjorth parameters and the discrete wavelet transform (DWT). Several strategies have been presented to analyze brain signals by extracting and classifying EEG signals for cognitive-movement detection purposes. For instance, Hongyu [21] presents an on-line classification method for BCI based on common spatial patterns (CSP) for feature extraction, using a support vector machine (SVM) as a classifier for imagined hand and foot movements, achieving accuracies of 86.3%, 91.8% and 92.0% with three subjects. Similarly, Bhattacharyya [22] conducted a comparative performance analysis of different classifiers (linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), k-nearest neighbor (KNN), linear SVM, radial basis function (RBF) SVM and naïve Bayes) to differentiate EEG signals for left-right limb movement, with SVM being the most accurate at 82.14%. In turn, Jiralerspong [23] conducted an experimental test of three mental states using the FFT with a Hamming window function that resulted in a 72% recognition rate. These strategies show good accuracy rates, but the information is extracted within a frequency or time-frequency domain, thereby losing vital information. On the other hand, some studies have been conducted based on multi-scale entropy (MSE) analysis to detect features of biological signals, as shown in [24], where Costa et al. present a generalization of multi-scale entropy to analyze the structure of time series of heartbeats.
Similarly, Mossabber [25] uses multivariate multi-scale entropy (MMSE) as a generalization of MSE adapted for biological and physical systems. Morabito et al. [26] also use a multivariate multi-scale methodology to assess the complexity of physiological systems by means of permutation entropy, whereby time series are processed in segments. However, these analyses do not process several signals at the same time, and signal performance is based on patterns, which results in magnitudes being weighted differently.
An alternative would be to conduct the signal analysis within the time domain. Quaternions can be an alternative to model EEG signals because they provide a mathematical notation to represent object orientations and rotations three-dimensionally, which makes it possible to represent EEG multichannel signals beyond what traditional methods allow. Recent studies have combined quaternion algebra with traditional methods. For instance, Furman [27] uses the so-called quaternion Fourier transform (QFT) to process 3D images, and claims that the rotation operation is faster than when using a matrix-based method. In addition, Zhao [28] uses quaternion principal component analysis (QPCA) to represent EEG multichannel epilepsy signals with better results than those obtained using a traditional approach.
There are some considerable advantages in using quaternions, and thus a new technique to represent EEG signals visually using quaternion algebra to classify brain activity in relation to visual cues is presented here. The visual representation, extraction and classification of EEG features correspond to the cognitive activity recorded as the brain processes motor images. Motor imagery can be defined as a mental process linked to a motor action without any overt motor output [29]. In this study, three kinds of visual cues were used: left, right and waiting time. The EEG signals gathered using the BCI device were divided into blocks that contained information from only four of the 14 sensors. Each block was represented using quaternions to extract the signal features, and was later classified (off-line) using three techniques: KNN, SVM and decision trees (DT). Finally, the various classification results obtained when detecting classes were used to validate the performance levels.
This paper is organized as follows: Section 2 is an introduction to quaternions, BCI systems and classifiers; Section 3 includes a description of the QSA technique; Section 4 accounts for the experimental tests; Section 5 features a discussion of the QSA implementation results; and Section 6 ends with some conclusions.

2. Preliminaries

This section provides a brief but accurate description of the key elements used to develop the study, that is, quaternion algebra and the BCI device used to acquire the brain signals.

2.1. Quaternions

Quaternions were proposed in 1843 by Hamilton [30], as a set of four constituents (one real component and three imaginary) of the form: q = w + ix + jy + kz, where w, x, y, z ∈ ℝ and i, j, k are symbols of three imaginary quantities known as imaginary units. These units follow these rules:
i² = j² = k² = ijk = −1
ij = k, jk = i, ki = j
ji = −k, kj = −i, ik = −j
A quaternion can be described as:
q = (s + v), v = (x, y, z)
where s and v are known as the quaternion’s scalar and vector, respectively. When s = 0, q is known as a pure quaternion.
Table 1 summarizes the basic operations of quaternion algebra, where q and p are two quaternions, whilst the dot and the cross represent the usual scalar and vector products.
Note that the multiplication of quaternions is not a commutative operation; instead, it is associative and distributive in relation to the addition.
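These identities can be checked numerically with a small Hamilton-product routine (an illustrative sketch, not code from the paper):

```python
import numpy as np

def qmult(q, p):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,   # scalar part
        w1*x2 + x1*w2 + y1*z2 - z1*y2,   # i component
        w1*y2 - x1*z2 + y1*w2 + z1*x2,   # j component
        w1*z2 + x1*y2 - y1*x2 + z1*w2,   # k component
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(qmult(i, j))   # ij = k:  [0. 0. 0. 1.]
print(qmult(j, i))   # ji = -k: [ 0.  0.  0. -1.]
```

Reversing the operand order flips the sign of the vector part, which is exactly the non-commutativity stated above.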
Quaternions with a norm equal to one are known as unit (or normalized) quaternions. If q is a unit quaternion, it can be written as [31]:
q = cos θ + e sin θ, ‖e‖ = 1
where cos θ = s, sin θ = ‖v‖ and e = v/‖v‖. Equally, the previous equation can be used to represent vector rotations:
q = cos θ + e sin θ = b a⁻¹
where a and b are any two vectors having the same length, the angle between a and b is θ, e is perpendicular to both a and b, and a, b and e form a right-handed set.
For a rotation of angle θ around a unit vector a, q must be formed thus:
q = cos(θ/2) + a sin(θ/2)
Furthermore, the operation to be performed on a vector r to produce a rotated vector r′ is:
r′ = q r q⁻¹ = (cos(θ/2) + a sin(θ/2)) r (cos(θ/2) − a sin(θ/2))
Equation (6) is a useful representation that makes the rotation of a vector easier: r is the original vector, r′ is the rotated vector, and q is the quaternion that defines the rotation.
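Equation (6) can be applied numerically by embedding the vector as a pure quaternion (an illustrative sketch, not the authors' code):

```python
import numpy as np

def qmult(q, p):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, theta):
    """Rotate 3-vector v by angle theta about a unit axis via r' = q r q^-1."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])  # inverse of a unit quaternion
    r = np.concatenate(([0.0], v))                   # embed v as a pure quaternion
    return qmult(qmult(q, r), q_conj)[1:]            # vector part of the result

# Rotating the x-axis by 90 degrees about z yields (approximately) the y-axis:
print(rotate([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], np.pi / 2))   # ~[0. 1. 0.]
```

Because q is a unit quaternion, its inverse is simply its conjugate, which is what makes this cheaper than building a full rotation matrix.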

2.2. BCI System

BCI devices capture individuals' brain signals, which are subsequently processed and analyzed. Depending on how signals are captured, BCI devices are classified as invasive or non-invasive. The Emotiv Epoc headset (Figure 1a) is a non-invasive mobile BCI device with a gyroscopic sensor, 14 EEG channels (electrodes) and two reference channels (CMS/DRL, with a 128 Hz sample frequency). The distribution of sensors in the headset is based on the international 10–20 electrode placement system, with two sensors as reference for proper placement on the head. This device records the brain activity of an individual, and is able to detect and process their thoughts, feelings and expressions in real time. Based on the 10–20 system, the channels are: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 (Figure 1b).

2.3. Description of Classifiers

In this study, we used several classification algorithms: decision tree, support vector machine and k-nearest neighbor. DT [32,33,34] is a widely-used and easy-to-implement classification technique used to analyze data for prediction purposes. It consists of a set of conditions or rules organized in a hierarchical structure where the final decision can be determined following conditions established from the root to its leaves. SVM [35,36,37] refers to supervised learning models used to classify data into two categories by finding an optimal hyperplane that separates two possible values of the variable y ∈ {+1, −1}. If the data can be separated linearly, the hyperplane divides SVM input data into two initial subgroups by assigning {−1, +1} tags. KNN [38] is a non-parametric approach used to solve classification and regression problems, based on the assumption that an object's class is the same as that of its closest neighbors.
Both decision trees and KNN are designed to handle multi-class problems, which is not the case for the SVM classifier as this is an algorithm originally developed for use with two classes only. However, the classification can be expanded to more than two classes by combining binary classifiers, that is, by generating a classifier for the d classes available. For instance, it could be used thus: classifier 1 (class 0 vs. class 1 and class 2), classifier 2 (class 1 vs. class 0 and class 2), classifier 3 (class 2 vs. class 0 and class 1). A decision function is used subsequently to group them and assess the class they belong to following this criterion: class 0 (1 0 0), class 1 (0 1 0) and class 2 (0 0 1).
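The grouping criterion described above can be sketched as a small decision function (illustrative only; the voting scheme follows the one-hot patterns in the text):

```python
import numpy as np

# One-hot response patterns from the text's criterion:
# class 0 -> (1 0 0), class 1 -> (0 1 0), class 2 -> (0 0 1)
def group_binary_outputs(outputs):
    """Decision function combining d one-vs-rest binary outputs; entry d is the
    vote of 'classifier d' (1 = sample looks like class d, 0 = rest).
    The first maximal vote wins when the pattern is not a clean one-hot."""
    return int(np.argmax(outputs))

assert group_binary_outputs([1, 0, 0]) == 0   # class 0
assert group_binary_outputs([0, 1, 0]) == 1   # class 1
assert group_binary_outputs([0, 0, 1]) == 2   # class 2
```

In practice each binary vote would be replaced by the signed decision value of the corresponding binary SVM, so ambiguous patterns resolve toward the most confident classifier.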

3. Proposed Method

In signal-pattern recognition, each sample is represented by a collection of descriptors that will be segmented and classified. Thus, the characterization of signals, in this case EEG signals, is essential to determine the performance and accuracy of the final classification process. As mentioned in Section 1, analysis in the frequency domain is a common technique used when extracting features of EEG signals; for instance, the Fourier transform, short-time Fourier transform or wavelet transform are commonly used. However, the use of quaternion algebra is proposed here as a novel tool to extract EEG-signal features that simplifies the final classification task.
EEG signals can be described using quaternion-based rotations and orientations, as shown earlier. Vector rotations are usually described by the rotation matrix or Euler angles [39], and quaternions can be advantageous for three reasons: (1) they avoid ambiguities in the data; (2) they allow a more accurate representation of the data; and (3) they require fewer calculations than other rotational techniques.
The proposed so-called quaternion-based signal analysis (QSA) method described in Algorithm 1 (Table 2) can be used to model multichannel EEG signals using quaternions, taking a set of input signals as a single entity, and converting it into a pure quaternion.
Thus, the multichannel EEG information is represented using quaternions to characterize the signals recorded for 10 min during each test, and classified after further processing.
Algorithm 1 can be described as follows: Line 1 defines the input data: delta (dt) movements in time t, the signals to be analyzed (signals) and the blocks with the channels to be analyzed (nblocks), where each block includes signals from four channels, and pr is a flag indicating whether the sample is being used during validation or training. Line 2 calculates the segments matrix y(t) defined by changes between the three classes (left, right and waiting time) from the pool of signals contained in nblocks. The quaternions array quat, at line 3, is created using the channels included in nblocks. Line 4 dictates that for each segment yi, q(t) and r(t) are formed, considering that q(t) is an array with n quaternions (line 4a) and r(t) is an array with n pure quaternions shifted according to a dt value (line 4b). After this, the rotation qrot(t) is calculated using quaternions q(t) and r(t) (line 4c), which produces an array of n rotated quaternions. Line 4d calculates the scalar array qmod(t) from the module of qrot(t), which contains n scalar elements that are used at line 4e to form a matrix with Mi,j features, where index i corresponds to the analyzed segment and index j is one of the m features to be computed using the equations included in Table 3. Line 4f forms vector ci using the classes assigned to each segment yi. In addition, matrix Mk,j is created using the k-th data for the training phase at a %t rate (line 5), and matrix Ml,j, together with the remainder of the original matrix Mi,j, is used to create the validation matrix, where index l corresponds to the elements used in the validation process, in a 1−%t proportion, for the analyzed segment (line 6). Thus, in lines 7 and 8, data are trained and/or validated by calling the function ProcQSA(), which returns a vector R using classes Ĉk (target class for training purposes) and Ĉl (target class for validation purposes).
In the function ProcQSA(), during the training phase, matrix Mt data are assessed using DT, KNN or SVM classifiers, adjusting the parameters for each classifier. Finally, the accuracy percentages are calculated during the training and validation phases (lines 9 and 10).
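One possible numpy reading of Algorithm 1's per-segment core is sketched below. Several details are assumptions on our part: the exact construction of r(t) (here, the three imaginary channels shifted by dt), the use of the conjugate in the rotation, and the precise contrast/homogeneity formulas of Table 3 (here, simple difference-based forms).

```python
import numpy as np

def qmult(q, p):
    """Row-wise Hamilton product of (n, 4) quaternion arrays (w, x, y, z)."""
    w1, x1, y1, z1 = q.T
    w2, x2, y2, z2 = p.T
    return np.column_stack([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qsa_features(signals, dt):
    """Sketch of one reading of the QSA core: 'signals' is a (4, n) EEG block,
    channel 0 taken as the scalar part and channels 1-3 as the imaginary parts.
    r(t) is assumed to be the imaginary channels shifted by dt, embedded as
    pure quaternions; the conjugate stands in for the inverse."""
    n = signals.shape[1] - dt
    q = signals[:, :n].T                                          # n quaternions q(t)
    r = np.column_stack([np.zeros(n), signals[1:, dt:dt + n].T])  # n pure quaternions r(t)
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    qrot = qmult(qmult(q, r), q_conj)                             # rotated quaternions qrot(t)
    qmod = np.linalg.norm(qrot, axis=1)                           # scalar array qmod(t)
    d = np.diff(qmod)
    return np.array([qmod.mean(),                                 # mean
                     qmod.var(),                                  # variance
                     np.mean(d ** 2),                             # contrast (assumed form)
                     np.mean(1.0 / (1.0 + d ** 2))])              # homogeneity (assumed form)
```

Each segment yi would contribute one such four-element row to the feature matrix Mi,j, with the segment's class appended to vector ci.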
Table 3 shows the formulae used to extract features by Haralick [40] and adapted for use with quaternion algebra to implement the QSA model.
Equation (7) shows matrix M with its feature vectors. In this matrix, columns correspond to features and rows to samples. The feature vector consists of the average, variance, contrast and homogeneity.
M = [ μ1  σ1²  con1  H1
      μ2  σ2²  con2  H2
      ⋮    ⋮     ⋮     ⋮
      μi  σi²  coni  Hi ]

4. Description of the Experimental Tests

This section outlines the steps followed to implement the proposed QSA method (Algorithm 1) with EEG signals obtained from various participants for the purposes of detecting three mental states: thinking “left”, “right” and “waiting time”, with no need for any movement or word.
First of all, the technique consists of representing the EEG signals as a single quaternion. For experimental testing, one channel of the EEG signal is used as the real component and three more channels as the imaginary components. After that, feature computation provides the input data for the KNN, SVM and DT classifiers.
EEG signals are acquired using the Emotiv Epoc headset (shown above in Figure 1a). The graphic user interface used to analyze the acquired signals is Python-based, and the feature selection and extraction is programmed using Matlab R2014a®. The following subsections include detailed descriptions of each stage of the block diagram (Figure 2).

4.1. EEG Signal Acquisition

For signal acquisition purposes, the BCI's cognitive mode was used, given that motor activities were one of the cognitive processes involved in producing movement, which was the key focus of the tests. The experiment was conducted with 10 participants of various ages, both male and female. The time for each individual test was 10 consecutive minutes, during which three elements appeared on the screen: an arrow pointing to the left and moving in that direction, a cross representing rest time and an arrow pointing to the right and moving in that direction (see Figure 3). The left and right arrows alternate after appearing for 10 s (test mode), with a 5 s rest time in between during which a cross appeared at the center of the screen. In other words, a total of 40 arrows were shown alternately over the 10 min, with 5 s breaks between each alternation. The purpose of the experiment was to make the participant think or imagine movement while the arrow appeared on the screen. In addition, during the test mode, participants were asked to remain motionless and avoid sharp body movements that could interfere with the signal being recorded, which they did, and were then allowed to move freely during rest time. States 0, 1 and 2 were used to refer to the three elements: waiting time (0), left (1) and right (2) during the classification phase.
The number of samples recorded during the EEG data acquisition phase from each of the 10 participants amounts to 76,800 (10 min × 60 s/min × 128 samples/s), considering eight of the 14 available channels on the Emotiv Epoc device in relation to activity in the motor and frontal areas of the brain. Later, the samples were combined (Table 4) to form signal blocks (nblocks) that were then used as input in Algorithm 1 in order to find the channel with the best performance.
Figure 4 shows the signals acquired from block 1 for 5 s.

4.2. QSA Method Implementation

After acquiring the data in the previous step, the next phase consisted of extracting EEG signal features to find the desired classes (thinking "left", "right" and "waiting time"). First, the QSA method was implemented (Algorithm 1) to represent EEG signal blocks as shown in Table 4, and to extract the features related to the cues shown. In these experiments, pre-processing was not required. Thus, to detect sharp changes between classes, a segments matrix y(t) was obtained using 1200 samples for each 10 s acquired for the "left" and "right" classes, and 600 samples for the "waiting time" class obtained for each 5 s-long break. In addition, the quaternion (quat) emerged from block signals considering the first channel as the scalar component and the remaining ones as the imaginary components, as shown in Figure 5. Later, q(t) and r(t) were calculated, the latter with its relevant dt movement. As for the movement, several tests were conducted using different dt values for each block. In other words, a dt movement of 1 to 10 was used in multiples of 7.8 ms (the sample period) at time t to find out its impact on the classification accuracy percentage. Rotation qrot was calculated using q(t) and r(t), as well as qmod using the result of rotation qrot. Once the module was obtained, the four features included in Table 3 (mean, contrast, homogeneity and variance) were calculated to generate matrix Mi,j and vector ci (where i represents the segments of y(t) and j represents the features). The resulting matrix Mi,j was used during the training and classification process to infer the mental state and the required class.

4.3. Classification

During the training and validation stages, several samples were taken to create submatrix Mk,j using 30% of Mi,j, on condition that the data contained examples of the three classes required, that is, "left" (1), "right" (2) and "waiting time" (0). In addition, 70% of the data were used to create matrix Mm,n during the validation phase. In this study, the training and validation process was conducted using three classifiers: (1) decision tree based on input and predictor variables; (2) SVM using a Gaussian RBF kernel with a default scaling factor; and (3) KNN, which uses the Euclidean distance with a default Matlab-selected number of neighbors. The classification process was repeated 20 times for each of the 10 movements mentioned in each block. The methodology was applied to each of the 10 participants, resulting in 10 participants × 10 blocks × 10 dt values × 20 repeats.
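The 30%/70% split and the Euclidean-distance KNN step can be illustrated with a numpy sketch (standing in for the Matlab classifiers; the toy feature matrix below is invented, not the paper's data):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal Euclidean-distance KNN with majority voting."""
    dist = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(dist, axis=1)[:, :k]                 # k closest training rows
    return np.array([np.bincount(y_train[rows]).argmax() for rows in nearest])

rng = np.random.default_rng(0)
# Invented stand-in for feature matrix M: 40 segments per class, 4 features each
M = np.vstack([rng.normal(c, 0.3, size=(40, 4)) for c in (0.0, 2.0, 4.0)])
c = np.repeat([0, 1, 2], 40)

idx = rng.permutation(len(c))
n_train = int(0.3 * len(c))                                   # 30% training, 70% validation
train, test = idx[:n_train], idx[n_train:]
pred = knn_predict(M[train], c[train], M[test], k=5)
accuracy = np.mean(pred == c[test])
print(f"validation accuracy: {accuracy:.2f}")
```

The DT and SVM runs would follow the same split, swapping only the classifier applied to M[train].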
Considering all these parameters, a detailed analysis of the performance and best accuracy for each classifier for various signal blocks and movements is presented in the following sections.

5. Results and Discussion

The accuracy percentages obtained during the classification phase are shown in Table 5. The table presents the accuracy rate for the three classifiers for each dt movement. As can be seen, with a movement of dt = 4, DT is the classifier with the best classification percentage at 84.92%, followed by KNN at 84.34%. KNN reached its own best, 84.39%, at dt = 2. Finally, SVM recorded the lowest classification percentages, e.g., 77.49% for dt = 4.
Figure 6, in turn, shows the graphs that correspond to Table 5 data. Figure 6a shows the symmetry of data for the three classifiers. In addition, in Figure 6, both (a) and (b) show that data for DT and KNN are both close to 84%, while the SVM mean was recorded as 78%.
On the other hand, Figure 6b shows the accuracy of the various classifiers. Note that the parameter dt does not substantially alter the accuracy of the results and therefore does not affect the QSA model.
Table 6 shows the signal block performance results for the 10 blocks under analysis. Block 1 had the best results for DT and KNN with an accuracy in excess of 86% for both classifiers at movements of 1 and 3. As for SVM, it achieved an accuracy rate of 78% in block 4 with a movement dt = 8.
Table 7 shows that block 1 achieved the best average results for DT and KNN, unlike SVM which performed at its best in block 3.
Figure 7 shows the behavior of blocks by classifier. As can be seen, blocks 1 and 3 stand out for all three classifiers. In contrast, block 6 shows a poor performance for DT at 84.44%, block 9 produced poor results for KNN at 83.04% and block 7 did likewise for SVM at 77.02%.
Therefore, the best accuracy results were obtained by block 1 with a movement dt = 4. Table 8 and Figure 8 show the behavior of data obtained for each participant in block 1.
Subject 1 consistently achieved the best results compared to the remaining participants because this was the only participant trained in using the BCI device and the data-acquisition system. However, considering that the remaining participants had no training, their results were nonetheless good.
The performance obtained by the three classifiers has been compared using various assessment metrics, such as recognition rate (RT) and error rate (ET). However, according to Ibarra [41], these metrics do not always suffice to assess a classification method. For instance, in some cases the estimate of the recognition rate may not contain enough information to be assessed. Therefore, further metrics ought to be used, such as sensitivity (Sd) and specificity (Spd), and four complementary performance metrics: accuracy (Ad), false alarm rate (FAd), positive probability (PPd) and negative probability (NPd), where subscript d corresponds to the class in question.
The sensitivity metric assesses how aptly the classifier can recognize samples from the class in question. Specificity, also known as real negative rate, measures whether the classifier can recognize samples that do not belong to the class in question. Accuracy assesses the number of samples classified within the class in question that actually belong to it. The false alarm rate records false positives or type-I errors. Positive probability is the proportion of samples within the class in question that have been classified correctly compared to the samples that do not belong to the class in question and have been misclassified. Negative probability measures the proportion of samples classified as not belonging to the class in question. Both of these latter metrics are weighted considering the number of samples in each class.
RT = #{c | c = ĉ} / #{c}
ET = #{c | c ≠ ĉ} / #{c}
Sd = #{c | c = d and c = ĉ} / #{c | c = d}
Spd = #{c | c ≠ d and c = ĉ} / #{c | c ≠ d}
Ad = #{c | c = d and c = ĉ} / #{c | ĉ = d}
FAd = #{c | c ≠ d and c ≠ ĉ} / #{c | c ≠ d}
PPd = Sd / (1 − Spd)
NPd = (1 − Sd) / Spd
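For illustration, these per-class metrics can be computed directly from true and predicted labels (a sketch; the variable names and toy labels are ours, not the paper's):

```python
import numpy as np

def class_metrics(y_true, y_pred, d):
    """Per-class metrics for the class in question d, following the
    counting definitions above (c = true label, c-hat = predicted label)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos, neg = y_true == d, y_true != d
    S  = np.mean(y_pred[pos] == d)                       # sensitivity Sd
    Sp = np.mean(y_pred[neg] == y_true[neg])             # specificity Spd
    A  = np.mean(y_true[y_pred == d] == d)               # accuracy Ad
    FA = np.mean(y_pred[neg] != y_true[neg])             # false alarm rate FAd
    PP = S / (1.0 - Sp) if Sp < 1.0 else float("inf")    # positive probability PPd
    NP = (1.0 - S) / Sp if Sp > 0.0 else float("inf")    # negative probability NPd
    return S, Sp, A, FA, PP, NP

# Invented toy labels (0 = waiting, 1 = left, 2 = right):
S, Sp, A, FA, PP, NP = class_metrics([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], d=1)
print(S, Sp, A, FA)   # 1.0 0.5 0.666... 0.5
```

Note that FAd = 1 − Spd by construction, which is why the false alarm rate complements the specificity in Table 9.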
The results are presented in Table 9, highlighting the best score delivered by each performance measure using the three classifiers.
Note that in Table 9 the DT classifier shows the best performance for the recognition, sensitivity, specificity, accuracy and positive probability rates. The results for KNN are similar to those for DT, but the former shows a better accuracy performance with class 0. The SVM classifier shows the worst performance in recognition and error, but has a better performance in sensitivity with class 1 and accuracy for class 0. The error rates are 15.25% for DT, 16.38% for KNN, and 22.65% for SVM, all of them still within an acceptable range.
To summarize, the results of testing this new method with 10 participants, 20 times per participant, using a 30%/70% training/validation split of the data have been presented. The average performance accuracy was 84.75% when using the DT classifier. The same test performed using the KNN and SVM classifiers yielded accuracy rates of 83.62% and 77.35%, respectively. After exhaustive tests of the proposed technique, the results show that this methodology for monitoring, representing and classifying EEG signals can be usefully applied for the purposes of having individuals control external devices.

6. Conclusions

Representing rotations in three-dimensional spaces using quaternions is computationally more efficient, both in terms of storage space required and the number of necessary operations. Thus, the results presented in this article show that QSA can be used as a tool to extract EEG signal features to identify “left” and “right” accurately using DT and KNN classifiers, even though admittedly the tests were conducted in a semi-controlled environment.
This study has presented the QSA method to obtain features of EEG signals and to represent motor instructions. The differences between these features were assessed using DT, SVM and KNN classifiers. The best classification results were obtained using DT. The QSA technique opens up new modeling and classification opportunities to process both biosignals and other types of signals for BCI devices. The QSA technique requires greater levels of independence between signals embedded into each quaternion for best results. In the future, this technique should be modified to reduce the number of samples needed to obtain a class and thus the analysis and processing times. In addition, the number of cognitive classes should be increased to include facial and emotional expressions in tests as well.

Acknowledgments

Horacio Rostro Gonzalez wishes to acknowledge the funds provided by the Mexican Secretaría de Educación Pública (SEP) and the University of Guanajuato (UG) within their Nuevo Profesor de Tiempo Completo (NPTC) and Convocatoria Institucional 2015 programs. The authors also wish to express their gratitude to the Engineering Division (DICIS) of the University of Guanajuato for the funds provided to cover the costs of publishing in open access.

Author Contributions

Ibarra-Manzano proposed and described the theoretical use of the QSA technique; Batres-Mendoza and Guerra-Hernandez programmed the QSA and the classifiers; Batres-Mendoza and Montoro-Sanjosé designed and prepared the experiments; Batres-Mendoza and Almanza-Ojeda tested and analyzed the metrics of the data; and Rostro-Gonzalez and Rene J. Romero-Troncoso contributed the BCI System, its use and the theoretical explanation of the classifiers for implementing them. All the authors contributed to writing the manuscript; and Montoro-Sanjose and Almanza-Ojeda carried out the proofreading of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EEG: Electroencephalography
QSA: Quaternion-based signal analysis
BCI: Brain-computer interface
DT: Decision tree
KNN: k-nearest neighbor
SVM: Support vector machine
FFT: Fast Fourier transform
PSD: Power spectral density
DWT: Discrete wavelet transform
CSP: Common spatial patterns
QFT: Quaternion Fourier transform
QPCA: Quaternion principal component analysis
RBF: Radial basis function
RT: Recognition rate
ET: Error rate
S: Sensitivity
Sp: Specificity
A: Accuracy
PP: Positive probability
NP: Negative probability
FA: False alarm
QDA: Quadratic discriminant analysis
LDA: Linear discriminant analysis
MSE: Multi-scale entropy
MMSE: Multivariate multi-scale entropy

References

  1. Wolpaw, J.R.; Birbaumer, N.; Heetderks, W.J.; McFarland, D.J.; Peckham, P.H.; Schalk, G.; Donchin, E.; Quatrano, L.A.; Robinson, C.J.; Vaughan, T.M. Brain-Computer interface technology: A review of the first international meeting. IEEE Trans. Rehabil. Eng. 2000, 8, 164–173.
  2. Kanoh, S.; Miyamoto, K.; Yoshinobu, T. A P300-based BCI system for controlling computer cursor movement. In Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Boston, MA, USA, 30 August–3 September 2011; pp. 6405–6408.
  3. Escolano, C.; Antelis, J.M.; Minguez, J. A Telepresence Mobile Robot Controlled With a Noninvasive Brain–Computer Interface. IEEE Trans. Syst. Man Cybern. B Cybern. 2012, 42, 793–804.
  4. Leeb, R.; Tonin, L.; Rohm, M.; Desideri, L.; Carlson, T.; Millan, J.D.R. Towards Independence: A BCI Telepresence Robot for People With Severe Motor Disabilities. IEEE Proc. 2015, 103, 969–982.
  5. See, A.R.; Chen, S.-C.; Ke, H.-Y.; Su, C.-Y.; Hou, P.-Y.; Liang, C.K. Hierarchical character selection for a brain computer interface spelling system. In Proceedings of the 3rd International Conference on Innovative Computing Technology (INTECH), London, UK, 29–31 August 2013; pp. 415–420.
  6. Wang, Y.; Li, J.; Jian, R.; Gu, R. Channel selection based on amplitude and phase characteristics for P300-based brain-computer interface. In Proceedings of the 6th International Conference on Biomedical Engineering and Informatics (BMEI), Hangzhou, China, 16–18 December 2013; pp. 202–207.
  7. Samizo, E.; Yoshikawa, T.; Furuhashi, T. Improvement of spelling speed in P300 speller using transition probability of letters. In Proceedings of the Joint 6th International Conference on Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), Kobe, Japan, 20–24 November 2012; pp. 163–166.
  8. Amcalar, A.; Cetin, M. A brain-computer interface system for online spelling. In Proceedings of the IEEE 18th Signal Processing and Communications Applications Conference (SIU), Diyarbakir, Turkey, 22–24 April 2010; pp. 196–199.
  9. Munoz, J.E.; Chavarriaga, R.; Villada, J.F.; Sebastian Lopez, D. BCI and motion capture technologies for rehabilitation based on videogames. In Proceedings of the IEEE Global Humanitarian Technology Conference (GHTC), San Jose, CA, USA, 10–13 October 2014; pp. 396–401.
  10. Muñoz, J.E.; Villada, J.F.; Muñoz, C.D.; Henao, O.A. Multimodal system for rehabilitation aids using videogames. In Proceedings of the IEEE Central America and Panama Convention (CONCAPAN XXXIV), Panama City, Panama, 12–14 November 2014; pp. 1–7.
  11. Vourvopoulos, A.; Liarokapis, F. Robot Navigation Using Brain-Computer Interfaces. In Proceedings of the IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Liverpool, UK, 25–27 June 2012; pp. 1785–1792.
  12. Upadhyay, R.; Kankar, P.K.; Padhy, P.K.; Gupta, V.K. Robot motion control using Brain Computer Interface. In Proceedings of the 2013 International Conference on Control, Automation, Robotics and Embedded Systems (CARE), Jabalpur, India, 16–18 December 2013; pp. 1–5.
  13. Chae, Y.; Jo, S.; Jeong, J. Brain-actuated humanoid robot navigation control using asynchronous Brain-Computer Interface. In Proceedings of the IEEE/EMBS 5th International Conference on Neural Engineering (NER), Cancun, Mexico, 27 April–1 May 2011; pp. 519–524.
  14. Chai, R.; Ling, S.H.; Hunter, G.P.; Nguyen, H.T. Mental non-motor imagery tasks classifications of brain computer interface for wheelchair commands using genetic algorithm-based neural network. In Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, Australia, 10–15 June 2012; pp. 1–7.
  15. Chai, R.; Ling, S.H.; Hunter, G.P.; Tran, Y.; Nguyen, H.T. Classification of wheelchair commands using brain computer interface: Comparison between able-bodied persons and patients with tetraplegia. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 989–992.
  16. Chai, R.; Ling, S.H.; Hunter, G.P.; Nguyen, H.T. Toward fewer EEG channels and better feature extractor of non-motor imagery mental tasks classification for a wheelchair thought controller. In Proceedings of the 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA, 28 August–1 September 2012; pp. 5266–5269.
  17. Kim, H.S.; Chang, M.H.; Lee, H.J.; Park, K.S. A comparison of classification performance among the various combinations of motor imagery tasks for brain-computer interface. In Proceedings of the 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 435–438.
  18. Ofner, P.; Muller-Putz, G.R. Using a Noninvasive Decoding Method to Classify Rhythmic Movement Imaginations of the Arm in Two Planes. IEEE Trans. Biomed. Eng. 2015, 62, 972–981. [Google Scholar] [CrossRef] [PubMed]
  19. Tavella, M.; Leeb, R.; Rupp, R.; del Millan, J.R. Towards natural non-invasive hand neuroprostheses for daily living. In Proceedings of the 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Buenos Aires, Argentina, 1–4 September 2010; pp. 126–129.
  20. Ofner, P.; Muller-Putz, G.R. Decoding of velocities and positions of 3D arm movement from EEG. In Proceedings of the 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA, 28 August–1 September 2012; pp. 6406–6409.
  21. Sun, H.; Xiang, Y.; Sun, Y.; Zhu, H.; Zeng, J. On-line EEG classification for brain-computer interface based on CSP and SVM. In Proceedings of the 3rd International Congress on Image and Signal Processing (CISP), Yantai, China, 16–18 October 2010; pp. 4105–4108.
  22. Bhattacharyya, S.; Khasnobish, A.; Chatterjee, S.; Konar, A.; Tibarewala, D.N. Performance Analysis of LDA, QDA and KNN Algorithms in Left-Right Limb Movement Classification from EEG Data. In Proceedings of the 2010 International Conference on Systems in Medicine and Biology (ICSMB), Kharagpur, India, 16–18 December 2010; pp. 126–131.
  23. Jiralerspong, T.; Liu, C.; Ishikawa, J. Identification of three mental states using a motor imagery based brain machine interface. In Proceedings of the 2014 IEEE Symposium on Computational Intelligence in Brain Computer Interfaces (CIBCI), Orlando, FL, USA, 9–12 December 2014; pp. 49–56.
  24. Costa, M.D.; Goldberger, A.L. Generalized Multiscale Entropy Analysis: Application to Quantifying the Complex Volatility of Human Heartbeat Time Series. Entropy 2015, 17, 1197–1203. [Google Scholar] [CrossRef]
  25. Ahmed, M.U.; Mandic, D.P. Multivariate multiscale entropy: A tool for complexity analysis of multichannel data. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2011, 84, 3067–3076. [Google Scholar] [CrossRef] [PubMed]
  26. Morabito, F.C.; Labate, D.; La Foresta, F.; Bramanti, A.; Morabito, G.; Palamara, I. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG. Entropy 2012, 14, 1186–1202. [Google Scholar] [CrossRef]
  27. Ell, T.A. Quaternion Fourier Transform: Re-tooling Image and Signal Processing Analysis. In Quaternion and Clifford Fourier Transforms and Wavelets, 1st ed.; Hitzer, E., Sangwine, S.J., Eds.; Birkhäuser Basel: Basel, Switzerland, 2013; pp. 3–14. [Google Scholar]
  28. Zhao, Y.; Hong, W.; Xu, Y.; Zhang, T. Multichannel Epileptic EEG Classification Using Quaternions and Neural Network. In Proceedings of the First International Conference on Pervasive Computing Signal Processing and Applications (PCSPA), Harbin, China, 17–19 September 2010; pp. 568–571.
  29. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain-computer communication. IEEE Proc. 2001, 89, 1123–1134. [Google Scholar] [CrossRef]
  30. Hamilton, W.R. On quaternions. Proc. R. Ir. Acad. 1847, 3, 1–16. [Google Scholar]
  31. Pujol, J. Hamilton, Rodrigues, Gauss, Quaternions, and Rotations: A Historical Reassessment. Commun. Math. Anal. 2012, 13, 1–14. [Google Scholar]
  32. Kotsiantis, S. A hybrid decision tree classifier. J. Intell. Fuzzy Syst. 2014, 26, 327–336. [Google Scholar]
  33. White, R.L. Astronomical Applications of Oblique Decision Trees. AIP Conf. Proc. 2008, 1082, 37–43. [Google Scholar]
  34. Elnaggar, A.A.; Noller, J.S. Application of Remote-sensing Data and Decision-Tree Analysis to Mapping Salt-Affected Soils over Large Areas. Remote Sens. 2010, 2, 151–165. [Google Scholar] [CrossRef]
  35. Xu, H.; Song, W.; Hu, Z.; Chen, C.; Zhao, X.; Zhang, J. A speedup SVM decision method for online EEG processing in motor imagery BCI. In Proceedings of the 10th International Conference on Intelligent Systems Design and Applications (ISDA), Cairo, Egypt, 29 November–1 December 2010; pp. 149–153.
  36. Labate, D.; Palamara, I.; Mammone, N.; Morabito, G.; La Foresta, F.; Morabito, F.C. SVM classification of epileptic EEG recordings through multiscale permutation entropy. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–5.
  37. Li, K.; Zhang, X.; Du, Y. A SVM based classification of EEG for predicting the movement intent of human body. In Proceedings of the 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Korea, 30 October–2 November 2013; pp. 402–406.
  38. NirmalaDevi, M.; Appavu, S.; Swathi, U.V. An amalgam KNN to predict diabetes mellitus. In Proceedings of the 2013 International Conference on Emerging Trends in Computing, Communication and Nanotechnology (ICE-CCN), Tuticorin, India, 25–26 March 2013; pp. 691–695.
  39. Diebel, J. Representing Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors; Technical Report; Stanford University: Stanford, CA, USA, 2006. [Google Scholar]
  40. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef]
  41. Ibarra-Manzano, M. Vision Multi-Caméras Pour la Detection d´obstacles sur un Robot de Service: Des Algorithms à un Système Integer. Ph.D. Thesis, University of Toulouse, Toulouse, France, January 2011. [Google Scholar]
Figure 1. BCI System: (a) Emotiv Epoc headset; and (b) Emotiv Epoc electrode arrangement.
Figure 2. Block diagram of the overall EEG signal classification strategy.
Figure 3. Visual cues with timing scheme.
Figure 4. Example of block 1 EEG signals from four channels captured by the Emotiv Epoc device for 5 s.
Figure 5. Creating the quaternion using elements of block 1, with FC5 channel as the scalar component and FC6, P7 and P8 as the imaginary components.
Figure 6. Graphical representation of accuracy rates: (a) for DT (1), KNN (2) and SVM (3); and (b) accuracy with delta movement.
Figure 7. Graphical representation of accuracy rates for: (a) DT; (b) KNN and (c) SVM for different blocks.
Figure 8. Graphical representation of accuracy rates using: (a) DT; (b) KNN; and (c) SVM for block 1 and dt = 4.
Table 1. Operations using quaternions q and p.

Operation	Formula
Addition	q + p = (s_q + s_p, v_q + v_p)
Multiplication	q p = (s_q s_p − v_q · v_p, s_p v_q + s_q v_p + v_q × v_p)
Scalar product	q · p = (s_q s_p, v_q · v_p)
Conjugate	q̄ = (s_q, −v_q)
Norm	‖q‖ = √(q̄ q) = √(q q̄) = √(s_q² + ‖v_q‖²)
Inverse	q⁻¹ = q̄ / ‖q‖²
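For concreteness, the operations in Table 1 can be sketched in Python with a quaternion stored as a length-4 NumPy array (s, x, y, z). This is an illustrative sketch; the function names are not part of the paper's implementation.

```python
import numpy as np

def q_mul(q, p):
    """Hamilton product: (s_q s_p - v_q . v_p, s_p v_q + s_q v_p + v_q x v_p)."""
    sq, vq = q[0], q[1:]
    sp, vp = p[0], p[1:]
    return np.concatenate(([sq * sp - vq @ vp],
                           sp * vq + sq * vp + np.cross(vq, vp)))

def q_conj(q):
    """Conjugate: negate the vector part."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def q_norm(q):
    """Norm: sqrt(s_q^2 + |v_q|^2)."""
    return np.sqrt(q @ q)

def q_inv(q):
    """Inverse: conjugate divided by squared norm, so q * q_inv(q) = (1, 0, 0, 0)."""
    return q_conj(q) / (q @ q)
```

As a sanity check, multiplying a quaternion by its inverse yields the identity quaternion (1, 0, 0, 0).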
Table 2. QSA method for training and classifying EEG signals.

Algorithm 1
1. Inputs: dt, signals, nblocks, pr
2. y(t) ← segments of signals
3. quat ← signals(nblocks)
4. For each y_i(t) do
     q(t) ← quat(t)
     r(t) ← quat(t − dt)
     q_rot(t) ← nrot(q(t), r(t))
     q_mod(t) ← mod(q_rot(t))
     M_i,j ← f_j(q_mod(t)), j = 1, …, m
     c_i ← {c = (1, 2, 3, …, n) | y_i(t) ∈ c}
   End for
5. M_k,j ← {M_i,j | #{k}/#{i} = %t}  (training subset)
6. M_l,j ← {M_i,j | {l} ∉ {k}, #{l}/#{i} = 1 − %t}  (validation subset)
7. [Ĉ_k, pr] ← ProcQSA(pr, M_k,j, c_k)
8. [Ĉ_l, pr] ← ProcQSA(pr, M_l,j, c_l)
9. %rt ← #{C_k | C_k = Ĉ_k} / #{C_k}
10. %rv ← #{C_l | C_l = Ĉ_l} / #{C_l}

Function ProcQSA(pr, Mt, c)
  if pr == true then
    R ← training(Mt, c)
    pr ← false
  else
    R ← classify(Mt)
  end if
  return [R, pr]
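The core of the per-sample loop in Algorithm 1 — forming a quaternion from four EEG channels, rotating it with respect to its delayed copy r(t) = q(t − dt), and taking the modulus — can be sketched as below. This is a simplified reading: the normalized-rotation step nrot is assumed here to be the Hamilton rotation of q(t) by the unit quaternion of r(t), which is an interpretation rather than the authors' exact implementation.

```python
import numpy as np

def q_mul(a, b):
    # Hamilton product of quaternions stored as (s, x, y, z)
    s1, v1 = a[0], a[1:]
    s2, v2 = b[0], b[1:]
    return np.concatenate(([s1 * s2 - v1 @ v2],
                           s2 * v1 + s1 * v2 + np.cross(v1, v2)))

def qsa_modulus(block, dt=4):
    """block: (4, T) array; row 0 is the scalar channel (e.g., FC5),
    rows 1-3 the imaginary channels (e.g., FC6, P7, P8).
    Returns the modulus of the rotated quaternion for t = dt, ..., T-1."""
    conj = np.array([1.0, -1.0, -1.0, -1.0])
    out = np.empty(block.shape[1] - dt)
    for t in range(dt, block.shape[1]):
        q = block[:, t]
        r = block[:, t - dt]
        rn = r / np.linalg.norm(r)             # unit rotation quaternion
        qrot = q_mul(q_mul(rn, q), rn * conj)  # rotate q(t) by r(t)
        out[t - dt] = np.linalg.norm(qrot)
    return out
```

Because rotation by a unit quaternion preserves the norm, this particular sketch's modulus equals ‖q(t)‖; the rotated components q_rot(t), not only the modulus, carry the relative-orientation information.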
Table 3. Statistical features extracted using quaternions.

Statistical Feature	Equation
Mean	μ = Σ q_mod / N
Variance	σ² = Σ (q_mod − μ)² / N
Contrast	con = Σ q_mod / N²
Homogeneity	H = Σ 1 / (1 + q_mod²)
Cluster shade	cs = Σ (q_mod − μ)³
Cluster prominence	cp = Σ (q_mod − μ)⁴
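A minimal sketch of the feature-extraction step over the modulus sequence q_mod, assuming standard definitions of the mean, variance, and cluster statistics; the exact normalizations used in the paper may differ, so treat them as assumptions.

```python
import numpy as np

def qsa_stats(qmod):
    """Six statistical features of the quaternion-modulus sequence.
    Normalizations are assumptions; the paper's exact forms may differ."""
    qmod = np.asarray(qmod, dtype=float)
    N = qmod.size
    mu = qmod.sum() / N                    # mean
    var = np.sum((qmod - mu) ** 2) / N     # population variance
    con = qmod.sum() / N ** 2              # contrast
    hom = np.sum(1.0 / (1.0 + qmod ** 2))  # homogeneity
    cs = np.sum((qmod - mu) ** 3)          # cluster shade
    cp = np.sum((qmod - mu) ** 4)          # cluster prominence
    return np.array([mu, var, con, hom, cs, cp])
```

Each segment of the signal thus yields a fixed-length feature vector that can be fed to the DT, KNN, or SVM classifiers.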
Table 4. Signal blocks.

Block	BCI Channels
1	FC5	FC6	P7	P8
2	FC5	FC6	T7	T8
3	FC6	FC5	P7	P8
4	FC6	FC5	T7	T8
5	F3	F4	FC5	FC6
6	F3	F4	FC5	FC6
7	F4	F3	FC5	FC6
8	F4	F3	T7	T8
9	T7	T8	FC5	FC6
10	T7	T8	P7	P8
Table 5. Accuracy rate for DT, KNN and SVM classifiers with dt movement.

dt	DT Max	DT Mean	DT Min	KNN Max	KNN Mean	KNN Min	SVM Max	SVM Mean	SVM Min
1	0.9572	0.8490	0.7662	0.9440	0.8418	0.7857	0.7815	0.7748	0.0000
2	0.9551	0.8473	0.7633	0.9490	0.8439	0.7852	0.7843	0.7749	0.0000
3	0.9471	0.8476	0.7660	0.9474	0.8429	0.7851	0.7837	0.7754	0.7662
4	0.9507	0.8492	0.7686	0.9487	0.8434	0.7826	0.7828	0.7749	0.7633
5	0.9516	0.8478	0.7628	0.9468	0.8432	0.7836	0.7822	0.7750	0.7660
6	0.9468	0.8463	0.7658	0.9457	0.8412	0.7777	0.7809	0.7743	0.7686
7	0.9534	0.8474	0.7638	0.9484	0.8424	0.7876	0.7836	0.7749	0.7628
8	0.9495	0.8478	0.7760	0.9473	0.8418	0.7837	0.7820	0.7753	0.7658
9	0.9504	0.8471	0.7722	0.9499	0.8427	0.7849	0.7855	0.7748	0.7638
10	0.9568	0.8475	0.7649	0.9490	0.8433	0.7879	0.7828	0.7747	0.7753
Table 6. Best accuracy rates for signal blocks using classifiers DT, KNN and SVM.

Classifier	Block 1	2	3	4	5	6	7	8	9	10
DT	0.8644	0.8519	0.8635	0.8514	0.8520	0.8437	0.8516	0.8485	0.8509	0.8531
KNN	0.8651	0.8101	0.8609	0.8418	0.8457	0.8396	0.8451	0.8431	0.8394	0.8490
SVM	0.7781	0.7770	0.7790	0.7799	0.7780	0.7774	0.7784	0.7784	0.7784	0.7775
Table 7. Average accuracy rate for signal blocks using classifiers DT, KNN and SVM.

Classifier	Block 1	2	3	4	5	6	7	8	9	10
DT	0.8584	0.8472	0.8575	0.8467	0.8477	0.8398	0.8489	0.8417	0.8463	0.8478
KNN	0.8599	0.8370	0.8553	0.8371	0.8408	0.8357	0.8432	0.8363	0.8357	0.8457
SVM	0.7752	0.7752	0.7760	0.7753	0.7741	0.7741	0.7739	0.7739	0.7754	0.7746
Table 8. Accuracy rate for 10 subjects using classifiers DT, KNN and SVM (block 1 and dt = 4).

Classifier	Subject 1	2	3	4	5	6	7	8	9	10
DT	0.9427	0.8425	0.8324	0.8291	0.8662	0.8672	0.8649	0.8098	0.7632	0.8574
KNN	0.9471	0.8313	0.7946	0.8122	0.8534	0.8508	0.8642	0.7875	0.7735	0.8473
SVM	0.7625	0.7702	0.7710	0.7797	0.7811	0.7687	0.7818	0.7788	0.7581	0.7831
Table 9. Comparison of eight performance measures for the three classifiers using signal block 1, for dt = 4.

Performance Measure	DT	KNN	SVM
RT	0.8475	0.8362	0.7735
ET	0.1525	0.1638	0.2265
S0	1	1	1
S1	0.6505	0.6344	0.8775
S2	0.6701	0.6349	0.0938
Sp0	0.6598	0.6343	0.4948
Sp1	0.9064	0.8964	0.7427
Sp2	0.8972	0.8926	0.9638
A0	0.9978	0.9979	0.9979
A1	0.6944	0.6548	0.6317
A2	0.6663	0.5002	0.1404
FA0	0.3402	0.3657	0.5052
FA1	0.0936	0.1036	0.2573
FA2	0.1028	0.1074	0.0362
PP0	4.1122	3.5360	1.9911
PP1	8.3456	6.6754	3.3885
PP2	8.6331	8.4926	0.9183
NP0	0	0	0
NP1	0.3830	0.4094	0.1524
NP2	0.3647	0.4107	0.9343

Batres-Mendoza, P.; Montoro-Sanjose, C.R.; Guerra-Hernandez, E.I.; Almanza-Ojeda, D.L.; Rostro-Gonzalez, H.; Romero-Troncoso, R.J.; Ibarra-Manzano, M.A. Quaternion-Based Signal Analysis for Motor Imagery Classification from Electroencephalographic Signals. Sensors 2016, 16, 336. https://0-doi-org.brum.beds.ac.uk/10.3390/s16030336