Emotion Classification from EEG Signals Using Time-Frequency-DWT Features and ANN

Abstract

This paper proposes the use of time-frequency and wavelet transform features for emotion recognition via EEG signals. The proposed experiment has been carefully designed with EEG electrodes placed at FP1 and FP2 and using images provided by the International Affective Picture System (IAPS), which was developed by the University of Florida. A total of two time-domain features, two frequency-domain features, as well as discrete wavelet transform coefficients have been studied using the Artificial Neural Network (ANN) as the classifier, and the best combination of these features has been determined. Using the data collected, the best detection accuracy achievable by the proposed scheme is about 81.8%.


1. Introduction

EEG signals carry important information on the brain's responses to stimuli. By studying the patterns of the brain signal waveforms, we can identify the type of emotion up to a certain level of accuracy. An emotion recognition system can help in understanding the cognitive functions of the brain. It can also enable the command and control of devices such as a computer cursor, a wheelchair, or a robotic arm. In the longer term, it may even allow patients who have lost their voice and movement ability to express their thoughts and emotions.

Different features have been used to determine emotion from EEG signals. Among them, wavelet transform coefficients have been used very commonly, with good results [1] [2]. Several types of classifiers, namely the Extreme Learning Machine (ELM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN), have been investigated for emotion classification applications [1] [3]. The ANN classifier has also been used to classify brain signals of subjects engaging in mental tasks [4]. Another classifier that has been used to classify emotions is Fuzzy C-Means (FCM) clustering [5]. In this paper, a new combination of features is proposed and the ANN classifier is used to further improve the emotion classification accuracy. The features extracted include the mean, standard deviation, maximum frequency amplitude, power, and the wavelet coefficients obtained with the sym6 and db4 wavelets.

The organization of this paper is as follows. Section 2 describes the methodology for EEG signals acquisition and raw data pre-processing. Section 3 analyzes and proposes the features to be extracted from the EEG signals. Section 4 presents the classification results. Section 5 concludes the paper.

2. EEG Signals Acquisition

2.1. Test Subjects

A total of 22 samples were collected from our EEG signal acquisition experiment and used for the study reported in this paper. The test subjects were selected to have the following characteristics: 1) males between 23 and 25 years old, 2) right-handed, and 3) penultimate-year engineering students.

2.2. The Experiment

EEG signals were recorded using active dry EEG electrodes and a g.USBamp, a bio-signal amplifier with an integrated band-pass filter. The sampling rate is 256 Hz, with the electrodes positioned at FP1, FP2, and Cz, where Cz is the reference electrode.

The experiment was carried out in a dimly lit room with low noise conditions to reduce noise and artifacts. Test subjects were seated facing a projector screen of approximately 4 ft by 5 ft. This is to ensure that movements are kept to a minimum throughout the experiment.

The experiment was conducted with the use of the International Affective Picture System (IAPS) [6] to evoke happy and sad emotions in the subjects.

A self-assessment was also included in the experiment, in which the subjects rated each IAPS picture on a scale of 1 (extremely sad) to 9 (extremely happy).

A focal object (a cross-in-a-circle symbol) is shown before each IAPS picture. This is to give the subject time to concentrate and to minimize movement.

Images from IAPS were used to stimulate two discrete emotions, "happy" and "sad". The experiment is divided into 3 sets, with 10 different images shown per set. From Figure 1, the process flow is as follows: 2 seconds of focus time, followed by 4 seconds of image display and 8 seconds of survey time. At the end of each experiment set, test participants are required to fill out a Self-Assessment Manikin (SAM) form to indicate the emotion they associate with each image. To ensure the reliability and accuracy of the EEG test results, only signal data with scores above 7 or below 3 are considered in this research for feature extraction.

Figure 1. Functional block diagram of emotion recognition process.

2.3. Pre-Processing

Artifacts can be present in the EEG signal as a result of involuntary muscle movements, such as eye blinks, or interference from the heart's electrical activity, introducing unwanted spikes and distortion into the interpretation of the EEG signal. In this research, the frequencies of interest are the alpha band (8 - 12 Hz) and the beta band (12 - 30 Hz). Therefore, alpha band separation in the time domain is achieved using an elliptic low-pass filter with a cutoff frequency of 13 Hz, while beta band separation is achieved using a Chebyshev band-pass filter with cutoff frequencies of 12 Hz and 31 Hz.
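As an illustration, the sketch below shows how such band separation might be implemented. SciPy, the filter orders, the ripple values, and the zero-phase filtering are all assumptions; the paper specifies only the filter families and the cutoff frequencies.

```python
# A minimal band-separation sketch, assuming 4th-order filters and
# zero-phase filtering; only the filter families and cutoffs come
# from the paper.
from scipy import signal

FS = 256  # sampling rate in Hz

def alpha_band(eeg):
    # Elliptic low-pass filter with a 13 Hz cutoff isolates the alpha band (8 - 12 Hz).
    b, a = signal.ellip(4, 0.5, 40, Wn=13, btype='lowpass', fs=FS)
    return signal.filtfilt(b, a, eeg)

def beta_band(eeg):
    # Chebyshev Type I band-pass filter between 12 Hz and 31 Hz isolates the beta band.
    b, a = signal.cheby1(4, 0.5, Wn=[12, 31], btype='bandpass', fs=FS)
    return signal.filtfilt(b, a, eeg)
```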

3. Emotion Recognition―Features Extraction

The functional block diagram of the process involved in emotion recognition is given in Figure 1.

The design of the EEG signal acquisition has been described in the previous section. The proposed features and the classifier used in this paper are described below.

The feature vector can be represented as:

$$\mathbf{F} = \left[ F_1, F_2, \ldots, F_N \right] \tag{1}$$

where F1, F2, … FN are the N features extracted from the EEG signals.

3.1. Time Domain Analysis

The time-domain feature extraction method uses features derived from the time-varying EEG signals for emotion recognition. In this study, the mean and the standard deviation of the time-domain signals were used as the candidate features.
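A minimal sketch of these two time-domain features follows; the per-channel computation is an assumption, as the paper does not state how the two electrodes are combined.

```python
import numpy as np

def time_domain_features(x):
    """x: 1-D array holding one channel of band-filtered EEG."""
    # Mean and standard deviation of the time-varying signal.
    return [np.mean(x), np.std(x)]
```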

3.2. Frequency Domain Analysis

The frequency-domain feature extraction method uses features derived from the spectrum of the EEG signals. Specifically, a short-time Fourier transform is first performed to obtain the spectrum of the signals. The maximum amplitude of the frequency components and the power of the signals are then determined. Research has shown that the left frontal lobe (FP1) is associated with negative emotion, while the right frontal lobe (FP2) is associated with positive emotion [7]. In our feature extraction method, the signal power and the ratio of the maximum amplitude determined from FP1 to that determined from FP2 are used as the two features.
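A sketch of these frequency-domain features is given below. The STFT window length and the exact definition of signal power are assumptions; the paper specifies only that the power and the FP1-to-FP2 maximum amplitude ratio are used.

```python
import numpy as np
from scipy import signal

FS = 256  # sampling rate in Hz

def freq_domain_features(fp1, fp2):
    """fp1, fp2: 1-D arrays of band-filtered EEG from electrodes FP1 and FP2."""
    def max_amplitude(x):
        # Short-time Fourier transform; the 1 s window is an assumption.
        _, _, Z = signal.stft(x, fs=FS, nperseg=FS)
        return np.abs(Z).max()
    # Mean signal power over both channels (assumed definition of "power").
    power = 0.5 * (np.mean(fp1 ** 2) + np.mean(fp2 ** 2))
    # Ratio of the maximum frequency amplitude at FP1 to that at FP2.
    amp_ratio = max_amplitude(fp1) / max_amplitude(fp2)
    return [power, amp_ratio]
```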

3.3. Wavelet Transform Domain Analysis

Transform-domain analysis uses the discrete wavelet transform (DWT) for feature extraction. The DWT applies time-scale signal analysis, signal decomposition, and signal reconstruction to the EEG signals. The wavelet functions shown in (2) are derived from an initial time-limited wavelet h(t), which is dilated by a value of a = 2^m, translated by a constant b = k·2^m, and normalized, so that

$$h_{m,k}(t) = \frac{1}{\sqrt{a}}\, h\!\left(\frac{t-b}{a}\right) = 2^{-m/2}\, h\!\left(2^{-m}t - k\right) \tag{2}$$

where the integers m and k and the initial wavelet coefficients are as defined in [8]. By passing the signal through a pair of filters repeatedly, the DWT decomposes the signal into approximation coefficients (CA) and detail coefficients (CD). In this paper, wavelet functions of Daubechies order 4 (db4) and Symlets order 6 (sym6) are studied.
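The sketch below illustrates the decomposition using PyWavelets. The decomposition level and the reduction of the coefficient arrays to summary statistics are assumptions; the paper states only that the db4 and sym6 wavelet functions are studied.

```python
import numpy as np
import pywt

def dwt_features(x, wavelet='db4', level=4):
    """Decompose one EEG channel into approximation (CA) and detail (CD) coefficients."""
    coeffs = pywt.wavedec(x, wavelet, level=level)  # [CA_level, CD_level, ..., CD_1]
    feats = []
    for c in coeffs:
        # Summarize each coefficient band by its mean absolute value and spread.
        feats.extend([np.mean(np.abs(c)), np.std(c)])
    return feats

# Both wavelet families studied in the paper:
# dwt_features(x, 'db4') and dwt_features(x, 'sym6')
```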

4. Emotion Recognition―Classification

In this study, the Artificial Neural Network (ANN) is used as the classifier. The classification accuracies for different combinations of the features considered above were studied; the results for four of these combinations are summarized in Table 1.

Table 1. Classification accuracy.
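As an illustration of this classification stage, the sketch below trains a small multilayer perceptron on pre-computed feature vectors using scikit-learn. The network topology, the train/test split, and the file names are assumptions; the paper does not report the ANN architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# X: (n_samples, n_features) feature vectors F = [F1, ..., FN]
# y: labels, 0 = "sad", 1 = "happy" (file names are hypothetical)
X = np.load('features.npy')
y = np.load('labels.npy')

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

# A small feed-forward ANN; the hidden layer size is an assumption.
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print('classification accuracy:', ann.score(X_test, y_test))
```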

5. Results and Discussions

Based on the performance obtained when each feature is considered individually, the Discrete Wavelet Transform (DWT) features yield the highest recognition accuracy of 77.3%.

Our intermediate results show that time-domain features are able to produce better accuracy for the "Happy" emotion, achieving 81.8% and 72.7% accuracy for the mean and the standard deviation respectively. On the other hand, frequency-domain features were observed to perform better for the "Sad" emotion, with 72.7% accuracy using both the maximum frequency amplitude and the power as the features.

From Table 1, it is interesting to note that the best combination is the one that uses the maximum frequency amplitude, power, and DWT coefficients as the features. The classification accuracy achieved is 81.8%.

6. Conclusions

The effectiveness of using a combination of time-domain, frequency-domain, and DWT features for emotion recognition via EEG signals has been studied. The experiment has been carefully designed and conducted using the images provided by the IAPS. Using the data collected, the emotion recognition accuracy obtained with ANN as the classifier is 81.8%. This finding presents an alternative set of features that can be used to classify two emotional states, which can in turn be useful for brain-computer interface applications such as games.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Yohanes, R.E.J. (2012) Discrete Wavelet Transform Coefficients for Emotion Recognition from EEG Signals. 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, September 2012.
[2] Murugappan, M., Ramachandran, N. and Sazali, Y. (2010) Classification of Human Emotion from EEG Using Discrete Wavelet Transform. Journal of Biomedical Science and Engineering, 3, 390-396. https://doi.org/10.4236/jbise.2010.34054
[3] Tolic, M. and Jovic, F. (2013) Classification of Wavelet Transformed EEG Signals with Neural Network for Imagined Mental and Motor Tasks. Faculty of Electrical Engineering, University J.J. Strossmayer in Osijek, Croatia, 3 March 2013.
[4] Suleiman, A.-B.R. and Fatehi, T.A.-H. Feature Extraction Techniques of EEG Signal for BCI Application. Computer and Information Engineering Department, College of Electronics Engineering, University of Mosul. Mosul, Iraq.
[5] Murugappan, M., Rizon, M., Nagarajan, R., Yaacob, S., Hazry, D. and Zunaidi, I. (2008) Time-Frequency Analysis of EEG Signals for Human Emotion Detection. Universiti Malaysia Perlis, School of Mechatronics Engineering, Perlis, Malaysia. https://doi.org/10.1007/978-3-540-69139-6_68
[6] Lang, P.J., Bradley, M.M. and Cuthbert, B.N. (2008) International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual. Technical Report A-8. University of Florida, Gainesville, FL.
[7] Bos, D.O. EEG-Based Emotion Recognition—The Influence of Visual and Auditory Stimuli. Ph.D. Thesis, Department of Computer Science, University of Twente, Enschede.
[8] Newland, D.E. (1994) An Introduction to Random Vibrations, Spectral and Wavelet Analysis. 3rd Edition, Longman Scientific & Technical.
