Article

Parallel Ictal-Net, a Parallel CNN Architecture with Efficient Channel Attention for Seizure Detection

by Gerardo Hernández-Nava 1, Sebastián Salazar-Colores 2,*, Eduardo Cabal-Yepez 3 and Juan-Manuel Ramos-Arreguín 1

1 Faculty of Engineering, Autonomous University of Querétaro, Queretaro 76140, Mexico
2 Research Department, Centro de Investigaciones en Óptica A.C., Guanajuato 37150, Mexico
3 Multidisciplinary Studies Department, Campus Yuriria, University of Guanajuato, Guanajuato 38954, Mexico
* Author to whom correspondence should be addressed.
Submission received: 3 November 2023 / Revised: 28 December 2023 / Accepted: 29 December 2023 / Published: 23 January 2024
(This article belongs to the Special Issue Advancements in EEG and Biosignal Sensing Technologies)

Abstract:
Around 70 million people worldwide are affected by epilepsy, a neurological disorder characterized by non-induced seizures that occur at irregular and unpredictable intervals. During an epileptic seizure, transient symptoms emerge as a result of extreme abnormal neural activity. Epilepsy imposes limitations on individuals and has a significant impact on the lives of their families. Therefore, the development of reliable diagnostic tools for the early detection of this condition is considered beneficial to alleviate the social and emotional distress experienced by patients. While the Bonn University dataset contains five collections of EEG data, not many studies specifically focus on subsets D and E. These subsets correspond to EEG recordings from the epileptogenic zone during ictal and interictal events. In this work, the parallel ictal-net (PIN) neural network architecture is introduced, which utilizes scalograms obtained through the continuous wavelet transform to achieve the high-accuracy classification of EEG signals into ictal or interictal states. The results obtained demonstrate the effectiveness of the proposed PIN model in distinguishing between ictal and interictal events with a high degree of confidence. This is validated by the computed accuracy, precision, recall, and F1 scores, all of which consistently reach around 99%, surpassing previous approaches in the related literature.

1. Introduction

More than 70 million people worldwide have epilepsy, a non-transmissible, chronic neurological disorder, which does not distinguish among age, sex, or race. Epilepsy is characterized by non-induced convulsions occurring in an intermittent and random way. In 2014, the International League Against Epilepsy (ILAE) defined epilepsy as a brain disease characterized by any of the following conditions: the occurrence of at least two non-provoked seizures more than 24 h apart; one non-provoked seizure with at least a 60% probability of suffering new seizures over the following 10 years; or an epilepsy syndrome diagnosis [1].
During an epileptic seizure, transient indicators or symptoms appear because of excessive or simultaneous abnormal neural activity. The symptoms can include temporal disorientation, absent lapses, loss of consciousness, psychic symptoms (e.g., dread, anxiety, and déjà vu), as well as sudden and uncontrollable muscular-contraction movements of the body extremities. Suffering from this disease restricts people in different ways and affects families by disrupting their daily lives, making them the target of social and workplace discrimination, service denial, social stigma, and, as a consequence, resulting in depression [1]. Therefore, it is necessary to develop highly reliable assistant tools for carrying out a timely diagnosis of epilepsy to diminish social and emotional affectations regarding patients.
Input data are key elements in the development of aiding tools for the detection of epileptic surges; hence, most works concerning this subject use electroencephalography (EEG) signals as an information source, since they provide essential physiological indicators to diagnose, treat, and trace epilepsy [2]. The EEG is an electrical signal in the order of microvolts [µV], which reflects the brain’s activity by placing superficial electrodes on the scalp or intracranial ones directly on the brain. An EEG signal shows different stages that can be used for detecting epileptic phases: preictal, a brief time before an epileptic seizure; ictal, the middle stage of a stroke; postictal, the state just after an epileptic attack; and interictal, the lapse of time between consecutive epileptic events [3].
Previous works have predicted, detected, and classified epileptic incidents, looking to help patients with this disease, by utilizing artificial intelligence (AI), specifically deep learning (DL). The most-employed approaches involve support vector machines (SVMs) [4], k-nearest neighbors (KNNs) [5], Bayesian networks (BNs) [6], decision trees (DTs) [7], artificial neural networks (ANNs) [8], long short-term memories (LSTMs) [9], and convolutional neural networks [10]. Distinct characterization methods have been employed as input data sources to obtain parameters, such as the average value, variance, standard deviation, root mean square (RMS), and power spectral density (PSD), as well as other reference values obtained through the signal's transformation into the time–frequency domain utilizing the discrete wavelet transform (DWT). Furthermore, some works utilize images generated by means of the Gramian angular field transform to classify the EEG signal.
In this work, the EEG signals are characterized in the time–frequency domain utilizing the continuous wavelet transform (CWT). The CWT is particularly effective for characterizing EEG signals in the time–frequency domain, especially given the non-stationary nature of EEG data. Unlike other techniques, such as the short-time Fourier transform (STFT), the CWT excels in its adaptability to varying frequencies over time, making it well-suited for EEG analysis. This adaptability allows the CWT to simultaneously analyze both high- and low-frequency components within EEG signals, offering a comprehensive view of brain activities across different frequency bands. Furthermore, the CWT's high resolution in the time–frequency domain provides a more accurate representation of ictal and interictal events, which are crucial for understanding and diagnosing neurological conditions. The output of the CWT, known as a scalogram, visually represents the variation of frequency components over time in a logarithmic format, thereby offering an insightful visualization of the complex dynamics in EEG signals. These scalograms are the input to the proposed parallel ictal-net (PIN), which is an architecture that contains neural networks working in parallel, and each neural network has an efficient channel attention module that is described in detail in Section 3.3.
The obtained results demonstrate that the proposed PIN can efficiently discern between ictal and interictal events with high certainty, outperforming recently introduced approaches, as measured by accuracy, precision, recall, and F1 score, the most widely used efficacy metrics in the associated literature, with all of them reaching around a 99% confidence value.

2. Related Works

Even though there is a substantial number of works that analyze the Bonn University dataset [11], a publicly available database of EEG signals, most of them consider assortments of all five subsets, or categories, contained in it. This section analyzes the state of the art in the subject by examining related works in the literature pursuing the goal of achieving the classification of the EEG signals in the Bonn database, specifically those corresponding to the ictal and interictal states, contained in the subsets D and E, respectively, which contain EEG registers taken from the epileptogenic zone. On this subject, in [6], subsets D and E are classified through SVMs, KNNs, and BNs, reaching an effectiveness between 88.60% and 91.16%, depending on the classification algorithm used. The results are obtained through two- and four-fold cross validations. The EEG signals are processed utilizing a radial-basis function to obtain parameters, such as the average, variance, standard deviation, root mean square (RMS), and power spectral density (PSD) mean, to achieve their classification.
In [12], the Bonn University database is classified using SVMs, KNNs, and a DT. The EEG signals are parameterized by applying the DWT and the empirical mode decomposition (EMD), achieving accuracy rates ranging from 60% to 100%. In [13], various algorithms, such as the SVM, ANN, LSTM, among others, were compared for classifying EEG signals in this dataset. In [14], the same database was classified through a convolutional neural network (CNN), which processed images generated via the Gramian angular field transform. This method involved applying the Gramian angular sum field and the Gramian angular difference field to the EEG signal and its corresponding instantaneous power. In [5], machine learning algorithms combined with a fuzzy classifier were employed, achieving over 90% accuracy in classifying the Bonn dataset. An LSTM network was proposed in [15] for the database classification, featuring four layers and trained using a 10-fold cross-validation procedure with 30 epochs and a batch size of 32. The ictal-net, introduced in [16], is a CNN designed to classify EEG signals segmented into 1.6 s windows, achieving a 93.7% accuracy rate. A novel approach for epilepsy detection using a variable-frequency complex demodulation (VFCDM) to obtain high-resolution time–frequency spectra (TFSs) and a CNN has recently emerged [17]. This method, differing from traditional techniques, like the continuous wavelet transform (CWT), has shown promising results in identifying various epilepsy states through the EEG analysis. However, given the excellent outcomes achieved in similar tasks with simpler CNN networks [18], it is a logical progression to explore the combination of the CWT with the CNN for further advancements in this field.
In light of the preceding discussion, the primary objective of this investigation is to conduct a classification of EEG signals derived from patients diagnosed with epilepsy. The aim is to accurately distinguish between ictal and interictal states, utilizing the database of Bonn University. To fulfill this objective, a novel neural network architecture, henceforth referred to as the parallel ictal-net (PIN), is meticulously designed and subsequently applied. The successful achievement of a highly reliable differentiation between ictal and interictal states is anticipated to facilitate the implementation of more efficacious treatment protocols for individuals afflicted by epilepsy, with the ultimate goal of significantly enhancing the quality of life for both the patients and their respective families.

3. Theoretical Background

3.1. Continuous Wavelet Transform

The continuous wavelet transform (CWT) is a mathematical approach employed to deconstruct a continuous or discrete signal into its constituent frequency components. This technique proves invaluable for the analysis of transient signals, as it provides detailed information regarding their variations in both the time and frequency domains. Consequently, this allows for the extraction of meaningful characteristics that remain elusive when other methods, such as the Fourier transform, are utilized [19].
The CWT is implemented for an analyzed signal by utilizing a mother-wavelet function as a basis. Each mother wavelet has specific features, such as waveform and temporal width, which are employed for determining the distinct frequency components in the signal [17]. Figure 1 depicts different mother-wavelet functions.
The signal analysis is performed by scaling the mother-wavelet function and shifting it in time so that it matches the analyzed signal. The resulting wavelet coefficients provide the frequency contents of the signal for each scale (wavelet width) and location (shift). A scalogram can be obtained from the wavelet coefficients, through further processing, to determine high-efficiency features and patterns associated with the analyzed signal.
A scalogram is a graphical representation of a signal's frequency content over time. It is usually employed in image and signal processing to analyze time and space variations [20,21]. Figure 2 illustrates the scalogram of an 868-sample ictal signal.
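The scalogram construction described above can be sketched with plain NumPy. This is a minimal illustration of a Morlet-based CWT, not the ssqueezepy implementation used later in the paper; the scale range and the toy 10 Hz signal are arbitrary choices for the example:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet mother wavelet, scaled and amplitude-normalized."""
    x = t / scale
    return (np.pi ** -0.25) * np.exp(1j * w0 * x) * np.exp(-x ** 2 / 2) / np.sqrt(scale)

def cwt_scalogram(signal, scales, fs):
    """|CWT| obtained by correlating the signal with scaled wavelets."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi = np.conj(morlet(t, s))[::-1]            # flip so convolve == correlate
        coeffs[i] = np.convolve(signal, psi, mode="same") / fs
    return np.abs(coeffs)                            # scalogram: one row per scale

# Toy signal: a 10 Hz sine sampled at 173.61 Hz, as in the Bonn recordings
fs = 173.61
t = np.arange(868) / fs
sig = np.sin(2 * np.pi * 10 * t)
scales = np.geomspace(0.01, 0.5, 32)                 # logarithmic scale axis
scalo = cwt_scalogram(sig, scales, fs)
print(scalo.shape)                                   # (32, 868)
```

The energy concentrates in the rows whose scale matches the 10 Hz component (for a Morlet with w0 = 6, roughly s = 6 / (2π · 10) ≈ 0.095), which is what a scalogram makes visible.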

3.2. Deep Learning and the CNN

Deep learning (DL) is a subarea of artificial intelligence (AI), which usually works with large datasets for statistical applications and predictive models. In recent years, there has been a broad number of areas where DL has shown its usefulness, including image classification [22], natural language processing [23], financial tasks [24], and the energy sector [25], among many others. The most outstanding aspect of DL is that it is an automatic self-learning AI technique, which has proven to be highly effective in solving complex problems. In this regard, an artificial neural network (ANN) is a computational model that consists of artificial neuron sets, which carry out sequential connections among themselves, grouped into structures known as layers. Usually, an ANN has an input layer, one or more in-between layers of neurons, denominated hidden layers, with activation functions that deliver an output.
Deep learning is based on the implementation of a deep ANN. CNNs are a category of ANN used in this field, which were proposed in 1998 by LeCun et al. [26]. CNNs are specially designed for image identification and classification purposes, where the initial layers learn about simple features, such as lines, curves, and contours to recognize the image complex details of the deepest layers.
CNNs can be described through five main operations [26]: (1) Convolution is used for identifying the patterns and features composing the analyzed information. This operation is performed utilizing kernels, which are small-sized matrices applied to the analyzed images for emphasizing some of their features. (2) Pooling is employed for reducing the dimensions of the matrices and vectors computed during the convolutional operation by removing some elements to simplify the information processing. The most used techniques for performing this task compute either the maximum value under the applied kernel or its average. (3) Activation is the stage where the non-linearity of the analysis is introduced through functions such as SoftMax, the rectified linear unit (ReLU), the sigmoid, and Gaussian error linear units (GELUs), among many others, which assist the CNNs with learning complex patterns. (4) Normalization improves the network training and its performance by adjusting all the values to follow the same distribution. The most used approach for carrying out this procedure is batch normalization. (5) Classification is the final stage of a CNN, where labels are assigned to input images according to the model learned by the network.
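The first three of these operations can be illustrated in a few lines of NumPy. The kernel below is a hypothetical horizontal-gradient filter, chosen only to show how convolution, activation, and pooling compose; it is not taken from the PIN:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Activation: introduces non-linearity."""
    return np.maximum(x, 0.0)

def max_pool(x, k=2):
    """Pooling: keep the maximum of each k x k block."""
    h, w = x.shape[0] // k * k, x.shape[1] // k * k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)       # toy "image"
edge = np.array([[-1., 0., 1.]] * 3)                 # horizontal-gradient kernel
feat = max_pool(relu(conv2d(img, edge)))
print(feat.shape)                                    # (2, 2)
```

On this toy image, whose values grow by 1 per column, every 3 × 3 window yields the same gradient response of 6, so the pooled feature map is constant.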

3.3. Efficient Channel Attention

An attention module is a supplementary layer that allows the ANN architecture to focus on specific information defined during processing; in other words, it grants the ANN the ability to pay attention to certain patterns without assigning fixed weights or handling all the data in the same manner, improving the efficiency and performance of the process [27]. At present, attention modules are widely used [28,29,30]; for instance, in [28], a self-attention module was introduced to support weighing the relevance of different input tokens during a data series analysis.
There are several attention mechanisms, for instance, soft attention, attention autoencoders, and non-local attention, among others. Each one has specific characteristics that make them suitable for different applications. Some areas where attention mechanisms are widely utilized are natural language processing, computer vision, and voice recognition [31,32,33]. In this work, after an extensive ablation study, an efficient channel attention module was employed because it provided the best performance results against a channel attention block, a pixel attention block, and not using an attention mechanism.
As stated before, CNNs consist of convolutional layers where filters are applied to the input data for obtaining multi-channel characteristic maps as outputs, with each layer providing a different feature from the processed data. Hence, efficient channel attention modules allow the convolutional architecture to focus on the most-relevant elements by adding an additional layer to the CNNs. Attention layers learn selectively utilizing a training function, which picks out or discards some channels in order to attain more effective predictions, reducing the computational cost, since irrelevant components for the required task are disregarded, allowing it to produce a more efficient convolutional architecture.
The efficient channel attention technique has been utilized for tasks, such as natural language processing, object detection, and image classification [34,35], to improve the efficiency and performance of the defined models. It can be implemented in different ways, for instance, the squeeze-and-excitation module described in [27], which consists of two main operations. The compression operation reduces the spatial dimensionality of the feature map, through global mean grouping, delivering a statistical vector for each channel, whereas the excitation process utilizes this statistical vector, a fully connected layer, and a sigmoid activation function to adjust the characteristics for the channel output as:
f_c = sigmoid(W2 · ReLU(W1 · y))
where y is the statistical vector obtained for each channel, W1 and W2 are learned weights, and f_c is the excitation operation output for channel c. The module outcome is obtained by multiplying the input feature map by the excitation output, f_c.
In a different scenario, the efficient channel attention module (ECA), described in [27], utilizes a CNN, instead of a fully connected layer, to learn the attention weights for each channel, which are generated through the excitation operation output function, to scale the feature map at the input by:
f_c = sigmoid(conv(y))
where conv is the convolutional layer.
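Both excitation variants can be sketched in NumPy. The weights and the fixed averaging kernel below are illustrative stand-ins for parameters that would be learned during training; the feature-map size and reduction ratio are also assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_attention(fmap, W1, W2):
    """Squeeze-and-excitation: f_c = sigmoid(W2 @ ReLU(W1 @ y))."""
    y = fmap.mean(axis=(0, 1))                   # squeeze: per-channel statistic, shape (C,)
    f = sigmoid(W2 @ np.maximum(W1 @ y, 0.0))    # excitation: per-channel gate in (0, 1)
    return fmap * f                              # rescale each channel

def eca_attention(fmap, k=3):
    """ECA: a 1D convolution across channels replaces the dense layers."""
    y = fmap.mean(axis=(0, 1))
    w = np.ones(k) / k                           # fixed kernel here; learned in practice
    f = sigmoid(np.convolve(y, w, mode="same"))
    return fmap * f

C = 8
fmap = rng.standard_normal((4, 4, C))            # toy H x W x C feature map
r = 2                                            # assumed SE reduction ratio
W1 = rng.standard_normal((C // r, C))
W2 = rng.standard_normal((C, C // r))
out_se = se_attention(fmap, W1, W2)
out_eca = eca_attention(fmap)
print(out_se.shape, out_eca.shape)               # (4, 4, 8) (4, 4, 8)
```

Because the gate is a sigmoid, each channel is attenuated by a factor in (0, 1): relevant channels pass nearly unchanged, while irrelevant ones are suppressed.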

3.4. The Bonn Database

The database utilized in this work was sourced from the University of Bonn [36]. It comprises five sets, labeled A to E, each containing 100 single-channel EEG recordings, each lasting 23.6 s. These signals were recorded at a sampling frequency of 173.61 Hz, using a pass-band filter ranging from 0.53 to 40 Hz.
The database batches were distributed as:
  • Group A: five healthy subjects in a relaxation state with eyes closed.
  • Group B: five healthy subjects in a relaxation state with eyes open.
  • Group C: five pathological subjects free from seizures, whose EEG registers were obtained from the hippocampus formation in the brain’s opposite hemisphere.
  • Group D: five pathological subjects free from seizures, whose EEG registers were obtained from the epileptogenic zone.
  • Group E: five pathological subjects with epileptic seizure activity.
The primary aim of the proposed parallel ictal-net (PIN) architecture is to differentiate between ictal and interictal EEG signals. For this purpose, only datasets D and E are utilized in this study, as they uniquely consist of recordings from the epileptogenic zone in the same hemisphere.
Figure 3 illustrates examples of EEG signals from the Bonn database, accompanied by their respective scalograms generated through the application of the continuous wavelet transform (CWT). It is crucial to highlight that discerning straightforward patterns within these scalograms is challenging, thereby necessitating the use of advanced techniques founded on artificial intelligence models to effectively interpret them. This figure underscores the complexity of the EEG data and the value of AI-driven analytical approaches in this domain.

4. Parallel Ictal-Net Integration

The parallel ictal-net (PIN) synthesis was performed considering six sets of variables: (a) the mother-wavelet function, where the generalized Morse wavelet (gmw), the Morlet, the bump, the complex Mexican hat (cmhat), and the Hilbert analytic function of the Hermitian hat (hhhat) were considered; (b) the sampling frequency (Fs) used to compute the scalograms, which was either set to 173.61 Hz or left undefined; (c) the number of samples in each window, which ranged from 174 to 2083; (d) the overlap between consecutive windows, which could be zero, one quarter, one third, or half a window; (e) the learning rate of the neural network training, which took a value of 0.01, 0.001, or 0.0001; and (f) the optimizer type, which could be Adam or stochastic gradient descent (SGD). All these specifications are shown in Table 1.
Table 1 shows a total of 28 variables, distributed across six adjustable hyperparameters, each integral for the accurate recognition of epileptic events. To ensure the robustness of our results, we employed the stratified 10-fold cross-validation technique to evaluate each combination of these hyperparameters systematically. The integration process of the parallel ictal-net (PIN) was structured as several key stages:
(a) Signal pre-processing: this initial stage involved segmenting the acquired EEG signal into windows. For each window, a scalogram was computed and normalized using the continuous wavelet transform (CWT), preparing the data for subsequent analyses.
(b) Pre-training: before the main training phase, the PIN underwent a pre-training stage. This step was crucial for fine-tuning the network’s parameters, ensuring that it was primed for an optimal performance during the main training process.
(c) PIN training: the core of the integration process, the training stage of the PIN, was multifaceted. It included transfer learning, which leveraged the pre-trained models to enhance the learning efficiency; block-coding, which systematically organized the network’s layers; and the actual training phase, where the network learnt to classify EEG signals into different epileptic states.
(d) PIN evaluation: the final stage was the comprehensive evaluation of the trained PIN. This phase assessed the network’s performance in accurately detecting and classifying epileptic events, ensuring its effectiveness and reliability as a diagnostic tool.

4.1. Signal Pre-Processing

The EEG signals in the database were already filtered; therefore, signal pre-processing consisted of resampling the signals according to the desired window length (i.e., number of samples) and the overlap percentage between consecutive windows. For instance, the pre-processed signals introduced to the PIN were arranged into 868-sample windows with a 33.3% overlap, delivering seven windows for each signal, which were treated utilizing the CWT implemented in Python through the ssqueezepy library [37] and executed on a graphics processing unit (GPU), drastically decreasing the computation time. There were two alternatives to perform the CWT calculation: in the first one, the sampling frequency was provided, and in the second one, it was not; however, the mother-wavelet function must always be specified.
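The windowing step can be sketched as follows. The 4342-sample signal length is an assumed post-resampling length, chosen here so that 868-sample windows with a 33.3% overlap yield exactly seven windows per signal, as described above:

```python
import numpy as np

def segment(signal, win_len, overlap):
    """Split a 1D signal into overlapping windows (hop = win_len * (1 - overlap))."""
    hop = int(round(win_len * (1.0 - overlap)))
    starts = range(0, len(signal) - win_len + 1, hop)
    return np.stack([signal[s:s + win_len] for s in starts])

# A raw Bonn recording spans 23.6 s x 173.61 Hz ≈ 4097 samples; after resampling,
# 7 windows require 6 hops of ~579 samples plus one full window: 6*579 + 868 = 4342.
sig = np.arange(4342, dtype=float)     # illustrative resampled signal
wins = segment(sig, win_len=868, overlap=1/3)
print(wins.shape)                      # (7, 868)
```

Each of the seven windows would then be passed through the CWT to produce one scalogram per window.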
Once the CWT coefficients, C i , were obtained, their corresponding absolute values were computed and normalized, C i _ N , as described in Equation (3), to estimate the scalogram, which was processed through the proposed PIN:
C_i_N = (|C_i| − C_min) / (C_max − C_min)
where C_min and C_max represent the minimum and maximum absolute values, respectively, of the computed CWT coefficients.
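A minimal sketch of this min–max normalization, applied to toy complex coefficients:

```python
import numpy as np

def normalize_scalogram(C):
    """Min-max normalize |CWT coefficients| to [0, 1], as in Equation (3)."""
    A = np.abs(C)
    return (A - A.min()) / (A.max() - A.min())

C = np.array([[3 + 4j, 0.0], [1.0, -2.0]])   # toy complex CWT coefficients
S = normalize_scalogram(C)
print(S.min(), S.max())                       # 0.0 1.0
```

The magnitudes here are [[5, 0], [1, 2]], so the normalized scalogram is [[1, 0], [0.2, 0.4]], ready to be fed to the network.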

4.2. Pre-Training

The pre-training stage was added to achieve a greater performance of the proposed architecture, before carrying out the transfer learning for every convolutional block. The pre-training stage consisted of splitting up the EEG signal into the n scalograms composing it; therefore, 180×n signals were obtained for training the pre-training block, since the overall network required 180 signals to be trained.
The pre-training block was composed of a 2D convolutional layer, an efficient channel attention module, a max-pooling layer, and two neurons with a SoftMax activation function at their outputs. This architecture, discussed thoroughly in Section 5.2, was identical to that of the convolutional blocks in the networks from the reviewed literature, which were used for comparison. The pre-training block was trained for 150 epochs; however, an early stopping callback was implemented to avoid overtraining the network.

4.3. Network Training

The main difference among the considered networks was the number of convolutional blocks building them, which depended on the number of scalograms, n, from the analyzed signal. Transfer learning was carried out first to assign the weights, obtained during the pre-training phase, to the layers of each convolutional block building the network; then, the network was trained by varying the optimization and learning rate values, keeping the number of epochs fixed to 150 and the batch size set to 32. An early stopping callback was implemented to reduce the training time, improve the network’s performance, and avoid overtraining. The early stopping callback ended the training process when a 100% accuracy was achieved in the training set.
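The early stopping rule described above can be sketched in plain Python. Here `train_one_epoch` is a hypothetical stand-in for the real Keras training step, and the simulated accuracy curve is arbitrary; the 150-epoch cap and the stop-at-100%-training-accuracy condition come from the text:

```python
def train_with_early_stopping(train_one_epoch, max_epochs=150, target_acc=1.0):
    """Run up to max_epochs but halt once training accuracy reaches the target."""
    history = []
    for epoch in range(max_epochs):
        acc = train_one_epoch(epoch)
        history.append(acc)
        if acc >= target_acc:          # early stopping callback condition
            break
    return history

# Simulated accuracy curve that reaches 100% on epoch 10
curve = lambda e: min(1.0, (80 + 2 * e) / 100)
hist = train_with_early_stopping(curve)
print(len(hist))                       # 11 -- stopped well before 150 epochs
```

In Keras this behavior would be implemented with a callback rather than a hand-written loop, but the control flow is the same.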

4.4. Validation

A 10-fold cross-validation was performed to verify each proposed network by computing the accuracy, precision, recall, and F1 score, which are the most utilized effectiveness criteria in the literature regarding the identification of ictal and interictal states from the EEG signals of epileptic patients. These performance parameters are computed by Equations (4), (5), (6), and (7), respectively:
accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 score = 2 × (precision × recall) / (precision + recall)
From (4) to (7), the true positives (TPs) are the ictal events correctly identified; false positives (FPs) correspond to the scalograms incorrectly classified as belonging to ictal episodes; true negatives (TNs) are the interictal windows properly identified; and false negatives (FNs) are the interictal episodes recognized as ictal events by the PIN. The reported value of each performance metric was obtained by computing the corresponding average over 10 distinct trials.
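Equations (4) to (7) can be computed directly from the confusion-matrix counts. The counts below are illustrative only, not the paper's results:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts (Eqs. 4-7)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts: 99 ictal windows caught, 1 missed each way
acc, prec, rec, f1 = classification_metrics(tp=99, tn=99, fp=1, fn=1)
print(acc, prec, rec, f1)   # all four evaluate to 0.99
```

With symmetric errors like these, all four metrics coincide; in general they diverge, which is why the paper reports all of them.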

4.5. Computer System

The proposed methodology was executed on four distinct computer systems. The first one consisted of four NVIDIA RTX 3090 GPUs and a RYZEN Threadripper 1920X processor with 128 GB of RAM. The second one was composed of an NVIDIA RTX 3080 GPU and a RYZEN-7 1700X processor with 48 GB of RAM, and the third one was composed of an NVIDIA RTX 3060 GPU and a RYZEN-5 5600G processor with 48 GB of RAM. Hence, there were seven GPUs, 142 GB of VRAM, and 256 GB of RAM in total. All computer systems ran the Ubuntu 22.04 operating system. The ssqueezepy library [37] was used for computing the scalograms through the CWT, whereas the Keras library [38] was utilized for designing, compiling, and training the neural network, utilizing the Python programming language.

5. Results

This section presents the results obtained from searching for the optimal hyperparameters of the proposed PIN, compiled to achieve the best performance in classifying between ictal and interictal events, as well as its architecture. The PIN input signal is then depicted, and the obtained results are thoroughly compared against those from recently proposed state-of-the-art approaches through the previously described performance metrics.

5.1. Hyperparameter Search Results

A comprehensive assessment of 2880 unique neural network configurations was conducted, taking into account the 28 variables listed in Table 1, which were distributed among the six tunable hyperparameters. This approach ensured a thorough optimization of the proposed PIN architecture. To analyze parameter tuning, a parallel coordinates graphic, as depicted in Figure 4, was utilized. This graphic aided in fine-tuning the parameters to achieve the best quantitative results in terms of the performance metrics commonly used in the related literature.
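The size of this search space follows from the Table 1 options. Since the individual window lengths are not listed in the text, twelve placeholder values spanning 174 to 2083 are assumed here so that the counts add up (5 + 2 + 12 + 4 + 3 + 2 = 28 options, whose product is 2880):

```python
from itertools import product

# Hyperparameter options from Table 1; the 12 window lengths are assumed,
# evenly spaced placeholders, since the text only gives the 174-2083 range.
wavelets = ["gmw", "morlet", "bump", "cmhat", "hhhat"]          # 5
sampling = [173.61, None]                                        # 2 (given or undefined)
windows = [174 + i * (2083 - 174) // 11 for i in range(12)]      # 12 (assumed spacing)
overlaps = [0, 1/4, 1/3, 1/2]                                    # 4
lrs = [0.01, 0.001, 0.0001]                                      # 3
optims = ["adam", "sgd"]                                         # 2

options = [wavelets, sampling, windows, overlaps, lrs, optims]
grid = list(product(*options))
print(sum(map(len, options)), len(grid))   # 28 2880
```

Each of the 2880 tuples in `grid` corresponds to one network configuration evaluated with stratified 10-fold cross-validation.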
Figure 4 shows the five different wavelet functions in the first coordinate, followed by the sampling frequency, which can or cannot be defined to compute the CWT; then, the number of samples on each window, the overlap percentage between consecutive windows, the learning rate at which the neural network is trained, the optimizer type, and the four performance metrics to record the efficiency of the 2880 tested architectures are presented. In Figure 4, the lines represent the hyperparameter combination through a heat map, which depends on the obtained performance metric values; the darker the color, the higher the obtained metrics, with 1 corresponding to 100%.
Of the 2880 tested network configurations, 135 surpassed the approaches reported in the state of the art. Their corresponding hyperparameter sequences are depicted in Figure 5 by means of zooming into Figure 4.
From Figure 5, it can be deduced that the networks with the best performance are those trained with the Adam optimizer at a learning rate of 0.0001. Furthermore, from the 135 configurations surpassing the architectures reported in the state of the art, 13 of them, which are shown in Table 2, achieve 100% accuracy, precision, recall, and F1 scores. This table shows all the tunable hyperparameter arrangements, with a 0.0001 learning rate, as well as the scalogram computing and processing times, the corresponding architecture training time, and the total number of parameters for each network configuration. The mean square error (L2) was used as the second selection criterion for these 13 configurations, and the number of parameters required by the network was the third one. Hence, the second configuration in Table 2, highlighted in bold, was selected as the best network composition.

5.2. PIN Architecture and Comparison

The network arrangement with the best performance was designated as parallel ictal-net (PIN), which was configured with the following hyperparameters: (a) the bump mother-wavelet function, (b) no assigned sampling frequency, (c) a window length of 868 samples, (d) a 33% overlap between consecutive windows, as shown in Figure 6, and (e) a learning rate of 0.0001. The scalogram computation took 26.04 s, the training time was 352.78 s for the 10 trials, and the L2 function value was 0.47.
Figure 6 illustrates that the input for the parallel ictal-net (PIN) consisted of seven scalograms, each with 868 samples analyzed across 233 scales, resulting in dimensions of 233 × 868 × 7. To manage the network’s computational load and prevent memory overflow, these inputs were resized to 250 × 250 × 7 using a bilinear interpolation, except for smaller windows. This resizing reduced the number of input parameters.
Each scalogram input into the PIN was processed through seven convolutional blocks. At the beginning of each block, a 2D layer with 10 filters applied a 3 × 3 mask, forwarding the processed data to a 2D efficient channel attention module. The data then passed through an ReLU activation function and a max-pooling layer with a 2 × 2 kernel.
After each convolutional block, a flattening layer adjusted the dimensions of the extracted features. The flattened outputs of the seven blocks were merged and passed through a dense layer with 432 neurons and a ReLU activation function, followed by a batch normalization stage, a dropout layer set at a 0.7 rate, and a final dense layer with two output neurons and a SoftMax activation function, which produces the ictal/interictal classification. Figure 7 presents a flowchart of the PIN architecture, offering a clear visual representation of this process.
Table 3 presents a comparative performance analysis of the proposed PIN against recently introduced state-of-the-art approaches for identifying ictal and interictal states, using four performance metrics as benchmarks: accuracy, precision, recall, and F1 score. It is worth noting that not all methods in the literature report the F1 score. The table shows that the proposed PIN surpasses previous state-of-the-art proposals by at least 1.14% across all analyzed performance metrics.
In Figure 8, the confusion matrix for the classification of subsets D and E in the University of Bonn database is presented. These values were utilized to derive the metrics reported in Table 3.
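The metrics in Table 3 follow directly from such a 2 × 2 confusion matrix. The counts below are hypothetical placeholders, not the actual Figure 8 values:

```python
import numpy as np

# Deriving accuracy, precision, recall, and F1 from a binary confusion
# matrix; the counts are illustrative, not the published Figure 8 values.
cm = np.array([[99, 1],    # row 0: true ictal      -> (TP, FN)
               [1, 99]])   # row 1: true interictal -> (FP, TN)
tp, fn, fp, tn = cm[0, 0], cm[0, 1], cm[1, 0], cm[1, 1]
accuracy  = (tp + tn) / cm.sum()
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
```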

6. Conclusions

Epilepsy is a neurological disorder that not only restricts those who suffer from it but also affects their families, making them targets of social discrimination. A reliable tool for the timely diagnosis of epilepsy can help mitigate its social and emotional impacts on patients. Electroencephalography (EEG) signals provide physiological indicators that are crucial for diagnosing, treating, and monitoring epilepsy. Consequently, several works have analyzed the publicly available Bonn University database; however, the reliable classification of ictal and interictal states from EEG signals recorded in the epileptogenic zone remains an open issue.

In this work, we performed a thorough analysis of various neural network configurations for classifying ictal and interictal events. Our results show that 135 out of 2880 different CNN arrangements surpass recently proposed approaches in the reviewed literature, and that 13 of these 135 structures reach approximately 99% reliability in detecting ictal and interictal stages in terms of accuracy, precision, recall, and F1 scores. The architecture identified in Table 2 as parallel ictal-net (PIN) was selected for its low mean square error (L2) and the small number of parameters required to configure it. The PIN leverages efficient channel attention modules within its parallel convolutional blocks, analyzing EEG signal scalograms derived via the continuous wavelet transform (CWT), implemented using the ssqueezepy library in Python. This approach not only yielded highly accurate results but also optimized the processing speed, paving the way for potential online applications in epilepsy diagnosis and monitoring.

Author Contributions

Conceptualization, G.H.-N. and S.S.-C.; methodology, G.H.-N.; software, G.H.-N. and S.S.-C.; validation, G.H.-N. and S.S.-C.; formal analysis, G.H.-N.; investigation, G.H.-N.; resources, S.S.-C., E.C.-Y. and J.-M.R.-A.; writing—original draft preparation, G.H.-N.; writing—review and editing, E.C.-Y. and J.-M.R.-A.; visualization, G.H.-N.; supervision, S.S.-C.; project administration, S.S.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Council of Humanities Science and Technology (CONAHCyT) under the master’s fellowship program, with support number 798559.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: https://www.ukbonn.de/epileptologie/arbeitsgruppen/ag-lehnertz-neurophysik/downloads/.

Acknowledgments

The authors express their sincere gratitude to the Faculty of Engineering at the Autonomous University of Queretaro and to the Centro de Investigaciones en Óptica for their valuable support in conducting this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fisher, R.S.; Acevedo, C.; Arzimanoglou, A.; Bogacz, A.; Cross, J.H.; Elger, C.E.; Engel, J., Jr.; Forsgren, L.; French, J.A.; Glynn, M.; et al. ILAE Official Report: A Practical Clinical Definition of Epilepsy. Epilepsia 2014, 55, 475–482. [Google Scholar] [CrossRef] [PubMed]
  2. Shoeibi, A.; Khodatars, M.; Ghassemi, N.; Jafari, M.; Moridian, P.; Alizadehsani, R.; Panahiazar, M.; Khozeimeh, F.; Zare, A.; Hosseini-Nejad, H.; et al. Epileptic Seizures Detection Using Deep Learning Techniques: A Review. Int. J. Environ. Res. Public Health 2021, 18, 5780. [Google Scholar] [CrossRef]
  3. Chou, C.-H.; Shen, T.-W.; Tung, H.; Hsieh, P.F.; Kuo, C.-E.; Chen, T.-M.; Yang, C.-W. Convolutional Neural Network-Based Fast Seizure Detection from Video Electroencephalograms. Biomed. Signal Process. Control 2023, 80, 104380. [Google Scholar] [CrossRef]
  4. Rafid Ahmad, S.R.; Sayeed, S.M.; Ahmed, Z.; Siddique, N.M.; Parvez, M.Z. Prediction of Epileptic Seizures Using Support Vector Machine and Regularization. In Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; pp. 1217–1220. [Google Scholar]
  5. Aayesha; Qureshi, M.B.; Afzaal, M.; Qureshi, M.S.; Fayaz, M. Machine Learning-Based EEG Signals Classification Model for Epileptic Seizure Detection. Multimed. Tools Appl. 2021, 80, 17849–17877. [Google Scholar] [CrossRef]
  6. Kumari, R.S.S.; Abirami, R. Automatic Detection and Classification of Epileptic Seizure Using Radial Basis Function and Power Spectral Density. In Proceedings of the 2019 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 21–23 March 2019; pp. 6–9. [Google Scholar]
  7. Xu, X.; Lin, M.; Xu, T. Epilepsy Seizures Prediction Based on Nonlinear Features of EEG Signal and Gradient Boosting Decision Tree. Int. J. Environ. Res. Public Health 2022, 19, 11326. [Google Scholar] [CrossRef] [PubMed]
  8. Daoud, H.; Bayoumi, M.A. Efficient Epileptic Seizure Prediction Based on Deep Learning. IEEE Trans. Biomed. Circuits Syst. 2019, 13, 804–813. [Google Scholar] [CrossRef]
  9. Muhammad Usman, S.; Khalid, S.; Bashir, S. A Deep Learning Based Ensemble Learning Method for Epileptic Seizure Prediction. Comput. Biol. Med. 2021, 136, 104710. [Google Scholar] [CrossRef]
  10. Dissanayake, T.; Fernando, T.; Denman, S.; Sridharan, S.; Fookes, C. Deep Learning for Patient-Independent Epileptic Seizure Prediction Using Scalp EEG Signals. IEEE Sens. J. 2021, 21, 9377–9388. [Google Scholar] [CrossRef]
  11. Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of Nonlinear Deterministic and Finite-Dimensional Structures in Time Series of Brain Electrical Activity: Dependence on Recording Region and Brain State. Phys. Rev. E 2001, 64, 061907. [Google Scholar] [CrossRef]
  12. Bekbalanova, M.; Zhunis, A.; Duisebekov, Z. Epileptic Seizure Prediction in EEG Signals Using EMD and DWT. In Proceedings of the 2019 15th International Conference on Electronics, Computer and Computation (ICECCO), Abuja, Nigeria, 10–12 December 2019; pp. 1–4. [Google Scholar]
  13. Nagabushanam, P.; Thomas George, S.; Radha, S. EEG Signal Classification Using LSTM and Improved Neural Network Algorithms. Soft Comput. 2020, 24, 9981–10003. [Google Scholar] [CrossRef]
  14. Shankar, A.; Khaing, H.K.; Dandapat, S.; Barma, S. Epileptic Seizure Classification Based on Gramian Angular Field Transformation and Deep Learning. In Proceedings of the 2020 IEEE Applied Signal Processing Conference (ASPCON), Kolkata, India, 7–9 October 2020; pp. 147–151. [Google Scholar]
  15. Shekokar, K.; Dour, S.; Ahmad, G. Epileptic Seizure Classification Using LSTM. In Proceedings of the 2021 8th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 26–27 August 2021; pp. 591–594. [Google Scholar]
  16. Hernández-Nava, G.; Salazar-Colores, S.; Ortiz-Echeverri, C.J.; López-Leyva, S. Ictal-Net: Un Diseño de CNN Para La Clasificación de Escalogramas de Electroencefalogramas Con Crisis Convulsivas. In Diseño y Planeación Mecatrónica; Ramos Arreguín, J.M., Salazar Colores, S., Cabal Yepez, E., Vargas Soto, J.E., Eds.; Asociación Mexicana de Mecatrónica A.C: Queretaro, Mexico, 2022; pp. 27–38. ISBN 978-607-9394-25-7. [Google Scholar]
  17. Veeranki, Y.R.; McNaboe, R.; Posada-Quintero, H.F. EEG-Based Seizure Detection Using Variable-Frequency Complex Demodulation and Convolutional Neural Networks. Signals 2023, 4, 816–835. [Google Scholar] [CrossRef]
  18. Ortiz-Echeverri, C.J.; Salazar-Colores, S.; Rodríguez-Reséndiz, J.; Gómez-Loenzo, R.A. A New Approach for Motor Imagery Classification Based on Sorted Blind Source Separation, Continuous Wavelet Transform, and Convolutional Neural Network. Sensors 2019, 19, 4541. [Google Scholar] [CrossRef] [PubMed]
  19. Stéphane, M. CHAPTER 7—Wavelet Bases. In A Wavelet Tour of Signal Processing, 3rd ed.; Stéphane, M., Ed.; Academic Press: Boston, MA, USA, 2009; pp. 263–376. ISBN 978-0-12-374370-1. [Google Scholar]
  20. Mashrur, F.R.; Islam, M.S.; Saha, D.K.; Islam, S.M.R.; Moni, M.A. SCNN: Scalogram-Based Convolutional Neural Network to Detect Obstructive Sleep Apnea Using Single-Lead Electrocardiogram Signals. Comput. Biol. Med. 2021, 134, 104532. [Google Scholar] [CrossRef] [PubMed]
  21. Türk, Ö.; Özerdem, M.S. Epilepsy Detection by Using Scalogram Based Convolutional Neural Network from EEG Signals. Brain Sci. 2019, 9, 115. [Google Scholar] [CrossRef] [PubMed]
  22. Nkemelu, D.K.; Omeiza, D.; Lubalo, N. Deep Convolutional Neural Network for Plant Seedlings Classification. arXiv 2018, arXiv:1811.08404. [Google Scholar]
  23. Widiastuti, N.I. Convolution Neural Network for Text Mining and Natural Language Processing. IOP Conf. Ser. Mater. Sci. Eng. 2019, 662, 52010. [Google Scholar] [CrossRef]
  24. Tuo, S.; Chen, T.; He, H.; Feng, Z.; Zhu, Y.; Liu, F.; Li, C. A Regional Industrial Economic Forecasting Model Based on a Deep Convolutional Neural Network and Big Data. Sustainability 2021, 13, 12789. [Google Scholar] [CrossRef]
  25. Geng, Z.; Zhang, Y.; Li, C.; Han, Y.; Cui, Y.; Yu, B. Energy Optimization and Prediction Modeling of Petrochemical Industries: An Improved Convolutional Neural Network Based on Cross-Feature. Energy 2020, 194, 116851. [Google Scholar] [CrossRef]
  26. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  27. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11531–11539. [Google Scholar]
  28. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
  29. Luong, T.; Pham, H.; Manning, C.D. Effective Approaches to Attention-Based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 1412–1421. [Google Scholar]
  30. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-Local Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
  31. Tursunov, A.; Mustaqeem; Choeh, J.Y.; Kwon, S. Age and Gender Recognition Using a Convolutional Neural Network with a Specially Designed Multi-Attention Module through Speech Spectrograms. Sensors 2021, 21, 5892. [Google Scholar] [CrossRef] [PubMed]
  32. Guo, M.-H.; Xu, T.-X.; Liu, J.-J.; Liu, Z.-N.; Jiang, P.-T.; Mu, T.-J.; Zhang, S.-H.; Martin, R.R.; Cheng, M.-M.; Hu, S.-M. Attention Mechanisms in Computer Vision: A Survey. Comput. Vis. Media 2022, 8, 331–368. [Google Scholar] [CrossRef]
  33. Gupta, A.; Arunachalam, S.; Balakrishnan, R. Deep Self-Attention Network for Facial Emotion Recognition. Procedia Comput. Sci. 2020, 171, 1527–1534. [Google Scholar] [CrossRef]
  34. Shu, X.; Chang, F.; Zhang, X.; Shao, C.; Yang, X. ECAU-Net: Efficient Channel Attention U-Net for Fetal Ultrasound Cerebellum Segmentation. Biomed. Signal Process. Control 2022, 75, 103528. [Google Scholar] [CrossRef]
  35. Qin, Z.; Zhang, P.; Wu, F.; Li, X. FcaNet: Frequency Channel Attention Networks. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 763–772. [Google Scholar]
  36. Rai, D.; Kerr, M.P.; McManus, S.; Jordanova, V.; Lewis, G.; Brugha, T.S. Epilepsy and Psychiatric Comorbidity: A Nationally Representative Population-Based Study. Epilepsia 2012, 53, 1095–1103. [Google Scholar] [CrossRef]
  37. Muradeli, J. Ssqueezepy, 2020. GitHub Repository. Available online: https://github.com/OverLordGoldDragon/ssqueezepy/ (accessed on 27 December 2023).
  38. Chollet, F. Deep Learning with Python; Manning Publications Co.: Greenwich, CT, USA, 2017. [Google Scholar]
Figure 1. Mother wavelet forms: (a) bump, (b) complex Mexican hat, (c) generalized Morse wavelet, (d) Hilbert analytic function of Hermitian hat, (e) Morlet.
Figure 2. Ictal signal scalogram using the bump mother-wavelet function.
Figure 3. EEG Signals and corresponding scalograms using the bump wavelet. (a) EEG signal during an ictal state; (b) scalogram derived from the ictal signal; (c) EEG signal during an interictal state; (d) scalogram generated from the interictal signal.
Figure 4. Parallel coordinate plot displaying the tracking of parameters employed in the network design throughout the experimentation process.
Figure 5. Close-up view of the parallel coordinate plot for the best-performing architectures. Network configurations that cross the pink line are those considered to have the best performance.
Figure 6. Original signal sampling and its transformation into scalograms as inputs for the PIN.
Figure 7. Architecture of the parallel ictal-net model.
Figure 8. Confusion matrix for the classification of subsets D and E using the PIN architecture.
Table 1. Hyperparameters.
| Hyperparameter | Values |
| Wavelet | gmw, morlet, bump, cmhat, hhhat |
| Fs [Hz] | None, 173.61 |
| Samples per window | 174, 347, 521, 694, 868, 1042, 1215, 1389, 1562, 1736, 1910, 2083 |
| Percent of overlap | 0, 0.25, 0.33, 0.50 |
| Learning rate | 0.01, 0.001, 0.0001 |
| Optimizer | Adam, SGD * |
* SGD: stochastic gradient descent.
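Multiplying out the value lists in Table 1 gives the full search space; the 2880 combinations match the number of CNN arrangements explored in the experiments:

```python
from itertools import product

# The Table 1 hyperparameter grid multiplies out to the 2880 network
# configurations examined in the experimentation.
wavelets   = ["gmw", "morlet", "bump", "cmhat", "hhhat"]
fs         = [None, 173.61]
samples    = [174, 347, 521, 694, 868, 1042, 1215, 1389, 1562, 1736, 1910, 2083]
overlaps   = [0, 0.25, 0.33, 0.50]
lrs        = [0.01, 0.001, 0.0001]
optimizers = ["Adam", "SGD"]
grid = list(product(wavelets, fs, samples, overlaps, lrs, optimizers))  # 5*2*12*4*3*2
```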
Table 2. Best network configurations resulting from the experimentation, from which the PIN was selected.
| Wavelet | Fs [Hz] | Samples per Window | Percent of Overlap [%] | Time Scalograms [s] | Time Training [s] | L2 | Number of Parameters |
| gmw | None | 1042 | 33 | 12.67 | 236.90 | 0.47 | 7,309,640 |
| bump | None | 868 | 33 | 26.04 | 382.78 | 0.47 | 6,981,109 |
| morlet | 173.61 | 1042 | 25 | 11.59 | 265.73 | 0.49 | 6,143,871 |
| cmhat | 173.61 | 1042 | 33 | 24.23 | 515.29 | 0.50 | 8,994,440 |
| bump | None | 1042 | 0 | 14.65 | 322.52 | 0.51 | 4,790,902 |
| bump | 173.61 | 1215 | 0 | 14.93 | 345.51 | 0.51 | 5,582,102 |
| bump | 173.61 | 1736 | 25 | 11.27 | 285.61 | 0.51 | 6,196,313 |
| cmhat | 173.61 | 1389 | 50 | 19.48 | 427.03 | 0.52 | 9,987,071 |
| morlet | 173.61 | 1736 | 0 | 7.62 | 264.83 | 0.52 | 6,352,373 |
| bump | None | 1389 | 0 | 11.02 | 284.53 | 0.53 | 4,787,633 |
| cmhat | None | 1389 | 25 | 16.60 | 391.85 | 0.53 | 7,990,262 |
| gmw | None | 2083 | 25 | 6.01 | 189.41 | 0.53 | 7,556,333 |
| gmw | 173.61 | 2083 | 33 | 6.01 | 181.66 | 0.54 | 7,556,333 |
Values in bold indicate the variables of the best-scoring network.
Table 3. Comparison of the metrics obtained with the PIN against the best architectures found in the state of the art that perform the classification of subsets D and E from the University of Bonn dataset.
| Classifier | Input | Transform | Validation Type | Accuracy [%] | Precision [%] | Recall [%] | F1 Score [%] |
| Parallel ictal-net | Scalograms | CWT | 10-fold cross-validation | 98.99 | 99.09 | 99.00 | 98.99 |
| Parallel ictal-net | Scalograms | CWT | LOSO | 99.49 | 99.54 | 99.50 | 99.49 |
| Parallel ictal-net | Scalograms | CWT | 70% training, 30% testing | 97.99 | 98.18 | 98.00 | 97.99 |
| Ictal-net [16] | Scalograms | CWT | 60% training, 20% validating, 20% testing | 93.90 | 95.47 | 92.19 | 93.80 |
| RF [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 97.04 | 97.00 | 97.73 | - |
| SVM [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 93.52 | 93.50 | 92.04 | - |
| KNN [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 97.96 | 98.00 | 97.35 | - |
| DT [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 96.67 | 96.70 | 97.73 | - |
| MLP [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 98.15 | 98.20 | 96.59 | - |
| FURIA [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 96.30 | 96.30 | 97.73 | - |
| FLR [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 93.15 | 93.30 | 96.21 | - |
| FRNN [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 98.70 | 98.70 | 98.86 | - |
| VQNN [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 97.59 | 97.60 | 97.73 | - |
| FNN [5] | Min. amp., mean amp., SD, PSD, max PSD, mean PSD, and var. of PSD | DWT | 70% training, 30% testing | 96.85 | 96.90 | 98.84 | - |
| CNN [19] | Scalograms | CWT | 10-fold cross-validation | 98.50 | - | 98.98 | 98.50 |
| LSTM [15] | - | - | 10-fold cross-validation | 95.00 | - | 91.00 | - |
| SVM [6] | Mean, variance, SD, RMS, mean PSD | RBF PSD | 2-fold cross-validation | 88.60 | - | 94.47 | - |
| SVM [6] | Mean, variance, SD, RMS, mean PSD | RBF PSD | 4-fold cross-validation | 88.80 | - | 94.60 | - |
| KNN [6] | Mean, variance, SD, RMS, mean PSD | RBF PSD | 2-fold cross-validation | 89.23 | - | 89.37 | - |
| KNN [6] | Mean, variance, SD, RMS, mean PSD | RBF PSD | 4-fold cross-validation | 88.86 | - | 89.96 | - |
| BN [6] | Mean, variance, SD, RMS, mean PSD | RBF PSD | 2-fold cross-validation | 91.16 | - | 91.37 | - |
| BN [6] | Mean, variance, SD, RMS, mean PSD | RBF PSD | 4-fold cross-validation | 90.39 | - | 90.69 | - |
| SVM [12] | Mean, variance, skewness, and kurtosis | EMD | 60% training, 40% testing | 96.25 | - | - | - |
| SVM [12] | Coefficients A and D | DWT | 60% training, 40% testing | 96.25 | - | - | - |
| KNN [12] | Mean, variance, skewness, and kurtosis | EMD | 60% training, 40% testing | 93.75 | - | - | - |
| KNN [12] | Coefficients A and D | DWT | 60% training, 40% testing | 95.00 | - | - | - |
| DT [12] | Mean, variance, skewness, and kurtosis | EMD | 60% training, 40% testing | 97.50 | - | - | - |
| DT [12] | Coefficients A and D | DWT | 60% training, 40% testing | 98.75 | - | - | - |
| CNN FE [14] | GASF | GAF | 80% training, 20% testing | 94.50 | - | 99.00 | - |
| CNN FE [14] | GADF | GAF | 80% training, 20% testing | 94.00 | - | 99.00 | - |
| CNN IP [14] | GASF | GAF | 80% training, 20% testing | 96.50 | - | 97.00 | - |
| CNN IP [14] | GADF | GAF | 80% training, 20% testing | 97.00 | - | 97.00 | - |
The shaded cells highlight the values that rate the performance of the proposed architecture. The symbol ‘-’ indicates that no information is available. SD: standard deviation; PSD: power spectral density; EMD: empirical mode decomposition; LOSO: leave one subject out.

Share and Cite

MDPI and ACS Style

Hernández-Nava, G.; Salazar-Colores, S.; Cabal-Yepez, E.; Ramos-Arreguín, J.-M. Parallel Ictal-Net, a Parallel CNN Architecture with Efficient Channel Attention for Seizure Detection. Sensors 2024, 24, 716. https://0-doi-org.brum.beds.ac.uk/10.3390/s24030716

