
October 13, 2020

Cross-Subject EEG-Based Emotion Recognition through Neural Networks with Stratified Normalization

By Javier Fernández

How to best classify a person's emotions from their brainwaves remains a challenging problem in affective computing, largely due to the poor homogeneity of EEG signals across participants. To overcome this high variability in brain patterns, proven methods typically require training participant-dependent models, which depend on tedious calibration phases. Conceivably, a better approach would be to create a model that generalizes over multiple individuals, thus removing the time-consuming calibration session. However, state-of-the-art techniques still fail to separate the relevant emotion information from other factors, and the classification accuracy of general models remains lower than that of participant-dependent ones.

Over the past months, we have studied and evaluated a new participant-based feature normalization method, stratified normalization, for training deep neural networks in the task of cross-subject emotion classification from EEG signals. We applied our new method to the SJTU Emotion EEG Dataset (SEED) [1], [2], which contains 62-channel EEG data collected from 15 participants, each of whom completed three sessions watching the same 15 film clips. Each film clip was assigned an emotional label obtained by averaging the ratings of 20 participants, who were asked to indicate one keyword (positive, neutral, or negative) after watching it.

To deepen the evaluation, we compared our method with standard batch normalization. Both methodologies are illustrated in Figure 1.

Figure 1. On the left, the batch normalization method, where the data is normalized per feature, independently of the participant and session. On the right, our method, or so-called stratified normalization method, where the data is normalized per feature, participant, and session.
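The difference between the two schemes can be sketched in a few lines of NumPy. This is an illustrative toy example rather than the paper's implementation: the array shapes, the random feature values, and the variable names are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG feature tensor: (participants, sessions, trials, features).
# Shapes and values are illustrative, not taken from the SEED dataset.
X = rng.normal(size=(3, 2, 10, 5))

# Batch normalization at the input level: one mean/std per feature,
# computed over all participants and sessions pooled together.
flat = X.reshape(-1, X.shape[-1])
batch_norm = (flat - flat.mean(axis=0)) / flat.std(axis=0)

# Stratified normalization: one mean/std per feature, computed
# separately within each (participant, session) stratum.
strat_norm = np.empty_like(X)
for p in range(X.shape[0]):
    for s in range(X.shape[1]):
        stratum = X[p, s]  # (trials, features) for one participant/session
        strat_norm[p, s] = (stratum - stratum.mean(axis=0)) / stratum.std(axis=0)

# Each feature now has (near-)zero mean and unit variance within every
# participant/session block, removing participant-specific offsets.
```

The key design difference is the grouping: batch normalization shares one set of statistics across everyone, so a participant whose signals sit at a different baseline stays offset after normalization, while stratified normalization removes that per-participant, per-session baseline before training.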

For a clearer visualization of the classifier's performance, we ran the dimensionality reduction tool UMAP [3] to project the data at the input and output of the neural network into two dimensions. While Figure 2 shows the embedding of the data at the input layer of the neural network for the last five participants of the dataset, Figure 3 plots the embedding at the output layer for the same five participants.

Figure 2. Embedding of the neural network's input for the three emotion categories.
Figure 3. Embedding of the neural network's output for the three emotion categories.

Compared to standard batch normalization, the new method efficiently suppresses the part of the signal that is specific to each participant, their brain signature if you like, to better extract the emotion information from the brain signals. Results show that our method outperforms state-of-the-art methods for the binary classification of emotions (positive versus negative), with a cross-validation score of 91.6%.

These results indicate the high applicability of stratified normalization for cross-subject emotion recognition tasks, suggesting that this method could be applied to other applications that require domain adaptation algorithms.

Team

Javier Fdez, Nicholas Guttenberg, Olaf Witkowski, and Antoine Pasquali

References:

[1] R. N. Duan, J. Y. Zhu, and B. L. Lu, “Differential entropy feature for EEG-based emotion classification”, International IEEE/EMBS Conference on Neural Engineering, pp. 81-84, 2013.

[2] W. L. Zheng and B. L. Lu, “Investigating critical frequency bands and channels for EEG-Based emotion recognition with deep neural networks”, IEEE Transactions on Autonomous Mental Development, vol. 7, pp. 162-175, 2015.

[3] L. McInnes, J. Healy, and J. Melville, “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction”, 2018.