# TSception
This is the PyTorch implementation of TSception from our paper:
*Yi Ding, Neethu Robinson, Qiuhao Zeng, Dou Chen, Aung Aung Phyo Wai, Tih-Shih Lee, Cuntai Guan, "TSception: A Deep Learning Framework for Emotion Detection Using EEG", in IJCNN 2020, WCCI'20*, available on [arXiv](https://arxiv.org/abs/2004.02965) and [IEEE Xplore](https://ieeexplore.ieee.org/document/9206750).
It is an end-to-end deep learning framework that performs classification directly on raw EEG signals.
A [journal version](https://arxiv.org/abs/2104.02935) of TSception evaluated on the DEAP dataset is available in this [repository](https://github.com/yi-ding-cs/TSception).
# Requirement
```
python >= 3.6
torch >= 1.2.0
numpy == 1.16.4
h5py == 2.9.0
pathlib
```
# Run the code
Please save the data into a folder and set the path to the data in 'PrepareData.py':
> python PrepareData.py
After running the script above, a file named 'data_split.hdf' will be generated in the same location as the script. Please set the location of 'data_split.hdf' in 'Train.py' before running it:
> python Train.py
# Acknowledgment
This code was double-checked by Qiuhao Zeng and Ravikiran Mane.
# EEG data
Unlike images, EEG data can be treated as a 2D time series whose dimensions are channels (EEG electrodes) and time (Fig.1). The channels here are EEG electrodes rather than the RGB channels of an image or the input/output channels of convolutional layers. Because the electrodes are located at different positions on the surface of the head, the channel dimension carries the spatial information of the EEG, while the time dimension carries the temporal information. To train a classifier, the EEG signal is split into shorter time segments by a sliding window with a certain overlap along the time dimension. Each segment becomes one input sample for the classifier.
<p align="center">
<img src="https://user-images.githubusercontent.com/58539144/74715094-ca284500-5266-11ea-9919-9e742e72e37d.png" width=600 align=center>
</p>
<p align="center">
Fig.1 EEG data. The height is the channel dimension and the width is the time dimension.
</p>
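The sliding-window segmentation described above can be sketched as follows. This is an illustrative helper, not the repository's `PrepareData.py`; the window length, overlap fraction, and sampling rate in the example are assumptions for demonstration.

```python
import numpy as np

def segment_eeg(eeg, window_len, overlap):
    """Split a (channels, time) EEG recording into overlapping segments.

    eeg        : array of shape (channels, time_points)
    window_len : number of samples per segment
    overlap    : fraction of overlap between consecutive windows (0 <= overlap < 1)

    Returns an array of shape (n_segments, channels, window_len).
    """
    step = int(window_len * (1 - overlap))
    starts = range(0, eeg.shape[1] - window_len + 1, step)
    return np.stack([eeg[:, s:s + window_len] for s in starts])

# Example: 4 channels, 2048 samples, 1024-sample windows with 50% overlap.
eeg = np.random.randn(4, 2048)
segments = segment_eeg(eeg, window_len=1024, overlap=0.5)
print(segments.shape)  # (3, 4, 1024): windows start at samples 0, 512, 1024
```

Each row of the returned array is one training sample of shape (channels, window_len), matching the segment shape used later in the network.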
# Data to use
Data from two subjects are available for researchers to run the code. Please find the data in the folder named 'data' in this repo. The data were cleaned with a band-pass filter (0.3-45 Hz) and [ICA (MNE)](https://mne.tools/stable/auto_tutorials/preprocessing/plot_40_artifact_correction_ica.html). The files are in '.hdf' format. To load the data, please use:
> dataset = h5py.File('NAME.hdf','r')
After loading, the keys are 'data' for the data and 'label' for the label. The dimension of the data is (trials x channels x data points); the dimension of the label is (trials x data points). To access the data and label, please use:
> data = dataset['data']
> label = dataset['label']
The visualizations of the two subjects' data are shown in Fig.3:
<p align="center">
<img src="https://user-images.githubusercontent.com/58539144/86339561-51aaa980-bc86-11ea-9cf0-c44ffadd1c3e.png" width=800 align=center>
</p>
<p align="center">
Fig.3 Visualizations of the two subjects' data. Amplitudes are in uV.
</p>
# Structure of TSception
TSception can be divided into 3 main parts: the temporal learner, the spatial learner, and the classifier (Fig.2). The input is fed into the temporal learner first, followed by the spatial learner. Finally, the feature vector is passed through 2 fully connected layers to map it to the corresponding label. The dimension of an input EEG segment is (channels x 1 x time_points_per_segment); in our case it is (4 x 1 x 1024), since each segment has 4 channels and 1024 data points per channel. There are 9 kernels for each type of temporal kernel in the temporal learner and 6 kernels for each type of spatial kernel in the spatial learner. The multi-scale temporal convolutional kernels operate on the input data in parallel. After each convolution operation, ReLU and average pooling are applied to the feature. The outputs of the temporal kernels at each scale are concatenated along the feature dimension, after which batch normalization is applied. In the spatial learner, a global kernel and a hemisphere kernel are used to extract spatial information. Specifically, the outputs of the two spatial kernels are concatenated along the channel dimension after ReLU and average pooling. The flattened feature map is fed into a fully connected layer. After the dropout layer and the softmax activation function, the classification result is generated. For more details, please see the comments in the code and our paper.
<p align="center">
<img src="https://user-images.githubusercontent.com/58539144/74716976-80415e00-526a-11ea-9433-02ab2b753f6b.PNG" width=800 align=center>
</p>
<p align="center">
Fig.2 TSception structure
</p>
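The multi-scale temporal learner described above can be sketched in PyTorch as three parallel convolution branches. This is a minimal illustration, not the authors' exact implementation (see `Models.py` for that): the kernel-length ratios 0.5/0.25/0.125 of the sampling rate, the pooling size, and the (batch, 1, channels, time) input layout are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class TemporalLearner(nn.Module):
    """Sketch of a multi-scale temporal learner in the style of TSception.

    Three parallel branches convolve the raw segment with 1-D kernels of
    different lengths (tied to the sampling rate fs), each followed by ReLU
    and average pooling; the branch outputs are concatenated along the
    feature (time) dimension and batch-normalized.
    """
    def __init__(self, n_kernels=9, fs=256, pool=8):
        super().__init__()
        self.branches = nn.ModuleList()
        for ratio in (0.5, 0.25, 0.125):  # illustrative kernel-length ratios
            k = int(fs * ratio)
            self.branches.append(nn.Sequential(
                nn.Conv2d(1, n_kernels, kernel_size=(1, k)),
                nn.ReLU(),
                nn.AvgPool2d(kernel_size=(1, pool), stride=(1, pool)),
            ))
        self.bn = nn.BatchNorm2d(n_kernels)

    def forward(self, x):
        # x: (batch, 1, EEG channels, time), e.g. (B, 1, 4, 1024)
        out = torch.cat([branch(x) for branch in self.branches], dim=-1)
        return self.bn(out)

x = torch.randn(2, 1, 4, 1024)       # two segments of 4 channels x 1024 samples
y = TemporalLearner()(x)
print(y.shape)                        # torch.Size([2, 9, 4, 356])
```

Note how the (1, k) kernels slide only along time, leaving the EEG-channel dimension untouched for the spatial learner that follows.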
# Cite
Please cite our paper if you use our code in your own work:
```
@INPROCEEDINGS{9206750,
author={Y. {Ding} and N. {Robinson} and Q. {Zeng} and D. {Chen} and A. A. {Phyo Wai} and T. -S. {Lee} and C. {Guan}},
booktitle={2020 International Joint Conference on Neural Networks (IJCNN)},
title={TSception: A Deep Learning Framework for Emotion Detection Using EEG},
year={2020},
volume={},
number={},
pages={1-7},
doi={10.1109/IJCNN48605.2020.9206750}}
```
# Abstract of the journal version
In the journal paper, we propose TSception, a multi-scale convolutional neural network for learning temporal dynamics and spatial asymmetry from electroencephalography (EEG). TSception consists of dynamic temporal, asymmetric spatial, and high-level fusion layers, which learn discriminative representations in the time and channel dimensions simultaneously. The dynamic temporal layer consists of multi-scale 1D convolutional kernels whose lengths are related to the sampling rate of the EEG signal, learning dynamic temporal and frequency representations of the EEG. The asymmetric spatial layer takes advantage of the asymmetric neural activation underlying emotional responses to learn discriminative global and hemispheric representations. The learned spatial representations are fused by the high-level fusion layer. The proposed method is evaluated on two publicly available datasets, DEAP and MAHNOB-HCI, using a more generalized cross-validation setting. The performance of the network is compared with previously reported methods such as SVM, KNN, FBFgMDM, FBTSC, unsupervised learning, DeepConvNet, ShallowConvNet, and EEGNet. Our method achieves higher classification accuracies and F1 scores than the compared methods in most experiments.