# deep multi-view feature learning
## Abstract
Epilepsy is a common neurological disorder caused by abnormal discharge of brain neurons, and epileptic seizures can lead to life-threatening emergencies. By analyzing the electroencephalogram (EEG) signals of patients with epilepsy, their condition can be monitored so that seizures are detected and treated in time. In epilepsy research, using appropriate methods to obtain effective features is of great importance to detection accuracy. To obtain features that produce better detection results, this paper proposes a multi-view deep feature extraction method. The method first uses the fast Fourier transform (FFT) and wavelet packet decomposition (WPD) to construct the initial multi-view features. A convolutional neural network (CNN) is then used to automatically learn deep features from the initial multi-view features, which reduces the dimensionality and yields features with better seizure-identification ability. Furthermore, the multi-view Takagi-Sugeno-Kang fuzzy system (MV-TSK-FS), an interpretable rule-based classifier, is used to build a classification model with stronger generalizability from the deep multi-view features. Experimental studies show that the proposed multi-view deep feature extraction method outperforms common feature extraction methods such as principal component analysis (PCA), FFT, and WPD, and that classification with multi-view deep features is better than classification with single-view deep features.
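The first stage above builds one feature view per transform (FFT and WPD) from each EEG segment. As a rough illustration only, the sketch below constructs two such views with NumPy: the FFT magnitude spectrum, and subband energies from a hand-rolled Haar wavelet packet decomposition (the paper's actual wavelet, decomposition level, and segment length may differ; `build_views` and the synthetic segment are hypothetical).

```python
import numpy as np
from numpy.fft import rfft

def haar_wpd_energies(x, level=3):
    """Energies of all 2**level leaf nodes of a Haar wavelet packet
    decomposition. Minimal illustration; a richer wavelet (e.g. db4)
    would normally be used for EEG."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(level):
        nxt = []
        for n in nodes:
            a = (n[0::2] + n[1::2]) / np.sqrt(2.0)  # approximation (low-pass)
            d = (n[0::2] - n[1::2]) / np.sqrt(2.0)  # detail (high-pass)
            nxt.extend([a, d])
        nodes = nxt
    return np.array([np.sum(n ** 2) for n in nodes])

def build_views(segment):
    """Construct two hypothetical feature views from one EEG segment."""
    view_fft = np.abs(rfft(segment))          # view 1: FFT magnitude spectrum
    view_wpd = haar_wpd_energies(segment)     # view 2: WPD subband energies
    return view_fft, view_wpd

# Synthetic 1-second "segment": a 10 Hz sine sampled at 256 Hz
seg = np.sin(2 * np.pi * 10 * np.arange(256) / 256)
v1, v2 = build_views(seg)
```

Because the Haar transform is orthogonal, the leaf energies in `v2` sum to the energy of the input segment, which is a useful sanity check on the decomposition.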
## Authors
Xiaobin Tian, Zhaohong Deng, Senior Member, IEEE, Kup-Sze Choi, Dongrui Wu, Senior Member,
IEEE, Bin Qin, Jun Wan, Hongbin Shen, Shitong Wang
## Using this code
```
├── data
│   ├── raw_data
├── preprocessing
│   ├── preprocessing_data.m
│   ├── load_data.m
│   ├── domain_transform.m
├── CNN_feature_extracting
│   ├── feature_extracting.py
│   ├── view1_CNNmodel.py
│   ├── view2_CNNmodel.py
│   ├── view3_CNNmodel.py
├── mult_TSK_FS
│   ├── auto_expt_mul_TSK.m
│   ├── confusion_matrix.m
│   ├── expt_mul_TSK.m
│   ├── fromXtoZ.m
│   ├── lab2vec.m
│   ├── preproc.m
│   ├── test_mul_TSK.m
│   ├── test_TSK_FS.m
│   ├── train_mul_TSK.m
│   ├── train_TSK_FS.m
│   ├── vec2lab.m
```
1. Place the original dataset in `data/raw_data`.
2. Run `preprocessing/preprocessing_data.m` in MATLAB to obtain the initial multi-view EEG features.
3. Python 3 is required, with `numpy`, `scipy`, and `tensorflow` installed. Run `CNN_feature_extracting/feature_extracting.py` with Python 3 to obtain the deep multi-view features.
4. Run `mult_TSK_FS/auto_expt_mul_TSK.m` in MATLAB to compute the performance results of this study.
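Step 4 trains the MV-TSK-FS, a rule-based fuzzy classifier. As background for how a TSK system produces a prediction, the sketch below implements zero-order, single-view TSK inference with Gaussian memberships; it is only an illustration with hypothetical rule parameters (`centers`, `widths`, `consequents`), not the repo's first-order multi-view formulation.

```python
import numpy as np

def tsk_predict(x, centers, widths, consequents):
    """Zero-order TSK fuzzy inference for one sample x of shape (D,).

    Each rule k fires with the product of Gaussian memberships over the
    D features; the prediction is the firing-strength-weighted average
    of the scalar rule consequents."""
    d2 = ((x - centers) / widths) ** 2        # (K, D) squared distances
    firing = np.exp(-0.5 * d2.sum(axis=1))    # (K,) rule firing strengths
    weights = firing / firing.sum()           # normalized firing strengths
    return float(weights @ consequents)

# Two hypothetical rules in a 2-D feature space
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.ones((2, 2))
consequents = np.array([-1.0, 1.0])           # per-rule outputs (class scores)
score = tsk_predict(np.array([0.0, 0.0]), centers, widths, consequents)
```

A sample at the first rule's center fires that rule most strongly, so `score` is negative; a sample midway between the two centers fires both rules equally and scores zero. The rule structure is what makes the classifier interpretable: each rule is a readable "if features are near this prototype, then output this value" statement.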