Notes on code I reproduced during my own learning process.

Original paper (published August 2022)

Physics-Informed Attention Temporal Convolutional Network for EEG-Based Motor Imagery Classification  

Journal (impact factor 11.6+):

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS

Abstract

The brain-computer interface (BCI) is a cutting-edge technology that has the potential to change the world. Electroencephalogram (EEG) motor imagery (MI) signals have been used extensively in many BCI applications to assist disabled people, control devices or environments, and even augment human capabilities. However, the limited performance of brain signal decoding is restricting the broad growth of the BCI industry. In this article, we propose an attention-based temporal convolutional network (ATCNet) for EEG-based motor imagery classification. The ATCNet model utilizes multiple techniques to boost the performance of MI classification with a relatively small number of parameters. ATCNet employs scientific machine learning to design a domain-specific deep learning model with interpretable and explainable features, multihead self-attention to highlight the most valuable features in MI-EEG data, a temporal convolutional network to extract high-level temporal features, and a convolutional-based sliding window to augment the MI-EEG data efficiently. The proposed model outperforms the current state-of-the-art techniques on the BCI Competition IV-2a dataset with an accuracy of 85.38% and 70.97% for the subject-dependent and subject-independent modes, respectively.

Technical Roadmap
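As a concrete reading aid, here is a minimal Keras sketch of the pipeline the abstract describes: a convolutional feature extractor, convolutional-based sliding windows, multi-head self-attention, and a temporal convolutional network (TCN) whose per-window softmax outputs are averaged. All layer sizes, window parameters, and dropout rates below are my own illustrative assumptions, not the authors' exact configuration.

```python
# Minimal ATCNet-style sketch (assumed hyperparameters, not the official code).
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, F1=16, D=2, chans=22):
    # EEGNet-style temporal convolution + depthwise spatial convolution.
    x = layers.Conv2D(F1, (64, 1), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.DepthwiseConv2D((1, chans), depth_multiplier=D, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('elu')(x)
    x = layers.AveragePooling2D((8, 1))(x)
    x = layers.Dropout(0.3)(x)
    x = layers.Conv2D(F1 * D, (16, 1), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('elu')(x)
    x = layers.AveragePooling2D((7, 1))(x)
    return layers.Dropout(0.3)(x)

def tcn_block(x, filters=32, kernel=4, depth=2):
    # Residual blocks of dilated causal convolutions (dilations 1, 2, ...).
    for i in range(depth):
        y = layers.Conv1D(filters, kernel, dilation_rate=2 ** i, padding='causal')(x)
        y = layers.BatchNormalization()(y)
        y = layers.Activation('elu')(y)
        y = layers.Dropout(0.3)(y)
        x = layers.Activation('elu')(layers.Add()([x, y]))
    return x

def atcnet_sketch(n_classes=4, chans=22, samples=1125, n_windows=5, win_len=16):
    inp = layers.Input(shape=(samples, chans, 1))   # one MI trial: time x channels
    x = conv_block(inp, chans=chans)                # -> (batch, 20, 1, 32) for 1125 samples
    x = layers.Reshape((samples // 8 // 7, 32))(x)  # -> (batch, Tc=20, 32) sequence
    branches = []
    for i in range(n_windows):                      # convolutional-based sliding windows
        w = layers.Lambda(lambda t, i=i: t[:, i:i + win_len, :])(x)
        a = layers.MultiHeadAttention(num_heads=2, key_dim=8)(w, w)
        w = layers.LayerNormalization()(layers.Add()([w, a]))  # attention over features
        w = tcn_block(w)                                       # high-level temporal features
        w = layers.Lambda(lambda t: t[:, -1, :])(w)            # last TCN time step
        branches.append(layers.Dense(n_classes, activation='softmax')(w))
    out = branches[0] if n_windows == 1 else layers.Average()(branches)
    return models.Model(inp, out)

model = atcnet_sketch()
model.summary()
```

Averaging the per-window softmax outputs applies the sliding-window augmentation at the feature level, so no extra copies of the raw EEG trials are needed.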

 

Experimental Environment

CUDA = 11.0

cudnn = 8.0

python = 3.7

tensorflow = 2.4.0
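
A quick way to confirm the stack is wired up correctly (a minimal check, assuming only the versions listed above):

```python
# Sanity check that the environment matches the versions above.
import sys
import tensorflow as tf

print(sys.version)                             # expect 3.7.x
print(tf.__version__)                          # expect 2.4.0
print(tf.config.list_physical_devices('GPU'))  # non-empty if CUDA 11.0 / cuDNN 8.0 are visible
```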

Experimental Results

Subject: 1   best_run: 1   acc: 0.9149   kappa: 0.8866   avg_acc: 0.8629 +- 0.0124   avg_kappa: 0.8171 +- 0.0166
Subject: 2   best_run: 1   acc: 0.8090   kappa: 0.7454   avg_acc: 0.6448 +- 0.0326   avg_kappa: 0.5264 +- 0.0435
Subject: 3   best_run: 1   acc: 0.9861   kappa: 0.9815   avg_acc: 0.9410 +- 0.0143   avg_kappa: 0.9213 +- 0.0191
Subject: 4   best_run: 1   acc: 0.8264   kappa: 0.7685   avg_acc: 0.7677 +- 0.0258   avg_kappa: 0.6903 +- 0.0344
Subject: 5   best_run: 1   acc: 0.8681   kappa: 0.8241   avg_acc: 0.8017 +- 0.0142   avg_kappa: 0.7356 +- 0.0189
Subject: 6   best_run: 5   acc: 0.8194   kappa: 0.7593   avg_acc: 0.7118 +- 0.0170   avg_kappa: 0.6157 +- 0.0227
Subject: 7   best_run: 10  acc: 0.7257   kappa: 0.6343   avg_acc: 0.8986 +- 0.0296   avg_kappa: 0.8648 +- 0.0394
Subject: 8   best_run: 7   acc: 0.9271   kappa: 0.9028   avg_acc: 0.8722 +- 0.0133   avg_kappa: 0.8296 +- 0.0178
Subject: 9   best_run: 1   acc: 0.9080   kappa: 0.8773   avg_acc: 0.8778 +- 0.0186   avg_kappa: 0.8370 +- 0.0247

Average of 9 subjects - best runs: Accuracy = 0.8650   Kappa = 0.8200

Average of 9 subjects x 10 runs (average of 90 experiments): Accuracy = 0.8198   Kappa = 0.7598  
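
For reference, the reported kappa values are consistent with Cohen's kappa computed against the 25% chance level of the four-class, class-balanced BCI IV-2a task, i.e. kappa = (acc - 0.25) / 0.75. A quick check (the helper name is my own):

```python
# Cohen's kappa from accuracy under a uniform 4-class chance level.
def kappa_from_acc(acc, n_classes=4):
    chance = 1.0 / n_classes
    return (acc - chance) / (1.0 - chance)

print(round(kappa_from_acc(0.9149), 4))  # 0.8865, Subject 1's reported 0.8866 up to rounding
print(round(kappa_from_acc(0.8650), 4))  # 0.82, matching the best-runs average of 0.8200
```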

 

Source Code

Uploaded to my resources: https://download.csdn.net/download/Nan_Feng_ya/88915595

The End
