Within Session SSVEP#
This example shows how to perform a within-session SSVEP analysis on the Kalunga2016 dataset, using a CCA pipeline.
The within-session evaluation assesses the performance of a classification pipeline using a 5-fold cross-validation. The reported metric (here, accuracy) is the average across all folds.
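The averaging described above can be sketched with plain scikit-learn before turning to MOABB. This is a minimal illustration on synthetic data (the random features and the logistic-regression classifier are stand-ins, not part of the actual pipeline): the reported score is simply the mean of the 5 fold scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((60, 8))  # 60 synthetic trials, 8 features each
y = rng.integers(0, 3, size=60)   # 3 classes, mimicking 3 stimulation frequencies

# 5-fold cross-validation; the reported metric is the mean over the folds
scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, cv=5, scoring="accuracy"
)
print(scores.mean())
```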
# Authors: Sylvain Chevallier <sylvain.chevallier@uvsq.fr>
#
# License: BSD (3-clause)
import warnings
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.pipeline import make_pipeline
import moabb
from moabb.datasets import Kalunga2016
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import SSVEP
from moabb.pipelines import SSVEP_CCA
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
moabb.set_log_level("info")
Loading Dataset#
Load 2 subjects of Kalunga2016 dataset
subj = [1, 3]
dataset = Kalunga2016()
dataset.subject_list = subj
Choose Paradigm#
We select the SSVEP paradigm, applying a bandpass filter (10-25 Hz) to the data, and keep only the first 3 classes, that is, stimulation frequencies of 13 Hz, 17 Hz and 21 Hz.
Create Pipelines#
Use a Canonical Correlation Analysis classifier
Get Data (optional)#
To access the EEG signals downloaded from the dataset, you can use dataset.get_data(subjects=[subject_id]) to obtain the EEG in MNE format, stored in a dictionary of sessions and runs. Alternatively, paradigm.get_data(dataset=dataset, subjects=[subject_id]) returns the EEG data in scikit-learn format, along with the labels and meta information. In paradigm.get_data, the EEG is preprocessed according to the paradigm requirements.
# sessions = dataset.get_data(subjects=[3])
# X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[3])
Evaluation#
The evaluation will return a DataFrame containing a single accuracy score for each subject and pipeline.
overwrite = True # set to True if we want to overwrite cached results
evaluation = WithinSessionEvaluation(
paradigm=paradigm, datasets=dataset, suffix="examples", overwrite=overwrite
)
results = evaluation.process(pipeline)
print(results.head())
score time samples ... n_sessions dataset pipeline
0 0.773333 0.038621 48.0 ... 1 Kalunga2016 CCA
1 0.915556 0.037432 48.0 ... 1 Kalunga2016 CCA
[2 rows x 9 columns]
Plot Results#
Here we plot the results, indicating the score for each subject.
plt.figure()
sns.barplot(data=results, y="score", x="session", hue="subject", palette="viridis")

<Axes: xlabel='session', ylabel='score'>
And the computation time in seconds
plt.figure()
ax = sns.barplot(data=results, y="time", x="session", hue="subject", palette="Reds")
ax.set_ylabel("Time (s)")
plt.show()

Total running time of the script: (0 minutes 10.993 seconds)
Estimated memory usage: 294 MB