Within Session SSVEP#
This example shows how to perform a within-session SSVEP analysis on the Kalunga2016 dataset, using a CCA pipeline.
The within-session evaluation assesses the performance of a classification pipeline using a 5-fold cross-validation. The reported metric (here, accuracy) is the average over all folds.
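For intuition, here is a minimal sketch of what this procedure computes for one subject and session; it uses plain scikit-learn cross-validation and illustrative variable names, not MOABB's actual implementation.
import numpy as np
from sklearn.model_selection import cross_val_score

def within_session_score(clf, X, y):
    # X: epochs from a single session, y: class labels.
    # 5-fold cross-validation, averaging accuracy over the folds.
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    return np.mean(scores)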
# Authors: Sylvain Chevallier <sylvain.chevallier@uvsq.fr>
#
# License: BSD (3-clause)
import warnings
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.pipeline import make_pipeline
import moabb
from moabb.datasets import Kalunga2016
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import SSVEP
from moabb.pipelines import SSVEP_CCA
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
moabb.set_log_level("info")
Loading Dataset#
Load 2 subjects of the Kalunga2016 dataset.
subj = [1, 3]
dataset = Kalunga2016()
dataset.subject_list = subj
Choose Paradigm#
We select the SSVEP paradigm, which applies a bandpass filter to the data, and keep only the first 3 classes, that is, the stimulation frequencies of 13 Hz, 17 Hz and 21 Hz.
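The paradigm object is used when building the pipeline and the evaluation below, so it is defined here. A minimal sketch; the 10-25 Hz band is an assumption chosen so that the three stimulation frequencies fall inside the filter band, and can be adjusted.
# Assumed band (10-25 Hz), wide enough to include the 13, 17 and 21 Hz targets;
# only the first 3 classes of the dataset are kept.
paradigm = SSVEP(fmin=10, fmax=25, n_classes=3)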
Create Pipelines#
Use a Canonical Correlation Analysis classifier
interval = dataset.interval
freqs = paradigm.used_events(dataset)
pipeline = {}
pipeline["CCA"] = make_pipeline(SSVEP_CCA(interval=interval, freqs=freqs, n_harmonics=3))
Get Data (optional)#
To access the EEG signals downloaded for the dataset, you can use dataset.get_data(subjects=[subject_id]) to obtain the EEG in MNE format, stored in a dictionary of sessions and runs. Alternatively, paradigm.get_data(dataset=dataset, subjects=[subject_id]) returns the EEG data as arrays (scikit-learn compatible), along with the labels and the meta information. In paradigm.get_data, the EEG signals are preprocessed according to the paradigm's requirements.
# sessions = dataset.get_data(subjects=[3])
# X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[3])
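As a rough guide to what these calls return (variable names follow the commented lines above; the exact session and run keys depend on the dataset):
# sessions is a nested dict: sessions[subject][session][run] -> mne.io.Raw
# X is a NumPy array of epochs with shape (n_epochs, n_channels, n_times),
# labels is an array of class names, and meta is a DataFrame describing the
# subject, session and run of each epoch. For instance:
# print(X.shape, len(labels))
# print(meta.head())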
Evaluation#
The evaluation will return a DataFrame containing a single accuracy score for each session, subject, and pipeline.
overwrite = True # set to True if we want to overwrite cached results
evaluation = WithinSessionEvaluation(
paradigm=paradigm, datasets=dataset, suffix="examples", overwrite=overwrite
)
results = evaluation.process(pipeline)
print(results.head())
Kalunga2016-WithinSession:   0%|          | 0/2 [00:00<?, ?it/s]
No hdf5_path provided, models will not be saved.
Kalunga2016-WithinSession:  50%|█████     | 1/2 [00:00<00:00, 1.86it/s]
No hdf5_path provided, models will not be saved.
Kalunga2016-WithinSession: 100%|██████████| 2/2 [00:05<00:00, 2.91s/it]
Kalunga2016-WithinSession: 100%|██████████| 2/2 [00:05<00:00, 2.55s/it]
score time samples ... n_sessions dataset pipeline
0 0.248889 0.042300 48.0 ... 1 Kalunga2016 CCA
1 0.331111 0.041013 48.0 ... 1 Kalunga2016 CCA
[2 rows x 9 columns]
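Beyond head(), one convenient way to summarize these results is to average the score per subject and pipeline, for instance:
# Average accuracy per subject and pipeline, across sessions and folds
print(results.groupby(["subject", "pipeline"])["score"].mean())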
Plot Results#
Here we plot the results, indicating the score for each subject and session.
plt.figure()
sns.barplot(data=results, y="score", x="session", hue="subject", palette="viridis")

<Axes: xlabel='session', ylabel='score'>
And the computation time in seconds
plt.figure()
ax = sns.barplot(data=results, y="time", x="session", hue="subject", palette="Reds")
ax.set_ylabel("Time (s)")
plt.show()

Total running time of the script: (0 minutes 6.546 seconds)
Estimated memory usage: 294 MB