Within Session P300#
This example shows how to perform a within-session analysis on a P300 dataset (BNCI2014_009).
We will compare two pipelines:
- Riemannian geometry (XDAWN covariances, tangent space, LDA)
- XDAWN spatial filtering with Linear Discriminant Analysis
We will use the P300 paradigm, which uses ROC AUC as its metric.
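AUC rewards a classifier for ranking Target epochs above NonTarget ones, which matters for P300 data because the classes are heavily imbalanced. As a quick illustration with plain scikit-learn (synthetic scores, not MOABB output):

```python
from sklearn.metrics import roc_auc_score

# one Target (label 1) among four NonTargets (label 0)
y_true = [0, 0, 0, 0, 1]
# classifier scores: the Target outranks 3 of the 4 NonTargets
y_score = [0.1, 0.2, 0.3, 0.9, 0.8]

print(roc_auc_score(y_true, y_score))  # 0.75
```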
# Authors: Pedro Rodrigues <pedro.rodrigues01@gmail.com>
#
# License: BSD (3-clause)
import warnings
import matplotlib.pyplot as plt
import seaborn as sns
from mne.decoding import Vectorizer
from pyriemann.estimation import Xdawn, XdawnCovariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline
import moabb
from moabb.datasets import BNCI2014_009
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import P300
# getting rid of the warnings about the future
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
moabb.set_log_level("info")
Create Pipelines#
Pipelines must be a dict mapping a name to an sklearn pipeline.
pipelines = {}
We have to do this because the classes are called 'Target' and 'NonTarget', but the evaluation function uses a LabelEncoder, transforming them to 0 and 1.
labels_dict = {"Target": 1, "NonTarget": 0}
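This mapping matches because scikit-learn's LabelEncoder assigns integer codes in sorted order, so 'NonTarget' comes before 'Target' (a quick sanity check, not part of the original example):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
print(le.fit_transform(["Target", "NonTarget", "NonTarget"]))  # [1 0 0]
print(le.classes_)  # ['NonTarget' 'Target']
```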
pipelines["RG+LDA"] = make_pipeline(
XdawnCovariances(
nfilter=2, classes=[labels_dict["Target"]], estimator="lwf", xdawn_estimator="scm"
),
TangentSpace(),
LDA(solver="lsqr", shrinkage="auto"),
)
pipelines["Xdw+LDA"] = make_pipeline(
Xdawn(nfilter=2, estimator="scm"), Vectorizer(), LDA(solver="lsqr", shrinkage="auto")
)
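Both pipelines take epochs shaped (n_epochs, n_channels, n_times) and reduce them to a 2-D feature matrix before LDA. A minimal sketch of that flattening step using only scikit-learn, where a FunctionTransformer stands in for mne's Vectorizer and the data is random rather than real EEG:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

rng = np.random.default_rng(42)
X = rng.normal(size=(40, 4, 16))  # (n_epochs, n_channels, n_times)
y = np.array([0, 1] * 20)

clf = make_pipeline(
    # flatten each epoch to a 1-D feature vector, like Vectorizer does
    FunctionTransformer(lambda x: x.reshape(len(x), -1)),
    LDA(solver="lsqr", shrinkage="auto"),
)
clf.fit(X, y)
print(clf.predict(X).shape)  # (40,)
```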
Evaluation#
We define the paradigm (P300) and the dataset (BNCI2014_009, restricted to two subjects to keep the run short). The evaluation will return a DataFrame containing a single AUC score for each subject/session of the dataset and for each pipeline.
Results are saved into a local database, so if you add a new pipeline, the evaluation will not be run again unless a parameter has changed. Results can be overwritten if necessary.
paradigm = P300(resample=128)
dataset = BNCI2014_009()
dataset.subject_list = dataset.subject_list[:2]
datasets = [dataset]
overwrite = True # set to True if we want to overwrite cached results
evaluation = WithinSessionEvaluation(
paradigm=paradigm, datasets=datasets, suffix="examples", overwrite=overwrite
)
results = evaluation.process(pipelines)
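The returned results object is a pandas DataFrame with one row per subject, session, and pipeline. A sketch of how such a frame can be summarized per pipeline, using made-up scores rather than the actual evaluation output:

```python
import pandas as pd

# hypothetical scores with the same layout as evaluation.process() output
results = pd.DataFrame({
    "score": [0.91, 0.88, 0.93, 0.90],
    "subject": ["1", "1", "2", "2"],
    "pipeline": ["RG+LDA", "Xdw+LDA", "RG+LDA", "Xdw+LDA"],
})

# mean AUC per pipeline across subjects
print(results.groupby("pipeline")["score"].mean())
```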
Plot Results#
Here we plot the results to compare the two pipelines.
fig, ax = plt.subplots(facecolor="white", figsize=[8, 4])
sns.stripplot(
data=results,
y="score",
x="pipeline",
ax=ax,
jitter=True,
alpha=0.5,
zorder=1,
palette="Set1",
)
sns.pointplot(data=results, y="score", x="pipeline", ax=ax, palette="Set1")
ax.set_ylabel("ROC AUC")
ax.set_ylim(0.5, 1)
plt.show()

Total running time of the script: (0 minutes 24.256 seconds)
Estimated memory usage: 378 MB