Tutorial 5: Creating a dataset class
# Author: Gregoire Cattan
#
# https://github.com/plcrodrigues/Workshop-MOABB-BCI-Graz-2019
from pyriemann.classification import MDM
from pyriemann.estimation import ERPCovariances
from sklearn.pipeline import make_pipeline
from moabb.datasets import Cattan2019_VR
from moabb.datasets.braininvaders import BI2014a
from moabb.datasets.compound_dataset import CompoundDataset
from moabb.datasets.utils import blocks_reps
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms.p300 import P300
Initialization
This tutorial illustrates how to use the CompoundDataset to:

1. Select a few subjects/sessions/runs from an existing dataset
2. Merge two CompoundDataset into a new one
3. … and finally use this new dataset in a pipeline (this step is not specific to CompoundDataset)
Let’s define a paradigm and a pipeline for evaluation first.
paradigm = P300()
pipelines = {}
pipelines["MDM"] = make_pipeline(ERPCovariances(estimator="lwf"), MDM(metric="riemann"))
Creating a selection of subjects
We are going to create two CompoundDataset, namely CustomDataset1 & 2. A CompoundDataset accepts a subjects_list of subjects. It is a list of tuples, each containing 4 values:

- the original dataset
- the subject number to select
- the sessions, which can be (illustrated in the sketch after this list):
  - a session name ('0')
  - a list of sessions (['0', '1'])
  - None to select all the sessions attributed to a subject
- the runs; as for sessions, it can be a single run name, a list, or None (to select all runs)
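For example, the session entry can take each of the three forms above. Here is a minimal sketch reusing BI2014a; the subject numbers and the session names "0" and "1" are purely illustrative and assume the recordings actually contain them:

bi2014 = BI2014a()
illustrative_subjects_list = [
    (bi2014, 1, "0", None),         # a single session name (hypothetical)
    (bi2014, 2, ["0", "1"], None),  # a list of session names (hypothetical)
    (bi2014, 3, None, None),        # None: every session of subject 3
]

The two concrete classes below build such lists for real.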
class CustomDataset1(CompoundDataset):
    def __init__(self):
        biVR = Cattan2019_VR(virtual_reality=True, screen_display=True)
        # Select the runs of blocks 0 and 2, repetitions 0 to 4.
        runs = blocks_reps([0, 2], [0, 1, 2, 3, 4], biVR.n_repetitions)
        # Keep only session "0VR" of subjects 1 and 2, restricted to these runs.
        subjects_list = [
            (biVR, 1, "0VR", runs),
            (biVR, 2, "0VR", runs),
        ]
        CompoundDataset.__init__(
            self,
            subjects_list=subjects_list,
            code="CustomDataset1",
            interval=[0, 1.0],
        )
class CustomDataset2(CompoundDataset):
    def __init__(self):
        bi2014 = BI2014a()
        subjects_list = [
            (bi2014, 4, None, None),
            (bi2014, 7, None, None),
        ]
        CompoundDataset.__init__(
            self,
            subjects_list=subjects_list,
            code="CustomDataset2",
            interval=[0, 1.0],
        )
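As with any MOABB dataset, you can sanity-check a CompoundDataset before evaluating it. A minimal sketch, assuming the standard get_data() API (a nested dict indexed by subject, session and run); note that this call will download the data if it is not already cached:

dataset = CustomDataset2()
data = dataset.get_data()  # {subject: {session: {run: Raw}}}
print(list(data.keys()))   # subject indices of the compound dataset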
Merging the datasets
We are now going to merge the two CompoundDataset into a single one. The implementation is straightforward: instead of providing a list of subjects, you provide a list of CompoundDataset, e.g. subjects_list = [CustomDataset1(), CustomDataset2()].
class CustomDataset3(CompoundDataset):
    def __init__(self):
        subjects_list = [CustomDataset1(), CustomDataset2()]
        CompoundDataset.__init__(
            self,
            subjects_list=subjects_list,
            code="CustomDataset3",
            interval=[0, 1.0],
        )
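The merged dataset should now expose the four subjects of its two children. A quick check, assuming the subject_list attribute inherited from MOABB's base dataset class:

print(CustomDataset3().subject_list)  # expect 4 entries (2 from each child)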
Evaluate and display

Let's use a WithinSessionEvaluation to evaluate our new dataset. If you already know how to do this, nothing changes: a CompoundDataset can be used like a normal dataset.
datasets = [CustomDataset3()]
evaluation = WithinSessionEvaluation(
    paradigm=paradigm, datasets=datasets, overwrite=False, suffix="newdataset"
)
scores = evaluation.process(pipelines)
print(scores)
CustomDataset3-WithinSession:   0%|          | 0/4 [00:00<?, ?it/s]No hdf5_path provided, models will not be saved.
[... progress output truncated: all four subjects are evaluated in about 57 s, downloading two data files (46.4 MB and 74.3 MB) along the way; the "No hdf5_path provided" warning repeats for each subject ...]
CustomDataset3-WithinSession: 100%|██████████| 4/4 [00:56<00:00, 14.15s/it]
score time samples ... n_sessions dataset pipeline
0 0.655000 0.325585 120.0 ... 1 CustomDataset3 MDM
1 0.577500 0.317788 120.0 ... 1 CustomDataset3 MDM
2 0.643062 2.076298 768.0 ... 1 CustomDataset3 MDM
3 0.545191 4.597882 1356.0 ... 1 CustomDataset3 MDM
[4 rows x 9 columns]
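Beyond printing the raw DataFrame, the scores can be summarized with ordinary pandas operations. A minimal sketch using only the columns shown above (the grouping choice is illustrative):

print(scores.groupby(["dataset", "pipeline"])["score"].mean())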
Total running time of the script: (0 minutes 57.182 seconds)
Estimated memory usage: 727 MB