Tutorial 5: Creating a dataset class#

# Author: Gregoire Cattan
#
# https://github.com/plcrodrigues/Workshop-MOABB-BCI-Graz-2019

from pyriemann.classification import MDM
from pyriemann.estimation import ERPCovariances
from sklearn.pipeline import make_pipeline

from moabb.datasets import Cattan2019_VR
from moabb.datasets.braininvaders import BI2014a
from moabb.datasets.compound_dataset import CompoundDataset
from moabb.datasets.utils import blocks_reps
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms.p300 import P300

Initialization#

This tutorial illustrates how to use a CompoundDataset to: 1) select a few subjects/sessions/runs from an existing dataset, 2) merge two CompoundDatasets into a new one, and 3) use this new dataset in a pipeline (this last step is not specific to CompoundDataset).

Let’s define a paradigm and a pipeline for evaluation first.

paradigm = P300()
pipelines = {}
pipelines["MDM"] = make_pipeline(ERPCovariances(estimator="lwf"), MDM(metric="riemann"))

Creating a selection of subjects#

We are going to create two CompoundDatasets, namely CustomDataset1 & 2. A CompoundDataset accepts a subjects_list of subjects. It is a list of tuples, and each tuple contains 4 values:

  • the original dataset

  • the subject number to select

  • the sessions. It can be:

      • a session name (‘0’)

      • a list of sessions ([‘0’, ‘1’])

      • None to select all the sessions attributed to a subject

  • the runs. As for sessions, it can be a single run name, a list, or None (to select all runs).

These selection options are illustrated in the sketch below.
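For instance, a subjects_list mixing these selection styles could look like the following sketch. The subject numbers, session names and run selections here are purely illustrative placeholders, not values taken from the datasets used later in this tutorial.

# Purely illustrative sketch: each tuple is (dataset, subject, sessions, runs).
some_dataset = BI2014a()
subjects_list = [
    (some_dataset, 1, "0", None),  # a single session name, all runs
    (some_dataset, 2, ["0", "1"], None),  # a list of sessions, all runs
    (some_dataset, 3, None, None),  # all sessions, all runs
]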

class CustomDataset1(CompoundDataset):
    def __init__(self):
        biVR = Cattan2019_VR(virtual_reality=True, screen_display=True)
        # Keep only a subset of the blocks and repetitions (runs) of the VR session.
        runs = blocks_reps([0, 2], [0, 1, 2, 3, 4], biVR.n_repetitions)
        subjects_list = [
            (biVR, 1, "0VR", runs),
            (biVR, 2, "0VR", runs),
        ]
        CompoundDataset.__init__(
            self,
            subjects_list=subjects_list,
            code="CustomDataset1",
            interval=[0, 1.0],
        )


class CustomDataset2(CompoundDataset):
    def __init__(self):
        bi2014 = BI2014a()
        subjects_list = [
            (bi2014, 4, None, None),
            (bi2014, 7, None, None),
        ]
        CompoundDataset.__init__(
            self,
            subjects_list=subjects_list,
            code="CustomDataset2",
            interval=[0, 1.0],
        )

Merging the datasets#

We are now going to merge the two CompoundDatasets into a single one. The implementation is straightforward: instead of providing a list of subjects, you provide a list of CompoundDataset instances, e.g. subjects_list = [CustomDataset1(), CustomDataset2()].

class CustomDataset3(CompoundDataset):
    def __init__(self):
        subjects_list = [CustomDataset1(), CustomDataset2()]
        CompoundDataset.__init__(
            self,
            subjects_list=subjects_list,
            code="CustomDataset3",
            interval=[0, 1.0],
        )
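As an optional sanity check (not part of the original tutorial code), you can instantiate the merged dataset and inspect its compound subject list; each entry gathered from CustomDataset1 and CustomDataset2 becomes one subject of the new dataset.

# Optional check: CustomDataset3 exposes the subjects gathered from both datasets.
merged = CustomDataset3()
print(merged.subject_list)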

Evaluate and display#

Let’s use a WithinSessionEvaluation to evaluate our new dataset. If you already know how to do this, nothing changes: a CompoundDataset can be used as a normal dataset.

datasets = [CustomDataset3()]
evaluation = WithinSessionEvaluation(
    paradigm=paradigm, datasets=datasets, overwrite=False, suffix="newdataset"
)
scores = evaluation.process(pipelines)

print(scores)
No hdf5_path provided, models will not be saved.
CustomDataset3-WithinSession: 100%|██████████| 4/4 [01:20<00:00, 20.13s/it]
      score      time  samples  ... n_sessions         dataset  pipeline
0  0.635000  0.327744    120.0  ...          1  CustomDataset3       MDM
1  0.582500  0.321615    120.0  ...          1  CustomDataset3       MDM
2  0.628115  2.076844    768.0  ...          1  CustomDataset3       MDM
3  0.575290  4.496056   1356.0  ...          1  CustomDataset3       MDM

[4 rows x 9 columns]
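Since scores is a pandas DataFrame, you could, for instance, summarize the mean score per subject and pipeline. This small snippet is illustrative and not part of the generated output above.

# Illustrative: average within-session scores per subject and pipeline.
summary = scores.groupby(["subject", "pipeline"])["score"].mean()
print(summary)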

Total running time of the script: (1 minutes 21.116 seconds)

Estimated memory usage: 671 MB

Gallery generated by Sphinx-Gallery