Getting Started

This tutorial takes you through a basic working example of how to use this codebase, covering all the different components, up to results generation. If you’d like to know about the statistics and plotting, see the next tutorial.

# Authors: Vinay Jayaram <vinayjayaram13@gmail.com>
#
# License: BSD (3-clause)

Introduction

To use the codebase you need an evaluation, a paradigm, some algorithms, and a list of datasets to run it all on. You can find all of these in the following submodules; detailed tutorials are given for each of them.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

If you would like to specify the logging level while the script is running, you can use the standard Python logging commands through the top-level moabb module.

import moabb
from moabb.datasets import BNCI2014001, utils
from moabb.evaluations import CrossSessionEvaluation
from moabb.paradigms import LeftRightImagery
from moabb.pipelines.features import LogVariance

To create pipelines within a script, you will likely need at least the make_pipeline function. Pipelines can also be specified via a .yml file. Here we will make a couple of pipelines directly for convenience.

moabb.set_log_level("info")
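
The set_log_level call above is moabb’s convenience wrapper. If you prefer the standard library directly, the sketch below has the same effect, assuming moabb’s loggers are namespaced under "moabb" as module-level loggers usually are:

import logging

# Assumption: moabb's loggers live under the "moabb" namespace, so raising
# the level on the parent logger also affects moabb.datasets,
# moabb.evaluations, and so on through the usual propagation rules
logging.getLogger("moabb").setLevel(logging.INFO)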

Create pipelines

We create two pipelines: channel-wise log variance followed by LDA, and channel-wise log variance followed by a cross-validated SVM (note that cross-validation via scikit-learn cannot be described in a .yml file). For later in the process, the pipelines need to be in a dictionary where the key is the name of the pipeline and the value is the Pipeline object. A sketch of the .yml format follows the code below.

pipelines = {}
pipelines["AM+LDA"] = make_pipeline(LogVariance(), LDA())

# grid-search the SVM regularization strength over 10 log-spaced values
parameters = {"C": np.logspace(-2, 2, 10)}
clf = GridSearchCV(SVC(kernel="linear"), parameters)
pipe = make_pipeline(LogVariance(), clf)

pipelines["AM+SVM"] = pipe

Datasets

Datasets can be specified in many ways: each paradigm has a ‘datasets’ property, which returns the datasets that are appropriate for that paradigm.

print(LeftRightImagery().datasets)

Out:

[<moabb.datasets.bnci.BNCI2014001 object at 0x7feebb7b5520>, <moabb.datasets.bnci.BNCI2014004 object at 0x7feebb7b54c0>, <moabb.datasets.gigadb.Cho2017 object at 0x7feebb7b5df0>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7feebb7b5dc0>, <moabb.datasets.mpi_mi.MunichMI object at 0x7feebb7b5910>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7feebb7b5850>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7feebb7b5880>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7feebb7b5310>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7feebb7b5280>, <moabb.datasets.Zhou2016.Zhou2016 object at 0x7feebb7b5fd0>]

Or you can run a search through the available datasets:

print(utils.dataset_search(paradigm="imagery", min_subjects=6))

Out:

[<moabb.datasets.alex_mi.AlexMI object at 0x7feebb7b5400>, <moabb.datasets.bnci.BNCI2014001 object at 0x7feebb7b5c40>, <moabb.datasets.bnci.BNCI2014002 object at 0x7feebb7b5fd0>, <moabb.datasets.bnci.BNCI2014004 object at 0x7feebb7b5280>, <moabb.datasets.bnci.BNCI2015001 object at 0x7feebb7b5850>, <moabb.datasets.bnci.BNCI2015004 object at 0x7feebb7b5dc0>, <moabb.datasets.gigadb.Cho2017 object at 0x7feebb7b5910>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7feebb7b54c0>, <moabb.datasets.mpi_mi.MunichMI object at 0x7feebb7b5df0>, <moabb.datasets.upper_limb.Ofner2017 object at 0x7feebb7b5fa0>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7feebb7b5520>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7feebb7b5a90>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7feebb7b5bb0>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7feebb7b5ca0>]

Or you can simply make your own list (which we do here due to computational constraints).

dataset = BNCI2014001()
# restrict to the first two subjects to keep the running time manageable
dataset.subject_list = dataset.subject_list[:2]
datasets = [dataset]

Paradigm

Paradigms define the events, epoch length, bandpass filter, and other preprocessing parameters. They have defaults that you can read in the documentation, or you can simply set them as we do here. A single paradigm defines a method for going from continuous data to trial data of a fixed size. To learn more, look at the tutorial Exploring Paradigms.

# bandpass filter the data between 8 and 35 Hz
fmin = 8
fmax = 35
paradigm = LeftRightImagery(fmin=fmin, fmax=fmax)
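
If you want to inspect what a paradigm actually produces, every paradigm has a get_data method that returns the epoched trials, their labels, and per-trial metadata. A minimal sketch, using the first subject of the dataset we restricted above:

# X is an (n_trials, n_channels, n_times) array, labels holds the class
# label of each trial, and meta is a DataFrame of per-trial metadata
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape)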

Evaluation

An evaluation defines how the training and test sets are chosen. This could be cross-validation within a single recording, or across days, sessions, or subjects. This is also the correct place to specify multiple threads.

evaluation = CrossSessionEvaluation(
    paradigm=paradigm, datasets=datasets, suffix="examples", overwrite=False
)
# with overwrite=False, results cached by a previous run are reused
results = evaluation.process(pipelines)

Out:

001-2014-CrossSession:   0%|          | 0/2 [00:00<?, ?it/s]
001-2014-CrossSession:  50%|#####     | 1/2 [00:04<00:04,  4.08s/it]
001-2014-CrossSession: 100%|##########| 2/2 [00:08<00:00,  4.12s/it]
001-2014-CrossSession: 100%|##########| 2/2 [00:08<00:00,  4.11s/it]
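
As mentioned above, the evaluation is also where you would ask for parallelism. A sketch, assuming the n_jobs argument exposed by MOABB’s evaluations:

# Sketch only: same evaluation, run with two parallel jobs; check the
# evaluation docstring for the n_jobs semantics of your MOABB version
parallel_evaluation = CrossSessionEvaluation(
    paradigm=paradigm, datasets=datasets, suffix="examples",
    overwrite=False, n_jobs=2,
)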

Results are returned as a pandas DataFrame, and from here you can do with them as you like.

print(results.head())

Out:

      score      time  samples subject  ... channels  n_sessions   dataset pipeline
0  0.801698  0.049598    144.0       1  ...       22           2  001-2014   AM+LDA
1  0.786458  0.039401    144.0       1  ...       22           2  001-2014   AM+LDA
2  0.498650  0.039219    144.0       2  ...       22           2  001-2014   AM+LDA
3  0.576582  0.037826    144.0       2  ...       22           2  001-2014   AM+LDA
4  0.773920  0.184529    144.0       1  ...       22           2  001-2014   AM+SVM

[5 rows x 9 columns]
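
For example, to compare the two pipelines at a glance, ordinary pandas operations are enough:

# Mean score per pipeline, averaged over subjects and sessions
print(results.groupby("pipeline").score.mean())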
