Tutorial 0: Getting Started#

This tutorial takes you through a basic working example of how to use this codebase, including all the different components, up to the results generation. If you’d like to know about the statistics and plotting, see the next tutorial.

# Authors: Vinay Jayaram <vinayjayaram13@gmail.com>
#
# License: BSD (3-clause)

Introduction#

To use the codebase you need an evaluation and a paradigm, some algorithms, and a list of datasets to run it all on. You can find those in the following submodules; detailed tutorials are given for each of them.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

If you would like to specify the logging level when it is running, you can use the standard python logging commands through the top-level moabb module

import moabb
from moabb.datasets import BNCI2014_001, utils
from moabb.evaluations import CrossSessionEvaluation
from moabb.paradigms import LeftRightImagery
from moabb.pipelines.features import LogVariance
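If you prefer to drive this with the standard library directly, the verbosity can be sketched with plain Python logging (assuming moabb logs under a top-level "moabb" logger, the usual convention for libraries, but an assumption here):

```python
import logging

# Quiet moabb's info messages by raising its logger to WARNING.
# "moabb" as the logger name is the standard library-logger convention,
# assumed here rather than taken from moabb's documentation.
logging.getLogger("moabb").setLevel(logging.WARNING)

print(logging.getLogger("moabb").level == logging.WARNING)  # True
```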

To create pipelines within a script, you will likely need at least the make_pipeline function. Pipelines can also be specified via a .yml file. Here we make a couple of pipelines just for convenience

Create pipelines#

We create two pipelines: channel-wise log variance followed by LDA, and channel-wise log variance followed by a cross-validated SVM (note that a cross-validation via scikit-learn cannot be described in a .yml file). For later in the process, the pipelines need to be in a dictionary where the key is the name of the pipeline and the value is the Pipeline object

pipelines = {}
pipelines["AM+LDA"] = make_pipeline(LogVariance(), LDA())
parameters = {"C": np.logspace(-2, 2, 10)}
clf = GridSearchCV(SVC(kernel="linear"), parameters)
pipe = make_pipeline(LogVariance(), clf)

pipelines["AM+SVM"] = pipe

Datasets#

Datasets can be specified in many ways: each paradigm has a property 'datasets' which returns the datasets that are appropriate for that paradigm

print(LeftRightImagery().datasets)

[<moabb.datasets.bnci.bnci_2014.BNCI2014_001 object at 0x7f3dfda778e0>, <moabb.datasets.bnci.bnci_2014.BNCI2014_004 object at 0x7f3dfda77160>, <moabb.datasets.beetl.Beetl2021_A object at 0x7f3dfda76110>, <moabb.datasets.beetl.Beetl2021_B object at 0x7f3dfda76200>, <moabb.datasets.gigadb.Cho2017 object at 0x7f3dfda777f0>, <moabb.datasets.dreyer2023.Dreyer2023 object at 0x7f3dfda769e0>, <moabb.datasets.dreyer2023.Dreyer2023A object at 0x7f3dfda77640>, <moabb.datasets.dreyer2023.Dreyer2023B object at 0x7f3dfda77250>, <moabb.datasets.dreyer2023.Dreyer2023C object at 0x7f3dfda757e0>, <moabb.datasets.mpi_mi.GrosseWentrup2009 object at 0x7f3dfdb4f640>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7f3dfdb4f310>, <moabb.datasets.liu2024.Liu2024 object at 0x7f3dfdb4d000>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7f3dfdb4c940>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7f3dfdcec850>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7f3dfdb4efb0>, <moabb.datasets.stieger2021.Stieger2021 object at 0x7f3dfda757b0>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7f3dfda77220>, <moabb.datasets.Zhou2016.Zhou2016 object at 0x7f3dfdb4f460>]

Or you can run a search through the available datasets:

print(utils.dataset_search(paradigm="imagery", min_subjects=6))
[<moabb.datasets.alex_mi.AlexMI object at 0x7f3dfda77640>, <moabb.datasets.bnci.bnci_2014.BNCI2014_001 object at 0x7f3dfda76ce0>, <moabb.datasets.bnci.bnci_2014.BNCI2014_002 object at 0x7f3dfda75150>, <moabb.datasets.bnci.bnci_2014.BNCI2014_004 object at 0x7f3dfda769e0>, <moabb.datasets.bnci.bnci_2015.BNCI2015_001 object at 0x7f3dfda755d0>, <moabb.datasets.bnci.bnci_2015.BNCI2015_004 object at 0x7f3dfda74160>, <moabb.datasets.bnci.bnci_2019.BNCI2019_001 object at 0x7f3dfda77b20>, <moabb.datasets.bnci.bnci_2020.BNCI2020_001 object at 0x7f3dfda76290>, <moabb.datasets.bnci.bnci_2022_001.BNCI2022_001 object at 0x7f3dfe301a80>, <moabb.datasets.bnci.bnci_2024_001.BNCI2024_001 object at 0x7f3dfda75e40>, <moabb.datasets.bnci.bnci_2025.BNCI2025_001 object at 0x7f3dfe301720>, <moabb.datasets.bnci.bnci_2025.BNCI2025_002 object at 0x7f3dfe301750>, <moabb.datasets.gigadb.Cho2017 object at 0x7f3dfdb4d630>, <moabb.datasets.dreyer2023.Dreyer2023 object at 0x7f3dfdb4f460>, <moabb.datasets.dreyer2023.Dreyer2023A object at 0x7f3dfdb4efb0>, <moabb.datasets.dreyer2023.Dreyer2023B object at 0x7f3dfdb4f640>, <moabb.datasets.dreyer2023.Dreyer2023C object at 0x7f3dfdb4c940>, <moabb.datasets.fake.FakeDataset object at 0x7f3dfdb4d000>, <moabb.datasets.mpi_mi.GrosseWentrup2009 object at 0x7f3dfd6ad3f0>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7f3dfd6ae230>, <moabb.datasets.liu2024.Liu2024 object at 0x7f3dfd6ade40>, <moabb.datasets.upper_limb.Ofner2017 object at 0x7f3dfdb4f310>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7f3dfd6acf70>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7f3dfd6adff0>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7f3dfd6ac790>, <moabb.datasets.bbci_eeg_fnirs.Shin2017B object at 0x7f3dfd6ae1a0>, <moabb.datasets.stieger2021.Stieger2021 object at 0x7f3dfd6ad600>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7f3dfd6adea0>]

Or you can simply make your own list (which we do here due to computational constraints)

datasets = [BNCI2014_001()]

Paradigm#

Paradigms define the events, epoch time, bandpass filter, and other preprocessing parameters. They have defaults that you can read in the documentation, or you can simply set them as we do here. A single paradigm defines a method for going from continuous data to trial data of a fixed size. To learn more, look at the tutorial Exploring Paradigms

fmin = 8
fmax = 35
paradigm = LeftRightImagery(fmin=fmin, fmax=fmax)
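
As an illustration only (this is not MOABB's implementation), the continuous-to-trials step a paradigm performs can be sketched in plain numpy with a hypothetical epoch helper:

```python
import numpy as np


def epoch(data, onsets, sfreq, tmin, tmax):
    """Cut fixed-length trials around event onsets.

    data: (n_channels, n_samples) continuous signal
    onsets: event onsets as sample indices
    """
    start = int(tmin * sfreq)
    stop = int(tmax * sfreq)
    return np.stack([data[:, o + start : o + stop] for o in onsets])


rng = np.random.default_rng(0)
continuous = rng.standard_normal((3, 1000))  # 3 channels, 1000 samples
trials = epoch(continuous, onsets=[100, 400, 700], sfreq=100, tmin=0.0, tmax=2.0)
print(trials.shape)  # (3, 3, 200): 3 trials x 3 channels x 200 samples
```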

Evaluation#

An evaluation defines how the training and test sets are chosen. This could be cross-validated within a single recording, or across days, sessions, or subjects. This is also the correct place to specify multiple threads.

evaluation = CrossSessionEvaluation(
    paradigm=paradigm, datasets=datasets, suffix="examples", overwrite=False
)
results = evaluation.process(pipelines)

2026-02-08 21:19:21,429 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 1 | 0train: Score 0.729
2026-02-08 21:19:21,550 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 1 | 1test: Score 0.715
2026-02-08 21:19:21,669 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 1 | 0train: Score 0.743
2026-02-08 21:19:21,806 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 1 | 1test: Score 0.715
2026-02-08 21:19:21,941 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 2 | 0train: Score 0.597
2026-02-08 21:19:22,059 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 2 | 1test: Score 0.521
2026-02-08 21:19:22,181 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 2 | 0train: Score 0.500
2026-02-08 21:19:22,314 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 2 | 1test: Score 0.500
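
The cross-session scheme above, train on one session and test on the other, is essentially a leave-one-group-out split where the group is the session. A minimal sketch on synthetic data (the shapes and labels are made up for illustration, not taken from MOABB):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((40, 6))  # 40 trials, 6 features
y = np.repeat([0, 1], 20)         # two classes
sessions = np.tile([0, 1], 20)    # session label for each trial

# One fold per held-out session: fit on session 0, score on session 1,
# then the reverse.
scores = cross_val_score(LDA(), X, y, groups=sessions, cv=LeaveOneGroupOut())
print(len(scores))  # 2
```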

Results are returned as a pandas DataFrame, and from here you can do as you want with them

print(results.head())
      score      time  ...  pipeline                  codecarbon_task_name
0  0.729167  0.011064  ...    AM+LDA  92491d41-8548-4801-ab08-4267a3f02f05
1  0.715278  0.010687  ...    AM+LDA  e845872e-7c4f-48ab-bae7-db65c865b1de
2  0.597222  0.010505  ...    AM+LDA  356802ad-46b1-418d-8a4a-c48d133c4e5c
3  0.520833  0.008078  ...    AM+LDA  99d22294-627a-43f3-afe3-a8c565d3a83b
4  0.743056  0.130285  ...    AM+SVM  77d70942-a839-4d7e-8c42-8c3bdfbbc3f1

[5 rows x 13 columns]
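
From here you can slice and aggregate the DataFrame however you like; for instance, a toy table with the same pipeline/score columns can be averaged per pipeline (illustrative numbers, not the run above):

```python
import pandas as pd

# Toy results table mimicking the pipeline/score columns shown above.
results = pd.DataFrame(
    {
        "pipeline": ["AM+LDA", "AM+LDA", "AM+SVM", "AM+SVM"],
        "score": [0.73, 0.60, 0.74, 0.50],
    }
)

# Mean score per pipeline, a common first step in the analysis.
print(results.groupby("pipeline")["score"].mean())
```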

Total running time of the script: (0 minutes 22.212 seconds)

Estimated memory usage: 806 MB

Gallery generated by Sphinx-Gallery