Tutorial 0: Getting Started#
This tutorial takes you through a basic working example of how to use this codebase, including all the different components, up to generating results. If you'd like to know about the statistics and plotting, see the next tutorial.
# Authors: Vinay Jayaram <vinayjayaram13@gmail.com>
#
# License: BSD (3-clause)
Introduction#
To use the codebase you need an evaluation and a paradigm, some algorithms, and a list of datasets to run it all on. You can find those in the following submodules; detailed tutorials are given for each of them.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
If you would like to specify the logging level when it is running, you can use the standard Python logging commands through the top-level moabb module.
import moabb
from moabb.datasets import BNCI2014_001, utils
from moabb.evaluations import CrossSessionEvaluation
from moabb.paradigms import LeftRightImagery
from moabb.pipelines.features import LogVariance
moabb.set_log_level("info")
In order to create pipelines within a script, you will likely need at least the make_pipeline function. Pipelines can also be specified via a .yml file. Here we will build a couple of pipelines in code just for convenience.
Create pipelines#
We create two pipelines: channel-wise log variance followed by LDA, and channel-wise log variance followed by a cross-validated SVM (note that cross-validation via scikit-learn cannot be described in a .yml file). Later in the process, the pipelines need to be stored in a dictionary whose keys are the pipeline names and whose values are the Pipeline objects:
pipelines = {}
pipelines["AM+LDA"] = make_pipeline(LogVariance(), LDA())
parameters = {"C": np.logspace(-2, 2, 10)}
clf = GridSearchCV(SVC(kernel="linear"), parameters)
pipe = make_pipeline(LogVariance(), clf)
pipelines["AM+SVM"] = pipe
Datasets#
Datasets can be specified in many ways. Each paradigm has a datasets property that returns the datasets appropriate for that paradigm:
print(LeftRightImagery().datasets)
[<moabb.datasets.bnci.BNCI2014_001 object at 0x7f902ad12a40>, <moabb.datasets.bnci.BNCI2014_004 object at 0x7f902ad10160>, <moabb.datasets.beetl.Beetl2021_A object at 0x7f902ad11780>, <moabb.datasets.beetl.Beetl2021_B object at 0x7f902ac72bf0>, <moabb.datasets.gigadb.Cho2017 object at 0x7f902ad105b0>, <moabb.datasets.dreyer2023.Dreyer2023 object at 0x7f902ac714e0>, <moabb.datasets.dreyer2023.Dreyer2023A object at 0x7f902ad12e30>, <moabb.datasets.dreyer2023.Dreyer2023B object at 0x7f902ac70c10>, <moabb.datasets.dreyer2023.Dreyer2023C object at 0x7f902ac71330>, <moabb.datasets.mpi_mi.GrosseWentrup2009 object at 0x7f902ac706a0>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7f9029d92560>, <moabb.datasets.liu2024.Liu2024 object at 0x7f9029d92110>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7f9029d92a70>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7f9029d926b0>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7f902ad116f0>, <moabb.datasets.stieger2021.Stieger2021 object at 0x7f9029d93730>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7f9029d90550>, <moabb.datasets.Zhou2016.Zhou2016 object at 0x7f9029d91f00>]
Or you can run a search through the available datasets:
print(utils.dataset_search(paradigm="imagery", min_subjects=6))
[<moabb.datasets.alex_mi.AlexMI object at 0x7f902ad12e30>, <moabb.datasets.bnci.BNCI2014_001 object at 0x7f902ad106a0>, <moabb.datasets.bnci.BNCI2014_002 object at 0x7f902b911630>, <moabb.datasets.bnci.BNCI2014_004 object at 0x7f902b912590>, <moabb.datasets.bnci.BNCI2015_001 object at 0x7f902b911120>, <moabb.datasets.bnci.BNCI2015_004 object at 0x7f902b9135e0>, <moabb.datasets.gigadb.Cho2017 object at 0x7f902b9125c0>, <moabb.datasets.dreyer2023.Dreyer2023 object at 0x7f9029d92a70>, <moabb.datasets.dreyer2023.Dreyer2023A object at 0x7f902b9117b0>, <moabb.datasets.dreyer2023.Dreyer2023B object at 0x7f9029d92710>, <moabb.datasets.dreyer2023.Dreyer2023C object at 0x7f9029d91ae0>, <moabb.datasets.fake.FakeDataset object at 0x7f9029d91f00>, <moabb.datasets.mpi_mi.GrosseWentrup2009 object at 0x7f9029d90550>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7f9029d93730>, <moabb.datasets.liu2024.Liu2024 object at 0x7f9029d92560>, <moabb.datasets.upper_limb.Ofner2017 object at 0x7f9029d92110>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7f902a16b880>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7f902a16a650>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7f902a16a350>, <moabb.datasets.bbci_eeg_fnirs.Shin2017B object at 0x7f902a16b3a0>, <moabb.datasets.stieger2021.Stieger2021 object at 0x7f902a168250>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7f902a16b4f0>]
Or you can simply make your own list (which we do here due to computational constraints):
dataset = BNCI2014_001()
dataset.subject_list = dataset.subject_list[:2]
datasets = [dataset]
Paradigm#
Paradigms define the events, epoch time, band-pass filter, and other preprocessing parameters. They have defaults that you can read about in the documentation, or you can simply set them as we do here. A single paradigm defines a method for going from continuous data to trial data of a fixed size. To learn more, see the Exploring Paradigms tutorial.
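As a minimal sketch of what that looks like (the 8-35 Hz band below is an illustrative choice rather than a prescribed value), the paradigm used in the rest of this tutorial can be created like so:
# Left- vs right-hand motor imagery, band-pass filtered between fmin and fmax Hz
# (8 and 35 Hz are illustrative values; see the paradigm documentation for the defaults)
fmin = 8
fmax = 35
paradigm = LeftRightImagery(fmin=fmin, fmax=fmax)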
Evaluation#
An evaluation defines how the training and test sets are chosen. This could be cross-validated within a single recording, or across days, sessions, or subjects. This is also the right place to specify multiple threads.
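Other split granularities live in moabb.evaluations. As a rough sketch (reusing the paradigm and dataset list defined above, and assuming the n_jobs argument that MOABB evaluations expose for parallelism), a within-session evaluation could be configured analogously:
from moabb.evaluations import WithinSessionEvaluation

# Within-session: cross-validation folds are drawn from the same recording session.
# n_jobs controls parallelism; 1 keeps everything in a single process.
within_evaluation = WithinSessionEvaluation(
    paradigm=paradigm, datasets=datasets, suffix="examples", overwrite=False, n_jobs=1
)
Here we stick with the cross-session split, which trains on one session and tests on the other: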
evaluation = CrossSessionEvaluation(
    paradigm=paradigm, datasets=datasets, suffix="examples", overwrite=False
)
results = evaluation.process(pipelines)
BNCI2014-001-CrossSession: 50%|█████ | 1/2 [00:05<00:05, 5.23s/it]
BNCI2014-001-CrossSession: 100%|██████████| 2/2 [00:09<00:00, 4.97s/it]
2025-08-04 16:03:09,460 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 1 | 0train: Score 0.786
2025-08-04 16:03:09,576 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 1 | 1test: Score 0.802
2025-08-04 16:03:09,691 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 1 | 0train: Score 0.797
2025-08-04 16:03:09,822 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 1 | 1test: Score 0.774
2025-08-04 16:03:09,951 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 2 | 0train: Score 0.577
2025-08-04 16:03:10,065 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 2 | 1test: Score 0.499
2025-08-04 16:03:10,181 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 2 | 0train: Score 0.551
2025-08-04 16:03:10,309 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 2 | 1test: Score 0.471
Results are returned as a pandas DataFrame, and from here you can process them however you like:
print(results.head())
score time samples ... n_sessions dataset pipeline
0 0.797068 0.145860 144.0 ... 2 BNCI2014-001 AM+SVM
1 0.773920 0.140112 144.0 ... 2 BNCI2014-001 AM+SVM
2 0.550733 0.243366 144.0 ... 2 BNCI2014-001 AM+SVM
3 0.471451 0.161490 144.0 ... 2 BNCI2014-001 AM+SVM
4 0.786458 0.029756 144.0 ... 2 BNCI2014-001 AM+LDA
[5 rows x 9 columns]
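For example, a quick per-pipeline summary can be computed with ordinary pandas operations (a minimal sketch using the score and pipeline columns shown above):
# Average score of each pipeline across subjects and sessions
print(results.groupby("pipeline")["score"].mean())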
Total running time of the script: (0 minutes 12.266 seconds)
Estimated memory usage: 776 MB