Tutorial 0: Getting Started
This tutorial takes you through a basic working example of how to use this codebase, including all the different components, up to generating results. If you'd like to know about the statistics and plotting, see the next tutorial.
# Authors: Vinay Jayaram <vinayjayaram13@gmail.com>
#
# License: BSD (3-clause)
Introduction
To use the codebase you need an evaluation and a paradigm, some algorithms, and a list of datasets to run it all on. You can find those in the following submodules; detailed tutorials are given for each of them.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
If you would like to control the logging level while the code is running, you can use the standard Python logging commands through the top-level moabb module.
import moabb
from moabb.datasets import BNCI2014_001, utils
from moabb.evaluations import CrossSessionEvaluation
from moabb.paradigms import LeftRightImagery
from moabb.pipelines.features import LogVariance
In order to create pipelines within a script, you will likely need at least the make_pipeline function. Pipelines can also be specified via a .yml file. Here we will make a couple of pipelines just for convenience.
moabb.set_log_level("info")
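Other verbosity levels can be set the same way; a minimal sketch (assuming set_log_level also accepts the standard "warning" and "error" level names, which is worth verifying in your installed version):
# Hypothetical: hide the per-trial INFO messages printed during evaluation,
# then restore the verbose setting used in the rest of this tutorial.
moabb.set_log_level("warning")
moabb.set_log_level("info")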
Create pipelines
We create two pipelines: channel-wise log variance followed by LDA, and channel-wise log variance followed by a cross-validated SVM (note that a cross-validation via scikit-learn cannot be described in a .yml file). For the later steps, the pipelines need to be in a dictionary where the key is the name of the pipeline and the value is the Pipeline object.
pipelines = {}
pipelines["AM+LDA"] = make_pipeline(LogVariance(), LDA())
parameters = {"C": np.logspace(-2, 2, 10)}
clf = GridSearchCV(SVC(kernel="linear"), parameters)
pipe = make_pipeline(LogVariance(), clf)
pipelines["AM+SVM"] = pipe
Datasets
Datasets can be specified in many ways: each paradigm has a 'datasets' property which returns the datasets that are appropriate for that paradigm.
print(LeftRightImagery().datasets)
[<moabb.datasets.bnci.BNCI2014_001 object at 0x7f975d2fe890>, <moabb.datasets.bnci.BNCI2014_004 object at 0x7f975d2fccd0>, <moabb.datasets.beetl.Beetl2021_A object at 0x7f975d2fcf40>, <moabb.datasets.beetl.Beetl2021_B object at 0x7f975d2fd000>, <moabb.datasets.gigadb.Cho2017 object at 0x7f975d2fc910>, <moabb.datasets.dreyer2023.Dreyer2023 object at 0x7f975d2fcee0>, <moabb.datasets.dreyer2023.Dreyer2023A object at 0x7f975d2fd0c0>, <moabb.datasets.dreyer2023.Dreyer2023B object at 0x7f975d2fe6e0>, <moabb.datasets.dreyer2023.Dreyer2023C object at 0x7f975d2fd210>, <moabb.datasets.mpi_mi.GrosseWentrup2009 object at 0x7f975d67ab60>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7f975f239c00>, <moabb.datasets.liu2024.Liu2024 object at 0x7f975f23bb20>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7f975d4b4eb0>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7f975d4b4670>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7f975f23b9d0>, <moabb.datasets.stieger2021.Stieger2021 object at 0x7f975d4b4c10>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7f975d67bac0>, <moabb.datasets.Zhou2016.Zhou2016 object at 0x7f975d4b4ac0>]
Or you can run a search through the available datasets:
print(utils.dataset_search(paradigm="imagery", min_subjects=6))
[<moabb.datasets.alex_mi.AlexMI object at 0x7f975e2ef1c0>, <moabb.datasets.bnci.BNCI2014_001 object at 0x7f975e2eff70>, <moabb.datasets.bnci.BNCI2014_002 object at 0x7f975e2ec0d0>, <moabb.datasets.bnci.BNCI2014_004 object at 0x7f975e2ede40>, <moabb.datasets.bnci.BNCI2015_001 object at 0x7f975e2ef3a0>, <moabb.datasets.bnci.BNCI2015_004 object at 0x7f975e2ec730>, <moabb.datasets.gigadb.Cho2017 object at 0x7f975e2ed720>, <moabb.datasets.dreyer2023.Dreyer2023 object at 0x7f975e2ed0f0>, <moabb.datasets.dreyer2023.Dreyer2023A object at 0x7f975e2ec280>, <moabb.datasets.dreyer2023.Dreyer2023B object at 0x7f975e2ec850>, <moabb.datasets.dreyer2023.Dreyer2023C object at 0x7f975e2ed3f0>, <moabb.datasets.fake.FakeDataset object at 0x7f975e2eee90>, <moabb.datasets.mpi_mi.GrosseWentrup2009 object at 0x7f975e2ec6a0>, <moabb.datasets.Lee2019.Lee2019_MI object at 0x7f975e2eed10>, <moabb.datasets.liu2024.Liu2024 object at 0x7f975e2efb20>, <moabb.datasets.upper_limb.Ofner2017 object at 0x7f975e2ecf70>, <moabb.datasets.physionet_mi.PhysionetMI object at 0x7f975e2ed390>, <moabb.datasets.schirrmeister2017.Schirrmeister2017 object at 0x7f975e2ec3d0>, <moabb.datasets.bbci_eeg_fnirs.Shin2017A object at 0x7f975e2ec6d0>, <moabb.datasets.bbci_eeg_fnirs.Shin2017B object at 0x7f975e2eebc0>, <moabb.datasets.stieger2021.Stieger2021 object at 0x7f975e2ec1c0>, <moabb.datasets.Weibo2014.Weibo2014 object at 0x7f975d679750>]
Or you can simply make your own list (which we do here due to computational constraints)
dataset = BNCI2014_001()
dataset.subject_list = dataset.subject_list[:2]
datasets = [dataset]
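If you want to inspect the underlying recordings before running an evaluation, the dataset object exposes them directly; a minimal sketch (assuming the get_data accessor, which returns a nested dictionary of MNE Raw objects indexed by subject, session and run):
# Download (if needed) and load the raw data of the first subject only.
data = dataset.get_data(subjects=[1])
# data[subject][session][run] -> mne.io.Raw; list the sessions of subject 1.
print(list(data[1].keys()))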
Paradigm
Paradigms define the events, epoch time, bandpass filter, and other preprocessing parameters. They have defaults that you can read about in the documentation, or you can simply set them as we do here. A single paradigm defines a method for going from continuous data to trial data of a fixed size. To learn more, look at the tutorial Exploring Paradigms.
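The evaluation below expects such a paradigm object; here is a minimal sketch of how one can be built (the 8-35 Hz band is a common motor-imagery choice, not a value prescribed by this tutorial):
# Left- vs right-hand imagery, band-pass filtered between fmin and fmax Hz.
fmin = 8
fmax = 35
paradigm = LeftRightImagery(fmin=fmin, fmax=fmax)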
Evaluation
An evaluation defines how the training and test sets are chosen. This could be cross-validation within a single recording, or across days, sessions, or subjects. This is also the correct place to specify multiple threads.
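For example (a minimal sketch; the n_jobs argument and the WithinSessionEvaluation variant are used here only for illustration, so check the signatures in your installed version), parallel processing could be requested like this:
from moabb.evaluations import WithinSessionEvaluation

# Hypothetical illustration: within-session cross-validation with two
# parallel jobs (n_jobs assumed to follow the scikit-learn convention).
parallel_evaluation = WithinSessionEvaluation(
    paradigm=paradigm, datasets=datasets, n_jobs=2, overwrite=False
)
For this tutorial we keep things simple and run a cross-session evaluation on our dataset list: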
evaluation = CrossSessionEvaluation(
    paradigm=paradigm, datasets=datasets, suffix="examples", overwrite=False
)
results = evaluation.process(pipelines)
/home/runner/work/moabb/moabb/moabb/analysis/results.py:93: RuntimeWarning: Setting non-standard config type: "MOABB_RESULTS"
  set_config("MOABB_RESULTS", osp.join(osp.expanduser("~"), "mne_data"))
2025-07-24 14:37:13,758 INFO MainThread moabb.evaluations.base Processing dataset: BNCI2014-001
MNE_DATA is not already configured. It will be set to default location in the home directory - /home/runner/mne_data
All datasets will be downloaded to this location, if anything is already downloaded, please move manually to this location
/home/runner/work/moabb/moabb/moabb/datasets/download.py:57: RuntimeWarning: Setting non-standard config type: "MNE_DATASETS_BNCI_PATH"
  set_config(key, get_config("MNE_DATA"))
2025-07-24 14:37:18,660 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 1 | 0train: Score 0.786
2025-07-24 14:37:18,796 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 1 | 1test: Score 0.802
2025-07-24 14:37:19,049 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 1 | 0train: Score 0.797
2025-07-24 14:37:19,310 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 1 | 1test: Score 0.774
2025-07-24 14:37:23,695 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 2 | 0train: Score 0.577
2025-07-24 14:37:23,829 INFO MainThread moabb.evaluations.base AM+LDA | BNCI2014-001 | 2 | 1test: Score 0.499
2025-07-24 14:37:24,188 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 2 | 0train: Score 0.551
2025-07-24 14:37:24,475 INFO MainThread moabb.evaluations.base AM+SVM | BNCI2014-001 | 2 | 1test: Score 0.471
BNCI2014-001-CrossSession: 100%|██████████| 2/2 [00:10<00:00, 5.42s/it]
Results are returned as a pandas DataFrame, and from here you can process them however you like.
print(results.head())
score time samples ... n_sessions dataset pipeline
0 0.797068 0.144498 144.0 ... 2 BNCI2014-001 AM+SVM
1 0.773920 0.137100 144.0 ... 2 BNCI2014-001 AM+SVM
2 0.550733 0.248049 144.0 ... 2 BNCI2014-001 AM+SVM
3 0.471451 0.163396 144.0 ... 2 BNCI2014-001 AM+SVM
4 0.786458 0.026378 144.0 ... 2 BNCI2014-001 AM+LDA
[5 rows x 9 columns]
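For instance, a quick per-pipeline summary with standard pandas operations (column names as printed above):
# Average score of each pipeline across subjects and sessions.
print(results.groupby("pipeline")["score"].mean())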
Total running time of the script: (0 minutes 12.167 seconds)
Estimated memory usage: 773 MB