Benchmarking with MOABB
This example shows how to use MOABB to benchmark a set of pipelines on all available datasets. Here, we use only one dataset to keep the computation time low, but the benchmark is designed to scale easily to many datasets.
# Authors: Sylvain Chevallier <sylvain.chevallier@universite-paris-saclay.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
from moabb import benchmark, set_log_level
from moabb.analysis.plotting import score_plot
from moabb.paradigms import LeftRightImagery
set_log_level("info")
Loading the pipelines
The ML pipelines used in the benchmark are defined in YAML files, following a simple format. This makes it easy to share and reuse pipelines across benchmarks and to reproduce state-of-the-art results.
MOABB comes with a complete list of pipelines that cover most of the successful approaches in the literature. You can find them in the pipelines folder. For this example, we use a folder with only 2 pipelines, to keep the computation time low.
Below is an example of a pipeline defined in YAML. It specifies the paradigms on which it can be used, the original publication, and the processing steps, expressed with a scikit-learn API. In this case, for the CSP + SVM pipeline, covariance matrices are estimated to compute CSP spatial filters, and a linear SVM is then trained on the CSP-filtered signals.
name: CSP + SVM
paradigms:
  - LeftRightImagery
citations:
  - https://doi.org/10.1007/BF01129656
  - https://doi.org/10.1109/MSP.2008.4408441
pipeline:
  - name: Covariances
    from: pyriemann.estimation
    parameters:
      estimator: oas
  - name: CSP
    from: pyriemann.spatialfilters
    parameters:
      nfilter: 6
  - name: SVC
    from: sklearn.svm
    parameters:
      kernel: "linear"
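Each entry of the pipeline section names a scikit-learn-compatible class and its parameters. For reference, here is a minimal sketch of the equivalent pipeline written directly in Python (assuming pyriemann and scikit-learn are installed; this is only meant to illustrate the mapping, not how the benchmark loads pipelines):

from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

from pyriemann.estimation import Covariances
from pyriemann.spatialfilters import CSP

# Covariance estimation (OAS shrinkage) -> CSP spatial filtering -> linear SVM,
# mirroring the three steps of the YAML definition above.
csp_svm = make_pipeline(
    Covariances(estimator="oas"),
    CSP(nfilter=6),
    SVC(kernel="linear"),
)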
The sample_pipelines folder contains a second pipeline: a logistic regression performed in the tangent space, using Riemannian geometry.
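As a rough Python equivalent, and only as a sketch (the hyperparameters in the shipped YAML file may differ), such a tangent-space pipeline can be assembled as:

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

# Covariances -> projection to the Riemannian tangent space -> logistic regression
tangent_space_lr = make_pipeline(
    Covariances(estimator="oas"),
    TangentSpace(),
    LogisticRegression(),
)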
Selecting the datasets (optional)
If you want to restrict your benchmark to a subset of datasets, you can use the include_datasets and exclude_datasets arguments. You need to provide either the dataset objects or the dataset codes. To get the list of available dataset codes for a given paradigm, you can use the following command:
paradigm = LeftRightImagery()
for d in paradigm.datasets:
    print(d.code)
BNCI2014-001
BNCI2014-004
Cho2017
GrosseWentrup2009
Lee2019-MI
Liu2024
PhysionetMotorImagery
Schirrmeister2017
Shin2017A
Stieger2021
Weibo2014
Zhou2016
In this example, we will use only the last dataset, ‘Zhou 2016’.
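Both forms are accepted by include_datasets: the dataset code as a string, or an instance of the dataset class. A minimal sketch (the Zhou2016 class lives in moabb.datasets; the string form is the one used in the call below):

from moabb.datasets import Zhou2016

include_by_code = ["Zhou2016"]     # dataset code as a string
include_by_object = [Zhou2016()]   # dataset instance; selects the same dataset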
Running the benchmark
The benchmark is run using the benchmark function. You need to specify the folder containing the pipelines, the kind of evaluation, and the paradigm to use. By default, the benchmark will use all available datasets for all paradigms listed in the pipelines. You can restrict it to specific evaluations and paradigms using the evaluations and paradigms arguments.
To save computation time, the results are cached. If you want to re-run the benchmark, set the overwrite argument to True.
It is possible to indicate the folder in which to cache the results and the one in which to save the analysis & figures. By default, the results are saved in the results folder, and the analysis & figures are saved in the benchmark folder.
results = benchmark(
    pipelines="./sample_pipelines/",
    evaluations=["WithinSession"],
    paradigms=["LeftRightImagery"],
    include_datasets=["Zhou2016"],
    results="./results/",
    overwrite=False,
    plot=False,
    output="./benchmark/",
)
    dataset     evaluation          pipeline  avg score
0  Zhou2016  WithinSession         CSP + SVM   0.932315
1  Zhou2016  WithinSession  Tangent Space LR   0.941601
The benchmark prints a summary of the results. Detailed results are saved in a pandas DataFrame and can be used to generate figures. The analysis & figures are saved in the benchmark folder.
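Before plotting, the returned DataFrame can be inspected directly, for example to average the per-session scores. A minimal sketch, assuming the usual MOABB result columns (dataset, pipeline, score):

# Average score per dataset and pipeline, computed over subjects and sessions
print(results.groupby(["dataset", "pipeline"])["score"].mean())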
score_plot(results)
plt.show()
Total running time of the script: (0 minutes 30.701 seconds)
Estimated memory usage: 326 MB