
Benchmarking with MOABB with Grid Search

This example shows how to use MOABB to benchmark a set of pipelines on all available datasets. In particular, we run a grid search to select the best hyperparameters of some pipelines and save the grid-search results. For this example, we use only one dataset to keep the computation time low, but the benchmark is designed to scale easily to many datasets.

# Authors: Igor Carrara <igor.carrara@inria.fr>
#
# License: BSD (3-clause)

import matplotlib.pyplot as plt

from moabb import benchmark, set_log_level
from moabb.analysis.chance_level import chance_by_chance
from moabb.analysis.plotting import score_plot
from moabb.paradigms import LeftRightImagery


set_log_level("info")

In this example, we will use only the dataset ‘Zhou 2016’.

Running the benchmark

The benchmark is run using the benchmark function. You need to specify the folder containing the pipelines, the kind of evaluation, and the paradigm to use. By default, the benchmark runs all available datasets for all paradigms listed in the pipelines. You can restrict the run to specific evaluations and paradigms using the evaluations and paradigms arguments.

To save computation time, the results are cached. If you want to re-run the benchmark, you can set the overwrite argument to True.

It is possible to indicate the folder to cache the results and the one to save the analysis & figures. By default, the results are saved in the results folder, and the analysis & figures are saved in the benchmark folder.

# The grid-search evaluation is saved in the results folder.
# When writing the pipeline in a YAML file, we need to specify the parameters
# that we want to test, in the format pipeline-name__estimator-name_parameter.
# Note that pipeline and estimator names MUST be in lower case (no capital
# letters allowed).
# If the grid search has already been run, the previous results are loaded.
#
# Optional: CodeCarbon Configuration for GridSearch Benchmarks
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Grid search can be computationally expensive. You may want to track emissions
# during the optimization process. Configure CodeCarbon as needed:
#
# .. code-block:: python
#
#     codecarbon_config = {
#         'tracking_mode': 'machine',
#         'save_to_file': True,
#         'output_file': 'gridsearch_emissions.csv',
#         'log_level': 'info'
#     }
#
# With ``tracking_mode='machine'``, CodeCarbon will track the entire machine's
# power consumption, which is useful for benchmarking.
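The naming convention above mirrors scikit-learn's grid-search parameter naming, where a double underscore addresses a parameter of a named pipeline step. A minimal sketch of that convention (the step and parameter names here are illustrative, not taken from the pipelines folder used below):

```python
# Sketch of scikit-learn's grid-search parameter naming convention.
# Step and parameter names below are illustrative.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# The step name ("lda") must match the prefix used in the parameter grid.
pipe = Pipeline([("lda", LinearDiscriminantAnalysis())])

# "lda__solver" targets the ``solver`` parameter of the ``lda`` step.
param_grid = {"lda__solver": ["svd", "lsqr"]}
search = GridSearchCV(pipe, param_grid, cv=3)
```

Any parameter exposed by a step can be addressed this way; `pipe.get_params()` lists all valid names.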

results = benchmark(
    pipelines="./pipelines_grid/",
    evaluations=["WithinSession"],
    paradigms=["LeftRightImagery"],
    include_datasets=["Zhou2016"],
    results="./results/",
    overwrite=False,
    output="./benchmark/",
    suffix="benchmark_grid",
    plot=False,
)
The paradigms being run are dict_keys(['LeftRightImagery'])
Datasets considered for LeftRightImagery paradigm ['Zhou2016']

Zhou2016-WithinSession: 100%|██████████| 4/4 [07:32<00:00, 113.16s/it]
/home/runner/work/moabb/moabb/moabb/analysis/results.py:179: H5pyDeprecationWarning: Creating a dataset without passing data or dtype is deprecated. Pass an explicit dtype. Using dtype='f4' will keep the current default behaviour.
  dset.create_dataset(
    dataset     evaluation   pipeline  avg score  carbon emission
0  Zhou2016  WithinSession         EN   0.949629         0.060919
1  Zhou2016  WithinSession    EN Grid   0.951031         0.139296
2  Zhou2016  WithinSession  CSP + LDA   0.930930         0.095691

The benchmark prints a summary of the results. The detailed results are saved in a pandas DataFrame and can be used to generate figures. The analysis & figures are saved in the benchmark folder.
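Because the detailed results are a plain pandas DataFrame, they can be sliced and aggregated directly. A sketch, using a toy frame standing in for the one returned by benchmark (the real frame has one row per subject/session fold; check the exact column names against the returned object):

```python
# Hedged sketch: aggregate benchmark results by pipeline.
# The toy frame below stands in for the DataFrame returned by ``benchmark``.
import pandas as pd

results = pd.DataFrame(
    {
        "dataset": ["Zhou2016"] * 4,
        "pipeline": ["EN", "EN", "CSP + LDA", "CSP + LDA"],
        "score": [0.95, 0.94, 0.93, 0.92],
    }
)

# Mean score per pipeline, best first.
summary = results.groupby("pipeline")["score"].mean().sort_values(ascending=False)
print(summary)
```

The same pattern extends to grouping by dataset or session when benchmarking across several datasets.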

Compute chance levels for the dataset used in the benchmark.

paradigm = LeftRightImagery()
chance_levels = chance_by_chance(results, alpha=[0.05, 0.01])

score_plot(results, chance_level=chance_levels)
plt.show()
[Figure: score plot of the benchmark results for the grid-search example]
/home/runner/work/moabb/moabb/moabb/analysis/plotting.py:420: UserWarning: The palette list has more values (6) than needed (3), which may not be intended.
  sea.stripplot(

Total running time of the script: (7 minutes 37.634 seconds)


© Copyright 2018-2026 MOABB contributors.