Benchmarking with MOABB with Grid Search

This example shows how to use MOABB to benchmark a set of pipelines on all available datasets. In particular, we run a grid search to select the best hyperparameters for some pipelines and save the grid search results. For this example, we use only one dataset to keep the computation time low, but the benchmark is designed to scale easily to many datasets.

# Authors: Igor Carrara <igor.carrara@inria.fr>
#
# License: BSD (3-clause)

import matplotlib.pyplot as plt

from moabb import benchmark, set_log_level
from moabb.analysis.plotting import score_plot


set_log_level("info")

In this example, we will use only the dataset ‘Zhou 2016’.

Running the benchmark

The benchmark is run using the benchmark function. You need to specify the folder containing the pipelines, the kind of evaluation, and the paradigm to use. By default, the benchmark uses all available datasets for all paradigms listed in the pipelines. You can restrict it to specific evaluations and paradigms using the evaluations and paradigms arguments.

To save computation time, the results are cached. If you want to re-run the benchmark, you can set the overwrite argument to True.

It is possible to indicate the folder in which to cache the results and the one in which to save the analysis & figures. By default, the results are saved in the results folder, and the analysis & figures are saved in the benchmark folder.

# In the results folder we will save the grid search evaluation.
# When writing a pipeline in its yml file, we need to specify the parameters
# that we want to test, in the format pipeline-name__estimator-name_parameter.
# Note that pipeline and estimator names MUST be in lower case
# (no capital letters allowed).
# If the grid search has already been run, the previous results are loaded.
# A sketch of this naming convention is given after the benchmark output below.

results = benchmark(
    pipelines="./pipelines_grid/",
    evaluations=["WithinSession"],
    paradigms=["LeftRightImagery"],
    include_datasets=["Zhou2016"],
    results="./results/",
    overwrite=False,
    plot=False,
    output="./benchmark/",
)
Zhou2016-WithinSession: 100%|██████████| 4/4 [01:27<00:00, 21.89s/it]
    dataset     evaluation   pipeline  avg score
0  Zhou2016  WithinSession    EN Grid   0.943531
1  Zhou2016  WithinSession  CSP + LDA   0.937954
2  Zhou2016  WithinSession         EN   0.943619
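
The parameter names used for the grid search follow scikit-learn's double-underscore convention for addressing the parameters of a named step inside a Pipeline. Below is a minimal, hypothetical scikit-learn sketch of that convention, not MOABB's exact yml schema; the CSP + LDA pipeline, the step names, and the nfilter grid are illustrative assumptions.

from pyriemann.estimation import Covariances
from pyriemann.spatialfilters import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Step names are lower case, matching the requirement mentioned above.
pipe = Pipeline(
    [
        ("covariances", Covariances(estimator="oas")),
        ("csp", CSP(nfilter=6)),
        ("lda", LinearDiscriminantAnalysis()),
    ]
)

# "csp__nfilter" addresses the nfilter parameter of the step named "csp".
param_grid = {"csp__nfilter": [2, 4, 6, 8]}
search = GridSearchCV(pipe, param_grid, cv=3, scoring="roc_auc")
# search.fit(X, y) on epoched EEG data X and labels y would then select
# the best nfilter by cross-validation.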

The benchmark prints a summary of the results. Detailed results are saved in a pandas DataFrame, and can be used to generate figures. The analysis & figures are saved in the benchmark folder.

score_plot(results)
plt.show()
[Figure: scores per dataset and algorithm]
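
Beyond score_plot, the detailed results can be summarized directly with pandas. A minimal sketch, assuming the dataframe exposes the usual MOABB columns such as "dataset", "pipeline" and "score":

# Mean and spread of the within-session scores per dataset and pipeline.
summary = (
    results.groupby(["dataset", "pipeline"])["score"]
    .agg(["mean", "std"])
    .sort_values("mean", ascending=False)
)
print(summary)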

Total running time of the script: (1 minute 28.633 seconds)

Estimated memory usage: 606 MB
