Mother of all BCI Benchmarks


Build a comprehensive benchmark of popular Brain-Computer Interface (BCI) algorithms applied on an extensive list of freely available EEG datasets.


This is an open science project that may evolve depending on the needs of the community.



Thank you for visiting the Mother of all BCI Benchmarks documentation and its associated GitHub repository.

This document is a hub to give you some information about the project. Jump straight to one of the sections below, or just scroll down to find out more.

The problem

Brain-Computer Interfaces allow users to interact with a computer using brain signals. In this project, we focus mostly on electroencephalographic (EEG) signals, a very active research domain with worldwide scientific contributions. Still:

  • Reproducible Research in BCI has a long way to go.

  • While many BCI datasets are made freely available, researchers rarely publish their code, and reproducing the results needed to benchmark new algorithms turns out to be trickier than it should be.

  • Performance can be significantly affected by the parameters of the preprocessing steps, the toolboxes used, and implementation “tricks” that are almost never reported in the literature.

As a result, there is no comprehensive benchmark of BCI algorithms, and newcomers spend a tremendous amount of time browsing the literature to find out which algorithm works best and on which dataset.

The solution

The Mother of all BCI Benchmarks (MOABB) allows you to:

  • Build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets.

  • Rely on the code available on GitHub as a reference point for future algorithmic developments.

  • Rank and promote algorithms on a website, providing a clear picture of the different solutions available in the field.

This project will be successful when we read in an abstract “ … the proposed method obtained a score of 89% on the MOABB (Mother of All BCI Benchmarks), outperforming the state of the art by 5% …”.


First, you could take a look at our tutorials, which cover the most important concepts and use cases. We also have a gallery of examples available.
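To give a flavor of what a benchmark run measures, here is a minimal, self-contained sketch of the within-session evaluation idea that MOABB automates: train and score a classifier on each session of each subject separately, using cross-validation inside the session. All names and data below are synthetic placeholders (a nearest-class-mean classifier on random features), not the MOABB API; see the tutorials for the real thing.

```python
import numpy as np

rng = np.random.default_rng(42)

def nearest_mean_fit(X, y):
    """Class means as a minimal stand-in 'classifier'."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, X):
    classes = np.array(list(model))
    means = np.stack([model[c] for c in classes])
    # Squared Euclidean distance of each trial to each class mean.
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def within_session_score(X, y, n_folds=5):
    """k-fold accuracy inside one session, evaluated for each
    (subject, session) pair separately."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = nearest_mean_fit(X[train], y[train])
        accs.append((nearest_mean_predict(model, X[test]) == y[test]).mean())
    return float(np.mean(accs))

# Synthetic two-class "EEG features": 60 trials x 8 features per session.
scores = {}
for subject in (1, 2):
    for session in ("0", "1"):
        y = rng.integers(0, 2, size=60)
        X = rng.normal(size=(60, 8)) + 1.5 * y[:, None]  # class-dependent shift
        scores[(subject, session)] = within_session_score(X, y)

for key, acc in sorted(scores.items()):
    print(key, round(acc, 2))
```

In MOABB, the same pattern is applied with real datasets, paradigms, and scikit-learn pipelines in place of the toy classifier, and the per-session scores are collected into a results table for comparison across algorithms.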

Core Team

This project is under the umbrella of NeuroTechX, the international community for NeuroTech enthusiasts.

The project is currently maintained by:

Sylvain Chevallier Bruno Aristimunha Igor Carrara Pierre Guetschel Sara Sedlar

The Mother of all BCI Benchmarks was founded by Alexander Barachant and Vinay Jayaram, who are experts in the field of Brain-Computer Interfaces (BCI). Both currently work as research scientists.

Alexander Barachant Vinay Jayaram


MOABB is a community project, and we are always grateful to all of our contributors!

Special acknowledgment to the extra MOABB contributors:

Pedro L. C. Rodrigues

What do we need?

You! In whatever way you can help.

We need expertise in programming, user experience, software sustainability, documentation, technical writing, and project management.

We’d love your feedback along the way.

Our primary goal is to build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets, and we’re excited to support the professional development of any and all of our contributors. If you’re looking to learn to code, try working collaboratively, or translate your skills to the digital domain, we’re here to help.

Contact us

If you want to report a problem or suggest an enhancement, we’d love for you to open an issue in the GitHub repository so we can get right on it.

For a less formal discussion or to exchange ideas, you can also reach us on the Gitter channel or join our weekly office hours! This is an open video meeting held on a regular basis; please ask for the link on the Gitter channel. We are also on the NeuroTechX Slack, channel #moabb.

Citing MOABB and related publications

If you use MOABB in your experiments, please cite this library when publishing a paper to increase the visibility of open science initiatives:

  • Aristimunha, B., Carrara, I., Guetschel, P., Sedlar, S., Rodrigues, P., Sosulski, J., Narayanan, D., Bjareholt, E., Quentin, B., Schirrmeister, R. T., Kalunga, E., Darmet, L., Gregoire, C., Abdul Hussain, A., Gatti, R., Goncharenko, V., Thielen, J., Moreau, T., Roy, Y., Jayaram, V., Barachant, A., & Chevallier, S. Mother of all BCI Benchmarks (MOABB), 2023. DOI: 10.5281/zenodo.10034223

and here is the BibTeX version:

  @software{moabb,
  author = {Aristimunha, Bruno and Carrara, Igor and Guetschel, Pierre and Sedlar, Sara and Rodrigues, Pedro and Sosulski, Jan and Narayanan, Divyesh and Bjareholt, Erik and Quentin, Barthelemy and Schirrmeister, Robin Tibor and Kalunga, Emmanuel and Darmet, Ludovic and Gregoire, Cattan and Abdul Hussain, Ali and Gatti, Ramiro and Goncharenko, Vladislav and Thielen, Jordy and Moreau, Thomas and Roy, Yannick and Jayaram, Vinay and Barachant, Alexandre and Chevallier, Sylvain},
  doi = {10.5281/zenodo.10034223},
  title = {{Mother of all BCI Benchmarks}},
  url = {},
  version = {1.0.0},
  year = {2023}
  }

If you want to cite the scientific contributions of MOABB, you can use the following paper; here is its BibTeX entry:

  @article{moabb_paper,
  title = {{MOABB}: trustworthy algorithm benchmarking for {BCIs}},
  author = {Jayaram, Vinay and Barachant, Alexandre},
  journal = {Journal of Neural Engineering},
  publisher = {IOP Publishing}
  }

If you publish a paper using MOABB, please contact us on Gitter or open an issue, and we will add your paper to the dedicated wiki page.