What’s new#

Changes in each release are grouped under the following headings:

  • “Enhancements” for new features

  • “Bugs” for bug fixes

  • “API changes” for backward-incompatible changes

Version 1.5 (Source - GitHub)#

Enhancements#

  • Introduce a new logo for the MOABB library (#858 by Pierre Guetschel and community)

  • Better verbosity control for initialization of the library (#850 by Bruno Aristimunha)

  • Enhanced BNCI datasets with comprehensive participant demographics documentation (by Bruno Aristimunha)

  • Added _participant_demographics class attribute to BNCI2014-002, BNCI2014-009, BNCI2015-001, and BNCI2015-004 for programmatic access to demographics (by Bruno Aristimunha)

  • Improved BIDS conversion with guaranteed montage preservation for all BNCI datasets (by Bruno Aristimunha)

  • Dataset-specific metadata extraction for BNCI2014-001, BNCI2014-004, and BNCI2014-008 with age, gender, and clinical information (by Bruno Aristimunha)

  • Improved error messages for dataset compatibility checks in evaluations: they now give a specific reason when a dataset is incompatible (e.g., “dataset has only 1 session(s), but CrossSessionEvaluation requires at least 2 sessions”) (by Bruno Aristimunha)

  • Ability to join MOABB predictive performance scores with detailed CodeCarbon compute profiling metrics, using the codecarbon_task_name column in MOABB results and the task_name column in CodeCarbon results (#866 by Ethan Davis); a join sketch appears after this list.

  • Added two c-VEP datasets: moabb.datasets.MartinezCagigal2023Checker and moabb.datasets.MartinezCagigal2023Pary (by Victor Martinez-Cagigal)

  • Allow custom paradigms to have multiple scores for evaluations (#948 by Ethan Davis)

  • Ability to parameterize the scoring rule of paradigms (#948 by Ethan Davis)

  • Extend scoring configuration to accept lists of metric callables, scorer objects, and tuple kwargs (e.g., needs_proba/needs_threshold) for multi-metric evaluations (#948 by Ethan Davis and Bruno Aristimunha)

  • Implement moabb.evaluations.WithinSubjectSplitter for k-fold cross-validation within each subject across all sessions (by Bruno Aristimunha)

  • Add cv_class and cv_kwargs parameters to all evaluation classes (WithinSessionEvaluation, CrossSessionEvaluation, CrossSubjectEvaluation) for custom cross-validation strategies (#963 by Bruno Aristimunha)

  • Implement moabb.evaluations.splitters.LearningCurveSplitter as a dedicated sklearn-compatible cross-validator for learning curves, enabling learning curve analysis with any evaluation type (#963 by Bruno Aristimunha)

  • Auto-generate dataset documentation admonitions (Participants, Equipment, Preprocessing, Data Access, Experimental Protocol) from class-level METADATA when missing, while preserving manually written sections (#960 by Bruno Aristimunha)

  • Add additional_metadata parameter to paradigm.get_data() to fetch additional metadata columns from BIDS events.tsv files. Supports "all" to load all columns or a list of specific column names (#744 by Matthias Dold)

  • Add get_additional_metadata() method to moabb.datasets.base.BaseDataset allowing datasets to provide additional metadata for epochs. Implemented for BIDS datasets in moabb.datasets.base.BaseBIDSDataset (#744 by Matthias Dold); a usage sketch follows directly below.
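
The additional_metadata hook described in the two entries above lends itself to a short usage sketch. This is a hedged example rather than an excerpt from the MOABB documentation: the paradigm and dataset below are placeholders, and extra columns only materialize for datasets that derive from moabb.datasets.base.BaseBIDSDataset and ship BIDS events.tsv files.

    # Hedged usage sketch: dataset and paradigm are placeholders; the extra
    # columns require a BIDS-based dataset that provides events.tsv files.
    from moabb.datasets import BNCI2014_001
    from moabb.paradigms import LeftRightImagery

    paradigm = LeftRightImagery()
    dataset = BNCI2014_001()

    # "all" requests every available column; a list selects specific ones.
    X, labels, metadata = paradigm.get_data(
        dataset, subjects=[1], additional_metadata="all"
    )
    print(metadata.columns)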

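The CodeCarbon join mentioned above (and detailed under “API changes” below) boils down to a pandas merge. The sketch below uses toy stand-in tables; in practice the MOABB side comes from evaluation.process(...) or the stored HDF5 results, and the CodeCarbon side from its CSV output, whose default emissions.csv filename and exact columns are assumptions here.

    # Toy stand-ins: real MOABB results carry a codecarbon_task_name column and
    # CodeCarbon's tabular output carries a matching task_name column (UUID4s).
    import pandas as pd

    moabb_results = pd.DataFrame(
        {"score": [0.91], "codecarbon_task_name": ["6f1c0a1e-example-uuid"]}
    )
    codecarbon_results = pd.DataFrame(
        {"task_name": ["6f1c0a1e-example-uuid"], "emissions": [1.2e-05], "duration": [3.4]}
    )

    joined = moabb_results.merge(
        codecarbon_results,
        left_on="codecarbon_task_name",
        right_on="task_name",
        how="left",
    )
    print(joined[["score", "emissions", "duration"]])
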
API changes#

  • Removed SinglePass and FilterBank intermediate classes from motor imagery and P300 paradigms. FilterBankLeftRightImagery now inherits from LeftRightImagery, FilterBankMotorImagery inherits from MotorImagery, and P300 inherits directly from BaseP300. RestingStateToP300Adapter now inherits from BaseP300. Docstring inheritance is handled via NumpyDocstringInheritanceInitMeta (#467 by Bruno Aristimunha).

  • Allow CodeCarbon script-level configuration when instantiating a moabb.evaluations.base.BaseEvaluation child class (#866 by Ethan Davis).

  • When CodeCarbon is installed, MOABB HDF5 results gain an additional codecarbon_task_name column. If CodeCarbon is configured to save to file, its own tabular results have a task_name column. Both columns hold matching, unique UUID4 values, so related rows can be joined to weigh predictive performance against detailed compute profiling costs (#866 by Ethan Davis).

  • Isolated model fitting from duration tracking and CodeCarbon compute profiling, with a consistent nesting order across all evaluations: required duration tracking wraps model fitting most closely, and optional CodeCarbon tracking wraps it one level further out (#866 by Ethan Davis).

  • Replaced unreliable wall-clock duration tracking (Python’s time.time()) with performance-counter duration tracking (Python’s time.perf_counter()) (#866 by Ethan Davis).

  • Enable choosing between online and offline CodeCarbon tracking via the codecarbon_config parameter when instantiating a moabb.evaluations.base.BaseEvaluation child class (#956 by Ethan Davis); a configuration sketch appears after this list.

  • Renamed stimulus channel from stim to STI in BNCI motor imagery and error-related potential datasets for clarity and BIDS compliance (by Bruno Aristimunha).

  • Added four new BNCI P300/ERP dataset classes: moabb.datasets.BNCI2015_009 (AMUSE), moabb.datasets.BNCI2015_010 (RSVP), moabb.datasets.BNCI2015_012 (PASS2D), and moabb.datasets.BNCI2015_013 (ErrP) (by Bruno Aristimunha).

  • Removed data_size and n_perms parameters from moabb.evaluations.WithinSessionEvaluation. Use cv_class=LearningCurveSplitter with cv_kwargs=dict(data_size=..., n_perms=...) instead (#963 by Bruno Aristimunha); a migration sketch appears after this list.

  • Learning curve results now automatically include “data_size” and “permutation” columns when using LearningCurveSplitter (#963 by Bruno Aristimunha)
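
A hedged migration sketch for the learning-curve changes above: the cv_class and cv_kwargs names come from this changelog, but the exact data_size and n_perms formats, the pipeline, and the dataset are assumptions carried over from the pre-1.5 learning-curve API.

    # Hedged migration sketch: data_size/n_perms formats follow the pre-1.5
    # learning-curve API and may differ; pipeline and dataset are placeholders.
    import numpy as np
    from pyriemann.classification import MDM
    from pyriemann.estimation import Covariances
    from sklearn.pipeline import make_pipeline

    from moabb.datasets import BNCI2014_001
    from moabb.evaluations import WithinSessionEvaluation
    from moabb.evaluations.splitters import LearningCurveSplitter
    from moabb.paradigms import LeftRightImagery

    pipelines = {"Cov+MDM": make_pipeline(Covariances("oas"), MDM())}

    evaluation = WithinSessionEvaluation(
        paradigm=LeftRightImagery(),
        datasets=[BNCI2014_001()],
        cv_class=LearningCurveSplitter,
        cv_kwargs=dict(
            data_size=dict(policy="ratio", value=np.geomspace(0.1, 1.0, 4)),
            n_perms=np.array([16, 8, 4, 2]),  # permutations per data-size step
        ),
    )
    results = evaluation.process(pipelines)
    # results now includes "data_size" and "permutation" columns.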

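A hedged sketch of the codecarbon_config parameter mentioned above: the parameter name comes from this changelog, but the keys shown are ordinary CodeCarbon options, and which of them MOABB forwards (and how offline tracking is selected) is an assumption.

    # Hedged sketch: codecarbon_config keys are assumed to mirror CodeCarbon's
    # own options; country_iso_code is what its offline tracker normally needs.
    from moabb.datasets import BNCI2014_001
    from moabb.evaluations import WithinSessionEvaluation
    from moabb.paradigms import LeftRightImagery

    evaluation = WithinSessionEvaluation(
        paradigm=LeftRightImagery(),
        datasets=[BNCI2014_001()],
        codecarbon_config=dict(
            save_to_file=True,
            output_dir="./codecarbon",
            country_iso_code="FRA",
        ),
    )
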
Requirements#

  • Allows CodeCarbon environment variables or a configuration file to be defined in the home directory or the current working directory (#866 by Ethan Davis).

  • Added filelock as a core dependency to fix missing import errors in utils (#959 by Mateusz Naklicki).

Bugs#

Code health#

  • Further reorganized BNCI datasets into year-specific modules (bnci_2003, bnci_2014, bnci_2015, bnci_2019) with shared helpers in legacy_base for clearer maintenance. The temporary legacy.py file has been removed (by Bruno Aristimunha).

  • Added new datasets moabb.datasets.BNCI2020_001, moabb.datasets.BNCI2020_002, moabb.datasets.BNCI2022_001, moabb.datasets.BNCI2025_001, and moabb.datasets.BNCI2025_002 (by Bruno Aristimunha).

  • Persist docs/test CI MNE dataset cache across runs to reduce cold-cache downloads (#946 by Bruno Aristimunha)

  • Refactor evaluation scoring into shared utility functions for future improvements (#948 by Bruno Aristimunha)

  • Centralize CV resolution in BaseEvaluation with new _resolve_cv() method for consistent cross-validation handling across all evaluation types. Add _build_result() and _build_scored_result() helpers to centralize result dict construction across WithinSession, CrossSession, and CrossSubject evaluations, replacing manual dict assembly in each (#963 by Bruno Aristimunha)

  • Remove redundant learning curve methods (get_data_size_subsets(), score_explicit(), _evaluate_learning_curve()) from WithinSessionEvaluation in favor of unified splitter-based approach (#963 by Bruno Aristimunha)

  • Generic metadata column registration: LearningCurveSplitter declares a metadata_columns class attribute, and BaseEvaluation auto-detects it via hasattr(cv_class, "metadata_columns") instead of hardcoding class checks, making it extensible to future custom splitters (#963 by Bruno Aristimunha)

  • Fix get_n_splits() delegation in WithinSessionSplitter and WithinSubjectSplitter to properly forward to the inner cv_class.get_n_splits() instead of hardcoding n_folds, giving correct split counts when using custom CV classes like LearningCurveSplitter (#963 by Bruno Aristimunha)

  • Remove duplicate get_inner_splitter_metadata() from WithinSessionSplitter, WithinSubjectSplitter, and CrossSubjectSplitter. All splitters now store a _current_splitter reference, and BaseEvaluation._build_scored_result() reads metadata generically from it (#963 by Bruno Aristimunha)

  • Extract _fit_cv(), _maybe_save_model_cv(), and _attach_emissions() into BaseEvaluation, removing duplicated model-fitting, model-saving, and carbon-tracking boilerplate from WithinSessionEvaluation, CrossSessionEvaluation, and CrossSubjectEvaluation (#963 by Bruno Aristimunha)

  • Extract _load_data() helper into BaseEvaluation to centralize data loading logic (epoch requirement checking and paradigm.get_data() call) that was duplicated across all three evaluation classes (#963 by Bruno Aristimunha)

  • Extract _get_nchan() helper into BaseEvaluation to replace repeated channel count extraction (X.info["nchan"] if isinstance(X, BaseEpochs) else X.shape[1]) in all evaluation classes (#963 by Bruno Aristimunha)

  • Move _pipeline_requires_epochs() from evaluations.py to utils.py for shared access by BaseEvaluation._load_data() (#963 by Bruno Aristimunha)

  • Move WithinSessionSplitter creation outside the per-session loop in WithinSessionEvaluation, since splitter parameters do not change per session (#963 by Bruno Aristimunha)

  • Add a compile smoke test (moabb/tests/test_compilation.py) that validates syntax for all Python files under moabb/ using py_compile (#960 by Bruno Aristimunha)

Version 1.4.3 (Stable - PyPI)#

Enhancements#

  • Add “Open in Colab” buttons for gallery examples (#853 by Bruno Aristimunha)

  • Refresh docs homepage design and citation visibility (#853 by Bruno Aristimunha)

  • Add moabb.datasets.preprocessing.FixedPipeline and moabb.datasets.preprocessing.make_fixed_pipeline() to avoid scikit-learn unfitted pipeline warnings (#850 by Bruno Aristimunha)

API changes#

  • None.

Requirements#

Bugs#

Code health#

Version 1.4.2#

Enhancements#

Bugs#

API changes#

  • None.

Version 1.4#

Enhancements#

Bugs#

API changes#

  • None.

Version 1.3#

Enhancements#

Bugs#

API changes#

  • Removed the deep learning module from moabb in favor of braindecode integration (#692 by Bruno Aristimunha)

Version 1.2.0#

Enhancements#

Bugs#

API changes#

Version 1.1.1#

Enhancements#

Bugs#

API changes#

  • Include optuna as a soft dependency in the benchmark function and in the evaluation base class (#630 by Igor Carrara)

Version 1.1.0#

Enhancements#

Bugs#

API changes#

  • None

Version 1.0.0#

Enhancements#

Bugs#

API changes#

  • None

Version 0.5.0#

Enhancements#

Bugs#

API changes#

  • None

Version 0.4.6#

Enhancements#

Bugs#

Version 0.4.5#

Enhancements#

Bugs#

Version 0.4.4#

Enhancements#

Bugs#

API changes#

  • Minimum supported Python version is now 3.7

  • MOABB now depends on scikit-learn >= 1.0

Version 0.4.3#

Enhancements#

Bugs#

API changes#

Version 0.4.2#

Enhancements#

  • None

Bugs#

API changes#

  • None

Version 0.4.1#

Enhancements#

  • None

Bugs#

API changes#

  • Remove update_path from all datasets; the update_path parameter in dataset.data_path() is deprecated (#207 by Sylvain Chevallier)

Version 0.4.0#

Enhancements#

Bugs#

API changes#

  • Drop update_path from moabb.download.data_path and moabb.download.data_dl

Version 0.3.0#

Enhancements#

Bugs#

API changes#

  • None

Version 0.2.1#

Enhancements#

Bugs#

API changes#

  • None

Version 0.2.0#

Enhancements#

Bugs#

  • None

API changes#

  • None