What’s new#

  • “Enhancements” for new features

  • “Bugs” for bug fixes

  • “API changes” for backward-incompatible changes

Version 1.5 (Source - GitHub)#

Enhancements#

  • Implementation of Pseudo-Online framework (#641 by Igor Carrara and Bruno Aristimunha)

  • Introduce a new logo for the MOABB library (#858 by Pierre Guetschel and community)

  • Better verbosity control for initialization of the library (#850 by Bruno Aristimunha)

  • Enhanced BNCI datasets with comprehensive participant demographics documentation (by Bruno Aristimunha)

  • Added _participant_demographics class attribute to BNCI2014-002, BNCI2014-009, BNCI2015-001, and BNCI2015-004 for programmatic access to demographics (by Bruno Aristimunha)

  • Improved BIDS conversion with guaranteed montage preservation for all BNCI datasets (by Bruno Aristimunha)

  • Dataset-specific metadata extraction for BNCI2014-001, BNCI2014-004, and BNCI2014-008 with age, gender, and clinical information (by Bruno Aristimunha)

  • Improved error messages for dataset compatibility checks in evaluations: they now provide a specific reason when a dataset is incompatible (e.g., “dataset has only 1 session(s), but CrossSessionEvaluation requires at least 2 sessions”) (by Bruno Aristimunha)

  • Ability to join MOABB predictive performance scores with detailed CodeCarbon compute profiling metrics by matching the codecarbon_task_name column in MOABB results to the task_name column in CodeCarbon results (#866 by Ethan Davis).

  • Add two c-VEP datasets: moabb.datasets.MartinezCagigal2023Checker and moabb.datasets.MartinezCagigal2023Pary (by Victor Martinez-Cagigal)

  • Allow custom paradigms to have multiple scores for evaluations (#948 by Ethan Davis)

  • Ability to parameterize the scoring rule of paradigms (#948 by Ethan Davis)

  • Extend scoring configuration to accept lists of metric callables, scorer objects, and tuple kwargs (e.g., needs_proba/needs_threshold) for multi-metric evaluations (#948 by Ethan Davis and Bruno Aristimunha)

  • Implement moabb.evaluations.WithinSubjectSplitter for k-fold cross-validation within each subject across all sessions (by Bruno Aristimunha)

  • Add cv_class and cv_kwargs parameters to all evaluation classes (WithinSessionEvaluation, CrossSessionEvaluation, CrossSubjectEvaluation) for custom cross-validation strategies (#963 by Bruno Aristimunha)

  • Implement moabb.evaluations.splitters.LearningCurveSplitter as a dedicated sklearn-compatible cross-validator for learning curves, enabling learning curve analysis with any evaluation type (#963 by Bruno Aristimunha)

  • Auto-generate dataset documentation admonitions (Participants, Equipment, Preprocessing, Data Access, Experimental Protocol) from class-level METADATA when missing, while preserving manually written sections (#960 by Bruno Aristimunha)

  • Add a “Report an Issue on GitHub” feedback section to all dataset docstrings so users can easily report dataset problems (#982 by Bruno Aristimunha)

  • Add additional_metadata parameter to paradigm.get_data() to fetch additional metadata columns from BIDS events.tsv files. Supports "all" to load all columns or a list of specific column names (#744 by Matthias Dold)

  • Add get_additional_metadata() method to moabb.datasets.base.BaseDataset allowing datasets to provide additional metadata for epochs. Implemented for BIDS datasets in moabb.datasets.base.BaseBIDSDataset (#744 by Matthias Dold)

  • Add automatic HED 8.4.0 (Hierarchical Event Descriptors) annotations to BIDS export with 83 validated paradigm-specific tags covering all MOABB datasets, HEDVersion in dataset_description.json, events.json sidecar patching, per-dataset override via ExperimentMetadata.hed_tags, and Label/ fallback for unmapped events (#974 by Bruno Aristimunha)

  • Add advanced tutorial on Riemannian Artifact Rejection (Riemannian Potato and Potato Field) as a pre-processing step using pipeline surgery (by Davoud Hajhassani and Bruno Aristimunha)

  • Add pipeline surgery methods (find_steps, insert_step, remove_step) to moabb.datasets.preprocessing.FixedPipeline for easier pipeline manipulation (by Davoud Hajhassani and Bruno Aristimunha)

  • Add license metadata to all datasets with known licenses, covering BNCI, BrainInvaders, ErpCore2021, Castillos, MartinezCagigal2023, Beetl2021, Kojima2024, Dreyer2023, and many others (#989 by Bruno Aristimunha)

  • Add parametrized test test_all_datasets_have_license to ensure all datasets declare a license in their documentation metadata (#989 by Bruno Aristimunha)

  • Add and correct license and repository fields in DocumentationMetadata across all datasets against upstream sources; standardize all license strings to SPDX identifiers; correct DOIs for BNCI2014_002 and MAMEM1/2/3 datasets (by Katelyn Begany)

  • Add convert_to_bids() method for exporting raw EEG datasets to clean BIDS-compliant directory structures without processing-pipeline hash in filenames (by Bruno Aristimunha)

  • Expose subjects and sessions parameters on all dataset constructors to allow filtering at instantiation time (e.g., PhysionetMI(subjects=[1, 2, 3])). Add all_subjects property and sessions parameter to get_data() (by Bruno Aristimunha)
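
The CodeCarbon join described in the list above can be sketched with plain pandas. The column names codecarbon_task_name and task_name come from the changelog entry itself; the DataFrame contents here are invented for illustration:

```python
import pandas as pd

# Hypothetical MOABB results table: scores keyed by a per-fit UUID4.
moabb_results = pd.DataFrame(
    {
        "score": [0.91, 0.84],
        "codecarbon_task_name": ["uuid-a", "uuid-b"],
    }
)

# Hypothetical CodeCarbon emissions table with the matching task_name column.
codecarbon_results = pd.DataFrame(
    {
        "task_name": ["uuid-a", "uuid-b"],
        "emissions": [0.002, 0.003],
    }
)

# Join predictive performance with compute-profiling metrics row by row.
joined = moabb_results.merge(
    codecarbon_results,
    left_on="codecarbon_task_name",
    right_on="task_name",
)
```

The merge is a standard inner join, so rows without a matching UUID simply drop out.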

API changes#

  • Removed SinglePass and FilterBank intermediate classes from motor imagery and P300 paradigms. FilterBankLeftRightImagery now inherits from LeftRightImagery, FilterBankMotorImagery inherits from MotorImagery, and P300 inherits directly from BaseP300. RestingStateToP300Adapter now inherits from BaseP300. Docstring inheritance is handled via NumpyDocstringInheritanceInitMeta (#467 by Bruno Aristimunha).

  • Allow CodeCarbon script level configurations when instantiating a moabb.evaluations.base.BaseEvaluation child class (#866 by Ethan Davis).

  • When CodeCarbon is installed, MOABB HDF5 results gain an additional codecarbon_task_name column. If CodeCarbon is configured to save to file, its own tabular results have a task_name column. Each value is a unique UUID4, so related rows can be joined to compare predictive performance against detailed compute profiling metrics (#866 by Ethan Davis).

  • Isolated model fitting from duration tracking and CodeCarbon compute profiling, with a new, consistent ordering across all evaluations: required duration tracking wraps model fitting most closely (highest priority), and optional CodeCarbon tracking wraps it one level further out (#866 by Ethan Davis).

  • Replaced unreliable wall-clock duration tracking (Python’s time.time()) with performance-counter duration tracking (Python’s time.perf_counter()) (#866 by Ethan Davis).

  • Enable choice of online or offline CodeCarbon through the parameterization of codecarbon_config when instantiating a moabb.evaluations.base.BaseEvaluation child class (#956 by Ethan Davis)

  • Renamed stimulus channel from stim to STI in BNCI motor imagery and error-related potential datasets for clarity and BIDS compliance (by Bruno Aristimunha).

  • Added four new BNCI P300/ERP dataset classes: moabb.datasets.BNCI2015_009 (AMUSE), moabb.datasets.BNCI2015_010 (RSVP), moabb.datasets.BNCI2015_012 (PASS2D), and moabb.datasets.BNCI2015_013 (ErrP) (by Bruno Aristimunha).

  • Removed data_size and n_perms parameters from moabb.evaluations.WithinSessionEvaluation. Use cv_class=LearningCurveSplitter with cv_kwargs=dict(data_size=..., n_perms=...) instead (#963 by Bruno Aristimunha)

  • Learning curve results now automatically include “data_size” and “permutation” columns when using LearningCurveSplitter (#963 by Bruno Aristimunha)
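
The timing change above matters because time.time() follows the system clock, which can jump when the clock is adjusted, while time.perf_counter() is monotonic. A minimal, library-agnostic sketch of the pattern (timed_fit is a hypothetical helper, not a MOABB API):

```python
import time

def timed_fit(fit_fn):
    """Run fit_fn and return (result, duration) using a monotonic clock."""
    start = time.perf_counter()  # unaffected by system clock adjustments
    result = fit_fn()
    duration = time.perf_counter() - start
    return result, duration

# Toy stand-in for model fitting.
result, duration = timed_fit(lambda: sum(range(100_000)))
```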

Requirements#

  • Allow CodeCarbon environment variables or a configuration file to be defined in the home directory or the current working directory (#866 by Ethan Davis).

  • Added filelock as a core dependency to fix missing import errors in utils (#959 by Mateusz Naklicki).

Bugs#

  • Fixed incorrect DOIs in Dreyer2023, RomaniBF2025ERP, BNCI2015_003, BNCI2015_004, and BNCI2015_012 datasets (#977 by Bruno Aristimunha)

  • Added missing metadata DOIs for AlexMI, PhysionetMI, GrosseWentrup2009, Shin2017A, Shin2017B, BNCI2014_004, and BNCI2003_004 datasets (#977 by Bruno Aristimunha)

  • Fixed montage not being set before BIDS cache conversion in BNCI datasets (by Bruno Aristimunha)

  • Fixed measurement date setting for BNCI datasets to use specific collection years from papers (by Bruno Aristimunha)

  • Ensured proper subject ID assignment for BIDS compliance across all BNCI datasets (by Bruno Aristimunha)

  • Correct moabb.pipelines.classification.SSVEP_CCA, moabb.pipelines.classification.SSVEP_TRCA and moabb.pipelines.classification.SSVEP_MsetCCA behavior (#625 by Sylvain Chevallier)

  • Fix scikit-learn LogisticRegression elasticnet penalty parameter deprecation by re-adding penalty='elasticnet' for ElasticNet configurations with 0 < l1_ratio < 1 (#869 by Bruno Aristimunha)

  • Fix the option to pickle models (#870 by Ethan Davis)

  • Normalize Zenodo download paths and add a custom user-agent to improve download robustness (#946 by Bruno Aristimunha)

  • Use the BNCI mirror host to avoid download timeouts (#946 by Bruno Aristimunha)

  • Prevent Python mutable default argument when defining CodeCarbon configurations (#956 by Ethan Davis)

  • Fix copytree FileExistsError in BrainInvaders2013a download by adding dirs_exist_ok=True (by Bruno Aristimunha)

  • Ensure optional additional scoring columns in evaluation results (#957 by Ethan Davis)

  • Fix pandas ArrowStringArray shuffle warning by converting .unique() results to numpy arrays in splitters, avoiding issues with newer pandas versions (#963 by Bruno Aristimunha)

  • LearningCurveSplitter now skips training splits that collapse to a single class (e.g., with very small data_size) and emits a RuntimeWarning instead of producing NaN results (#963 by Bruno Aristimunha)

  • Fix double µV-to-V conversion in BNCI2003-004 and BNCI2015-006: data loaded in microvolts was labeled as volts without unit conversion, causing a second scaling during EDF export via mne_bids (by Bruno Aristimunha)

  • Fix Beetl2021_A and Beetl2021_B 403 Forbidden errors by skipping Figshare API calls when data already exists locally, and fix double-nested zip extraction directory structure (#969 by Bruno Aristimunha)

  • Fix wrong channel names in Riemannian Artifact Rejection tutorial that caused pick() to fail on BNCI2014-009 (by Bruno Aristimunha)

  • Fix moabb.datasets.RomaniBF2025ERP to follow MOABB nomenclature pattern by using dynamic folder name MNE-{code}-data instead of hardcoded folder name. Automatically migrates legacy folder BrainForm-BIDS-eeg-dataset to new nomenclature for backward compatibility (by Bruno Aristimunha)

  • Move BIDS cache lock file from the BIDS subject folder to the code/ folder for BIDS validator compliance. Lock files are now written per-session as code/sub-{subject}_ses-{session}_desc-{hash}_lockfile.json. Backward compatibility is preserved for caches created with the old location (#986 by Pierre Guetschel and Bruno Aristimunha)

  • Fixed erase() in moabb.datasets.bids_interface.BIDSInterfaceBase to handle multi-session datasets correctly by using per-session rm() calls instead of a single subject-level call, which previously caused a RuntimeError when looking up scans.tsv across multiple sessions (#986 by Pierre Guetschel and Bruno Aristimunha)

  • Fix MOABB_RESULTS default path to respect MNE_DATA configuration instead of hardcoding ~/mne_data, and fix docs CI cache to use workspace-relative MNE_DATA path and cache ~/.mne config directory (by Bruno Aristimunha)

  • Fix moabb.datasets.RomaniBF2025ERP get_data() failing with description merge error when adding stim channel, causing sessions to be silently dropped (#991 by Bruno Aristimunha)

  • Fix docs CI cache: set MNE_DATA env var and persist ~/.mne config directory so dataset paths survive cache restore (by Bruno Aristimunha)

  • Fix moabb.datasets.Liu2024 download failure by switching Figshare URLs from figshare.com/ndownloader to ndownloader.figshare.com and adding BadZipFile recovery for corrupted cached downloads (#992 by Bruno Aristimunha)

  • Fix TRCA Riemannian mean convergence failure by regularizing ill-conditioned cross-covariance matrices in moabb.pipelines.classification.SSVEP_TRCA. Eigenvalue clamping bounds the condition number, eliminating “Convergence not reached” and “invalid value encountered in log” warnings (by Bruno Aristimunha)
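
The eigenvalue clamping mentioned in the TRCA fix above can be illustrated with a self-contained NumPy sketch; the function name and the max_cond threshold are illustrative assumptions, not MOABB's actual implementation:

```python
import numpy as np

def clamp_condition_number(cov, max_cond=1e6):
    """Bound the condition number of a symmetric matrix by flooring eigenvalues."""
    # Symmetric eigendecomposition; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(cov)
    floor = eigvals[-1] / max_cond  # smallest eigenvalue allowed
    eigvals = np.clip(eigvals, floor, None)
    return eigvecs @ np.diag(eigvals) @ eigvecs.T

# Ill-conditioned 2x2 covariance: eigenvalues twelve orders of magnitude apart.
cov = np.diag([1.0, 1e-12])
reg = clamp_condition_number(cov)
```

After clamping, the matrix stays symmetric and its condition number is at most max_cond, which keeps downstream log/mean computations numerically stable.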

Code health#

  • Resolve all 216 pytest warnings across the test suite by addressing root causes: clear Epochs annotations before concatenation, replace lambda with named function in test pipelines, re-apply montage after add_reference_channels, conditionally pass groups parameter in splitters using GroupsConsumerMixin, use os.environ in FakeDataset to avoid non-standard config warnings, and suppress intentional OptunaSearchCV experimental warnings at init (by Bruno Aristimunha)

  • Added systematic DOI validation test suite that checks format, docstring tracking, resolution, and author overlap across all datasets (#977 by Bruno Aristimunha)

  • Further reorganized BNCI datasets into year-specific modules (bnci_2003, bnci_2014, bnci_2015, bnci_2019) with shared helpers in legacy_base for clearer maintenance. The temporary legacy.py file has been removed (by Bruno Aristimunha).

  • Added new datasets moabb.datasets.BNCI2020_001, moabb.datasets.BNCI2020_002, moabb.datasets.BNCI2022_001, moabb.datasets.BNCI2025_001, and moabb.datasets.BNCI2025_002 (by Bruno Aristimunha).

  • Persist docs/test CI MNE dataset cache across runs to reduce cold-cache downloads (#946 by Bruno Aristimunha)

  • Refactor evaluation scoring into shared utility functions for future improvements (#948 by Bruno Aristimunha)

  • Centralize CV resolution in BaseEvaluation with new _resolve_cv() method for consistent cross-validation handling across all evaluation types. Add _build_result() and _build_scored_result() helpers to centralize result dict construction across WithinSession, CrossSession, and CrossSubject evaluations, replacing manual dict assembly in each (#963 by Bruno Aristimunha)

  • Remove redundant learning curve methods (get_data_size_subsets(), score_explicit(), _evaluate_learning_curve()) from WithinSessionEvaluation in favor of unified splitter-based approach (#963 by Bruno Aristimunha)

  • Generic metadata column registration: LearningCurveSplitter declares a metadata_columns class attribute, and BaseEvaluation auto-detects it via hasattr(cv_class, "metadata_columns") instead of hardcoding class checks, making it extensible to future custom splitters (#963 by Bruno Aristimunha)

  • Fix get_n_splits() delegation in WithinSessionSplitter and WithinSubjectSplitter to properly forward to the inner cv_class.get_n_splits() instead of hardcoding n_folds, giving correct split counts when using custom CV classes like LearningCurveSplitter (#963 by Bruno Aristimunha)

  • Remove duplicate get_inner_splitter_metadata() from WithinSessionSplitter, WithinSubjectSplitter, and CrossSubjectSplitter. All splitters now store a _current_splitter reference, and BaseEvaluation._build_scored_result() reads metadata generically from it (#963 by Bruno Aristimunha)

  • Extract _fit_cv(), _maybe_save_model_cv(), and _attach_emissions() into BaseEvaluation, removing duplicated model-fitting, model-saving, and carbon-tracking boilerplate from WithinSessionEvaluation, CrossSessionEvaluation, and CrossSubjectEvaluation (#963 by Bruno Aristimunha)

  • Extract _load_data() helper into BaseEvaluation to centralize data loading logic (epoch requirement checking and paradigm.get_data() call) that was duplicated across all three evaluation classes (#963 by Bruno Aristimunha)

  • Extract _get_nchan() helper into BaseEvaluation to replace repeated channel count extraction (X.info["nchan"] if isinstance(X, BaseEpochs) else X.shape[1]) in all evaluation classes (#963 by Bruno Aristimunha)

  • Move _pipeline_requires_epochs() from evaluations.py to utils.py for shared access by BaseEvaluation._load_data() (#963 by Bruno Aristimunha)

  • Move WithinSessionSplitter creation outside the per-session loop in WithinSessionEvaluation, since splitter parameters do not change per session (#963 by Bruno Aristimunha)

  • Add a compile smoke test (moabb/tests/test_compilation.py) that validates syntax for all Python files under moabb/ using py_compile (#960 by Bruno Aristimunha)

  • Add persistent DOI resolution cache (moabb/tests/doi_cache.json) for test_doi_validation.py to avoid network requests on every test run, reducing DOI test time from ~9 minutes to <1 second. Refresh with --update-doi-cache (#996 by Bruno Aristimunha)
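
The compile smoke test described above reduces to byte-compiling every file with py_compile; a minimal standalone version, using a throwaway directory rather than the real moabb/ tree:

```python
import pathlib
import py_compile
import tempfile

def check_syntax(root):
    """Return the list of .py files under root that fail to byte-compile."""
    failures = []
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        try:
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError:
            failures.append(path)
    return failures

# Demonstrate on a throwaway directory with one valid and one broken file.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "ok.py").write_text("x = 1\n")
pathlib.Path(tmp, "bad.py").write_text("def broken(:\n")
failures = check_syntax(tmp)
```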

Version 1.4.3 (Stable - PyPI)#

Enhancements#

  • Add “Open in Colab” buttons for gallery examples (#853 by Bruno Aristimunha)

  • Refresh docs homepage design and citation visibility (#853 by Bruno Aristimunha)

  • Add moabb.datasets.preprocessing.FixedPipeline and moabb.datasets.preprocessing.make_fixed_pipeline() to avoid scikit-learn unfitted pipeline warnings (#850 by Bruno Aristimunha)

API changes#

  • None.

Requirements#

Bugs#

Code health#

Version 1.4.2#

Enhancements#

Bugs#

API changes#

  • None.

Version 1.4#

Enhancements#

Bugs#

API changes#

  • None.

Version 1.3#

Enhancements#

Bugs#

API changes#

  • Remove the deep learning module from inside moabb in favor of braindecode integration (#692 by Bruno Aristimunha)

Version 1.2.0#

Enhancements#

Bugs#

API changes#

Version 1.1.1#

Enhancements#

Bugs#

API changes#

  • Include optuna as a soft dependency in the benchmark function and in the base evaluation class (#630 by Igor Carrara)

Version 1.1.0#

Enhancements#

Bugs#

API changes#

  • None

Version 1.0.0#

Enhancements#

Bugs#

API changes#

  • None

Version 0.5.0#

Enhancements#

Bugs#

API changes#

  • None

Version 0.4.6#

Enhancements#

Bugs#

Version 0.4.5#

Enhancements#

Bugs#

Version 0.4.4#

Enhancements#

Bugs#

API changes#

  • Minimum supported Python version is now 3.7

  • MOABB now depends on scikit-learn >= 1.0

Version 0.4.3#

Enhancements#

Bugs#

API changes#

Version 0.4.2#

Enhancements#

  • None

Bugs#

API changes#

  • None

Version 0.4.1#

Enhancements#

  • None

Bugs#

API changes#

  • Remove update_path on all datasets; the update_path parameter in dataset.data_path() is deprecated (#207 by Sylvain Chevallier)

Version 0.4.0#

Enhancements#

Bugs#

API changes#

  • Drop update_path from moabb.download.data_path and moabb.download.data_dl

Version 0.3.0#

Enhancements#

Bugs#

API changes#

  • None

Version 0.2.1#

Enhancements#

Bugs#

API changes#

  • None

Version 0.2.0#

Enhancements#

Bugs#

  • None

API changes#

  • None