What’s new#
“Enhancements” for new features
“Bugs” for bug fixes
“API changes” for backward-incompatible changes
Version 1.5 (Source - GitHub)#
Enhancements#
Introduce a new logo for the MOABB library (#858 by Pierre Guetschel and community)
Better verbosity control for initialization of the library (#850 by Bruno Aristimunha)
Enhanced BNCI datasets with comprehensive participant demographics documentation (by Bruno Aristimunha)
Added _participant_demographics class attribute to BNCI2014-002, BNCI2014-009, BNCI2015-001, and BNCI2015-004 for programmatic access to demographics (by Bruno Aristimunha)
Improved BIDS conversion with guaranteed montage preservation for all BNCI datasets (by Bruno Aristimunha)
Dataset-specific metadata extraction for BNCI2014-001, BNCI2014-004, and BNCI2014-008 with age, gender, and clinical information (by Bruno Aristimunha)
Improved error messages for dataset compatibility checks in evaluations - now provides specific reasons when datasets are incompatible (e.g., “dataset has only 1 session(s), but CrossSessionEvaluation requires at least 2 sessions”) (by Bruno Aristimunha)
Ability to join rows from the tables of MOABB predictive performance scores and detailed CodeCarbon compute profiling metrics by the column codecarbon_task_name in MOABB results and the column task_name in CodeCarbon results; see the join sketch at the end of this list (#866 by Ethan Davis).
Adding two c-VEP datasets: moabb.datasets.MartinezCagigal2023Checker and moabb.datasets.MartinezCagigal2023Pary (by Victor Martinez-Cagigal)
Allow custom paradigms to have multiple scores for evaluations (#948 by Ethan Davis)
Ability to parameterize the scoring rule of paradigms (#948 by Ethan Davis)
Extend scoring configuration to accept lists of metric callables, scorer objects, and tuple kwargs (e.g., needs_proba/needs_threshold) for multi-metric evaluations (#948 by Ethan Davis and Bruno Aristimunha)
Implement moabb.evaluations.WithinSubjectSplitter for k-fold cross-validation within each subject across all sessions (by Bruno Aristimunha)
Add cv_class and cv_kwargs parameters to all evaluation classes (WithinSessionEvaluation, CrossSessionEvaluation, CrossSubjectEvaluation) for custom cross-validation strategies (#963 by Bruno Aristimunha)
Implement moabb.evaluations.splitters.LearningCurveSplitter as a dedicated sklearn-compatible cross-validator for learning curves, enabling learning curve analysis with any evaluation type (#963 by Bruno Aristimunha)
Auto-generate dataset documentation admonitions (Participants, Equipment, Preprocessing, Data Access, Experimental Protocol) from class-level METADATA when missing, while preserving manually written sections (#960 by Bruno Aristimunha)
Add additional_metadata parameter to paradigm.get_data() to fetch additional metadata columns from BIDS events.tsv files. Supports "all" to load all columns or a list of specific column names (#744 by Matthias Dold)
Add get_additional_metadata() method to moabb.datasets.base.BaseDataset allowing datasets to provide additional metadata for epochs. Implemented for BIDS datasets in moabb.datasets.base.BaseBIDSDataset (#744 by Matthias Dold)
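Since MOABB and CodeCarbon each write their own results table, the join mentioned above is an ordinary pandas merge. Below is a minimal sketch, assuming both tables have already been exported to CSV; the file names and loading steps are hypothetical, and only the codecarbon_task_name / task_name join columns come from this release note:

    import pandas as pd

    # Hypothetical exports: a CSV dump of MOABB results and CodeCarbon's
    # own emissions file. Only the join column names are from the changelog.
    moabb_results = pd.read_csv("moabb_results.csv")
    codecarbon_results = pd.read_csv("emissions.csv")

    # One row per evaluated fit, carrying both predictive scores and
    # compute-profiling metrics.
    joined = moabb_results.merge(
        codecarbon_results,
        left_on="codecarbon_task_name",
        right_on="task_name",
        how="left",
    )
    print(joined.head())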
API changes#
Removed SinglePass and FilterBank intermediate classes from motor imagery and P300 paradigms. FilterBankLeftRightImagery now inherits from LeftRightImagery, FilterBankMotorImagery inherits from MotorImagery, and P300 inherits directly from BaseP300. RestingStateToP300Adapter now inherits from BaseP300. Docstring inheritance is handled via NumpyDocstringInheritanceInitMeta (#467 by Bruno Aristimunha).
Allow CodeCarbon script-level configurations when instantiating a moabb.evaluations.base.BaseEvaluation child class (#866 by Ethan Davis).
When CodeCarbon is installed, MOABB HDF5 results have an additional column codecarbon_task_name. If CodeCarbon is configured to save to file, its own tabular results have a column task_name. These columns are unique UUID4s. Related rows can be joined to see detailed costs and benefits of predictive performance and computing profiling metrics (#866 by Ethan Davis).
Isolated model fitting, duration tracking, and CodeCarbon compute profiling tracking. New and consistent ordering of duration and CodeCarbon tracking across all evaluations: (Higher priority, closest to model fitting) required duration tracking, (lower priority, second closest to model fitting) optional CodeCarbon tracking (#866 by Ethan Davis).
Replaced unreliable wall-clock duration tracking (Python’s time.time()) with performance-counter duration tracking (Python’s time.perf_counter()) (#866 by Ethan Davis).
Enable choice of online or offline CodeCarbon through the parameterization of codecarbon_config when instantiating a moabb.evaluations.base.BaseEvaluation child class (#956 by Ethan Davis)
Renamed stimulus channel from stim to STI in BNCI motor imagery and error-related potential datasets for clarity and BIDS compliance (by Bruno Aristimunha).
Added four new BNCI P300/ERP dataset classes: moabb.datasets.BNCI2015_009 (AMUSE), moabb.datasets.BNCI2015_010 (RSVP), moabb.datasets.BNCI2015_012 (PASS2D), and moabb.datasets.BNCI2015_013 (ErrP) (by Bruno Aristimunha).
Removed data_size and n_perms parameters from moabb.evaluations.WithinSessionEvaluation. Use cv_class=LearningCurveSplitter with cv_kwargs=dict(data_size=..., n_perms=...) instead; see the sketch after this list (#963 by Bruno Aristimunha)
Learning curve results now automatically include “data_size” and “permutation” columns when using LearningCurveSplitter (#963 by Bruno Aristimunha)
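The migration described above can be sketched as follows. This is an illustrative snippet, not code from the release: only the cv_class / cv_kwargs parameters and the data_size / n_perms keys come from these notes, while the paradigm, dataset, pipeline, and the data_size value format (borrowed from older MOABB learning-curve examples) are placeholders to check against the current documentation:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from mne.decoding import Vectorizer

    from moabb.datasets import BNCI2014_001
    from moabb.evaluations import WithinSessionEvaluation
    from moabb.evaluations.splitters import LearningCurveSplitter
    from moabb.paradigms import LeftRightImagery

    paradigm = LeftRightImagery()
    pipelines = {"Vect+LDA": make_pipeline(Vectorizer(), LinearDiscriminantAnalysis())}

    # Before (removed): WithinSessionEvaluation(..., data_size=..., n_perms=...)
    # After: pass the learning-curve splitter explicitly via cv_class / cv_kwargs.
    evaluation = WithinSessionEvaluation(
        paradigm=paradigm,
        datasets=[BNCI2014_001()],
        cv_class=LearningCurveSplitter,
        cv_kwargs=dict(
            data_size=dict(policy="ratio", value=np.geomspace(0.02, 1, 5)),
            n_perms=np.floor(np.geomspace(20, 2, 5)).astype(int),
        ),
    )
    # Results are expected to gain "data_size" and "permutation" columns.
    results = evaluation.process(pipelines)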
Requirements#
Allows CodeCarbon environment variables or a configuration file to be defined in the home directory or the current working directory (#866 by Ethan Davis).
Added filelock as a core dependency to fix missing import errors in utils (#959 by Mateusz Naklicki).
Bugs#
Fixed montage not being set before BIDS cache conversion in BNCI datasets (by Bruno Aristimunha)
Fixed measurement date setting for BNCI datasets to use specific collection years from papers (by Bruno Aristimunha)
Ensured proper subject ID assignment for BIDS compliance across all BNCI datasets (by Bruno Aristimunha)
Correct moabb.pipelines.classification.SSVEP_CCA, moabb.pipelines.classification.SSVEP_TRCA and moabb.pipelines.classification.SSVEP_MsetCCA behavior (#625 by Sylvain Chevallier)
Fix scikit-learn LogisticRegression elasticnet penalty parameter deprecation by re-adding penalty=’elasticnet’ for ElasticNet configurations with 0 < l1_ratio < 1 (#869 by Bruno Aristimunha)
Fixing option to pickle model (#870 by Ethan Davis)
Normalize Zenodo download paths and add a custom user-agent to improve download robustness (#946 by Bruno Aristimunha)
Use the BNCI mirror host to avoid download timeouts (#946 by Bruno Aristimunha)
Prevent Python mutable default argument when defining CodeCarbon configurations (#956 by Ethan Davis)
Fix copytree FileExistsError in BrainInvaders2013a download by adding dirs_exist_ok=True (by Bruno Aristimunha)
Ensure optional additional scoring columns in evaluation results (#957 by Ethan Davis)
Fix pandas ArrowStringArray shuffle warning by converting .unique() results to numpy arrays in splitters, avoiding issues with newer pandas versions; see the sketch at the end of this section (#963 by Bruno Aristimunha)
LearningCurveSplitter now skips training splits that collapse to a single class (e.g., with very small data_size) and emits a RuntimeWarning instead of producing NaN results (#963 by Bruno Aristimunha)
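The pandas fix above boils down to materializing .unique() results as plain numpy arrays before shuffling. A small illustration, assuming pyarrow-backed string columns; the DataFrame, column name, and RNG are hypothetical and not taken from the MOABB source:

    import numpy as np
    import pandas as pd

    # With pandas' Arrow-backed string dtype, .unique() returns an
    # ArrowStringArray, which newer pandas warns about when shuffled in place.
    metadata = pd.DataFrame({"subject": ["S1", "S2", "S3"]}).astype(
        {"subject": "string[pyarrow]"}
    )

    rng = np.random.default_rng(42)
    subjects = np.asarray(metadata["subject"].unique())  # plain ndarray instead
    rng.shuffle(subjects)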
Code health#
Further reorganized BNCI datasets into year-specific modules (bnci_2003, bnci_2014, bnci_2015, bnci_2019) with shared helpers in legacy_base for clearer maintenance. The temporary legacy.py file has been removed (by Bruno Aristimunha).
Added new datasets moabb.datasets.BNCI2020_001, moabb.datasets.BNCI2020_002, moabb.datasets.BNCI2022_001, moabb.datasets.BNCI2025_001, and moabb.datasets.BNCI2025_002 (by Bruno Aristimunha).
Persist docs/test CI MNE dataset cache across runs to reduce cold-cache downloads (#946 by Bruno Aristimunha)
Refactor evaluation scoring into shared utility functions for future improvements (#948 by Bruno Aristimunha)
Centralize CV resolution in BaseEvaluation with new _resolve_cv() method for consistent cross-validation handling across all evaluation types. Add _build_result() and _build_scored_result() helpers to centralize result dict construction across WithinSession, CrossSession, and CrossSubject evaluations, replacing manual dict assembly in each (#963 by Bruno Aristimunha)
Remove redundant learning curve methods (get_data_size_subsets(), score_explicit(), _evaluate_learning_curve()) from WithinSessionEvaluation in favor of a unified splitter-based approach (#963 by Bruno Aristimunha)
Generic metadata column registration: LearningCurveSplitter declares a metadata_columns class attribute, and BaseEvaluation auto-detects it via hasattr(cv_class, "metadata_columns") instead of hardcoding class checks, making it extensible to future custom splitters; see the sketch at the end of this section (#963 by Bruno Aristimunha)
Fix get_n_splits() delegation in WithinSessionSplitter and WithinSubjectSplitter to properly forward to the inner cv_class.get_n_splits() instead of hardcoding n_folds, giving correct split counts when using custom CV classes like LearningCurveSplitter (#963 by Bruno Aristimunha)
Remove duplicate get_inner_splitter_metadata() from WithinSessionSplitter, WithinSubjectSplitter, and CrossSubjectSplitter. All splitters now store a _current_splitter reference, and BaseEvaluation._build_scored_result() reads metadata generically from it (#963 by Bruno Aristimunha)
Extract _fit_cv(), _maybe_save_model_cv(), and _attach_emissions() into BaseEvaluation, removing duplicated model-fitting, model-saving, and carbon-tracking boilerplate from WithinSessionEvaluation, CrossSessionEvaluation, and CrossSubjectEvaluation (#963 by Bruno Aristimunha)
Extract _load_data() helper into BaseEvaluation to centralize data loading logic (epoch requirement checking and paradigm.get_data() call) that was duplicated across all three evaluation classes (#963 by Bruno Aristimunha)
Extract _get_nchan() helper into BaseEvaluation to replace repeated channel count extraction (X.info["nchan"] if isinstance(X, BaseEpochs) else X.shape[1]) in all evaluation classes (#963 by Bruno Aristimunha)
Move _pipeline_requires_epochs() from evaluations.py to utils.py for shared access by BaseEvaluation._load_data() (#963 by Bruno Aristimunha)
Move WithinSessionSplitter creation outside the per-session loop in WithinSessionEvaluation, since splitter parameters do not change per session (#963 by Bruno Aristimunha)
Add a compile smoke test (moabb/tests/test_compilation.py) that validates syntax for all Python files under moabb/ using py_compile (#960 by Bruno Aristimunha)
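The "generic metadata column registration" entry above describes a duck-typed registration pattern. The sketch below is purely hypothetical and is not MOABB source code: only the metadata_columns class attribute and the hasattr(cv_class, "metadata_columns") check come from the entry, while the toy splitter and its per-split metadata hand-off are assumptions for illustration:

    import numpy as np

    class MyCustomSplitter:
        """Toy cross-validator that tags each split with a 'fold' column."""

        # A class attribute like this is what BaseEvaluation would detect.
        metadata_columns = ("fold",)

        def __init__(self, n_folds=5):
            self.n_folds = n_folds

        def get_n_splits(self, X=None, y=None, groups=None):
            return self.n_folds

        def split(self, X, y=None, groups=None):
            indices = np.arange(len(X))
            for fold, test_idx in enumerate(np.array_split(indices, self.n_folds)):
                train_idx = np.setdiff1d(indices, test_idx)
                # Hypothetical hand-off of per-split metadata to the evaluation.
                self.current_metadata = {"fold": fold}
                yield train_idx, test_idx

    # The auto-detection described in the entry is essentially:
    cv_class = MyCustomSplitter
    if hasattr(cv_class, "metadata_columns"):
        extra_columns = cv_class.metadata_columns  # e.g., ("fold",)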
Version 1.4.3 (Stable - PyPi)#
Enhancements#
Add “Open in Colab” buttons for gallery examples (#853 by Bruno Aristimunha)
Refresh docs homepage design and citation visibility (#853 by Bruno Aristimunha)
Add moabb.datasets.preprocessing.FixedPipeline and moabb.datasets.preprocessing.make_fixed_pipeline() to avoid scikit-learn unfitted pipeline warnings (#850 by Bruno Aristimunha)
API changes#
None.
Requirements#
Improve compatibility with Python 3.14 (#848 by Bruno Aristimunha)
Bugs#
Fixing warnings from the latest scikit-learn version within the Preprocessing logic (#850 by Bruno Aristimunha)
Fixing compatibility with Scikit-learn 1.8 (#852 by Bruno Aristimunha)
Code health#
Generate notebooks in docs CI for Colab integration (#853 by Bruno Aristimunha)
Version 1.4.2#
Enhancements#
Adding dataset moabb.datasets.RomaniBF2025ERP converted to BIDS (#825 by Romani Michele)
Improve compute_pvals_perm function (#818 by Quentin Barthelemy)
Bugs#
Fixes the management of include/exclude datasets in moabb.benchmark(), adds additional verifications (#834 by Anton Andreev)
Fixing pagination issue with figshare (#839 by Bruno Aristimunha)
Fixes moabb.datasets.preprocessing.SetRawAnnotations in case no STIM channel is present (#838 by Pierre Guetschel and Simon Kojima)
API changes#
None.
Version - 1.4#
Enhancements#
Update to pyRiemann 0.9 and numpy 2.0 for improved compatibility (#789 by Gregoire Cattan and Bruno Aristimunha)
Adding moabb.datasets.Kojima2024A (#807 by Simon Kojima)
Adding moabb.datasets.Kojima2024B (#806 by Simon Kojima)
Add new dataset moabb.datasets.BNCI2003_IVa (#811 by Griffin Keeler)
Added the ability to feed pipelines using a list of dictionaries in moabb.benchmark() (#826 by Anton Andreev)
Bugs#
Fixing label swapped issue with moabb.datasets.Kalunga2016 dataset (#814 by Griffin Keeler)
Fix the moabb.datasets.Dreyer2023 (#828 by Simon Kojima)
API changes#
None.
Version - 1.3#
Enhancements#
Adding a tutorial for moabb.evaluations.splitters.WithinSessionSplitter (#776 by Thomas Kooiman, Paul Verhoeven, Jorge Sanmartin Martinez, and Radovan Vodila)
Adding new motor imagery dataset, Dreyer2023 (#404 by Sara Sedlar, Sylvain Chevallier and Bruno Aristimunha)
Reordering the examples in the documentation (#706 by Bruno Aristimunha)
Creating the meta information for the BIDS converted datasets (#688 by Bruno Aristimunha)
Adding moabb.datasets.Beetl2021_A and moabb.datasets.Beetl2021_B (#675 by Samuel Boehm)
Adding moabb.evaluations.splitters.CrossSubjectSplitter (#722 by Bruna Lopes and Bruno Aristimunha)
Adding moabb.evaluations.splitters.CrossSessionSplitter (#720 by Bruna Lopes and Bruno Aristimunha)
Adding moabb.datasets.base.BaseBIDSDataset and moabb.datasets.base.LocalBIDSDataset (#724 by Pierre Guetschel)
Adding moabb.analysis.plotting.dataset_bubble_plot() plus the corresponding tutorial (#753 by Pierre Guetschel)
Adding moabb.datasets.utils.plot_all_datasets() and update the tutorial (#758 by Pierre Guetschel)
Improve the dataset model cards in each API page (#765 by Pierre Guetschel)
Refactor moabb.evaluation.CrossSessionEvaluation, moabb.evaluation.CrossSubjectEvaluation and moabb.evaluation.WithinSessionEvaluation to use the new splitter classes (#769 by Bruno Aristimunha)
Adding tutorial on using mne-features (#762 by Alexander de Ranitz, Luuk Neervens, Charlynn van Osch and Bruno Aristimunha)
Creating tutorial to expose the pre-processing steps (#771 by Bruno Aristimunha)
Add function to auto-generate tables for the paper results documentation page (#785 by Lucas Heck)
Improving the Filterbank tutorial and implementing the mutual information selection to reproduce the FilterbankCSP (#787 by Bruno Aristimunha)
A tutorial on how to create and use a MOABB dataset from X y (non continuous, epoched) data (#800 by Anton Andreev)
Improving the parallel writing of results (#803 by Bruno Aristimunha)
Bugs#
Fix regression in evaluations ignoring process_pipeline flag (#774 by Bruno Aristimunha)
Fix caching issue with incomplete results (#715 by Sylvain Chevallier)
Fix learning curve example (#717 by Pierre Guetschel)
Pick all data channels in filter preprocessing step (#729 by Pierre Guetschel)
Fix CI for permutation testing (#757 by Quentin Barthelemy)
Fix download issue with Schirrmeister2017 dataset (#751 by Zheyu Yao)
Fix code carbon example code (#777 by Amar Enkhbat)
Including the fix_bad_channels for the moabb.datasets.Stieger2021 (#783 by Bruno Aristimunha)
Fix the moabb.datasets.Wang2016 (#781 by Ulysse Durand)
Fix warnings raised when building the documentation (#784 by Lucas Heck)
Remove an unnecessary line in the README.md (#791 by Lionel Kusch)
Update the dead link about the tutorial of GitHub in CONTRIBUTING.md (#792 by Lionel Kusch)
Fix: number of trial per class for PHMD_ML dataset (#797 by Gregoire Cattan)
Converting the moabb.datasets.Zhou2016 to BIDS (#802 by Bruno Aristimunha)
API changes#
Removing the deep learning module from inside moabb in favour of braindecode integration (#692 by Bruno Aristimunha)
Version - 1.2.0#
Enhancements#
Adding moabb.evaluations.splitters.WithinSessionSplitter (#664 by Bruna Lopes)
Update version of pyRiemann to 0.7 (#671 by Gregoire Cattan)
Add columns definitions in the datasets doc (#672 by Pierre Guetschel)
Add ERP CORE datasets moabb.datasets.ErpCore2021 (#627 by Taha Habib)
Update paths of BIDS cache to better follow the standards. Cache created in previous MOABB versions should still be compatible (#707 by Pierre Guetschel)
Bugs#
Fix Stieger2021 dataset bugs (#651 by Martin Wimpff)
Unpinning major version Scikit-learn and numpy (#652 by Bruno Aristimunha)
Replacing numpy.string_ with numpy.bytes_ (#665 by Bruno Aristimunha)
Fixing the set_download_dir that was not working when we tried to set the dir more than 10 times at the same time (#668 by Bruno Aristimunha)
Creating stimulus channels in moabb.datasets.Zhou2016 and moabb.datasets.PhysionetMI to allow braindecode compatibility (#669 by Bruno Aristimunha)
Improving the CI (#686 by Bruno Aristimunha)
Making the download test work again (#693 by Bruno Aristimunha)
Fix the EpochSelectChannel that caused incorrect channel selection in example (#685 by AFF)
Fixing the logger on the Stieger2021 and Wang2016 dataset (#693 by Bruno Aristimunha)
Change the way of creating the path to the folder (#697 by Sebastien Velut)
Fixing bug with braindecode and moabb datasets EPFLP300 (#696 by Bruno Aristimunha)
Fixing the dataset details for bids conversion (#698 by Bruno Aristimunha)
Fixing unit issue and lack of montage with moabb.datasets.Rodrigues2017, moabb.datasets.Rodrigues2017, moabb.datasets.BaseCastillos2023, moabb.datasets.BaseCastillos2023, moabb.datasets.Huebner2018, moabb.datasets.Cattan2019_PHMD, moabb.datasets.Ofner2017 (#700 by Bruno Aristimunha)
Fix t-test permutation tests (#684 and #709 by Gregoire Cattan, Anton Andreev, Marco Congedo and Bruno Aristimunha)
API changes#
Removing the braindecode module from inside moabb (#666 by Bruno Aristimunha )
Version - 1.1.1#
Enhancements#
Add possibility to use OptunaGridSearch (#630 by Igor Carrara)
Add scripts to upload results on PapersWithCode (#561 by Pierre Guetschel)
Centralize dataset summary tables in CSV files (#635 by Pierre Guetschel)
Add new dataset moabb.datasets.Liu2024 (#619 by Taha Habib)
Add choice to choose the size of time window (by Sebastien Velut)
Bugs#
Fix caching in the workflows (#632 by Pierre Guetschel)
API changes#
Include optuna as soft-dependency in the benchmark function and in the base of evaluation (#630 by Igor Carrara)
Version - 1.1.0#
Enhancements#
Add cache option to the evaluation (#518 by Bruno Aristimunha)
Option to interpolate channel in paradigms’ match_all method (#480 by Gregoire Cattan)
Add leave k-Subjects out evaluations (#470 by Bruno Aristimunha)
Update Braindecode dependency to 0.8 (#542 by Pierre Guetschel)
Improve transform function of AugmentedDataset (#541 by Quentin Barthelemy)
Add new paper results website (#556 by Bruno Aristimunha)
Move cVEP common functions to moabb.datasets.utils (#564 #557 by Pierre Guetschel)
Normalize c-VEP description tables (#562 #566 by Pierre Guetschel and Bruno Aristimunha)
Update citation in README (#573 by Igor Carrara)
Update pyRiemann dependency (#577 by Gregoire Cattan)
Add resting stage Hinss2021 dataset (#580 by Gregoire Cattan and Yash Chauhan)
Expose the learning rate parameter in the keras deep learning methods and optimize parameters (#589 and #592 by Bruno Aristimunha)
Updating the braindecode pipelines for the new braindecode version 0.8.1 (#589 by Bruno Aristimunha)
Add SSVEP and ERP paradigms to DL pipelines (#590 by Pierre Guetschel)
Allow to pass a single pipeline file to benchmark (#591 by Pierre Guetschel)
Add new dataset moabb.datasets.Stieger2021 (#604 by Reinmar Kobler and Bruno Aristimunha)
Exposing the drop_rate for all the deep learning parameters (#592 by Bruno Aristimunha)
Add new dataset moabb.datasets.Rodrigues2017 (#602 by Gregoire Cattan and Pedro L. C. Rodrigues)
Change unittest to pytest (#618 by Bruno Aristimunha)
Remove tensorflow import warning (#622 by Bruno Aristimunha)
Bugs#
Fix TRCA implementation for different stimulation freqs and for signal filtering (#522 by Sylvain Chevallier)
Fix saving to BIDS runs with a description string in their name (#530 by Pierre Guetschel)
Fix import of keras BatchNormalization for TF 2.13 and higher (#544 by Brian Irvine)
Fix the doc summary tables of moabb.datasets.Lee2019_SSVEP (#548 #547 #546 by Pierre Guetschel)
Fix the doc summary for Castillos2023 dataset (#561 by Bruno Aristimunha)
Fix format string receiving incorrect number of args in bids interface (#563 by Pierre Guetschel)
Fix number of sessions in doc of moabb.datasets.Sosulski2019 (#565 by Pierre Guetschel)
Fix code column of moabb.datasets.CastillosCVEP100 and moabb.datasets.CastillosCVEP100 (#567 by Pierre Guetschel)
MAINT updating the packages pre-release (#578 by Bruno Aristimunha)
Fix mne_bids version incompatibility with mne (#586 by Bruna Lopes)
Updating the parameters of the SSVEP_TRCA method (#589 by Bruno Aristimunha)
Fix and updating the parameters for the benchmark function (#588 by Bruno Aristimunha)
Fix result table display (#599 by Sylvain Chevallier)
Fix moabb.datasets.preprocessing.SetRawAnnotations setting incorrect annotations when the dataset’s interval does not start at 0 (#607 by Pierre Guetschel)
Fix download link for GigaDB Cho2017 and Lee2019 datasets (#621 by Anton Andreev)
API changes#
None
Version - 1.0.0#
Enhancements#
Adding extra thank you section in the documentation (#390 by Bruno Aristimunha)
Adding new script to get the meta information of the datasets (#389 by Bruno Aristimunha)
Fixing the dataset description based on the meta information (#389 and 398 by Bruno Aristimunha and Sara Sedlar)
Adding second deployment of the documentation (#374 by Bruno Aristimunha)
Adding Parallel evaluation for moabb.evaluations.WithinSessionEvaluation(), moabb.evaluations.CrossSessionEvaluation() (#364 by Bruno Aristimunha)
Add example with VirtualReality BrainInvaders dataset (#393 by Gregoire Cattan and Pedro L. C. Rodrigues)
Adding saving option for the models (#401 by Bruno Aristimunha and Igor Carrara)
Adding example to load different type of models (#401 by Bruno Aristimunha and Igor Carrara)
Add resting state paradigm with dataset and example (#400 by Gregoire Cattan and Pedro L. C. Rodrigues)
Speeding the augmentation method by 400% with NumPy vectorization (#419 by Bruno Aristimunha)
Add possibility to convert datasets to BIDS, plus example (PR #408, PR #391 by Pierre Guetschel and Bruno Aristimunha)
Allow caching intermediate processing steps on disk, plus example (PR #408, issue #385 by Pierre Guetschel)
Restructure the paradigms and datasets to move all preprocessing steps to moabb.datasets.preprocessing and as sklearn pipelines (PR #408 by Pierre Guetschel)
Add moabb.paradigms.FixedIntervalWindowsProcessing() and moabb.paradigms.FilterBankFixedIntervalWindowsProcessing(), plus example (PR #408, issue #424 by Pierre Guetschel)
Define moabb.paradigms.base.BaseProcessing(), common parent to moabb.paradigms.base.BaseParadigm() and moabb.paradigms.BaseFixedIntervalWindowsProcessing() (PR #408 by Pierre Guetschel)
Allow passing a fixed processing pipeline to moabb.paradigms.base.BaseProcessing.get_data() and cache its result on disk (PR #408, issue #367 by Pierre Guetschel)
Update moabb.datasets.fake.FakeDataset()’s code to be unique for each parameter combination (PR #408 by Pierre Guetschel)
Systematically set the annotations when loading data, eventually using the stim channel (PR #408 by Pierre Guetschel)
Allow moabb.datasets.utils.dataset_search() to search across paradigms with paradigm=None (PR #408 by Pierre Guetschel)
Improving the review processing with more pre-commit bots (#435 by Bruno Aristimunha)
Add methods make_processing_pipelines and make_labels_pipeline to moabb.paradigms.base.BaseProcessing (#447 by Pierre Guetschel)
Pipelines’ digests are now computed from the whole processing+classification pipeline (#447 by Pierre Guetschel)
Update all dataset codes to remove white spaces and underscores (#448 by Pierre Guetschel)
Add moabb.utils.depreciated_alias() decorator (#455 by Pierre Guetschel)
Rename many dataset class names to standardize and deprecate old names (#455 by Pierre Guetschel)
Change many dataset codes to match the class names (#455 by Pierre Guetschel)
Add moabb.datasets.compound_dataset.utils.compound_dataset_list (#455 by Pierre Guetschel)
Add c-VEP paradigm and Thielen2021 c-VEP dataset (#463 by Jordy Thielen)
Add option to plot scores vertically. (#417 by Sara Sedlar)
Change naming scheme for runs and sessions to align to BIDS standard (#471 by Pierre Guetschel)
Increase the python version to 3.11 (#470 by Bruno Aristimunha)
Add match_all method in paradigm to support CompoundDataset evaluation with MNE epochs (#473 by Gregoire Cattan)
Automate setting of event_id in compound dataset and add data_origin information to the data (#475 by Gregoire Cattan)
Add possibility of not saving the model (#489 by Igor Carrara)
Add CVEP and BurstVEP dataset from Castillos from Toulouse lab (#531 by Sebastien Velut)
Add c-VEP dataset from Thielen et al. 2015 (#557 by Jordy Thielen)
Bugs#
Restore 3 subject from Cho2017 (#392 by Igor Carrara and Sylvain Chevallier)
Correct downloading with VirtualReality BrainInvaders dataset (#393 by Gregoire Cattan)
Rename event subtraction in moabb.datasets.Shin2017B() (#397 by Pierre Guetschel)
Save parameters of moabb.datasets.PhysionetMI() (#403 by Pierre Guetschel)
Fixing issue with parallel evaluation (#401 by Bruno Aristimunha and Igor Carrara)
Fixing SSLError from BCI competition IV (#404 by Bruno Aristimunha)
Fixing moabb.datasets.bnci.MNEBNCI.data_path() that returned the data itself instead of paths (#412 by Pierre Guetschel)
Adding moabb.datasets.fake() in the init file to use in braindecode object (#414 by Bruno Aristimunha)
Fixing the parallel download issue when the datasets have the same directory (#421 by Sara Sedlar)
Fixing the problem with the annotation loading for the P300 datasets Sosulski2019, Huebner2017 and Huebner2018 (#396 by Sara Sedlar)
Removing the print in the dataset list (#423 by Bruno Aristimunha)
Fixing bug in moabb.pipeline.utils_pytorch.BraindecodeDatasetLoader() where incorrect y was used in transform calls (#426 by Gabriel Schwartz)
Fixing one test in moabb.pipeline.utils_pytorch.BraindecodeDatasetLoader() (#426 by Bruno Aristimunha)
Fix moabb.benchmark() overwriting include_datasets list (#408 by Pierre Guetschel)
Fix moabb.paradigms.base.BaseParadigm() using attributes before defining them (PR #408, issue #425 by Pierre Guetschel)
Fix moabb.paradigms.FakeImageryParadigm(), moabb.paradigms.FakeP300Paradigm() and moabb.paradigms.FakeSSVEPParadigm() is_valid methods to only accept the correct datasets (PR #408 by Pierre Guetschel)
Fix dataset_list construction, which could be empty due to bad import order (PR #449 by Thomas Moreau).
Fixing dataset downloader from servers with non-http (PR #433 by Sara Sedlar)
Fix dataset_list to include deprecated datasets (PR #464 by Bruno Aristimunha)
Fixed bug in moabb.analysis.results.get_string_rep() to handle addresses such as 0x__0A as well (PR #468 by Anton Andreev)
Moving the moabb.evualation.grid_search() to inside the base evaluation (#487 by Bruno Aristimunha)
Removing joblib Parallel (#488 by Igor Carrara)
Fix case when events specified via raw.annotations but no events (#491 by Pierre Guetschel)
Fix bug in downloading Shin2017A dataset (#493 by Igor Carrara)
Fix the cropped option in the dataset preprocessing (#502 by Bruno Aristimunha)
Fix bug in moabb.datasets.utils.dataset_search() with missing cvep paradigm (#557 by Jordy Thielen)
Fix mistakes in moabb.datasets.thielen2021() considering wrong docs and hardcoded trial stim channel (#557 by Jordy Thielen)
API changes#
None
Version - 0.5.0#
Enhancements#
Speeding the augmentation model (#365 by Bruno Aristimunha)
Add VirtualReality BrainInvaders dataset (#358 by Gregoire Cattan)
Switch to python-3.8, update dependencies, fix code link in doc, add code coverage (#315 by Sylvain Chevallier)
Adding a comprehensive benchmarking function (#264 by Divyesh Narayanan and Sylvain Chevallier)
Add meta-information for datasets in documentation (#317 by Bruno Aristimunha)
Add GridSearchCV for different evaluation procedure (#319 by Igor Carrara)
Add new tutorial to benchmark with GridSearchCV (#323 by Igor Carrara)
Add six deep learning models (Tensorflow), and build a tutorial to show to use the deep learning model (#326 by Igor Carrara, Bruno Aristimunha and Sylvain Chevallier)
Add a augmentation model to the pipeline (#326 by Igor Carrara)
Add BrainDecode example (#340 by Igor Carrara and Bruno Aristimunha)
Add Google Analytics to the documentation (#335 by Bruno Aristimunha)
Add support to Braindecode classifier (#328 by Bruno Aristimunha)
Add CodeCarbon to track CO₂ emissions (#350 by Igor Carrara, Bruno Aristimunha and Sylvain Chevallier)
Add CodeCarbon example (#356 by Igor Carrara and Bruno Aristimunha)
Add MsetCCA method for SSVEP classification, parametrise CCA n_components in CCA based methods (#359 by Emmanuel Kalunga and Sylvain Chevallier)
Set epochs’ metadata field in get_data (#371 by Pierre Guetschel)
Add possibility to use transformers to apply fixed pre-processings before evaluations (#372 by Pierre Guetschel)
Add seed parameter to FakeDataset (#372 by Pierre Guetschel)
Bugs#
Fix circular import with braindecode (#363 by Bruno Aristimunha)
Fix bug for MotorImagery when we handle all events (#327 by Igor Carrara)
Fixing CI to handle with new deep learning dependencies (#332 and #326 by Igor Carrara, Bruno Aristimunha and Sylvain Chevallier)
Correct CI error due to isort (#330 by Bruno Aristimunha)
Restricting Python to <= 3.11 and adding tensorflow, keras, scikeras, braindecode, skorch and torch as optional dependencies (#329 by Bruno Aristimunha)
Fix numpy variable to handle with the new version of python (#324 by Bruno Aristimunha)
Correct CI error due to black (#292 by Sylvain Chevallier)
Preload Schirrmeister2017 raw files (#290 by Pierre Guetschel)
Incorrect event assignation for Lee2019 in MNE >= 1.0.0 (#298 by Sylvain Chevallier)
Correct usage of name simplification function in analyze (#306 by Divyesh Narayanan)
Fix downloading path issue for Weibo2014 and Zhou2016, numpy error in DemonsP300 (#315 by Sylvain Chevallier)
Fix unzip error for Huebner2017 and Huebner2018 (#318 by Sylvain Chevallier)
Fix n_classes when events set to None (#337 by Igor Carrara and Sylvain Chevallier)
Change n_jobs=-1 to self.n_jobs in GridSearch (#344 by Igor Carrara)
Fix dropped epochs issue (#371 by Pierre Guetschel)
Fix redundancy website issue (#372 by Bruno Aristimunha)
API changes#
None
Version - 0.4.6#
Enhancements#
Add P300 BrainInvaders datasets (#283 by Sylvain Chevallier)
Add explicit warning when lambda function are used to parametrize pipelines (#278 by Jan Sosulski)
Bugs#
Correct default path for ERP visualization (#279 by Jan Sosulski)
Correct documentation (#282 and #284 by Jan Sosulski)
Version - 0.4.5#
Enhancements#
Progress bars, pooch, tqdm (#258 by Divyesh Narayanan and Sylvain Chevallier)
Adding test and example for set_download_dir (#249 by Divyesh Narayanan)
Update to newer version of Schirrmeister2017 dataset (#265 by Robin Schirrmeister)
Adding Huebner2017 and Huebner2018 P300 datasets (#260 by Jan Sosulski)
Adding Sosulski2019 auditory P300 datasets (#266 by Jan Sosulski)
New script to visualize ERP on all datasets, as a sanity check (#261 by Jan Sosulski)
Bugs#
Removing dependency on mne method for PhysionetMI data downloading, renaming runs (#257 by Divyesh Narayanan)
Correcting events management in Schirrmeister2017, renaming session and run (#255 by Pierre Guetschel and Sylvain Chevallier)
Switch session and runs in MAMEM1, 2 and 3 to avoid error in WithinSessionEvaluation (#256 by Sylvain Chevallier)
Correct docstrings for the documentation, including Lee2017 (#256 by Sylvain Chevallier)
Version - 0.4.4#
Enhancements#
Add TRCA algorithm for SSVEP (#238 by Ludovic Darmet)
Bugs#
Remove unused argument from dataset_search (#243 by Divyesh Narayanan)
Remove MNE call to _fetch_dataset and use MOABB _fetch_file (#235 by Jan Sosulski)
Correct doc formatting (#232 by Sylvain Chevallier)
API changes#
Minimum supported Python version is now 3.7
MOABB now depends on scikit-learn >= 1.0
Version - 0.4.3#
Enhancements#
Rewrite Lee2019 to add P300 and SSVEP datasets (#217 by Pierre Guetschel)
Bugs#
Avoid information leakage for MNE Epochs pipelines in evaluation (#222 by Sylvain Chevallier)
Correct error in set_download_dir (#225 by Sylvain Chevallier)
Ensure that channel order is consistent across dataset when channel argument is specified in paradigm (#229 by Sylvain Chevallier)
API changes#
ch_names argument added to init of moabb.datasets.fake.FakeDataset (#229 by Sylvain Chevallier)
Version - 0.4.2#
Enhancements#
None
Bugs#
Correct error when downloading Weibo dataset (#212 by Sylvain Chevallier)
API changes#
None
Version - 0.4.1#
Enhancements#
None
Bugs#
Correct path error for first time launch (#204 by Sylvain Chevallier)
Fix optional dependencies issues for PyPi (#205 by Sylvain Chevallier)
API changes#
Remove update_path on all datasets, update_path parameter in dataset.data_path() is deprecated (#207 by Sylvain Chevallier)
Version - 0.4.0#
Enhancements#
Implementation for learning curves (#155 by Jan Sosulski)
Adding Neiry Demons P300 dataset (#156 by Vladislav Goncharenko)
Coloredlogs removal (#163 by Vladislav Goncharenko)
Update for README (#164 by Vladislav Goncharenko and Sylvain Chevallier)
Test all relevant python versions in Github Actions CI (#167 by Vladislav Goncharenko)
Adding motor imagery part of the Lee2019 dataset (#170 by Ali Abdul Hussain)
CI: deploy docs from CI pipeline (#124 by Erik Bjäreholt, Divyesh Narayanan and Sylvain Chevallier)
Remove dependencies: WFDB and pyunpack (#180 and #188 by Sylvain Chevallier)
Add support for FigShare API (#188 by Sylvain Chevallier)
New download backend function relying on Pooch, handling FTP, HTTP and HTTPS (#188 by Sylvain Chevallier)
Complete rework of examples and tutorial (#188 by Sylvain Chevallier)
Change default storage location for results: instead of moabb source code directory it is now stored in mne_data (#188 by Sylvain Chevallier)
Major update of test (#188 by Sylvain Chevallier)
Adding troubleshooting and badges in README (#189 by Jan Sosulski and Sylvain Chevallier)
Use MNE epoch in evaluation (#192 by Sylvain Chevallier)
Allow changing of storage location (#192 by Divyesh Narayanan and Sylvain Chevallier)
Deploy docs on moabb.github.io (#196 by Sylvain Chevallier)
Broadening subject_list type for moabb.datasets.BaseDataset() (#198 by Sylvain Chevallier)
Adding this what’s new (#200 by Sylvain Chevallier)
Improving cache usage and save computation time in CI (#200 by Sylvain Chevallier)
Rewrite Lee2019 to add P300 and SSVEP datasets (#217 by Pierre Guetschel)
Bugs#
Restore basic logging (#177 by Jan Sosulski)
Correct wrong type of results dataframe columns (#188 by Sylvain Chevallier)
Add accept arg to acknowledge licence for moabb.datasets.Shin2017A() and moabb.datasets.Shin2017B() (#201 by Sylvain Chevallier)
API changes#
Drop update_path from moabb.download.data_path and moabb.download.data_dl
Version 0.3.0#
Enhancements#
Expose sklearn error_score parameter (#70 by Jan Sosulski)
Adds a unit_factor attribute to base_paradigms (#72 by Jan Sosulski)
Allow event lists in P300 paradigm (#83 by Jan Sosulski)
Return epochs instead of np.ndarray in process_raw (#86 by Jan Sosulski)
Set path for hdf5 files (#92 by Jan Sosulski)
Update to MNE 0.21 (#101 by Ramiro Gatti and Sylvain Chevallier)
Adding a baseline correction (#115 by Ramiro Gatti)
Adding SSVEP datasets: MAMEM1, MAMEM2, MAMEM3, Nakanishi2015, Wang2016, (#118 by Sylvain Chevallier, Quentin Barthelemy, and Divyesh Narayanan)
Switch to GitHub Actions (#124 by Erik Bjäreholt)
Allow recording of additional scores/parameters/metrics in evaluation (#127 and #128 by Jan Sosulski)
Fix Ofner2017 and PhysionetMI annotations (#135 by Ali Abdul Hussain)
Adding Graz workshop tutorials (#130 and #137 by Sylvain Chevallier and Lucas Custódio)
Adding pre-commit configuration using isort, black and flake8 (#140 by Vladislav Goncharenko)
style: format Python code with black (#147 by Erik Bjäreholt)
Switching to Poetry dependency management (#150 by Vladislav Goncharenko)
Using Prettier to format md and yml files (#151 by Vladislav Goncharenko)
Bugs#
Use stim_channels or check annotation when loading files in Paradigm (#72 by Jan Sosulski)
Correct MNE issues (#76 by Sylvain Chevallier)
Fix capitalization in channel names of cho dataset (#90 by Jan Sosulski)
Correct failing CI tests (#100 by Sylvain Chevallier)
Fix EPFL dataset flat signal sections and wrong scaling (#104 and #96 by Jan Sosulski)
Fix schirrmeister dataset for Python3.8 (#105 by Robin Schirrmeister)
Correct event detection problem and duplicate event error (#106 by Sylvain Chevallier)
Fix channel selection in paradigm (#108 by Sylvain Chevallier)
Fix upperlimb Ofner2017 error and gdf import problem (#111 and #110 by Sylvain Chevallier)
Fix event_id in events_from_annotations missed (#112 by Ramiro Gatti)
Fix h5py>=3.0 compatibility issue (#138 by Mohammad Mostafa Farzan)
Python 2 support removal (#148 by Vladislav Goncharenko)
Travis-ci config removal (#149 by Vladislav Goncharenko)
API changes#
None
Version 0.2.1#
Enhancements#
Add Tikhonov regularized CSP in moabb.pipelines.csp from the paper (#60 by Vinay Jayaram)
Update to MNE version 0.19 (#73 by Jan Sosulski)
Improve doc building in CI (#60 by Sylvain Chevallier)
Bugs#
Update GigaDB Cho2017 URL (Pedro L. C. Rodrigues and Vinay Jayaram)
Fix braininvaders ERP data (Pedro L. C. Rodrigues)
Replace MNE read_montage with make_standard_montage (Jan Sosulski)
Correct Flake and PEP8 error (Sylvain Chevallier)
API changes#
None
Version 0.2.0#
Enhancements#
MOABB corresponding to the paper version by Vinay Jayaram and Alexandre Barachant
Creating P300 paradigm and BNCI datasets (#53 by Pedro L. C. Rodrigues)
Adding EPFL P300 dataset (#56 by Pedro L. C. Rodrigues)
Adding BrainInvaders P300 dataset (#57 by Pedro L. C. Rodrigues)
Creating SSVEP paradigm and SSVEPExo dataset (#59 by Sylvain Chevallier)
Bugs#
None
API changes#
None