API Reference#

Datasets#

A dataset handles and abstracts low-level access to the data: it takes data stored locally, in the format in which they have been downloaded, and converts them into MNE Raw objects. There are options to pool all the different recording sessions per subject or to evaluate them separately.

See NeuroTechX/moabb for details on the datasets (electrodes, number of trials, sessions, etc.).
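
A minimal sketch of direct dataset access, using the BNCI2014_001 dataset listed below (the first call downloads the files to the local MNE data folder):

    from moabb.datasets import BNCI2014_001

    dataset = BNCI2014_001()
    # get_data returns a nested dict: {subject: {session: {run: mne.io.Raw}}}
    data = dataset.get_data(subjects=[1])
    for subject, sessions in data.items():
        for session, runs in sessions.items():
            for run, raw in runs.items():
                print(subject, session, run, raw.info["sfreq"])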

Motor Imagery Datasets#

AlexMI()

Alex Motor Imagery dataset.

BNCI2014_001()

BNCI 2014-001 Motor Imagery dataset.

BNCI2014_002()

BNCI 2014-002 Motor Imagery dataset.

BNCI2014_004()

BNCI 2014-004 Motor Imagery dataset.

BNCI2015_001()

BNCI 2015-001 Motor Imagery dataset.

BNCI2015_004()

BNCI 2015-004 Motor Imagery dataset.

Cho2017()

Motor Imagery dataset from Cho et al 2017.

Lee2019_MI([train_run, test_run, ...])

BMI/OpenBMI dataset for MI.

GrosseWentrup2009()

Munich Motor Imagery dataset.

Ofner2017([imagined, executed])

Motor Imagery dataset from Ofner et al 2017.

PhysionetMI([imagined, executed])

Physionet Motor Imagery dataset.

Schirrmeister2017()

High-gamma dataset described in Schirrmeister et al. 2017.

Shin2017A([accept])

Motor Imagery dataset from Shin et al 2017.

Shin2017B([accept])

Mental Arithmetic Dataset from Shin et al 2017.

Weibo2014()

Motor Imagery dataset from Weibo et al 2014.

Zhou2016()

Motor Imagery dataset from Zhou et al 2016.

ERP Datasets#

BI2012([Training, Online])

P300 dataset BI2012 from a "Brain Invaders" experiment.

BI2013a([NonAdaptive, Adaptive, Training, ...])

P300 dataset BI2013a from a "Brain Invaders" experiment.

BI2014a()

P300 dataset BI2014a from a "Brain Invaders" experiment.

BI2014b()

P300 dataset BI2014b from a "Brain Invaders" experiment.

BI2015a()

P300 dataset BI2015a from a "Brain Invaders" experiment.

BI2015b()

P300 dataset BI2015b from a "Brain Invaders" experiment.

Cattan2019_VR([virtual_reality, screen_display])

Dataset of an EEG-based BCI experiment in Virtual Reality using P300.

BNCI2014_008()

BNCI 2014-008 P300 dataset.

BNCI2014_009()

BNCI 2014-009 P300 dataset.

BNCI2015_003()

BNCI 2015-003 P300 dataset.

DemonsP300()

Visual P300 dataset recorded in Virtual Reality (VR) game Raccoons versus Demons.

EPFLP300()

P300 dataset from Hoffmann et al 2008.

Huebner2017([interval, raw_slice_offset, ...])

Learning from label proportions for a visual matrix speller (ERP) dataset from Hübner et al 2017.

Huebner2018([interval, raw_slice_offset, ...])

Mixture of LLP and EM for a visual matrix speller (ERP) dataset from Hübner et al 2018.

Lee2019_ERP([train_run, test_run, ...])

BMI/OpenBMI dataset for P300.

Sosulski2019([use_soas_as_sessions, ...])

P300 dataset from initial spot study.

SSVEP Datasets#

Kalunga2016()

SSVEP Exo dataset.

Nakanishi2015()

SSVEP Nakanishi 2015 dataset.

Wang2016()

SSVEP Wang 2016 dataset.

MAMEM1()

SSVEP MAMEM 1 dataset.

MAMEM2()

SSVEP MAMEM 2 dataset.

MAMEM3()

SSVEP MAMEM 3 dataset.

Lee2019_SSVEP([train_run, test_run, ...])

BMI/OpenBMI dataset for SSVEP.

c-VEP Datasets#

Thielen2021()

c-VEP dataset from Thielen et al. (2021).

Resting State Datasets#

Cattan2019_PHMD()

Passive Head Mounted Display with Music Listening dataset.

Base & Utils#

base.BaseDataset(subjects, ...[, doi, ...])

Abstract Moabb BaseDataset.

base.CacheConfig([save_raw, save_epochs, ...])

Configuration for caching of datasets.

fake.FakeDataset([event_list, n_sessions, ...])

Fake dataset for testing purposes.

fake.FakeVirtualRealityDataset([seed])

Fake Cattan2019_VR dataset for testing purposes.

download.data_path(url, sign[, path, ...])

Get path to local copy of given dataset URL.

download.data_dl(url, sign[, path, ...])

Download file from url to specified path.

download.fs_issue_request(method, url, headers)

Wrapper for HTTP request.

download.fs_get_file_list(article_id[, version])

List all the files associated with a given article.

download.fs_get_file_hash(filelist)

Returns a dict associating figshare file id to MD5 hash.

download.fs_get_file_id(filelist)

Returns a dict associating filename to figshare file id.

download.fs_get_file_name(filelist)

Returns a dict associating figshare file id to filename.

utils.dataset_search([paradigm, ...])

Returns a list of datasets that match the given criteria.

utils.find_intersecting_channels(datasets[, ...])

Given a list of dataset instances, return the list of channels shared by all datasets.

Compound Datasets#

ERP Datasets#

BI2014a_Il()

A selection of subjects from BI2014a with AUC < 0.7 using the pipeline ERPCovariances(estimator="lwf"), MDM(metric="riemann").

BI2014b_Il()

A selection of subjects from BI2014b with AUC < 0.7 using the pipeline ERPCovariances(estimator="lwf"), MDM(metric="riemann").

BI2015a_Il()

A selection of subjects from BI2015a with AUC < 0.7 using the pipeline ERPCovariances(estimator="lwf"), MDM(metric="riemann").

BI2015b_Il()

A selection of subjects from BI2015b with AUC < 0.7 using the pipeline ERPCovariances(estimator="lwf"), MDM(metric="riemann").

Cattan2019_VR_Il()

A selection of subjects from Cattan2019_VR with AUC < 0.7 using the pipeline ERPCovariances(estimator="lwf"), MDM(metric="riemann").

BI_Il()

Subjects from the Brain Invaders datasets with AUC < 0.7 using the pipeline ERPCovariances(estimator="lwf"), MDM(metric="riemann").

Evaluations#

An evaluation defines how we go from the trials of each subject and session to a generalization statistic (AUC score, F-score, accuracy, etc.). It can be a within-recording-session evaluation, an across-session within-subject evaluation, an across-subject evaluation, or another transfer learning setting.
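
As an illustrative sketch, a within-session evaluation of a single pipeline on a single dataset could look like the following (the CSP + LDA pipeline and the dataset are arbitrary example choices, not recommendations):

    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    from moabb.datasets import BNCI2014_001
    from moabb.evaluations import WithinSessionEvaluation
    from moabb.paradigms import LeftRightImagery

    pipelines = {"CSP+LDA": make_pipeline(CSP(n_components=8), LinearDiscriminantAnalysis())}
    evaluation = WithinSessionEvaluation(
        paradigm=LeftRightImagery(),
        datasets=[BNCI2014_001()],
        overwrite=False,
    )
    # process returns a pandas DataFrame with one row per subject/session/pipeline
    results = evaluation.process(pipelines)
    print(results[["dataset", "subject", "session", "pipeline", "score"]].head())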

Evaluations#

WithinSessionEvaluation([n_perms, data_size])

Performance evaluation within session (k-fold cross-validation).

CrossSessionEvaluation(paradigm[, datasets, ...])

Cross-session performance evaluation.

CrossSubjectEvaluation(paradigm[, datasets, ...])

Cross-subject performance evaluation.

Base & Utils#

base.BaseEvaluation(paradigm[, datasets, ...])

Base class that defines necessary operations for an evaluation.

Paradigms#

A paradigm defines how the raw data will be converted to trials ready to be processed by a decoding algorithm.

This is a function of the paradigm used, i.e. in motor imagery one can have two-class, multi-class, or continuous paradigms; similarly, different preprocessing is necessary for ERP vs ERD paradigms.
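
For instance, a sketch of what a motor imagery paradigm produces (the 8-32 Hz band-pass and the dataset below are illustrative choices):

    from moabb.datasets import BNCI2014_001
    from moabb.paradigms import LeftRightImagery

    paradigm = LeftRightImagery(fmin=8, fmax=32)  # band-pass edges chosen for illustration
    X, y, metadata = paradigm.get_data(dataset=BNCI2014_001(), subjects=[1])
    print(X.shape)          # (n_trials, n_channels, n_times)
    print(set(y))           # class labels, e.g. {'left_hand', 'right_hand'}
    print(metadata.head())  # subject / session / run for every trial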

Motor Imagery Paradigms#

MotorImagery([n_classes])

N-class motor imagery.

LeftRightImagery(**kwargs)

Motor Imagery for left hand/right hand classification.

FilterBankLeftRightImagery(**kwargs)

Filter Bank Motor Imagery for left hand/right hand classification.

FilterBankMotorImagery([n_classes])

Filter bank n-class motor imagery.

P300 Paradigms#

SinglePass([fmin, fmax])

Single Bandpass filter P300.

P300(**kwargs)

P300 for Target/NonTarget classification.

SSVEP Paradigms#

SSVEP([fmin, fmax])

Single bandpass filter SSVEP.

FilterBankSSVEP([filters])

Filter bank n-class SSVEP paradigm.

c-VEP Paradigms#

CVEP([fmin, fmax])

Single bandpass c-VEP paradigm for epoch-level decoding.

FilterBankCVEP([filters])

Filterbank c-VEP paradigm for epoch-level decoding.

Fixed Interval Windows Processings#

FixedIntervalWindowsProcessing([fmin, fmax, ...])

Fixed interval windows processing.

FilterBankFixedIntervalWindowsProcessing([...])

Filter bank fixed interval windows processing.

Base & Utils#

motor_imagery.BaseMotorImagery([filters, ...])

Base Motor imagery paradigm.

motor_imagery.SinglePass([fmin, fmax])

Single bandpass filter Motor Imagery.

motor_imagery.FilterBank([filters])

Filter Bank MI.

p300.BaseP300([filters, events, tmin, tmax, ...])

Base P300 paradigm.

ssvep.BaseSSVEP([filters, events, ...])

Base SSVEP Paradigm.

BaseFixedIntervalWindowsProcessing([...])

Base class for fixed interval windows processing.

base.BaseParadigm(filters[, events, tmin, ...])

Base class for paradigms.

base.BaseProcessing(filters[, tmin, tmax, ...])

Base Processing.

Pipelines#

A pipeline defines all the steps required by an algorithm to obtain predictions.

Pipelines are typically a chain of sklearn-compatible transformers ending with a sklearn-compatible estimator.
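
A small sketch of such a chain, combining the LogVariance transformer listed below with a scikit-learn classifier (the classifier is an arbitrary choice):

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    from moabb.pipelines.features import LogVariance

    # Any sklearn-compatible chain works; the final step must be an estimator.
    pipeline = make_pipeline(LogVariance(), LinearDiscriminantAnalysis())
    # The pipeline can then be passed to an evaluation, e.g.:
    # WithinSessionEvaluation(...).process({"LogVar + LDA": pipeline})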

Pipelines#

features.LogVariance()

LogVariance transformer.

features.FM([freq])

Transformer to scale sampling frequency.

features.ExtendedSSVEPSignal()

Prepare FilterBank SSVEP EEG signal for estimating extended covariances.

features.AugmentedDataset([order, lag])

Dataset augmentation methods in a higher dimensional space.

features.StandardScaler_Epoch()

Standardize the raw data X for deep learning methods.

csp.TRCSP([nfilter, metric, log, alpha])

Weighted Tikhonov-regularized CSP as described in Lotte and Guan 2011.

classification.SSVEP_CCA(interval, freqs[, ...])

Classifier based on Canonical Correlation Analysis for SSVEP.

classification.SSVEP_TRCA(interval, freqs[, ...])

Classifier based on the Task-Related Component Analysis (TRCA) method for SSVEP.

classification.SSVEP_MsetCCA(freqs[, ...])

Classifier based on MsetCCA for SSVEP.

deep_learning.KerasDeepConvNet(loss[, ...])

Keras implementation of the Deep Convolutional Network as described in Schirrmeister et al. 2017.

deep_learning.KerasEEGITNet(loss[, ...])

Keras implementation of the EEGITNet architecture.

deep_learning.KerasEEGNet_8_2(loss[, ...])

Keras implementation of EEGNet as described in Lawhern et al. 2018.

deep_learning.KerasEEGNeX(loss[, optimizer, ...])

Keras implementation of the EEGNeX architecture.

deep_learning.KerasEEGTCNet(loss[, ...])

Keras implementation of the EEGTCNet architecture.

deep_learning.KerasShallowConvNet(loss[, ...])

Keras implementation of the Shallow Convolutional Network as described in Schirrmeister et al. 2017.

Base & Utils#

utils.create_pipeline_from_config(config)

Create a pipeline from a config file.

utils.FilterBank(estimator[, flatten])

Apply the same pipeline over a bank of filters.

utils_deep_model.EEGNet(data, input_layer[, ...])

EEGNet block implementation.

utils_deep_model.EEGNet_TC(self, input_layer)

utils_deep_model.TCN_block(input_layer, ...)

Temporal Convolutional Network (TCN) block implementation.

utils_pytorch.BraindecodeDatasetLoader([...])

Class to load data from MOABB in a format compatible with braindecode.

utils_pytorch.InputShapeSetterEEG([...])

Sets the input dimension of the PyTorch module to the input dimension of the training data.

Analysis#

Plotting#

plotting.score_plot(data[, pipelines, ...])

Plot scores for all pipelines and all datasets.

plotting.paired_plot(data, alg1, alg2)

Generate a figure with a paired plot.

plotting.summary_plot(sig_df, effect_df[, ...])

Significance matrix to compare pipelines.

plotting.meta_analysis_plot(stats_df, alg1, alg2)

Meta-analysis to compare two algorithms across several datasets.

Statistics#

meta_analysis.find_significant_differences(df)

Compute differences between pipelines across datasets.

meta_analysis.compute_dataset_statistics(df)

Compute meta-analysis statistics from results dataframe.

meta_analysis.combine_effects(effects, nsubs)

Combine effects for meta-analysis statistics.

meta_analysis.combine_pvalues(p, nsubs)

Combine p-values for meta-analysis statistics.

meta_analysis.collapse_session_scores(df)

Prepare results dataframe for computing statistics.

Utils#

Benchmark#

benchmark([pipelines, evaluations, ...])

Run benchmarks for selected pipelines and datasets.

Utils#

set_log_level([level])

Set log level.

setup_seed(seed)

Set the seed for random, numpy, TensorFlow and PyTorch.

set_download_dir(path)

Set the download directory if you need to change it from the default MNE path.

make_process_pipelines(processing, dataset)

Shortcut for the method moabb.paradigms.base.BaseProcessing.make_process_pipelines()