moabb.evaluations.base.BaseEvaluation

class moabb.evaluations.base.BaseEvaluation(paradigm, datasets=None, random_state=None, n_jobs=1, overwrite=False, error_score='raise', suffix='', hdf5_path=None, additional_columns=None, return_epochs=False, mne_labels=False)[source]

Base class that defines necessary operations for an evaluation. Evaluations determine what the train and test sets are and can implement additional data preprocessing steps for more complicated algorithms.

Parameters
  • paradigm (Paradigm instance) – The paradigm to use.

  • datasets (List of Dataset instance) – The list of datasets on which to run the evaluation. If None, the list of compatible datasets is retrieved from the paradigm instance.

  • random_state (int, RandomState instance, default=None) – Seed for shuffling examples; if not None, it guarantees reproducible shuffling across runs.

  • n_jobs (int, default=1) – Number of parallel jobs for fitting pipelines.

  • overwrite (bool, default=False) – If True, overwrite existing results.

  • error_score ('raise' or numeric, default='raise') – Value to assign to the score if an error occurs during estimator fitting. If set to 'raise', the error is raised.

  • suffix (str) – Suffix for the results file.

  • hdf5_path (str) – Specific path for storing the results.

  • additional_columns (None) – Additional information columns to record in the results.

  • return_epochs (bool, default=False) – If True, use MNE Epochs objects to train the pipelines.

  • mne_labels (bool, default=False) – If True and return_epochs is True, use the original dataset labels.
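
Note that BaseEvaluation is abstract; in practice one of its concrete subclasses is instantiated. A minimal sketch, assuming moabb.evaluations.WithinSessionEvaluation and moabb.paradigms.LeftRightImagery are available in your moabb version:

from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import LeftRightImagery

# datasets is left as None, so compatible datasets are taken from the paradigm.
evaluation = WithinSessionEvaluation(
    paradigm=LeftRightImagery(),
    random_state=42,
    n_jobs=1,
    overwrite=False,
    suffix="demo",
)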

Methods

  • evaluate(dataset, pipelines) – Evaluate results on a single dataset.

  • is_valid(dataset) – Verify the dataset is compatible with evaluation.

  • process(pipelines) – Runs all pipelines on all datasets.

  • get_results

  • push_result

abstract evaluate(dataset, pipelines)[source]

Evaluate results on a single dataset.

This method returns a generator. Each result item is a dict with the following convention:

res = {'time': duration of the training,
       'dataset': dataset id,
       'subject': subject id,
       'session': session id,
       'score': score,
       'n_samples': number of training examples,
       'n_channels': number of channels,
       'pipeline': pipeline name}
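
A purely illustrative sketch of a subclass implementing evaluate is shown below. It assumes the usual moabb conventions (paradigm.get_data, dataset.subject_list, dataset.code); the plain 5-fold cross-validation is a placeholder, not the behavior of any shipped evaluation.

import time

import numpy as np
from sklearn.model_selection import cross_val_score

from moabb.evaluations.base import BaseEvaluation


class WithinSubjectSketch(BaseEvaluation):
    """Hypothetical evaluation: plain 5-fold CV per subject."""

    def is_valid(self, dataset):
        return True  # this sketch accepts any dataset

    def evaluate(self, dataset, pipelines):
        for subject in dataset.subject_list:
            # The paradigm handles epoching and label extraction.
            X, y, metadata = self.paradigm.get_data(dataset, [subject])
            for name, pipeline in pipelines.items():
                t_start = time.time()
                score = np.mean(cross_val_score(pipeline, X, y, cv=5))
                yield {
                    "time": time.time() - t_start,
                    "dataset": dataset.code,
                    "subject": subject,
                    "session": "all",
                    "score": score,
                    "n_samples": len(y),
                    "n_channels": X.shape[1],
                    "pipeline": name,
                }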
abstract is_valid(dataset)[source]

Verify the dataset is compatible with evaluation.

This method is called to verify that the datasets given in the constructor are compatible with the evaluation context.

It should return False if the dataset does not match the evaluation; this is, for example, the case if the dataset does not contain enough sessions for a cross-session evaluation (see the sketch after the parameter description below).

Parameters

dataset (dataset instance) – The dataset to verify.
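
As an illustration, a cross-session evaluation could implement the check as follows; a minimal sketch assuming the standard n_sessions attribute of moabb datasets:

def is_valid(self, dataset):
    # A cross-session split needs at least two recorded sessions.
    return dataset.n_sessions >= 2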

process(pipelines)[source]

Runs all pipelines on all datasets.

This function applies all provided pipelines and returns a dataframe containing the results of the evaluation.

Parameters

pipelines (dict of pipeline instances) – A dict containing the scikit-learn pipelines to evaluate.

Returns

results – A dataframe containing the results.

Return type

pd.DataFrame
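
A typical end-to-end usage sketch, assuming CrossSessionEvaluation, LeftRightImagery, and the BNCI2014_001 dataset are available in your moabb version (dataset class names vary slightly across releases):

from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline

from moabb.datasets import BNCI2014_001
from moabb.evaluations import CrossSessionEvaluation
from moabb.paradigms import LeftRightImagery

paradigm = LeftRightImagery()
evaluation = CrossSessionEvaluation(paradigm=paradigm, datasets=[BNCI2014_001()])

# Keys are pipeline names; they appear in the 'pipeline' column of the results.
pipelines = {"CSP+LDA": make_pipeline(CSP(n_components=8), LDA())}

results = evaluation.process(pipelines)  # returns a pd.DataFrame
print(results.head())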