Evaluations#

An evaluation defines how we go from trials per subject and session to a generalization statistic (AUC, F1-score, accuracy, etc.): it can be within-session accuracy, across-session within-subject accuracy, across-subject accuracy, or another transfer-learning setting.

Evaluations#

WithinSessionEvaluation([n_perms, data_size])

Performance evaluation within session (k-fold cross-validation).
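The within-session scheme can be sketched with plain scikit-learn: all trials come from a single recording session and are split into k folds. The random features below are a stand-in for real EEG trials (an assumption for illustration, not MOABB's actual data loading):

```python
# Within-session evaluation sketch: k-fold CV over trials of ONE session.
# Synthetic features stand in for preprocessed EEG trials (assumption).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_features = 100, 8
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # two-class labels
X[y == 1] += 0.5                        # give the classifier some signal

# 5-fold cross-validation within the (single) session
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5,
                         scoring="accuracy")
print(f"within-session accuracy: {scores.mean():.2f}")
```

Because train and test folds share the same session, this is the most optimistic of the three schemes: no session-to-session drift is present in the held-out data.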

CrossSessionEvaluation(paradigm[, datasets, ...])

Cross-session performance evaluation.
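Cross-session evaluation holds out entire sessions of one subject, which a group-aware splitter expresses directly. This is a hedged sketch of the idea using scikit-learn's `LeaveOneGroupOut`, not MOABB's internal implementation; the session drift term is a synthetic assumption:

```python
# Cross-session evaluation sketch: train on some sessions, test on a
# held-out session of the SAME subject.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
sessions = np.repeat([0, 1, 2], 60)     # three sessions, 60 trials each
X = rng.normal(size=(sessions.size, 8))
y = rng.integers(0, 2, size=sessions.size)
X[y == 1] += 0.5                        # class-discriminative signal
X += sessions[:, None] * 0.2            # simulated session-specific drift

# Each fold trains on two sessions and tests on the third
cv_scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                            cv=LeaveOneGroupOut(), groups=sessions)
print(f"cross-session accuracy: {cv_scores.mean():.2f}")
```

One fold per session: the score now reflects how well the classifier transfers across recording conditions rather than within them.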

CrossSubjectEvaluation(paradigm[, datasets, ...])

Cross-subject performance evaluation.
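The cross-subject scheme applies the same leave-one-group-out mechanism at the subject level: train on all subjects but one, test on the held-out subject. The sketch below is illustrative only; the per-subject offsets mimic inter-subject variability and are an assumption, not real data:

```python
# Cross-subject evaluation sketch: leave one SUBJECT out per fold.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(4), 50)  # four subjects, 50 trials each
X = rng.normal(size=(subjects.size, 8))
y = rng.integers(0, 2, size=subjects.size)
X[y == 1] += 0.5                        # class-discriminative signal
X += rng.normal(size=(4, 8))[subjects] * 0.3  # subject-specific offsets

clf = LinearDiscriminantAnalysis()
subject_scores = {}
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf.fit(X[train], y[train])
    subject_scores[subjects[test][0]] = clf.score(X[test], y[test])

for subj, acc in sorted(subject_scores.items()):
    print(f"subject {subj}: accuracy {acc:.2f}")
```

Reporting one score per held-out subject, as above, is what makes subject-level transfer (the hardest setting of the three) measurable.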

Base & Utils#

base.BaseEvaluation(paradigm[, datasets, ...])

Base class that defines necessary operations for an evaluation.