An evaluation defines how to go from the trials of each subject and session to a generalization statistic (AUC, F1-score, accuracy, etc.). It can be within-session accuracy, cross-session within-subject accuracy, cross-subject accuracy, or another transfer-learning setting.
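The three settings above differ only in how recordings are partitioned into train and test groups. A minimal sketch of that partitioning logic, in plain Python (illustrative only; the function and field names here are hypothetical, not the library's API):

```python
# Illustrative sketch of the three evaluation settings as data splits.
# Names (Trial, *_splits) are hypothetical, not the actual library API.
from collections import namedtuple

Trial = namedtuple("Trial", ["subject", "session", "x", "y"])

def within_session_splits(trials):
    """Group trials by (subject, session); the generalization statistic
    is computed inside each single recording session (e.g. via
    cross-validation within the group)."""
    keys = sorted({(t.subject, t.session) for t in trials})
    for subj, sess in keys:
        group = [t for t in trials if (t.subject, t.session) == (subj, sess)]
        yield (subj, sess), group

def cross_session_splits(trials):
    """Leave one session out per subject: train on the subject's other
    sessions, test on the held-out session."""
    for subj in sorted({t.subject for t in trials}):
        sessions = sorted({t.session for t in trials if t.subject == subj})
        for held_out in sessions:
            train = [t for t in trials if t.subject == subj and t.session != held_out]
            test = [t for t in trials if t.subject == subj and t.session == held_out]
            yield train, test

def cross_subject_splits(trials):
    """Leave one subject out: train on every other subject's data,
    test on the held-out subject."""
    for held_out in sorted({t.subject for t in trials}):
        train = [t for t in trials if t.subject != held_out]
        test = [t for t in trials if t.subject == held_out]
        yield train, test
```

For example, with 2 subjects, 2 sessions each, and 3 trials per session, the within-session setting yields 4 groups of 3 trials, the cross-session setting yields 4 train/test splits per-subject, and the cross-subject setting yields 2 leave-one-subject-out splits.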


WithinSessionEvaluation(n_perms, list, …)

Within-session evaluation.

CrossSessionEvaluation(paradigm[, datasets, …])

Cross-session evaluation.

CrossSubjectEvaluation(paradigm[, datasets, …])

Cross-subject evaluation.

Base & Utils

base.BaseEvaluation(paradigm[, datasets, …])

Base class that defines necessary operations for an evaluation.
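The base-class pattern described here, where concrete evaluations supply the split strategy and the base class drives scoring, could be sketched roughly as follows (an illustrative sketch only; the class and method names below are hypothetical and do not reproduce the library's actual API):

```python
# Illustrative sketch of an evaluation base class (hypothetical names,
# not the actual library code): subclasses define how trials are split,
# the base class computes one statistic per split.
from abc import ABC, abstractmethod

class BaseEvaluation(ABC):
    def __init__(self, score_fn):
        # score_fn(train, test) -> generalization statistic for one split
        self.score_fn = score_fn

    @abstractmethod
    def splits(self, trials):
        """Yield (train, test) partitions of the trials."""

    def evaluate(self, trials):
        # apply the scoring function to every split the subclass yields
        return [self.score_fn(train, test) for train, test in self.splits(trials)]

class LeaveOneOutEvaluation(BaseEvaluation):
    """Toy concrete evaluation: hold out one trial at a time."""
    def splits(self, trials):
        for i in range(len(trials)):
            yield trials[:i] + trials[i + 1:], [trials[i]]
```

The design choice this illustrates: each evaluation subclass only has to answer "how is the data partitioned?", while scoring and aggregation live in one place.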