An evaluation defines how we go from per-subject, per-session trials to a generalization statistic (AUC, F1-score, accuracy, etc.). It can be within-session accuracy, cross-session within-subject accuracy, cross-subject accuracy, or another transfer-learning setting.
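The three evaluation schemes differ only in how train/test splits are formed from the (subject, session) labels attached to each trial. The sketch below is illustrative only (not MOABB's implementation) and uses hypothetical toy trial tuples to show the splitting logic of each scheme:

```python
from collections import defaultdict

# Hypothetical toy trials: (subject, session, trial_id) tuples.
trials = [(subj, sess, t)
          for subj in ("s1", "s2")
          for sess in ("sess0", "sess1")
          for t in range(4)]

def within_session_splits(trials, k=2):
    # Within-session: k-fold CV inside each (subject, session) block.
    blocks = defaultdict(list)
    for tr in trials:
        blocks[tr[:2]].append(tr)
    for block in blocks.values():
        for i in range(k):
            test = block[i::k]
            train = [tr for tr in block if tr not in test]
            yield train, test

def cross_session_splits(trials):
    # Cross-session, within-subject: leave one session out per subject.
    for subj in {tr[0] for tr in trials}:
        mine = [tr for tr in trials if tr[0] == subj]
        for sess in {tr[1] for tr in mine}:
            yield ([tr for tr in mine if tr[1] != sess],   # train
                   [tr for tr in mine if tr[1] == sess])   # test

def cross_subject_splits(trials):
    # Cross-subject: leave one subject out over the whole dataset.
    for subj in {tr[0] for tr in trials}:
        yield ([tr for tr in trials if tr[0] != subj],     # train
               [tr for tr in trials if tr[0] == subj])     # test
```

Note how the generalization claim strengthens down the list: within-session splits share subject and session between train and test; cross-session shares only the subject; cross-subject shares neither.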


WithinSessionEvaluation([n_perms, …])

Performance evaluation within session (k-fold cross-validation).

CrossSessionEvaluation(paradigm[, datasets, …])

Cross-session performance evaluation.

CrossSubjectEvaluation(paradigm[, datasets, …])

Cross-subject performance evaluation.

Base & Utils

base.BaseEvaluation(paradigm[, datasets, …])

Base class that defines necessary operations for an evaluation.
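All three evaluation classes share the same workflow: construct the evaluation with a paradigm and a list of datasets, then call `process` with a dict of scikit-learn pipelines to get a results table. The sketch below assumes MOABB and scikit-learn are installed and that the dataset will be downloaded on first run; the class name `BNCI2014001` and the toy flatten-plus-LDA pipeline are illustrative choices, not recommendations (no test is attached since running it fetches real EEG data):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

from moabb.datasets import BNCI2014001
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import LeftRightImagery

paradigm = LeftRightImagery()
dataset = BNCI2014001()
dataset.subject_list = dataset.subject_list[:1]  # limit the download for this sketch

# Toy pipeline: flatten each (channels x times) epoch, then LDA.
pipelines = {
    "flat+LDA": make_pipeline(
        FunctionTransformer(lambda X: X.reshape(len(X), -1)),
        LinearDiscriminantAnalysis(),
    )
}

evaluation = WithinSessionEvaluation(
    paradigm=paradigm, datasets=[dataset], overwrite=False
)
results = evaluation.process(pipelines)  # one score per subject/session/pipeline
print(results.head())
```

Swapping `WithinSessionEvaluation` for `CrossSessionEvaluation` or `CrossSubjectEvaluation` changes only the split scheme; the pipelines and paradigm stay the same.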