Welcome to the documentation of BenchNIRS!

BenchNIRS

Benchmarking framework for machine learning with fNIRS

Features:

  • loading of open access datasets

  • signal processing and feature extraction on fNIRS data

  • training, optimisation and evaluation of machine learning models (including deep learning)

  • production of training graphs, metrics and other useful figures for evaluation

  • benchmarking and comparison of machine learning models

  • much more!

DOI: 10.3389/fnrgo.2023.994969 | License: GNU GPLv3+ | PyPI: benchnirs

Recommendation checklist

Below is a proposed checklist of recommendations towards best practice for machine learning classification with fNIRS (for brain-computer interface applications).

Methodology:
  • Plan the classes before designing the experiment (to avoid using the return to baseline as a control/baseline task)

  • Use nested cross-validation (also called double cross-validation), with the outer cross-validation (leaving out the test sets) for evaluation and the inner cross-validation (leaving out the validation sets) for the optimisation of models (see the first sketch after this list)

  • Optimise the hyperparameters (with a grid search, for instance) on the validation sets

  • Use the test sets for evaluation and nothing else (no optimisation should be performed with the test sets)

  • Create the training, validation and test sets in accordance with what the model is hypothesised to generalise to (e.g. unseen subject, unseen session), for example with group k-fold cross-validation

  • Pay attention not to include test data when performing normalisation

  • Take extra care that the sets never overlap (training, validation and test sets): above all, the test set used to report results must consist of unseen data only

  • Pay attention to class imbalance (by using metrics more appropriate than accuracy, such as the F1 score)

  • Perform a statistical analysis to assess the significance of the results, both when comparing results to chance level and when comparing classifiers to each other (see the second sketch after this list)
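
For illustration, below is a minimal sketch of this nested cross-validation scheme written with scikit-learn rather than BenchNIRS itself; the data (X, y, groups), the choice of classifier (a support vector machine) and the hyperparameter grid are placeholder assumptions. Normalisation is fitted inside the pipeline on training data only, the folds are grouped by subject so that evaluation is performed on unseen subjects, and the macro-averaged F1 score accounts for class imbalance::

    import numpy as np
    from sklearn.metrics import f1_score
    from sklearn.model_selection import GridSearchCV, GroupKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: 100 examples, 40 features, 10 subjects, 2 classes
    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(100, 40))
    y = rng.integers(0, 2, size=100)
    groups = np.repeat(np.arange(10), 10)  # subject ID of each example

    # Normalisation is fitted inside the pipeline, on training data only,
    # so that no test data is ever used for normalisation
    model = make_pipeline(StandardScaler(), SVC())

    outer_cv = GroupKFold(n_splits=5)  # leaves out unseen test subjects
    inner_cv = GroupKFold(n_splits=4)  # leaves out unseen validation subjects

    scores = []
    for train_index, test_index in outer_cv.split(X, y, groups):
        # Inner cross-validation: hyperparameter grid search on the
        # validation sets, grouped by subject
        search = GridSearchCV(model, {'svc__C': [0.1, 1., 10.]}, cv=inner_cv)
        search.fit(X[train_index], y[train_index], groups=groups[train_index])
        # Outer cross-validation: evaluation on the unseen test set only,
        # with the macro-averaged F1 score to account for class imbalance
        y_pred = search.predict(X[test_index])
        scores.append(f1_score(y[test_index], y_pred, average='macro'))

    print(f'F1 score: {np.mean(scores):.2f} +/- {np.std(scores):.2f}')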
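Below is a complementary sketch of a statistical analysis of the outer-fold scores with SciPy; the score values, the chance level (two balanced classes) and the 0.05 significance threshold are illustrative assumptions, and the parametric tests are only applied once their normality assumption has been verified::

    from scipy import stats

    chance_level = 0.5  # theoretical chance level for 2 balanced classes

    # Outer-fold scores of two classifiers on the same folds
    # (placeholder values, e.g. produced as in the previous sketch)
    scores_a = [0.61, 0.58, 0.66, 0.59, 0.63]
    scores_b = [0.55, 0.52, 0.60, 0.51, 0.57]

    # Verify the normality assumption before choosing the tests
    if stats.shapiro(scores_a).pvalue > 0.05:
        # One-sample t-test of classifier A against chance level
        chance_test = stats.ttest_1samp(scores_a, popmean=chance_level)
        # Paired t-test comparing the two classifiers to each other
        pair_test = stats.ttest_rel(scores_a, scores_b)
    else:
        # Non-parametric alternatives if normality does not hold
        chance_test = stats.wilcoxon([s - chance_level for s in scores_a])
        pair_test = stats.wilcoxon(scores_a, scores_b)

    print(f'A vs. chance level: p = {chance_test.pvalue:.3f}')
    print(f'A vs. B: p = {pair_test.pvalue:.3f}')
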
Reporting:
  • Describe what data is used as input to the classifier, along with its shape

  • Describe the number of input examples in the dataset

  • Describe the details of the cross-validation implementations

  • Describe the details of each model used, including its architecture and every hyperparameter

  • Describe which hyperparameters have been optimised and how

  • Clearly state the number of classes and the chance level

  • Provide all the information necessary to the statistical analysis of the results, including the names of the tests, the verification of their assumptions and the p-values

Acknowledgements

If you are using BenchNIRS, please cite this article (DOI: 10.3389/fnrgo.2023.994969).
