diff --git a/vignettes/tutorial.Rmd b/vignettes/tutorial.Rmd
index 57207a9..17e2710 100644
--- a/vignettes/tutorial.Rmd
+++ b/vignettes/tutorial.Rmd
@@ -1,79 +1,79 @@
 ---
 title: "Quick-start with challengeR"
 output: rmarkdown::html_vignette
 vignette: >
   %\VignetteIndexEntry{Quick-start with challengeR}
   %\VignetteEngine{knitr::rmarkdown}
   %\VignetteEncoding{UTF-8}
 ---

 ```{r, include = FALSE}
 knitr::opts_chunk$set(
   collapse = TRUE,
   comment = "#>"
 )
 ```

 # Introduction

-This tutorial intends to give customized scripts to genrate reports quicky, whithout going through all the installation and usage steps.
+This tutorial provides customized scripts to generate reports quickly, without going through all the installation and usage steps in detail.

-The tutorial contains the following scripts, which are included in the "Tutorial" folder:
+The tutorial contains the following scripts, which are included in the "vignettes" directory:

 - SingleTask_aggregate-then-rank.R
 - MultiTask_rank-then-aggregate.R
 - MultiTask_test-then-rank.R

 How to use the tutorial scripts in RStudio:

 1. Specify where the report should be generated.

 ```{r, eval=F}
 setwd("myWorkingDirectoryFilePath")
 ```

 2. Open the script.
 3. Select all the text from the script file (CTRL+a), and run all the code (CTRL+enter).
 4. The report will be generated in the previously specified working directory ("myWorkingDirectoryFilePath").
-5. Check out the report, and the script to modify and adapt the desired parameters.
+5. Check out the report, then modify the script to adapt the parameters as desired.

 # Usage

 Each script contains the following steps, as described in the README:

 1. Load package
-2. Load data (randomly generated?)
+2. Load data (randomly generated)
 3. Perform ranking
    - Define challenge object
    - Perform ranking
-4. Uncertainity analisys (bootstrapping)
+4. Uncertainty analysis (bootstrapping)
 5. Generate report

-The scrips will be now explained in more detail:
+The scripts will now be explained in more detail:

 #### SingleTask_aggregate-then-rank.R

-As the name indicates, in this script a single task evaluation will be performed. The applied ranking method is "metric-based aggregation". It is the most commonly applied method, and it begins by aggregating metric values across all test cases for each algorithm. This aggregate is then used to compute a rank for each algorithm.
+As the name indicates, this script performs a single-task evaluation. The applied ranking method is "metric-based aggregation". It is the most commonly applied method, and it begins by aggregating metric values across all test cases for each algorithm. This aggregate is then used to compute a rank for each algorithm.

 #### MultiTask_rank-then-aggregate.R

-As the name indicates, in this script a multi task evaluation will be performed. The applied ranking method is "case-based aggregation". It is the second most commonly applied method, and it begins with computing a rank for each test case for each algorithm (”rank first”). The final rank is based on the aggregated test-case ranks. Distance-based approaches for rank aggregation can also be used.
+As the name indicates, this script performs a multi-task evaluation. The applied ranking method is "case-based aggregation". It is the second most commonly applied method, and it begins by computing a rank for each test case for each algorithm ("rank first"). The final rank is based on the aggregated test-case ranks. Distance-based approaches for rank aggregation can also be used.

 #### MultiTask_test-then-rank.R

-As the name indicates, in this script a multi task evaluation will be performed. The applied ranking method is "significance ranking". In a complementary approach, statistical hypothesis tests are computed for each possible pair of algorithms to assess differences in metric values between the algorithms. Then ranking is performed according to the resulting relations or according to the number of significant one-sided test results. In the latter case, if algorithms have the same number of significant test results then they obtain the same rank. Various test statistics can be used.
+As the name indicates, this script performs a multi-task evaluation. The applied ranking method is "significance ranking". In this complementary approach, statistical hypothesis tests are computed for each possible pair of algorithms to assess differences in metric values between the algorithms. Ranking is then performed according to the resulting relations or according to the number of significant one-sided test results. In the latter case, algorithms with the same number of significant test results obtain the same rank. Various test statistics can be used.

 For more hints, see the README.

 # Terms of use

-Licenced under GPL-3.
+Licensed under GPL-3.

 If you use this software for a publication, cite

 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein, L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for analyzing and visualizing challenge results. *arXiv preprint arXiv:1910.05121*
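
The steps listed in the tutorial (load package, define challenge object, perform ranking, bootstrap, generate report) can be sketched in a few lines of R. The following is a minimal sketch of the single-task aggregate-then-rank workflow based on the challengeR README, not the shipped SingleTask_aggregate-then-rank.R script; the example data frame and its column names (`algo`, `case`, `value`) are invented, and argument names such as `na.treat` may differ between package versions.

```r
# Minimal sketch of a single-task "aggregate then rank" workflow with challengeR.
# The data frame and its column names are invented for illustration.
library(challengeR)

# One metric value per algorithm and test case
set.seed(1)
data_matrix <- data.frame(
  algo  = rep(c("A1", "A2", "A3"), each = 10),
  case  = rep(1:10, times = 3),
  value = runif(30)
)

# Define the challenge object (single task; larger metric values are better)
challenge <- as.challenge(data_matrix,
                          algorithm   = "algo",
                          case        = "case",
                          value       = "value",
                          smallBetter = FALSE)

# Metric-based aggregation: aggregate metric values per algorithm, then rank
ranking <- aggregateThenRank(challenge,
                             FUN         = mean,
                             na.treat    = 0,
                             ties.method = "min")

# Uncertainty analysis via bootstrapping
ranking_bootstrapped <- bootstrap(ranking, nboot = 1000)

# Generate the report in the current working directory
report(ranking_bootstrapped,
       title  = "singleTaskExample",
       file   = "SingleTask_aggregate-then-rank_report",
       format = "PDF",
       clean  = TRUE)
```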
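
Under the same assumptions, the two multi-task scripts differ mainly in the challenge definition, which names the task column via `by`, and in the ranking step. Again a sketch based on the README, with invented data and column names; the exact arguments of `rankThenAggregate()` and `testThenRank()` may not match the shipped scripts.

```r
# Sketch of the multi-task ranking variants; data and column names are invented.
library(challengeR)

# One metric value per task, algorithm and test case
set.seed(2)
data_matrix_multi <- expand.grid(task = c("T1", "T2"),
                                 algo = c("A1", "A2", "A3"),
                                 case = 1:10,
                                 stringsAsFactors = FALSE)
data_matrix_multi$value <- runif(nrow(data_matrix_multi))

# Multi-task challenge object: 'by' identifies the task column
challenge_multi <- as.challenge(data_matrix_multi,
                                by          = "task",
                                algorithm   = "algo",
                                case        = "case",
                                value       = "value",
                                smallBetter = FALSE)

# Case-based aggregation: rank per test case first, then aggregate the ranks
ranking_rta <- rankThenAggregate(challenge_multi,
                                 FUN         = mean,
                                 ties.method = "min")

# Significance ranking: pairwise one-sided tests; algorithms are ranked by the
# number of significant results, and ties share a rank
ranking_ttr <- testThenRank(challenge_multi,
                            alpha           = 0.05,
                            p.adjust.method = "none",
                            na.treat        = 0,
                            ties.method     = "min")
```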