diff --git a/DESCRIPTION b/DESCRIPTION
index 9333e66..1baaa30 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,13 +1,13 @@
 Package: challengeR
 Type: Package
 Title: Analyzing assessment data of biomedical image analysis competitions and visualization of results
-Version: 0.3.2
-Date: 2020-04-03
+Version: 0.3.3
+Date: 2020-04-18
 Author: Manuel Wiesenfarth, Annette Kopp-Schneider
 Maintainer: Manuel Wiesenfarth <m.wiesenfarth@dkfz.de>
 Description: Analyzing assessment data of biomedical image analysis competitions and visualization of results.
 License: GPL-3
 Depends: R (>= 3.5.2), purrr, ggplot2
 Imports: knitr, plyr, rlang, rmarkdown, viridisLite, methods,graph, tidyr, reshape2, dplyr, relations
 Suggests: foreach, doParallel, ggpubr,Rgraphviz
 
diff --git a/README.md b/README.md
index 9e8bc9d..52a75b6 100644
--- a/README.md
+++ b/README.md
@@ -1,372 +1,386 @@
 Methods and open-source toolkit for analyzing and visualizing challenge
 results
 ================
 
-Note that this is ongoing work (version 0.3.2), there may be updates
+  - [Installation](#installation)
+  - [Terms of use](#terms-of-use)
+  - [Usage](#usage)
+  - [Changes](#changes)
+  - [Reference](#reference)
+
+Note that this is ongoing work (version 0.3.3); there may be updates,
 possibly with major changes. *Please make sure that you use the most
 current version\!*
 
 Change log at the end of this document.
 
 # Installation
 
 Requires R version \>= 3.5.2 (<https://www.r-project.org>).
 
 Further, a recent version of Pandoc (\>= 1.12.3) is required. RStudio
 (<https://rstudio.com>) automatically includes this, so you do not need
 to download Pandoc if you plan to use rmarkdown from the RStudio IDE;
 otherwise you’ll need to install Pandoc for your platform
 (<https://pandoc.org/installing.html>). Finally, if you want to generate
 a PDF report you will need to have LaTeX installed (e.g. MiKTeX, MacTeX
 or TinyTeX).
 
-To get the current development version of the R package from
-Github:
+To get the current development version of the R package from GitHub:
 
 ``` r
 if (!requireNamespace("devtools", quietly = TRUE)) install.packages("devtools")
 if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
 BiocManager::install("Rgraphviz", dependencies = TRUE)
 devtools::install_github("wiesenfa/challengeR", dependencies = TRUE)
 ```
 
 If you are asked whether you want to update installed packages and you
 type “a” for all, you might need administrator rights to update R core
 packages. You can also try to type “n” for updating no packages. If you
 are asked “Do you want to install from sources the packages which need
 compilation? (Yes/no/cancel)”, you can safely type “no”.
 
 If you get *Warning messages* (in contrast to *Error* messages), these
 might not be problematic and you can try to proceed.
 
 # Terms of use
 
 Licensed under GPL-3. If you use this software for a publication, cite
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein,
 L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for
 analyzing and visualizing challenge results. *arXiv preprint
 arXiv:1910.05121*
 
 # Usage
 
 Each of the following steps has to be run to generate the report: (1)
 load package, (2) load data, (3) perform ranking, (4) perform
 bootstrapping and (5) generate the report.
 
 ## 1\. Load package
 
 Load package
 
 ``` r
 library(challengeR)
 ```
 
 ## 2\. Load data
 
 ### Data requirements
 
 Data requires the following *columns*
 
   - a *task identifier* in case of multi-task challenges.
   - a *test case identifier*
   - the *algorithm name*
   - the *metric value*
 
 In case of missing metric values, a missing observation has to be
 provided (either as blank field or “NA”).
 
 For example, in a challenge with 2 tasks, 2 test cases and 2 algorithms,
 where in task “T2”, test case “case2”, algorithm “A2” didn’t give a
 prediction (and thus NA or a blank field for missing value is inserted),
 the data set might look like this:
 
 | Task | TestCase | Algorithm | MetricValue |
 | :--- | :------- | :-------- | ----------: |
 | T1   | case1    | A1        |       0.266 |
 | T1   | case1    | A2        |       0.202 |
 | T1   | case2    | A1        |       0.573 |
 | T1   | case2    | A2        |       0.945 |
 | T2   | case1    | A1        |       0.372 |
 | T2   | case1    | A2        |       0.898 |
 | T2   | case2    | A1        |       0.908 |
 | T2   | case2    | A2        |          NA |
 
 ### Load data
 
 If you have assessment data at hand stored in a csv file (skip the
 following code line if you want to use simulated data instead), use
 
 ``` r
 data_matrix=read.csv(file.choose()) # type ?read.csv for help
 ```
 
 This allows you to choose a file interactively; otherwise, replace
 *file.choose()* with the file path (in the style “/path/to/dataset.csv”)
 in quotation marks.
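 
 For example, a hypothetical path-based call (the path is a placeholder,
 adjust it to your file) might look like:
 
 ``` r
 # Hypothetical example: load the data from an explicit file path
 data_matrix=read.csv("/path/to/dataset.csv")
 ```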
 
 For illustration purposes, simulated data is generated below *instead*
 (skip the following code chunk if you have already loaded
-data). The data is also stored as “data\_matrix.csv” in the
-repository.
+data). The data is also stored as “data\_matrix.csv” in the repository.
 
 ``` r
 if (!requireNamespace("permute", quietly = TRUE)) install.packages("permute")
 
 n=50
 
 set.seed(4)
 strip=runif(n,.9,1)
 c_ideal=cbind(task="c_ideal",
             rbind(
               data.frame(alg_name="A1",value=runif(n,.9,1),case=1:n),
               data.frame(alg_name="A2",value=runif(n,.8,.89),case=1:n),
               data.frame(alg_name="A3",value=runif(n,.7,.79),case=1:n),
               data.frame(alg_name="A4",value=runif(n,.6,.69),case=1:n),
               data.frame(alg_name="A5",value=runif(n,.5,.59),case=1:n)
             ))
 
 set.seed(1)
 c_random=data.frame(task="c_random",
                        alg_name=factor(paste0("A",rep(1:5,each=n))),
                        value=plogis(rnorm(5*n,1.5,1)),case=rep(1:n,times=5)
                        )
 
 strip2=seq(.8,1,length.out=5)
 a=permute::allPerms(1:5)
 c_worstcase=data.frame(task="c_worstcase",
                      alg_name=c(t(a)),
                      value=rep(strip2,nrow(a)),
                      case=rep(1:nrow(a),each=5)
                      )
 c_worstcase=rbind(c_worstcase,
                 data.frame(task="c_worstcase",alg_name=1:5,value=strip2,case=max(c_worstcase$case)+1)
           )
 c_worstcase$alg_name=factor(c_worstcase$alg_name,labels=paste0("A",1:5))
 
 data_matrix=rbind(c_ideal, c_random, c_worstcase)
 ```
 
 ## 3\. Perform ranking
 
 ### 3.1 Define challenge object
 
 Code differs slightly for single- and multi-task challenges.
 
 In case of a single task challenge use
 
 ``` r
 # Use only task "c_random" in object data_matrix
   dataSubset=subset(data_matrix, task=="c_random")
 
   challenge=as.challenge(dataSubset, 
                         # Specify which column contains the algorithm, 
                         # which column contains a test case identifier 
                         # and which contains the metric value:
                         algorithm="alg_name", case="case", value="value", 
                         # Specify if small metric values are better
                         smallBetter = FALSE)
 ```
 
-*Instead*, for a multi-task challenge
-use
+*Instead*, for a multi-task challenge use
 
 ``` r
 # Same as above but with 'by="task"' where variable "task" contains the task identifier
   challenge=as.challenge(data_matrix, 
                          by="task", 
                          algorithm="alg_name", case="case", value="value", 
                          smallBetter = FALSE)
 ```
 
 ### 3.2 Perform ranking
 
 Different ranking methods are available, choose one of them:
 
-  - for “aggregate-then-rank” use (here: take mean for
-aggregation)
+  - for “aggregate-then-rank” use (here: take mean for aggregation)
 
 <!-- end list -->
 
 ``` r
 ranking=challenge%>%aggregateThenRank(FUN = mean, # aggregation function, 
                                                   # e.g. mean, median, min, max, 
                                                   # or e.g. function(x) quantile(x, probs=0.05)
                                       na.treat=0, # either "na.rm" to remove missing data, 
                                                   # set missings to numeric value (e.g. 0) 
                                                   # or specify a function, 
                                                   # e.g. function(x) min(x)
                                       ties.method = "min" # a character string specifying 
                                                           # how ties are treated, see ?base::rank
                                             )  
 ```
 
   - *alternatively*, for “rank-then-aggregate” with arguments as above
     (here: take mean for aggregation):
 
 <!-- end list -->
 
 ``` r
 ranking=challenge%>%rankThenAggregate(FUN = mean,
                                       ties.method = "min"
                                       )
 ```
 
   - *alternatively*, for test-then-rank based on Wilcoxon signed rank
     test:
 
 <!-- end list -->
 
 ``` r
 ranking=challenge%>%testThenRank(alpha=0.05, # significance level
                                  p.adjust.method="none",  # method for adjustment for
                                                           # multiple testing, see ?p.adjust
                                  na.treat=0, # either "na.rm" to remove missing data,
                                              # set missings to numeric value (e.g. 0)
                                              # or specify a function, e.g. function(x) min(x)
                                  ties.method = "min" # a character string specifying
                                                      # how ties are treated, see ?base::rank
                      )
 ```
 
 ## 4\. Perform bootstrapping
 
 Perform bootstrapping with 1000 bootstrap samples using one CPU
 
 ``` r
 set.seed(1)
 ranking_bootstrapped=ranking%>%bootstrap(nboot=1000)
 ```
 
 If you want to use multiple CPUs (here: 8 CPUs), use
 
 ``` r
 library(doParallel)
 registerDoParallel(cores=8)  
 set.seed(1)
 ranking_bootstrapped=ranking%>%bootstrap(nboot=1000, parallel=TRUE, progress = "none")
 stopImplicitCluster()
 ```
 
 ## 5\. Generate the report
 
 Generate the report in PDF, HTML or DOCX format. Code differs slightly
 for single- and multi-task challenges.
 
 ### 5.1 For single-task challenges
 
 ``` r
 ranking_bootstrapped %>% 
   report(title="singleTaskChallengeExample", # used for the title of the report
          file = "filename", 
          format = "PDF", # format can be "PDF", "HTML" or "Word"
          latex_engine="pdflatex", #LaTeX engine for producing PDF output. Options are "pdflatex", "lualatex", and "xelatex"
          clean=TRUE #optional. Using TRUE will clean intermediate files that are created during rendering.
         ) 
 ```
 
 The argument *file* also allows specifying the output file path;
 otherwise the working directory is used. If *file* is specified but does
 not have a file extension, an extension will be added automatically
 according to the output format given in *format*. Using the argument
 *clean=FALSE* retains intermediate files, such as separate files for
 each figure.
 
 If the argument *file* is omitted, the report is created in a temporary
 folder with file name “report”.
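 
 As a sketch (the output path is a placeholder), a call that writes the
 report to a specific location and keeps intermediate files could look
 like:
 
 ``` r
 # Sketch with a placeholder output path; keeps intermediate files (e.g. single figure files)
 ranking_bootstrapped %>% 
   report(title="singleTaskChallengeExample",
          file = "/path/to/reports/myChallengeReport", # extension added according to 'format'
          format = "PDF",
          latex_engine="pdflatex",
          clean=FALSE # retain intermediate files
         )
 ```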
 
 ### 5.2 For multi-task challenges
 
 Same as for single-task challenges, but additionally a consensus ranking
 (rank aggregation across tasks) has to be given.
 
 Compute ranking consensus across tasks (here: consensus ranking
-according to mean ranks across
-tasks):
+according to mean ranks across tasks):
 
 ``` r
 # See ?relation_consensus for different methods to derive consensus ranking
 meanRanks=ranking%>%consensus(method = "euclidean") 
 meanRanks # note that there may be ties (i.e. some algorithms have identical mean rank)
 ```
 
 Generate the report as above, but with the additional specification of
 the consensus ranking:
 
 ``` r
 ranking_bootstrapped %>% 
   report(consensus=meanRanks,
          title="multiTaskChallengeExample",
          file = "filename", 
          format = "PDF", # format can be "PDF", "HTML" or "Word"
          latex_engine="pdflatex"#LaTeX engine for producing PDF output. Options are "pdflatex", "lualatex", and "xelatex"
         )
 ```
 
 # Changes
 
+#### Version 0.3.3
+
+  - Force a line break to prevent the author line from exceeding the
+    page width in generated PDF reports
+
+#### Version 0.3.2
+
+  - Correct names of authors
+
+#### Version 0.3.1
+
+  - Refactoring
+
 #### Version 0.3.0
 
   - Major bug fix release
 
 #### Version 0.2.5
 
   - Bug fixes
 
 #### Version 0.2.4
 
   - Automatic insertion of missings
 
 #### Version 0.2.3
 
   - Bug fixes
-  - Reports for subsets (top list) of algorithms: Use e.g.
-    `subset(ranking_bootstrapped, top=3) %>% report(...)` (or
+  - Reports for subsets (top list) of algorithms: Use
+    e.g. `subset(ranking_bootstrapped, top=3) %>% report(...)` (or
     `subset(ranking, top=3) %>% report(...)` for a report without
     bootstrap results) to only show the top 3 algorithms according to
     the chosen ranking method, where `ranking_bootstrapped` and
     `ranking` are the objects defined in the example. The line plot for
     ranking robustness can be used to check whether algorithms that
     perform well in other ranking methods are excluded. Bootstrapping
     still takes the entire uncertainty into account. Podium plots and
     ranking heatmaps neglect excluded algorithms. Only available for
     single-task challenges (not sensible for multi-task challenges
     because each task would contain a different set of algorithms).
-  - Reports for subsets of tasks: Use e.g. `subset(ranking_bootstrapped,
+  - Reports for subsets of tasks: Use e.g. `subset(ranking_bootstrapped,
     tasks=c("task1", "task2", "task3")) %>% report(...)` to restrict the
     report to tasks “task1”, “task2” and “task3”. You may want to
     recompute the consensus ranking beforehand using
     `meanRanks=subset(ranking, tasks=c("task1", "task2",
     "task3"))%>%consensus(method = "euclidean")`
 
 #### Version 0.2.1
 
   - Introduction in reports now mentions e.g. ranking method, number of
     test cases,…
   - Function `subset()` allows selection of tasks after bootstrapping,
-    e.g. `subset(ranking_bootstrapped,1:3)`
+    e.g. `subset(ranking_bootstrapped,1:3)`
   - `report()` functions gain argument `colors` (default:
     `default_colors`). Change e.g. to `colors=viridisLite::inferno`
     which “is designed in such a way that it will analytically be
     perfectly perceptually-uniform, both in regular form and also when
     converted to black-and-white. It is also designed to be perceived by
     readers with the most common form of color blindness.” See package
     `viridis` for further similar functions.
 
 #### Version 0.2.0
 
   - Improved layout in case of many algorithms and tasks (while probably
     still not perfect)
   - Consistent coloring of algorithms across figures
   - `report()` function can be applied to ranked object before
     bootstrapping (and thus excluding figures based on bootstrapping),
     i.e. in the example `ranking %>% report(...)`
   - bug fixes
 
 # Reference
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein,
 L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for
 analyzing and visualizing challenge results. *arXiv preprint
 arXiv:1910.05121*
 
 ![alt text](HIP_Logo.png)
diff --git a/Readme.Rmd b/Readme.Rmd
index 8f71eb1..ef1ef63 100644
--- a/Readme.Rmd
+++ b/Readme.Rmd
@@ -1,317 +1,326 @@
 ---
 title: Methods and open-source toolkit for analyzing and visualizing challenge results
 output:
   github_document:
     toc: yes
     toc_depth: 1
   pdf_document:
     toc: yes
     toc_depth: '3'
 editor_options:
   chunk_output_type: console
 ---
 
 
 
 
 ```{r, echo = FALSE}
 knitr::opts_chunk$set(
  collapse = TRUE,
   comment = "#>",
  # fig.path = "README-",
     fig.width = 9,
     fig.height = 5,
     width=160
 )
 ```
 
 
 Note that this is ongoing work (version `r packageVersion("challengeR")`); there may be updates, possibly with major changes. *Please make sure that you use the most current version!* 
 
 Change log at the end of this document.
 
 
 # Installation
 
 Requires R version >= 3.5.2 (https://www.r-project.org).
 
 Further, a recent version of Pandoc (>= 1.12.3) is required. RStudio (https://rstudio.com) automatically includes this, so you do not need to download Pandoc if you plan to use rmarkdown from the RStudio IDE; otherwise you’ll need to install Pandoc for your platform (https://pandoc.org/installing.html). Finally, if you want to generate a PDF report you will need to have LaTeX installed (e.g. MiKTeX, MacTeX or TinyTeX).
 
 
 To get the current development version of the R package from GitHub:
 
 ```{r, eval=F,R.options,}
 if (!requireNamespace("devtools", quietly = TRUE)) install.packages("devtools")
 if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
 BiocManager::install("Rgraphviz", dependencies = TRUE)
 devtools::install_github("wiesenfa/challengeR", dependencies = TRUE)
 ```
 
 If you are asked whether you want to update installed packages and you type "a" for all, you might need administrator rights to update R core packages. You can also try to type "n" for updating no packages. If you are asked "Do you want to install from sources the packages which need compilation? (Yes/no/cancel)", you can safely type "no".
 
 If you get *Warning messages* (in contrast to *Error* messages), these might not be problematic and you can try to proceed. 
 
 # Terms of use
 Licensed under GPL-3. If you use this software for a publication, cite
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein, L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for analyzing and visualizing challenge results. *arXiv preprint arXiv:1910.05121*
 
 
  
 # Usage
 Each of the following steps has to be run to generate the report: (1) load package, (2) load data, (3) perform ranking, (4) perform bootstrapping and (5) generate the report.
 
 ## 1. Load package
 Load package
 ```{r, eval=F}
 library(challengeR)
 ```
 
 
 ## 2. Load data
 
 ### Data requirements
 Data requires the following *columns*  
 
 * a *task identifier* in case of multi-task challenges.
 * a *test case identifier* 
 * the *algorithm name* 
 * the *metric value* 
 
 In case of missing metric values, a missing observation has to be provided (either as blank field or "NA").
 
 
 For example, in a challenge with 2 tasks, 2 test cases and 2 algorithms, where in task "T2", test case "case2", algorithm "A2" didn't give a prediction (and thus NA or a blank field for missing value is inserted), the data set might look like this: 
 
 ```{r, eval=T, echo=F,results='asis'}
 set.seed(1)
 a=cbind(expand.grid(Task=paste0("T",1:2),TestCase=paste0("case",1:2),Algorithm=paste0("A",1:2)),MetricValue=round(c(runif(7,0,1),NA),3))
 print(knitr::kable(a[order(a$Task,a$TestCase,a$Algorithm),],row.names=F))
 ```
 
 
 
 ### Load data
 If you have assessment data at hand stored in a csv file (skip the following code line if you want to use simulated data instead), use
 ```{r, eval=F, echo=T}
 data_matrix=read.csv(file.choose()) # type ?read.csv for help
 
 ```
 
 This allows you to choose a file interactively; otherwise, replace *file.choose()* with the file path (in the style "/path/to/dataset.csv") in quotation marks.
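 
 For example, a hypothetical path-based call (the path is a placeholder, adjust it to your file) might look like:
 ```{r, eval=F, echo=T}
 # Hypothetical example: load the data from an explicit file path
 data_matrix=read.csv("/path/to/dataset.csv")
 ```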
 
 
 
 
 
 For illustration purposes, simulated data is generated below *instead* (skip the following code chunk if you have already loaded data). The data is also stored as "data_matrix.csv" in the repository.
 ```{r, eval=F, echo=T}
 if (!requireNamespace("permute", quietly = TRUE)) install.packages("permute")
 
 n=50
 
 set.seed(4)
 strip=runif(n,.9,1)
 c_ideal=cbind(task="c_ideal",
             rbind(
               data.frame(alg_name="A1",value=runif(n,.9,1),case=1:n),
               data.frame(alg_name="A2",value=runif(n,.8,.89),case=1:n),
               data.frame(alg_name="A3",value=runif(n,.7,.79),case=1:n),
               data.frame(alg_name="A4",value=runif(n,.6,.69),case=1:n),
               data.frame(alg_name="A5",value=runif(n,.5,.59),case=1:n)
             ))
 
 set.seed(1)
 c_random=data.frame(task="c_random",
                        alg_name=factor(paste0("A",rep(1:5,each=n))),
                        value=plogis(rnorm(5*n,1.5,1)),case=rep(1:n,times=5)
                        )
 
 strip2=seq(.8,1,length.out=5)
 a=permute::allPerms(1:5)
 c_worstcase=data.frame(task="c_worstcase",
                      alg_name=c(t(a)),
                      value=rep(strip2,nrow(a)),
                      case=rep(1:nrow(a),each=5)
                      )
 c_worstcase=rbind(c_worstcase,
                 data.frame(task="c_worstcase",alg_name=1:5,value=strip2,case=max(c_worstcase$case)+1)
           )
 c_worstcase$alg_name=factor(c_worstcase$alg_name,labels=paste0("A",1:5))
 
 data_matrix=rbind(c_ideal, c_random, c_worstcase)
 
 ```
 
 
 ## 3. Perform ranking 
 
 ### 3.1 Define challenge object
 Code differs slightly for single- and multi-task challenges.
 
 In case of a single task challenge use
 
 ```{r, eval=F, echo=T}
 # Use only task "c_random" in object data_matrix
   dataSubset=subset(data_matrix, task=="c_random")
 
   challenge=as.challenge(dataSubset, 
                         # Specify which column contains the algorithm, 
                         # which column contains a test case identifier 
                         # and which contains the metric value:
                         algorithm="alg_name", case="case", value="value", 
                         # Specify if small metric values are better
                         smallBetter = FALSE)
 ```
 
 *Instead*, for a multi-task challenge use 
 
 ```{r, eval=F, echo=T}
 # Same as above but with 'by="task"' where variable "task" contains the task identifier
   challenge=as.challenge(data_matrix, 
                          by="task", 
                          algorithm="alg_name", case="case", value="value", 
                          smallBetter = FALSE)
 ```
 
 
 ### 3.2 Perform ranking 
 
 Different ranking methods are available, choose one of them:
 
 - for "aggregate-then-rank" use (here: take mean for aggregation)
 ```{r, eval=F, echo=T}
 ranking=challenge%>%aggregateThenRank(FUN = mean, # aggregation function, 
                                                   # e.g. mean, median, min, max, 
                                                   # or e.g. function(x) quantile(x, probs=0.05)
                                       na.treat=0, # either "na.rm" to remove missing data, 
                                                   # set missings to numeric value (e.g. 0) 
                                                   # or specify a function, 
                                                   # e.g. function(x) min(x)
                                       ties.method = "min" # a character string specifying 
                                                           # how ties are treated, see ?base::rank
                                             )  
 ```
 
 - *alternatively*, for  "rank-then-aggregate" with arguments as above (here: take mean for aggregation):
 ```{r, eval=F, echo=T}
 ranking=challenge%>%rankThenAggregate(FUN = mean,
                                       ties.method = "min"
                                       )
 ```
 
 - *alternatively*, for test-then-rank based on Wilcoxon signed rank test:
 ```{r, eval=F, echo=T}
 ranking=challenge%>%testThenRank(alpha=0.05, # significance level
                                  p.adjust.method="none",  # method for adjustment for
                                                           # multiple testing, see ?p.adjust
                                  na.treat=0, # either "na.rm" to remove missing data,
                                              # set missings to numeric value (e.g. 0)
                                              # or specify a function, e.g. function(x) min(x)
                                  ties.method = "min" # a character string specifying
                                                      # how ties are treated, see ?base::rank
                      )
 
 ```
 
 ## 4. Perform bootstrapping
 
 Perform bootstrapping with 1000 bootstrap samples using one CPU
 ```{r, eval=F, echo=T}
 set.seed(1)
 ranking_bootstrapped=ranking%>%bootstrap(nboot=1000)
 ```
 
 If you want to use multiple CPUs (here: 8 CPUs), use
 
 ```{r, eval=F, echo=T}
 library(doParallel)
 registerDoParallel(cores=8)  
 set.seed(1)
 ranking_bootstrapped=ranking%>%bootstrap(nboot=1000, parallel=TRUE, progress = "none")
 stopImplicitCluster()
 ```
 
 
 
 ## 5. Generate the report
 Generate the report in PDF, HTML or DOCX format. Code differs slightly for single- and multi-task challenges.
 
 ### 5.1 For single-task challenges
 ```{r, eval=F, echo=T}
 ranking_bootstrapped %>% 
   report(title="singleTaskChallengeExample", # used for the title of the report
          file = "filename", 
          format = "PDF", # format can be "PDF", "HTML" or "Word"
          latex_engine="pdflatex", #LaTeX engine for producing PDF output. Options are "pdflatex", "lualatex", and "xelatex"
          clean=TRUE #optional. Using TRUE will clean intermediate files that are created during rendering.
         ) 
 
 ```
 
 The argument *file* also allows specifying the output file path; otherwise the working directory is used.
 If *file* is specified but does not have a file extension, an extension will be added automatically according to the output format given in *format*. 
 Using the argument *clean=FALSE* retains intermediate files, such as separate files for each figure.
 
 If the argument *file* is omitted, the report is created in a temporary folder with file name "report".
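 
 As a sketch (the output path is a placeholder), a call that writes the report to a specific location and keeps intermediate files could look like:
 ```{r, eval=F, echo=T}
 # Sketch with a placeholder output path; keeps intermediate files (e.g. single figure files)
 ranking_bootstrapped %>% 
   report(title="singleTaskChallengeExample",
          file = "/path/to/reports/myChallengeReport", # extension added according to 'format'
          format = "PDF",
          latex_engine="pdflatex",
          clean=FALSE # retain intermediate files
         )
 ```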
 
 
 
 
 ### 5.2 For multi-task challenges
 Same as for single-task challenges, but additionally a consensus ranking (rank aggregation across tasks) has to be given.
 
 Compute ranking consensus across tasks (here: consensus ranking according to mean ranks across tasks):
  
 ```{r, eval=F, echo=T}
 # See ?relation_consensus for different methods to derive consensus ranking
 meanRanks=ranking%>%consensus(method = "euclidean") 
 meanRanks # note that there may be ties (i.e. some algorithms have identical mean rank)
 ```
 
 Generate the report as above, but with the additional specification of the consensus ranking:
 ```{r, eval=F, echo=T}
 ranking_bootstrapped %>% 
   report(consensus=meanRanks,
          title="multiTaskChallengeExample",
          file = "filename", 
          format = "PDF", # format can be "PDF", "HTML" or "Word"
          latex_engine="pdflatex"#LaTeX engine for producing PDF output. Options are "pdflatex", "lualatex", and "xelatex"
         )
 ```
 
 
 # Changes
 
+#### Version 0.3.3
+- Force a line break to prevent the author line from exceeding the page width in generated PDF reports
+
+#### Version 0.3.2
+- Correct names of authors
+
+#### Version 0.3.1
+- Refactoring
+
 #### Version 0.3.0
 - Major bug fix release
 
 #### Version 0.2.5
 - Bug fixes
 
 
 #### Version 0.2.4
 - Automatic insertion of missings
 
 #### Version 0.2.3
 - Bug fixes
 - Reports for subsets (top list) of algorithms: Use e.g. `subset(ranking_bootstrapped, top=3) %>% report(...)` (or `subset(ranking, top=3) %>% report(...)` for a report without bootstrap results) to only show the top 3 algorithms according to the chosen ranking method, where `ranking_bootstrapped` and `ranking` are the objects defined in the example. The line plot for ranking robustness can be used to check whether algorithms that perform well in other ranking methods are excluded. Bootstrapping still takes the entire uncertainty into account. Podium plots and ranking heatmaps neglect excluded algorithms. Only available for single-task challenges (not sensible for multi-task challenges because each task would contain a different set of algorithms).
 - Reports for subsets of tasks: Use e.g. `subset(ranking_bootstrapped, tasks=c("task1", "task2", "task3")) %>% report(...)` to restrict the report to tasks "task1", "task2" and "task3". You may want to recompute the consensus ranking beforehand using `meanRanks=subset(ranking, tasks=c("task1", "task2", "task3"))%>%consensus(method = "euclidean")`
 
 #### Version 0.2.1
 - Introduction in reports now mentions e.g. ranking method, number of test cases,...
 - Function `subset()` allows selection of tasks after bootstrapping, e.g. `subset(ranking_bootstrapped,1:3)`
 - `report()` functions gain argument `colors` (default: `default_colors`). Change e.g. to `colors=viridisLite::inferno` which "is designed in such a way that it will analytically be perfectly perceptually-uniform, both in regular form and also when converted to black-and-white. It is also designed to be perceived by readers with the most common form of color blindness." See package `viridis` for further similar functions.
 
 #### Version 0.2.0
 - Improved layout in case of many algorithms and tasks (while probably still not perfect)
 - Consistent coloring of algorithms across figures
 - `report()` function can be applied to ranked object before bootstrapping (and thus excluding figures based on bootstrapping), i.e. in the example `ranking %>% report(...)`
 - bug fixes
   
   
 # Reference
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein, L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for analyzing and visualizing challenge results. *arXiv preprint arXiv:1910.05121*
 
 ![alt text](HIP_Logo.png){width=100px}
diff --git a/inst/appdir/reportMultiple.Rmd b/inst/appdir/reportMultiple.Rmd
index 7b1c5c8..f5b3f92 100644
--- a/inst/appdir/reportMultiple.Rmd
+++ b/inst/appdir/reportMultiple.Rmd
@@ -1,565 +1,565 @@
 ---
 params:
   object: NA
   colors: NA
   name: NULL
   consensus: NA
 title: "Benchmarking report for `r params$name` "
-author: created by challengeR `r packageVersion('challengeR')` (Wiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider, 2019)
+author: "created by challengeR v`r packageVersion('challengeR')`  \nWiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider (2019)"
 date: "`r Sys.setlocale('LC_TIME', 'English'); format(Sys.time(), '%d %B, %Y')`"
 editor_options: 
   chunk_output_type: console
 ---
 
 <!-- This text is outcommented -->
 <!-- R code chunks start with "```{r }" and end with "```" -->
 <!-- Please do not change anything inside of code chunks, otherwise any latex code is allowed -->
 
 <!-- inline code with `r 0` -->
 
 
 ```{r setup, include=FALSE}
 options(width=80)
 out.format <- knitr::opts_knit$get("out.format")
 img_template <- switch( out.format,
                      word = list("img-params"=list(dpi=150,
                                                fig.width=6,
                                                fig.height=6,
                                                out.width="504px",
                                                out.height="504px")),
                      {
                        # default
                        list("img-params"=list( fig.width=7,fig.height = 3,dpi=300))
                      } )
 
 knitr::opts_template$set( img_template )
 
 knitr::opts_chunk$set(echo = F,#fig.width=7,fig.height = 3,dpi=300,
                       fig.align="center")
 theme_set(theme_light())
 
 ```
 
 
 ```{r }
 boot_object = params$object
 ordering_consensus=names(params$consensus)
 color.fun=params$colors
 ```
 
 ```{r }
 challenge_multiple=boot_object$data
 
 ranking.fun=boot_object$FUN
 object=challenge_multiple%>%ranking.fun
 
 cols_numbered=cols=color.fun(length(ordering_consensus))
 names(cols)=ordering_consensus
 names(cols_numbered)= paste(1:length(cols),names(cols))
 
 
 ```
 
 
 This document presents a systematic report on a benchmark study. Input data comprises raw metric values for all algorithms and test cases. Generated plots are:
 
 * Visualization of assessment data: Dot- and boxplots, podium plots and ranking heatmaps
 * Visualization of ranking robustness: Line plots
 * Visualization of ranking stability: Blob plots, violin plots and significance maps
 * Visualization of cross-task insights
 
 
 Ranking of algorithms within tasks according to the following chosen ranking scheme:
 
 ```{r,results='asis'}
 a=(  lapply(object$FUN.list,function(x) {
                  if (!is.character(x)) return(paste0("aggregate using function ",
                                                      paste(gsub("UseMethod","",
                                                                 deparse(functionBody(x))),
                                                            collapse=" ")
                                                      ))
                  else if (x=="rank") return(x)
                  else return(paste0("aggregate using function ",x))
   }))
 cat("&nbsp; &nbsp; *",paste0(a,collapse=" then "),"*",sep="")
 
 if (is.character(object$FUN.list[[1]]) && object$FUN.list[[1]]=="significance") cat("\n\n Column 'prop.sign' is equal to the number of pairwise significant test results for a given algorithm divided by the number of algorithms.")
 ```
 
 
 Ranking list for each task:
 ```{r,results='asis'}
 for (t in 1:length(object$matlist)){
   cat("\n",names(object$matlist)[t],": ")
   n.cases=nrow(challenge_multiple[[t]])/length(unique(challenge_multiple[[t]][[attr(challenge_multiple,"algorithm")]]))
   cat("\nAnalysis based on ", 
       n.cases,
       " test cases which included", 
       sum(is.na(challenge_multiple[[t]][[attr(challenge_multiple,"value")]])),
       " missing values.")
   
   if (n.cases<log2(5000)) warning("Associated figures based on bootstrapping should be treated with caution due to small number of test cases!")
   
   x=object$matlist[[t]]
   print(knitr::kable(x[order(x$rank),]))
 }
 
 ```
 
 \bigskip
 
 Consensus ranking according to chosen method `r attr(params$consensus,"method")`:
 ```{r}
 knitr::kable(data.frame(value=round(params$consensus,3), 
                         rank=rank(params$consensus, 
                                   ties.method="min")))
 ```
 
 
 # Visualization of raw assessment data
 Algorithms are ordered according to chosen ranking scheme for each task.
 
 ## Dot- and boxplots
 
 *Dot- and boxplots* for visualizing raw assessment data separately for each algorithm. Boxplots representing descriptive statistics over all test cases (median, quartiles and outliers) are combined with horizontally jittered dots representing individual test cases.
 
 \bigskip
 
 ```{r boxplots}
 temp=boxplot(object,size=.8)
 temp=lapply(temp,function(x) utils::capture.output(x+xlab("Algorithm")+ylab("Metric value")))
 
 ```
 
 
 
 ## Podium plots
 *Podium plots* (see also Eugster et al, 2008) for visualizing raw assessment data. Upper part (spaghetti plot): Participating algorithms are color-coded, and each colored dot in the plot represents a metric value achieved with the respective algorithm. The actual metric value is encoded by the y-axis. Each podium (here: $p$=`r length(ordering_consensus)`) represents one possible rank, ordered from best (1) to last (here: `r length(ordering_consensus)`). The assignment of metric values (i.e. colored dots) to one of the podiums is based on the rank that the respective algorithm achieved on the corresponding test case. Note that the plot part above each podium place is further subdivided into $p$ "columns", where each column represents one participating algorithm (here: $p=$ `r length(ordering_consensus)`).  Dots corresponding to identical test cases are connected by a line, leading to the shown spaghetti structure. Lower part: Bar charts represent the relative frequency for each algorithm to achieve the rank encoded by the podium place. 
 
 ```{r ,eval=T,fig.width=12, fig.height=6,include=FALSE}
 plot.new()
 algs=ordering_consensus
 l=legend("topright", 
          paste0(1:length(algs),": ",algs), 
          lwd = 1, cex=1.4,seg.len=1.1,
          title="Rank: Alg.",
          plot=F) 
 
 w <- grconvertX(l$rect$w, to='ndc') - grconvertX(0, to='ndc')
 h<- grconvertY(l$rect$h, to='ndc') - grconvertY(0, to='ndc')
 addy=max(grconvertY(l$rect$h,"user","inches"),6)
 ```
 
 
 ```{r podium,eval=T,fig.width=12, fig.height=addy}
 #c(bottom, left, top, right
 op<-par(pin=c(par()$pin[1],6),
         omd=c(0, 1-w, 0, 1),
         mar=c(par('mar')[1:3], 0)+c(-.5,0.5,-.5,0),
         cex.axis=1.5,
         cex.lab=1.5,
         cex.main=1.7)#,mar=c(5, 4, 4, 2) + 0.1)
 
 oh=grconvertY(l$rect$h,"user","lines")-grconvertY(6,"inches","lines")
 if (oh>0) par(oma=c(oh,0,0,0))
 
 
 set.seed(38)
 podium(object,
        col=cols,
        lines.show = T, lines.alpha = .4,
        dots.cex=.9,
        ylab="Metric value",
        layout.heights=c(1,.35),
        legendfn = function(algs, cols) {
                  legend(par('usr')[2], par('usr')[4], 
                  xpd=NA, 
                  paste0(1:length(algs),": ",algs), 
                  lwd = 1, col =  cols, 
                  bg = NA,
                  cex=1.4, seg.len=1.1,
                  title="Rank: Alg.") 
         }
       )
 par(op)
   
 ```
 
 
 ## Ranking heatmaps
 *Ranking heatmaps* for visualizing raw assessment data. Each cell $\left( i, A_j \right)$ shows the absolute frequency of test cases in which algorithm $A_j$ achieved rank $i$.
 
 \bigskip
 
 ```{r rankingHeatmap,fig.width=9, fig.height=9,out.width='70%'}
 temp=utils::capture.output(rankingHeatmap(object))
 ```
 
 
 
 # Visualization of ranking stability
 
 
 
 ## *Blob plot* for visualizing ranking stability based on bootstrap sampling \label{blobByTask}
 
 Algorithms are color-coded, and the area of each blob at position $\left( A_i, \text{rank } j \right)$ is proportional to the relative frequency with which $A_i$ achieved rank $j$ across $b=$ `r ncol(boot_object$bootsrappedRanks[[1]])` bootstrap samples. The median rank for each algorithm is indicated by a black cross. 95\% bootstrap intervals across bootstrap samples are indicated by black lines. 
 
 
 \bigskip
 
 ```{r blobplot_bootstrap,fig.width=9, fig.height=9}
 pl=list()
 for (subt in names(boot_object$bootsrappedRanks)){
   a=list(bootsrappedRanks=list(boot_object$bootsrappedRanks[[subt]]),
          matlist=list(boot_object$matlist[[subt]]))
   names(a$bootsrappedRanks)=names(a$matlist)=subt
   class(a)="bootstrap.list"
   r=boot_object$matlist[[subt]]
 
   pl[[subt]]=stabilityByTask(a,
                              max_size =8,
                              ordering=rownames(r[order(r$rank),]),
                              size.ranks=.25*theme_get()$text$size,
                              size=8,
                              shape=4) + scale_color_manual(values=cols)
 
 }
 
 # if (length(boot_object$matlist)<=6 &nrow((boot_object$matlist[[1]]))<=10 ){
 #   ggpubr::ggarrange(plotlist = pl)
 # } else {
   for (i in 1:length(pl)) print(pl[[i]])
 #}
 
 ```
 
 
 ## *Violin plot* for visualizing ranking stability based on bootstrapping \label{violin}
 
 The ranking list based on the full assessment data is compared pairwise with the ranking lists based on the individual bootstrap samples (here $b=$ `r ncol(boot_object$bootsrappedRanks[[1]])` samples). For each pair of rankings, Kendall's $\tau$ correlation is computed. Kendall’s $\tau$ is a scaled index determining the correlation between the lists. It is computed by evaluating the number of pairwise concordances and discordances between ranking lists and produces values between $-1$ (for inverted order) and $1$ (for identical order). A violin plot, which simultaneously depicts a boxplot and a density plot, is generated from the results.
 
 \bigskip
 
 ```{r violin}
 violin(boot_object)
 ```
 
 
 
 
 
 ## *Significance maps* for visualizing ranking stability based on statistical significance
 
 *Significance maps* depict incidence matrices of
 pairwise significant test results for the one-sided Wilcoxon signed rank test at a 5\% significance level with adjustment for multiple testing according to Holm. Yellow shading indicates that metric values of the algorithm on the x-axis were significantly superior to those from the algorithm on the y-axis, blue color indicates no significant difference.
 
 
 \bigskip
 
 ```{r significancemap,fig.width=6, fig.height=6,out.width='200%'}
 temp=utils::capture.output(significanceMap(object,alpha=0.05,p.adjust.method="holm")
         )
 ```
 
 <!-- \subsubsection{Hasse diagram} -->
 
 <!-- ```{r single_stability_significance_hasse, fig.height=19} -->
 <!-- plot(relensemble) -->
 <!-- ``` -->
 
 
 
 
 ## Ranking robustness to ranking methods
 *Line plots* for visualizing ranking robustness across different ranking methods. Each algorithm is represented by one colored line. For each ranking method encoded on the x-axis, the height of the line represents the corresponding rank. Horizontal lines indicate identical ranks for all methods.
 
 \bigskip
 
 ```{r lineplot,fig.width=7,fig.height = 5}
 if (length(boot_object$matlist)<=6 &
     nrow((boot_object$matlist[[1]]))<=10 ){
   methodsplot(challenge_multiple,
               ordering = ordering_consensus,
               na.treat=object$call[[1]][[1]]$na.treat) + scale_color_manual(values=cols)
 } else {
   x=challenge_multiple
   for (subt in names(challenge_multiple)){
      dd=as.challenge(x[[subt]],
                      value=attr(x,"value"), 
                      algorithm=attr(x,"algorithm") ,
                      case=attr(x,"case"),
                      annotator = attr(x,"annotator"), 
                      by=attr(x,"by"),
                      smallBetter = !attr(x,"largeBetter"),
                      na.treat=object$call[[1]][[1]]$na.treat
                      )
  
     print(methodsplot(dd,
                       ordering = ordering_consensus) + scale_color_manual(values=cols)
           )
   }
 }
 ```
 
 
 
 
 
 # Visualization of cross-task insights
 
 Algorithms are ordered according to consensus ranking.
 
 
 
 
 ## Characterization of algorithms
 
 ### Ranking stability: Variability of achieved rankings across tasks
 <!-- Variability of achieved rankings across tasks: If a -->
 <!-- reasonably large number of tasks is available, a blob plot -->
 <!-- can be drawn, visualizing the distribution -->
 <!-- of ranks each algorithm attained across tasks. -->
 <!-- Displayed are all ranks and their frequencies an algorithm -->
 <!-- achieved in any task. If all tasks would provide the same -->
 <!-- stable ranking, narrow intervals around the diagonal would -->
 <!-- be expected. -->
 A blob plot similar to the one shown in Fig.~\ref{blobByTask} is drawn, substituting the rankings based on bootstrap samples with the rankings corresponding to the multiple tasks. This way, the distribution of ranks across tasks can be visualized intuitively.
 
 
 \bigskip
 
 ```{r blobplot_raw}
 #stability.ranked.list
 stability(object,ordering=ordering_consensus,max_size=9,size=8,shape=4)+scale_color_manual(values=cols)
 ```
 
 
 ### Ranking stability: Ranking variability via bootstrap approach
 
 A blob plot of bootstrap results over the different tasks, separated
 by algorithm, provides another perspective on the assessment data. This gives deeper insights into the characteristics
 of tasks and the ranking uncertainty of the algorithms in each
 task. 
 <!-- 1000 bootstrap Rankings were performed for each task. -->
 <!-- Each algorithm is considered separately and for each subtask (x-axis) all observed ranks across bootstrap samples (y-axis) are displayed. Additionally, medians and IQR is shown in black. -->
 
 <!-- We see which algorithm is consistently among best, which is consistently among worst, which vary extremely... -->
 
 
 \bigskip
 
 ```{r blobplot_bootstrap_byAlgorithm,fig.width=7,fig.height = 5}
 #stabilityByAlgorithm.bootstrap.list
 if (length(boot_object$matlist)<=6 &nrow((boot_object$matlist[[1]]))<=10 ){
   stabilityByAlgorithm(boot_object,
                        ordering=ordering_consensus,
                        max_size = 9,
                        size=4,
                        shape=4,
                        single = F) + scale_color_manual(values=cols)
 } else {
   pl=stabilityByAlgorithm(boot_object,
                           ordering=ordering_consensus,
                           max_size = 9,
                           size=4,
                           shape=4,
                           single = T)
   for (i in 1:length(pl)) print(pl[[i]] + 
                                   scale_color_manual(values=cols) +
                                   guides(size = guide_legend(title="%"),color="none")
                                 )
 }
 
 ```
 
 <!-- Stacked frequencies of observed ranks across bootstrap samples are displayed with colouring according to subtask. Vertical lines provide original (non-bootstrap) rankings for each subtask. -->
 
 An alternative representation is provided by a stacked
 frequency plot of the observed ranks, separated by algorithm. Observed ranks across bootstrap samples are
 displayed with colouring according to task. For algorithms that
 achieve the same rank in different tasks for the full assessment
 data set, vertical lines are on top of each other. Vertical lines
 allow comparing the achieved rank of each algorithm across
 different tasks.
 
 \bigskip
 
 
 ```{r stackedFrequencies_bootstrap_byAlgorithm,fig.width=7,fig.height = 5}
 #stabilityByAlgorithmStacked.bootstrap.list
 stabilityByAlgorithmStacked(boot_object,ordering=ordering_consensus)
 ```
 
 
 
 
 ## Characterization of tasks
 
 
 ### Visualizing bootstrap results
 To investigate which
 tasks separate algorithms well (i.e., lead to a stable ranking),
 two visualization methods are recommended.
 
 Bootstrap results can be shown in a blob plot with one plot for each
 task. In this view, the spread of the blobs for each algorithm
 can be compared across tasks. Deviations from the diagonal indicate deviations
 from the consensus ranking (over tasks). Specifically, if the rank
 distribution of an algorithm is consistently below the diagonal,
 the algorithm performed better in this task than on average
 across tasks, while if the rank distribution of an algorithm
 is consistently above the diagonal, the algorithm performed
 worse in this task than on average across tasks. At the bottom
 of each panel, the rank of each algorithm in the task is provided.
 
 
 <!-- Shows which subtask leads to stable ranking and in which subtask ranking is more uncertain. -->
 
 
 Same as in Section \ref{blobByTask} but now ordered according to consensus.
 
 \bigskip
 
 ```{r blobplot_bootstrap_byTask,fig.width=9, fig.height=9}
 #stabilityByTask.bootstrap.list
 if (length(boot_object$matlist)<=6 &nrow((boot_object$matlist[[1]]))<=10 ){
   stabilityByTask(boot_object,
                   ordering=ordering_consensus,
                   max_size = 9,
                   size=4,
                   shape=4) + scale_color_manual(values=cols)
 } else {
   pl=list()
   for (subt in names(boot_object$bootsrappedRanks)){
     a=list(bootsrappedRanks=list(boot_object$bootsrappedRanks[[subt]]),
            matlist=list(boot_object$matlist[[subt]]))
     names(a$bootsrappedRanks)=names(a$matlist)=subt
     class(a)="bootstrap.list"
     r=boot_object$matlist[[subt]]
     
     pl[[subt]]=stabilityByTask(a,
                                max_size = 9,
                                ordering=ordering_consensus,
                                size.ranks=.25*theme_get()$text$size,
                                size=4,
                                shape=4) + scale_color_manual(values=cols)
   }
   for (i in 1:length(pl)) print(pl[[i]])
 }
 ```
 
 
 ### Cluster Analysis
 <!-- Quite a different question of interest -->
 <!-- is to investigate the similarity of tasks with respect to their -->
 <!-- rankings, i.e., which tasks lead to similar ranking lists and the -->
 <!-- ranking of which tasks are very different. For this question -->
 <!-- a hierarchical cluster analysis is performed based on the -->
 <!-- distance between ranking lists. Different distance measures -->
 <!-- can be used (here: Spearman's footrule distance) -->
 <!-- as well as different agglomeration methods (here: complete and average).  -->
 
 
 *Dendrogram from hierarchical cluster analysis* and *network-type graphs* for assessing the similarity of tasks based on challenge rankings. 
 
 A dendrogram is a visualization approach based on hierarchical clustering. It depicts clusters according to a chosen distance measure (here: Spearman's footrule) as well as a chosen agglomeration method (here: complete and average agglomeration). 
 \bigskip
 
 ```{r , fig.width=6, fig.height=5,out.width='60%'}
 #d=relation_dissimilarity.ranked.list(object,method=kendall)
 
 # use ranking list
   relensemble=as.relation.ranked.list(object)
  
 # # use relations
 #   a=challenge_multi%>%decision.challenge(p.adjust.method="none")
 #   aa=lapply(a,as.relation.challenge.incidence)
 #   names(aa)=names(challenge_multi)
 #   relensemble= do.call(relation_ensemble,args = aa)
 d <- relation_dissimilarity(relensemble, method = "symdiff")
 ```
 
   
 ```{r dendrogram_complete, fig.width=6, fig.height=5,out.width='60%'}
 if (length(relensemble)>2) {
   plot(hclust(d,method="complete")) #,main="Symmetric difference distance - complete"
 } else cat("\nCluster analysis only sensible if there are >2 tasks.\n\n")
 ```
 
 \bigskip
 
 
 ```{r dendrogram_average, fig.width=6, fig.height=5,out.width='60%'}
 if (length(relensemble)>2) plot(hclust(d,method="average")) #,main="Symmetric difference distance - average"
 ```
 
 <!-- An alternative representation of -->
 <!-- distances between tasks (see Eugster et al, 2008) is provided by networktype -->
 <!-- graphs. -->
 <!-- Every task is represented by a node and nodes are connected -->
 <!-- by edges. Distance between nodes increase exponentially with -->
 <!-- the chosen distance measure d (here: distance between nodes -->
 <!-- equal to 1:05d). Thick edges represent smaller distance, i.e., -->
 <!-- the ranking lists of corresponding tasks are similar. Tasks with -->
 <!-- a unique winner are filled to indicate the algorithm. In case -->
 <!-- there are more than one first-ranked algorithm, nodes remain -->
 <!-- uncoloured. -->
 
 
 In network-type graphs (see Eugster et al, 2008), every task is represented by a node and nodes are connected by edges whose length is determined by a chosen distance measure. Here, distances between nodes are chosen to increase exponentially in Spearman's footrule distance with growth rate 0.05 to accentuate large distances.
 Hence, tasks that are similar with respect to their algorithm ranking appear closer together than those that are dissimilar. Nodes representing tasks with a unique winner are color-coded by the winning algorithm. If there is more than one first-ranked algorithm in a task, the corresponding node remains uncolored.
 \bigskip
 
 ```{r ,eval=T,fig.width=12, fig.height=6,include=FALSE}
 if (length(relensemble)>2) {
   netw=network(object,
                method = "symdiff", 
                edge.col=grDevices::grey.colors,
                edge.lwd=1,
                rate=1.05,
                cols=cols
                )
   
   plot.new()
   leg=legend("topright",  names(netw$leg.col), lwd = 1, col = netw$leg.col, bg =NA,plot=F,cex=.8)
   w <- grconvertX(leg$rect$w, to='inches')
   addy=6+w
 } else addy=1
 
 ```
 
 ```{r network, fig.width=addy, fig.height=6,out.width='100%'}
 if (length(relensemble)>2) {
   plot(netw,
        layoutType = "neato",
        fixedsize=TRUE,
        # fontsize,
        # width,
        # height,
        shape="ellipse",
        cex=.8
        )
 }
 
 ```
 
 
 # Reference
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein, L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for analyzing and visualizing challenge results. *arXiv preprint arXiv:1910.05121*
 
 M. J. A. Eugster, T. Hothorn, and F. Leisch, “Exploratory
 and inferential analysis of benchmark experiments,”
 Institut fuer Statistik, Ludwig-Maximilians-Universitaet
 Muenchen, Germany, Technical Report 30, 2008. [Online].
 Available: http://epub.ub.uni-muenchen.de/4134/.
 
 
 
 
 
 
 
diff --git a/inst/appdir/reportMultipleShort.Rmd b/inst/appdir/reportMultipleShort.Rmd
index deb75dd..f363c6e 100644
--- a/inst/appdir/reportMultipleShort.Rmd
+++ b/inst/appdir/reportMultipleShort.Rmd
@@ -1,409 +1,409 @@
 ---
 params:
   object: NA
   colors: NA
   name: NULL
   consensus: NA
 title: "Benchmarking report for `r params$name` "
-author: created by challengeR `r packageVersion('challengeR')` (Wiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider, 2019)
+author: "created by challengeR v`r packageVersion('challengeR')`  \nWiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider (2019)"
 date: "`r Sys.setlocale('LC_TIME', 'English'); format(Sys.time(), '%d %B, %Y')`"
 editor_options: 
   chunk_output_type: console
 ---
 
 <!-- This text is outcommented -->
 <!-- R code chunks start with "```{r }" and end with "```" -->
 <!-- Please do not change anything inside of code chunks, otherwise any latex code is allowed -->
 
 <!-- inline code with `r 0` -->
 
 
 ```{r setup, include=FALSE}
 options(width=80)
 out.format <- knitr::opts_knit$get("out.format")
 img_template <- switch( out.format,
                      word = list("img-params"=list(dpi=150,
                                                fig.width=6,
                                                fig.height=6,
                                                out.width="504px",
                                                out.height="504px")),
                      {
                        # default
                        list("img-params"=list( fig.width=7,fig.height = 3,dpi=300))
                      } )
 
 knitr::opts_template$set( img_template )
 
 knitr::opts_chunk$set(echo = F,#fig.width=7,fig.height = 3,dpi=300,
                       fig.align="center")
 theme_set(theme_light())
 
 ```
 
 
 ```{r }
 object = params$object
 ordering_consensus=names(params$consensus)
 color.fun=params$colors
 
 ```
 
 ```{r }
 challenge_multiple=object$data
 
 ranking.fun=object$FUN
 
 cols_numbered=cols=color.fun(length(ordering_consensus))
 names(cols)=ordering_consensus
 names(cols_numbered)= paste(1:length(cols),names(cols))
 
 ```
 
 
 This document presents a systematic report on a benchmark study. Input data comprises raw metric values for all algorithms and test cases. Generated plots are:
 
 * Visualization of assessment data: Dot- and boxplots, podium plots and ranking heatmaps
 * Visualization of ranking robustness: Line plots
 * Visualization of ranking stability: Significance maps
 * Visualization of cross-task insights
 
 
 Ranking of algorithms within tasks according to the following chosen ranking scheme:
 
 ```{r,results='asis'}
 a=(  lapply(object$FUN.list,function(x) {
                  if (!is.character(x)) return(paste0("aggregate using function ",
                                                      paste(gsub("UseMethod","",
                                                                 deparse(functionBody(x))),
                                                            collapse=" ")
                                                      ))
                  else if (x=="rank") return(x)
                  else return(paste0("aggregate using function ",x))
   }))
 cat("&nbsp; &nbsp; *",paste0(a,collapse=" then "),"*",sep="")
 
 if (is.character(object$FUN.list[[1]]) && object$FUN.list[[1]]=="significance") cat("\n\n Column 'prop.sign' is equal to the number of pairwise significant test results for a given algorithm divided by the number of algorithms.")
 ```
 
 
 Ranking list for each task:
 ```{r,results='asis'}
 for (t in 1:length(object$matlist)){
   cat("\n",names(object$matlist)[t],": ")
   n.cases=nrow(challenge_multiple[[t]])/length(unique(challenge_multiple[[t]][[attr(challenge_multiple,"algorithm")]]))
   
   cat("\nAnalysis based on ", 
       n.cases,
       " test cases which included", 
       sum(is.na(challenge_multiple[[t]][[attr(challenge_multiple,"value")]])),
       " missing values.")
   
   x=object$matlist[[t]]
   print(knitr::kable(x[order(x$rank),]))
 }
 
 ```
 
 \bigskip
 
 Consensus ranking according to chosen method `r attr(params$consensus,"method")`:
 ```{r}
 knitr::kable(data.frame(value=round(params$consensus,3), 
                         rank=rank(params$consensus, 
                                   ties.method="min")))
 ```
 
 
 # Visualization of raw assessment data
 Algorithms are ordered according to chosen ranking scheme for each task.
 
 ## Dot- and boxplots
 
 *Dot- and boxplots* for visualizing raw assessment data separately for each algorithm. Boxplots representing descriptive statistics over all test cases (median, quartiles and outliers) are combined with horizontally jittered dots representing individual test cases.
 
 \bigskip
 
 ```{r boxplots}
 temp=boxplot(object, size=.8)
 temp=lapply(temp, function(x) utils::capture.output(x+xlab("Algorithm")+ylab("Metric value")))
 
 ```
 
 
 
 ## Podium plots
 *Podium plots* (see also Eugster et al, 2008) for visualizing raw assessment data. Upper part (spaghetti plot): Participating algorithms are color-coded, and each colored dot in the plot represents a metric value achieved with the respective algorithm. The actual metric value is encoded by the y-axis. Each podium (here: $p$=`r length(ordering_consensus)`) represents one possible rank, ordered from best (1) to last (here: `r length(ordering_consensus)`). The assignment of metric values (i.e. colored dots) to one of the podiums is based on the rank that the respective algorithm achieved on the corresponding test case. Note that the plot part above each podium place is further subdivided into $p$ "columns", where each column represents one participating algorithm (here: $p=$ `r length(ordering_consensus)`).  Dots corresponding to identical test cases are connected by a line, leading to the shown spaghetti structure. Lower part: Bar charts represent the relative frequency for each algorithm to achieve the rank encoded by the podium place. 
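 
 As a sketch of the underlying idea (toy values for a single test case, assuming larger metric values are better; not the package's internal code):
 
 ```{r, eval=FALSE, echo=TRUE}
 # Metric values of three algorithms on one test case; the per-case rank
 # determines the podium place of the corresponding colored dot (1 = best):
 metric_values <- c(A1 = 0.82, A2 = 0.90, A3 = 0.75)
 rank(-metric_values)
 #> A1 A2 A3
 #>  2  1  3
 ```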
 
 ```{r ,eval=T,fig.width=12, fig.height=6,include=FALSE}
 plot.new()
 algs=ordering_consensus
 l=legend("topright", 
          paste0(1:length(algs),": ",algs), 
          lwd = 1, cex=1.4,seg.len=1.1,
          title="Rank: Alg.",
          plot=F) 
 
 w <- grconvertX(l$rect$w, to='ndc') - grconvertX(0, to='ndc')
 h<- grconvertY(l$rect$h, to='ndc') - grconvertY(0, to='ndc')
 addy=max(grconvertY(l$rect$h,"user","inches"),6)
 ```
 
 
 ```{r podium,eval=T,fig.width=12, fig.height=addy}
 #c(bottom, left, top, right
 
 op<-par(pin=c(par()$pin[1],6),
         omd=c(0, 1-w, 0, 1),
         mar=c(par('mar')[1:3], 0)+c(-.5,0.5,-.5,0),
         cex.axis=1.5,
         cex.lab=1.5,
         cex.main=1.7)
 
 oh=grconvertY(l$rect$h,"user","lines")-grconvertY(6,"inches","lines")
 if (oh>0) par(oma=c(oh,0,0,0))
 
 
 set.seed(38)
 podium(object,
        col=cols,
        lines.show = T, lines.alpha = .4,
        dots.cex=.9,
        ylab="Metric value",
        layout.heights=c(1,.35),
        legendfn = function(algs, cols) {
                  legend(par('usr')[2], par('usr')[4], 
                  xpd=NA, 
                  paste0(1:length(algs),": ",algs), 
                  lwd = 1, col =  cols, 
                  bg = NA,
                  cex=1.4, seg.len=1.1,
                  title="Rank: Alg.") 
         }
       )
 par(op)
   
 ```
 
 
 ## Ranking heatmaps
 *Ranking heatmaps* for visualizing raw assessment data. Each cell $\left( i, A_j \right)$ shows the absolute frequency of test cases in which algorithm $A_j$ achieved rank $i$.
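 
 The cell entries amount to a cross-tabulation of per-test-case ranks, e.g. (toy sketch, not the package's internal code):
 
 ```{r, eval=FALSE, echo=TRUE}
 # Per-test-case ranks of two algorithms across five test cases:
 per_case <- data.frame(algorithm = rep(c("A1", "A2"), each = 5),
                        rank      = c(1, 1, 2, 1, 2,  2, 2, 1, 2, 1))
 table(rank = per_case$rank, algorithm = per_case$algorithm)
 #>     algorithm
 #> rank A1 A2
 #>    1  3  2
 #>    2  2  3
 ```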
 
 \bigskip
 
 ```{r rankingHeatmap,fig.width=9, fig.height=9,out.width='70%'}
 temp=utils::capture.output(rankingHeatmap(object))
 ```
 
 
 
 # Visualization of ranking stability
 
 
 
 
 
 ## *Significance maps* for visualizing ranking stability based on statistical significance
 
 *Significance maps* depict incidence matrices of pairwise significant test results for the one-sided Wilcoxon signed rank test at a 5\% significance level, with adjustment for multiple testing according to Holm. Yellow shading indicates that metric values of the algorithm on the x-axis were significantly superior to those of the algorithm on the y-axis; blue shading indicates no significant difference.
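 
 The test machinery behind these maps can be sketched with base R (toy data, assuming larger metric values are better; the map below is produced by the package's `significanceMap()`):
 
 ```{r, eval=FALSE, echo=TRUE}
 # Metric values of two algorithms on the same five test cases:
 metric_A <- c(0.81, 0.77, 0.90, 0.85, 0.79)
 metric_B <- c(0.80, 0.75, 0.87, 0.81, 0.74)
 # One-sided Wilcoxon signed rank test: is A significantly superior to B?
 p_AB <- wilcox.test(metric_A, metric_B, paired = TRUE,
                     alternative = "greater")$p.value  # 1/2^5 = 0.03125 here
 # All pairwise p-values are then adjusted for multiple testing (Holm):
 p.adjust(c(p_AB, 0.004, 0.30), method = "holm")
 ```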
 
 
 \bigskip
 
 ```{r significancemap,fig.width=6, fig.height=6,out.width='200%'}
 temp=utils::capture.output(significanceMap(object,alpha=0.05,p.adjust.method="holm")
         )
 
 ```
 
 <!-- \subsubsection{Hasse diagram} -->
 
 <!-- ```{r single_stability_significance_hasse, fig.height=19} -->
 <!-- plot(relensemble) -->
 <!-- ``` -->
 
 
 
 
 ## Ranking robustness to ranking methods
 *Line plots* for visualizing ranking robustness across different ranking methods. Each algorithm is represented by one colored line. For each ranking method encoded on the x-axis, the height of the line represents the corresponding rank. Horizontal lines indicate identical ranks for all methods.
 
 \bigskip
 
 ```{r lineplot,fig.width=7,fig.height = 5}
 if (length(object$matlist)<=6 &nrow((object$matlist[[1]]))<=10 ){
   methodsplot(challenge_multiple,
               ordering = ordering_consensus,
               na.treat=object$call[[1]][[1]]$na.treat) + scale_color_manual(values=cols)
 } else {
   x=challenge_multiple
   for (subt in names(challenge_multiple)){
      dd=as.challenge(x[[subt]],
                      value=attr(x,"value"), 
                      algorithm=attr(x,"algorithm") ,
                      case=attr(x,"case"),
                      annotator = attr(x,"annotator"), 
                      by=attr(x,"by"),
                      smallBetter = !attr(x,"largeBetter"),
                      na.treat=object$call[[1]][[1]]$na.treat
                      )
  
     print(methodsplot(dd,
                       ordering = ordering_consensus) + scale_color_manual(values=cols)
           )
   }
 }
 ```
 
 
 
 
 
 # Visualization of cross-task insights
 
 Algorithms are ordered according to the consensus ranking.
 
 
 
 
 ## Characterization of algorithms
 
 ### Ranking stability: Variability of achieved rankings across tasks
 <!-- Variability of achieved rankings across tasks: If a -->
 <!-- reasonably large number of tasks is available, a blob plot -->
 <!-- can be drawn, visualizing the distribution -->
 <!-- of ranks each algorithm attained across tasks. -->
 <!-- Displayed are all ranks and their frequencies an algorithm -->
 <!-- achieved in any task. If all tasks would provide the same -->
 <!-- stable ranking, narrow intervals around the diagonal would -->
 <!-- be expected. -->
 
 <!-- blob plot similar to the one shown in Fig.~\ref{blobByTask} substituting rankings based on bootstrap samples with the rankings corresponding to multiple tasks. This way, the distribution of ranks across tasks can be intuitively visualized. -->
 
 
 \bigskip
 
 ```{r blobplot_raw}
 #stability.ranked.list
 stability(object,ordering=ordering_consensus,max_size=9,size=8,shape=4)+scale_color_manual(values=cols)
 ```
 
 
 
 
 ## Characterization of tasks
 
 
 
 ### Cluster Analysis
 <!-- Quite a different question of interest -->
 <!-- is to investigate the similarity of tasks with respect to their -->
 <!-- rankings, i.e., which tasks lead to similar ranking lists and the -->
 <!-- ranking of which tasks are very different. For this question -->
 <!-- a hierarchical cluster analysis is performed based on the -->
 <!-- distance between ranking lists. Different distance measures -->
 <!-- can be used (here: Spearman's footrule distance) -->
 <!-- as well as different agglomeration methods (here: complete and average).  -->
 
 
 *Dendrograms from hierarchical cluster analysis* and *network-type graphs* for assessing the similarity of tasks based on challenge rankings.
 
 A dendrogram is a visualization approach based on hierarchical clustering. It depicts clusters according to a chosen distance measure (here: Spearman's footrule) as well as a chosen agglomeration method (here: complete and average agglomeration). 
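 
 For intuition, Spearman's footrule distance between two ranking lists is the sum of absolute rank differences; a minimal sketch with toy ranks (not the package's internal code):
 
 ```{r, eval=FALSE, echo=TRUE}
 # Toy ranks of four algorithms in two tasks:
 ranks_T1 <- c(A1 = 1, A2 = 2, A3 = 3, A4 = 4)
 ranks_T2 <- c(A1 = 2, A2 = 1, A3 = 3, A4 = 4)
 sum(abs(ranks_T1 - ranks_T2))  # Spearman's footrule distance: 2
 
 # Given a matrix of pairwise distances between tasks, the dendrograms below
 # follow from hierarchical clustering, e.g. hclust(as.dist(D), method = "complete")
 ```
 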
 \bigskip
 
 ```{r , fig.width=6, fig.height=5,out.width='60%'}
 #d=relation_dissimilarity.ranked.list(object,method=kendall)
 
 # use ranking list
   relensemble=as.relation.ranked.list(object)
  
 # # use relations
 #   a=challenge_multi%>%decision.challenge(p.adjust.method="none")
 #   aa=lapply(a,as.relation.challenge.incidence)
 #   names(aa)=names(challenge_multi)
 #   relensemble= do.call(relation_ensemble,args = aa)
 d <- relation_dissimilarity(relensemble, method = "symdiff")
 ```
 
   
 ```{r dendrogram_complete, fig.width=6, fig.height=5,out.width='60%'}
 if (length(relensemble)>2) {
   plot(hclust(d,method="complete")) #,main="Symmetric difference distance - complete"
 } else cat("\nCluster analysis only sensible if there are >2 tasks.\n\n")
 ```
 
 \bigskip
 
 
 ```{r dendrogram_average, fig.width=6, fig.height=5,out.width='60%'}
 if (length(relensemble)>2) plot(hclust(d,method="average")) #,main="Symmetric difference distance - average"
 ```
 
 <!-- An alternative representation of -->
 <!-- distances between tasks (see Eugster et al, 2008) is provided by networktype -->
 <!-- graphs. -->
 <!-- Every task is represented by a node and nodes are connected -->
 <!-- by edges. Distance between nodes increase exponentially with -->
 <!-- the chosen distance measure d (here: distance between nodes -->
 <!-- equal to 1:05d). Thick edges represent smaller distance, i.e., -->
 <!-- the ranking lists of corresponding tasks are similar. Tasks with -->
 <!-- a unique winner are filled to indicate the algorithm. In case -->
 <!-- there are more than one first-ranked algorithm, nodes remain -->
 <!-- uncoloured. -->
 
 
 In network-type graphs (see Eugster et al, 2008), every task is represented by a node, and nodes are connected by edges whose length is determined by a chosen distance measure. Here, distances between nodes increase exponentially in Spearman's footrule distance with growth rate 0.05 in order to accentuate large distances. Hence, tasks that are similar with respect to their algorithm ranking appear closer together than those that are dissimilar. Nodes representing tasks with a unique winning algorithm are color-coded by that algorithm; if more than one algorithm is ranked first in a task, the corresponding node remains uncolored.
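 
 For instance, with edge lengths growing roughly like $1.05^d$ in the task distance $d$ (cf. `rate = 1.05` in the chunk below), distances of 5, 10 and 20 translate into relative edge lengths of about 1.28, 1.63 and 2.65:
 
 ```{r, eval=FALSE, echo=TRUE}
 # Edge length grows exponentially in the task distance d (growth rate 0.05):
 round(1.05^c(5, 10, 20), 2)
 #> [1] 1.28 1.63 2.65
 ```
 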
 \bigskip
 
 ```{r ,eval=T,fig.width=12, fig.height=6,include=FALSE}
 if (length(relensemble)>2) {
   netw=network(object,
                method = "symdiff", 
                edge.col=grDevices::grey.colors,
                edge.lwd=1,
                rate=1.05,
                cols=cols
                )
   
   plot.new()
   leg=legend("topright",  names(netw$leg.col), lwd = 1, col = netw$leg.col, bg =NA,plot=F,cex=.8)
   w <- grconvertX(leg$rect$w, to='inches')
   addy=6+w
 } else addy=1
 
 ```
 
 ```{r network, fig.width=addy, fig.height=6,out.width='100%'}
 if (length(relensemble)>2) {
   plot(netw,
        layoutType = "neato",
        fixedsize=TRUE,
        # fontsize,
        # width,
        # height,
        shape="ellipse",
        cex=.8
        )
 }
 
 ```
 
 
 # Reference
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein, L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for analyzing and visualizing challenge results. *arXiv preprint arXiv:1910.05121*
 
 Eugster, M.J.A., Hothorn, T. and Leisch, F. (2008). Exploratory and inferential analysis of benchmark experiments. Technical Report 30, Institut fuer Statistik, Ludwig-Maximilians-Universitaet Muenchen, Germany. http://epub.ub.uni-muenchen.de/4134/
 
 
 
 
 
 
 
diff --git a/inst/appdir/reportSingle.Rmd b/inst/appdir/reportSingle.Rmd
index e4f0fff..90efe46 100644
--- a/inst/appdir/reportSingle.Rmd
+++ b/inst/appdir/reportSingle.Rmd
@@ -1,297 +1,297 @@
 ---
 params:
   object: NA
   colors: NA
   name: NULL
 title: "Benchmarking report for `r params$name` "
-author: created by challengeR `r packageVersion('challengeR')` (Wiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider, 2019)
+author: "created by challengeR v`r packageVersion('challengeR')`  \nWiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider (2019)"
 date: "`r Sys.setlocale('LC_TIME', 'English'); format(Sys.time(), '%d %B, %Y')`"
 editor_options: 
   chunk_output_type: console
 ---
 
 
 
 
 ```{r setup, include=FALSE}
 options(width=80)
 # out.format <- knitr::opts_knit$get("out.format")
 # img_template <- switch( out.format,
 #                      word = list("img-params"=list(fig.width=6,
 #                                                    fig.height=6,
 #                                                    dpi=150)),
 #                      {
 #                        # default
 #                        list("img-params"=list( dpi=150,
 #                                                fig.width=6,
 #                                                fig.height=6,
 #                                                out.width="504px",
 #                                                out.height="504px"))
 #                      } )
 # 
 # knitr::opts_template$set( img_template )
 
 knitr::opts_chunk$set(echo = F,fig.width=7,fig.height = 3,dpi=300,fig.align="center")
 #theme_set(theme_light())
 theme_set(theme_light(base_size=11))
 
 ```
 
 ```{r }
 boot_object = params$object
 color.fun=params$colors
 ```
 
 
 ```{r }
 challenge_single=boot_object$data
 ordering=  names(sort(t(boot_object$mat[,"rank",drop=F])["rank",]))
 ranking.fun=boot_object$FUN
 object=challenge_single%>%ranking.fun
 
 object$fulldata=boot_object$fulldata  # only not NULL if subset of algorithms used
 
 cols_numbered=cols=color.fun(length(ordering))
 names(cols)=ordering
 names(cols_numbered)= paste(1:length(cols),names(cols))
 
 ```
 
 
 
 <!-- ***** -->
 
 <!-- This text is outcommented -->
 <!-- R code chunks start with "```{r }" and end with "```" -->
 <!-- Please do not change anything inside of code chunks; outside of code chunks, any LaTeX code is allowed -->
 
 <!-- inline code with `r 0` -->
 
 
 This document presents a systematic report on a benchmark study. Input data comprises raw metric values for all algorithms and test cases. Generated plots are:
 
 * Visualization of assessment data: Dot- and boxplot, podium plot and ranking heatmap
 * Visualization of ranking robustness: Line plot
 * Visualization of ranking stability: Blob plot, violin plot and significance map
 
 ```{r}
 n.cases=nrow(challenge_single)/length(unique(challenge_single[[attr(challenge_single,"algorithm")]]))
 ```
 
 Analysis based on `r n.cases` test cases which included `r sum(is.na(challenge_single[[attr(challenge_single,"value")]]))` missing values.
 
 ```{r,results='asis'}
 if (!is.null(boot_object$fulldata)) {
   cat("Only top ",
       length(levels(boot_object$data[[attr(boot_object$data,"algorithm")]])), 
       " out of ", 
       length(levels(boot_object$fulldata[[attr(boot_object$data,"algorithm")]])), 
       " algorithms visualized.\n")
 }
 ```
 
 
 ```{r}
 if (n.cases<log2(5000)) warning("Bootstrapping in case of few test cases should be treated with caution!")
 ```
 
 Algorithms are ordered according to the following chosen ranking scheme:
 
 ```{r,results='asis'}
 a=(  lapply(object$FUN.list,function(x) {
                if (!is.character(x)) return(paste0("aggregate using function ",
                                                    paste(gsub("UseMethod",
                                                               "",
                                                               deparse(functionBody(x))),
                                                          collapse=" "))
                                             )
                else if (x=="rank") return(x)
                else return(paste0("aggregate using function ",x))
        })
      )
 cat("&nbsp; &nbsp; *",paste0(a,collapse=" then "),"*",sep="")
 
 
 if (is.character(object$FUN.list[[1]]) && object$FUN.list[[1]]=="significance") cat("\n\n Column 'prop.sign' is equal to the number of pairwise significant test results for a given algorithm divided by the number of algorithms.")
 ```
 
 
 
 Ranking list:
 
 ```{r}
 #knitr::kable(object$mat[order(object$mat$rank),])
 print(object)
 
 ```
 
 
 
 
 
 
 
 # Visualization of raw assessment data
 
 ## Dot- and boxplot
 
 *Dot- and boxplots* for visualizing raw assessment data separately for each algorithm. Boxplots representing descriptive statistics over all test cases (median, quartiles and outliers) are combined with horizontally jittered dots representing individual test cases.
 \bigskip
 
 ```{r boxplots}
 boxplot(object,size=.8)+xlab("Algorithm")+ylab("Metric value")
 
 ```
 
 
 
 ## Podium plot
 *Podium plots* (see also Eugster et al, 2008) for visualizing raw assessment data. Upper part (spaghetti plot): Participating algorithms are color-coded, and each colored dot in the plot represents a metric value achieved with the respective algorithm. The actual metric value is encoded by the y-axis. Each podium (here: $p$=`r length(ordering)`) represents one possible rank, ordered from best (1) to last (here: `r length(ordering)`). The assignment of metric values (i.e. colored dots) to one of the podiums is based on the rank that the respective algorithm achieved on the corresponding test case. Note that the plot part above each podium place is further subdivided into $p$ "columns", where each column represents one participating algorithm (here: $p=$ `r length(ordering)`).  Dots corresponding to identical test cases are connected by a line, leading to the shown spaghetti structure. Lower part: Bar charts represent the relative frequency for each algorithm to achieve the rank encoded by the podium place. 
 \bigskip
 
 
 ```{r ,eval=T,fig.width=12, fig.height=6,include=FALSE}
 plot.new()
 algs=levels(challenge_single[[attr(challenge_single,"algorithm")]])
 
 l=legend("topright", 
          paste0(1:length(algs),": ",algs), 
          lwd = 1, cex=1.4,seg.len=1.1,
          title="Rank: Alg.",
          plot=F) 
 w <- grconvertX(l$rect$w, to='ndc') - grconvertX(0, to='ndc')
 h<- grconvertY(l$rect$h, to='ndc') - grconvertY(0, to='ndc')
 addy=max(grconvertY(l$rect$h,"user","inches"),6)
 ```
 
 
 ```{r podium,eval=T,fig.width=12, fig.height=addy}
 op<-par(pin=c(par()$pin[1],6),
         omd=c(0, 1-w, 0, 1),
         mar=c(par('mar')[1:3], 0)+c(-.5,0.5,-3.3,0),
         cex.axis=1.5,
         cex.lab=1.5,
         cex.main=1.7)
 oh=grconvertY(l$rect$h,"user","lines")-grconvertY(6,"inches","lines")
 if (oh>0) par(oma=c(oh,0,0,0))
 
 set.seed(38)
 podium(object, 
        col=cols,
        lines.show = T,lines.alpha = .4,
        dots.cex=.9,ylab="Metric value",layout.heights=c(1,.35),
        legendfn = function(algs, cols) {
          legend(par('usr')[2], 
                 par('usr')[4], 
                 xpd=NA, 
                 paste0(1:length(algs),": ",algs), 
                 lwd = 1, 
                 col =  cols,
                 bg = NA,
                 cex=1.4,
                 seg.len=1.1,
                 title="Rank: Alg.") 
         }
         )
 par(op)
 ```
 
 
 ## Ranking heatmap
 *Ranking heatmaps* for visualizing raw assessment data. Each cell $\left( i, A_j \right)$ shows the absolute frequency of test cases in which algorithm $A_j$ achieved rank $i$.
 
 \bigskip
 
 ```{r rankingHeatmap,fig.width=9, fig.height=9,out.width='70%'}
 rankingHeatmap(object)
 ```
 
 
 
 # Visualization of ranking stability
 
 
 
 <!-- Results based on `r ncol(boot_object$bootsrappedRanks)` bootstrap samples. -->
 
 ## *Blob plot* for visualizing ranking stability based on bootstrap sampling
 
 Algorithms are color-coded, and the area of each blob at position $\left( A_i, \text{rank } j \right)$ is proportional to the relative frequency with which $A_i$ achieved rank $j$ across $b=$ `r ncol(boot_object$bootsrappedRanks)` bootstrap samples. The median rank for each algorithm is indicated by a black cross. 95\% bootstrap intervals across bootstrap samples are indicated by black lines.
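 
 A minimal sketch (toy data, not the package's internal code) of how median ranks and 95\% bootstrap intervals can be read off a matrix of bootstrapped ranks, with algorithms in rows and bootstrap samples in columns:
 
 ```{r, eval=FALSE, echo=TRUE}
 set.seed(1)
 # Toy matrix: ranks of 3 algorithms (rows) across 100 bootstrap samples (columns)
 ranks_boot <- replicate(100, sample(1:3))
 rownames(ranks_boot) <- paste0("A", 1:3)
 
 apply(ranks_boot, 1, median)                             # median rank per algorithm
 apply(ranks_boot, 1, quantile, probs = c(0.025, 0.975))  # 95% bootstrap interval
 ```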
 
 \bigskip
 
 ```{r blobplot,fig.width=7,fig.height = 7}
 stability(boot_object,max_size = 8,size.ranks=.25*theme_get()$text$size,size=8,shape=4 )+scale_color_manual(values=cols)
 ```
 
 
 ## Violin plot for visualizing ranking stability based on bootstrapping
 
 The ranking list based on the full assessment data is compared pairwise with the ranking lists based on the individual bootstrap samples (here $b=$ `r ncol(boot_object$bootsrappedRanks)` samples). For each pair of rankings, Kendall's $\tau$ correlation is computed. Kendall's $\tau$ is a scaled index quantifying the correlation between two ranking lists. It is computed from the number of pairwise concordances and discordances between the lists and takes values between $-1$ (inverted order) and $1$ (identical order). A violin plot, which simultaneously depicts a boxplot and a density plot, is generated from the results.
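 
 For intuition (toy ranking lists, not the package's internal code): identical lists give $\tau=1$, a fully inverted list gives $\tau=-1$, and a single swapped pair of neighbors yields a value close to 1:
 
 ```{r, eval=FALSE, echo=TRUE}
 full_ranking      <- c(1, 2, 3, 4, 5)
 bootstrap_ranking <- c(2, 1, 3, 4, 5)   # one neighboring pair swapped
 cor(full_ranking, bootstrap_ranking, method = "kendall")  # 0.8
 cor(full_ranking, rev(full_ranking),  method = "kendall") # -1
 ```
 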
 \bigskip
 
 ```{r violin}
 violin(boot_object)+xlab("")
 ```
 
 
 
 
 
 ## *Significance maps* for visualizing ranking stability based on statistical significance
 
 *Significance maps* depict incidence matrices of pairwise significant test results for the one-sided Wilcoxon signed rank test at a 5\% significance level, with adjustment for multiple testing according to Holm. Yellow shading indicates that metric values of the algorithm on the x-axis were significantly superior to those of the algorithm on the y-axis; blue shading indicates no significant difference.
 \bigskip
 
 ```{r significancemap,fig.width=7, fig.height=6}
 print(significanceMap(object,alpha=0.05,p.adjust.method="holm")
         )
 ```
 
 
 
 
 <!-- \subsubsection{Hasse diagram} -->
 
 <!-- ```{r single_stability_significance_hasse, fig.height=19} -->
 <!-- plot(relensemble) -->
 <!-- ``` -->
 
 
 
 
 ## Ranking robustness with respect to ranking methods
 *Line plots* for visualizing ranking robustness across different ranking methods. Each algorithm is represented by one colored line. For each ranking method encoded on the x-axis, the height of the line represents the corresponding rank. Horizontal lines indicate identical ranks for all methods.
 
 \bigskip
 
 ```{r lineplot,fig.width=7,fig.height = 5}
 methodsplot(object )+scale_color_manual(values=cols)
 ```
 
 
 
 
 
 # Reference
 
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein, L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for analyzing and visualizing challenge results. *arXiv preprint arXiv:1910.05121*
 
 
 Eugster, M.J.A., Hothorn, T. and Leisch, F. (2008). Exploratory and inferential analysis of benchmark experiments. Technical Report 30, Institut fuer Statistik, Ludwig-Maximilians-Universitaet Muenchen, Germany. http://epub.ub.uni-muenchen.de/4134/
 
 
 
 
 
 
 
 
 
diff --git a/inst/appdir/reportSingleShort.Rmd b/inst/appdir/reportSingleShort.Rmd
index e32cde9..87a7980 100644
--- a/inst/appdir/reportSingleShort.Rmd
+++ b/inst/appdir/reportSingleShort.Rmd
@@ -1,261 +1,261 @@
 ---
 params:
   object: NA
   colors: NA
   name: NULL
 title: "Benchmarking report for `r params$name` "
-author: created by challengeR `r packageVersion('challengeR')` (Wiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider, 2019)
+author: "created by challengeR v`r packageVersion('challengeR')`  \nWiesenfarth, Reinke, Landman, Cardoso, Maier-Hein & Kopp-Schneider (2019)"
 date: "`r Sys.setlocale('LC_TIME', 'English'); format(Sys.time(), '%d %B, %Y')`"
 editor_options: 
   chunk_output_type: console
 ---
 
 
 
 
 ```{r setup, include=FALSE}
 options(width=80)
 # out.format <- knitr::opts_knit$get("out.format")
 # img_template <- switch( out.format,
 #                      word = list("img-params"=list(fig.width=6,
 #                                                    fig.height=6,
 #                                                    dpi=150)),
 #                      {
 #                        # default
 #                        list("img-params"=list( dpi=150,
 #                                                fig.width=6,
 #                                                fig.height=6,
 #                                                out.width="504px",
 #                                                out.height="504px"))
 #                      } )
 # 
 # knitr::opts_template$set( img_template )
 
 knitr::opts_chunk$set(echo = F,fig.width=7,fig.height = 3,dpi=300,fig.align="center")
 #theme_set(theme_light())
 theme_set(theme_light(base_size=11))
 
 ```
 
 ```{r }
 object = params$object
 color.fun=params$colors
 ```
 
 
 ```{r }
 challenge_single=object$data
 ordering=  names(sort(t(object$mat[,"rank",drop=F])["rank",]))
 ranking.fun=object$FUN
 
 cols_numbered=cols=color.fun(length(ordering))
 names(cols)=ordering
 names(cols_numbered)= paste(1:length(cols),names(cols))
 
 ```
 
 
 
 <!-- ***** -->
 
 <!-- This text is outcommented -->
 <!-- R code chunks start with "```{r }" and end with "```" -->
 <!-- Please do not change anything inside of code chunks; outside of code chunks, any LaTeX code is allowed -->
 
 <!-- inline code with `r 0` -->
 
 This document presents a systematic report on a benchmark study. Input data comprises raw metric values for all algorithms and test cases. Generated plots are:
 
 * Visualization of assessment data: Dot- and boxplot, podium plot and ranking heatmap
 * Visualization of ranking robustness: Line plot
 * Visualization of ranking stability: Significance map
 
 
 Analysis based on `r nrow(challenge_single)/length(unique(challenge_single[[attr(challenge_single,"algorithm")]]))` test cases which included `r sum(is.na(challenge_single[[attr(challenge_single,"value")]]))` missing values.
 
 ```{r,results='asis'}
 if (!is.null(object$fulldata)) {
   cat("Only top ",
       length(levels(object$data[[attr(object$data,"algorithm")]])), 
       " out of ", 
       length(levels(object$fulldata[[attr(object$data,"algorithm")]])), 
       " algorithms visualized.\n")
 }
 ```
 
 
 
 Algorithms are ordered according to the following chosen ranking scheme:
 
 ```{r,results='asis'}
 a=(  lapply(object$FUN.list,function(x) {
                if (!is.character(x)) return(paste0("aggregate using function ",
                                                    paste(gsub("UseMethod",
                                                               "",
                                                               deparse(functionBody(x))),
                                                          collapse=" "))
                                             )
                else if (x=="rank") return(x)
                else return(paste0("aggregate using function ",x))
        })
      )
 cat("&nbsp; &nbsp; *",paste0(a,collapse=" then "),"*",sep="")
 
 
 if (is.character(object$FUN.list[[1]]) && object$FUN.list[[1]]=="significance") cat("\n\n Column 'prop.sign' is equal to the number of pairwise significant test results for a given algorithm divided by the number of algorithms.")
 ```
 
 Ranking list:
 
 ```{r}
 print(object)
 ```
 
 
 
 
 
 
 
 # Visualization of raw assessment data
 
 ## Dot- and boxplot
 
 *Dot- and boxplots* for visualizing raw assessment data separately for each algorithm. Boxplots representing descriptive statistics over all test cases (median, quartiles and outliers) are combined with horizontally jittered dots representing individual test cases.
 \bigskip
 
 ```{r boxplots}
 boxplot(object,size=.8)+xlab("Algorithm")+ylab("Metric value")
 
 ```
 
 
 
 ## Podium plot
 *Podium plots* (see also Eugster et al, 2008) for visualizing raw assessment data. Upper part (spaghetti plot): Participating algorithms are color-coded, and each colored dot in the plot represents a metric value achieved with the respective algorithm. The actual metric value is encoded by the y-axis. Each podium (here: $p$=`r length(ordering)`) represents one possible rank, ordered from best (1) to last (here: `r length(ordering)`). The assignment of metric values (i.e. colored dots) to one of the podiums is based on the rank that the respective algorithm achieved on the corresponding test case. Note that the plot part above each podium place is further subdivided into $p$ "columns", where each column represents one participating algorithm (here: $p=$ `r length(ordering)`).  Dots corresponding to identical test cases are connected by a line, leading to the shown spaghetti structure. Lower part: Bar charts represent the relative frequency for each algorithm to achieve the rank encoded by the podium place. 
 \bigskip
 
 
 ```{r ,eval=T,fig.width=12, fig.height=6,include=FALSE}
 plot.new()
 algs=levels(challenge_single[[attr(challenge_single,"algorithm")]])
 
 l=legend("topright", 
          paste0(1:length(algs),": ",algs), 
          lwd = 1, cex=1.4,seg.len=1.1,
          title="Rank: Alg.",
          plot=F) 
 w <- grconvertX(l$rect$w, to='ndc') - grconvertX(0, to='ndc')
 h<- grconvertY(l$rect$h, to='ndc') - grconvertY(0, to='ndc')
 addy=max(grconvertY(l$rect$h,"user","inches"),6)
 ```
 
 
 ```{r podium,eval=T,fig.width=12, fig.height=addy}
 op<-par(pin=c(par()$pin[1],6),
         omd=c(0, 1-w, 0, 1),
         mar=c(par('mar')[1:3], 0)+c(-.5,0.5,-3.3,0),
         cex.axis=1.5,
         cex.lab=1.5,
         cex.main=1.7)
 oh=grconvertY(l$rect$h,"user","lines")-grconvertY(6,"inches","lines")
 if (oh>0) par(oma=c(oh,0,0,0))
 
 set.seed(38)
 podium(object, 
        col=cols,
        lines.show = T,lines.alpha = .4,
        dots.cex=.9,ylab="Metric value",layout.heights=c(1,.35),
        legendfn = function(algs, cols) {
          legend(par('usr')[2], 
                 par('usr')[4], 
                 xpd=NA, 
                 paste0(1:length(algs),": ",algs), 
                 lwd = 1, 
                 col =  cols,
                 bg = NA,
                 cex=1.4,
                 seg.len=1.1,
                 title="Rank: Alg.") 
         }
         )
 par(op)
 ```
 
 
 ## Ranking heatmap
 *Ranking heatmaps* for visualizing raw assessment data. Each cell $\left( i, A_j \right)$ shows the absolute frequency of test cases in which algorithm $A_j$ achieved rank $i$.
 
 \bigskip
 
 ```{r rankingHeatmap,fig.width=9, fig.height=9,out.width='70%'}
 rankingHeatmap(object)
 ```
 
 
 
 # Visualization of ranking stability
 
 
 
 
 
 
 
 ## *Significance maps* for visualizing ranking stability based on statistical significance
 
 *Significance maps* depict incidence matrices of pairwise significant test results for the one-sided Wilcoxon signed rank test at a 5\% significance level, with adjustment for multiple testing according to Holm. Yellow shading indicates that metric values of the algorithm on the x-axis were significantly superior to those of the algorithm on the y-axis; blue shading indicates no significant difference.
 \bigskip
 
 ```{r significancemap,fig.width=7, fig.height=6}
 print(significanceMap(object,alpha=0.05,p.adjust.method="holm")
         )
 ```
 
 
 
 
 <!-- \subsubsection{Hasse diagram} -->
 
 <!-- ```{r single_stability_significance_hasse, fig.height=19} -->
 <!-- plot(relensemble) -->
 <!-- ``` -->
 
 
 
 
 ## Ranking robustness with respect to ranking methods
 *Line plots* for visualizing ranking robustness across different ranking methods. Each algorithm is represented by one colored line. For each ranking method encoded on the x-axis, the height of the line represents the corresponding rank. Horizontal lines indicate identical ranks for all methods.
 
 \bigskip
 
 ```{r lineplot,fig.width=7,fig.height = 5}
 methodsplot(object )+scale_color_manual(values=cols)
 ```
 
 
 
 
 
 # Reference
 
 
 Wiesenfarth, M., Reinke, A., Landman, B.A., Cardoso, M.J., Maier-Hein, L. and Kopp-Schneider, A. (2019). Methods and open-source toolkit for analyzing and visualizing challenge results. *arXiv preprint arXiv:1910.05121*
 
 
 Eugster, M.J.A., Hothorn, T. and Leisch, F. (2008). Exploratory and inferential analysis of benchmark experiments. Technical Report 30, Institut fuer Statistik, Ludwig-Maximilians-Universitaet Muenchen, Germany. http://epub.ub.uni-muenchen.de/4134/