It's important to mention that the subset of algorithms should be drawn from the final ranking to avoid wrong results: if bootstrapping is to be performed, create the subset from the bootstrapped ranking, not from the initial ranking that is passed into the bootstrapping.
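The order of operations can be sketched as follows (a minimal example; function and argument names such as `as.challenge`, `aggregateThenRank`, `bootstrap`, and the `top` argument of `subset` are assumed from the challengeR README and may differ in other versions):

```r
# Hedged sketch of "bootstrap first, subset second" with the challengeR API.
library(challengeR)

set.seed(1)
data <- data.frame(
  alg_name = rep(paste0("A", 1:3), each = 10),  # 3 algorithms
  case     = rep(1:10, times = 3),              # 10 cases each
  value    = runif(30)                          # metric values
)

challenge <- as.challenge(data, algorithm = "alg_name", case = "case",
                          value = "value", smallBetter = FALSE)
ranking <- challenge %>% aggregateThenRank(FUN = mean, ties.method = "min")

# Correct order: bootstrap the full ranking first, then subset the result.
ranking_bootstrapped <- ranking %>% bootstrap(nboot = 100)
ranking_top <- subset(ranking_bootstrapped, top = 2)

# Wrong order: subsetting the initial ranking before bootstrapping would
# resample an already-truncated field and bias the resulting ranking.
```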
Ok, weird. For me it says: "The top 5 out of 5 algorithms are considered."
- This is on purpose as I wanted to avoid further nested ifs, but can be discussed.
- This was the only variable where I found all algorithm factors. How can they be accessed now? fulldata is not working anymore after your latest changes.
Are you using the latest develop branch? There everything works for me.
Wed, Sep 23
This issue is fixed in T27677.
Mon, Sep 21
In section 3.2.1 "Visualizing bootstrap results", two visualization methods are mentioned, but only one is contained in the report: "To investigate which tasks separate algorithms well (i.e., lead to a stable ranking), two visualization methods […]"
The plots can already be limited to the top x performing algorithms. This can be reused in that case. Then the question is how many legend items are we going to "guarantee" appearing nicely?
Fri, Sep 18
The 19 algorithms are shown correctly. The strategy for more algorithms is discussed in T27748.
It works with the vector. The generated JPEG and PNG files have quite a low resolution and as such are not really reusable (e.g., in a presentation slide). Can you increase the resolution?
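For raster formats, the resolution has to be requested explicitly when the device is opened; a minimal sketch (standard ggplot2/grDevices calls, not the report's actual export code):

```r
library(ggplot2)

p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()

# ggplot2: dpi controls the pixel density of PNG/JPEG output,
# so 300 dpi yields a file that survives scaling in a slide.
ggsave("plot.png", p, width = 6, height = 4, dpi = 300)

# Base graphics equivalent: size in inches plus res (pixels per inch).
png("plot_base.png", width = 6, height = 4, units = "in", res = 300)
plot(mtcars$wt, mtcars$mpg)
dev.off()
```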
The warning is raised because the label is defined when bootstrapping is used but also referenced when bootstrapping is not used.
Thu, Sep 17
Almost all suggestions from Lena can be realized, apart from listing the metrics in the metadata, which we don't know.
Is it not possible to create a plot first on a canvas that contains everything and as a second step add the scaled plot to the report?
Wed, Sep 16
I like the proposal!
"0 missing cases have been found in the data set. However, performance of not all algorithms has been observed for all cases. Therefor, missings have been inserted in the following cases:"
Thu, Sep 10
I checked the report and I think the handling of missings is not clear: it first says that no observations are missing, and then that algorithm performances are missing. The value of the replacement should not appear in the table.
The task is added as NA for both the specified task name and the dummy task name.
Tests will be fixed in T27694.
Wed, Sep 9
The tests in test-rankThenAggregate.R are failing because the expectation is "rank_FUN" but should be "rank_mean" or "rank_median" respectively after the changes. This will be fixed in T27694.
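The renamed expectation can be illustrated with a small self-contained sketch (the helper `agg_name` is hypothetical and only mimics how the column name is derived from the aggregation function; the real structure in test-rankThenAggregate.R may differ):

```r
library(testthat)

# Hypothetical helper: derive the rank column name from the aggregation
# function passed in, e.g. mean -> "rank_mean", median -> "rank_median".
agg_name <- function(FUN) paste0("rank_", deparse(substitute(FUN)))

test_that("rank column is named after the aggregation function", {
  # After the changes the expectation must be the concrete function name,
  # not the generic placeholder "rank_FUN".
  expect_equal(agg_name(mean),   "rank_mean")
  expect_equal(agg_name(median), "rank_median")
})
```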
Fri, Sep 4
Tue, Sep 1
Mon, Aug 31
@reinkea Do you have a dataset containing more algorithms?
Jun 22 2020
The error mentioned in the comment does not occur anymore in the latest develop branch.