Sep 30 2020
The idea is to raise an error if the user tries to extract a task that is not contained in the set of tasks.
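A minimal sketch of such a check in base R (the function and argument names here are illustrative, not the actual package API):

```r
# Illustrative sketch: reject task names that are not part of the challenge.
# 'extractTask' is a hypothetical helper, not an existing challengeR function.
extractTask <- function(challenge, taskName) {
  availableTasks <- names(challenge)
  if (!(taskName %in% availableTasks)) {
    stop("Task '", taskName, "' is not contained in the set of tasks: ",
         paste(availableTasks, collapse = ", "))
  }
  challenge[[taskName]]
}
```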
In T27282#211185, @eisenman wrote:
The function testThenRank has an argument FUN that is set to "significance" internally. The value that is actually passed is not used.
@wiesenfa Should "significance" be the default value in the signature already or should the argument be removed?
Sep 29 2020
The function testThenRank has an argument FUN that is set to "significance" internally. The value that is actually passed is not used.
@wiesenfa Should "significance" be the default value in the signature already or should the argument be removed?
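If the passed value is never used, the options are to drop FUN from the signature or to make the internally used value the documented default. A hedged sketch of the second option (the real signature of testThenRank may differ; only the FUN handling is shown):

```r
# Sketch: make the internally used value the documented default,
# instead of silently overriding whatever the caller passes.
testThenRank <- function(object, ..., FUN = "significance") {
  # ... use FUN here instead of hard-coding "significance" internally ...
}
```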
Sep 28 2020
We talked about this in the meeting this morning. It makes sense to proceed stepwise: first, integrate consistent descriptions and Lena's feedback (T27677) for release v1.0. We should not reuse text from the paper until it is accepted. Once it is accepted, we can still decide on revising the report text (T27420).
Sep 25 2020
Whitespace should be trimmed here as well, e.g., if a space character is entered and the report is generated, this error also occurs.
The same applies to "Introduce the number of samples and click >>".
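Trimming could be done with base R's trimws() before validating the input (the variable name below is made up for illustration):

```r
# Sketch: treat whitespace-only input the same as empty input.
nSamplesInput <- "  "              # hypothetical raw value from the text field
nSamplesInput <- trimws(nSamplesInput)
if (nSamplesInput == "") {
  stop("Please enter the number of samples.")
}
```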
In my opinion, this sentence can be removed altogether. At this stage, the user should already know how to navigate, so it does not add any valuable information.
I agree!
Sep 24 2020
It is important to mention that the subset of algorithms should be drawn from the final ranking to avoid wrong results. So if bootstrapping is to be performed, create the subset from the bootstrapped ranking, not from the initial ranking that is passed to bootstrapping.
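In other words, the order of operations matters. A sketch of the intended call order (function and argument names follow the discussion; details may differ from the actual package API):

```r
# Correct: subset AFTER bootstrapping, from the bootstrapped ranking.
rankingBoot    <- bootstrap(ranking, nboot = 1000)
rankingBootTop <- subset(rankingBoot, top = 5)

# Wrong: subsetting the initial ranking first and bootstrapping the subset
# afterwards can yield misleading results.
```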
In T27774#210754, @wiesenfa wrote:
In T27774#210741, @eisenman wrote:
Ok, weird. For me it says: "The top 5 out of 5 algorithms are considered."

Your result is strange, given that the wrong things are actually being counted.
In T27774#210731, @wiesenfa wrote:
- I think it will be confusing if it says "top 5 out of 5 algorithms"; I would add an if condition.
- Which changes? as.challenge does not have a full data attribute. The changes in subset were done by you; see my comments in T27685.
Ok, weird. For me it says: "The top 5 out of 5 algorithms are considered."
In T27774#210733, @wiesenfa wrote:
Currently it says "The top 0 out of 0 algorithms are considered." in my latest example.
- This is on purpose as I wanted to avoid further nested ifs, but can be discussed.
- This was the only variable where I found all algorithm factors. How can they be accessed now? fulldata is not working anymore after your latest changes.
Are you using the latest develop branch? There everything works for me.
Sep 23 2020
This issue is fixed in T27677.
In T27677#210406, @eisenman wrote:
In section 3.2.1 "Visualizing bootstrap results", two visualization methods are mentioned, but only one is contained in the report: "To investigate which tasks separate algorithms well (i.e., lead to a stable ranking), two visualization methods are recommended."
Do we add a second plot or adapt the text?
Sep 21 2020
In section 3.2.1 "Visualizing bootstrap results", two visualization methods are mentioned, but only one is contained in the report: "To investigate which tasks separate algorithms well (i.e., lead to a stable ranking), two visualization methods are recommended."
The plots can already be limited to the top x performing algorithms. This can be reused in that case. Then the question is how many legend items we are going to "guarantee" to appear nicely.
Sep 18 2020
The 19 algorithms are shown correctly. The strategy for more algorithms is discussed in T27748.
The 19 algorithms are shown correctly.
It works with the vector. The generated JPEG and PNG files have quite a low resolution and are thus not really reusable (e.g., in a presentation slide). Can you increase the resolution?
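With base R graphics devices, the resolution can be raised via the res argument of png() or jpeg(), scaling the pixel dimensions accordingly (the filename and values below are just an example):

```r
# Example: write a 300 dpi PNG instead of the default 72 dpi.
# Width and height are in pixels, so they must grow with the resolution
# to keep the same physical plot size (here 8 x 6 inches).
png("ranking_plot.png", width = 8 * 300, height = 6 * 300, res = 300)
plot(1:10)  # placeholder for the actual ranking plot
dev.off()
```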
The warning is raised because the label is defined when bootstrapping is used but also referenced when bootstrapping is not used.
Sep 17 2020
Almost all of Lena's suggestions can be implemented, apart from listing the metrics in the metadata, which we don't know.
Is it not possible to first create a plot on a canvas that contains everything and, as a second step, add the scaled plot to the report?