Dec 18 2020
Dec 17 2020
The platform is reported by Sys.info(); Linux and Mac systems behave the same, while Windows may differ.
Otherwise you could use a platform-dependent expect_equal(), but I think your idea is sufficient.
I would at least mention it there, though, because bootstrapping may be time-consuming.
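If a platform-dependent comparison is ever needed, a minimal sketch using Sys.info() could look like the following (the `result`/`reference` objects and the tolerance values are placeholders, not from the package):

```r
# Minimal sketch of a platform-dependent comparison in a testthat test.
# `result` and `reference` are hypothetical objects under test.
library(testthat)

result    <- c(0.1 + 0.2, sqrt(2)^2)
reference <- c(0.3, 2)

if (Sys.info()[["sysname"]] == "Windows") {
  # Windows builds may differ slightly, so compare with a looser tolerance
  expect_equal(result, reference, tolerance = 1e-6)
} else {
  # Linux and macOS behave the same here
  expect_equal(result, reference, tolerance = 1e-8)
}
```

This keeps a single test file working across platforms instead of skipping the check on Windows entirely.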
I don't think it's necessary; ordering is applied to it in later stages anyway.
Otherwise, the ranking step (rank.aggregated.list()) could always do the ordering directly (then the test would check whether the ordering works). As mentioned before, I avoided such a change before the release because of the (very unlikely) risk of breaking something in the first level of the hierarchy.
I do see those on Mac as well.
testthat previously 2.3.2, now 3.0.1.
The issue is, however, not connected to testthat. I see that in the first two tests in aggregate-then-rank, the ordering in the data set is not used; the data are sorted instead.
Dec 14 2020
This seems not to fail on Windows machines, but it does on my Mac (and thus might also on Linux). Minor issue; defer to post-1.0.0.
Dec 10 2020
Dec 7 2020
Would defer to after the release.
Please close if ok.
added to documentation of as.challenge():
(arg annotator:) If multiple annotators annotated the test cases, a string specifying the name of the column that contains the annotator identifiers. Only applies to rank-then-aggregate. Use with caution: currently not tested.
podium() does have a different syntax for layouting (it uses base graphics, not ggplot2), but the vignette does not describe layouting for ggplot2 plots either.
report() does not know which type of consensus ranking method was used.
I added the sentence
"Consensus ranking according to mean ranks across tasks if method = "euclidean". In case of ties (equal ranks for multiple algorithms), the average rank is used, i.e. ties.method = "average"."
to the help for consensus() and would suggest also adding this to the vignette/README, but not to report(). Could one of you please take care of this?
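For illustration, the described tie handling can be reproduced with base R (the per-task ranks below are made-up example data, not from the package):

```r
# Consensus ranking by mean ranks across tasks, with ties resolved
# via ties.method = "average". Rows: algorithms; columns: tasks.
# The rank values are made-up example data.
ranks <- rbind(
  A = c(1, 2),
  B = c(2, 1),
  C = c(3, 3)
)

mean_ranks <- rowMeans(ranks)                      # A: 1.5, B: 1.5, C: 3
consensus  <- rank(mean_ranks, ties.method = "average")
print(consensus)
# A and B tie and both receive the average rank 1.5; C gets rank 3
```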
@eisenman do you want to do this with roxygen?
Kept select.if(), winner(), extract.workfolow() and compareRanks(), and removed everything that is no longer supported.
as.warehouse() (benchmarkUtils) is not exported; I recommend leaving it in, because it may come in handy in specific situations.
Dec 4 2020
What is the problem?
Within the scope of the vignette, it would simply be
The legend is now always at the bottom and adjusts dynamically to the number of algorithms and the length of the algorithm names.
Took me almost the day...
The maximum number of tasks/algorithms for the legend to appear on the right is now 20 in stacked frequency plots and line plots, respectively; otherwise, the legend is placed at the bottom of the plot. This could also be set to a lower number.
Only podium plots and line plots actually require algorithms in the legend; otherwise, the algorithm is identifiable from the x-axis or the facet label. These redundant legends are now removed.
Besides that, stacked frequency plots now have colored tasks and a corresponding legend.
Dec 3 2020
This works again for single-task visualizations. In case multiple plots (a list of plots) are created, %++% must be used instead of + for scale_*_() etc.
Mention this in the vignette.
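A minimal sketch of the idea behind %++% (this definition is illustrative only; the package's actual operator may differ):

```r
library(ggplot2)

# Illustrative operator: apply a ggplot component to every plot in a list.
# ggplot2's `+` works on a single plot, but not on a plain list of plots.
`%++%` <- function(plots, component) {
  lapply(plots, function(p) p + component)
}

plots <- list(
  ggplot(mtcars, aes(wt, mpg)) + geom_point(),
  ggplot(mtcars, aes(hp, mpg)) + geom_point()
)

# `+` would fail here; %++% adds the theme setting to each plot in the list
plots <- plots %++% theme(legend.position = "bottom")
```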
OK, it was just an idea. I thought this might help with the server issues, because people could run things themselves and still use the user interface.
can be closed or considered later
Nov 27 2020
Nov 26 2020
Nov 23 2020
Nov 12 2020
It has to be a vector of algorithm names in the ranking order;
see reportMultiple.Rmd: ordering_consensus=names(params$consensus)
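For illustration (the consensus values and algorithm names below are made up, not from reportMultiple.Rmd):

```r
# Hypothetical consensus ranking: a named vector already sorted by rank.
consensus <- c(algoB = 1, algoA = 2, algoC = 3)

# As used in reportMultiple.Rmd: the ordering is the vector of names,
# i.e. the algorithm names in consensus-ranking order.
ordering_consensus <- names(consensus)
print(ordering_consensus)
# "algoB" "algoA" "algoC"
```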