Mar 31 2023
Or we just forbid parallelization on Windows... Parallelization of R on Windows is such a series of workarounds...
Using doRNG might be the best option; that version should work on any OS.
Oh I HATE it!
Could you please try (first installing package "doRNG" https://cran.r-project.org/web/packages/doRNG/index.html ):
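A minimal sketch of what a doRNG-based call could look like (the cluster size and the toy computation are placeholders, not the package's actual code):

library(doParallel)   # also attaches foreach and parallel
library(doRNG)

cl <- makeCluster(2)  # PSOCK cluster, works on Windows where forking is unavailable
registerDoParallel(cl)

set.seed(1)           # %dorng% makes the parallel RNG streams reproducible
res <- foreach(i = 1:4, .combine = c) %dorng% {
  rnorm(1)
}

stopCluster(cl)
res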
Mar 10 2023
Could someone please try this on Windows?
Oh, I hate it so much. I know the problem; only Windows is affected. Parallelization does not work with forking there, I keep forgetting this. I'll look for a solution on Windows.
May 23 2022
Thank you so much @aekavur! It helps a lot to finally understand the reason!
May 16 2022
If the output is NULL, object[[by]] is not a factor, i.e. class(object[[by]]) is "character"; in this case you need to use unique() and probably your solution.
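A hedged sketch of the distinction described above, assuming "the output" refers to levels(object[[by]]) (which is NULL for a character column); the toy data are made up:

object <- data.frame(algorithm = c("A", "B", "A"), stringsAsFactors = FALSE)  # toy data
by <- "algorithm"

algorithms <- if (is.factor(object[[by]])) {
  levels(object[[by]])    # factor column: levels are already defined
} else {
  unique(object[[by]])    # character column: fall back to unique()
}
algorithms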
May 13 2022
Thanks Emre!
That's a weird change. I didn't find any mention of it in the R changelog.
probably instead of
algorithms=factor(unique(object[[by]]))
it will be preferred
May 9 2022
From the change log for R 4.2.0
Feb 28 2022
Thanks Emre. That's problematic; confidence intervals are missing. Could you share a code file for testing with artificial data (ideally not with the report as output but the plot itself)? Then I will try to look into it. Or is this difficult for you?
Feb 24 2022
I think the solution is to consider rank not as continuous but as a factor (essentially a string).
That means first following
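A generic illustration of the rank-as-factor idea (this is not the specific code referred to above; the data and plot are made up):

library(ggplot2)

df <- data.frame(algorithm = LETTERS[1:7], rank = c(3, 1, 4, 2, 5, 7, 6))

# Converting rank to a factor gives a discrete axis, so every rank is labelled
# instead of being treated as a continuous variable
ggplot(df, aes(x = factor(rank), y = algorithm)) +
  geom_point() +
  xlab("rank")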
Feb 21 2022
Thanks Emre! This sounds like a lot of effort. Please give me some time to have a look at it.
Feb 14 2022
I guess overall it's a matter of taste.
The fully automatic one has several problems: in the case of the 30 algorithms, the scale starts with 0, which is not sensible. I'm not sure what happens with something like 27 or 17 algorithms (a number that isn't divisible by 5). In the case of the 7 algorithms it starts with 2, which I find a bit weird; I would expect a scale starting with 1. Thus, I would at least include the limits=c(1,max(...)) argument, which however, as said before, may lead to sequences like 1,7,13,... but maybe this is not so much of a problem.
If I remember correctly, this didn't work layout-wise for a large number of algorithms. Numbers will either overlap or need to get very small, or the size of the figure will need to be increased.
Try to test with something like 20 algorithms; how does the report look then?
What's the problem with 1,5,10,15,18? The scale isn't affected, so for me it wouldn't matter that the intervals aren't the same. In principle you could also omit the 18, i.e. use only 1,5,10,15. Instead of all integers, I would rather use the automatic choice.
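A small sketch of the two break choices being compared, using made-up data for 18 algorithms (this is not the package's plotting code):

library(ggplot2)

df <- data.frame(rank = 1:18, value = rnorm(18))

# Option 1: let ggplot2 choose the breaks automatically, but force the scale to start at 1
p1 <- ggplot(df, aes(rank, value)) +
  geom_point() +
  scale_x_continuous(limits = c(1, max(df$rank)))

# Option 2: explicit breaks 1, 5, 10, 15 (the maximum, 18, could be appended as well)
p2 <- ggplot(df, aes(rank, value)) +
  geom_point() +
  scale_x_continuous(breaks = c(1, seq(5, max(df$rank), by = 5)))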
Feb 11 2022
Not sure whether this is a good idea. Imagine a challenge with 18 algorithms: there will be only a 1 and an 18 and nothing in between, which may make it difficult to read. What do you think?
Could you try to replace "breaks" by "labels" in
Feb 7 2022
I guess na.treat is only needed for the line plot comparing to other ranking methods?
In this case, a message could be thrown when compiling the report, saying something like "line plot comparing ranking methods omitted since na.treat is not specified. Specify na.treat in as.challenge() if inclusion of the line plot is desired", and compilation of the report could still be allowed (excluding the line plot).
(Note that you can define na.treat both in as.challenge() and in the ranking functions.)
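A hedged sketch of the suggested behaviour; the check on an "na.treat" attribute is an assumption, only the message text follows the suggestion above:

object <- data.frame(value = c(0.8, NA, 0.9))   # toy challenge object (placeholder)
# attr(object, "na.treat") <- 0                 # uncomment to simulate na.treat being set

if (is.null(attr(object, "na.treat"))) {
  message("Line plot comparing ranking methods omitted since na.treat is not specified. ",
          "Specify na.treat in as.challenge() if inclusion of the line plot is desired.")
} else {
  # ... generate the line plot comparing ranking methods ...
}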
May 4 2021
Apr 26 2021
@eisenman the change in the develop branch has not been uploaded to GitHub; is this not automatically synchronized? So the user who reported the bug still has the same problem. It would be good to merge into master as well.
Apr 23 2021
now test case in test-report.R
Apr 22 2021
very simple fix in rankingHeatmap.challenge
@eisenman can this be merged into the master branch?
Apr 19 2021
Mar 3 2021
graph had only been used for networks; I guess version 1.62 is sufficient.
Jan 27 2021
Additional test in test-blobPlotStabilityByAlgorithm for the case "one task out of 3 tasks contains >1 test cases". In this case, bootstrap() gives a result for this remaining task and stability() only produces a plot with this task. @eisenman could you please check and close?
bootstrap() now gives an error if all tasks contain only 1 test case, and a message if some tasks contain only 1 test case (tasks with 1 test case are excluded from bootstrapping).
test-bootstrap.R contains tests. @eisenman could you please check the tests?
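A hedged sketch of the described checks; the data layout and counting logic are assumptions, only the error/message/exclusion rules come from the comments above:

data <- data.frame(task = c("T1", "T1", "T2", "T3"),
                   case = c("c1", "c2", "c1", "c1"))

# number of distinct test cases per task
n_cases <- sapply(split(data$case, data$task), function(x) length(unique(x)))

if (all(n_cases == 1)) {
  stop("Bootstrapping requires at least one task with more than one test case.")
} else if (any(n_cases == 1)) {
  message("Tasks with only one test case are excluded from bootstrapping: ",
          paste(names(n_cases)[n_cases == 1], collapse = ", "))
}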
Jan 25 2021
Task names (names(x$matlist)) had been used for facet titles instead of algorithm names (stored in "ordering").
Dec 18 2020
Dec 17 2020
The platform is output by Sys.info(); Linux and Mac systems are equal, and Windows may differ.
good idea!
Otherwise you could use a platform-dependent expect_equal, but I think your idea is sufficient.
I would at least mention it there, though, because bootstrapping may be time-consuming.
I don't think it's necessary; ordering is applied to it in later stages anyway.
Otherwise the ranking step (rank.aggregated.list()) could always do the ordering directly (then the test would check whether ordering is working). As mentioned before, I avoided such a change before the release because of the (very unlikely) risk of breaking something in the first-level hierarchy.
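A hedged sketch of the platform-dependent alternative; the computation and reference values are placeholders:

library(testthat)

reference_windows <- 2   # placeholder reference value
reference_unix    <- 2   # placeholder; Linux and macOS share a reference

test_that("result matches the platform-specific reference", {
  result   <- 1 + 1      # placeholder computation
  expected <- if (Sys.info()[["sysname"]] == "Windows") reference_windows else reference_unix
  expect_equal(result, expected)
})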
I do see those as well on Mac
R 4.0.2.
testthat previously 2.3.2, now 3.0.1.
The issue is, however, not connected to test_that. I see that in the first 2 tests in aggregate-then-rank, the ordering in the data set is not used but sorted.
Dec 14 2020
Seems to not fail on Windows machines but does on my Mac (and thus might also on Linux). Minor issue; defer to post-1.0.0.
Dec 10 2020
Dec 7 2020
would defer to after release
please close if ok
added to documentation of as.challenge():
(arg annotator:) If multiple annotators annotated the test cases, a string specifying the name of the column that contains the annotator identifiers. Only applies to rank-then-aggregate. Use with caution: Currently not tested.
added
podium(ranking)
to vignette.
podium() does have a different syntax for layout (it is not ggplot2 but base graphics), but the vignette does not describe layout for ggplot2 plots either.
report() does not know what type of consensus ranking method was used.
I added the sentence
"Consensus ranking according to mean ranks across tasks if method="euclidean" where in case of ties (equal ranks for multiple algorithms) the average rank is used, i.e. ties.method="average"."
to the help for consensus() and would suggest also adding this to the vignette/readme, but not to the report. Could one of you please take care of this?
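A hedged sketch of the mean-rank consensus described in that sentence; the per-task ranks are made up, and the actual consensus() implementation may differ:

ranks <- matrix(c(1, 2, 3,
                  2, 1, 3,
                  1, 3, 2),
                nrow = 3, byrow = TRUE,
                dimnames = list(paste0("task", 1:3), c("A", "B", "C")))

mean_ranks <- colMeans(ranks)                                    # mean rank per algorithm across tasks
consensus_ranking <- rank(mean_ranks, ties.method = "average")   # ties receive the average rank
consensus_ranking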
@eisenman do you want to do this with roxygen?