@eisenman please close when finished
Aug 31 2020
if at least one quarter of cases have a duplicate, the error message is extended by "Or are you considering a multi-task challenge and forgot to specify argument 'by'?"
Otherwise the error message is now further improved, e.g. if it is a single-task challenge, the error message no longer mentions "in task dummyTask".
Example error message:
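A minimal sketch (plain R, with a hypothetical helper and column name, not the package's actual internals) of how such a conditional hint could be appended:

```r
# Hypothetical sketch: extend the duplicate-cases error with a hint about 'by'
# when at least a quarter of the cases are duplicated.
check_cases_unique <- function(data, case = "case") {
  dup_cases <- unique(data[[case]][duplicated(data[[case]])])
  frac_dup  <- length(dup_cases) / length(unique(data[[case]]))
  if (length(dup_cases) > 0) {
    msg <- "Case identifiers are not unique."
    if (frac_dup >= 0.25) {
      msg <- paste(msg,
        "Or are you considering a multi-task challenge and forgot to specify argument 'by'?")
    }
    stop(msg, call. = FALSE)
  }
  invisible(TRUE)
}
```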
@eisenman I don't get this warning. Could you give more details or an example? Or is this already solved/not relevant anymore?
yes this has historical reasons (largeBetter was called "inverseOrder" in the beginning, which was horrible ;-))
smallBetter would have the advantage that the user interface would not change; modifying as.challenge to use largeBetter would mean that existing scripts lead to wrong results, which is problematic.
could one of you please take care of this one?
see comment in T27241
@aguilera
regarding issue 1: the Rmd should be changed if we want to keep automatically inserting the current package version at the beginning; otherwise we could work only with the md file and drop the Rmd file. The readme.md file has to stay in the root directory so that GitHub finds it. If the .Rmd file is also in the root, the easiest thing is to compile it so that the output ends up in the same directory. I don't remember, why do we need to change this?
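A hedged sketch of that setup (file layout and chunk contents assumed, not checked against the current repo): keep README.Rmd in the package root and render it there, so the automatic version insertion is kept and GitHub still finds readme.md in the root.

```r
# Inside README.Rmd the version line could be an inline chunk, e.g.:
#   Current version: `r packageVersion("challengeR")`
# Rendering in the package root writes readme.md right next to the .Rmd,
# which is where GitHub expects it.
rmarkdown::render("README.Rmd",
                  output_format = "github_document",
                  output_file   = "readme.md")
```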
@eisenman can a larger number of algorithms be checked? what's your opinion? please close otherwise.
solved (at least for 19 tasks) in merge for T27474
items 3 and 4 solved (see separate tasks).
solved at least for 19 algorithms.
solved at least for 19 algorithms (MSD data).
maybe introduce some utility function that takes care of it
Aug 27 2020
Aug 7 2020
should work now, tests successful.
However, the warning is
Performance of not all algorithms is observed for all cases in task 'dummyTask'
despite this being a single-task challenge. see also separate task #T27657
- compareRanks() allows comparing 2 ranking lists and computing Kendall's tau (see the sketch after this list); I would leave it in the package
- benchmarkUtils allows linking with the benchmark package (CRAN archived), which has some more features but is not maintained anymore; might be dropped
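Not the package's compareRanks(), just the underlying idea illustrated with base R: Kendall's tau between two ranking lists.

```r
# Two rankings of the same algorithms (made-up values)
rank_a <- c(algoA = 1, algoB = 2, algoC = 3, algoD = 4)
rank_b <- c(algoA = 2, algoB = 1, algoC = 3, algoD = 4)

# Kendall's tau between the two ranking lists
cor(rank_a, rank_b[names(rank_a)], method = "kendall")
```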
Aug 6 2020
can be closed?
the complication arises in handling the automatic sizing of the plot, which is why the first chunk reads out the width without plotting and the second chunk adapts the figure width (see the sketch below)
the function is network() and is already in the report
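A hedged sketch of that two-chunk pattern (chunk names and the sizing rule are made up; the real report derives the width from the network layout). It works because knitr evaluates chunk options at render time, so the second chunk can reuse a width computed in the first.

```r
# ```{r network-width, include=FALSE}    <- first chunk: no plotting, only the width
n_algorithms <- 10
w <- max(5, 0.5 * n_algorithms)          # dummy sizing rule, in inches
# ```
#
# ```{r network-plot, fig.width=w, fig.height=w}    <- second chunk: adapted figure size
plot(seq_len(n_algorithms), pch = 16)    # stand-in for the actual network() call
# ```
```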
May 14 2020
Something important, I think, is also that all plot functions work outside of the report as intended (it is desirable that users can also create their own reports). This includes choosing the correct function and giving an error if a function does not work with single tasks (e.g., a sketch of such a guard follows after this comment).
Thanks.
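Regarding the error for functions that only make sense for multi-task challenges, a minimal sketch of such a guard (the function name and the way tasks are passed are assumptions, not the package's actual code):

```r
# Hypothetical guard: refuse to run a multi-task-only plot on a single-task challenge.
stop_if_single_task <- function(task_names, fun_name) {
  if (length(unique(task_names)) < 2) {
    stop(fun_name, "() is only available for multi-task challenges.", call. = FALSE)
  }
  invisible(TRUE)
}

stop_if_single_task(c("T1", "T2"), "stability")   # passes silently
# stop_if_single_task("dummyTask", "stability")   # would throw the error
```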
In a single-task situation, in practice a task will not have a name, so there should be no title and no need to set a task name, I think...
I'll have a look at it, but please give me some time
I think this should be well thought through before putting into action
it's not because of missing test cases. It is because in certain situations no Kendall's tau can be computed. Don't use na.treat for this.
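For illustration, one such situation in base R: Kendall's tau is undefined when one of the rankings has zero variance (all values tied), and no NA treatment can change that.

```r
# Returns NA and warns about zero standard deviation
cor(c(1, 1, 1), c(1, 2, 3), method = "kendall")
```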
May 11 2020
I agree, it would be nice if the actual number of NAs were reported (together with the na.treat method) and not the number after na.treat, which is then obviously 0.
the layout (circle sizes, distances, font sizes, size of the plot) needs to be automatically optimized, which I have failed to do so far
Did you take care that FUN="mean" and FUN=mean are handled differently (in the former case it is a name, in the latter case it is a function)?
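One way to handle both cases, as a minimal sketch (the wrapper function is hypothetical): keep the name for labelling, then normalize to a function with match.fun().

```r
aggregate_with_label <- function(x, FUN = mean) {
  # Keep a printable name: works for FUN = "mean" (a character name)
  # as well as FUN = mean (the function itself).
  fun_label <- if (is.character(FUN)) FUN else deparse(substitute(FUN))
  FUN <- match.fun(FUN)   # normalizes both variants to the actual function
  list(label = fun_label, value = FUN(x))
}

aggregate_with_label(1:10, FUN = "mean")
aggregate_with_label(1:10, FUN = mean)
```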
might indeed reduce complexity, however,
- many functions would need to be adapted, requiring some care
- behavior is sometimes different on purpose, e.g. there are plot titles with the task name in multi-task challenges while there are none in single-task challenges
- many visualizations apply only to multi-task challenges, and trying to use them in single-task challenges throws an error; this would need to be handled
- also, reports for multi-task challenges contain more visualizations, which would be uninformative for single-task challenges (this could however be handled by checking the number of tasks internally)
- a workaround would be necessary: adding a task column to single-task challenges (with the same label in every row); see the sketch after this list
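A hedged sketch of that workaround (data and column names are made up; the as.challenge() arguments are as I recall them from the package README):

```r
library(challengeR)   # assumes the package is installed

single_task_data <- data.frame(
  alg_name = rep(c("A", "B"), each = 3),
  case     = rep(paste0("case", 1:3), times = 2),
  value    = c(0.80, 0.70, 0.90, 0.60, 0.75, 0.85)
)
single_task_data$task <- "dummyTask"   # same label in every row

challenge <- as.challenge(single_task_data, by = "task",
                          algorithm = "alg_name", case = "case",
                          value = "value", smallBetter = FALSE)
```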
Apr 29 2020
ok
Apr 27 2020
@eisenman: could you please check whether there is something to be done?
can this be closed?
@eisenman: did you take over solving this task? Or should I still do something here?
Apr 20 2020
This is strange. This should normally not happen unless there is a strange package interfering. It is helpful to run sessionInfo() or the slightly more detailed devtools::session_info() after a bug has happened and report that in the task
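For reference, the two calls mentioned (the second needs devtools installed):

```r
# Base R session summary
sessionInfo()

# More detailed variant, including package versions and sources
devtools::session_info()
```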
Apr 17 2020
Apr 3 2020
Apr 2 2020
Mar 27 2020
There should be a team section where all of you are acknowledged
in my opinion this is a desirable feature