
[Checklist] [Test data] Define publicly available user test data
Closed, Resolved (Public)

Description

We want to publicly provide test data that can be used to perform the manual user tests defined in the GUI checklists.
This task is intended to define the data requirements for specific tests / checklists, to collect candidate data, and to discuss its suitability.

Event Timeline

kalali triaged this task as Normal priority. May 6 2021, 2:11 PM
kalali created this task.
kalali moved this task from Backlog to Cycle on the MITK (v2021.10) board.
kalali added a parent task: Restricted Maniphest Task.

I've started a draft for a message to send around, found here if you want to have a look or add something: https://hub.dkfz.de/s/HzpkipeWRkq3K8K

So far, I have not heard back from anybody about the message I sent around. I plan to send another reminder soon, in case some people just forgot.

What is the status here? And what was your impression during the last release cycle: is it necessary to define publicly available user test data, or did the approach of having checklist testers use their own data work?
Do we need to explicitly mention test data in the checklists, or is it sufficient to provide examples of 2D, 3D, 4D data and other required data types?

At least I remember that we had problems when specific data was required, e.g. surfaces or contour models.

Status-wise, nothing has really changed since we sent out the messages a while back.
Regarding the last release cycle: my impression was that it would help to point testers more directly to suitable test data. People who have suitable data themselves can still use that, but those who don't are blocked from testing if no data is provided. As I see it, we could achieve this by either

  • naming suitable provided test data explicitly in the checklists
  • creating a reference document that lists data types and respective test data
  • restructuring / renaming files in MITK-data to make it clearer what type they are

I think option 2 would be pretty simple to implement. Unlike option 1, which directly recommends certain test data, it probably would not reduce the likelihood of people using their own data when possible. It would, however, add another layer of complexity, i.e. another file to keep track of (for us as well as for the testers).
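To illustrate option 2, here is a minimal sketch of what such a reference document could look like. The file names and notes are hypothetical placeholders, not a statement about the actual contents of MITK-data:

```
# Test data reference (hypothetical example)

| Data type     | Example file (MITK-data) | Notes                        |
| ------------- | ------------------------ | ---------------------------- |
| 2D image      | example-2d.png           | grayscale, single slice      |
| 3D image      | example-3d.nrrd          | CT-like volume               |
| 3D+t image    | example-3d+t.nrrd        | for 4D / time-resolved tests |
| Surface       | example-surface.stl      | closed surface mesh          |
| Contour model | (to be provided)         | currently missing, see above |
```

One document like this per release cycle would give testers a single lookup point without hard-wiring specific files into each checklist.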

TODOs:

  • Reference all "external" data in README files
  • In particular, check the DICOM data for whether we have consent and whether it is really anonymized with respect to DICOM tags etc. (a sketch of such a check follows below)
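A minimal sketch of how such a tag check could look, assuming Python with the pydicom package. The list of identifying tags and the directory name are illustrative assumptions, not an exhaustive de-identification profile; a real check should follow the confidentiality profile in DICOM PS3.15 Annex E:

```python
from pathlib import Path

import pydicom

# Hypothetical, non-exhaustive list of tags that commonly carry identifying data.
IDENTIFYING_KEYWORDS = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
    "InstitutionName",
    "OperatorsName",
]

def find_identifying_tags(dicom_dir: str) -> None:
    """Print every file/tag pair where an identifying tag has a non-empty value."""
    # Note: DICOM files do not always carry a .dcm extension; adjust the glob
    # pattern to the actual layout of the data repository.
    for path in sorted(Path(dicom_dir).rglob("*.dcm")):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, faster
        for keyword in IDENTIFYING_KEYWORDS:
            value = ds.get(keyword)
            if value:  # empty or absent tags are fine; non-empty ones need review
                print(f"{path}: {keyword} = {value!r}")

if __name__ == "__main__":
    find_identifying_tags("MITK-data")  # directory name is an assumption
```

Any file reported by such a script would still need manual review before the data can be published.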