Phabricator | MITK

3d+t image loaded with wrong timegeometry
Open, High, Public

Description

The data set is DICOM. The data can be provided.

max TimeBounds: 
 Step 0: 22 ms
 Step 1: 45 ms
 Step 2: 67 ms
 Step 3: 89 ms
 Step 4: 112 ms
 Step 5: 134 ms
 Step 6: 157 ms
 Step 7: 180 ms
 Step 8: 202 ms
 Step 9: 227 ms
 Step 10: 250 ms
 Step 11: 272 ms
 Step 12: 295 ms
 Step 13: 317 ms
 Step 14: 339 ms
 Step 15: 362 ms
 Step 16: 384 ms
 Step 17: 407 ms
 Step 18: 430 ms
 Step 19: 452 ms
 Step 20: 475 ms
 Step 21: 497 ms
 Step 22: 519 ms
 Step 23: 542 ms
 Step 24: 564 ms
 Step 25: 587 ms
 Step 26: 610 ms
 Step 27: 632 ms
 Step 28: 655 ms
 Step 29: 677 ms
 Step 30: 700 ms
 Step 31: 7.38474e+06 ms

Step 31 should be 720 ms. Which DICOM tag is used to get the time geometry information?

The large timestep leads to very long loading times: T24565
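For reference, the outlier can be sanity-checked with a few lines of Python: the ~22.6 ms spacing of steps 0-30 puts the expected step 31 around 720 ms, while the reported value corresponds to roughly two hours (the values below are simply copied from the list above):

```python
# Max time bounds reported above, in ms (steps 0-31).
bounds = [22, 45, 67, 89, 112, 134, 157, 180, 202, 227,
          250, 272, 295, 317, 339, 362, 384, 407, 430, 452,
          475, 497, 519, 542, 564, 587, 610, 632, 655, 677,
          700, 7.38474e6]

# Average spacing over the regular steps 0-30.
spacing = (bounds[30] - bounds[0]) / 30
expected_31 = bounds[30] + spacing

print(f"average spacing:  {spacing:.1f} ms")            # ~22.6 ms
print(f"expected step 31: {expected_31:.1f} ms")        # ~722.6 ms
print(f"actual step 31:   {bounds[31] / 3.6e6:.2f} h")  # ~2.05 hours
```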

Event Timeline

hentsch created this task. Apr 3 2018, 4:20 PM
hentsch triaged this task as High priority.
hentsch added a project: Restricted Project.
hentsch added a parent task: Restricted Maniphest Task. Apr 5 2018, 5:10 PM
floca added a comment. Apr 6 2018, 5:48 PM

I have checked the data set provided.

The current behavior of the code is comprehensible. The acquisition time between the first slice of a frame and the last slice of the same frame differs by about 2 hours. This is what you see as the last large max bound. So currently the data is loaded "interleaved", so to speak. The reason why you don't see this in the workbench is that the time geometry has non-overlapping timesteps, so the information gets dropped when we generate the time geometry. (We kept the time steps non-overlapping because we weren't sure whether changing this could cause side effects in old code, and the cost/benefit ratio did not seem worth it.)
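A minimal sketch of the clamping behavior described above (a hypothetical simplification, not the actual mitk::TimeGeometry code): if each frame's bounds overlap the next frame's, clamping every upper bound to the next lower bound drops the long acquisition span everywhere except the last step, which is exactly the pattern in the list above.

```python
def make_non_overlapping(bounds):
    """Clamp each step's upper bound to the next step's lower bound.

    Hypothetical simplification of the behavior described above; the
    actual MITK time-geometry code is more involved."""
    clamped = []
    for i, (lo, hi) in enumerate(bounds):
        if i + 1 < len(bounds):
            hi = min(hi, bounds[i + 1][0])
        clamped.append((lo, hi))
    return clamped

# Interleaved data: every frame starts within ~700 ms but spans ~2 h.
overlapping = [(0, 7_384_740), (22, 7_384_762), (45, 7_384_785)]
print(make_non_overlapping(overlapping))
# [(0, 22), (22, 45), (45, 7384785)] -- only the last step keeps the 2 h span
```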

So this results in three options:

  1. The data is not interleaved, so one frame after another. Currently the data is loaded with all frames starting within the first 700 ms but taking around 2 hours. -> Then we need another reader configuration, because the current sorting/splitting is not correct.
  2. The data is interleaved, but we only need the difference between the first slices (~22 ms). -> We can keep everything as it is but ignore the max bound.
  3. The data is interleaved and we only need the absolute time per slice (like for PET SUV correction). -> We have some work to do in your plugin :(.

You should first clarify how the data should be structured after correct loading.
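Regarding the question above about which DICOM tag drives the time geometry: per-slice time tags such as Acquisition Time (0008,0032) or Trigger Time (0018,1060) are candidates, but which one the reader actually consults would need to be checked in the reader/sorter configuration. A DICOM TM value ("HHMMSS.FFFFFF") converts to milliseconds like this (an illustrative sketch, not MITK code):

```python
def dicom_tm_to_ms(tm: str) -> float:
    """Convert a DICOM TM value ("HHMMSS.FFFFFF", trailing components
    optional) to milliseconds since midnight. Illustrative only."""
    hh = int(tm[0:2])
    mm = int(tm[2:4]) if len(tm) >= 4 else 0
    ss = float(tm[4:]) if len(tm) > 4 else 0.0
    return ((hh * 60 + mm) * 60 + ss) * 1000.0

# Two slices acquired two hours apart yield a 7.2e6 ms difference,
# the same order of magnitude as the stray step 31 above.
print(dicom_tm_to_ms("140000.000000") - dicom_tm_to_ms("120000.000000"))
# 7200000.0
```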