
US DICOM images are read as 3D volume
Open, NormalPublic

Description

Philips and Canon movies (see test data provided by super task T27554) can be loaded without error. However, they are displayed as a 3D volume (which is sort of correct).

Note: The provided data could not be correctly time-resolved, as all slices have the same AcquisitionDateTime and there is nothing to differentiate them, such as a trigger time. There is also no geometry information (spacing, frame of reference, origin!), so MITK IO cannot detect that it should split the data into different time points. Thus all slices are stacked according to their instance number.

If we wanted to load such data as 2D+t, we would at least need a new reader that splits by instance number and ignores the fact that there is no timing information (but how would we construct the time geometry?).
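To illustrate the splitting idea, here is a minimal sketch in plain C++ (all names are hypothetical and not actual MITK API): sort the slices by Instance Number and treat each slice as one time step of a 2D+t image.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical slice record; only the fields relevant for the split.
struct Slice
{
  int instanceNumber; // Instance Number (0020,0013)
  int timeStep;       // assigned time step in the resulting 2D+t image
};

// Sketch: order slices by Instance Number and assign one time step per
// slice, ignoring the missing timing information.
void SplitByInstanceNumber(std::vector<Slice>& slices)
{
  std::sort(slices.begin(), slices.end(),
            [](const Slice& a, const Slice& b) { return a.instanceNumber < b.instanceNumber; });
  for (std::size_t i = 0; i < slices.size(); ++i)
    slices[i].timeStep = static_cast<int>(i);
}
```

This only fixes the ordering; the open question of what time bounds to assign to each step remains.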

Event Timeline

floca triaged this task as Normal priority.Jul 19 2020, 3:22 PM
floca created this task.
floca added a project: Missing Info.

@kleesiek Is it really temporally resolved (and if so, what would be the resolution?), or is it a B-scan and therefore more like a volume? Did you anonymize the data in a way that might have thrown away too much geometry information?

  • data is taken as is (see also comment to T27568)
  • I'd say the probably correct description would be 2D real-time B-mode, where images are acquired at a constant frame rate (20 fps in the case of the Canon video). Moving the transducer over the body does not necessarily result in a 3D image, but rather in 2D+t; if you keep the transducer at a fixed spot and e.g. only tilt it, the result is more 3D-like.
  • I am not aware that there are sensors in the transducer that would give us its position and orientation in space
  • Is e.g. a "Play" button for the z-dimension a possible solution?
floca added a comment.Sep 4 2020, 12:34 PM
  • data is taken as is (see also comment to T27568)
  • I'd say the probably correct description would be 2D real-time B-mode, where images are acquired at a constant frame rate (20 fps in the case of the Canon video). Moving the transducer over the body does not necessarily result in a 3D image, but rather in 2D+t; if you keep the transducer at a fixed spot and e.g. only tilt it, the result is more 3D-like.
  • I am not aware that there are sensors in the transducer that would give us its position and orientation in space

Yes, you are right. I was thinking too much along the lines of CAMI's work.

  • Is e.g. a "Play" button for the z-dimension a possible solution?

Technically, yes. Currently something like that is implemented in the VideoMaker plugin, but such a general control plugin is not yet available.

floca added a comment.Sep 4 2020, 1:55 PM

I looked a bit closer. The timing information is encoded in tags of the Multi-frame and Cine modules, in this case:

  • Number of Frames (0028,0008)
  • Frame Increment Pointer (0028,0009) -> Pointing to Frame Time
  • Frame Time (0018,1063)

So we could rework the generation of the time geometry and the volume separation according to that information. But I also think this points towards a dedicated US reader that handles this case of the US Multi-frame Image CIOD for 2D+t images, as covering it in the general reader might make its business logic too complex. But I need to take a closer look at the current reader (maybe also in conjunction with T27432).
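With Number of Frames and Frame Time available, the time geometry could be derived along these lines. A minimal sketch in plain C++ (the helper name is hypothetical, not part of the current reader):

```cpp
#include <utility>
#include <vector>

// Sketch: derive the [start, end) time bounds (in ms) of each frame
// from Number of Frames (0028,0008) and Frame Time (0018,1063), as a
// proportional time geometry would need them.
std::vector<std::pair<double, double>>
ComputeFrameTimeBounds(int numberOfFrames, double frameTimeMs)
{
  std::vector<std::pair<double, double>> bounds;
  bounds.reserve(numberOfFrames);
  for (int i = 0; i < numberOfFrames; ++i)
    bounds.emplace_back(i * frameTimeMs, (i + 1) * frameTimeMs);
  return bounds;
}
```

For the Canon video at 20 fps the Frame Time would be 50 ms, so frame 0 covers [0, 50), frame 1 covers [50, 100), and so on.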

For US images we should at least add the following tags of interest:

result.insert(MakeEntry(DICOMTag(0x0008, 0x2142))); //cine start time
result.insert(MakeEntry(DICOMTag(0x0008, 0x2143))); //cine stop time
result.insert(MakeEntry(DICOMTag(0x0018, 0x1063))); //cine frame time
result.insert(MakeEntry(DICOMTag(0x0018, 0x1065))); //cine frame vector
result.insert(MakeEntry(DICOMTag(0x0018, 0x1066))); //cine frame delay
result.insert(MakeEntry(DICOMTag(0x0028, 0x0008))); //number of frames
result.insert(MakeEntry(DICOMTag(0x0028, 0x0009))); //frame increment pointer
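Since the Frame Increment Pointer (0028,0009) may reference either the fixed Frame Time or the Frame Time Vector, a reader would have to dispatch on it. A simplified sketch in plain C++ (names hypothetical; the vector entries are treated as increments between consecutive frames, which simplifies the DICOM semantics):

```cpp
#include <stdexcept>
#include <vector>

// Hypothetical tag constants mirroring the list above.
constexpr unsigned int kFrameTime       = 0x00181063; // Frame Time (ms)
constexpr unsigned int kFrameTimeVector = 0x00181065; // Frame Time Vector (ms)

// Sketch: resolve the time offset of every frame depending on which
// tag the Frame Increment Pointer references.
std::vector<double> ResolveFrameOffsets(unsigned int frameIncrementPointer,
                                        double frameTimeMs,
                                        const std::vector<double>& frameTimeVectorMs,
                                        int numberOfFrames)
{
  std::vector<double> offsets(numberOfFrames, 0.0);
  if (frameIncrementPointer == kFrameTime)
  {
    for (int i = 0; i < numberOfFrames; ++i)
      offsets[i] = i * frameTimeMs;
  }
  else if (frameIncrementPointer == kFrameTimeVector)
  {
    double t = 0.0;
    for (int i = 1; i < numberOfFrames; ++i)
    {
      t += frameTimeVectorMs.at(i); // increment from frame i-1 to frame i
      offsets[i] = t;
    }
  }
  else
  {
    throw std::runtime_error("unsupported Frame Increment Pointer");
  }
  return offsets;
}
```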