
Implement proof of concept for new segmentation data type
Open, Wishlist, Public

Description

We want to establish a new segmentation data type, or possibly even separate data types for segmentations and segments (labels), that are compatible with the DICOM Segmentation IOD.

This new data type will be the foundation of a completely rewritten MITK Segmentation. Even though the current MITK Segmentation implementations will be replaced by a single new implementation, we need adaptors to the legacy segmentation data types to guarantee compatibility.

Related Objects

Event Timeline

The current idea is to be guided by the DICOM Segmentation IOD and its features. This includes binary and fractional (both probability and occupancy) segments, properties like label, description, and color, and, last but not least, a less coupled relation between segmentations and segments (for example, different samplings, extents, and so on).

I'll start with a very basic proof-of-concept implementation of the new data type(s) by implementing two mitk::BaseData subclasses, mitk::Segmentation and mitk::Segment, the latter of which derives from mitk::Image.

The pixel type should be std::uint16_t/unsigned short, as the fractional DICOM tag allows a maximum of 16 bits for its range of possible values.

  • Implement predicate to filter segments based on a segmentation node
  • Persist user-drawn contours in addition to resulting segment

Found some bugs in MITK, mainly caused by wrong assumptions that mappers and related components make about data nodes: dereferencing of null pointers, retrieval of non-existent properties, use of uninitialized images, and so on. These fixes should be integrated into master regardless of what happens to the proof-of-concept branch.

Meanwhile, the fixes were cherry-picked and merged into master.

Pushed new branch T23742-Fix.

kislinsk removed kislinsk as the assignee of this task. Aug 24 2018, 10:49 AM
jungsi added a subscriber: jungsi.

Regarding the segmentation data type, the PolySeg project should be kept in mind, as it offers flexibility in the representation of segments.
It also defines a segmentation and a segment class, although neither of them directly "is" the data. A segment keeps a list of representations (i.e., vtkDataObjects in their case) that actually hold the image / surface / contour data.
Making the segment directly inherit from Image creates a strong bond to the labelmask representation (binary or fractional) but leaves out representations such as contours and surfaces.
In this regard one proposal would roughly follow this class layout:


Here, one segment could be represented as one label (or as contours, a surface, etc.).
In the case of a labelmap, it would make sense to hold either a binary image or, as Stefan mentioned, a uint16/unsigned short image for fractional labels.

A major drawback I see with this approach is that "interaction" of segments with other segments might be a bit costly, as all Segments in the Segmentation have to be iterated over.
Examples for such interactions include:

  • A second segment is created, and the user wants to override the first segment while drawing the second one
  • A second segment is created, and the user wants to exclude areas already included in segment one

For these two cases it has to be checked whether the newly added pixels are already "taken" by one of the other segments.
For this, a mask would be useful that contains zeros for pixels not contained in any segment, and otherwise the id or pixel value of the owning segment.
This pretty much describes a multilabel image.
So one proposed solution to the computational cost of dealing with separate binary images could be to aggregate them in an additional "SegmentationLayer".
Within one layer, segments can't overlap (you'd need another layer for that), as is handled in the current implementation.
Collisions of segments could be easily looked up by checking the layer's aggregated labelmap.

Points of discussion

  • Any questions, concerns or feedback regarding the proposed structure?
  • PolySeg (and Slicer using it) comes with several restrictions to its flexibility:
    • It enforces all Segments in a Segmentation to contain the same representations and to have the same master representation
    • Only a defined MasterRepresentation is saved to disk (and used as starting point for conversions).
    • To edit a segment using tools the master representation HAS to be binary labelmap. This is probably because it may be difficult to have segments with different master representations interact e.g. by adding parts of segment 2 to segment 1 using the add tool.

This in turn means that the binary labelmap will most likely always be the master representation (as you'd probably want to edit it), and not much changes.

It might be interesting to loosen these restrictions so that Segments with different representations can be loaded, manipulated, and saved (again in different representations). Each segment would then have its own master representation, in contrast to the whole segmentation having one. It would have to be checked what reason PolySeg gives for the restriction. One reason could be the following: suppose there are two segments, the first with a labelmap master representation and the second with a contour. Both could be shown as labelmaps by creating that representation for the second segment. When the user now expands the first segment into the second segment and wants to override the classification from 2 to 1, what is supposed to happen? Label 2's labelmap representation could easily be adjusted, but actually its master representation has to be changed, as all other representations are generated from it. So the question arises how to apply changes from differing representations without losing information.

  • Currently, as the user is segmenting (e.g. using the Add tool), the drawn contour is directly converted to a labelmask, and the contour (the actual drawing information) is lost. I would propose saving the contour instead and holding it, e.g., as the master representation. The user can still immediately be presented with a labelmask (if they so wish), but the additional information is kept. When saving, the user should then choose whether to save the contour or to apply the conversion and save the labelmask. However, not all tools produce contours, and the issue remains how pixel-based tools shall interact with contour masters. So instead, it could be discussed whether to save the detailed user input separately for as long as that is possible.
  • How should a segmentation, and especially the segments, be represented in the data manager (also from a technical standpoint)?
    • Should there be a segmentation node with child nodes for the segments?
    • Should they be grouped with the reference image?
    • How should the different representations be shown, if at all, in the data manager?
    • How should the relation of the segmentation to the reference image be linked?
    • What happens when a user drags a segment somewhere else?
    • Should segments even show up in the data manager, or only in the plugin view?

The object hierarchy could be something like the following, where only the reference image and the representations contain data for the render windows:

  • Reference image
    • Segmentation
      • Segment_1
        • binaryLabelMap
        • contours
      • Segment_2
        • binaryLabelMap
        • contours

The question is how this hierarchy should be shown in the data manager (i.e., which objects should be present so as not to overflow the data manager).

  • List the requirements!
  • Check
    • DICOM Derived tag (linking the representations and the originals) (Source)
    • DICOM Surface Segmentation
    • Source Entity and Image (tags), Referenced Image and Instance Sequence
  • Save the contours when using contour tools
  • Mixing of master representations in the application and when saving
  • Complexity when working with binary and fractional segments (tool support)
  • Is ContourModel still used elsewhere? Still up to date?
  • Complexity of the plugin
  • The tool defines what data it works on
  • Wishlist: define the GUI abstractly, independent of the toolkit (parameter definition)

The current status of this task is documented by the Bachelor's thesis of @jungsi.