
MedSAM in MITK
Closed, Resolved · Public

Description

MedSAM aims to fulfill the role of a foundation model for universal medical image segmentation. It is trained on a diverse, large-scale medical image segmentation dataset with 1,570,263 medical image-mask pairs, covering 10 imaging modalities, over 30 cancer types, and a multitude of imaging protocols.

Paper:
https://www.nature.com/articles/s41467-024-44824-z
Code:
https://github.com/bowang-lab/MedSAM

Also interesting to check out:
MedSAM-Lite: A lightweight version of MedSAM for fast training and inference.
https://github.com/bowang-lab/MedSAM/tree/LiteMedSAM

An overall feasibility test should be done to check whether it's worth integrating into MITK, both in terms of interest and effort.

Event Timeline

a178n triaged this task as Normal priority. Jan 26 2024, 12:18 PM
a178n created this task.

I think the Slicer plugin is not so interesting, but the MedSAM branch it uses, LiteMedSAM, is: https://github.com/bowang-lab/MedSAM/blob/LiteMedSAM/README.md.

Ok, I have updated the task description.

MedSAM is the same old SAM, pretrained on a medical image dataset.
They have trained the vit_b visual transformer model and released new pretrained weights, so fundamentally there is no change in architecture. From a code perspective, however, they have modified/trimmed down the SAM code for their application and pack it with the repo.
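
For illustration, a minimal sketch of what this means in practice, assuming the official segment-anything package is installed and the MedSAM checkpoint has been downloaded locally (the checkpoint file name below is an assumption); the architecture is the stock SAM vit_b, only the weights differ:

```python
# Minimal sketch, not MedSAM's own inference script: load the MedSAM
# checkpoint into the standard SAM vit_b architecture via the
# segment-anything package. The checkpoint file name is an assumption.
import torch
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

medsam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")
medsam.to(device)
medsam.eval()

predictor = SamPredictor(medsam)  # same predictor API as vanilla SAM
```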
Salient points:

  1. The authors recommend a bounding-box interaction pattern. A click-based pattern would also work (I tried it), since it is SAM behind the scenes, but it wouldn't be by design.
  2. There is no out-of-the-box support for NRRD or NIfTI formats. Data needs to be pre-processed before the inference workflow, which means we would need a script bridging MITK and MedSAM, just like the current SAM tool (a sketch follows this list).
  3. Only vit_b model weights are available. They easily fit on a graphics card with 4 GB of VRAM.
  4. Even though the MedSAM code supports 3D images, it just iterates over each slice.
  5. LiteMedSAM is another version, based on the TinyViT model.
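
To make points 1, 2, and 4 concrete, here is a minimal sketch of what such a bridging script could look like, using the plain SamPredictor path rather than MedSAM's own inference script. It assumes the `predictor` from the snippet above; the file name, intensity window, and box coordinates are placeholders:

```python
# Minimal bridging-script sketch (assumptions: `predictor` from the previous
# snippet; "image.nii.gz" and the box coordinates are placeholders).
import numpy as np
import SimpleITK as sitk

# Read the NIfTI volume; SimpleITK returns a (Z, Y, X) numpy array.
volume = sitk.GetArrayFromImage(sitk.ReadImage("image.nii.gz"))

# Placeholder bounding box in XYXY pixel coordinates. In a real tool each
# slice would get its own box from the user interaction.
box = np.array([100, 120, 260, 300])

masks_3d = np.zeros(volume.shape, dtype=np.uint8)
for z in range(volume.shape[0]):  # point 4: iterate slice by slice
    slice_2d = volume[z].astype(np.float32)

    # Point 2: normalize intensities to [0, 255] and replicate to 3 channels,
    # since SAM expects an RGB uint8 image.
    lo, hi = np.percentile(slice_2d, (0.5, 99.5))
    slice_2d = np.clip((slice_2d - lo) / max(hi - lo, 1e-6), 0, 1) * 255
    slice_rgb = np.repeat(slice_2d[..., None], 3, axis=-1).astype(np.uint8)

    # Point 1: bounding-box prompt, one mask per box.
    predictor.set_image(slice_rgb)
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    masks_3d[z] = masks[0]

# masks_3d could then be written back for MITK, e.g.
# sitk.WriteImage(sitk.GetImageFromArray(masks_3d), "segmentation.nrrd")
```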

So, it's pretty straightforward to integrate into MITK, except for the bounding box interaction. Also, MedSAM/LiteMedSAM can share the same virtual environment that MITK creates for SAM.

Thank you for looking into that. Let's discuss it further in the next meeting, but I think we should try to make it available in the next release.
For bounding boxes, I would align with the bounding box interactions we will have to build at some point for generating detection labels.

@m113r Do you know of tools that have a good approach (fast, high usability) for defining 3D bounding boxes? 2D is simple, but 3D can be trickier if you want to meaningfully reduce clicks/interactions.

s434n added a subscriber: s434n.

Discussion result:
Simplify bounding box interaction / Add simpler version and make it available outside of the Image Cropper

kislinsk claimed this task.
kislinsk added a project: Moved to git.dkfz.de.

This task was closed here on Phabricator since it was migrated to GitLab. Please continue on GitLab.