
nnUNet in MITK: Downloading pre-trained models in Kaapana context
Open, Low, Public

Description

Discuss with the Kaapana team the storage of pre-trained nnUNet models and how to make MITK aware of them.
How can MITK point to the same stash of pre-trained models and share them? Check whether this works out of the box.

Also, check whether MITK would need extra privileges in the Kaapana context to access the internet for downloading models.


Event Timeline

a178n created this task.

To point 1: in Kubernetes you can mount any path/volume at the desired mount point, so I could simply mount the model directory into the MITK container, e.g. at /models.
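As a minimal sketch of what such a mount could look like (the volume name, host path, and image name below are placeholders, not the actual Kaapana layout):

```
# Hypothetical pod spec: mount the shared nnUNet model directory
# into the MITK container. All names/paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: mitk
spec:
  containers:
    - name: mitk
      image: mitk:latest          # placeholder image name
      volumeMounts:
        - name: nnunet-models
          mountPath: /models      # where MITK would look for the models
  volumes:
    - name: nnunet-models
      hostPath:
        path: /data/nnunet-models # assumed host-side model stash
```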

Point 2, downloading in MITK: this is a proxy issue. Without a proxy, the container has internet access as long as the host does.
If the server is behind a proxy, the proxy has to be configured at the container level and, more generally, in Kubernetes.
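At the container level this typically means setting the standard proxy environment variables; a hedged sketch, where the proxy address is a placeholder:

```
# Hypothetical fragment: proxy settings injected into the container.
# The proxy URL is a placeholder, not a Kaapana value.
spec:
  containers:
    - name: mitk
      image: mitk:latest
      env:
        - name: HTTP_PROXY
          value: "http://proxy.example.org:8080"
        - name: HTTPS_PROXY
          value: "http://proxy.example.org:8080"
        - name: NO_PROXY
          value: "localhost,127.0.0.1,.cluster.local"
```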

What is the advantage of running nnUNet in MITK in Kaapana over running nnUNet directly in Kaapana? What I am not sure about (because I almost never use a server with a GPU) is whether the MITK container can directly/automatically use GPU resources. It might be necessary to request the GPU already when starting the MITK container, and that resource would then be blocked for other tasks...

In T29243#240152, @gaoh wrote:

What is the advantage of running nnUNet in MITK in Kaapana over running nnUNet directly in Kaapana?

If you have a bunch of data, then the Airflow workflow is the way to go.

This ticket covers the "corner case" (that's why it is low priority) in which someone is in an interactive segmentation session. There we would also allow him/her to use the nnUNet tool.
In this case it makes no sense to download the models again; instead, we should use/offer the already downloaded models that are also used in the Airflow workflow.

OK, but then the GPU problem is also solvable! If it is for a known, time-limited usage, like an interactive session, I don't see a problem with assigning a GPU to the container.
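For reference, reserving a GPU up front in the container spec (assuming the NVIDIA device plugin is installed on the cluster) would look roughly like this; as discussed above, the GPU then stays allocated to the pod for its lifetime:

```
# Hypothetical fragment: requesting one GPU for the MITK container.
# Requires the NVIDIA device plugin; the GPU remains blocked for
# other tasks until this pod terminates.
spec:
  containers:
    - name: mitk
      image: mitk:latest
      resources:
        limits:
          nvidia.com/gpu: 1
```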