Task to record MONAI Label server usage issues, bugs, and pain points that can be taken up for discussion upstream.
Description
Status | Assigned | Task
---|---|---
Resolved | kislinsk | T29191 [SEG] New segmentation tool candidates
Resolved | kislinsk | T29192 [SEG] MONAI Label support in MITK Workbench
Resolved | kislinsk | T30154 MONAI Label server usage issues
Event Timeline
CUDA memory not cleared when out-of-memory exception occurs
Not all models have the same VRAM requirement.
When a model fails inferencing with `torch.cuda.OutOfMemoryError`, the exception is raised on the server side but the allocated GPU memory is not released, clogging the GPU. Even another model that could otherwise have run on the GPU is then blocked. Ideally `torch.cuda.empty_cache()` should be called to make room for subsequent inferencing calls.
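A minimal sketch of the desired server-side behavior. The names `infer_with_cleanup`, `run_inference`, and `free_cache` are hypothetical placeholders; on the actual server they would correspond to the model's inferencing call and `torch.cuda.empty_cache()`, and the caught exception would be `torch.cuda.OutOfMemoryError`:

```python
def infer_with_cleanup(run_inference, free_cache):
    """Run inferencing; on an out-of-memory error, release cached
    GPU memory before propagating the exception, so that subsequent
    requests (possibly for smaller models) are not blocked."""
    try:
        return run_inference()
    except MemoryError:   # stand-in for torch.cuda.OutOfMemoryError
        free_cache()      # stand-in for torch.cuda.empty_cache()
        raise             # still report the failure to the client
```

The point is that the cleanup happens on the failure path itself, rather than relying on the next request to find a clogged GPU.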
Multistage Vertebra Segmentation
Multistage Vertebra Segmentation is an auto-segmentation workflow involving multiple models, namely localization_spine, localization_vertebra & segmentation_vertebra (unexposed).
There could be several issues in this:
- localization_spine: In practice, this gives me a binary mask even though a whole set of labels is exposed in the API. This is in compliance with the documentation, but then the labels it claims to segment in the API don't make sense:
```
"localization_spine": {
  "type": "segmentation",
  "labels": {
    "C1": 1, "C2": 2, "C3": 3, "C4": 4, "C5": 5, "C6": 6, "C7": 7,
    "Th1": 8, "Th2": 9, "Th3": 10, "Th4": 11, "Th5": 12, "Th6": 13,
    "Th7": 14, "Th8": 15, "Th9": 16, "Th10": 17, "Th11": 18, "Th12": 19,
    "L1": 20, "L2": 21, "L3": 22, "L4": 23, "L5": 24
  },
  "dimension": 3,
  "description": "A pre-trained model for volumetric (3D) spine localization from CT image",
  "config": {
    "device": [
      "NVIDIA T550 Laptop GPU"
    ]
  }
}
```
{F2689053}
Nevertheless, on the bright side, Multistage-Vertebra-Segmentation works and renders labels as promised.
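Because the labels a model advertises via the server's `/info/` response may not match what it actually returns (as with localization_spine above), it can help to dump the advertised labels per model when debugging. A small sketch, assuming the response shape shown in the JSON snippet above; `advertised_labels` is a hypothetical helper, not part of the MONAI Label API:

```python
def advertised_labels(models_info):
    """Map each model name in the parsed "models" section of a
    MONAI Label /info/ response to its advertised label names,
    ordered by label index."""
    out = {}
    for name, meta in models_info.items():
        labels = meta.get("labels") or {}
        out[name] = sorted(labels, key=labels.get)
    return out
```

Comparing this listing against the unique values actually present in a returned mask makes mismatches like the binary-mask case above easy to spot.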
This task was closed here on Phabricator since it was migrated to GitLab. Please continue on GitLab.