User Details
- User Since
- Sep 3 2018, 11:01 AM
Jul 10 2024
What is the status of MitkCLGlobalImageFeatures? Is it not expected to be part of MITK anymore, or is it still part of it?
Mar 7 2024
Mar 6 2024
Feb 27 2024
Dec 11 2023
Thank you! Yes, this would already be really helpful. Only having to create one file and, if needed, just change a few properties in it makes it much more usable.
Aug 2 2023
So I can store files as NRRD and then use the MitkFileConverter to convert them to DCM? And this is better than directly storing the files as DCM?
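I.e. something like this (a minimal sketch; the exact flag names are an assumption from memory, please check MitkFileConverter --help):

# convert an NRRD image to DICOM with MITK's command-line converter (flags assumed)
./MitkFileConverter -i image.nrrd -o image.dcm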
Aug 1 2023
Ok, since this issue made it impossible to use v2023.04 in the Kaapana mitk-flow, I am still on the previous release, where it kind of works.
So I would stay on this previous release until there is a new MITK version addressing these DICOM issues.
For now we are highly dependent on DICOMs, so we don't have the option to just store the files in another file format...
Jul 4 2023
Jun 27 2023
@k656s has an example dataset. I am still on the release version before (I think 2022.10?), but I will update to the current release. Could this already help?
If not, I would vote for the reinit-after-loading option in the task list view.
Jun 22 2023
Mar 27 2023
Theoretically yes, but I guess it is not relevant anymore: we now use the task list feature, and before that I used the described workaround. So this is not needed anymore.
Jan 25 2023
Thanks again for solving this issue! So in the current Kaapana develop branch the task list feature is now integrated!
Dec 12 2022
Dec 6 2022
Nov 25 2022
So the last changes in the branch are basically the changes I made so that she could adapt the code already used for the DICOMweb/Kaapana interaction and simply apply it to this module, since it would have been quite similar. I guess once the changes of the REST module are done, we (AIH cluster) will take care of restarting the MITK-Kaapana interaction. I think this could then be a not-so-complicated add-on.
Oh yes, I also forgot about this: the final state was, if I remember correctly, that she did not succeed in producing meaningful results.
Sep 21 2022
Oh, so I forgot to send it, but this is what my browser had cached: "Yes, so I recently tested it, with the current release."
But also, if I remember correctly, this was due to the Phantom dataset we have as the default dataset:
I removed a few slices at the end, and now it is working.
Jul 22 2022
Ok, but then the GPU problem is also solvable! If it is for a pre-known, time-limited usage, like in an interactive session, I don't see a problem assigning a GPU to the container.
Regarding point 1: in Kubernetes you can mount any path/volume to the desired mount point, so I could just mount the directory into the MITK container at e.g. /models.
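A minimal sketch of such a mount, assuming a hostPath volume; all names, images, and paths here are placeholders, not the actual Kaapana deployment:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mitk-flow               # hypothetical pod name
spec:
  containers:
    - name: mitk
      image: mitk-flow:latest   # placeholder image
      volumeMounts:
        - name: models
          mountPath: /models    # the host directory becomes visible here inside the container
  volumes:
    - name: models
      hostPath:
        path: /data/models      # placeholder host directory containing the model files
EOF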
Jun 1 2022
May 17 2022
Yes, that will help. Also, this worklist will be quite helpful, allowing a new form of "batch processing".
May 16 2022
Yes, so I changed it already, and I am using the seriesUID as the name for the images. Additionally, I had to set the layer property, because by introducing the image name property the layer is somehow also set (probably to a high number). When I create a new SEG without setting the layer of the image to 0, the SEG is only put on top when opening the Data Manager.
Mar 25 2022
Jan 27 2022
Jan 26 2022
Input data to reproduce the error:
We discussed in the meeting:
- Exception handling is a good idea and should be the way to handle it, so MITK has to throw the exception and not report a success.
- Since a required tag is missing in the input image, MITK cannot provide a fix that would allow dcmqi to succeed.
Jan 25 2022
Dec 1 2021
Nov 30 2021
Yes, sorry I didn't highlight it ;):
Nov 3 2021
Oct 29 2021
In the current setup, the transfer is handled by Airflow. This is also the case in the wDB gateway. This stress test should therefore perform the same way and work. I also have a different test with random data that works up to a limit. The system now has different recovery mechanisms and can therefore handle large (randomly sorted) datasets better. I guess there are sometimes still errors, but since the system then restarts, no one notices them. But there are still limits (depending mainly on the server's RAM).
So I would also say this ticket is resolved for now. In the long run, changing the whole import process could remain a valid option.
Sep 10 2021
So, this is what we have tried so far:
- Helm updated to 3.6
- microk8s to 1.22
- First problem: the v1beta1 API version cannot be used anymore --> changed to v1 in the repo (see the sketch after this list)
- Next problem: our Traefik is not compatible anymore (with 1.22)
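For illustration, this is the kind of change the v1beta1 --> v1 migration implies for an Ingress resource (a generic sketch with placeholder names, not our actual manifests); networking.k8s.io/v1 additionally requires pathType and the nested backend.service form:

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1   # was: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress            # placeholder name
spec:
  rules:
    - http:
        paths:
          - path: /example
            pathType: Prefix       # required field in v1
            backend:
              service:             # v1beta1 used serviceName/servicePort instead
                name: example-svc  # placeholder service
                port:
                  number: 80
EOF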
Aug 3 2021
Jul 19 2021
So I tried it out. The problem is not the containers (opensearch and opensearch-dashboard), but our plugin (workflow-trigger).
The workflow-trigger plugin has some package dependencies based on elastic/kibana. These dependencies have to be changed to the ones of opensearch-dashboard.
So I tried to change them, but I get some yarn build issues.
Since I am running into different issues, it would probably be easier to restart the plugin from scratch. But for this, I would like to know how the current plugin was created, to get an understanding of how to create a similar one.
So the current issue is that the package is searching for specific files; probably an opensearch-dashboard plugin has to have a specific directory layout:
May 25 2021
May 19 2021
The same issue seems to apply to the normal workbench, so this container also has to be updated to the current master.
May 5 2021
Apr 8 2021
Mar 22 2021
After testing the system, it looks like the problem might be in the Airflow part; this has to be tested.
Mar 16 2021
Mar 15 2021
T28207 describes how to test it without the wDB. When searching for "DICOM query/retrieve", there are already several open tasks with similar problems.
Mar 4 2021
Feb 24 2021
I cannot reproduce the error. When sending the data to an instance, the data gets imported. When debugging CTP locally, there also seems to be no problem. To me it also looks like the problem is not directly in CTP but in the Java library (dcm4che). Did the files get triggered/sent to Airflow? Did they get stuck in a quarantine folder of the CTP, and if so, in which one?
Just a remark: the download link does not work for me; I get a "forbidden access" error, even when I am logged into the platform. But since there is only one folder in MinIO, I also found it without the link :)
Can you send me a dataset to reproduce it? I could try to debug CTP, but it looks like it is not even a problem in CTP but in org.dcm4che.
Feb 23 2021
Get an OpenStack instance (e.g. Ubuntu 20.04 DKFZ image, ...)
Feb 22 2021
Can you reproduce the error? If so, we can do something like: https://stackoverflow.com/questions/12096403/java-shutting-down-on-out-of-memory-error
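The gist of that approach is a JVM flag that terminates the process on an OutOfMemoryError so it can be restarted from outside; a sketch, assuming CTP is started via a plain java call (the jar name is a placeholder):

# exit immediately on OOM instead of continuing in a broken state (JDK 8u92+)
java -XX:+ExitOnOutOfMemoryError -jar CTP.jar
# alternative for older JVMs: run an external command when an OOM is thrown
java -XX:OnOutOfMemoryError="kill -9 %p" -jar CTP.jar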
Feb 19 2021
Feb 18 2021
Feb 17 2021
In the case of Kaapana, the name is currently irrelevant: directly in the next operator, the DCM files are sent to the PACS anyway and then the file is deleted.
What would make sense, regarding the naming in the PACS and when downloading files, would be to use the Instance UID as the name.
Feb 16 2021
So maybe a regex is needed, or just a default file name...
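A sketch of what such a regex fallback could look like in shell, assuming the name is derived from the Instance UID and anything outside a safe character set is replaced (all names here are placeholders):

uid="1.2.840.113619.2.55.3"                           # placeholder Instance UID
name=$(printf '%s' "$uid" | sed 's/[^A-Za-z0-9._-]/_/g').dcm
# fall back to a default file name if the UID is empty
[ -n "$uid" ] || name="unnamed.dcm"
echo "$name"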
Feb 12 2021
Ah, so the idea is not to build it, but to directly use a CI binary.
So MITK flow uses:
# Generate Ninja build script for MITK to build a minimum configuration with apps in Release mode into MITK-superbuild directory
RUN cmake -G Ninja -S MITK -B MITK-superbuild
RUN cmake -S MITK -B MITK-superbuild -D CMAKE_BUILD_TYPE:STRING=Release -D BUILD_TESTING:BOOL=OFF -D MITK_BUILD_CONFIGURATION:STRING=FlowBenchSegmentationRelease -D MITK_CUSTOM_REVISION_DESC:STRING=MitkFlow
RUN cmake --build MITK-superbuild
RUN cmake --build MITK-superbuild/MITK-build --target package
RUN mkdir /opt/final_package
RUN cp /opt/MITK-superbuild/MITK-build/MITK-MitkFlow-linux-x86_64.tar.gz /opt/final_package/MITK-MitkFlow-linux-x86_64.tar.gz
And MITK volume (aka the normal workbench) uses the Release build with segmentation=ON
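If I read that correctly, on the CMake side "segmentation=ON" would map to MITK's MITK_BUILD_<plugin> option convention; a guess only, the exact option name should be verified in the CMake cache:

# Release workbench build with the segmentation plugin enabled (option name assumed)
RUN cmake -S MITK -B MITK-superbuild -D CMAKE_BUILD_TYPE:STRING=Release -D MITK_BUILD_org.mitk.gui.qt.segmentation:BOOL=ON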
For MITK-flow and MITK-volume I will switch to the new release (release/T28000-v2021.02). It was only on a develop branch because there were some bug fixes not yet in the master branch.
But now with the new release...
Feb 1 2021
To be honest, I just did not understand the checklist correctly. If you know the outcome and read the checklists, they are fine. But for me, it read like a new test, starting with loading and testing 3D data again. So I misunderstood the line "Bei den folgenden Aktionen stets mehrere Label, auch auf unterschiedlichen Layern testen." ("For the following actions, always test multiple labels, also on different layers.") as being a new test, because it had a kind of title character. And that is why I thought I should start with a 3D image again...