Please check if the problem persists now that T28903 ([Segmentation] Monitoring of segmentation nodes in views is error-prone / not safe) is merged into develop.
Jan 25 2022
Jan 20 2022
Jan 13 2022
Jan 11 2022
Dec 21 2021
Pulling chart: registry.hzdr.de/santhosh.parampottupadam/microkubsupgrade/kaapana-platform-chart:0.1.2
0.1.2: Pulling from registry.hzdr.de/santhosh.parampottupadam/microkubsupgrade/kaapana-platform-chart
ref:     registry.hzdr.de/santhosh.parampottupadam/microkubsupgrade/kaapana-platform-chart:0.1.2
digest:  d601bd363898b8523cbefb51c7855b44f82baa1fbd7a352f30835c88a767992e
size:    171.4 KiB
name:    kaapana-platform-chart
version: 0.1.2
Status: Chart is up to date for registry.hzdr.de/santhosh.parampottupadam/microkubsupgrade/kaapana-platform-chart:0.1.2
Exporting chart: /home/ubuntu/kaapana-platform-chart
Successfully exported chart to /home/ubuntu/
Installing kaapana-platform-chart:0.1.2
CHART_PATH /home/ubuntu/kaapana-platform-chart
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0]): missing required field "pathType" in io.k8s.api.networking.v1.HTTPIngressPath
Dec 16 2021
Dec 13 2021
Dec 9 2021
Successfully exported chart to /home/ubuntu/
Installing kaapana-platform-chart:0.1.2
CHART_PATH /home/ubuntu/kaapana-platform-chart
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [
  ValidationError(Ingress.spec.rules[0].http.paths[0]): unknown field "defaultBackend" in io.k8s.api.networking.v1.HTTPIngressPath,
  ValidationError(Ingress.spec.rules[0].http.paths[0]): missing required field "pathType" in io.k8s.api.networking.v1.HTTPIngressPath,
  ValidationError(Ingress.spec.rules[0].http.paths[0]): missing required field "backend" in io.k8s.api.networking.v1.HTTPIngressPath]
ubuntu@vm-129-189:~$
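For context: the networking.k8s.io/v1 Ingress API requires every path entry to declare a pathType and a service-style backend, and no longer accepts defaultBackend inside a path. A minimal sketch of a valid v1 path entry follows; the resource, service name, and port are illustrative placeholders, not the chart's actual values:

```yaml
# Hypothetical Ingress valid under networking.k8s.io/v1;
# names and port are placeholders, not the kaapana chart's real values.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - http:
        paths:
          - path: /example
            pathType: Prefix        # required in v1: Prefix, Exact, or ImplementationSpecific
            backend:                # "defaultBackend" is not valid inside a path entry in v1
              service:
                name: example-service
                port:
                  number: 80
```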
Dec 8 2021
Dec 7 2021
Just saw it. I guess your problem is that you want to write to a directory that does not exist.
Do you set --flow.outputdir? If so, to what?
Dec 6 2021
Found the problem in the code. Will be solved by T28903
Dec 3 2021
Chart pushed... but during installation this error appears:
Exporting chart: /home/ubuntu/kaapana-platform-chart
Successfully exported chart to /home/ubuntu/
Installing kaapana-platform-chart:0.1.2
CHART_PATH /home/ubuntu/kaapana-platform-chart
Error: failed to install CRD crds/crds.yaml: CustomResourceDefinition.apiextensions.k8s.io "traefikservices.traefik.containo.us" is invalid: [
  spec.versions[0].schema.openAPIV3Schema: Required value: schemas are required,
  spec.versions[1].schema.openAPIV3Schema: Required value: schemas are required]
Latest Error in Traefik
Nov 30 2021
Yes, sorry I didn't highlight it ;):
Can you provide the files necessary to reproduce the problem outside of a container? (So at least the image, the seg, and the scene file you generate automatically, and the call of MITK Workbench you use.)
Nov 26 2021
Error: failed to install CRD crds/crds.yaml: CustomResourceDefinition.apiextensions.k8s.io "tlsstores.traefik.containo.us" is invalid: [
  spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion(nil): must have exactly one version marked as storage version,
  status.storedVersions: Invalid value: []string(nil): must have at least one stored version]
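These CRD errors are characteristic of old apiextensions.k8s.io/v1beta1 manifests applied to a cluster that only serves apiextensions.k8s.io/v1, which requires a structural openAPIV3Schema per version and exactly one version marked as the storage version. A minimal hedged sketch of the fields v1 demands; the schema shown is an illustrative placeholder, not Traefik's actual definition:

```yaml
# Sketch of the fields apiextensions.k8s.io/v1 requires and which old
# v1beta1 Traefik CRD bundles lack; the schema here is a placeholder.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: tlsstores.traefik.containo.us
spec:
  group: traefik.containo.us
  names:
    kind: TLSStore
    plural: tlsstores
    singular: tlsstore
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true               # exactly one version must be the storage version
      schema:
        openAPIV3Schema:          # a structural schema is mandatory in v1
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

In practice this usually means replacing the CRDs bundled in crds/crds.yaml with up-to-date v1 definitions shipped with the Traefik release being deployed.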
Nov 23 2021
ubuntu@vm-129-189:~$ ./install_platform.sh
USER: ubuntu
Check disk space: ok
SIZE: 194G
Check if helm is available... ok
Get helm deployments...
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
ubuntu@vm-129-189:~$ kubectl get pods -A
NAMESPACE     NAME                                    READY   STATUS      RESTARTS   AGE
base          code-server-7b9b68c556-fqrc9            1/1     Running     3          84m
base          landingpage-7c7bb855b-hz484             1/1     Running     3          86m
default       kaapana-exp-extensions-87rjt            0/1     Completed   0          84m
default       kaapana-plugin-kqdd2                    0/1     Completed   0          84m
default       kaapana-stab-extensions-ktk29           0/1     Completed   0          84m
flow-jobs     dcmsend-00382388                        0/1     Completed   0          81m
flow-jobs     dcmsend-2b49b0c7                        0/1     Completed   0          80m
flow-jobs     dcmsend-5ff032fb                        0/1     Completed   0          81m
flow-jobs     dcmsend-98c4d2c3                        0/1     Completed   0          81m
flow-jobs     dcmsend-e2633395                        0/1     Completed   0          81m
flow          airflow-6ddc54d9b4-hcvb6                2/2     Running     6          86m
flow          ctp-76cf9bbc9f-6nchk                    1/1     Running     3          86m
flow          postgres-airflow-64954bfb86-t2bjt       1/1     Running     3          86m
kube-system   coredns-588fd544bf-27hc8                1/1     Running     5          91m
kube-system   error-pages-57598754db-fz7bc            1/1     Running     3          86m
kube-system   keycloak-967cbfb55-8jjsv                1/1     Running     3          86m
kube-system   kube-helm-deployment-7f8464f9df-tnbmg   1/1     Running     3          86m
kube-system   kube-state-metrics-5695698777-dq864     1/1     Running     3          86m
kube-system   kubernetes-dashboard-69664c8798-4lw9j   1/1     Running     3          86m
kube-system   louketo-687bbbf6d9-cjgxs                1/1     Running     3          86m
kube-system   postgres-keycloak-5cc9b468d9-pkd82      1/1     Running     3          86m
kube-system   preinstall-extensions-init-w9jss        0/2     Completed   0          86m
kube-system   traefik-5786899dff-99q9f                1/1     Running     3          86m
kube-system   update-extensions-init-vjhtq            0/1     Completed   0          86m
meta          elastic-meta-de-9885fb5b4-wcptq         1/1     Running     3          86m
meta          init-meta-dfpdt                         0/1     Completed   0          86m
meta          kibana-meta-de-8f7c4bd6c-h7k94          1/1     Running     3          86m
monitoring    alertmanager-74fc67bd8-5zvtn            1/1     Running     3          86m
monitoring    grafana-6f6b5dc559-2jhcx                1/1     Running     3          86m
monitoring    prometheus-76bf79f68f-zcqfp             1/1     Running     3          86m
store         dcm4chee-78487774cf-86pkq               1/1     Running     3          86m
store         dicom-init-twb8d                        0/1     Completed   0          86m
store         ldap-684f697598-lzfnj                   1/1     Running     3          86m
store         minio-deployment-79dc7dd464-qv4sk       1/1     Running     3          86m
store         minio-init-76dn6                        0/1     Completed   0          86m
store         ohif-65cdcd8b87-lhcfd                   1/1     Running     3          86m
store         postgres-dcm4che-75d84848c5-rdqxq       1/1     Running     3          86m
ubuntu@vm-129-189:~$ kubectl get nodes
NAME                                  STATUS   ROLES    AGE   VERSION
vm-129-189.cloud.dkfz-heidelberg.de   Ready    <none>   92m   v1.21.6-3+dd57cd4fdc581a
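Since kubectl reaches the cluster while helm reports "cluster unreachable ... provide credentials", helm is likely reading a kubeconfig with stale client certificates (e.g. a different user's config, or one written before the cluster's certificates were refreshed). A hedged sketch of re-exporting the credentials, assuming the microk8s cluster shown above:

```shell
# Re-export the current microk8s client credentials into the default
# kubeconfig that helm and kubectl read (assumes a microk8s cluster;
# if the installer runs with sudo, root's ~/.kube/config may differ).
microk8s config > ~/.kube/config

# Verify connectivity before re-running ./install_platform.sh
kubectl get nodes
```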
Nov 22 2021
Nov 17 2021
Nov 11 2021
Nov 10 2021
Nov 5 2021
Oct 29 2021
Thanks for the update!
In the current setup, the transfer is handled by Airflow. This is also the case in the wDB gateway, so this stress test should behave the same way and work. I also have a different test with random data that works up to a limit. The system now has several recovery mechanisms and can therefore handle large (randomly sorted) datasets better. I guess there are sometimes still errors, but since the system then restarts, no one notices them. There are still limits, though, depending mainly on the server's RAM.
So I would also say this ticket is resolved for now. In the long run, changing the whole import process could remain a valid option.
Oct 28 2021
@gaoh have we also tested it with our wDB stress test?
So after I switched the DICOM send from CTP to Airflow, the issue should be solved.
I have sent multiple terabytes of data without any noteworthy issues.
I still think we could probably remove CTP completely, since it is only used as a DICOM receiver anyway and introduces a considerable amount of complexity to the system.
But it should work as it is right now, and the removal can be handled as future work.
yes of course
@s280a can we move this into "in progress" ?
Oct 22 2021
Fixed by T28753
Oct 21 2021
Oct 19 2021
Oct 18 2021
Oct 15 2021
Oct 5 2021
Oct 4 2021
Sep 10 2021
So, this is what we have tried so far:
Helm updated to 3.6
microk8s to 1.22
First problem: the v1beta1 API version cannot be used anymore --> changed to v1 in the repo
Next problem: our Traefik is not compatible anymore (with 1.22)
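The v1beta1 removals in Kubernetes 1.22 affect more than Ingress, so it can help to ask the upgraded API server directly which group/versions it still serves. A hedged sketch of such a check; it assumes kubectl is pointed at the upgraded microk8s cluster, and the output depends on that cluster:

```shell
# List all API group/version pairs the upgraded cluster serves;
# any manifest using a group/version missing here must be migrated.
kubectl api-versions | sort

# Show which group/version the server now prefers for Ingress resources
kubectl explain ingress | head -n 3
```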
Aug 26 2021
Aug 20 2021
Aug 12 2021
Jul 30 2021
Jul 29 2021
Jul 21 2021
Thanks Hanno. I think @schererj is the original author, so maybe he can comment or we discuss it in the meeting.
Jul 19 2021
So I tried it out. The problem is not the containers (opensearch and opensearch-dashboard), but our plugin (workflow-trigger).
The workflow-trigger plugin has some package dependencies based on elastic/kibana. These dependencies have to be changed to those of opensearch-dashboard.
So I tried to change them, but I get some yarn build issues.
So I am running into various issues; it would probably be easier to rebuild the plugin from scratch. For that, I would like to know how the current plugin was created, to get an understanding of how to create a similar one.
So the current issue is that the package is searching for specific files; an opensearch-dashboard plugin probably has to have a specific directory layout:
Jul 9 2021
Jul 1 2021
So this task, as an _evaluation_ task, has a high priority in my opinion. If it turns out that the replacement is a bigger undertaking, please stop and we should discuss again.
If it is not a big deal, i.e. if it really works after switching to the replacement, there would be a big benefit. So it would be great if someone could at least try it and report some results, such as how much work it would be. After that we could decide how to prioritize, but from my understanding it would make our Kaapana license task much easier.
When I install Kaapana using the installation script, the containerd version is 1.2.5, which is 2 years old.