diff --git a/example_algos/readme.md b/example_algos/readme.md
new file mode 100644
index 0000000..4d8963e
--- /dev/null
+++ b/example_algos/readme.md
@@ -0,0 +1,95 @@
# Example Algorithms

This folder contains a few simple example OoD algorithms for the _Medical Out-of-Distribution Analysis Challenge_.

### Quickstart

Install the python requirements:

```
pip install -r requirements_alogs.txt
```

Then use the data/preprocess.py script to preprocess the data (you may want to use more sophisticated preprocessing for your own submission):

```
python data/preprocess.py -i input_folder -o output_folder
```

### Run the example algorithms

All the algorithms in the algorithms folder are ready to run, or can be used as a starting point for your own algorithms. All algorithms take the same basic command-line arguments:

- -r [--run], Which part of the algorithm pipeline to run ("train", "predict", "test", "all").
- -d [--data-dir], The directory containing the preprocessed training data.
- -o [--log-dir], The directory in which the logs will be stored.
- -t [--test-dir], The directory containing the test data (requires a folder with the same name plus the suffix '\_label').
- -m [--mode], Pixel-level or sample-level algorithm ('pixel', 'sample').
- --logger, The logger to use: either a visdom server (running on port 8080) or tensorboard ("visdom", "tensorboard").

For more arguments, check out the python files.

The example algorithms include:

#### 2D Autoencoder (ae_2d.py)

A simple 2d autoencoder which uses the reconstruction error as the OoD score:

```
python algorithms/ae_2d.py -r all -o output_dir -t /data/mood/brain/toy --mode pixel --logger visdom -d /data/mood/brain/train_preprocessed
```

#### 3D Autoencoder (ae_3d.py)

A simple 3d autoencoder which uses the reconstruction error as the OoD score:

```
python algorithms/ae_3d.py -r all -o output_dir -t /data/mood/brain/toy --mode pixel --logger visdom -d /data/mood/brain/train_preprocessed
```

#### ceVAE (ce_vae.py)

A simple context-encoding Variational Autoencoder. It can also be used as a plain VAE or CE only.

```
python algorithms/ce_vae.py -r all -o output_dir -t /data/mood/brain/toy --mode pixel --logger visdom -d /data/mood/brain/train_preprocessed --ce-factor 0.5 --score-mode combi
```

With additional arguments:

- --ce-factor, Determines the 'mixing' between VAE and CE (0.0=VAE only, 1.0=CE only).
- --score-mode, How to determine the OoD score ("rec", "grad", "combi").

#### fAnoGAN (f_ano_gan.py)

An fAnoGAN algorithm built on top of improved Wasserstein GANs. It can also be used as a plain AnoGAN (without the encoder).

```
python algorithms/f_ano_gan.py -r all -o output_dir -t /data/mood/brain/toy --mode pixel --logger visdom -d /data/mood/brain/train_preprocessed --use-encoder
```

With an additional argument:

- --use-encoder/--no-encoder, Whether to train an additional encoder (fAnoGAN) to reconstruct the image, or to use no encoder and reconstruct via backpropagation.

### More Code

While the _example_algos_ code is ready to run, there are many excellent code repositories which are also worth checking out. Some of them include:

- , if you use tensorflow.
- , for more VAE variants.
- , for more pointers/great algorithms.
- , , collections of basic, easy-to-use algorithms.
- , the original f-AnoGAN implementation.
-
-
-

And as always, _Papers With Code_:

- [Anomaly Detection](https://paperswithcode.com/task/anomaly-detection/)
- [Out-of-Distribution Detection](https://paperswithcode.com/task/out-of-distribution-detection/)
- [Outlier Detection](https://paperswithcode.com/task/outlier-detection/)
- [Density Estimation](https://paperswithcode.com/task/density-estimation/)

Good Luck and Have Fun! =)

diff --git a/readme.md b/readme.md
index f6c49d2..810a5c2 100644
--- a/readme.md
+++ b/readme.md
@@ -1,64 +1,68 @@
_Copyright © German Cancer Research Center (DKFZ), Division of Medical Image Computing (MIC). Please make sure that your usage of this code is in compliance with the code license:_
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/MIC-DKFZ/basic_unet_example/blob/master/LICENSE)

---

# MOOD 2020 - Repository

This repo contains the supplementary code for the _Medical Out-of-Distribution Analysis Challenge_ at MICCAI 2020.

Also check out our [Website](http://medicalood.dkfz.de/web/) and [Submission Platform](https://www.synapse.org/mood).

### Requirements

Please install and use docker for your submission:

For GPU support you may need to install the NVIDIA Container Toolkit:

Install the python requirements:

```
pip install -r requirements.txt
```

We suggest the following folder structure (to work with our examples):

```
data/
--- brain/
------ brain_train/
------ toy/
------ toy_label/
--- colon/
------ colon_train/
------ toy/
------ toy_label/
```

### Run Simple Example

Have a look at the simple example in the docker_example folder to see how to build a simple docker, load and write files, and run a simple evaluation. After installing the requirements you can try it yourself:

```
python docker_example/run_example.py -i /data/brain/ --no_gpu False
```

With `-i` you can pass an input folder (which has to contain a _toy_ and _toy_label_ directory) and with `--no_gpu` you can turn GPU support for the docker on or off (you may need to install the NVIDIA Container Toolkit for docker GPU support).

### Test Your Docker

After you have built your docker, you can test it locally using the toy cases. After submitting your docker, we will also report the scores on the toy examples back to you, so you can check whether your submission was successful and the scores match:

```
python scripts/test_docker.py -d mood_docker -i /data/ -t sample
```

With `-d` you can pass the name of your docker image, with `-i` the path to your base_data dir (see _Requirements_), with `-t` you can define the challenge task (either _sample_ or _pixel_), and with `--no_gpu` you can turn GPU support for the docker on or off (you may need to install the NVIDIA Container Toolkit for docker GPU support).

### Scripts

In the scripts folder you can find:

- `test_docker.py` : The script to test your docker.
- `evalresults.py` : The script with our evaluation code.

### Example Algorithms

For _'ready to run'_ simple example algorithms, check out the example_algos folder.
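To give a rough idea of what the reconstruction-based examples in example_algos compute, here is a minimal sketch of how an autoencoder's reconstruction error can be turned into pixel-level and sample-level OoD scores. This is only an illustration with made-up array names and shapes, not the actual code from ae_2d.py / ae_3d.py:

```
import numpy as np

def ood_scores(scan, reconstruction):
    """Toy sketch: voxel-wise squared reconstruction error as the pixel-level
    OoD score, and its mean as the sample-level OoD score."""
    pixel_score = (scan - reconstruction) ** 2   # one score per voxel
    sample_score = float(pixel_score.mean())     # one score per scan
    return pixel_score, sample_score

# Hypothetical usage with random arrays standing in for a preprocessed scan
# and an autoencoder's reconstruction of it.
scan = np.random.rand(64, 64, 64).astype(np.float32)
reconstruction = np.random.rand(64, 64, 64).astype(np.float32)
pixel_score, sample_score = ood_scores(scan, reconstruction)
print(pixel_score.shape, sample_score)
```

The ceVAE example additionally offers gradient-based and combined score modes via `--score-mode` (see example_algos/readme.md above).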