Status: Currently ExtractSliceFilter::GenerateOutputInformation just uses the base implementation of its superclass. This effectively clones the input time geometry for all outputs. According to its description and the rest of the class implementation, the filter only produces 2D images even if a 3D+t image is given, because a user-specified time step is extracted.
Even though the filter always generates a 2D image with one time step, the whole input time geometry is cloned instead of just the geometry of the selected time step. In the case of dynamic data, this unnecessary cloning of the time geometry causes a large overhead that hurts performance in many places (e.g. many mappers use ExtractSliceFilter).
If the filter always produces 2D images with one time step, then at least the GenerateOutputInformation() method should be reimplemented/overridden to
- handle it in a more efficient manner.
- handle it correctly. The current meta data transfer is wrong anyway, because the geometry information of the output gets completely redefined in GenerateData() (so only the transfer of properties has an effect).
- Clarify if ExtractSliceFilter really should only generate 2D images with one time step.
- Correct/optimize the filter accordingly.