
When switching Volume Rendering modes the GPU memory is not freed
Closed, Resolved · Public

Description

Test system: Windows 7, NVIDIA Quadro K5000, NVIDIA Driver 311.35

Steps to reproduce:

  1. Load Volume dataset
  2. Enable Volume Rendering
  3. Switch the rendering mode, e.g. from texture slicing to GPU raycast (see the sketch after this list)
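
For reference, switching the mode programmatically would look roughly like the sketch below. It uses the MITK data node property mechanism, but the property keys ("volumerendering.usegpu", "volumerendering.useray") are assumptions and may differ between MITK versions.

  // Illustrative sketch only: toggle volume rendering modes via node properties.
  // The property keys are assumptions and may differ between MITK versions.
  #include <mitkDataNode.h>
  #include <mitkRenderingManager.h>

  void SwitchToGpuRaycast(mitk::DataNode::Pointer node)
  {
    node->SetBoolProperty("volumerendering", true);         // volume rendering on
    node->SetBoolProperty("volumerendering.usegpu", false); // texture slicing off
    node->SetBoolProperty("volumerendering.useray", true);  // GPU raycasting on
    mitk::RenderingManager::GetInstance()->RequestUpdateAll();
  }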

How to check GPU memory consumption:

E.g. on Windows systems (7 or later), use Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx), open the System Information view (Menubar -> View -> System Information), and select the GPU tab to show GPU compute/memory utilization.
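
On NVIDIA hardware such as the test system above, the numbers can also be polled from inside the application via the vendor-specific GL_NVX_gpu_memory_info OpenGL extension; a minimal sketch (plain OpenGL, not MITK code):

  // Sketch: query total/free GPU memory through the NVIDIA-only
  // GL_NVX_gpu_memory_info extension. Requires a current OpenGL context;
  // the values are reported in KiB.
  #include <GL/gl.h>
  #include <cstdio>

  #ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
  #define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX   0x9048
  #define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
  #endif

  void PrintGpuMemory()
  {
    GLint totalKiB = 0, freeKiB = 0;
    glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalKiB);
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &freeKiB);
    std::printf("GPU memory: %d KiB free of %d KiB\n", freeKiB, totalKiB);
  }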

At step 2 you'll see that GPU memory is allocated for the GPU slicer; at step 3 you'll see that additional GPU memory is allocated for the GPU raycaster.

Interestingly, when switching from GPU raycast to GPU slicing, the raycast renderer's memory is freed. However, when switching from slicing to raycast mode, the GPU memory allocated by the slicing renderer is kept.

Another note: the GPU slicer allocates huge amounts of GPU memory because it uses an RGBA texture instead of a single-channel texture. Furthermore, the dynamic range is reduced for CT datasets, as only 8 bits are used per channel. Is this a limitation imposed by VTK? E.g. for a large dataset (CT, 512x512x1024) the GPU slicer allocated 1024 MByte (8 bit, RGBA) instead of the needed 512 MByte (16 bit, single channel) and also reduced the rendering quality.
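
For the dataset above the two footprints work out as follows (1 MByte = 2^20 byte):

  512 x 512 x 1024 voxels x 4 byte/voxel (RGBA, 8 bit per channel)  = 1 073 741 824 byte = 1024 MByte
  512 x 512 x 1024 voxels x 2 byte/voxel (single channel, 16 bit)   =   536 870 912 byte =  512 MByte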

Event Timeline

New remote branch pushed: bug-14841-gpu-volumrendering-memory-leak

The memory leak should be fixed. The GPU memory was not released when the GPU slicing was deinitialized.
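
A fix of this kind essentially boils down to deleting the OpenGL texture objects when the slicing renderer is torn down; a minimal sketch with hypothetical names (the class and members below are illustrative, not the actual MITK identifiers):

  // Illustrative sketch only: release the GPU-side texture when the slicing
  // renderer is deinitialized. Class and member names are hypothetical.
  #include <GL/gl.h>

  class VolumeSlicingRenderer
  {
  public:
    void Deinitialize()
    {
      if (m_VolumeTexture != 0)
      {
        glDeleteTextures(1, &m_VolumeTexture); // frees the RGBA volume texture on the GPU
        m_VolumeTexture = 0;
      }
    }

  private:
    GLuint m_VolumeTexture = 0;
  };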

[534cc1]: Merge branch 'bug-14841-gpu-volumrendering-memory-leak'

Merged commits:

2013-04-03 15:56:20 Eric Heim [52d3eb]
releasing gpu memory in volumerendering when the gpu slicer is deinitialized


2013-03-31 17:36:31 Sascha Zelzer [1fd96d]
MITK 2013.03.00 version update

Hi Oliver,

Thanks for reporting the memory leak.

> Another note: the GPU slicer allocates huge amounts of GPU memory because it
> uses an RGBA texture instead of a single-channel texture. Furthermore, the
> dynamic range is reduced for CT datasets, as only 8 bits are used per channel.
> Is this a limitation imposed by VTK? E.g. for a large dataset (CT, 512x512x1024)
> the GPU slicer allocated 1024 MByte (8 bit, RGBA) instead of the needed
> 512 MByte (16 bit, single channel) and also reduced the rendering quality.

Actually, all four channels of the RGBA texture are used: the alpha channel contains the gray value (though with the reduced dynamic range of 8 bit).

The RGB channels store a 3D vector field with the precomputed gradient of the gray value, which is used for the lighting calculations.
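
To make that layout concrete, a voxel could be packed roughly as sketched below, assuming a central-difference gradient on a 16-bit input volume (this is an illustration, not the actual MITK/VTK code):

  // Sketch: pack the gradient (RGB) and the gray value (alpha) of one voxel
  // into 8-bit RGBA, as described above. Not the actual MITK/VTK code.
  #include <algorithm>
  #include <cmath>
  #include <cstdint>
  #include <vector>

  void PackVoxel(const std::vector<uint16_t>& vol, int dimX, int dimY, int dimZ,
                 int x, int y, int z, uint8_t rgbaOut[4])
  {
    auto at = [&](int i, int j, int k) {
      return static_cast<float>(vol[(static_cast<size_t>(k) * dimY + j) * dimX + i]);
    };

    // Central differences, clamped at the volume border.
    float gx = at(std::min(x + 1, dimX - 1), y, z) - at(std::max(x - 1, 0), y, z);
    float gy = at(x, std::min(y + 1, dimY - 1), z) - at(x, std::max(y - 1, 0), z);
    float gz = at(x, y, std::min(z + 1, dimZ - 1)) - at(x, y, std::max(z - 1, 0));
    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    if (len > 0.0f) { gx /= len; gy /= len; gz /= len; }

    // Gradient components mapped from [-1, 1] to [0, 255]; the gray value is
    // quantized to 8 bit, which is where the dynamic range is lost.
    rgbaOut[0] = static_cast<uint8_t>((gx * 0.5f + 0.5f) * 255.0f);
    rgbaOut[1] = static_cast<uint8_t>((gy * 0.5f + 0.5f) * 255.0f);
    rgbaOut[2] = static_cast<uint8_t>((gz * 0.5f + 0.5f) * 255.0f);
    rgbaOut[3] = static_cast<uint8_t>(at(x, y, z) * (255.0f / 65535.0f));
  }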

If you are running out of GPU memory, you may consider using texture compression, which reduces the memory footprint to 1/8.
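
In raw OpenGL that amounts to requesting a compressed internal format when uploading the 3D texture; a minimal sketch (whether and how this is exposed through the MITK/VTK mappers is a separate question, and the actual ratio depends on the format the driver picks):

  // Sketch: upload a 3D RGBA texture with a generic compressed internal format
  // and check whether the driver actually stored it compressed.
  // Uses GLEW so that glTexImage3D is declared on Windows.
  #include <GL/glew.h>

  GLuint UploadCompressedVolume(const unsigned char* rgba, int dimX, int dimY, int dimZ)
  {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_COMPRESSED_RGBA, // driver picks a compressed format
                 dimX, dimY, dimZ, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    GLint compressed = GL_FALSE;
    glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_COMPRESSED, &compressed);
    // compressed == GL_TRUE means the texture is stored in a compressed format.
    return tex;
  }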

Extending the GPU slicer to newer OpenGL features such as 16-bit or floating-point textures was not considered, because it was meant to be replaced by the GPU raycasting algorithm.

However, the GPU slicing algorithm is still the most compatible (it works even on very old graphics cards without OpenGL 2.0 support) and the fastest volume rendering method available in MITK. It is also the only one that supports direct rendering of 3D 4-channel RGBA images without a lookup in a transfer function. (The mapper will then allocate two textures: one for the original RGBA volume and an RGB texture for the gradient computed from the alpha channel.)

Cheers
Markus