
Volume Rendering

How To Build A 3D Volume Renderer

Prof. Dr. Stefan Röttger, Stefan.Roettger@th-nuernberg.de






What is the scope of this lecture?

Volume data is a very common data type in medical visualization. It is generated by CT, MRI, and PET scanners, 3D imaging techniques that have become an important standard in everyday clinical routine.

In order to display such volume data, a so-called volume renderer is required. In this lecture we investigate the techniques and algorithms employed by a volume renderer. We also gain in-depth knowledge of volume rendering principles by building a basic volume renderer of our own.


Lecture 1

What is a volume renderer? We find out by trying one hands-on.

Exercise: Get started with Unix and QTV3

Learning Objectives:

Find out what volume data is.
Learn what a volume renderer is capable of.

Objectives Test:

Why do we need special software to display volume data?


Lecture 2

Volume Rendering Prerequisites: Qt

Exercise: Write a modularized Qt application that allows the user to open an image file via QFileDialog (e.g. this one: ). The file selector dialog should be triggered from the menu bar via the signal/slot mechanism. The selected image file is then loaded into a QImage object via QImage::load() and displayed as the background of the main window with QPainter::drawImage() by overriding the widget's paintEvent() method. Also show the image size in MB as text on top of the image.
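
A minimal sketch of one possible solution, assuming Qt 5 with the widgets module (the class name ImageWindow is made up, the code is collapsed into a single file for brevity although the exercise asks for a modularized version, and the size shown is the file size in MB; showing the raw pixel size instead would also satisfy the exercise):

#include <QApplication>
#include <QMainWindow>
#include <QMenuBar>
#include <QMenu>
#include <QFileDialog>
#include <QFileInfo>
#include <QImage>
#include <QPainter>
#include <QPaintEvent>

class ImageWindow : public QMainWindow
{
public:
   ImageWindow()
   {
      // menu bar entry wired to the file dialog via the signal/slot mechanism
      QMenu *file = menuBar()->addMenu("&File");
      QAction *open = file->addAction("&Open...");
      connect(open, &QAction::triggered, this, &ImageWindow::openImage);
   }

protected:
   void openImage()
   {
      QString name = QFileDialog::getOpenFileName(this, "Open Image");
      if (!name.isNull() && image_.load(name)) // QImage::load()
      {
         sizeMB_ = QFileInfo(name).size() / (1024.0*1024.0);
         update(); // schedule a repaint
      }
   }

   void paintEvent(QPaintEvent *) override
   {
      QPainter painter(this);
      if (!image_.isNull())
      {
         painter.drawImage(rect(), image_); // stretch over the window background
         painter.drawText(10, 20, QString("%1 MB").arg(sizeMB_, 0, 'f', 2));
      }
   }

private:
   QImage image_;
   double sizeMB_ = 0.0;
};

int main(int argc, char *argv[])
{
   QApplication app(argc, argv);
   ImageWindow window;
   window.resize(640, 480);
   window.show();
   return app.exec();
}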

Learning Objectives:

Learn basic GUI concepts with Qt.
Get used to 2D graphics paradigms with QPainter.
Get used to the Qt framework:
svn co svn://schorsch.efi.fh-nuernberg.de/qt-framework

Objectives Test:

Describe the steps to set up a simple UI with a menu bar in Qt.


Lecture 3

Volume Rendering Prerequisites: OpenGL

Exercise: Extend the Qt framework with your own QGLWidget-derived class that renders a stack of 10 semi-transparent and differently colored (yet untextured) slices within the unit cube (see this example rendering). Check the effect of enabling or disabling the z-buffer and blending. Let the camera rotate around the stack and make sure that the rendering order is always back to front. Raise the camera a bit so that you look down onto the stack. Also try a wide-angle and a tele lens and tilt the camera. Lastly, put the rotating stack on a table with four legs using the matrix stack.
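
A minimal sketch of the core rendering part, assuming a QGLWidget subclass with fixed-function OpenGL and GLU (the names SliceWidget and angle_ are made up; the table and the camera tilt are omitted):

#include <QGLWidget>
#include <GL/glu.h>

class SliceWidget : public QGLWidget
{
protected:
   void initializeGL() override
   {
      glClearColor(0, 0, 0, 1);
      glEnable(GL_BLEND);                                // semi-transparency
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard over-operator
      glDisable(GL_DEPTH_TEST); // with the z-buffer enabled, slices drawn first
                                // would occlude later ones; we sort instead
   }

   void resizeGL(int w, int h) override
   {
      glViewport(0, 0, w, h);
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      gluPerspective(60.0, (double)w/h, 0.1, 10.0); // large fovy = wide-angle lens
   }

   void paintGL() override
   {
      glClear(GL_COLOR_BUFFER_BIT);

      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();
      gluLookAt(0,1,3, 0,0,0, 0,1,0); // raised camera looking down at the origin
      glRotated(angle_, 0,1,0);       // rotate the stack about the y-axis

      // 10 semi-transparent slices within the unit cube, drawn back to front;
      // NOTE: once the rotation exceeds 90 degrees, the eye-space order flips
      // and the loop has to run in reverse (omitted here for brevity)
      for (int i=0; i<10; i++)
      {
         double z = -0.5 + i/9.0;               // slice position
         glColor4d(i/9.0, 0.5, 1.0-i/9.0, 0.5); // distinct color, 50% alpha
         glBegin(GL_QUADS);
            glVertex3d(-0.5,-0.5,z);
            glVertex3d( 0.5,-0.5,z);
            glVertex3d( 0.5, 0.5,z);
            glVertex3d(-0.5, 0.5,z);
         glEnd();
      }

      angle_ += 1.0;
      update(); // keep the rotation going
   }

   double angle_ = 0.0;
};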

Learning Objectives:

Get used to 3D graphics paradigms with OpenGL and QGL.

Objectives Test:

What is the difference between OpenGL and QGL?
What is the difference between retained and direct rendering mode? Give examples of software packages for both modes.
Describe the steps to set up a simple 3D scene to be rendered with OpenGL.
What is an alpha value and what needs to be taken into account when rendering objects with alpha values?
If you change some parameters of the calls to gluPerspective, gluLookAt, glRotate, glTranslate, what effect does this have?
If you change the order of the calls to glColor and glVertex, what goes wrong?


Lecture 4

Volume Rendering Prerequisites: 3D Texturing

Exercise: Modulate the geometry of the previous exercise with a "checkerboard" 3D texture. Check both the GL_NEAREST and GL_LINEAR texture filtering modes. Then render real DICOM data instead of the checkerboard texture (use the dicombase.h module of the framework to load a DICOM series, e.g. the Artichoke series of the dicom-data repo). Implement a simple MPR (Multi-Planar Reconstruction) user interface that shows two axis-aligned slices through a DICOM volume.
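
A minimal sketch of the checkerboard part, assuming a current GL context and that the glTexImage3D entry point is available (core since OpenGL 1.2, but on some platforms it must be resolved as an extension); the helper name createCheckerTexture3D is made up:

#include <GL/gl.h>
#include <vector>

GLuint createCheckerTexture3D(int size)
{
   // byte volume with alternating black/white cells
   std::vector<unsigned char> data(size*size*size);
   for (int z=0; z<size; z++)
      for (int y=0; y<size; y++)
         for (int x=0; x<size; x++)
            data[(z*size + y)*size + x] = ((x+y+z)&1) ? 255 : 0;

   GLuint texid;
   glGenTextures(1, &texid);
   glBindTexture(GL_TEXTURE_3D, texid);

   // compare GL_NEAREST (blocky cells) with GL_LINEAR (smooth transitions)
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

   glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE,
                size, size, size, 0,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, data.data());

   return texid;
}

// when rendering the slices, enable GL_TEXTURE_3D and pass texture
// coordinates (s,t,r) in [0,1]^3 that mirror each vertex position
// within the unit cube, e.g. glTexCoord3d(x+0.5, y+0.5, z+0.5)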

Learning Objectives:

Learn texturing principles: Texture Coordinates and Texture Data Storage
Extension of 2D texturing to 3D texturing

Objectives Test:

Sketch an example showing the texture coordinates for a pyramid textured with a 3D texture.


Lecture 5

Direct Volume Rendering (DVR)

Exercise: Render view-aligned slices instead of axis-aligned slices (use the slicer.h module of the framework). Implement the MIP algorithm by using the corresponding OpenGL blending mode.
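
A minimal sketch of the MIP blending state; glBlendEquation is core since OpenGL 1.4 (earlier available via the EXT_blend_minmax extension):

// maximum intensity projection: each fragment replaces the frame buffer
// value with the component-wise maximum, so slice order does not matter
glEnable(GL_BLEND);
glBlendEquation(GL_MAX);     // max() instead of the usual weighted sum
glBlendFunc(GL_ONE, GL_ONE); // factors are ignored by GL_MAX; set for clarity
glDisable(GL_DEPTH_TEST);    // slices must not occlude each other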

Learning Objectives:

Understand the main differences between Ray Casting and Slicing.
Introduction to volume rendering with MIP.

Objectives Test:

Which scattered rays are neglected with DVR?
How is the intersection of a plane with the unit cube computed? (A sketch follows after this list.)
Describe the MIP rendering technique!
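
A sketch of one common answer, under the assumption that the cube is [0,1]^3 and the plane is given in Hesse normal form (all helper names are made up): intersect the plane with each of the 12 cube edges, then sort the up to six hit points by angle around their centroid, which yields a convex polygon that can be rendered as a triangle fan.

#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b)
   { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }

// plane given by unit normal n and distance d: dot(n,p) = d
std::vector<Vec3> slicePlaneUnitCube(Vec3 n, double d)
{
   static const Vec3 corner[8] = {
      {0,0,0},{1,0,0},{0,1,0},{1,1,0},{0,0,1},{1,0,1},{0,1,1},{1,1,1}};
   static const int edge[12][2] = {
      {0,1},{2,3},{4,5},{6,7},  // edges along x
      {0,2},{1,3},{4,6},{5,7},  // edges along y
      {0,4},{1,5},{2,6},{3,7}}; // edges along z

   std::vector<Vec3> poly;
   for (int i=0; i<12; i++)
   {
      Vec3 a = corner[edge[i][0]], b = corner[edge[i][1]];
      double da = dot(n,a)-d, db = dot(n,b)-d; // signed point-plane distances
      if (da*db < 0) // end points on opposite sides -> edge pierces the plane
      {              // (end points exactly on the plane are ignored for brevity)
         double t = da/(da-db); // interpolation parameter along the edge
         poly.push_back({a.x+t*(b.x-a.x), a.y+t*(b.y-a.y), a.z+t*(b.z-a.z)});
      }
   }

   if (poly.size() > 2)
   {
      // centroid of the intersection points
      Vec3 c = {0,0,0};
      for (const Vec3 &p : poly) { c.x+=p.x; c.y+=p.y; c.z+=p.z; }
      c = {c.x/poly.size(), c.y/poly.size(), c.z/poly.size()};

      // sort by angle in the plane spanned by u and v = n x u
      Vec3 u = sub(poly[0], c), v = cross(n, u);
      std::sort(poly.begin(), poly.end(),
                [&](const Vec3 &a, const Vec3 &b)
                {
                   Vec3 pa = sub(a,c), pb = sub(b,c);
                   return std::atan2(dot(pa,v), dot(pa,u)) <
                          std::atan2(dot(pb,v), dot(pb,u));
                });
   }

   return poly; // up to 6 vertices of a convex polygon
}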


Lecture 6

Optical Model and Transfer Functions

Exercise: Implement the MIP volume rendering technique by using view-aligned slicing. With that foundation, implement the DVR technique by using the corresponding OpenGL blending mode, assuming a linear transfer function (a blending sketch follows below). Use the following helper modules:

With the Artichoke DICOM data the step-by-step implementation should look like the following:

(Example renderings, from left to right: plain triangle, 3D textured triangle, textured single slice, view-aligned slices with MIP.)
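
For the DVR step, a minimal sketch of the blending state, which differs from the MIP setup only in the blend equation and factors (the opacity-correction note assumes a fixed reference slice count n_ref):

// back-to-front compositing with the over-operator; with a linear
// transfer function the scalar value sampled from the 3D texture is
// used directly as gray value and opacity
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);                      // back to the default sum
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // over-operator
glDisable(GL_DEPTH_TEST);

// render the view-aligned slice polygons strictly back to front here;
// when the slice count n changes, keep the overall opacity constant by
// correcting each slice's alpha: alpha' = 1 - pow(1-alpha, n_ref/n)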

Learning Objectives:

Be able to derive the line integral of the standard optical model (stated below for reference).
Be able to adapt the integral to be used with OpenGL.
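
For reference, a sketch of the two objectives in formulas: the standard emission-absorption optical model and its discretization into the back-to-front compositing recursion that OpenGL alpha blending realizes (here I_0 is the light entering the volume from behind, q the emission, \kappa the absorption coefficient, and \alpha_i, C_i the opacity and color of slice i):

\[
  I(D) = I_0\, e^{-\int_0^D \kappa(t)\,dt}
       + \int_0^D q(s)\, e^{-\int_s^D \kappa(t)\,dt}\, ds
\]

\[
  C_i' = \alpha_i C_i + (1-\alpha_i)\, C_{i+1}'
\]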

Objectives Test:

Describe the standard optical model.
Describe how the MIP algorithm needs to be modified to implement the optical model.


Lecture 7

Advanced Techniques

Exercise: Show the difference between DVR and gradient magnitude (GM) rendering by using QTV3 with a CT and an MR dataset.

Learning Objectives:

Know advanced DVR algorithms of practical relevance.

Objectives Test:

What is the main advantage of preintegration?
Describe a use case for the gradient magnitude technique.


Course Certificate (PDF)
