diff --git a/Core/Documentation/Doxygen/Concepts/GeometryOverview.dox b/Core/Documentation/Doxygen/Concepts/GeometryOverview.dox index d0327a4b4e..2d186a605c 100644 --- a/Core/Documentation/Doxygen/Concepts/GeometryOverview.dox +++ b/Core/Documentation/Doxygen/Concepts/GeometryOverview.dox @@ -1,134 +1,136 @@ namespace mitk{ /** \page GeometryOverviewPage Geometry Overview \tableofcontents \section GeometryOverviewPage_Introduction Introduction to Geometries Geometries are used to describe the geometrical properties of data objects in space and time.\n To use the geometry classes correctly, you have to understand the three different coordinate types present in MITK:\n\n \imageMacro{CoordinateTypes.png,"",16}
The different coordinate types\n\n
\n -# World coordinates: - World coordinates describe the actual spatial position of all MITK objects with respect to a global coordinate system, normally specified by the imaging modality - World coordinates are represented by mitk::Point3D objects. - The geometry defines the offset, orientation, and scale of the considered data objects in reference to the world coordinate system. - World coordinates are always measured in mm - If you are dealing with an image geometry, the origin of an image points to the CENTER of the bottom-left-back voxel.\n - If you are NOT dealing with an image geometry (no defined discrete voxels), the origin points to the bottom-left-back CORNER - Index coordinates can be converted to world coordinates by calling BaseGeometry::IndexToWorld()\n\n -\imageMacro{worldcoordinateSystem.png,"",16} +\imageMacro{worldcoordinateSystem.png,"",12}
Corner-based coordinates\n\n
-\imageMacro{WorldcoordinateSystemCenterBased.png,"",16} +\imageMacro{WorldcoordinateSystemCenterBased.png,"",12}
Center-based image-coordinates\n\n
\n -# Continuous index coordinates: - Dividing world coordinates by the pixel spacing while taking the offset into account leads to continuous index coordinates inside your data object.\n So continuous index coordinates can be float values! - Continuous index coordinates are represented by mitk::Point3D objects. - They can be obtained by calling BaseGeometry::WorldToIndex(), where \a pt_mm is a point in world coordinates.\n -# Index coordinate system: - Index coordinates are discrete values that address voxels of a data object explicitly. - Index coordinates are represented by mitk::Index3D objects. - Basically, they are continuous index coordinates which are rounded from half integer up. - E.g. (0,0) specifies the very first pixel of a 2D image, (0,1) the pixel in the next row of the same column - If you have world coordinates, they can be converted to discrete index coordinates by calling BaseGeometry::WorldToIndex() (see the code sketch further below)\n\n \section GeometryOverviewPage_PointsAndVector Difference between Points and Vectors Like ITK, MITK differentiates between points and vectors. A point defines a position in a coordinate system, while a vector is the distance between two points. Therefore, points and vectors behave differently when a coordinate transformation is applied: an offset in a coordinate transformation will affect a transformed point but not a vector. An example:\n Given two systems which differ by an offset of (1,0,0), the point A(2,2,2) in system one will correspond to the point A'(3,2,2) in the second system, but a vector a(2,2,2) will correspond to the vector a'(2,2,2). \section GeometryOverviewPage_Concept The Geometry Concept As the superclass of all MITK geometries, BaseGeometry holds: - a spatial bounding box which is axis-parallel in index coordinates (often discrete indices of pixels), to be accessed by BaseGeometry::GetBoundingBox() - a time-related bounding box which holds the temporal validity of the considered data object in milliseconds (start and end time), to be accessed by BaseGeometry::GetTimeBounds().\n The default for 3D geometries is minus infinity to plus infinity, meaning the object is always displayed independent of the displayed time in MITK. - position information in the form of a Euclidean transform in respect to world coordinates (i.e. a linear transformation matrix and offset) to convert (discrete or continuous) index coordinates to world coordinates and vice versa,\n to be accessed by BaseGeometry::GetIndexToWorldTransform()\n See also: \ref GeometryOverviewPage_Introduction "Introduction to Geometries" - Many other properties (e.g. origin, extent, ...) which can be found in the \ref BaseGeometry "class documentation" - VERY IMPORTANT:\n A flag called isImageGeometry, which indicates whether the coordinates are center-based or not!\n See also: \ref GeometryOverviewPage_Introduction "Introduction to Geometries" and \ref GeometryOverviewPage_Putting_Together "IMPORTANT: Putting it together for an Image"\n\n Every data object (sub-)class of BaseData has a TimeGeometry which is accessed by BaseData::GetTimeGeometry(). This TimeGeometry holds one or more BaseGeometry objects which describe the object at specific time points, e.g. provide conversion between world and index coordinates and contain bounding boxes covering the area in which the data are placed. There is the possibility of using different implementations of the abstract TimeGeometry class, which may differ in how the time steps are saved and the times are calculated.
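Before turning to time-related geometry, here is a minimal sketch of the coordinate conversions introduced above. It assumes an existing mitk::Image \a image; the variable names are illustrative:
\code
// Geometry of the first time step of an assumed image
mitk::BaseGeometry *geometry = image->GetGeometry();

mitk::Point3D worldPoint; // position in mm
worldPoint[0] = 15.0;
worldPoint[1] = 10.0;
worldPoint[2] = 0.0;

// World -> continuous index coordinates (can be float values)
mitk::Point3D continuousIndex;
geometry->WorldToIndex(worldPoint, continuousIndex);

// World -> discrete index coordinates (addresses a voxel explicitly)
itk::Index<3> index;
geometry->WorldToIndex(worldPoint, index);

// Continuous index -> world coordinates
mitk::Point3D worldPointAgain;
geometry->IndexToWorld(continuousIndex, worldPointAgain);
\endcode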
There are two ways to represent a time, either by a TimePointType or a TimeStepType. The first is similar to continuous index coordinates and defines a time point in milliseconds from time point zero. The second type is similar to index coordinates: these are discrete values which specify the number of the current time step, going from 0 to GetNumberOfTimeSteps(). The conversion between a time point and a time step is done by calling the method TimeGeometry::TimeStepToTimePoint() or TimeGeometry::TimePointToTimeStep(). Note that the duration of a time step may differ from object to object, so in general it is better to calculate the corresponding time steps by using time points. Also, the time steps do not need to be equidistant over time; this depends on the used TimeGeometry implementation. Each TimeGeometry has a bounding box covering the whole area in which the corresponding object is situated during all time steps. This bounding box may be accessed by calling TimeGeometry::GetBoundingBoxInWorld() and is always in world coordinates. The bounding box is calculated from all time steps; to manually start this calculation, call TimeGeometry::Update(). The bounding box is not updated if the getter is called. The TimeGeometry does not provide a transformation of world coordinates into image coordinates since each time step may have a different transformation. If a conversion between image and world is needed, the BaseGeometry for a specific time step or time point must be fetched, either by TimeGeometry::GetGeometryForTimeStep() or TimeGeometry::GetGeometryForTimePoint(), and then the conversion is calculated using this geometry (see the code sketch after the Display Geometry figure below). The TimeGeometry class is an abstract class; therefore, it cannot be instantiated. Instead, a derived class must be used. Currently the only class that can be chosen is ProportionalTimeGeometry, which assumes that the time steps are equidistantly spaced. To initialize an object with given geometries, call ProportionalTimeGeometry::Initialize() with an existing BaseGeometry and the number of time steps. The given geometries will be copied and not referenced! Also, BaseGeometry is an abstract class and derived classes must be used. The simplest implementation, i.e. the one-to-one implementation of the BaseGeometry class, is the class Geometry3D. SlicedGeometry3D is a sub-class of BaseGeometry which describes data objects consisting of slices, e.g., objects of type Image (or SlicedData, which is the super-class of Image). Therefore, the TimeGeometry returned by Image::GetTimeGeometry() will contain a list of SlicedGeometry3D instances. There is a special method SlicedData::GetSlicedGeometry(t) which directly returns\n a SlicedGeometry3D to avoid the need of casting. The class SlicedGeometry3D contains a list of PlaneGeometry objects describing the slices in the image. We have here spatial steps from 0 to GetSlices(). SlicedGeometry3D::InitializeEvenlySpaced(PlaneGeometry *planeGeometry, unsigned int slices) initializes a stack of slices with the same thickness, one starting at the position where the previous one ends. PlaneGeometry provides methods for working with 2D manifolds (i.e., simply put, objects that can be described using a 2D coordinate system) in 3D space.\n For example, it allows mapping a 3D point onto the 2D manifold using PlaneGeometry::Map(). \n An important subclass of PlaneGeometry is the DisplayGeometry, which describes the geometry of the display (the monitor screen).
Basically it represents a rectangular view on a 2D world geometry.\n The DisplayGeometry converts between screen and world coordinates, processes input events (e.g. mouse click) and provides methods for zooming and panning.\n -\imageMacro{DisplayGeometry.png,"",16} + +\imageMacro{DisplayGeometry.png,"",9} +
Display Geometry\n\n
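The following sketch makes the TimeGeometry handling described above concrete: a time point is converted into a time step, the geometry of that time step is fetched, and this geometry is used for a world-to-index conversion. It assumes an existing mitk::BaseData \a data and a mitk::Point3D \a worldPoint:
\code
mitk::TimeGeometry *timeGeometry = data->GetTimeGeometry();

// Continuous time point (in ms) -> discrete time step
mitk::TimePointType timePoint = 1500.0;
mitk::TimeStepType timeStep = timeGeometry->TimePointToTimeStep(timePoint);

// The TimeGeometry itself cannot convert between world and index
// coordinates; fetch the BaseGeometry of the time step instead
mitk::BaseGeometry::Pointer geometry = timeGeometry->GetGeometryForTimeStep(timeStep);

mitk::Point3D indexPoint;
geometry->WorldToIndex(worldPoint, indexPoint);

// Bounding box over all time steps, in world coordinates;
// the getter does not update the box, so update manually first
timeGeometry->Update();
mitk::BoundingBox::Pointer boundingBox = timeGeometry->GetBoundingBoxInWorld();
\endcode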
Finally, there is the AbstractTransformGeometry, which describes a 2D manifold in 3D space defined by a vtkAbstractTransform. It is an abstract superclass for arbitrary user-defined geometries.\n An example is the ThinPlateSplineCurvedGeometry.\n \subsection GeometryOverviewPage_Putting_Together IMPORTANT: Putting it together for an Image Please read this section carefully if you are working with Images! The definition of the position of the corners of an image differs from that of other data objects: As mentioned in the previous section, world coordinates of data objects (e.g. surfaces) usually specify the bottom-left-back corner of an object. In contrast, the geometry of an Image is center-based, which means that the world coordinates of a voxel belonging to an image point to the center of that voxel. E.g.: -\imageMacro{PixelCenterBased.png,"",16} +\imageMacro{PixelCenterBased.png,"",6}
Center-based voxel\n\n
If the origin of, e.g., a surface lies at (15,10,0) in world coordinates, the origin's world coordinates for an image are internally calculated as follows:
(15 - 0.5 * X-Spacing,\n 10 - 0.5 * Y-Spacing,\n 0 - 0.5 * Z-Spacing)\n
If the image's spacing is (x,y,z)=(1,1,3), then the corner coordinates are (14.5,9.5,-1.5). If your geometry describes an image, the member variable isImageGeometry must be changed to true. This variable also indicates whether your geometry is center-based.\n The change can be done in two ways:\n -# You are sure that your origin is already center-based, either because you adjusted it manually or copied it from another image.\n In that case, you can call the function SetImageGeometry(true) or ImageGeometryOn() to set the bool variable to true. -# You created a new geometry, did not manually adjust the origin to be center-based and have the bool value isImageGeometry set to false (default).\n In that case, call the function ChangeImageGeometryConsideringOriginOffset(true). It will adjust your origin automatically and set the bool flag to true.\n If you experience displaced contours, figures or other objects, it is an indicator that you have not considered the origin offset mentioned above.\n\n An image has a TimeGeometry, which contains one or more SlicedGeometry3D instances (one for each time step), all of which contain one or more instances of (sub-classes of) PlaneGeometry.\n As a reminder: Geometry instances referring to images need a slightly different definition of corners, see BaseGeometry::SetImageGeometry. This is usually automatically called by Image.\n\n \section GeometryOverviewPage_Connection Connection between MITK, ITK and VTK Geometries -\imageMacro{ITK_VTK_MITK_Geometries.png,"",16} +\imageMacro{ITK_VTK_MITK_Geometries.png,"",10} \n\n - VTK transformation for rendering - ITK transformation for calculations - Both are automatically updated when one is changed\n Attention: they are not automatically updated when changed directly in code. Example: geometry->GetVtkMatrix()->Rotate(....) */ } diff --git a/Core/Documentation/Doxygen/Concepts/Interaction.dox b/Core/Documentation/Doxygen/Concepts/Interaction.dox index 1aff8413b9..e31e17477e 100644 --- a/Core/Documentation/Doxygen/Concepts/Interaction.dox +++ b/Core/Documentation/Doxygen/Concepts/Interaction.dox @@ -1,127 +1,127 @@ /** \deprecated \page InteractionPage Interaction and Undo/Redo Concepts \note The following page refers to the deprecated interaction framework. Please refer to \ref DataInteractionPage for information about the current one. \tableofcontents \section InteractionPage_Introduction Interaction in MITK \b Interaction is one of the most important tasks in clinically useful image processing software. Because of that, MITK has a special interaction concept with which the developer can map the desired interaction. For a simple change in interaction, the developer doesn't have to change the code. All information about the sequence of the interaction is stored in an XML file that is loaded by the application during the startup procedure at runtime. This even allows the storage of different interaction patterns, e.g. an interaction behaviour like in MS PowerPoint, in Adobe Photoshop or like the interaction behaviour on a medical image retrieval system. \section InteractionPage_Statemachines_Implementation Statemachines to implement Interaction The interaction in MITK is implemented with the concept of state machines (by Mealy). This concept allows building the steps of interaction from different states, each of which has different conditions, much like the different interactions that may have to be built when developing medical imaging applications. Furthermore, state machines can be implemented using object oriented programming (OOP).
Because of that, we can abstract from the section of code that implements the interaction and focus on the sequence of interaction. What steps must the user take before the program can compute a result? For example, the user has to declare three points in space first, and these points are the input of a filter, so only after the definition of the points can the filter produce a result. The corresponding interaction sequence will inform the filter after the third point is set and not before. Now suppose that, after an adaptation, the filter only needs two points as input. The sequence of the interaction can be easily changed if it is built up as a sequence of objects and not hard-coded in, e.g., a switch/case block. Or the user wants to add a point in the scene with the right mouse button instead of the left. Wouldn't it be nice to only change the definition of an interaction sequence rather than having to search through the code and change every single if/else condition? \subsection InteractionPage_Statemachine State Machine So separating the definition of an interaction sequence from its implementation is a useful step in the development of an interactive application. To be able to do that, we implemented the concept of state machines with several classes: States, Transitions and Actions define the interaction pattern. The state machine itself adds the handling of events that are sent to it. -\imageMacro{statemachine.jpg,"",16} +\imageMacro{statemachine.jpg,"",10} \subsubsection InteractionPage_ExampleA Example A: A deterministic Mealy state machine always has one current state (here state 1). If an event 1 is sent to the state machine, it searches in its current state for a transition that waits for event 1 (here transition 1). The state machine finds transition 1, changes the current state to state 2, as the transition points to it, and executes actions 1 and 2. Now state 2 is the current state. The state machine receives an event 2 and searches for an according transition. Transition 2 waits for event 2, and since the transition leads to state 2, the current state is not changed. Actions 3 and 4 are executed. Now event 3 gets sent to the state machine, but the state machine can't find an according transition in state 2. Only transition 2, which waits for event 2, and transition 3, which waits for event 4, are defined in that state. So the state machine ignores the event and doesn't change the state or execute an action. Now the state machine receives an event 4 and finds transition 3. So now the current state changes from state 2 to state 1, and actions 5 and 1 are executed. Several actions can be defined in one transition. The execution of an action is the active part of the state machine. Here is where the state machine can make changes in data, e.g. add a point to a list. See mitk::StateMachine, mitk::State, mitk::Event, mitk::Action, mitk::Transition, mitk::Interactor \subsection InteractionPage_GuardState Guard States Guard states are a special kind of states. The action that is executed after the state becomes the current state sends a new event to the state machine, which leads out of the guard state. So the state machine will only stay in a guard state for a short time. This kind of state is used to check different conditions, e.g. whether an object is picked or whether a set of points will be full after the addition of one point.
-\imageMacro{statemachine_guard.jpg,"",16} +\imageMacro{statemachine_guard.jpg,"",10} \subsubsection InteractionPage_ExampleB Example B: Event 1 is sent to the state machine. This leads the current state from state 1 into state check. Action 1 is executed. This action checks a condition and puts the result into a new event, which is sent to and handled by the same (this) state machine. E.g.: is the object picked at the received mouse coordinate? The event that is generated will be Yes or No. In case of event No, the state machine sets the current state back to state 1 and executes action 2. In case of event Yes, the state machine changes the state from state check into state 2 and executes action 3, which e.g. can select said object. \subsection InteractionPage_XMLDefinitionStatemachine Definition of a State machine Due to the separation of the definition of an interaction sequence and its implementation, the definition has to be stored somewhere the application can reach during startup, to build up all the objects (states, transitions and actions) that represent the sequence of a particular interaction. In MITK, this information is defined in an XML file (usually in Core/Code/Resources/Interactions/Legacy/StateMachine.xml) \note Please note that since this is a resource which is compiled into the executable, changes you make to this file will only be reflected in application behavior after you recompile your code. The structure is the following (from \ref InteractionPage_ExampleA; shown here as a reconstructed sketch, see StateMachine.xml for the exact attribute names):
\code
<stateMachine NAME="ExampleA">
  <state NAME="state1" ID="1" START_STATE="TRUE">
    <transition NAME="transition1" NEXT_STATE_ID="2" EVENT_ID="1">
      <action ID="1" />
      <action ID="2" />
    </transition>
  </state>
  <state NAME="state2" ID="2">
    <transition NAME="transition2" NEXT_STATE_ID="2" EVENT_ID="2">
      <action ID="3" />
      <action ID="4" />
    </transition>
    <transition NAME="transition3" NEXT_STATE_ID="1" EVENT_ID="4">
      <action ID="5" />
      <action ID="1" />
    </transition>
  </state>
</stateMachine>
\endcode
The identification numbers (ID) inside a state machine have to be unique. Each state machine has to have one state that is defined as the start state of that state machine. This means that, initially, the current state of the state machine is the start state. The event IDs seen above are also defined in the StateMachine.xml file. They specify a unique number for a combination of input conditions (key, mouse and so on). See \ref InteractionPage_InteractionEvents for further information. The state machine is compiled into an application at compile time. The definition of one single state machine is called the \a statemachine-pattern. Since this pattern is built up during startup with objects (states, transitions and actions), and these objects only hold information about what interaction may be done at the current state, we can also reuse the pattern. \note You as a developer don't necessarily have to implement your own XML file! We already have defined some interaction patterns (e.g. for setting points in 2D or 3D) which you can use and adapt. \subsubsection InteractionPage_ReusePattern Reuse of Interaction Patterns Say we have a pattern called "pointset", which defines how the user can set different points into the scene, and an instance of a state machine class called "PointSetInteractor". This state machine has a pointer pointing to the current state in its assigned state machine pattern. Several events are sent to the state machine, which moves the pointer from one state to the next, according to the transitions, and executes the actions referenced in the transitions. But now a new instance of the class "PointSetInteractor" has to be built. So we reuse the pattern and let the current state pointer of the new object point to the start state of the pattern "pointset". The implementation of the actions is \b not done inside a class of the pattern (\a state, \a transition, \a action), it is done inside a state machine class (see the reference for mitkStatemachine).
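The following sketch shows such a reuse of the "pointsetinteractor" pattern. It assumes an existing mitk::DataNode \a pointSetNode holding a mitk::PointSet; the registration call is explained in the GlobalInteraction section below:
\code
// Instantiate a state machine that reuses the "pointsetinteractor" pattern
mitk::PointSetInteractor::Pointer interactor =
  mitk::PointSetInteractor::New("pointsetinteractor", pointSetNode);

// Register it, so that suitable events are forwarded to it
mitk::GlobalInteraction::GetInstance()->AddInteractor(interactor);
\endcode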
\subsection InteractionPage_InteractionEvents Events During runtime, events are raised by, e.g., the mouse, passed to the operating system, then sent to your graphical user interface, and from there they have to be sent to the MITK object called \a mitkEventMapper. This class maps the received events against an internal list of all events that can be understood in MITK. The definition of all understandable events is also located in the XML file the state machines are defined in. If the received event can be found in the list, an internal MITK event number is added to the event and it is sent to the object \a mitkGlobalInteraction. See mitk::Event, mitk::GlobalInteraction \subsection InteractionPage_GlobalInteraction GlobalInteraction This object administers the transmission of events to registered state machines. There can be two kinds of state machines: the ones that are only listening and the ones that also change data. Listening state machines are here called Listeners, and state machines that also change data are called Interactors. \note The discrimination between \a Listener and \a Interactor is only made in mitkGlobalInteraction. As Listener, an object derived from class StateMachine can be added to and removed from GlobalInteraction, and as Interactor, an object derived from class Interactor can be added and removed. See the interaction class diagram for further information. To add a state machine to or remove it from the list of registered interactors, call \a AddInteractor or \a RemoveInteractor of \a GlobalInteraction; to add or remove a listener, call \a AddListener or \a RemoveListener. Listeners are always provided with the events. Interactors shall only be provided with an event if they can handle the event. Because of that, the method CanHandleEvent is called, which is implemented in each Interactor. This method analyses the event and returns a value between 0 (can't handle the event) and 1 (best choice to handle the event). Information that can help to calculate this suitability can be the bounding box of the interacted data and the picked mouse position stored in the event. So after the object \a GlobalInteraction has received an event, it sends this event to all registered Listeners and then asks all registered Interactors through the method \a CanHandleEvent how well each Interactor can handle this event. The Interactor that can handle the event best receives the event. Also see the documented code in \a mitkGlobalInteraction. To avoid asking all registered interactors on every new event, the class \a Interactor also has a mode, which can be one of the following: deselected, subselected (deprecated since HierarchicalInteraction has been removed), selected. These modes are also used for the event mechanism. If an interactor is in a state where the user builds up a graphical object, it is likely that the following events also belong to the construction of that object. Here the interactor stays in mode selected as long as it can handle the events; once it cannot handle an event, it changes to mode deselected. The mode changes are done in the actions through operations (described further down) and are thus declared inside the interaction pattern. See mitk::GlobalInteraction \subsection InteractionPage_Interactors Interactors The class \a Interactor is the superclass for all state machines that handle the interaction for a single data object. An example is the class \a mitkPointSetInteractor, which handles the interaction of the data \a mitkPointSet.
Inside the class \a mitkPointSetInteractor all actions, defined in the interaction pattern "pointsetinteractor", are implemented. Inside the implementation of these actions (\a ExecuteAction(...) ), so-called \a mitkOperations are created, filled with information and sent to the \a mitkUndoController and to the \a mitkOperationActor (the data the interaction is handled for). See mitk::Interactor \subsection InteractionPage_ExecOperations Executing Operations The class mitkOperation and its subclasses basically hold all information needed to execute a certain change of data. This change of data is only done inside the data class itself, which is derived from the interface \a mitkOperationActor. Interactors handle the interaction through state differentiation, combine all information about the change in a \a mitkOperation and send this operation object to the method ExecuteOperation (of the data class). Here the necessary information is extracted and then the change of data is performed. When the operation object, here called do-operation, is created inside the method \a ExecuteAction (in class \a mitkInteractor), an undo-operation is also created and stored together with the do-operation in an object called \a OperationEvent. After the Interactor has sent the do-operation to the data, the operation-event object is then sent to the instance of class \a mitkUndoController, which administers the undo mechanism. See mitk::Operation, mitk::OperationActor \subsection InteractionPage_UndoController UndoController The instance of class \a mitkUndoController administers different Undo models. Currently implemented is a limited linear Undo. Only one Undo model can be activated at a time. The UndoController sends the received operation events on to the current Undo model, which then stores them according to the model. If the method \a Undo() of UndoController is called (e.g. the Undo button is pressed), the call is sent to the current Undo model. Here the undo-operation from the last operation event in the list is taken and sent to the data, which is referenced by a pointer that is also stored in the operation event. A call of the method \a Redo() is handled accordingly. See mitk::UndoController, mitk::LimitedLinearUndo \subsection InteractionPage_references References [Bin99] Robert V. Binder. Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison-Wesley, 1999 */ diff --git a/Core/Documentation/Doxygen/Concepts/Pipelining.dox b/Core/Documentation/Doxygen/Concepts/Pipelining.dox index 1065667ae3..1ff6d4f4f8 100644 --- a/Core/Documentation/Doxygen/Concepts/Pipelining.dox +++ b/Core/Documentation/Doxygen/Concepts/Pipelining.dox @@ -1,85 +1,85 @@ /** \page PipelineingConceptPage Pipelining Concept \tableofcontents \section PipelineingConceptPage_Introduction Introduction to Pipelining Image processing in MITK draws heavily from the pipelining concept, and a clear understanding of it is crucial when developing with MITK. This document will first clarify the general idea behind pipelining and then discuss some MITK specifics that you should know about. In the real world, a pipeline connects a source of some kind with a consumer of another. So we identify three key concepts: 1. The source, which generates data of some kind. 2. The pipeline, which transports the data. Many different pipeline segments can be connected in series to achieve this. 3. The consumer, which uses the data to do something of interest.
The analogy to real pipelines falls a little short in one point: A physical pipeline would never process its contents, while in software development a pipeline usually does (this is why the segments are often dubbed filters as well). One might ask why one shouldn't just implement the processing logic in the consumer object itself, since it obviously knows best what to do with its data. The two main reasons for this are reusability and flexibility. Say one wants to display a bone segmentation from a CT image. Let's also assume, for the sake of this introduction, that this is a simple task. One could build a monolithic class that solves the problem. Or one builds a pipeline between the displaying class and the source. We know that bones are very bright in a CT scan, so we use a threshold filter and then a segmentation filter to solve the problem. -\imageMacro{pipelining_example_ct.png,"",16} +\imageMacro{pipelining_example_ct.png,"",10} Now let's further assume that, after successfully selling this new technology to a large firm, we plan to do the same with ultrasound imaging technology. The brightness relations in ultrasound images are basically the same, but ultrasound images are very noisy and the contrast is significantly lower. Since we used pipelining, this is no problem: We don't need to change our old segmentation class - we just plug two new filters in front of the pipeline: \imageMacro{pipelining_example_us.png,"",16} This may seem trivial, but when working with several input streams from many different devices that themselves stem from many different vendors, pipelining can save the day when it comes to broad-range support of different specifications. \section PipelineingConceptPage_InMITK Pipelining in MITK \subsection PipelineingConceptPage_Update The Update() Mechanism The flow of data inside a pipeline is triggered by only one function call by the consumer, which is Update(). Each part of the pipeline then triggers the Update() method of its predecessor. Finally, the source creates a new batch of data using its own GenerateData() method and notifies its successor that new data is available. The pipeline can then start to process the data until the finished data batch is available as an output of the last filter. -\imageMacro{pipelining_update.png,"",16} +\imageMacro{pipelining_update.png,"",12} \subsection PipelineingConceptPage_Hierarchy The Pipeline Hierarchy The base class for all parts of the pipeline except the consumer (which can be of any class) is mitk::BaseProcess. This class introduces the ability to process data, has an output and may have an input as well. You will, however, rarely work with this class directly. \imageMacro{pipelining_hierarchy.png,"",16} Several source classes extend BaseProcess. Depending on the type of data they deliver, these are ImageSource, PointSetSource and SurfaceSource. All of these mark the start of a pipeline. The filters themselves extend one of the source classes. This may not immediately make sense, but remember that a filter basically is a source with an additional input.
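The following sketch shows how these pieces typically fit together in a minimal image-to-image filter. The class name and the processing logic are purely illustrative, and the individual steps are explained in the next section:
\verbatim
#include <mitkImageToImageFilter.h>

// Illustrative minimal filter, not an actual MITK class
class ExampleFilter : public mitk::ImageToImageFilter
{
public:
  mitkClassMacro(ExampleFilter, mitk::ImageToImageFilter);
  itkNewMacro(Self);

protected:
  ExampleFilter()
  {
    // define the number of outputs and create a clean initial output
    this->SetNumberOfOutputs(1);
    this->SetNthOutput(0, mitk::Image::New());
  }

  // called during Update() when new output data has to be generated
  void GenerateData() override
  {
    mitk::Image::ConstPointer input = this->GetInput();
    mitk::Image::Pointer output = this->GetOutput();
    // ... compute the output from the input here ...
  }
};
\endverbatim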
\section PipelineingConceptPage_WorkWith Working with Filters \subsection PipelineingConceptPage_Setup Setting Up a Pipeline
\verbatim
// Create participants
mitk::USVideoDevice::Pointer videoDevice = mitk::USVideoDevice::New("-1", "Manufacturer", "Model");
TestUSFilter::Pointer filter = TestUSFilter::New();
// Make the video device produce its first set of data, so its output isn't empty
videoDevice->Update();
// Attach the filter input to the device output
filter->SetInput(videoDevice->GetOutput());
// Pipeline is now functional
filter->Update();
\endverbatim
\subsection PipelineingConceptPage_Implement Writing Your Own Filter When writing your first filter, this is the recommended way to go about it: - Identify which kinds of data you require for input, and which for output - According to the information from step one, extend the most specific subclass of BaseProcess available. E.g., a filter that processes images should extend ImageToImageFilter. - Identify how many inputs and how many outputs you require. - In the constructor, define the number of outputs, and create an output.
\verbatim
// set number of outputs
this->SetNumberOfOutputs(1);

// create a new output
mitk::Image::Pointer newOutput = mitk::Image::New();
this->SetNthOutput(0, newOutput);
\endverbatim
- Implement MakeOutput(). This method creates a new, clean output that can be written to. Refer to filters with a similar task for this. - Implement GenerateData(). This method will generate the output based on the input. At the time of execution you can assume that the data in the input is a new set. */ \ No newline at end of file diff --git a/Core/Documentation/Doxygen/Concepts/QVTKRendering.dox b/Core/Documentation/Doxygen/Concepts/QVTKRendering.dox index 3ab3a8c7f5..4fce96518a 100644 --- a/Core/Documentation/Doxygen/Concepts/QVTKRendering.dox +++ b/Core/Documentation/Doxygen/Concepts/QVTKRendering.dox @@ -1,127 +1,127 @@ /** \page QVTKRendering Rendering Concept \tableofcontents The MITK rendering pipeline is derived from the VTK rendering pipeline. \section QVTKRendering_Pipeline_VTK VTK Rendering Pipeline -\imageMacro{RenderingOverviewVTK.png,"Rendering in VTK",16} +\imageMacro{RenderingOverviewVTK.png,"Rendering in VTK",12} In VTK, the vtkRenderWindow coordinates the rendering process. Several vtkRenderers may be associated with one vtkRenderWindow. All visible objects, which can exist in a rendered scene (2D and 3D scene), inherit from vtkProp (or any subclass, e.g. vtkActor). A vtkPropAssembly is an assembly of several vtkProps, which appears like one single vtkProp. MITK uses a new interface class, the "vtkMitkRenderProp", which inherits from vtkProp. Similar to a vtkPropAssembly, all MITK rendering is performed via this interface class. Thus, the MITK rendering process is completely integrated into the VTK rendering pipeline. From the VTK point of view, MITK renders like a custom vtkProp object. More information about the VTK rendering pipeline can be found at http://www.vtk.org and in several VTK books. \section QVTKRendering_Pipeline_MITK MITK Rendering Pipeline This process is tightly connected to VTK, which makes it straightforward and simple. We use the above-mentioned "vtkMitkRenderProp" in conjunction with the mitk::VtkPropRenderer for integration into the VTK pipeline. The QmitkRenderWindow does not inherit from mitk::RenderWindow, but from the QVTKWidget, which is provided by VTK.
The main classes of the MITK rendering process can be illustrated like this: \imageMacro{qVtkRenderingClassOverview.png,"Rendering in MITK",16} A render request to the vtkRenderWindow does not only update the VTK pipeline, but also the MITK pipeline. However, the mitk::RenderingManager still coordinates the rendering update behavior. Update requests should be sent to the RenderingManager, which then, if needed, will request an update of the overall vtkRenderWindow. The vtkRenderWindow then starts to call the Render() function of all vtkRenderers which are associated with the vtkRenderWindow. Currently, MITK uses specific vtkRenderers (outside the standard MITK rendering pipeline) for purposes like displaying a gradient background (mitk::GradientBackground), displaying video sources (QmitkVideoBackround and mitk::VideoSource), or displaying a (department) logo (mitk::ManufacturerLogo), etc. Besides these specific renderers, a kind of "SceneRenderer" is a member of each QmitkRenderWindow. This vtkRenderer is associated with the custom vtkMitkRenderProp and is responsible for the MITK rendering. The vtkRenderer calls four different functions in vtkMitkRenderProp, namely RenderOpaqueGeometry(), RenderTranslucentPolygonalGeometry(), RenderVolumetricGeometry() and RenderOverlay(). These function calls are forwarded to the mitk::VtkPropRenderer. Then, depending on the mapper type (OpenGL- or VTK-based), OpenGL is enabled or disabled. In the case of OpenGL rendering, the Paint() method of each individual mapper is called. If the mapper is VTK-based, the four function calls are forwarded to mitk::VtkMapper and within these methods the corresponding vtkProp is evaluated. Both strategies are illustrated in the sequence diagrams below: \imageMacro{qVtkRenderingSequenceVTK.png,"Sequence diagram for MITK VTK rendering",16} In MITK, VTK-based mappers are more common and we recommend implementing VTK-based mappers. However, MITK supports OpenGL-based mappers as well. \imageMacro{qVtkRenderingSequenceGL.png,"Sequence diagram for MITK OpenGL rendering",16} \section QVTKRendering_Mapper MITK Mapper Architecture Mappers are used to transform the input data into tangible primitives, such as surfaces, points, lines, etc. The base class of all mappers is mitk::Mapper. The mapper hierarchy reflects the two possible ways to render in MITK: Subclasses of mitk::Mapper control the creation of rendering primitives that interface to the graphics library (e.g. via OpenGL or VTK). The mapper architecture is illustrated in the following UML diagram: \imageMacro{qVtkRenderingMapper.jpg,"Mapper architecture",16} mitk::Mapper::Update() sets the time step of the input data for the specified renderer, checks whether the time step is valid and calls the method mitk::Mapper::GenerateDataForRenderer(), which is reimplemented in the individual mappers and should be used to generate primitives. mitk::Mapper::SetDefaultProperties() should be used to define mapper-specific properties.
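As an illustration of this architecture, a VTK-based mapper skeleton might look like the following sketch. The class name and the generated primitives are illustrative only; the subsection on writing your own mapper below gives the full guidelines:
\code
#include <mitkVtkMapper.h>
#include <mitkLocalStorageHandler.h>
#include <vtkSmartPointer.h>
#include <vtkActor.h>

// Illustrative minimal VTK-based mapper, not an actual MITK class
class ExampleVtkMapper : public mitk::VtkMapper
{
public:
  mitkClassMacro(ExampleVtkMapper, mitk::VtkMapper);
  itkNewMacro(Self);

  // Per-renderer VTK resources (actors, textures, polydata, ...)
  class LocalStorage : public mitk::Mapper::BaseLocalStorage
  {
  public:
    vtkSmartPointer<vtkActor> m_Actor;
    LocalStorage() : m_Actor(vtkSmartPointer<vtkActor>::New()) {}
  };
  mitk::LocalStorageHandler<LocalStorage> m_LSH;

  // Picked up by one of the four render passes
  vtkProp *GetVtkProp(mitk::BaseRenderer *renderer) override
  {
    return m_LSH.GetLocalStorage(renderer)->m_Actor;
  }

protected:
  // Called from Mapper::Update(); (re)generate the primitives here
  void GenerateDataForRenderer(mitk::BaseRenderer *renderer) override
  {
    LocalStorage *localStorage = m_LSH.GetLocalStorage(renderer);
    // ... feed the input data into localStorage->m_Actor here ...
  }
};
\endcode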
\section QVTKRendering_programmerGuide User Guide: Programming hints for rendering related stuff (in plugins) \li The QmitkRenderWindow can be accessed like this: this->GetRenderWindowPart()->GetRenderWindow("axial"); \li The vtkRenderWindow can be accessed like this: this->GetRenderWindowPart()->GetRenderWindow("axial")->GetVtkRenderWindow(); \li The mitkBaseRenderer can be accessed like this: mitk::BaseRenderer* renderer = mitk::BaseRenderer::GetInstance(this->GetRenderWindowPart()->GetRenderWindow("sagittal")->GetRenderWindow()); \li An update request of the overall QmitkStdMultiWidget can be performed with: this->GetRenderWindowPart()->GetRenderingManager()->RequestUpdateAll(); \li A single QmitkRenderWindow update request can be done like this: this->GetRenderWindowPart()->GetRenderingManager()->RequestUpdate(this->GetRenderWindowPart()->GetRenderWindow("axial")->GetVtkRenderWindow()); \note The usage of ForceImmediateUpdateAll() is not desired in most common use cases. \subsection QVTKRendering_distinctRenderWindow Setting up a distinct Rendering-Pipeline It is sometimes desired to have one (or more) QmitkRenderWindows that are managed totally independently of the 'usual' render windows defined by the QmitkStdMultiWidget. This may include the data that is rendered as well as possible interactions. In order to achieve this, a set of objects is needed: \li mitk::RenderingManager -> Manages the rendering \li mitk::DataStorage -> Manages the data that is rendered \li mitk::GlobalInteraction -> Manages all interaction \li QmitkRenderWindow -> Actually visualizes the data The actual setup, respectively the connection, of these classes is rather simple:
\code
// create a new instance of mitk::RenderingManager
mitk::RenderingManager::Pointer renderingManager = mitk::RenderingManager::New();

// create new instances of DataStorage and GlobalInteraction
mitk::DataStorage::Pointer dataStorage = mitk::DataStorage::New();
mitk::GlobalInteraction::Pointer globalInteraction = mitk::GlobalInteraction::New();

// add both to the RenderingManager
renderingManager->SetDataStorage( dataStorage );
renderingManager->SetGlobalInteraction( globalInteraction );

// now create a new QmitkRenderWindow with this renderingManager as parameter
QmitkRenderWindow* renderWindow = new QmitkRenderWindow( parent, "name", renderer, renderingManager );
\endcode
That is basically all you need to set up your own rendering pipeline. Obviously you have to add all data you want to render to your new DataStorage. If you want to interact with this render window, you will also have to add additional Interactors/Listeners. \note Dynamic casts of a mitk::BaseRenderer class to an OpenGLRenderer (or now, to a VtkPropRenderer) should be avoided. The "MITK Scene" vtkRenderer, as well as the vtkRenderWindow, are therefore now included in the mitk::BaseRenderer. \subsection QVTKRendering_userGuideMapper How to write your own Mapper If you want to write your own mapper, you first need to decide whether you want to write a VTK-based mapper or a GL-based mapper. We recommend writing a VTK-based mapper, as VTK is easy to learn and some GL-based mappers can have unexpected side effects. However, you need to derive from the respective classes. In the following we provide some programming hints for writing a VTK-based mapper: \li include mitkLocalStorageHandler.h and derive from class BaseLocalStorage as a nested class in your own mapper. The LocalStorage instance should contain all VTK resources such as actors, textures, mappers, polydata etc.
The LocalStorageHandler is responsible for providing a LocalStorage to a concrete mitk::Mapper subclass. Each RenderWindow / mitk::BaseRenderer is assigned its own LocalStorage instance so that all contained resources (actors, shaders, textures, ...) are provided individually per window. \li GenerateDataForRenderer() should be reimplemented in order to generate the primitives that should be rendered. This method is called in each Mapper::Update() pass; thus, all primitives that are rendered are recomputed. Employ LocalStorage::IsGenerateDataRequired() to determine whether it is necessary to generate the primitives again. It is not necessary to generate them again in case the scene has just been translated or rotated. \li For 2D mappers, it is necessary to determine the 3D primitives close to the current plane that should be drawn. Use planeGeometry = renderer->GetSliceNavigationController()->GetCurrentPlaneGeometry() to get the current plane. The distance to it can be determined by using planeGeometry->DistanceFromPlane(point). \li Reimplement GetVtkProp(), which should return the specific vtkProp generated in GenerateDataForRenderer() (e.g. a single actor or a prop assembly, which is a combination of different actors). The vtkProp is picked up in one of the four render passes and thus integrated into the VTK render pipeline. \li SetDefaultProperties() should be used to define mapper-specific properties. */