
Display

Category

Rendering

Function

Displays an image or renders a scene and displays an image.

Syntax


where = Display(object, camera, where, throttle);

Inputs

Name      Type              Default              Description
object    object            none                 object to render or image to display
camera    camera            no default           camera if rendering is required
where     window or string  the user's terminal  host and window for display
throttle  scalar            0                    minimum time between image frames (in seconds)

Outputs

Name   Type    Description
where  window  window identifier for Display window

Functional Details

object

is the object to be displayed or to be rendered and displayed.

camera

is the camera to be used to render object. If camera is not specified, the system assumes that object is an image to be displayed (e.g., the output of the Render module).

Note: A transformed camera cannot be used for this parameter.

where

specifies the host and window for displaying the image. On a workstation, the format of the parameter string is:
X, display, window
where X refers to the X Window System; display is an X server name (e.g., host:0); and window is a window name (which must not begin with two # characters). As a rule, it is not necessary to set this parameter. When it is set, however, the resulting Image window is not controlled by the user interface (e.g., it has no menu options). The purpose of this parameter is to specify another workstation on which to display an image (e.g., by setting it to "X, workstationname:0, message"). Using the Image tool, you can display the same image on another workstation simply by connecting the Image tool's two outputs to the two inputs of Display and setting where.

If you are using SuperviseState or SuperviseWindow to control user interactions in the Display window, then where should be set with the where output of SuperviseWindow.

Note: If you are using the where parameter, it is important to set its value before the first execution of Display.
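
For example, the following sketch directs a rendered image to another X server (the host name and window title here are illustrative):

    image = Render(isosurface, camera);
    Display(image, where="X, remotehost:0, remote view");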

throttle

specifies a minimum interval between successive image displays. The default is 0 (no delay).

where

The output can be used, for example, by ReadImageWindow to retrieve the pixels of an image after Display has run.
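
For example, a minimal sketch of this connection (image is assumed to be an existing image field):

    where = Display(image);
    pixels = ReadImageWindow(where);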

Notes:

  1. In the user interface, you must use the Image tool rather than Display if you want to use many of the interactive image-manipulation functions provided by Data Explorer. For more information, see "Controlling the Image: View Control..." in IBM Visualization Data Explorer User's Guide. However, see SuperviseWindow and SuperviseState for a discussion of how to create your own interaction modes when using the Display window.

  2. The Display module can render surfaces, volumes, and arbitrary combinations of surfaces and volumes. (However, the current volume-rendering algorithm does not support coincident or perspective volumes.) To render an object, that object must contain a "colors" component. Many modules add a default color. In addition, volume rendering (e.g., of cubes, as opposed to lines) requires an "opacities" component. For surfaces, the lack of an "opacities" component implies an opaque surface.

  3. Choosing appropriate color and opacity maps for volume rendering can be difficult. The AutoColor, AutoGrayScale, and Color modules use heuristics to generate good values; as a rule of thumb, the opacity should be ≈0.7/T, and the color value ≈1.4/T (where T is the thickness of the object in user units). See also "Coloring Objects for Volume Rendering".

Changing the Resolution of an Image

If you are using Display without a camera to simply display an image, you can increase or decrease the resolution of the image by using Refine or Reduce, respectively, on the image before passing it to Display (see Refine and Reduce).
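
For example, a sketch that increases an image's resolution before display (the refinement level is illustrative; see Refine for its exact parameters):

    image = Render(object, camera);
    refined = Refine(image, 1);
    Display(refined);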

Pasting Images Together

The Arrange module can be used before Display to lay out images side by side, or one above the other (see Arrange).
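
For example, a sketch that places two rendered images side by side (see Arrange for its exact parameters):

    group = Collect(image1, image2);
    montage = Arrange(group, 2);
    Display(montage);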

Delayed Colors and Opacities (Color and Opacity Lookup Tables)

Delayed colors are a way of compactly storing color and opacity information. By default, whenever you use one of the coloring modules (AutoColor, AutoGrayScale, Color), the colors and opacities are stored one per data value, as a floating-point RGB 3-vector or a floating-point scalar, respectively, ranging from 0 to 1. However, if you have unsigned byte data, it is much more efficient to use "delayed colors" and "delayed opacities". When you use delayed colors or opacities, the "colors" or "opacities" component is simply a copy of (actually a reference to) the "data" component. When rendering occurs, these components are interpreted as indices with which to look up a color or opacity value in a table.

If you specify the delayed parameter as 1 to any of the coloring modules, they will automatically perform this "copy" of the "data" component and attach a "color map" or "opacity map" component containing 256 RGB colors or 256 opacities. If you already have a color or opacity map, either imported or created with the Colormap Editor, and wish to use delayed colors or delayed opacities, you can pass it to the Color module as the color or opacity parameter and set the delayed parameter to 1.

The structure of a color map or opacity map is described in Color. The Colormap Editor produces as its two outputs well-formed color maps and opacity maps. Alternatively, if you already have a simple list of 3-vectors or list of scalar values, and want to create a color map or opacity map, you can do this using Construct. The first parameter to Construct should be [0], the second should be [1], and the third should be 256. This will create a "positions" component with positions from 0 to 255. The last parameter to Construct should be your list of 256 colors or opacities.
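
For example, a sketch of building a color map with Construct and using it for delayed coloring (colorlist is assumed to already hold a list of 256 RGB 3-vectors, and field is an object with unsigned byte data):

    colormap = Construct([0], [1], 256, colorlist);
    colored = Color(field, colormap, delayed=1);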

If you are reading a stored image using ReadImage, and the image is stored with a colormap, you can specify that the image should be stored internally in Data Explorer with delayed colors by using the delayed parameter to ReadImage.

You can also convert an image (or object) to a delayed colors version by using QuantizeImage.

Using Direct Color Maps

If you are using delayed colors (see "Delayed Colors and Opacities (Color and Opacity Lookup Tables)" and ReadImage) and displaying images directly (i.e. you are not providing a camera), Display will use the provided color map directly instead of dithering the image. (Depending on your X server, you may need to use the mouse to select the Image or Display window in order for the correct color to appear.) If you do not want Display to use the color map directly, use the Options module to set a "direct color map" attribute with a value of 0 (zero).

Attribute Name Value Description
direct color map 0 or 1 whether or not to use a direct color map
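
For example, a sketch that displays a delayed-color image without using the color map directly (the file name is illustrative):

    image = ReadImage("image.gif", delayed=1);
    image = Options(image, "direct color map", 0);
    Display(image);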

Using Default Color Maps

When displaying non-delayed color images in 8-bit windows, Display assumes that it can specify 225 individual colors. If this number is not currently available in the shared color map, Display will find the best approximations available; this may lead to a visible degradation of image quality. Display may instead use a private color map. This decision is based on the worst-case approximation that it must use with the default color map: if that approximation exceeds a threshold, a private color map is used. Approximation quality is measured as the Euclidean distance, in an RGB unit cube, between the desired color and the best available approximation for that color.

An environment variable, DX8BITCMAP, sets the level at which the change to using a private color map is made. The value of DX8BITCMAP should be a number between 0 (zero) and 1 (one), and it represents the Euclidean distance in RGB color space, normalized to 1, for the maximum allowed discrepancy. If you set DX8BITCMAP to 1, then a private color map will never be used. On the other hand, if you set DX8BITCMAP to -1, then a private color map will always be used. The default is 0.1. See also the -8bitcmap command line option for Data Explorer in Table 5 in IBM Visualization Data Explorer User's Guide.

Gamma Correction

Displayed images generated by Display or Image are gamma corrected. Gamma correction adjusts for the fact that for many display devices a doubling of the digital value of an image's brightness does not necessarily produce a doubling of the actual screen brightness. Thus, before displaying to the screen, the pixel values are adjusted non-linearly to produce a more accurate appearance.

The environment variables DXGAMMA_8BIT, DXGAMMA_12BIT, and DXGAMMA_24BIT are used to specify values for gamma of 8-, 12-, and 24-bit windows, respectively. If the appropriate DXGAMMA_nBIT environment variable is not set, the value of the environment variable DXGAMMA will be used if one is defined. Otherwise, the module uses the system default, which depends on the machine architecture and window depth. This default is always 2 (two) except for 8-bit sgi windows, for which it is 1 (one). Note that the default depends on the machine on which the software renderer is running, not on the machine that displays the image.

Obtaining a WYSIWYG Image of a Higher Resolution

If you wish to render a displayed image at a higher resolution (for example to write to an output file), you can usually simply use Render on the same object as object, with a new camera (see AutoCamera or Camera). However, if object contains screen objects (captions and color bars), the new image will not be WYSIWYG (What You See Is What You Get), with respect to the displayed image, because the sizes of captions and color bars are specified in pixels rather than in screen-relative units. The ScaleScreen module (see ScaleScreen) allows you to modify the size of screen objects before rendering.
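
For example, a sketch that renders at a higher resolution while keeping screen objects proportional (the scale factor, resolution, and file name are illustrative; see ScaleScreen for its exact parameters):

    scaled = ScaleScreen(object, 2);
    camera = AutoCamera(scaled, resolution=1200);
    image = Render(scaled, camera);
    WriteImage(image, "bigimage", "tiff");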

Image Caching

When given a camera input, the Display module (or Image tool) caches rendered images by default. The result is faster redisplay if the same object and camera are later passed to the module.

To turn off this automatic caching, use the Options module to attach a "cache" attribute (set to 0) to object.

It is important to remember that this caching is separate from the caching of module outputs, which is controlled by the -cache command-line option to dx.
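
For example, a sketch that disables image caching for one object:

    object = Options(object, "cache", 0);
    Display(object, camera);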

Changing Rendering Properties

You can change the rendering properties of an object by using the Options module. The following table lists the shading attributes that can be set by the Options module for interpretation by the Display tool. (See the section on surface shading in IBM Visualization Data Explorer Programmer's Reference for more information.)

Attribute    Type     Default  Description
"ambient"    scalar   1        coefficient of ambient light, ka
"diffuse"    scalar   .7       coefficient of diffuse reflection, kd
"specular"   scalar   .5       coefficient of specular reflection, ks
"shininess"  integer  10       exponent of specular reflection, sp

As a rule of thumb, except for special effects, ka should be 1 and kd + ks should be about 1. The larger ks, the brighter the highlight; the larger sp, the sharper the highlight. The Shade module provides a shortcut for setting rendering properties.

The attributes listed above apply to both the front and back of an object. In addition, for each attribute "x" there is also a "front x" and a "back x" attribute that applies only to the front and back of the surface, respectively. So, for example, to disable specular reflections from the back surfaces of an object, use the Options module to set the "back specular" attribute of the object to 0.
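
For example, a sketch that brightens and sharpens the specular highlight and disables it on back faces (the values are illustrative):

    object = Options(object, "specular", .8, "shininess", 30, "back specular", 0);
    Display(object, camera);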

The determination of which faces are "front" and which are "back" depends on the way in which the "connections" component of the faces is defined. "Front colors" applies to clockwise faces, and "back colors" applies to counterclockwise faces.

Coloring Objects for Volume Rendering

The volume renderer interprets colors and opacities as values per unit distance. Thus the amount of color and degree of attenuation seen in an image object is determined in part by the extent of the object's volume. The Color, AutoColor, and AutoGrayScale modules attach "color multiplier" and "opacity multiplier" attributes to the object so that colors and opacities will be appropriate to the volume, while maintaining "color" and "opacity" components that range from 0 to 1 (so that objects derived from the colored volume, such as glyphs and boundaries, are colored correctly). See "Rendering Model" in IBM Visualization Data Explorer Programmer's Reference.

These attributes adjust the colors and opacities to values that should be "appropriate" for the object being colored. However, if the simple heuristics used by these modules to compute the attribute values are not producing the desired colors and opacities, you have two alternatives: explicitly set the "color multiplier" and "opacity multiplier" attributes (see the table below) to values of your own choosing, or modify the values in the "colors" and "opacities" components themselves.

Only the first of these methods should be used for "delayed" colors.

Finally, if you color a group of volumes and the resulting image is black, the reason is that the current renderer does not support coincident volumes.


Attribute Type Description
color multiplier scalar Multiplies values in the "color" component
opacity multiplier scalar Multiplies values in the "opacity" component
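
For example, a sketch that overrides the heuristic values with the Options module (the multiplier values are illustrative):

    volume = AutoColor(volume);
    volume = Options(volume, "color multiplier", 2.0, "opacity multiplier", 0.35);
    Display(volume, camera);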

Shading

Objects are shaded when rendered only if a "normals" component is present. Many modules (e.g., Isosurface) automatically add "normals", but the FaceNormals, Normals, and Shade modules can also be used to add normals. Even if an object has "normals", shading can be disabled by adding a "shade" attribute with a value of 0 (the Shade module can do this).


Attribute Name Values Description
shade 0 or 1 used to specify whether or not to shade when normals are present
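
For example, a sketch that turns shading off for an object that has normals:

    object = Options(object, "shade", 0);
    Display(object, camera);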

Object fuzz

Object fuzz is a method of resolving conflicts between objects at the same distance from the camera. For example, it may be desirable to define a set of lines coincident with a plane. Normally it will be unclear which object is to be displayed in front. In addition, single-pixel lines are inherently inaccurate (i.e. they deviate from the actual geometric line) by as much as one-half pixel; when displayed against a sloping surface, this x or y inaccuracy is equivalent to a z inaccuracy related to the slope of the surface. The "fuzz" attribute specifies a z value that will be added to the object before it is compared with other objects in the scene, thus resolving this problem. The fuzz value is specified in pixels. For example, a fuzz value of one pixel can compensate for the described half-pixel inaccuracy when the line is displayed against a surface with a slope of two.


Attribute Type Description
fuzz scalar object fuzz

To add fuzz to an object, pass the object through the Options module, specifying the attribute as fuzz and the value of the attribute as the number of pixels (typically a small integer).
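
For example, a sketch that lifts a set of lines one pixel toward the camera so they win the depth comparison against a coincident surface:

    lines = Options(lines, "fuzz", 1);
    scene = Collect(lines, surface);
    Display(scene, camera);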

Anti-aliasing and Multiple Pixel Width Lines

When rendering with hardware, lines can be anti-aliased or drawn more than one pixel wide. Note that these options are not available in software rendering. To anti-alias lines, use the Options module to set an "antialias" attribute with the value "lines" on the object passed to Display. To draw wider lines, use the Options module to set a "line width" attribute whose value is the desired width in pixels.

Attribute Values Description
antialias "lines" causes lines to be anti-aliased
line width n causes lines to be drawn with a width of n pixels
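
For example, a sketch that requests anti-aliased, three-pixel-wide lines under hardware rendering (the width is illustrative):

    lines = Options(lines, "rendering mode", "hardware",
                    "antialias", "lines", "line width", 3);
    Display(lines, camera);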

Rendering Approximations

Data Explorer provides access to the hardware accelerators on the workstation, in addition to the default software rendering techniques. The hardware enhancements are available only on workstations that are equipped with 3-D graphic adapters. On systems without such adapters, only the software rendering options are available. This enhancement is intended to provide increased interactivity, especially in operations that involve only the rendering process.

Data Explorer can also provide accelerated rendering by approximating the rendering using points, lines, and opaque surfaces. Such geometric elements are often sufficient to approximate the appearance of the desired image, and thus are useful for preliminary visualizations of the data.

The approximations fall into three main categories: bounding box, dots, and wireframe. Wireframe is available only as a hardware rendering technique.

If you are using the graphical user interface and the Image tool, you can access the rendering options by using the Rendering Options option on the Options pull-down menu in the Image window. This option invokes a dialog box that allows you to set the rendering approximations for continuous and one-time execution. (For more information, see "Rendering Options..." in IBM Visualization Data Explorer User's Guide.)

If you are not using the Image tool, then you must use the Options module to set various attributes that control the rendering approximations. The following table lists the attributes that control rendering approximations, together with the permissible values for each attribute.

Attribute Name             Values        Description
"rendering mode"           "software"    use software rendering
                           "hardware"    use hardware rendering
"rendering approximation"  "none"        complete rendering of object
                           "box"         bounding box only
                           "dots"        dot approximation to object
                           "wireframe"   wireframe approximation to object
"render every"             n             render every nth primitive
                                         (default: render everything)

Note: If you do not pass a camera to Display (i.e., if object is already an image), Display will always use software to display the image, regardless of the setting of any rendering options using the Options tool.

Differences between Hardware and Software Rendering
  1. For hardware rendering, when specifying "dots" for "rendering approximation," lines are drawn in their entirety, whereas for software rendering only the line end points are drawn. The "render every" and "wireframe" approximations are available only with hardware rendering. When the "box" approximation is specified, hardware rendering shows the bounding box of each field in the rendered object, while software rendering shows only the bounding box of the entire object.

  2. Some graphics adapters do not support clipping. On such adapters, "ClipBox" and "ClipPlane" have no effect.

  3. For some hardware platforms, surfaces specified with opacities are rendered by the hardware as screen-door surfaces (i.e., every other pixel is drawn, letting the background show through). This allows only one level of opacity and completely obscures a semi-opaque surface that is behind another semi-opaque surface. The transparency effect is hardware dependent, and can produce a completely opaque or completely transparent appearance. True transparency is supported for OpenGL platforms.

  4. The image displayed by the hardware rendering can be different from the image produced by the software rendering. This is a result of several differences in rendering techniques. The hardware rendering does not provide gamma correction, causing images to be slightly darker. Normals are not reversed when viewing the "inside" of a surface, with the result that lighting effects are much dimmer on the "inside" of a surface. Attributes applied to the "inside" of a surface (e.g., "back colors") are ignored.

  5. When using hardware rendering, the where parameter to Display cannot specify a host other than the one on which the Display module is running. However, it can specify a different display attached to the same host.

  6. The hardware renderer does not duplicate the "dense emitter" model used by the software renderer for rendering volumes. Only the data values at the boundary of the volume are rendered, producing the appearance of a transparent boundary of the volume.

  7. For hardware rendering, a wireframe rendering approximation is not intended to produce the same visual results as ShowConnections.

  8. Hardware rendering handles colors between 0.0 and 1.0. If colors are outside this range, each color channel is independently clamped, before lighting is applied. In software rendering, clamping is done after lighting is applied.

  9. Hardware rendering does not support view angles of less than 0.001 degree.

  10. Anti-aliasing and multiple-pixel-width lines are available only with hardware rendering.

Texture Mapping

If the machine on which Data Explorer is running supports OpenGL or GL, then texture mapping is available using hardware rendering. Texture mapping is the process of mapping an image (a field with 2-dimensional positions, quad connections, and colors) onto a geometry field with 2-dimensional connections and, typically, 3-dimensional positions (e.g., a color image mapped onto a rubbersheeted height field). The advantage of texture mapping over the use of Map, for example, is that the resulting image may have much greater resolution than the height map.

The geometry field must have 2-D connections (triangles or quads) and must also have a component, with the name "uv," that is dependent on positions and provides the mapping between the image and the positions of the geometry field. This component consists of 2-vectors. The origin of the image will be mapped to the uv value [0 0], and the opposite corner to the uv value [1 1].

The texture map should be attached to the geometry field as an attribute, with the attribute name "texture", which can be done with the Options module. A texture-mapped image can be retrieved from the Display window using ReadImageWindow and then written to a file using WriteImage.
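
For example, a sketch of attaching an image as a texture (the file name is illustrative, and geometry is assumed to already carry a "uv" component):

    texture = ReadImage("texture.tiff", "tiff");
    geometry = Options(geometry, "texture", texture,
                       "rendering mode", "hardware");
    Display(geometry, camera);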

Translucent textures are represented as image fields with "opacities" components: either floating-point opacities, or ubyte opacities with a floating-point opacity map. As with other objects with opacities, translucent textured objects are not meshed; they are placed in the translucent sort list, sorted by depth, and rendered. Translucent textures can be imported into DX with the ReadImage module, as long as ImageMagick support was included, using image formats that support opacity masks.

Mipmapping is the process of creating a set of filtered texture maps of decreasing resolution, generated from a single high-resolution texture, to improve accuracy during texture mapping. This filtering allows OpenGL to apply an appropriate level of detail to an object depending on the object's viewable size, reducing aliasing and flickering. The filter to apply to a texture can be set with the "texture min filter" and "texture mag filter" attributes.

Attribute Name Value Description
texture a texture map specifies a texture map
texture wrap s "clamp to border", "clamp", "repeat", or "clamp to edge" specifies how to apply the texture in the texture's s (horizontal) direction
texture wrap t "clamp to border", "clamp", "repeat", or "clamp to edge" specifies how to apply the texture in the texture's t (vertical) direction
cull face "off", "front", "back", or "front and back" specifies which polygons should be drawn; culling turns off drawing of the specified faces, which can increase rendering speed
light model "one side" or "two side" two-sided lighting specifies that lighting calculations are computed for both the inside and the outside of polygons; it is particularly useful when polygons have no normals or when the auto-computed normals bear no resemblance to the outside of the rendered object
texture min filter "nearest", "linear", "nearest_mipmap_nearest", "nearest_mipmap_linear", "linear_mipmap_nearest", or "linear_mipmap_linear" specifies the filter used to generate the set of mipmapped textures when the texture is rendered smaller than its actual size
texture mag filter "nearest" or "linear" specifies the filter used when the texture is rendered larger than its actual size
texture function "decal", "replace", "modulate", or "blend" specifies the texture mode. In decal mode with a three-component (RGB) texture, the texture's colors replace the object's colors. With modulate, blend, or a four-component texture, the final color is a combination of the texture's and the object's colors. Use decal mode to apply an opaque texture to an object (for example, a soup can with an opaque label). With modulate, the object's color is modulated by the contents of the texture map; use modulation to create a texture that responds to lighting conditions. Blend mode makes sense only for one-component (A) or two-component (LA) textures. The replace function substitutes the incoming texture color for the object's color.

Components

The object input must have a "colors," "front colors," or "back colors" component.

Script Language Examples

  1. This example renders two views of the object and displays them in two separate windows, as specified by the where parameter.
    electrondensity = Import("/usr/local/dx/samples/data/watermolecule");
    isosurface = Isosurface(electrondensity, 0.3);
    camera1 = AutoCamera(isosurface, "front", resolution=300);
    camera2 = AutoCamera(isosurface, "top", resolution=300);
    image1 = Render(isosurface, camera1);
    image2 = Render(isosurface, camera2);
    Display(image1,where="X, localhost:0, view from front");
    Display(image2,where="X, localhost:0, view from top");
    

  2. This example sets the rendering mode to "hardware" with the approximation method of "dots."
    electrondensity = Import("/usr/local/dx/samples/data/watermolecule");
    isosurface = Isosurface(electrondensity, 0.3);
    from = Direction(65, 5, 10);
    camera = AutoCamera(isosurface, from);
    isosurface=Options(isosurface, "rendering mode", "hardware",
                      "rendering approximation", "dots");
    Display(isosurface,camera);
    

Example Visual Programs

MovingCamera.net
PlotLine.net
PlotTwoLines.net
ReadImage.net
ScaleScreen.net
TextureMapOpenGL.net
UsingCompute.net
UsingMorph.net

See Also

 Arrange,  Collect,  Filter,  Image,  Render,  Reduce,  Refine,  ScaleScreen,  Normals,  FaceNormals,  SuperviseWindow,  SuperviseState,  ReadImageWindow,  Options

