Rendering recognizably unique textures on GPUs using shading


Janne V. Kujala and Tuomas Lukka

In this work, we apply the GPU to an unusual task: rather than attempting to obtain a specific appearance or effect, we use the GPU to produce an unlimited number of distinct, novel, randomly generated textures, with the goal that users should be able to recognize them.

In [Kujala & Lukka: "Rendering recognizably unique textures", to be published in Information Visualization'03 conference], we introduced the use of procedurally generated unique background textures as a visualization of document identity. In our approach, each document has a different, easily distinguishable background texture. The user can thus identify an item at a glance, even if only a fragment of the item is shown, without reading the title (which the fragment may not even contain) [Figure 1]. Because document visits tend to follow a Zipf-like distribution, the user should be able to learn the textures of the most frequently visited documents. An initial experiment has shown that the generated textures are indeed recognizable.

The perceptually designed algorithm runs, after the random seeding and setup stages, entirely on the fragment pipeline of the GPU, in order to allow complicated mappings such as fisheye distortion between the paper and screen coordinates.
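Because the texture is evaluated per fragment, any screen-to-paper coordinate mapping can be composed with it. The following sketch shows one hypothetical radial fisheye mapping of the kind mentioned above; the function name, center, and distortion-strength parameter are illustrative, not the actual implementation's API.

```python
import math

def fisheye(x, y, cx=0.5, cy=0.5, k=3.0):
    """Map screen coordinates to paper coordinates with a radial
    fisheye magnification centered at (cx, cy).

    k > 0 magnifies the region around the focus; larger k distorts more.
    """
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return cx, cy
    # Compress paper-space distances near the focus (magnification)
    # and expand them toward the edges (context compression).
    r2 = (math.exp(k * r) - 1.0) / (math.exp(k) - 1.0)
    return cx + dx * r2 / r, cy + dy * r2 / r
```

Evaluating the background in paper coordinates per fragment means the texture stays attached to the document under any such distortion, instead of being distorted as a pre-rendered image.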

For each unique background texture, a small palette of colors is selected randomly from a heuristic distribution. The shapes of the final background texture are generated entirely from a small set of static "basis textures" bound to texture units with randomly chosen affine texture coordinate mappings using vertex programs. Even though the basis textures are RGB textures, they contain no color information: they are simply treated as 3- or 4-vectors, combined nonlinearly, and the results are used to interpolate between the palette colors to produce the final fragment colors.
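A single fragment of this scheme can be sketched on the CPU as follows. This is an illustrative emulation, not the actual implementation: the palette here is drawn from a plain uniform RGB distribution rather than the heuristic one, and the nonlinear combination is reduced to one dot product with sharpening.

```python
import random

def clamp01(x):
    return max(0.0, min(1.0, x))

def make_palette(seed, n=2):
    # Illustrative: the paper selects colors from a heuristic
    # distribution; plain uniform RGB is used here for simplicity.
    rng = random.Random(seed)
    return [tuple(rng.random() for _ in range(3)) for _ in range(n)]

def fragment_color(seed, a, b):
    """Emulate one fragment: combine two basis-texture samples a and b
    (RGB 3-vectors in [0,1]) nonlinearly, then use the result to
    interpolate between two palette colors."""
    c0, c1 = make_palette(seed)
    # Expand [0,1] values to [-1,1], dot them, and sharpen with a
    # scale, as the register combiner output mappings would.
    d = sum((2*ai - 1) * (2*bi - 1) for ai, bi in zip(a, b))
    t = clamp01(4.0 * d * d)  # sharpened interpolation factor
    return tuple(t*x1 + (1-t)*x0 for x0, x1 in zip(c0, c1))
```

The key point is that the RGB channels of `a` and `b` carry only shape information; color enters solely through the palette interpolation at the end.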

Plain OpenGL 1.3 does not by itself provide enough flexibility in the fragment pipeline to allow for generating features nonlinearly from the basis textures [Figure 3]. Because of this, and because of the availability of stable Linux drivers, our main platforms are NV10, i.e., OpenGL 1.3 + GL_NV_register_combiners, and NV25, i.e., NV10 + GL_NV_texture_shader3. We will be working on an implementation based on GL_ARB_fragment_program and GL_NV_fragment_program now that we have obtained our first NV3X-based card.

The use of the combiners is rather unconventional: we want to lose most of the original shapes of the basis textures in order to create new, different shapes from the interaction of the basis texture values and combiner parameters derived from the random seed. For this, we take dot products of texture values with each other and with random constant vectors, and scale the results up with the register combiner output mappings to sharpen them [Figure 4]. The resulting values are used for interpolating between the palette colors.
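The two combiner expressions shown in Figure 4 can be written out in scalar form. The sketch below emulates them on the CPU, including the hardware's "expand" input mapping from [0,1] to [-1,1] and the final clamp of register values to the displayable [0,1] range; the function names are illustrative.

```python
def expand(v):
    """Register-combiner 'expand' input mapping: [0,1] -> [-1,1]."""
    return [2.0*x - 1.0 for x in v]

def dot3(u, v):
    return sum(x*y for x, y in zip(u, v))

def combiner_dot(a, b):
    """Figure 4, bottom left: 2((2a-1)·(2b-1)) + 1/2, clamped."""
    return max(0.0, min(1.0, 2.0*dot3(expand(a), expand(b)) + 0.5))

def combiner_dot_squared(a, b):
    """Figure 4, bottom right: 32((2a-1)·(2b-1))^2, clamped."""
    d = dot3(expand(a), expand(b))
    return max(0.0, min(1.0, 32.0*d*d))
```

The large scale factors are what "sharpen" the result: most of the dot-product range saturates at 0 or 1, leaving crisp boundaries where the product crosses zero.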

On the NV25, we use offset textures to ease the creation of new shapes in the fragment pipeline.
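An offset-texture (dependent) lookup of this kind can be emulated as follows: a signed (ds, dt) pair is sampled from one texture, transformed by a 2x2 matrix, and added to the coordinates used to sample the next texture. This is a CPU sketch with textures modeled as plain functions; the names and the scale parameter are illustrative, not the hardware API.

```python
def offset_lookup(offset_tex, color_tex, s, t, m, scale=1.0):
    """Emulate an offset-texture lookup: sample signed (ds, dt) from
    offset_tex at (s, t), transform by the 2x2 matrix m, and use the
    perturbed coordinates to sample color_tex.

    Textures are modeled as functions (s, t) -> value.
    """
    ds, dt = offset_tex(s, t)
    s2 = s + scale * (m[0][0]*ds + m[0][1]*dt)
    t2 = t + scale * (m[1][0]*ds + m[1][1]*dt)
    return color_tex(s2, t2)
```

Randomizing the matrix per document warps the basis-texture shapes differently for each seed, which is why this eases the creation of new shapes.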

images/paper/motivating.png

Figure 1. The motivating example for unique backgrounds: the BuoyOING focus+context interface for browsing bidirectionally hyperlinked documents. The interface shows the relevant fragments of the other ends of the links and animates them fluidly to the focus upon traversing the link. a) shows a small document network. b) and c) show what a user sees while browsing the network, b) without and c) with background texture. There are three keyframes where the animation stops. Two frames of each animation between the keyframes are shown. The unique backgrounds help the user notice that the upper right buoy in the last keyframe is actually a part of the same document (1) which was in the focus in the first keyframe. Our (as yet untested) hypothesis is that this will aid user orientation.

images/paper/model.png

Figure 2. The qualitative model of visual perception used to create the algorithm. The visual input is transformed into a feature vector, which contains numbers (activation levels) corresponding to e.g. colors, edges, curves and small patterns. The feature vector is matched against the memorized textures. In order to generate recognizable textures, random seed values should produce a distribution of feature vectors with maximum entropy.

images/paper/basistex.png

Figure 3. The complete set of 2D basis textures used by our implementation. All textures shown in this proposal are built from these textures and the corresponding HILO textures for offsetting.

images/paper/combiners.png

Figure 4. How the limited register combiners of the NV10 architecture can be used to generate shapes. Top: the two basis textures. Bottom left: dot product of the basis textures: 2((2a-1) . (2b-1)) + 1/2, where a and b are the texture RGB values. Bottom right: squared dot product of the basis textures: 32((2a-1) . (2b-1))^2. This term can then be used to modulate between two colors.

images/paper/examples.png

Figure 5. A number of unique backgrounds generated by our system. This view can be rendered, without pre-rendering the textures, in 20 ms on a GeForce4 Ti 4200 in a 1024x768 window (fill-rate/bandwidth limited).

images/paper/buoyoing1.png
images/paper/buoyoing2.png

Figures 6-7. Two different screenshots of a structure of PDF documents viewed in a focus+context view. The user interface shows relationships between specific points in the documents. Each document has a unique background, which makes it easy to see that the document fragment on the right side of the second view belongs to the document shown in full in the first view; without unique backgrounds, this would be relatively difficult and would require traversing the link.