
Synthetic Vision Models

As we have mentioned earlier, when the virtual environment becomes highly complex, for example, when the shapes of obstacles or other objects are highly irregular, more sophisticated geometric methods for perception modeling can be employed. An alternative to such geometric methods is to build synthetic vision models of animals.

In the context of artificial life, we may also be interested in developing models of animal vision that are biologically more faithful. For instance, we may want to model the process of extracting useful perceptual information from the retinal images of an animal's vision system and to study how this process affects the animal's behavior.

Synthetic vision models can be built based on animated retinal images rendered from an artificial animal's ``point of view''. One obvious advantage of this approach is that the retinal images readily respect the occlusion relationships between objects in the animal's view. In addition, these retinal images are perspective projections of the 3D virtual world and hence are biologically plausible models of animal vision. Some aspects of animal perception, such as the recognition of object textures or of environmental illumination, cannot be captured by geometric methods and must instead be extracted from images. The synthetic vision approach to animation was first explored by Renault, Magnenat-Thalmann, and Thalmann [Renault et al. 1990] in guiding synthetic actors' navigation through a dynamic world.
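To make the projection model concrete, the following is a minimal C++ sketch of a pinhole-camera eye pair; the baseline (inter-eye distance), focal length, and head-frame coordinate system are hypothetical parameters chosen for illustration, not the actual rendering code of the system.

#include <cstdio>

// Hypothetical pinhole-eye model: both eyes look down the +z axis of the
// animal's head frame, separated by `baseline` along the x axis.
struct Pixel { double u, v; bool visible; };

// Perspectively project a point (given in the head frame) onto the retinal
// image plane of an eye whose optical center is offset by `eyeX`.
Pixel project(double x, double y, double z, double eyeX, double focal) {
    Pixel p{0.0, 0.0, false};
    if (z <= 0.0) return p;        // behind the eye: not visible
    p.u = focal * (x - eyeX) / z;  // perspective division by depth
    p.v = focal * y / z;
    p.visible = true;
    return p;
}

int main() {
    const double baseline = 0.1;   // hypothetical inter-eye distance
    const double focal    = 1.0;   // hypothetical focal length
    // A world point in front of the animal, expressed in its head frame.
    double x = 0.5, y = 0.2, z = 4.0;
    Pixel left  = project(x, y, z, -baseline / 2, focal);
    Pixel right = project(x, y, z, +baseline / 2, focal);
    // The horizontal disparity left.u - right.u shrinks with depth, which
    // is what makes binocular range estimation possible.
    std::printf("left (%.3f, %.3f)  right (%.3f, %.3f)  disparity %.3f\n",
                left.u, left.v, right.u, right.v, left.u - right.u);
    return 0;
}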

A simple synthetic vision procedure is to render each object in the scene with a unique shaded color (i.e., color-coding the objects). That way, by examining the presence of differently colored pixels in the image, we know which objects are visible, that is, within the visual field and not occluded. The range of an object may be either obtained directly from the graphics database or computed from its relative positions in the binocular images. Since the images are rendered with perspective projection, a collision test against an object may be performed by taking into account the relative size of that object in the images.

Object identification can be made more efficient by using similar shaded colors for objects in the same category. For example, we can render all fishes with different shades of red, all cylindrical obstacles with different shades of blue, etc., so that a quick preprocessing pass over the image reveals the presence of certain kinds of objects. Such identity maps and range maps can be thought of as analogues of the intrinsic images [Barrow and Tenenbaum 1981] of the scene. Part (a) of the figure below shows an example of the binocular retinal images, part (b) shows the corresponding color-coded identity maps, and part (c) shows the range maps, generated by assigning each pixel a shade of gray according to its z-value. The farther away an object is, the darker its shade.
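As a sketch of this color-coding idea, the following C++ fragment scans a tiny stand-in identity map and collects the visible objects per category; the red/blue encoding and the hand-written image buffer are illustrative assumptions, not the actual data structures of the system.

#include <cstdint>
#include <cstdio>
#include <set>
#include <vector>

// One pixel of a color-coded identity map. The renderer draws each object
// in a unique flat color, so the color itself names the object.
struct RGB { std::uint8_t r, g, b; };

// Illustrative category encoding, following the example in the text:
// fishes are shades of red, cylindrical obstacles are shades of blue.
bool isFish(RGB c)     { return c.r > 0 && c.g == 0 && c.b == 0; }
bool isObstacle(RGB c) { return c.b > 0 && c.g == 0 && c.r == 0; }

int main() {
    // A tiny stand-in for a rendered retinal image: black background,
    // pixels of fish #3 and fish #5 (red shades), obstacle #7 (blue shade).
    std::vector<RGB> image = {
        {0, 0, 0}, {3, 0, 0}, {3, 0, 0}, {0, 0, 7},
        {0, 0, 0}, {3, 0, 0}, {5, 0, 0}, {0, 0, 7},
    };

    std::set<int> fishes, obstacles;
    for (RGB c : image) {          // one pass over the retinal image
        if (isFish(c))          fishes.insert(c.r);
        else if (isObstacle(c)) obstacles.insert(c.b);
    }
    // Any identifier collected here is visible: it lies inside the field
    // of view and is not fully occluded, since occlusion was already
    // resolved by the renderer when the image was drawn.
    for (int id : fishes)    std::printf("fish %d is visible\n", id);
    for (int id : obstacles) std::printf("obstacle %d is visible\n", id);
    return 0;
}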

 



Figure: Analogues of intrinsic images. (a) Binocular retinal images; (b) color-coded identity maps; (c) range maps.
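The range maps of part (c) can likewise be sketched as a simple per-pixel mapping from depth to gray level, with farther pixels rendered darker; in the C++ sketch below, the near and far limits are hypothetical clipping distances, not values from the original system.

#include <algorithm>
#include <cstdint>
#include <cstdio>

// Map a pixel's z-value to an 8-bit gray level so that farther points come
// out darker, as in the range maps described above. `zNear` and `zFar` are
// hypothetical near/far limits of the visible depth range.
std::uint8_t depthToGray(double z, double zNear, double zFar) {
    double t = (z - zNear) / (zFar - zNear);         // 0 at zNear, 1 at zFar
    t = std::clamp(t, 0.0, 1.0);
    return static_cast<std::uint8_t>(255.0 * (1.0 - t));  // farther -> darker
}

int main() {
    const double zNear = 1.0, zFar = 50.0;  // hypothetical depth limits
    for (double z : {2.0, 10.0, 25.0, 49.0})
        std::printf("z = %5.1f  ->  gray %3d\n", z, depthToGray(z, zNear, zFar));
    return 0;
}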

