To summarize, to model an artificial fish's perceptual capability, a visibility check against all environmental objects is performed at each animation time step. As described above, the visibility check consists of two steps: first, the view volume test determines whether an object lies within the fish's view volume and, if it does, the occlusion test then determines whether the object is occluded.
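The two-step check can be sketched as follows. This is only an illustrative sketch, not the thesis's actual implementation: the routine names, the dictionary representation of objects (a position plus a bounding-sphere radius), the conical view volume, and the sphere-based occlusion test are all our own assumptions.

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))

def in_view_volume(eye, gaze, center, view_radius, half_angle):
    """Step 1: view volume test, modeled here as a cone. `gaze` must be
    a unit vector; `half_angle` is the cone's half-angle in radians."""
    d = _sub(center, eye)
    dist = _norm(d)
    if dist > view_radius:
        return False
    if dist == 0.0:
        return True
    return _dot(d, gaze) / dist >= math.cos(half_angle)

def occluded(eye, target, scene):
    """Step 2: occlusion test. The target is occluded if any other
    object's bounding sphere intersects the sight line to the target."""
    seg = _sub(target["pos"], eye)
    seg_len = _norm(seg)
    if seg_len == 0.0:
        return False
    for other in scene:
        if other is target:
            continue
        v = _sub(other["pos"], eye)
        # Parameter of the point on the sight segment closest to the sphere.
        t = max(0.0, min(seg_len, _dot(v, seg) / seg_len))
        closest = tuple(e + t * s / seg_len for e, s in zip(eye, seg))
        if _norm(_sub(other["pos"], closest)) < other["radius"]:
            return True
    return False

def visible(eye, gaze, target, scene, view_radius, half_angle):
    """Cheap view volume test first; occlusion test only if it passes."""
    if not in_view_volume(eye, gaze, target["pos"], view_radius, half_angle):
        return False
    return not occluded(eye, target, scene)
```

Note that the ordering matters: the inexpensive view volume test filters out most objects so that the costlier occlusion test runs only on the few that remain.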
Since the occlusion test is performed only against the typically small number of objects that fall within the fish's view volume, its computational cost does not grow linearly with the total number of objects in the scene. This is not true of the view volume test: with a naive algorithm, the cost per fish is O(N), where N is the number of objects in the scene, so the total cost of the view volume tests for all the fish grows quadratically with the number of fish in the animation. Therefore, when the number of fish is large, for example, when animating a school of fish, a more efficient algorithm for the view volume test is desirable. To this end, we have implemented an algorithm that exploits the space-time coherence of objects. With each fish we associate a list of potentially-visible objects. One can view the potentially-visible objects as those that fall within a pseudo view volume whose radius is larger than that of the fish's view volume. This list is generated at the beginning of the animation and is updated only intermittently. At each animation time step, given a bound on the maximum fish velocity, a fish need not perform the view volume test on all the other objects; it need only check whether any object in its potentially-visible list has entered its view volume.
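This caching scheme might be realized as in the following sketch; the class and parameter names are our own, and the safety margin shown (twice the maximum per-step displacement times the number of steps between refreshes, covering motion of both the fish and the objects) is one plausible way to guarantee that nothing outside the pseudo view volume can reach the true view volume before the next refresh. For brevity the per-step test here checks only range, omitting the directional part of the view volume test.

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PerceptionCache:
    """Potentially-visible-object list for one fish (illustrative sketch).

    pseudo_radius = view_radius + 2 * v_max * dt * refresh_steps: between
    refreshes, neither the fish nor any object can move more than
    v_max * dt per step, so no object outside the pseudo radius can enter
    the true view volume before the list is rebuilt.
    """
    def __init__(self, view_radius, v_max, dt, refresh_steps):
        self.view_radius = view_radius
        self.pseudo_radius = view_radius + 2.0 * v_max * dt * refresh_steps
        self.refresh_steps = refresh_steps
        self.steps_since_refresh = refresh_steps  # force refresh on first call
        self.pvs = []

    def candidates(self, fish_pos, all_objects):
        # Intermittent O(N) pass over the whole scene...
        if self.steps_since_refresh >= self.refresh_steps:
            self.pvs = [o for o in all_objects
                        if _dist(fish_pos, o["pos"]) <= self.pseudo_radius]
            self.steps_since_refresh = 0
        self.steps_since_refresh += 1
        # ...but the per-step range test touches only the short list.
        return [o for o in self.pvs
                if _dist(fish_pos, o["pos"]) <= self.view_radius]
```

The amortized per-step cost is thus proportional to the size of the short list rather than to N, with the full O(N) scan paid only once every `refresh_steps` steps.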
It is important to appreciate, however, that the value of our perception model does not lie in the particular method we chose to implement it. (Indeed, to achieve high simulation speed for animation, we have chosen a very simple approach which may not be readily extensible to more complex virtual environments.) Rather, our main purpose is to demonstrate the importance of simulating perceptual capability to the modeling of realistic behavior and, in this respect, our approach is adequate for the task at hand.
One avenue for future research would be to implement a more sophisticated geometric method that scales well to more complicated virtual environments. For instance, we could tessellate the entire 3D environment into, say, M cells and establish a look-up table with M entries. Each entry of the table registers the indices of the objects that reside in the corresponding cell. The table is computed at the beginning of the animation and is updated for each fish at each time step. Because of space-time coherence, the update need not be computed with respect to all the cells. Rather, only the cells adjacent to a fish's previous cell are checked to determine which cell the fish currently occupies. Once the table is updated, the objects that fall within a fish's view volume can be easily determined by accessing the table entries of nearby cells. This algorithm is especially suitable when there are many objects besides fish, such as static obstacles, seaweed, etc.
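Such a cell-based table might look like the following sketch. The class and method names are our own; a sparse dictionary stands in for the M-entry table, storing only occupied cells, which is a design choice rather than anything prescribed by the text.

```python
import math

class UniformGrid:
    """Sparse uniform-grid look-up table: cell coordinates -> object ids
    (illustrative sketch; only occupied cells are stored)."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}

    def cell_of(self, pos):
        return tuple(int(math.floor(c / self.cell_size)) for c in pos)

    def insert(self, obj_id, pos):
        self.cells.setdefault(self.cell_of(pos), set()).add(obj_id)

    def move(self, obj_id, old_pos, new_pos):
        # Space-time coherence: on most steps the object stays in the
        # same cell, so this update is a cheap no-op.
        old_c, new_c = self.cell_of(old_pos), self.cell_of(new_pos)
        if old_c != new_c:
            self.cells.get(old_c, set()).discard(obj_id)
            self.cells.setdefault(new_c, set()).add(obj_id)

    def nearby(self, pos, radius):
        """Collect object ids from all cells overlapping a query sphere;
        only these candidates need the full view volume test."""
        r = int(math.ceil(radius / self.cell_size))
        cx, cy, cz = self.cell_of(pos)
        found = set()
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for dz in range(-r, r + 1):
                    found |= self.cells.get((cx + dx, cy + dy, cz + dz), set())
        return found
```

With a cell size on the order of the view volume radius, each query touches a small constant number of cells, so the per-fish cost depends on local object density rather than on the total object count N.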
Moreover, for the occlusion test, more sophisticated bounding volume algorithms, such as the Kay-Kajiya algorithm [Foley et al. 1990], can be employed when the obstacles have more complicated shapes.
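The Kay-Kajiya method bounds an object by pairs of parallel planes (slabs) and clips a ray against each pair in turn. As a simplified illustration of the underlying slab idea, the following sketch, with our own naming, tests whether the sight segment from a fish's eye to a target intersects an obstacle's axis-aligned bounding box; the general algorithm allows arbitrarily oriented slab pairs for tighter bounds.

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: clip the sight segment (t in [0, 1] along `direction`)
    against each pair of parallel planes; the segment hits the box iff a
    non-empty t-interval survives all three slabs."""
    t_near, t_far = 0.0, 1.0
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False  # parallel to the slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False  # the slab intervals have empty intersection
    return True
```

An occlusion query against a complicated obstacle can then first test its bounding volume this way and run an exact intersection test only when the cheap slab test reports a hit.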
Xiaoyuan Tu, January 1996