next up previous contents
Next: Task-level Motion Planning Up: The Modeling of Action Selection Previous: Goals and Means

Previous Work

In prior behavioral animation work, such as that of Reynolds Reynolds87, only a stimulus-driven action selection process is modeled. For example, the three sub-behaviors of a flocking boid are activated directly by environmental conditions. If the environmental conditions for more than one sub-behavior occur, the one with the highest weight is chosen (each sub-behavior is associated with a weighting factor that reflects its importance). Comparably, Sun and Green Sun93 specify the action selection of a synthetic actor through a set of ``relations'', each of which is a mapping from an external stimulus to a response of the actor.
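The stimulus-driven, weighted scheme described above can be sketched as follows. This is an illustrative toy, not Reynolds' actual implementation; the behavior names, trigger conditions, and weights are all hypothetical:

```python
# Hypothetical sketch of stimulus-driven, weighted action selection:
# each sub-behavior fires directly off an environmental condition, and
# a fixed weight breaks ties when several conditions hold at once.

def select_behavior(stimuli, behaviors):
    """Pick the highest-weighted behavior whose triggering
    condition holds under the current stimuli."""
    active = [(w, name) for name, (condition, w) in behaviors.items()
              if condition(stimuli)]
    if not active:
        return None
    return max(active)[1]  # tuples compare by weight first

# Three boid-like sub-behaviors (illustrative names and weights).
behaviors = {
    "avoid_collision": (lambda s: s["nearest_dist"] < 1.0, 3.0),
    "match_velocity":  (lambda s: s["flock_visible"],      2.0),
    "flock_centering": (lambda s: s["flock_visible"],      1.0),
}

stimuli = {"nearest_dist": 0.5, "flock_visible": True}
print(select_behavior(stimuli, behaviors))  # avoid_collision
```

Note that the agent's internal state plays no role here: the mapping from stimuli to action is direct, which is precisely the limitation discussed below.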

Analogous strategies have been taken in robotics. Representative examples include the rule-based mechanism in ``Pengi'' developed by Agre and Chapman Agre-Chapman87; and reactive systems and systems with emergent functionality, such as those proposed by Kaelbling Kaelbling87, Brooks Brooks86 and Anderson Anderson90. These mechanisms have the main advantage of coping well with contingencies in the environment since actions are more or less directly coupled with external stimuli. However, as Tyrrell Tyrrell93 points out:

... while we realise that many traditional planning approaches are unsuitable for action selection problems due to their rigidness and disregard of the uncertainty of the environment, we also realize that stimulus-driven mechanisms err in the opposite direction.
In particular, these mechanisms do not model the agent's internal state and thus cannot take into account the agent's motivations. They are therefore limited in dealing with the more sophisticated action selection problems faced by agents with multiple high-level (possibly motivational) behaviors, such as those faced by most animals.

Well aware of the aforementioned problems, researchers in ALife and related fields (e.g., robotics) have devised various implementation schemes for action selection in animats that take into account both internal and external stimuli. This body of work provides a valuable reference for our design of the behavior control system of the artificial fish.

Maes Maes90a,Maes91a proposed a distributed, non-hierarchical implementation of action selection, called a ``behavior choice network''. The results demonstrate that this model possesses certain properties that are believed to be important to action selection in real animals, such as persistence in behavior, opportunism and satisfactory efficiency. However, while the distributed structure offers good flexibility, it also causes some problems. For example, convergence to a correct choice of behavior is hard to guarantee. Using a similar architecture to that of Maes' network, Beer and Chiel Beer91 proposed a neuroethology-based implementation of an artificial nervous system for simple action selection in a robot insect.
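A minimal, much-simplified sketch of a Maes-style network may help fix the idea. This is an assumption-laden toy, not Maes' published algorithm in full: her model also spreads activation along predecessor, successor, and conflicter links and adaptively lowers the selection threshold, which are omitted here. All node names, states, and weights are hypothetical:

```python
# Simplified Maes-style behavior node: activation accumulates from the
# perceived state and from the goals; a node may fire only when all of
# its preconditions hold and its activation exceeds a threshold.

class BehaviorNode:
    def __init__(self, name, preconditions, adds):
        self.name = name
        self.preconditions = set(preconditions)  # state needed to run
        self.adds = set(adds)                    # state achieved by running
        self.activation = 0.0

def select(nodes, state, goals, threshold=1.0):
    # Inject activation from the current situation and from the goals.
    for n in nodes:
        n.activation += len(n.preconditions & state)  # situational relevance
        n.activation += 2.0 * len(n.adds & goals)     # goal relevance
    runnable = [n for n in nodes
                if n.preconditions <= state and n.activation >= threshold]
    if not runnable:
        return None
    winner = max(runnable, key=lambda n: n.activation)
    winner.activation = 0.0  # reset after firing
    return winner.name

# Two toy nodes for a fish: searching makes food visible; eating
# requires visible food and achieves satiation.
nodes = [BehaviorNode("search_food", {"hungry"}, {"food_visible"}),
         BehaviorNode("eat", {"hungry", "food_visible"}, {"sated"})]
print(select(nodes, state={"hungry"}, goals={"sated"}))  # search_food
```

Even in this stripped-down form, the convergence issue noted above is visible: whether activation settles on a sensible sequence of choices depends delicately on the injection weights and threshold.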

Along the lines of Maes' and Beer's work, others have proposed various network-type action selection mechanisms. For instance, Sahota's Sahota94 mechanism allows behaviors to ``bid'', and the behavior with the highest bid represents the most appropriate choice.

A common attribute of the above mechanisms (and many others) is the use of a winner-takes-all selection/arbitration process, where the final decision is made exclusively by the winning action or behavior. While this offers highly focused attention and hence efficiency, it ignores the importance of generating compromised actions. The ability to compromise between different, even conflicting, desires is evident in natural animals. Tyrrell Tyrrell92b emphasized this particular aspect of animal behavior in the implementation of what is known as free-flow hierarchies. A free-flow hierarchy implements compromised actions [Rosenblatt and Payton 1989] within a hierarchical action selection architecture similar to those proposed by early ethologists, such as Tinbergen Tinbergen51. The winner is chosen only at the very bottom level of the hierarchy. Simulation results [Tyrrell 1993a] show that free-flow hierarchies yield favorable choices of action compared to other mechanisms, such as Maes'. A similar scheme is used to design the brains of the pets in Coderre's Coderre87 PetWorld. The main difference between the decision-making (or data-flow) hierarchy in PetWorld and a free-flow hierarchy is that, in the former, sensory data flows from the bottom of the hierarchy to the top, while in the latter it flows from the top down. The major drawback of such an implementation is its high complexity, hence inefficiency, due to the large amount of computation required.
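The free-flow idea can be sketched as follows: activation flows down from high-level behaviors to shared low-level actions, is summed at the leaves, and a winner is picked only at the bottom, so every behavior contributes to the final choice. The two-level hierarchy, names, and weights here are illustrative, not Tyrrell's:

```python
# Hedged sketch of a free-flow hierarchy: nothing is discarded above
# the bottom level, so compromise actions can emerge from the sums.

def free_flow_select(hierarchy, drives):
    """hierarchy maps each high-level behavior to {action: weight};
    drives maps each behavior to its current activation level."""
    action_activation = {}
    for behavior, edges in hierarchy.items():
        for action, weight in edges.items():
            action_activation[action] = (action_activation.get(action, 0.0)
                                         + drives[behavior] * weight)
    # The winner-takes-all choice happens only here, at the leaves.
    return max(action_activation, key=action_activation.get)

hierarchy = {
    "feed":  {"swim_left": 0.9, "swim_right": 0.1},
    "avoid": {"swim_left": 0.0, "swim_right": 0.8},
}
# Food lies to the left, but a predator also lurks there: the summed
# activations favor heading right even though feeding alone would not.
drives = {"feed": 0.5, "avoid": 0.9}
print(free_flow_select(hierarchy, drives))  # swim_right
```

The cost noted above is also visible in the sketch: every behavior-action edge must be evaluated on every selection cycle, which grows quickly with the size of the hierarchy.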

The behavior system of the artificial fish incorporates both stimulus-driven mechanisms and motivation-based mechanisms for action selection. As a result, the fish possesses a level of behavioral capacity to achieve coherence among a number of complex behaviors. In this regard, our work is compatible with the work by Coderre Coderre87, Maes Maes90a,Maes91a and Tyrrell Tyrrell92b.

Our implementation is similar to Tyrrell's in that it employs a top-down hierarchical structure and real-valued sensory readings, and it can generate compromised actions. Unlike Tyrrell's model, our mechanism employs an essentially winner-takes-all selection process and allows only certain losing behaviors to influence the execution of the winning behavior. This way, action selection is carried out much more efficiently than in a free-flow hierarchy. Since more than one behavior influences the selection of the detailed actions taken to accomplish the chosen behavior, the final choices of actions are preferable to those generated by a conventional winner-takes-all process. Moreover, the majority of previous action selection models (including the aforementioned ones) are based on a discrete 2D world, which simplifies the problem by greatly restricting the legal choices of motor actions. Our model, in contrast, is based on a continuous 3D environment in which the animated animals perform continuous motions.
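The hybrid scheme just described can be sketched as below: a winner-takes-all choice is made among behaviors, after which sufficiently strong losing behaviors are still allowed to bias the winner's motor parameters. All names, thresholds, and blend factors here are hypothetical illustrations, not the actual parameters of the artificial fish:

```python
# Illustrative sketch: winner-takes-all at the behavior level, with
# strong runner-up behaviors nudging the detailed action (here, a
# swimming heading in degrees) without overriding the chosen behavior.

def select_and_execute(intentions):
    """intentions: list of (name, strength, preferred_heading)."""
    ranked = sorted(intentions, key=lambda b: b[1], reverse=True)
    winner = ranked[0]
    heading = winner[2]
    for name, strength, h in ranked[1:]:
        # Only losers above half the winner's strength get a say,
        # and only as a partial pull on the heading.
        if strength > 0.5 * winner[1]:
            heading += 0.25 * (h - heading)
    return winner[0], heading

behavior, heading = select_and_execute([
    ("feed",   0.8,  30.0),   # strongest intention wins
    ("escape", 0.5, -60.0),   # loses, but strong enough to bias heading
    ("mate",   0.1,  90.0),   # too weak to influence execution
])
print(behavior, heading)  # feed 7.5
```

The result keeps the single-winner efficiency of conventional arbitration while recovering some of the compromise behavior that free-flow hierarchies buy at much greater computational cost.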

Xiaoyuan Tu, January 1996