Compromise actions are enabled in the behavior system through the use of motor preferences. This design is considerably more efficient than a free-flow hierarchy [Tyrrell1993b]. In a free-flow hierarchy, every behavior must compute a recommendation for the next action, and no winner emerges until all of the recommendations arrive at the motor command level, where they are fused into a winning action (see Appendix for more detail about free-flow hierarchies). While a free-flow hierarchy pays no special attention to the most desirable behavior in the current situation, our design of the intention generator takes the opposite approach: it picks a winning behavior at an early stage, so that sensory information irrelevant to that behavior need not be processed. This reflects the efficiency of a winner-takes-all selection process. Unlike a conventional winner-takes-all process, however, the motor preferences allow unselected behaviors to influence how the winning behavior is performed. We believe that our implementation is a good compromise between efficiency and functionality.
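The selection scheme described above can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation: the behavior names, the numeric desirability values, and the string-valued preference hints are all assumptions made for the example.

```python
# Hypothetical sketch: winner-takes-all behavior selection in which
# unselected behaviors still contribute qualitative motor preferences.
# All names and values are illustrative, not from the thesis.

from dataclasses import dataclass, field


@dataclass
class Behavior:
    name: str
    desirability: float                # how strongly this behavior wants control
    preferences: set = field(default_factory=set)  # qualitative hints, e.g. {"slow_down"}


def select_action(behaviors):
    # Winner-takes-all: commit to the most desirable behavior early,
    # so sensing relevant only to the losing behaviors can be skipped.
    winner = max(behaviors, key=lambda b: b.desirability)
    # Unlike a pure winner-takes-all scheme, the losers' qualitative
    # preferences are passed along to modulate HOW the winner acts.
    hints = set()
    for b in behaviors:
        if b is not winner:
            hints |= b.preferences
    return winner.name, hints


behaviors = [
    Behavior("feed", 0.9, {"move_forward"}),
    Behavior("avoid_obstacle", 0.4, {"turn_left"}),
    Behavior("school", 0.2, {"slow_down"}),
]
action, hints = select_action(behaviors)
# "feed" wins outright; {"turn_left", "slow_down"} modulate its execution.
```

Because the preferences are qualitative (set membership rather than weighted vectors), combining them is a cheap union instead of the numeric fusion a free-flow hierarchy requires.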
Moreover, the qualitative nature of the motor preferences simplifies both their generation and their use (see Section ). This stands in direct contrast to the quantitative action recommendations used in Tyrrell's implementation. Tyrrell showed that merging such recommendations into a single motor command is a complicated and difficult problem even when all the actions are specified in a 2D discrete world; it becomes even more problematic when the actions are defined in a 3D continuous world. In addition, it is doubtful that most animals decide how to act by carefully evaluating and optimizing all possible actions. For this reason, qualitative motor preferences also seem more biologically plausible.
Xiaoyuan Tu, January 1996