One can also use optimization techniques to derive control functions
automatically. An optimization algorithm tries to produce an *optimal
controller* through repeated controller and trajectory generation,
rewarding better generated motions according to a user-specified
objective function. This generate-and-test procedure resembles
trial-and-error learning in humans and animals and is therefore often
referred to as "learning". The resulting motion can be influenced
indirectly by modifying the objective function. We emphasize the
difference between the optimization algorithm described here and the
constrained optimization technique mentioned earlier: here, the laws
of physics are not treated as constraints, and motion is represented
in *actuator-time* space rather than in state-time space.
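The generate-and-test loop can be sketched concretely. The toy below is an illustrative assumption, not part of the original model: a point-mass "swimmer" with quadratic drag stands in for a physics-based simulation, the controller is a sinusoidal actuator signal parameterized in actuator-time space, and simple stochastic hill climbing plays the role of the optimization algorithm. The objective rewards distance traveled in a fixed interval and penalizes actuation effort, in the spirit of the energy-based objectives discussed below.

```python
import math
import random

def simulate(params, steps=200, dt=0.05):
    """Toy 1-D 'swimmer': a point mass driven by a sinusoidal actuator
    and resisted by quadratic drag (a stand-in for a real physics-based
    model). Returns (distance traveled, energy expended)."""
    amp, freq, phase = params
    x, v, energy = 0.0, 0.0, 0.0
    for step in range(steps):
        t = step * dt
        force = amp * math.sin(2 * math.pi * freq * t + phase)
        thrust = max(force, 0.0)          # actuator produces thrust one way
        drag = -0.5 * v * abs(v)          # quadratic drag opposes motion
        v += (thrust + drag) * dt
        x += v * dt
        energy += abs(force) * dt         # crude measure of actuation cost
    return x, energy

def objective(params, energy_weight=0.1):
    """Reward distance covered in a fixed time interval, penalize effort."""
    distance, energy = simulate(params)
    return distance - energy_weight * energy

def hill_climb(iterations=500, seed=0):
    """Generate-and-test: perturb the controller parameters, keep the
    candidate whenever its generated motion scores better."""
    rng = random.Random(seed)
    best = [1.0, 1.0, 0.0]               # amplitude, frequency, phase
    best_score = objective(best)
    for _ in range(iterations):
        candidate = [p + rng.gauss(0.0, 0.2) for p in best]
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

params, score = hill_climb()
print(params, score)
```

Because the candidate is only accepted when it scores higher, the objective value is non-decreasing over iterations; richer methods (simulated annealing, evolutionary search) differ mainly in how candidates are generated and accepted, not in this overall generate-and-test structure.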

Since motions are always generated in accordance with the laws of physics, the optimization algorithm is able to exploit the mechanical properties of the physics-based models as well as their environment [Funge1995]. Interesting modes of locomotion have been discovered automatically using simple objective functions that reward low energy expenditure and distance traveled in a fixed time interval [Pandy, Anderson and Hull1992, van de Panne and Fiume1993, Ngo and Marks1993, Sims1994, Grzeszczuk and Terzopoulos1995]. The resulting motions bear a distinct qualitative resemblance to the way that animals with comparable morphologies perform similar locomotion tasks. We emphasize again that the fidelity of the dynamic model is critical to the realism of the resulting locomotion.

Although we have hand-crafted the control functions for the artificial fish's muscles, our model is rich enough to allow such control functions to be obtained automatically through optimization, as is demonstrated by the work of Grzeszczuk and Terzopoulos [Grzeszczuk and Terzopoulos1995].

Xiaoyuan Tu | January 1996