Fast Complementary Dynamics via Skinning Eigenmodes

Otman Benchekroun1, Jiayi Eris Zhang2, Siddhartha Chaudhuri3,
Eitan Grinspun1, Yi Zhou3, Alec Jacobson1,3

1University of Toronto, 2Stanford University, 3Adobe Research

SIGGRAPH North America 2023


We propose a reduced-space elasto-dynamic solver that is well suited for augmenting rigged character animations with secondary motion. At the core of our method is a novel deformation subspace based on Linear Blend Skinning that overcomes many of the shortcomings prior subspace methods face. Our skinning subspace is parameterized entirely by a set of scalar weights, which we obtain through a small, material-aware and rig-sensitive generalized eigenvalue problem. The resulting subspace easily captures rotational motion and guarantees that the simulation is rotation equivariant. We further propose a simple local-global solver for linear co-rotational elasticity, together with a clustering method to aggregate per-tetrahedron non-linear energetic quantities. The result is a compact simulation that is fully decoupled from the complexity of the mesh.
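As a rough illustration of how a subspace "parameterized entirely by a set of scalar weights" can look, the sketch below builds a dense linear-blend-skinning Jacobian from rest positions and a weight matrix. The function name and the layout of the stacked affine degrees of freedom are our own assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def lbs_jacobian(X, W):
    """Build a dense linear-blend-skinning Jacobian J (3n x 12m) mapping
    stacked per-handle affine matrices to vertex positions:
        x_i = sum_j w_ij * T_j @ [x_i; 1].
    X: (n, 3) rest positions, W: (n, m) skinning weights."""
    n, m = W.shape
    Xh = np.hstack([X, np.ones((n, 1))])              # homogeneous rest positions (n x 4)
    # Weight each vertex's homogeneous coordinates per handle: (n x 4m),
    # columns ordered handle-major, then coordinate.
    WX = (W[:, :, None] * Xh[:, None, :]).reshape(n, 4 * m)
    # Kronecker with a 3x3 identity interleaves the spatial dimensions,
    # so column 3*(4j + c) + r multiplies entry T_j[r, c].
    return np.kron(WX, np.eye(3))                     # (3n x 12m)
```

With a single handle of uniform weight and an identity affine transform, `J @ z` reproduces the stacked rest positions, which is a quick sanity check on the column layout.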



@article{benchekroun2023fastcd,
  title = {Fast Complementary Dynamics via Skinning Eigenmodes},
  author = {Benchekroun, Otman and Zhang, Jiayi Eris and Chaudhuri, Siddhartha and Grinspun, Eitan and Zhou, Yi and Jacobson, Alec},
  year = {2023},
  journal = {ACM Transactions on Graphics},
}


We add secondary motion to rig animations in real time by running the physics simulation in a reduced subspace.

We propose a linear-blend-skinning-like subspace for secondary motion. Our subspace is fully parameterized by a set of skinning weights, which are found via an eigendecomposition of the elastic energy Laplacian.
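The eigendecomposition above amounts to a generalized eigenvalue problem L w = λ M w whose lowest-frequency eigenvectors serve as skinning weights. The sketch below solves it with SciPy; the toy 1D chain is only a stand-in for an elastic energy Laplacian, and the small negative shift is an assumption we make so the shift-invert factorization stays non-singular despite the constant null space.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def skinning_eigenmodes(L, M, k):
    """Solve the generalized eigenvalue problem L w = lam M w for the k
    lowest-frequency modes; each column of W is one set of skinning weights."""
    lam, W = eigsh(L.tocsc(), k=k, M=M.tocsc(), sigma=-1e-6, which='LM')
    order = np.argsort(lam)               # return modes sorted by frequency
    return lam[order], W[:, order]

# Toy stand-in for an elastic-energy Laplacian: a free 1D chain of 20 nodes.
n = 20
L = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1]).tolil()
L[0, 0] = L[n - 1, n - 1] = 1.0           # free (Neumann) boundary
M = sp.identity(n, format='csc')          # lumped unit mass matrix
lam, W = skinning_eigenmodes(L.tocsr(), M, k=4)
```

The first mode has eigenvalue zero and constant weights (a rigid translation of the whole chain); higher modes give progressively higher-frequency weight fields.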

With a simple modification to the elastic energy Laplacian, our skinning eigenmodes can be made rig-aware and inherently satisfy the rig-orthogonality constraint.
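One way to realize such a constraint is to restrict the eigenproblem to the null space of the rig-orthogonality condition, so every computed mode is complementary to the rig by construction. The dense sketch below assumes a rig Jacobian `Jr` and mass matrix `M`; it illustrates the idea only, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh, null_space

def rig_orthogonal_modes(H, M, Jr, k):
    """Restrict the eigenproblem H w = lam M w to the subspace that is
    M-orthogonal to the rig Jacobian Jr, i.e. Jr.T @ M @ w = 0, so each
    returned mode inherently satisfies the rig-orthogonality constraint."""
    N = null_space(Jr.T @ M)                  # orthonormal basis of the constraint null space
    lam, Z = eigh(N.T @ H @ N, N.T @ M @ N)   # reduced generalized eigenproblem
    return lam[:k], N @ Z[:, :k]              # lift modes back to the full space
```

Because the modes are built inside the null space, no constraint needs to be enforced at simulation time.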

To accelerate the computation of energetic non-linearities in our simulation, we extend our skinning modes with skinning clusters, which inherit the rig-awareness of our skinning weights.
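A minimal sketch of such a clustering, assuming tetrahedra are grouped by Lloyd's k-means on per-tet rows of the skinning-weight matrix (the feature choice and the deterministic farthest-point initialization are our assumptions, not the paper's):

```python
import numpy as np

def skinning_clusters(Wt, n_clusters, n_iters=50):
    """Group tetrahedra by k-means on rows of the per-tet skinning-weight
    matrix Wt (e.g. weights averaged over each tet's four vertices).
    Tets with similar weight profiles deform similarly, so non-linear
    energy terms can be aggregated once per cluster instead of per tet."""
    # Deterministic farthest-point initialization of the cluster centers.
    centers = [Wt[0]]
    for _ in range(n_clusters - 1):
        d = np.min([np.sum((Wt - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(Wt[np.argmax(d)])
    C = np.array(centers)
    # Lloyd iterations: assign each tet to its nearest center, then update.
    for _ in range(n_iters):
        labels = np.argmin(((Wt[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(n_clusters):
            if np.any(labels == j):
                C[j] = Wt[labels == j].mean(axis=0)
    return labels
```

Because the features are the skinning weights themselves, clusters built this way automatically pick up whatever rig-awareness the weights encode.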

IK Chicken

Pose Tracking

Face Tracking

Interactive Aquarium


This project is funded in part by NSERC Discovery (RGPIN-2017-05235, RGPIN-2021-03733, RGPAS-2017-507938), the New Frontiers in Research Fund (NFRFE-201), the Ontario Early Researcher Award program, the Canada Research Chairs Program, a Sloan Research Fellowship, the DSI Catalyst Grant program, and gifts from Adobe Inc. Otman Benchekroun was funded by an NSERC CGS-M scholarship, and Jiayi Eris Zhang is funded by a Stanford Graduate Fellowship. We thank David I.W. Levin, Danny M. Kaufman, Doug L. James, Ty Trusty, Sarah Kushner and Silvia Sellán for insightful conversations. We thank Alejandra Baptista Aguilar for early discussion and adoption in 3D environments, and for providing some of the models in the paper. We thank Silvia Sellán, Selena Ling, Shukui Chen, Aravind Ramakrishnan and Kinjal Parikh for proofreading. We thank Xuan Dam and John Hancock for technical and administrative assistance throughout the course of this project.