Broadly speaking, I am interested in all areas of computer graphics and computer vision. My work seeks to address the following high-level questions:
Computer Graphics: What powerful tools can we provide to artists, designers, scientists, and novice users for creating beautiful, expressive, artistic, and/or illustrative imagery and animation?
Computer Vision: How can we visually understand the world, extract meaning from images, and model the human visual system?
Moreover, I am increasingly interested in applications of machine learning, especially Bayesian inference, to both of these areas. Computer vision and graphics rely heavily on the analysis and generation of data, and Bayesian methods provide powerful tools for interpreting that data. I collaborate actively with colleagues at the University of Toronto, which has some of the strongest groups in computer vision, machine learning, and graphics/HCI, and with a number of labs internationally, including the University of Washington's GRAIL lab.
Some specific research areas:
Learning Human Motion Models
How can we create virtual characters from live human performance? Doing so requires building computational models of human motion that capture both the physics of the body and an individual's style, whether explicitly or implicitly. Our work in this area led to the first system for creating animation by example, an approach that has since become very popular in the graphics community. We have also developed a real-time system for interactive character posing that uses machine learning to determine which poses are "most likely"; it has been licensed to a major game developer. More recently, we have developed exciting new methods for learning biomechanical human body models. These models can predict how a person will move under new circumstances, based on muscle strengths and stiffnesses estimated from a motion capture sequence.
Collaborators: Matthew Brand, Seth Cooper, David J. Fleet, Keith Grochow, C. Karen Liu, Steven L. Martin, Zoran Popović, Rajesh P. N. Rao, Aaron P. Shon, Jack M. Wang
Projects: Style machines, Style IK, Learning biomechanics, Motion Composition, Shared Latent GPs, Gaussian Process Dynamical Models, Style-Content Gaussian Processes, Active learning for mocap
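The idea of completing a partially posed character with its "most likely" configuration can be illustrated with a toy model. The sketch below fits a simple Gaussian prior to pose data and completes a constrained pose with the conditional mean; this is only a stand-in for the learned models described above (the actual systems use richer models such as Gaussian process latent variable models), and all function names here are illustrative.

```python
import numpy as np

def fit_pose_prior(poses):
    """Fit a Gaussian prior (mean, covariance) to training poses (N x D)."""
    mu = poses.mean(axis=0)
    cov = np.cov(poses, rowvar=False) + 1e-6 * np.eye(poses.shape[1])
    return mu, cov

def most_likely_completion(mu, cov, known_idx, known_vals):
    """Fill in the unconstrained joints of a partially specified pose with
    their most likely values: the mean of the conditional Gaussian
    p(x_unknown | x_known)."""
    D = len(mu)
    unknown_idx = [i for i in range(D) if i not in set(known_idx)]
    # Partition the covariance and apply the Gaussian conditioning formula:
    # E[x_u | x_k] = mu_u + S_uk S_kk^{-1} (x_k - mu_k)
    S_kk = cov[np.ix_(known_idx, known_idx)]
    S_uk = cov[np.ix_(unknown_idx, known_idx)]
    cond_mean = mu[unknown_idx] + S_uk @ np.linalg.solve(S_kk, known_vals - mu[known_idx])
    pose = np.empty(D)
    pose[known_idx] = known_vals
    pose[unknown_idx] = cond_mean
    return pose
```

With correlated joint angles, pinning one joint pulls the remaining joints toward the values the training data makes most probable, which is the essence of likelihood-based interactive posing.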
Visual tracking, reconstruction, and rotoscoping
How do we perceive the 3D structure of a video sequence containing moving people and objects? We have developed several techniques that address aspects of this problem, tackling fundamental computer vision questions while also producing practical tools for special effects. In one project, we developed a method for determining the 3D shape and motion of a non-rigid object from raw video, without any prior knowledge of the deforming object; this builds on our work on probabilistic non-rigid structure-from-motion. We have also developed a method for detailed reconstruction of shaded, diffusely-reflecting 3D objects from video, as well as interactive tracking (rotoscoping) techniques that are now in use in the special effects industry.
Collaborators: Aseem Agarwala, Christoph Bregler, Marcus Brubaker, Brian Curless, Alexei A. Efros, David J. Fleet, Pascal Fua, James Hays, Evangelos Kalogerakis, David H. Salesin, Steven M. Seitz, Lorenzo Torresani, Raquel Urtasun, Olga Vesselova, Li Zhang
Projects: Automatic non-rigid shape from video, Non-rigid SFM, Rotoscoping, Smooth surfaces from video, Kinematic person tracking, Physics-Based Person Tracking, Image Sequence Geolocation
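The core insight behind structure-from-motion is a rank constraint on tracked 2D points. The sketch below shows the rigid case (Tomasi-Kanade-style factorization under an orthographic camera); the non-rigid work above generalizes this by replacing the single rigid shape with a low-rank basis of shapes. This is a minimal illustration, not the published method.

```python
import numpy as np

def factorize_tracks(W):
    """Factor a 2F x P matrix of 2D point tracks (F frames, P points)
    into motion (2F x 3) and shape (3 x P), up to an affine ambiguity.
    Assumes an orthographic camera, so centered tracks have rank <= 3."""
    Wc = W - W.mean(axis=1, keepdims=True)   # remove per-row translation
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # camera/motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # 3D shape factor
    return M, S
```

Resolving the remaining affine ambiguity (enforcing orthonormal camera rows) and allowing the shape itself to deform are exactly where the probabilistic non-rigid formulations come in.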
Controllers for simulated locomotion
We are developing techniques for creating human and animal motor controllers that move in physically-realistic and expressive ways. Our work is inspired by insights from biology, robotics, and reinforcement learning.
Collaborators: Mazen Al Borno, David J. Fleet, Martin de Lasa, Igor Mordatch, Jack M. Wang
Projects: Prioritized Optimization, Optimizing Walking, Walking with Uncertainty, Feature-Based Controllers, Low-Dimensional Planning, Full-Body Spacetime
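At the lowest level, physically simulated characters are typically driven by joint torques; a standard building block is a proportional-derivative (PD) servo that pulls each joint toward a target angle. The sketch below shows a single 1-DOF joint under PD control; it is only an illustrative primitive, with the optimization and planning layers of the actual work omitted, and all names here are assumptions.

```python
import numpy as np

def pd_torque(q, qdot, q_target, kp=200.0, kd=20.0):
    """PD control law: spring toward the target angle, damped by velocity."""
    return kp * (q_target - q) - kd * qdot

def simulate_joint(q0, q_target, steps=2000, dt=0.001, inertia=1.0):
    """Integrate one joint under PD control with semi-implicit Euler."""
    q, qdot = q0, 0.0
    for _ in range(steps):
        tau = pd_torque(q, qdot, q_target)
        qdot += dt * tau / inertia    # update velocity from torque
        q += dt * qdot                # then position from new velocity
    return q
```

With these gains the joint is well damped (damping ratio near 0.7), so it settles at the target without oscillating; locomotion controllers coordinate many such joints and modulate their targets over time.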
Painting and line drawing algorithms
How can we write computer software that helps in creating artistic imagery and video? How can we enable computer animation in the styles of human painting and drawing? Answering these questions involves understanding aspects of artistic style and human perception. We have developed techniques for painterly rendering, painterly animation, and pen-and-ink illustration of 3D surfaces.
Collaborators: Simon Breslav, Brian Curless, Todd Goodwin, Charles E. Jacobs, Evangelos Kalogerakis, Alex Kolliopoulos, Derek Nowrouzezahrai, Peter O'Donovan, Nuria Oliver, Ken Perlin, Steven M. Seitz, Ian Vollick, Jack M. Wang, Denis Zorin
Projects: Painterly rendering, Painterly video, Illustrating smooth surfaces, Image Analogies, Paint By Relaxation, Curve Analogies, Segmentation-Based NPR, Artistic Stroke Thickness, Interactive painterly animation, Learning Hatching
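The basic mechanics of stroke-based painterly rendering can be sketched in a few lines: cover the image with short strokes, each colored by the source image and oriented along the local edge direction (perpendicular to the intensity gradient). The published systems add multiple brush sizes, curved strokes, and error-driven stroke placement; this toy version only illustrates the core idea, and is not the actual algorithm.

```python
import numpy as np

def painterly_render(img, spacing=4, length=5):
    """img: H x W x 3 float array in [0,1]. Returns a crude stroke rendering."""
    H, W, _ = img.shape
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)          # image intensity gradient
    out = np.ones_like(img)             # start from a white canvas
    for y in range(0, H, spacing):
        for x in range(0, W, spacing):
            # stroke direction: perpendicular to the gradient (along edges)
            dx, dy = -gy[y, x], gx[y, x]
            n = np.hypot(dx, dy)
            if n < 1e-8:
                dx, dy = 1.0, 0.0       # flat region: default horizontal stroke
            else:
                dx, dy = dx / n, dy / n
            color = img[y, x]
            for t in range(-length, length + 1):
                px = int(round(x + t * dx))
                py = int(round(y + t * dy))
                if 0 <= px < W and 0 <= py < H:
                    out[py, px] = color
    return out
```

Aligning strokes with edges rather than placing them axis-aligned is what gives even this crude renderer a hand-painted look.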
Art, Design, and Aesthetics
Collaborators: Aseem Agarwala, Maneesh Agrawala, Peter O'Donovan, Dan Vogel, Ian Vollick
Projects: The Science of Art, Learning Label Layout, Color Compatibility
Machine learning for geometry processing
Collaborators: Brett Allen, Henning Biermann, Brian Curless, Evangelos Kalogerakis, James McCrae, Derek Nowrouzezahrai, Zoran Popović, Patricio Simari, Karan Singh, Lexing Ying, Denis Zorin
Projects: Surface Texture Synthesis, Learning body shape variation, Real-Time Curvature, Learning mesh segmentation
Reconstructing objects with real-world materials
We have developed methods for 3D shape reconstruction of static objects in the difficult case of complex reflectance (such as shiny objects), which foils most shape reconstruction methods, including laser scanners. One method, based on photometric stereo with reference objects, requires only a very simple setup and calibration, and no advance knowledge of shape, illumination, or materials. A more recent technique also reconstructs BRDFs, enabling relighting and rerendering.
Collaborators: Brian Curless, Dan B. Goldman, Steven M. Seitz, Adrien Treuille
Projects: Example-based photometric stereo, Example-based multiview stereo, Scanning with varying BRDFs
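The reference-object idea can be sketched concretely: photograph a sphere of known geometry under the same lights as the target object; a target pixel whose intensities match some sphere pixel across all the lights is assigned that sphere pixel's known normal ("orientation consistency"). The toy simulation below assumes Lambertian shading purely for illustration; the point of the actual method is that the matching works without knowing the material or lighting.

```python
import numpy as np

def lambertian(normals, lights):
    """Intensity of each unit normal (N x 3) under each unit light (L x 3)."""
    return np.clip(normals @ lights.T, 0.0, None)

def match_normals(target_intensities, ref_intensities, ref_normals):
    """Assign each target pixel the known normal of the reference pixel
    whose intensity vector (one entry per light) is closest to it."""
    # squared distances between target (T x L) and reference (R x L) vectors
    d = ((target_intensities[:, None, :] - ref_intensities[None, :, :]) ** 2).sum(-1)
    return ref_normals[d.argmin(axis=1)]
```

Because the reference object is measured rather than modeled, the same lookup works for materials and lighting that would be hard to describe analytically.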
Collaborators: Kannan Achan, Aseem Agarwala, Michael F. Cohen, Alex Colburn, Brian Curless, Rob Fergus, William T. Freeman, Brendan Frey, Michael Guerzhoy, Sam T. Roweis, Barun Singh
Projects: Single-image deblurring, Segmental speech processing, Image-Based Remodeling, Latent Factor Travel Model