Research Overview: Aaron Hertzmann

Broadly speaking, I am interested in all areas of computer graphics and computer vision, and my work addresses high-level questions that span both fields.

I am increasingly interested in applications of machine learning, especially Bayesian inference, to these two areas. Computer vision and graphics both rely heavily on the analysis and generation of data, and Bayesian learning provides powerful tools for interpreting data. I collaborate actively with my colleagues at the University of Toronto (UofT), which has some of the strongest groups in computer vision, machine learning, and graphics/HCI. I also collaborate with a number of labs internationally, including the University of Washington's GRAIL lab.
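
As a toy illustration of what Bayesian inference offers (a minimal sketch, not drawn from any particular project here; the prior, noise level, and observations are all made up), consider fusing a Gaussian prior with noisy measurements of an unknown quantity:

```python
import numpy as np

# Conjugate Gaussian update: combine a prior belief about an unknown x
# with noisy observations of it. All numbers are made up for illustration.

prior_mean, prior_var = 0.0, 1.0       # Gaussian prior on x
noise_var = 0.5                        # known observation noise variance
observations = np.array([0.9, 1.1, 0.7])

# Posterior precision is the prior precision plus one unit of
# observation precision per measurement.
post_precision = 1.0 / prior_var + len(observations) / noise_var
post_var = 1.0 / post_precision
post_mean = post_var * (prior_mean / prior_var
                        + observations.sum() / noise_var)

print(f"posterior: mean={post_mean:.3f}, var={post_var:.3f}")
```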

Some specific research areas:

Learning human motion models
How can we create virtual characters from live human performance? Doing so requires building computational models of human motion that incorporate the physics of the body as well as an individual's style, explicitly or implicitly. Our work in this area led to the first system for creating animation by example, an approach that has since become very popular in the graphics community. We have developed a real-time system for interactive character posing that uses machine learning to determine which poses are "most likely"; this system has been licensed to a major game developer. More recently, we have developed new methods for learning biomechanical human body models, which can predict how a person will move under new circumstances, based on muscle strengths and stiffnesses estimated from a motion capture sequence.
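
To give a flavor of the "most likely pose" idea, here is a deliberately simplified sketch; the published system uses a richer learned model (a Gaussian process latent variable model), while this stand-in fits a single Gaussian to training poses and then, among candidate poses that already satisfy the artist's constraints, keeps the one the model rates most natural:

```python
import numpy as np

# Fit a Gaussian density to training poses (random arrays stand in for
# motion-capture joint angles), then score constraint-satisfying
# candidate poses: higher log-density means "more natural".

rng = np.random.default_rng(0)
train_poses = rng.normal(size=(500, 30))   # stand-in for mocap data

mu = train_poses.mean(axis=0)
cov = np.cov(train_poses, rowvar=False) + 1e-6 * np.eye(30)
cov_inv = np.linalg.inv(cov)

def log_likelihood(pose):
    """Unnormalized Gaussian log-density of a pose."""
    d = pose - mu
    return -0.5 * d @ cov_inv @ d

# Among candidates that already meet the IK constraints, keep the
# most likely one. (Candidates here are random placeholders.)
candidates = rng.normal(size=(10, 30))
best = max(candidates, key=log_likelihood)
```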

Collaborators: Matthew Brand, Seth Cooper, David J. Fleet, Keith Grochow, C. Karen Liu, Steven L. Martin, Zoran Popović, Rajesh P. N. Rao, Aaron P. Shon, Jack M. Wang

Projects: Style machines, Style IK, Learning biomechanics, Motion Composition, Shared Latent Gaussian Processes, Gaussian Process Dynamical Models, Style-Content Gaussian Processes, Active learning for mocap

Visual tracking, reconstruction, and rotoscoping
How do we perceive the 3D structure of a video sequence that contains moving people and objects? We have developed several techniques that address aspects of this problem, both answering fundamental computer vision questions and yielding practical tools for special effects. In one project, we developed a method for determining the 3D shape and motion of a non-rigid object from raw video, without any prior knowledge of the deforming object; this builds on our work on probabilistic non-rigid structure-from-motion. We have also developed a method for detailed reconstruction of shaded, diffusely-reflecting 3D objects from video, as well as interactive tracking (rotoscoping) techniques that are now in use in the special effects industry.
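
The rank constraint at the heart of non-rigid structure-from-motion can be sketched in a few lines. This is a simplified illustration with synthetic data, not the published probabilistic method (which fits the model with EM and deformation priors): under an orthographic camera, 2D tracks of a shape built from K deforming basis shapes form a measurement matrix of rank at most 3K.

```python
import numpy as np

# With K basis shapes and an orthographic camera, the 2F x P matrix of
# 2D point tracks over F frames has rank at most 3K. A truncated SVD
# recovers that low-rank structure from synthetic noisy tracks.

F, P, K = 40, 60, 2                       # frames, points, basis shapes (toy sizes)
rng = np.random.default_rng(1)

M = rng.normal(size=(2 * F, 3 * K))       # per-frame motion and shape coefficients
B = rng.normal(size=(3 * K, P))           # basis shapes
W = M @ B + 0.01 * rng.normal(size=(2 * F, P))   # noisy 2D tracks

# Best rank-3K approximation of the measurement matrix.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3 * K] * s[:3 * K]
B_hat = Vt[:3 * K]
print("reconstruction residual:", np.linalg.norm(W - M_hat @ B_hat))
```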

Collaborators: Aseem Agarwala, Christoph Bregler, Marcus Brubaker, Brian Curless, Andrew Fitzgibbon, David J. Fleet, Pascal Fua, Shahram Izadi, Cem Keskin, Gerard Pons-Moll, Varun Ramakrishna, David H. Salesin, Steven M. Seitz, Jamie Shotton, Richard Stebbing, Jonathan Taylor, Lorenzo Torresani, Raquel Urtasun, Li Zhang

Projects: Non-rigid SFM, Automatic non-rigid shape from video, Rotoscoping, Smooth surfaces from video, Kinematic person tracking, Physics-Based Person Tracking, Metric Regression Forests, Hand reconstruction

Controllers for simulated locomotion
We are developing techniques for creating human and animal motor controllers that move in physically realistic and expressive ways. Our work is inspired by insights from biology, robotics, and reinforcement learning.
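
A common low-level building block for physically-simulated characters is the proportional-derivative (PD) servo, which drives a joint toward a target angle; controllers in this line of work layer balance feedback, optimization, and learned components on top of such primitives. A minimal single-joint sketch (the gains, inertia, and timestep are made up for illustration):

```python
# Simulate one joint driven toward a target angle by a PD servo,
# integrated with semi-implicit Euler. Constants are illustrative.

kp, kd, dt = 300.0, 30.0, 1.0 / 240.0   # gains and physics timestep
inertia = 1.0

theta, theta_dot = 0.0, 0.0             # joint angle and angular velocity
theta_target = 0.8                      # desired angle from the motion plan

for step in range(2000):
    torque = kp * (theta_target - theta) - kd * theta_dot  # PD control law
    theta_dot += (torque / inertia) * dt                   # integrate dynamics
    theta += theta_dot * dt

print(f"final angle: {theta:.3f} (target {theta_target})")
```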

Collaborators: Mazen Al Borno, Eugene Fiume, David J. Fleet, Martin de Lasa, Igor Mordatch, Jack M. Wang

Projects: Prioritized Optimization, Optimizing Walking, Walking with Uncertainty, Feature-Based Controllers, Low-Dimensional Planning, Full-Body Spacetime, Rotational control

Painting and line drawing algorithms
How can we write computer software that helps people create artistic imagery and video? How can we enable computer animation in the styles of human painting and drawing? Answering these questions involves understanding aspects of artistic style and human perception. We have developed techniques for painterly rendering, painterly animation, and pen-and-ink illustration of 3D surfaces.
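
One classic painterly-rendering recipe fits in a short sketch. This is a simplification (circular dabs stand in for the curved spline strokes used in practice, and all parameter values are arbitrary): paint in layers from coarse to fine brushes, placing a stroke wherever the current canvas differs too much from a blurred reference image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def paint(image, radii=(8, 4, 2), threshold=25.0):
    """Coarse-to-fine painterly rendering sketch: for each brush size,
    compare the canvas against a blurred reference image and dab a
    stroke wherever the local error is large."""
    image = image.astype(float)            # expects an (h, w, 3) RGB array
    h, w, _ = image.shape
    canvas = np.full_like(image, 255.0)    # start from a white canvas
    yy, xx = np.ogrid[:h, :w]
    for r in radii:                        # largest brush first
        ref = gaussian_filter(image, sigma=(r, r, 0))   # blurred reference
        err = np.sqrt(((canvas - ref) ** 2).sum(axis=2))
        for y in range(0, h, r):           # visit canvas on a brush-sized grid
            for x in range(0, w, r):
                if err[y:y + r, x:x + r].mean() > threshold:
                    # Dab a circular stroke of the reference color.
                    mask = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
                    canvas[mask] = ref[y, x]
    return canvas
```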

Collaborators: Pierre Bénard, Simon Breslav, Brian Curless, Todd Goodwin, Charles E. Jacobs, Evangelos Kalogerakis, Michael Kass, Alex Kolliopoulos, Wilmot Li, Derek Nowrouzezahrai, Peter O'Donovan, Nuria Oliver, Ken Perlin, David H. Salesin, Steven M. Seitz, Ian Vollick, Jack M. Wang, Holger Winnemöller, Jun Xie, Denis Zorin

Projects: Painterly rendering, Painterly video, Illustrating smooth surfaces, Image Analogies, Paint By Relaxation, Curve Analogies, Segmentation-Based NPR, Artistic Stroke Thickness, Interactive painterly animation, Learning Hatching, Computing smooth contours, PortraitSketch

Art, design, and aesthetics

Collaborators: Aseem Agarwala, Maneesh Agrawala, Trevor Darrell, Mira Dontcheva, Elena Garces, Diego Gutierrez, Helen Han, Sergey Karayev, Jānis Lībeks, Zhicheng Liu, Peter O'Donovan, Babak Saleh, Matthew Trentacoste, Dan Vogel, Ian Vollick, Holger Winnemöller

Projects: The Science of Art, Learning Label Layout, Color Compatibility, Learning Single-Page Layout, Color personalization, Clip art style similarity, Font attributes, Recognizing Image Style, Infographics style, Interactive layout suggestions

Machine learning for geometry processing

Collaborators: Brett Allen, Henning Biermann, Brian Curless, Thomas Funkhouser, Evangelos Kalogerakis, Wilmot Li, Tianqiang Liu, James McCrae, Derek Nowrouzezahrai, Zoran Popović, Patricio Simari, Karan Singh, Lexing Ying, Denis Zorin

Projects: Surface Texture Synthesis, Learning body shape variation, Real-Time Curvature, Learning mesh segmentation, Furniture style

Reconstructing objects with real-world materials
We have developed methods for 3D shape reconstruction of static objects in the difficult case of complex reflectance (such as shiny objects), cases that foil most shape reconstruction methods, including laser scanners. One method, based on photometric stereo with reference objects, entails very simple setup and calibration and requires no advance knowledge of shape, illumination, or materials. A more recent technique also reconstructs BRDFs, and can be used for relighting and rerendering.
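
The reference-object idea admits a compact sketch. This is a simplified illustration with synthetic arrays (the published method handles spatially-varying mixtures of materials and more): under N fixed but unknown lighting conditions, pixels whose intensity profiles across the N images match have approximately the same surface normal, so known normals transfer from the reference object to the target by nearest-neighbor lookup.

```python
import numpy as np

# Each pixel yields a vector of N intensities, one per lighting
# condition. Target pixels inherit the normal of the reference-sphere
# pixel whose intensity profile matches best. Arrays are synthetic.

N = 12                                     # number of lighting conditions
rng = np.random.default_rng(2)

ref_obs = rng.random((5000, N))            # reference pixels x lights
ref_normals = rng.normal(size=(5000, 3))   # known normals on the sphere
ref_normals /= np.linalg.norm(ref_normals, axis=1, keepdims=True)
tgt_obs = rng.random((2000, N))            # target pixels x lights

# Squared distances between every target and reference intensity profile.
d2 = ((tgt_obs ** 2).sum(1)[:, None] + (ref_obs ** 2).sum(1)[None, :]
      - 2.0 * tgt_obs @ ref_obs.T)
tgt_normals = ref_normals[d2.argmin(axis=1)]   # transfer normals
```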

Collaborators: Brian Curless, Dan B Goldman, Steven M. Seitz, Adrien Treuille

Projects: Example-based photometric stereo, Example-based multiview stereo, Scanning with varying BRDFs

Other topics

Collaborators: Kannan Achan, Aseem Agarwala, Marcus Brubaker, Vladimir Bychkovsky, Yanshuai Cao, Michael F. Cohen, Alex Colburn, Brian Curless, Mira Dontcheva, Frédo Durand, Alexei A. Efros, Ali Farhadi, Rob Fergus, David J. Fleet, William T. Freeman, Brendan Frey, Michael Guerzhoy, James Hays, Matthew D. Hoffman, Hamid Izadinia, Ronnachai Jaroensri, Evangelos Kalogerakis, Zhicheng Liu, Sylvain Paris, Sam T. Roweis, Bryan C. Russell, Barun Singh, Olga Vesselova, Alan Wilson, Jian Zhao

Projects: Single-image deblurring, Segmental speech processing, Image-Based Remodeling, Image Sequence Geolocation, Latent Factor Travel Model, Sparse Gaussian Processes, Acceptable photographic adjustments, Event sequence visualization, Deep image tagging
