An Overview of the Paintstroke Primitive

Introduction

Paintstrokes are rendering meta-primitives useful for modelling a variety of phenomena, including fur, hair, fine branches, wicker, and pine needles. By incorporating various view-dependent image effects into the rendering algorithm, paintstrokes can be further used for modelling streams of water, icicles, and wisps of smoke. Self-shadowing is simulated on a global scale, offering an alternative to shadow mapping.

Since the paintstroke's shape is that of a generalized cylinder, it is not a general-purpose tool for building arbitrary objects. In exchange for this lost generality, however, it gains a highly optimized rendering algorithm that greatly improves on traditional methods for drawing the equivalent curved surfaces.

Implementation

A paintstroke primitive is a generalized cylinder whose colour, opacity, and reflectance can vary along its length, as can, of course, the radius. For each paintstroke, the user enters a series of 3-D control points that define the shape of the paintstroke's path and the behaviour of all of its other parameters along this path.
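As a concrete illustration, the per-stroke data described above might be organized as follows. This is a minimal sketch; the class and field names are our own and are not taken from the original implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ControlPoint:
    # 3-D position on the paintstroke's path.
    position: Vec3
    # Cross-sectional radius at this point.
    radius: float
    # Appearance parameters that vary along the stroke's length.
    colour: Vec3 = (1.0, 1.0, 1.0)
    opacity: float = 1.0
    reflectance: float = 1.0

@dataclass
class Paintstroke:
    control_points: List[ControlPoint] = field(default_factory=list)

# A short stroke tapering from a thick red base to a thin translucent tip.
stroke = Paintstroke([
    ControlPoint(position=(0.0, 0.0, 0.0), radius=0.10, colour=(0.8, 0.1, 0.1)),
    ControlPoint(position=(0.0, 1.0, 0.2), radius=0.05, colour=(0.9, 0.4, 0.2)),
    ControlPoint(position=(0.1, 2.0, 0.5), radius=0.01, opacity=0.3),
])
```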

A common way to render this type of surface is to tessellate it into a fixed set of three-dimensional polygons. Our algorithm dynamically tessellates a paintstroke into polygons, directly making use of the image-space projection of the generalized cylinder. This capitalizes on important modelling and rendering efficiencies by arranging the polygons so as to maximize their projected screen size, thereby minimizing the number required to tile the paintstroke. The resulting savings in vertex transformations, rasterization overhead, and edge antialiasing more than repay the cost of the tessellation.

After transforming the control points from world to viewing coordinates, we generate Catmull-Rom spline interpolants for the path and radius components, and linear interpolants for the others. The paintstroke is then subdivided along its length into segments, with a granularity that adapts to its screen-projected size and curvature. Each segment is subsequently tessellated into polygons such that the side of the paintstroke closest to the viewer is always completely covered by one, two, or four polygons, depending on the user-adjustable quality level. Quality-zero paintstrokes, shown in the colour image below, use a single polygon per segment and represent the most efficient tiling pattern, although their normal distributions are somewhat inaccurate and they do not render correctly when viewed head-on.
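For reference, one segment of a uniform Catmull-Rom spline can be evaluated as below. This is the standard formulation of the spline named in the text; the function name and vector representation are illustrative.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom segment at t in [0, 1].

    The curve passes through p1 (t = 0) and p2 (t = 1); p0 and p3
    shape the tangents. Points are tuples of equal dimension.
    """
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2.0 * b
               + (-a + c) * t
               + (2.0 * a - 5.0 * b + 4.0 * c - d) * t2
               + (-a + 3.0 * b - 3.0 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Because the curve interpolates its interior control points, chaining overlapping windows of four points yields a smooth path through every control point the user entered.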

View-dependent Tessellation Meshes

Tessellation at high screen curvature


Model contains approximately 12,000 paintstrokes.

Special Rendering Effects for Paintstrokes

The dynamic, view-dependent retessellation of paintstrokes allows for several useful rendering effects that would be difficult or impossible to achieve with a fixed polygonal model.

View-Dependent Lengthwise Opacity Variation

This feature simulates volume opacity, which varies with how far the viewer's line of sight penetrates a paintstroke segment. A simple formula involving the segment's path direction and the viewing direction estimates this opacity, which reaches its maximum when the viewing direction is aligned with the path and its minimum when it is orthogonal to it.
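One possible form of such a formula is sketched below. The blend by the absolute cosine of the angle between the two directions is our own stand-in, chosen only to match the stated extremes (maximum opacity when viewing along the path, minimum when orthogonal); the original formula is not given in the text.

```python
def lengthwise_opacity(view_dir, tangent, min_opacity, max_opacity):
    """Estimate a segment's volume opacity from the viewing angle.

    Both view_dir and tangent are assumed to be unit vectors.
    Viewing along the path (|cos| = 1) gives max_opacity, the longest
    ray penetration; viewing orthogonally (|cos| = 0) gives min_opacity.
    """
    cos_angle = sum(v * t for v, t in zip(view_dir, tangent))
    alignment = abs(cos_angle)
    return min_opacity + (max_opacity - min_opacity) * alignment
```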

Volume Opacity Effect

Breadthwise Opacity Variation

The opacity of the paintstroke can also vary across its breadth, allowing the user to specify distinct opacities for the centre and for the edges. Our view-dependent tessellation scheme makes this effect particularly easy to implement. It can be used to produce fuzzy paintstrokes or to simulate the Fresnel effect for transparent fluids.
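A blend of this kind might look as follows. The linear falloff is an illustrative choice on our part; the text only specifies that the user supplies distinct centre and edge opacities, so any monotone profile between them would fit.

```python
def breadthwise_opacity(u, centre_opacity, edge_opacity):
    """Opacity across the stroke's breadth.

    u is a normalized breadth coordinate: 0 on the centre line,
    +/-1 at the silhouette edges. Linearly blends the user's centre
    and edge opacities.
    """
    u = min(abs(u), 1.0)
    return centre_opacity + (edge_opacity - centre_opacity) * u
```

Setting a high centre opacity and zero edge opacity yields the fuzzy-stroke look; a low centre and high edge opacity approximates the Fresnel-like rim brightening of transparent fluids.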

Two types of breadthwise opacity variation

Global Shading Algorithm

Each control point of a paintstroke carries a user-specified global normal and depth value. The former indicates the normal of the global shape to which the control point belongs, and the latter its relative depth beneath that surface. The estimated amount of light penetration at each control point is computed from the depth, the global normal, and the average light direction, and the control point's reflectance is scaled accordingly. This algorithm works best when a large number of control points is fairly uniformly distributed over a convex volume; it is particularly effective for fur, but can also be applied to other models, such as trees.
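The scaling step could be sketched as below. The exponential falloff in depth and the cosine gating by the global normal are our assumptions, chosen only to illustrate how the three stated inputs (depth, global normal, average light direction) could combine; the original penetration estimate is not given in the text.

```python
import math

def shaded_reflectance(base_reflectance, depth, global_normal, light_dir,
                       attenuation=1.0):
    """Scale a control point's reflectance by estimated light penetration.

    global_normal and light_dir are assumed to be unit vectors. A point
    deeper beneath the convex surface, or one whose global normal faces
    away from the average light direction, receives less light.
    """
    cos_theta = max(0.0, sum(n * l for n, l in zip(global_normal, light_dir)))
    penetration = cos_theta * math.exp(-attenuation * depth)
    return base_reflectance * penetration
```

For fur, hair tips would carry depth near zero and stay bright, while hairs buried deep in the coat darken, giving the self-shadowed look without a shadow map.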