
Neural Materials for Next-Generation Computer Graphics
Compelling computer graphics and immersive virtual environments require technology that can render complex objects accurately and in detail. Most computer graphics technologies today rely on reflection models—computer code that simulates the way smooth surfaces reflect light. Color and texture are added in a secondary step through a process called texture mapping. State-of-the-art reflection models and texture mapping can generate dazzling approximations of homogeneous materials, such as the painted steel of a car body or a skyscraper’s glass-and-concrete façade. But the same technology cannot render materials with complex surface structure, such as fabrics, up close and in detail.
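To make the conventional pipeline concrete, the sketch below shows the two-step process the paragraph describes: a reflection model computes how much light a smooth surface scatters, and texture mapping supplies the color as a separate lookup. This is a minimal, hypothetical illustration (a simple Lambertian model and a nearest-neighbor texture lookup), not code from the project.

```python
def lambertian(normal, light_dir):
    """A basic reflection model: brightness = max(0, normal . light)."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def sample_texture(texture, u, v):
    """Texture mapping: color comes from an image lookup, not from physics."""
    h, w = len(texture), len(texture[0])
    x = min(w - 1, int(u * w))
    y = min(h - 1, int(v * h))
    return texture[y][x]

def shade(normal, light_dir, texture, u, v):
    """Combine the two steps: reflection model scales the texture color."""
    brightness = lambertian(normal, light_dir)
    r, g, b = sample_texture(texture, u, v)
    return (r * brightness, g * brightness, b * brightness)

# A 2x2 checkerboard texture of RGB tuples.
tex = [[(1.0, 1.0, 1.0), (0.2, 0.2, 0.2)],
       [(0.2, 0.2, 0.2), (1.0, 1.0, 1.0)]]

# Light hitting the surface head-on returns the full texture color.
color = shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), tex, 0.1, 0.1)
```

Because the reflection model assumes a single smooth surface and the texture is only a flat image, this scheme has no way to capture the fine geometric structure of a material like woven cloth, which is the limitation the project addresses.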
To overcome these limitations, a team of computer scientists at Cornell is using artificial intelligence to pursue a fundamentally new way of modeling surface reflection. Using neural function approximators, this collaborative project will develop new reflection models, called neural materials, that can represent the characteristic reflection patterns of any material, from granite to silk, based on many detailed measurements of surfaces. The result will be graphics technology that can render complex materials realistically in fine-scale detail.
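The core idea can be sketched in code: rather than a hand-written reflection formula, a small neural network is fitted to reflectance measurements and then queried at render time. Everything below is an illustrative toy, not the project's method: the one-input network, the training loop, and the synthetic "measured" glossy lobe are all assumptions made for the example.

```python
import math
import random

random.seed(0)

def measure_reflectance(angle):
    """Stand-in for real surface measurements: a synthetic glossy lobe."""
    return max(0.0, math.cos(angle)) ** 8

# A tiny one-hidden-layer network mapping angle -> reflectance.
H = 16
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return sum(w2[i] * hidden[i] for i in range(H)) + b2, hidden

# Angles at which the material was "measured".
samples = [i * (math.pi / 2) / 49 for i in range(50)]

def mse():
    return sum((forward(x)[0] - measure_reflectance(x)) ** 2
               for x in samples) / len(samples)

init_mse = mse()

# Fit the network to the measurements by stochastic gradient descent.
lr = 0.02
for _ in range(5000):
    x = random.choice(samples)
    y = measure_reflectance(x)
    pred, hidden = forward(x)
    g = 2.0 * (pred - y)          # gradient of squared error w.r.t. pred
    for i in range(H):
        gh = g * w2[i] * (1.0 - hidden[i] ** 2)
        w2[i] -= lr * g * hidden[i]
        w1[i] -= lr * gh * x
        b1[i] -= lr * gh
    b2 -= lr * g

final_mse = mse()
```

A real neural material would take far more inputs (light and view directions, surface position) and be trained on dense measurements of an actual material, but the principle is the same: the network itself becomes the reflection model.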
This research could transform the graphics technology used in moviemaking, industrial design, marketing, advertising, architecture, virtual reality, and home remodeling. The neural materials developed by this project will be highly versatile for use in virtual environments and visual effects. Instead of attaching reflectance to opaque objects, this graphics technology will feature thin neural reflectance fields that accommodate fuzziness, translucency, and unmodeled geometric detail. Possible applications include automatic look development from only a few reference photographs and inverse object modeling from images that combine geometric and material detail.